After a good deal of research and discussion in the community, it's clear my initial solution (below) isn't feasible, for many reasons that were beyond my initial understanding of the system.
It is a problem that has yet to be solved by some of the top computer scientists in the world, so I surely have no chance of contributing a solution myself - but if I can point people to where the current discussions and research are, perhaps someone will pick up where I left off and find something more elegant than what I came up with.
Every possible solution / scenario for applying a layer like this on top of the EOS file system opens up more attack surfaces and chances for centralization than it solves, which defeats the purpose.
As it stands with this issue currently, should anyone ask you: 1. cryptography is the first line of defense, 2. arbitration is the second.
Plausible deniability is on the side of an EOS block producer in the event the situation I outlined below did happen, as they would provably have no way of knowing.
This is because after encryption, there is no way to flag and identify what files are floating around on a BP node. And, since IPFS makes many copies of a file across the network, and the network is chained together in a sequence, there is no single IP from which to locate the node in question. They would all essentially be offenders if what I outlined really did happen.
Until someone can find a solution (most of the proposals I am reading and researching are still in the hypothetical stage), those are the options.
There have already been discussions around this in the IPFS community, including a means of identifying the contents of encrypted files through a method of fingerprinting.
The new direction I am headed in is an IPFS gateway blacklist that could be shared across the network.
This is more reactive than proactive, but it seems any other approach compromises the integrity of the system. Also, since this approach only works on individual nodes, the blacklist would have to be kept current across all nodes to work. Take into account that any BP that refuses content would also quickly lose their seat.
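As a rough sketch of the idea (everything here is hypothetical - no shared EOS/IPFS blacklist exists today, and the URL and JSON format are made up for illustration), a gateway-level check might look something like this:

```python
import hashlib
import json
import urllib.request

# Hypothetical location and format of a community-maintained blacklist.
# Nothing like this exists on the network today; illustration only.
BLACKLIST_URL = "https://example.com/eos-shared-blacklist.json"

def load_shared_blacklist():
    """Fetch the latest shared set of banned content hashes."""
    with urllib.request.urlopen(BLACKLIST_URL) as resp:
        return set(json.load(resp))

def gateway_allows(file_bytes, blacklist):
    """Hash incoming content and refuse it if the hash is blacklisted."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest not in blacklist
```

Every gateway would have to pull the same list on the same schedule for this to be consistent, which is exactly the synchronization problem described above.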
Also, even that won't always work, because copies of files vary due to compression applied to images and video. Even the slightest variation results in a completely different hash. So any attempt to create a network-wide blacklist (for files somehow identified after the fact) would not be reliable or consistent.
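To make that concrete, here is a quick demonstration that a single-byte difference (the kind re-compression introduces constantly) yields a completely unrelated hash:

```python
import hashlib

original     = b"\xff\xd8\xff\xe0 ...imagine JPEG bytes here..."
recompressed = b"\xff\xd8\xff\xe1 ...imagine JPEG bytes here..."  # one byte differs

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(recompressed).hexdigest())
# The two digests share nothing in common, so a blacklist entry built from
# one copy of a file will never match a re-compressed copy of the same file.
```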
Research:
https://link.springer.com/chapter/10.1007/978-3-319-22729-0_6
https://www.technologyreview.com/s/402961/fingerprinting-your-files/
https://brage.bibsys.no/xmlui/bitstream/handle/11250/198751/OKjelsrud.pdf?sequence=1
Community Discussions & Resources:
https://steemit.com/eos/@eosio/eos-io-storage-white-paper-now-available
https://discuss.ipfs.io/t/avoid-hosting-of-illegal-material/4
https://www.reddit.com/r/ipfs/comments/3m351b/discussion_permanent_content_dmca_and_illegal/
Also, be sure to look into Dan's comments regarding the EOS file system in this video...
There is a fatal flaw I have been pointing out in EOS Gov for a few weeks that no one seems to think is a big priority at this stage, but I believe should be front and center.
The EOS.io file system would enable people to upload pretty foul content, and from my understanding, no one but the people holding the key to those files would know exactly what it is. Although we would hope everyone in the EOS network would follow the constitution, there are many attack vectors and reasons people would not. Knowing human nature, let's just assume some people won't.
So let's play this scenario out. A BP is hosting a ton of child pornography and copyrighted content. A local municipality or government catches the offender through another centralized online channel, and said government agency confiscates the person's computers and is able to unlock all these files.
The files are tracked back to a BP node in the same country (or another), and the authorities kick in their door and arrest them. Then, apparently, the BP is expected to go through the EOS binding arbitration process designed by a person who has yet to be attached to a public identity (all we have is a screen name at this point, guys).
This scenario happens all the time. It's all well and good that the files will be encrypted, but the point of failure isn't going to be a quantum computer. It's going to be a lot more simple and straightforward than that because... humans.
So, what are we here in the EOS developer community doing to solve these problems for Block Producer candidates? Currently, they are more or less flying by the seat of their pants with little to no answers to these questions.
That is unnecessary, though. There are well-established processes for identifying shady content that would not violate anyone's privacy or the ideals that the community holds near and dear.
I am making a formal introduction of the idea of an "EOS Image Classifier" library that would identify, classify, and flag content, then pass it on to an "EOS Watcher Node" - essentially a dedicated Block Producer role whose job is to manually review, approve, or block content coming into the EOS file system.
Perhaps this is not a BP role, but rather one of the first BP-funded projects. It would be in their best interest to do so, because currently this is one of the biggest risks BPs face. This could be very light-handed, yet mitigate a massive number of potential threats to the system.
We could use any (or a mix of the best of all) of the existing NSFW libraries already available to construct something light and efficient, but as sophisticated as anything currently available.
Doing so would also show a willingness on the part of the EOS ecosystem to comply with current laws and jurisdictions as they relate to this content.
Here is a list of current libraries that would help provide a great head start (a rough sketch of how one might be wired in follows the list):
Open nsfw model - https://github.com/yahoo/open_nsfw
PixLab NSFW endpoint - https://github.com/symisc/pixlab/blob/master/python/blur_image_nsfw_score.py
ScanCode toolkit - https://github.com/nexB/scancode-toolkit
Copyvios - https://github.com/earwig/copyvios
PySceneDetect - https://github.com/Breakthrough/PySceneDetect
PornDetector - https://github.com/bakwc/PornDetector
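To show what "wiring in" one of these libraries could look like, here is a minimal sketch. The `nsfw_score` wrapper is hypothetical - a real implementation would load one of the models above (e.g. Yahoo's open_nsfw) and return its probability output - and the threshold is an assumption the community would need to tune:

```python
NSFW_THRESHOLD = 0.8  # assumed cutoff; would need tuning against real data

def nsfw_score(image_bytes):
    """Hypothetical wrapper around one of the models listed above.

    A real implementation would preprocess the image and run it through,
    e.g., the open_nsfw model, returning a probability between 0 and 1.
    """
    return 0.0  # placeholder so the sketch runs end to end

def classify_upload(image_bytes):
    """Decide whether an upload passes or gets flagged for a watcher node."""
    if nsfw_score(image_bytes) >= NSFW_THRESHOLD:
        return "flagged_for_review"
    return "clear"

print(classify_upload(b"...image bytes..."))  # -> "clear" with the stub above
```

Note that the classifier itself never blocks anything outright; it only decides whether a human watcher gets involved, which keeps the automated layer as light-handed as described above.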
How exactly to implement this without causing bottlenecks could be as simple as layering the library over all upload functions and then passing a flag on content to be reviewed. Meaning, all content may be uploaded, but if NSFW or copyrighted content is detected, an error message is thrown and the person is notified that a watcher node is reviewing their content. Watcher nodes may be passed a temporary key and given a window to review said content.
If they fail to review that content within the window, the content is left up. If it is content that violates the EOS constitution and / or any local / regional / international laws, it is removed and reported appropriately depending on the severity - which can also be weighted with biases to automate some of that process. A rough sketch of this flow follows.
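A minimal sketch of that flag-and-review flow, assuming a 24-hour review window, an in-memory store, and a random token standing in for the temporary review key (all placeholders for whatever the network would actually use):

```python
import secrets
import time

REVIEW_WINDOW_SECONDS = 24 * 60 * 60  # assumed 24-hour window

# Stand-in for whatever shared datastore watcher nodes would really use.
pending_reviews = {}  # content_id -> (time flagged, temporary review key)

def flag_for_review(content_id):
    """Record a flagged upload and mint a temporary key for the watcher node."""
    temp_key = secrets.token_hex(16)  # placeholder for a real decryption grant
    pending_reviews[content_id] = (time.time(), temp_key)
    print(f"{content_id}: a watcher node is reviewing this content.")
    return temp_key

def resolve_review(content_id, verdict=None):
    """Apply a watcher's verdict, or leave the content up once the window lapses."""
    flagged_at, _temp_key = pending_reviews[content_id]
    if verdict is None:
        if time.time() - flagged_at > REVIEW_WINDOW_SECONDS:
            del pending_reviews[content_id]
            return "left_up"  # window expired without review, content stays
        return "pending"      # still inside the review window
    del pending_reviews[content_id]
    return "removed_and_reported" if verdict == "violation" else "left_up"
```

The severity weighting mentioned above would slot into `resolve_review`, escalating the reporting step automatically for the clearest-cut cases.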
As I said, there of course needs to be a well thought out process of governance that conforms to the EOS constitution.
If you are interested in researching a potential solution, or at least in developing an official statement on this matter when it comes up, please send me an email at steve@eosdallas.com.