> That wouldn't be a 'random idiot'; it would be the story's beta-reader, and someone the document is shared with.
The System(TM) doesn't know what a "beta-reader" is, and will react the same way to any random idiot's report. You're pretending that Google somehow responds to categories that neither Google's policies, Google's automated systems, nor probably many of Google's employees and agents even know or care about.
Not that a "beta-reader" can't also be an idiot anyway.
> Also- if it's true, then...
You seem to be relying on the idea that Google might have been legally required to act against child pornography, so the "it" that would have to be true would be that the text was in fact child pornography.
As far as I can find, US child pornography laws don't mention or apply to pure text. So even if the unsubstantiated rumor you're spreading were true, the unsubstantiated claim underlying it still could not possibly be true.
Any automated system or set of procedures that acts on the impossible supposition that a pure text document could even possibly be "child pornography" under those laws is automatically wrong.
... and the more credible claim is that the suspension was for "sexually explicit content", not "illegal child pornography", or the ever-popular "illegal obscenity", or illegal anything. Enforcing that policy is a purely voluntary choice by Google. And even if that weren't true in this case, it is definitely true in many, many others.
> what else was google supposed to do here?
For the legal side, notice that text can never be "child pornography" in the US, and have the document actually read for violations of any other law you're worried about by somebody who's actually qualified to evaluate its legal status and who actually takes the time to do so.
For the "completely voluntarily chosen Google policy" side, which is the one that actually seems to be at stake here, hold off on doing anything until the material has been reviewed by a human who actually understands the issues, actually has authority to make meaningful decisions, and has the time and motivation to do so. Also, don't adopt pointless, silly policies.
Of course, commercial incentives ensure that Google won't do any of that. And neither will any other provider of a similar cloud service.
So what every single user should do is get the hell off of all of those services. That would be a good idea even if policies weren't silly and if policy enforcement weren't hair-trigger, error-prone, capricious garbage, but it's especially important because this and other stories give every reason to believe that they are. On all of them.
That applies even if there is no commercially viable approach to handling these issues correctly. If a service can't be offered in any reasonable way, then that service should not be offered, or at least should not be used. "What are they supposed to do?" isn't a valid reason to use an unreliable, dangerous service.
... and the "story" is that many, many people are in fact using dangerous systems and should be moving off of them.