Surveil and segregate and suppress and snitch


Category: Policy

Someone recently told me that I would get along great with Eric Goldman. This happened after I discussed his “segregate and suppress” model in a talk, but with a twist.

The “segregate and suppress” model, which people in policy circles discuss as a matter of course, comes from a paper he wrote on regulatory models for child safety online, which you can and should read here. He noted how a “segregate and suppress” mandate, which requires age and/or identity verification as a condition of access to content in order to withhold that content from anyone who fails the check, “leads to surprising and counterproductive outcomes”.

(That phrase is so acidly sarcastic you would think it came from a Brit.)

Whilst fully endorsing his model, I made the point that it could easily be expanded to what precedes and follows it: surveil and segregate and suppress and snitch.

Because in order to segregate specific audiences, you first have to decide that all audiences are under suspicion and must be checked on the presumption of wrongdoing. That’s surveillance. And once identifiable segregation creates databases of which audiences have “wrongly” attempted to access suppressed content, you hold information that someone, somewhere, will want to exploit, under the cover of keeping those very people safe. That’s snitching.

In other words, segregate and suppress is not always the destination. Sometimes it’s the vehicle.

So yes, let’s talk about the segregate and suppress model as a form of technical regulation! But let’s also talk about the deeply politicised ugliness riding in on it. Because we have to.

The Author

I’m a UK tech policy wonk based in Glasgow. I work for an open web built around international standards of human rights, privacy, accessibility, and freedom of expression. The content and opinions on this site are mine alone and do not reflect the opinions of any current or previous team.