In the summer of 2019, a friend posted a quote from St. Augustine on his Facebook feed and then received a notice from Facebook that it violated their terms of service and wouldn't be visible to others. I thought it was odd because the quote was fairly innocuous, so as an experiment I posted it on my own feed, with the same result. The message said, "Your post goes against our Community Standards on hate speech."
I'll let you read that post for the whole story, but the short version is that I never got an explanation of what was wrong with the post. There was no clear process for finding out, and I never had the ability to communicate with an actual human being. Even when the post was eventually restored, no explanation came with it.
As I wrote at the time: "This makes it the worst kind of rule: Where you don’t know where the line is. At this point, I don’t even know what kind of post will land me a suspension because there’s no clear indicator of why this quote is a violation, but others aren’t. It’s completely arbitrary and capricious."
Since the US presidential election in November 2020, and accelerating through the beginning of January 2021, I have seen many friends quit Facebook and Twitter and many more declare they will use the services less. Many cited the deplatforming of Trump and other conservatives as a prime reason, as well as a general distrust of the companies behind the sites.
There were also many posts alleging various hidden or incipient actions being taken by other technology companies, often precipitated by the deplatforming of right-leaning social media sites like Parler. I saw one person claiming that Apple would slip in an update to disable the Emergency Broadcast System on iPhones to prevent Trump from communicating directly with Americans. Another shared a "hidden" feature to prevent Apple from remotely deleting apps from our devices, a feature that would do nothing to impede Apple and would only prevent the user from deleting apps.
Above all, what we have been seeing is a rising tide of fear surrounding the technology services we use every day and rely on for communication and much else. Much of the reason people are afraid of Facebook, Google, Apple, and Amazon is that the criteria for being deplatformed, banned, or put in online jail are so opaque. We don't adequately understand the rules, which often seem arbitrary and capricious. And there's no one to talk to when it happens. We just get terse, blanket statements about violating "standards" with no specifics.
There's no court of appeal, no human voice to explain, no list of specific rules. As I wrote: "Imagine a law that was passed that had the same arbitrary vagueness. Let’s say the law said, 'If you drive too fast you’ll get a ticket,' but it doesn’t say what 'too fast' means. Maybe for one policeman it’s 30 mph and another it’s 50 for the same stretch of road. It’s madness." Can you imagine the stress of driving on a road, not knowing whether you are below, at, or over the speed limit?
"When you can’t be certain whether your ability to say things that would be constitutionally protected on a public street will get you banned from the platform that a supermajority of people in the United States use as their platform of daily free expression, it doesn’t bode well for where we are going."
Regaining Our Confidence
If Big Tech wants to regain our confidence--or if politicians want to regulate them--the place to start is with transparency and accountability. Users need to be given a clear and understandable due process that includes a list of rules written in plain language, a defined set of steps that follow an alleged violation, and a right of appeal, including to an objective third-party arbitrator.
Facebook has started to address this situation by establishing an Independent Oversight Board, and its first case will be to review Trump's ban from Facebook. It's a good first step, but hardly sufficient. I'm happy to see that the membership of the board will be independent of Facebook.
But it does not address the nearly complete lack of transparency regarding the "community standards" that the company imposes; we still don't know what crosses the line. Nor does it solve the problem of scale: the oversight board is limited in the number of cases it can review and so will only look at the highest-profile requests. A single board of 40 members overseeing a social media site with over 1 billion members cannot hope to deal with even a tiny fraction of the cases that come up, just as the US Supreme Court can't handle every legal case in the US. There needs to be a hierarchical system of boards of oversight and appeal.
And this is just Facebook. Will Twitter follow suit? And how do we address the questions raised by the actions of Apple, Google, and Amazon?
Perhaps it's time for the bipartisan pool of Members of Congress distrustful of Big Tech to start discussing an Online Bill of Rights for Americans. Given the ubiquity of these privately-owned quasi-public squares and how vital they've become to American society in a year of pandemic lockdown, it may be time for government to step in and recognize a new and unprecedented set of rights for users of these platforms.
Off the top of my head I can think of the following rights I'd like to see enumerated:
- A right to free speech1
- A right to peacefully assemble
- A right to due process
- A right to face your accuser
- A right to know what the community standards are
- A right to know what data is being stored about the user and for the user to have it deleted
- A right to privacy
There could be more, or perhaps fewer. But it would be a start.
1. Someone may at this point respond that the First Amendment only applies to the government, not to private corporations. My response to that is: this is precisely what this post is about. The Online Bill of Rights would be in addition to the Constitutional Bill of Rights, if not at the same legal level.