Terms of Service Aren't Just Annoying—They're a Failure

I’m from the internet myself, so when we started out, I wanted to believe that the architects of my home would make things better if they just knew how. After nearly two years of working closely with tech companies, that illusion has been shattered. I’m gambling many of my relationships by revealing this, but I believe the fight against online abuse will continue to stagnate unless I speak out.

Information spreads impossibly fast on the internet, which is great if it’s a video of a skateboarding dog but a nightmare if it’s your home address. Quick responses to bad actors can be the difference between someone having a rough day and someone becoming targeted for years. Unfortunately, between vague Terms of Service, bad reporting tools, and the delay between reports and action, speedy and effective redress is a major pain point when it comes to fighting online abuse. When I founded Crash Override Network to help those targeted with online abuse, there was a clear need for us to build relationships with the people who could help the folks who came to us for assistance. We needed to talk to them, get fast action taken for our clients in the most dangerous situations, and share our information on how their Terms of Service were actually playing out in the real world.

Initially, this task went pretty well. I was thankful that most platforms were receptive, due partly to the growing public outcry about how widespread online abuse had become (thanks in no small part to Gamergate). We had two main goals with tech partners—establishing escalation channels so that we could get action on sensitive cases in a timely fashion and sharing what we’d learned about larger patterns of how online abuse happens on their platforms from our experiences both as targets and caseworkers.

It’s truly amazing when we’re able to get a case escalated, the platform takes immediate action, and we can go back to our people with good news, but it’s rarely that simple. Our early victories felt tremendous, and some of our partners have been incredible all the way through. But we’d often get cases that seemed to fall into gray areas in the Terms of Service—something that wasn’t as immediately obvious a violation as a death threat, for example, but was absolutely abuse. I’d collect all the evidence and write up the context to make clear what was going on, and my reports would be bounced.

I started to notice upsetting patterns. Even in cases of clear Terms of Service violations, escalations were frequently ignored or hand-waved for increasingly indefensible reasons. I spent hours putting together one report that listed a multitude of accounts that had shared nude photographs of my client, taken when she was a teenager, passed around with her personal information by people who were also telling her to kill herself and targeting her family. Most of the reports were bounced back as unactionable, even with the supporting context and documentation.

Platforms do not treat users equally, either. I started to notice that if I escalated a report on behalf of a client who was black, less action would be taken, or none at all, than in the case of my white clients facing similar problems. On one occasion, I reported a post that contained threats and personal information that had been sent out in identical versions by a number of different accounts—some bots, some people manually copy-pasting. Even when the content was identical, the actions taken were not. Newer accounts were banned, while accounts that were more established on the service remained untouched.


It doesn’t take insider information to see this problem. Look at how quickly illicit episodes of popular TV shows are wiped off the face of YouTube shortly after they’re posted. When Leslie Jones, star of the most recent Ghostbusters movie, was deluged with racist harassment on Twitter, the platform banned the Internet Inquisitor targeting her that same day. This example sounds like a step in the right direction until you realize that this particular Inquisitor had been targeting dozens of people and remained untouched for years—he got his start targeting me and my family and used Twitter to build his audience. It often takes a major platform mere minutes to remove copyrighted material, but it can take years, dozens of victims, and targeting someone powerful enough to cause bad PR for the company for it to move on abusive content.

The opacity of varying companies’ Terms of Service is frequently by design. Most platforms have detailed internal Terms of Service that get very granular and specific, but their public-facing policies are purposefully vague. This distinction is actually more practical than it is shady and makes it easier for a company to work within gray areas—the ambiguity allows it to exercise discretion without having to worry about breaking its own rules. However, a balance must be struck between that freedom and being communicative enough with users to set boundaries for what is and isn’t acceptable on the platform, and by and large, companies err on the side of making their Terms of Service baffling and useless, especially when they’re failing to enforce any of their rules consistently. One company that we work with went so far as to hide its actual Terms of Service procedures from us when we were reporting cases because it was so worried about potential PR fallout—we were effectively trying to hit a moving, blurry target.

It’s important to note that threats, nonconsensual intimate images (commonly known as revenge porn), and harassment are not protected as free speech, and even if they were, privately owned companies are not the government. Think of how many Terms of Service agreements you’ve consented to—these are companies, and we are their customers. They are allowed to set and enforce their Terms of Service, and we are allowed to take our business elsewhere. They can ban you for hate speech. They can ban you for vague threats. They can ban you for spamming dick-pill messages. If they want to ban anyone, they totally can. That’s their right.


Online abuse isn’t just an issue of rights; it’s an issue of quality. I am a software engineer and designer, and part of that job is quality assurance—making sure your users get something out of interacting with your creations and that you’ve executed your intentions for your product. I signed up for Twitter hoping to tell dumb jokes to my ridiculous friends, not to have nude photos of me plastered into any conversation I’m having on the platform. It seems like bad business to ignore the experiences of your users. You can easily draw the line at letting people use your service to actively terrorize others. You can suspend a user for sending racial slurs to minorities or posting stolen social security numbers and sleep well at night. If given a choice between keeping a user who goes out of their way to use your service to harm others and showing that you are unwilling to tolerate your platform being misused, it seems obvious which of those would make your product suck less.

We need to start evaluating platforms based on the experiences of their least privileged users. The online platforms that allow marginalized people to congregate and find community when they may be isolated from one in their physical lives can mean the difference between life and death. There are countless LGBTQA+ people, young and otherwise, who are able to be heard and find community only through the internet. Some of us remain in the closet out of the very real fear of consequences or violence, especially trans women of color, who are the most frequent victims of anti-LGBT violence. According to a 2013 report by the National Coalition of Anti-Violence Programs, 72 percent of the victims of anti-LGBT homicide were transgender women, and 67 percent of the victims were people of color. Having spaces online that don’t require risking your physical safety for participation is even more crucial for people who are at such high risk of offline violence for simply existing. Without paying special attention to secure online spaces for the people who arguably need that space the most, we will always be failing to let the internet live up to its real potential as a force for equality.

Sometimes companies don’t yet understand online harassment—many of the people who are making the decisions about what to do about online abuse aren’t the ones undergoing it themselves. The most striking example that I’ve witnessed was during a safety summit with Google Ideas. After eight long hours of Google employees and experts on online safety talking about the issue, it wasn’t until the head of the summit tweeted a photo of all of us at dinner afterward and saw the abusive replies that he really seemed to get it. The first step is talking to those of us in the trenches who have practical knowledge and who will inevitably have very different experiences with the platform. I have been in more meetings with multinational companies than I can count since we founded Crash Override, which is a great first step. But I find myself consistently surprised at how many things they’re totally unaware of that are so painfully obvious to me. I’ve spoken with abuse departments that didn’t know what SWATing was. And almost no one seemed to know that there were chronically abusive Internet Inquisitors making a living by abusing people on their platforms.

Even if a company has done its homework and come up with a stellar Terms of Service agreement, enforcement is a whole other ballgame and is one of the biggest obstacles to combating online abuse. Some platforms have billions of users, and creating a sustainable enforcement process is a logistical nightmare, especially when it comes to issues of mob-based harassment. Yet one of the critical ways to thwart an abuse campaign is by slowing down its momentum. The effort and time required to re-create an account and get a mob’s attention again can be a massive blow to someone trying to organize their abusive supporters. Some platforms have a policy of simply making a user remove the content that violates their Terms of Service. While this works well for first-time offenders or people who screw up once or twice, it backfires when it comes to chronically abusive users. Some users will refuse to act in good faith and need to be removed from a service. Chronically abusive users are not likely to stop for any other reason.


Yet even with good Terms of Service and effective tools to enforce them, there are issues of how a platform’s architecture factors into abuse or, in the worst cases, perpetuates it by proxy. When tech companies remove abusive content, it can hurt victims in unforeseen ways. Is the abusive content stored anywhere? Can it be subpoenaed? Sometimes yes, sometimes no. Twitter’s data-retention policy frequently discards reported abuse after the user is removed, and it becomes impossible to retrieve it. However, when one of our clients at Crash Override was targeted en masse by an anti-Islamic hate group, Facebook’s support team had stored the data after it was removed and provided an address to email if it was needed for a subpoena. Content-neutral algorithms that can be manipulated to falsely smear someone must be resistant to being gamed and ideally have the ability to be manually overridden in extreme cases. Google has taken an exemplary first step in this direction by crafting a narrow policy to remove nonconsensual intimate imagery from search results.

While it’s good to see tech companies starting to think about this stuff, it’s important to keep in mind that we’re not even hitting the bare minimum yet. All these efforts are crucial to moving the conversation past “Oh, gee, it sure sucks that people are using the internet to try to get each other killed,” but they still feel very much in their infancy. While many major players in the tech sector have gone on record as acknowledging that online abuse is a massive problem and some have started taking an active role in being part of the solution, it’s not quite as simple as I hoped it would be when I set out. There’s a long road ahead, full of potholes and pitfalls, made worse by the fact that we’re trying to fix the car while we’re still driving it. And a troubling number of people in high places are far from enthusiastic partners in making their products safer.

As I experienced myself, in many cases, abuse doesn’t take place on any one platform exclusively. Unfortunately, no platform to my knowledge makes any effort to coordinate with other platforms’ abuse departments on policy, specific actors, or trends except by occasionally choosing to make its own information publicly available. Additionally, showing a platform’s abuse department evidence that I or one of my clients has also been abused by the same user on other sites has never met with a response beyond “Well, that’s not our service, so it’s not our problem.” Tech as a whole is extremely siloed and secretive, generally to protect trade secrets and head off potential PR nightmares. But this attitude is incompatible with effectively combating online abuse because of the networked nature of abuse campaigns. Until this mind-set changes, tech companies’ efforts will remain severely limited.

From Crash Override: How Gamergate (Nearly) Destroyed My Life, and How We Can Win the Fight Against Online Hate, by Zoë Quinn. Published in September 2017 by PublicAffairs, an imprint of the Hachette Book Group.
