Screenshot from Depression Quest.
Three times in the past several weeks, women have felt compelled to leave their homes after being targeted on Twitter with rape and death threats personalized with their home addresses — all because they voiced contrary opinions about the video-game industry while possessing two X chromosomes.
That’s a problem for the game industry, which is seeing what would otherwise be a worthwhile discussion of game-journalism ethics twisted into a reactionary lashing out against feminist critiques.
It’s a bigger problem for Twitter, where this breed of hateful nonsense has been going on since well before the current rash of harassment known as “Gamergate.” Unless the social network starts dealing with this problem more seriously, it will be festering long after the current controversy drops out of the headlines.
How Gamergate got ugly
Game developers Zoe Quinn (Depression Quest) and Brianna Wu (Revolution 60) and game critic Anita Sarkeesian separately ran afoul of a nutcase fringe of the gaming industry for reasons that amount to “only the right kind of women are welcome in our industry.”
That mind-set is not unique to games. Far too much of the tech business deals poorly with women taking roles outside of PR and HR. But in gaming, outright resentment of people working to change the gender ratio — often disparaged as “SJWs,” for “social justice warriors” — is public and prominent in many of the #gamergate tweets.
The contention that the feminist agenda — that all-powerful force that’s secured all of 20 percent female representation in the Senate after decades of effort — suppresses creativity in the industry is laughable. (A more common Gamergate argument, that game journalism needs more ethics and transparency, is fairer and less gender-weighted.)
Gamergate got especially ugly in three instances: When Quinn shipped an unusual and not-always-fun game, when Sarkeesian objected to the portrayal of women in games, and when Wu mocked Gamergate complaints. Hateful tweets led to various Internet creeps doxing these women — looking up their personal info — and using those details to personalize threats of rape and murder with their home addresses.
All three made the understandable choice to stay with friends for a while. Said Wu in an email Sunday: “I’d rather not talk about when I will be home, but I can say I’m in communication with law enforcement.”
“I’m not able to sleep very much right now,” she added.
Twitter is not helping enough
While Quinn, Wu, Sarkeesian, and other Gamergate targets have refused to leave Twitter and other networks (see, for example, Quinn’s “ask me anything” on Reddit), two other accounts of social-media-fueled harassment emerged to underline that this isn’t just a gaming issue.
Developer and educator Kathy Sierra posted a lengthy essay about her experience being doxed and threatened by an online mob in 2007, and then choosing to leave Twitter this year. In it, she wrote that Twitter’s dynamics fuel these distributed attacks: “Twitter, for all its good, is a hate amplifier. Twitter boosts signal power with head-snapping speed and strength.”
Sierra’s post, in turn, inspired developer Adria Richards to speak about getting the same treatment after calling out sexist jokes at a developer conference last year. Her summary of the experience in an email: “Social networks feel like a city without 911.”
What makes Twitter so tempting to trolls? Until a target blocks them, they can make her read whatever they write simply by tweeting at her username — and if they attach a photo of dead or mutilated bodies, it is displayed by default. And once the recipient does block them, it’s easy to create another account.
Twitter has been shamefully slow in addressing this problem. It didn’t add a report-abuse button until August 2013, after British activist Caroline Criado-Perez was hit with a torrent of violent threats because she campaigned to get Jane Austen’s portrait on the £10 note.
Until perhaps a week ago, it routinely rejected third-party reports of abuse — its documentation still says only first parties or “authorized representatives” can do this.
If somebody tweets a violent threat and then deletes the tweet, good luck getting Twitter to act. Its report-abuse form still requires a link to a tweet, not a screen capture of it. Twitter public-policy rep Nu Wexler told me last year that screencaps suffice, but I keep seeing reports that this doesn’t work.
And Twitter still doesn’t give its users options to block certain types of accounts (for instance, those younger than 30 days) or content (like violent keywords), as developer Danilo Campos suggested in July. It’s shown no sign of learning from such collaborative-blocking experiments as Jacob Hoffman-Andrews’ BlockTogether. And it has yet to apply its powerful analytics to detecting hostile behavior by its users.
A Twitter spokesperson emailed: “We evaluate and refine our policies based on input from users, while working with outside organizations to ensure that we have industry best practices in place.”
The site might want to start by talking to management at Facebook, which has been making a concerted effort to deal with toxic hatred. Said Soraya Chemaly, author of a nearly 5,800-word piece for The Atlantic about social-media misogyny: “Despite ongoing issues, Facebook is committed to addressing concerns and responsive when problems arise. Twitter is not this far along and seems to be just beginning to consider this process.”
Twitter can do this. I want to see this company, which showed that it had a backbone when it sued the government for its right to provide details about how often it fields national-security inquiries about its users, recognize the problem in its own house, and start the difficult work of fixing it. Today, please.