Opinion

Online hate might just be an issue of bad design

Companies such as Google and Riot Games are implementing new strategies to tackle poisonous speech, proving that the sorry state of online discourse can be fixed.

Anonymity sets the foundation for aggression, and the lack of consequences is what keeps the harassment going

Companies such as Google and Riot Games are implementing new strategies to tackle poisonous speech. (Justin Sullivan/Getty Images)

The misanthrope's view of the internet is that it's a hotbed for hate speech and angry trolling, and that it will forever be so.

This is because, according to this view, the online world is basically a lawless frontier where we're free from the structure, confines and civility of real life. Another perspective is that the internet simply acts as a massive floodlight, exposing the ugliest parts of human nature.

But new approaches to taming trolls show that the current state of online toxicity may just be an issue of bad design. Companies such as Google and Riot Games, the maker of the massively multiplayer game League of Legends, are implementing new strategies to tackle poisonous speech, and these solutions might also prove successful in taming trolls on news sites and other online communities.

The old tactics

The existing strategy for dealing with toxic speech has, by and large, been to raise a white flag: news outlets such as the Toronto Star, NPR, Reuters, Popular Science, The Telegraph and Recode have all closed down their comment sections. Others have eliminated the anonymity element, believing that forcing commenters to use their real names will inspire more accountability.

These approaches are flawed, however. The internet is all about engaging with your audience, something that becomes much more difficult when comments are prohibited. What's more, anonymity is one of the most powerful assets of the internet: it allows individuals to explore different aspects of their identities and express their beliefs without fear. We should be able to rein in the bad without having to forfeit the good.

Beyond that, a number of studies show that anonymity might not be driving online toxicity after all. Rather, it could very well be the lack of repercussions and real-life consequences, coupled with anonymity, that fuels nasty behaviour online. Indeed, anonymity might set the foundation for aggression, but the lack of consequences is arguably what keeps the harassment going.

The new tools

Jeffrey Lin, former designer at Riot Games, tried to remedy that with a tool called "The Tribunal." Using The Tribunal, League of Legends players report behaviour they find unacceptable, and other community members then vote on whether they believe the behaviour is permissible or not. After initially implementing the tool, Riot Games incorporated artificial intelligence to make the whole process more efficient. Humans still identify which behaviour is and isn't acceptable, but the machine learning system delivers swift, customized consequences and penalties, such as chat restrictions and temporary bans from the game.
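To make the mechanics concrete, here is a rough Python sketch of how a Tribunal-style loop could work. The function names, vote threshold and penalty ladder are purely illustrative assumptions, not Riot's actual system; the point is simply that community judgment decides guilt while software delivers an escalating consequence.

```python
from collections import Counter

# Hypothetical penalty ladder, loosely modelled on the consequences the
# column describes: chat restrictions first, temporary bans for repeat offences.
PENALTIES = ["warning", "chat_restriction", "temporary_ban"]

def tribunal_verdict(votes, threshold=0.7):
    """Return True if enough community reviewers judged the behaviour unacceptable.

    votes is a list of "punish" / "pardon" strings cast by reviewers.
    The 70% threshold is an assumption for illustration.
    """
    tally = Counter(votes)
    return len(votes) > 0 and tally["punish"] / len(votes) >= threshold

def apply_penalty(prior_offences):
    """Escalate the penalty with each confirmed offence."""
    level = min(prior_offences, len(PENALTIES) - 1)
    return PENALTIES[level]

# Example: a reported chat log is reviewed by seven community members.
votes = ["punish", "punish", "pardon", "punish", "punish", "punish", "pardon"]
if tribunal_verdict(votes):
    print(apply_penalty(prior_offences=1))  # -> "chat_restriction"
```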

Now Google is trying a similar approach. The company's tech incubator, Jigsaw, along with its Counter Abuse Technology team, recently launched Perspective, a public API that uses artificial intelligence to automatically flag toxic online speech. By comparing new comments with a large data set of archived comments previously flagged as toxic, from sources such as Wikipedia or online news comment sections, Jigsaw believes it can positively identify hateful speech. As a result, a user's commenting privileges may be revoked, or he or she might be subject to "shadowbanning," whereby comments are invisible to other members of the community.
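Perspective is a real, public API, and a minimal request looks roughly like the Python snippet below. The API key is a placeholder you would obtain from Google, the 0.8 cutoff is an arbitrary assumption for illustration, and the exact request format may have changed since the API's launch.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: issued through the Google Cloud console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(comment: str) -> float:
    """Ask Perspective to score a comment; returns a value between 0 and 1."""
    payload = {
        "comment": {"text": comment},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A moderation pipeline might queue high-scoring comments for human review
# rather than deleting them outright; the threshold here is an assumption.
if toxicity_score("some new reader comment") > 0.8:
    print("flag for review")
```

A score is a probability-like estimate, not a verdict, which is why a system like this pairs well with the human-in-the-loop review the Tribunal model relies on.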

Both of these models suggest that the current state of online toxicity isn't inevitable or irreversible. Instead, perhaps it is just an issue of bad design or, as is often the case, of no design at all. When people step out of line, there need to be consequences, which is where good design strategies and machine learning can help. Online discourse can get better; communities just need the tools to help make it happen.

This column is part of CBC's Opinion section. For more information about this section, please read this editor's blog and our FAQ.

Corrections

  • A previous version of this column stated that Jeffrey Lin was an employee of Riot Games. Lin actually left the company in 2016.
    Feb 28, 2017 12:52 PM ET