
How can machine learning help avoid Internet trolls?

Answer: Perspective

It’s easy to speak your mind from behind the safety of a computer screen, which is why the Internet can be uncertain ground for positive interaction. Research finds that almost a third of people self-censor online for fear of angry retaliation. Moderating the comments on news sites can be a full-time job as staff try to root out abusive language, sometimes causing them to shut down commenting altogether. But there may be a better way.

On Thursday, Google and Jigsaw, a tech incubator under Google’s parent company, Alphabet, announced Perspective, which uses machine learning to help identify online harassment. The company looked at hundreds of thousands of online comments and had human users rank how abusive they were.

Perspective then took that information and identified what text was likely to be perceived as “toxic,” or “a rude or unreasonable comment that is likely to make you leave a discussion,” and is now able to rate text to find what is most offensive. Although it is still in early development, the API is available to publishers to try now. And you can test Perspective yourself with its online writing experiment, where you can enter text to “see the potential effect of what you’re writing.” 
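For publishers who want to try the API, scoring a comment is a matter of sending the text to Perspective’s Comment Analyzer endpoint and reading back a toxicity probability. The sketch below shows the general shape of that exchange in Python; the endpoint and field names follow the publicly documented API, but the API key is a placeholder and the sample response here is illustrative, not a real result:

```python
import json

# Perspective's Comment Analyzer endpoint (the key is a placeholder --
# publishers request their own through Google).
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    })

def toxicity_score(response_json):
    """Pull the 0-to-1 toxicity probability out of an API response."""
    data = json.loads(response_json)
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative response in the API's documented shape (not a real score).
sample_response = json.dumps({
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
})
```

A moderation tool would POST `build_request(...)` to `API_URL` and could then flag or hold any comment whose score crosses a threshold the publisher chooses.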

Lauren Kinkade is the managing editor for Government Technology magazine. She has a degree in English from the University of California, Berkeley, and more than 15 years’ experience in book and magazine publishing.