It’s easy to speak your mind from behind the safety of a computer screen, which is why the Internet can be uncertain ground for positive interaction. Research finds that almost a third of people self-censor online for fear of angry retaliation. Moderating the comments on news sites can be a full-time job as staff try to root out abusive language, sometimes causing them to shut down commenting altogether. But there may be a better way.

On Thursday, Google and Jigsaw, a tech incubator under Google’s parent company, Alphabet, announced Perspective, which uses machine learning to help identify online harassment. To train the system, Jigsaw collected hundreds of thousands of online comments and had human reviewers rate how abusive each one was.

Perspective then used those ratings to learn which text is likely to be perceived as “toxic,” defined as “a rude or unreasonable comment that is likely to make you leave a discussion,” and it can now score new text by how offensive it is likely to read. Although it is still in early development, the API is available for publishers to try now. And you can test Perspective yourself with its online writing experiment, where you can enter text to “see the potential effect of what you’re writing.”
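For developers curious what calling such a scoring API might look like, here is a minimal sketch in Python. The request and response shapes below are assumptions modeled on Perspective’s publicly described analyze call (a comment, a requested `TOXICITY` attribute, and a 0–1 summary score in the response); the exact endpoint, field names, and score layout are not confirmed by this article, and the response here is mocked rather than fetched live.

```python
import json

def build_analyze_request(text):
    """Build a JSON body asking the (assumed) analyze endpoint
    to score a comment for TOXICITY."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response):
    """Pull the 0-1 toxicity probability out of an analyze response,
    assuming the attributeScores/summaryScore layout."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    body = build_analyze_request("You are a wonderful person.")
    print(json.dumps(body, indent=2))

    # A mocked response in the assumed shape, standing in for a live call:
    mock_response = {
        "attributeScores": {
            "TOXICITY": {"summaryScore": {"value": 0.07, "type": "PROBABILITY"}}
        }
    }
    print("toxicity score:", extract_toxicity(mock_response))
```

A publisher integrating this would send the request body to the API with their key and then compare the returned score against a moderation threshold of their choosing, for example flagging comments above 0.8 for human review.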