Researchers from McGill University in Montreal have developed an artificial intelligence (AI) program that they believe could help put an end to online hate speech.

Haji Mohammad Saleem and his team developed a system that learns to recognize hate speech from a data dump they collected from Reddit between 2006 and 2016: ten years of posts from support and abuse communities on the platform, reports Futurism. They were also able to use similar data dumps from other sites.

Their focus was on three groups that tend to receive a great deal of hate: women, African Americans and overweight people. "We then propose an approach to detecting hateful speech that uses content produced by self-identifying hateful communities as training data," the researchers wrote in a release.

"Our approach bypasses the expensive annotation process often required to train keyword systems and performs well across several established platforms, making substantial improvements over current state-of-the-art approaches."

One such "state-of-the-art" approach was Google-powered Jigsaw of the Alphabet parent company. It failed when it came to identifying hate speech because of its methodology. It assigned scores for hate speech called the toxicity score when it identified certain keywords or phrases. Some obviously hateful phrases like, "you're pretty smart for a girl" only got an 18% score as to what people would consider toxic.

The McGill AI, on the other hand, relies on subtext rather than on keywords alone, whose meaning can easily be lost. This resulted in far fewer false positives. Professor Thomas Davidson of Cornell University, in an interview with New Scientist, said: "Comparing hateful and non-hateful communities to find the language that distinguishes them is a clever solution."

One drawback is that the AI relies on Reddit for its data. According to Mashable, the site has a distinct style of speech, which may not transfer well to other social media platforms.

The report also pointed out that the AI was not able to catch some instances of obvious hate speech that a keyword-based AI would have easily caught. "Ultimately, hate speech is a subjective phenomenon that requires human judgment to identify," Davidson added.

This development comes at a time when governments around the world have started to crack down on hate speech and terrorist propaganda. Speaking recently at the UN, British Prime Minister Theresa May urged internet companies to detect and remove extremist content.

EU laws might reach a point where governments are able to censor the internet and block content at will.

Photo: AI could be one way to filter hate speech online. FABRICE COFFRINI/AFP/Getty Images