Artificial Intelligence Has Learned To See Network Trolls - Alternative View

Anonim

An artificial intelligence read aggressive comments from Reddit users and learned how people who hate other people talk. It is harder to deceive than a regular bot moderator.

The Internet is inhabited by trolls of every stripe and by plain rude people who are never at a loss for words. Moderating the Internet manually is hard, thankless work. Bots programmed to search for words on a "forbidden list" do better, but they cannot tell when a caustic comment with coded words is a harmless joke and when it is a vicious verbal attack.
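The "forbidden list" approach the article describes can be sketched in a few lines. This is a minimal illustration, not any real platform's moderation code; the word list and comments are invented placeholders:

```python
# Minimal sketch of a keyword-based moderation bot.
# BANNED is a hypothetical "forbidden list"; real ones are much larger.
BANNED = {"idiot", "moron"}

def flag(comment: str) -> bool:
    """Flag a comment if it contains any banned word, regardless of context."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & BANNED)

# Context-blind: an ironic, friendly use is flagged exactly like a real attack,
# while hostility phrased without listed words slips through.
print(flag("You absolute idiot, I love you"))  # True  (false positive)
print(flag("People like you ruin everything"))  # False (missed hostility)
```

This context blindness is precisely the weakness the article's neural-network approach was built to address.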

Researchers in Canada have taught an artificial intelligence to distinguish rude but harmless jokes from genuinely hurtful remarks and from what sociologists call "hate speech."

Dana Wormsley, one of the creators of the AI, notes that "hate speech" is difficult to formalize. Formally offensive words can be used ironically or in milder senses; only some of the texts containing such words seriously incite hatred or humiliate. (We will not give examples, because the government organizations that regulate the Internet in Russia do not yet have artificial intelligence.)

The neural network was trained on statements by members of communities known for their hostility toward various groups of the population. It learned from posts on Reddit, a platform hosting a wide variety of interest groups, from civil-rights activists to radical misogynists. The texts uploaded to the system most often insulted African Americans, overweight people, and women.
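The core idea of learning from labeled examples rather than a word list can be shown with a toy classifier. This is only a sketch of the general technique: a tiny Naive Bayes model on invented sentences, not the researchers' neural network or their Reddit corpus:

```python
from collections import Counter
import math

# Toy labeled data standing in for scraped posts (1 = hostile, 0 = benign).
# All sentences are invented for illustration.
train = [
    ("they do not deserve rights", 1),
    ("those people ruin everything", 1),
    ("send them all away", 1),
    ("what a lovely day today", 0),
    ("this recipe is great", 0),
    ("thanks for the helpful answer", 0),
]

def tokenize(text):
    return text.lower().split()

# Per-class word counts and class priors, estimated from the training data.
counts = {0: Counter(), 1: Counter()}
priors = Counter()
for text, label in train:
    counts[label].update(tokenize(text))
    priors[label] += 1

vocab = set(counts[0]) | set(counts[1])

def predict(text):
    """Naive Bayes with add-one smoothing: return the more likely class."""
    scores = {}
    for label in (0, 1):
        total = sum(counts[label].values())
        logp = math.log(priors[label] / len(train))
        for w in tokenize(text):
            logp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = logp
    return max(scores, key=scores.get)

# Generalizes beyond exact keyword matches seen in training:
print(predict("those people do not deserve anything"))  # 1
print(predict("thanks for the lovely recipe"))          # 0
```

Unlike a keyword filter, the model weighs every word it has seen in context, which is how a learned system can catch hostility even when no single "indicator word" is present.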

A neural network trained this way gave fewer false positives than programs that detect "hate speech" by keywords: the system caught racism even where there were no indicator words at all. Despite this good performance, the creators of the AI moderator are not sure their system will be widely used. So far it has successfully found hate speech on Reddit, but whether it can do the same on Facebook and other platforms is unknown. The system is also imperfect, sometimes missing obviously rude racist statements that a keyword search would catch. For now, the only one able to reliably distinguish an insult from an innocent joke remains a human.