Google-Funded Troll Algorithm Targets Antisocial Behavior

Google has funded a study by Cornell and Stanford researchers who have created an algorithm for identifying trolls before they become too much of a problem. Though it isn't perfectly accurate, it does a good job of flagging users who are likely to end up getting the banhammer. Along the way, the algorithm isolates a number of online behaviors typical of trolls, referred to as antisocial behaviors, including making far more posts in a given block of time than regular, non-troll users.

The study looked at several large communities with frequent posts, such as CNN's, which had a combined user base of 1.7 million users and about 40 million posts made during the year and a half the researchers monitored the forums.

The study tracked the behavior of users who ended up permanently banned from the communities, as well as the behavior of users who were never banned; those who received only temporary bans weren't factored into the study's data. The researchers found that trolls tend to write lower-quality posts than regular users from the very start, and that they were more likely to use language designed to stir up other users, such as swearing.

In addition, trolls were found to post far more often than regular users. CNN users who ended up banned, for example, posted 264 times before getting the banhammer, while regular users in the same span of time made only about 22 posts. Trolls also tend to draw more replies from other users than regular users do, likely because of their negative, provocative statements. In the end, the algorithm needs no more than ten of a user's posts to determine whether that user is likely to become a troll.
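To make the idea concrete, here is a minimal sketch of how a classifier in the spirit of the one described might score a user's first ten posts on the signals the article mentions: posting rate, inflammatory language, and replies drawn. This is not the researchers' actual algorithm; the feature names, word list, weights, and threshold are all invented for illustration.

```python
# Toy lexicon standing in for a real inflammatory-language detector.
NEGATIVE_WORDS = {"idiot", "stupid", "hate", "moron"}

def extract_features(posts):
    """posts: list of dicts with 'text', 'replies' (int), and
    'hours_since_signup' (float). Returns simple per-user signals."""
    texts = [p["text"] for p in posts]
    total_words = sum(len(t.split()) for t in texts) or 1
    negative = sum(
        1
        for t in texts
        for w in t.lower().split()
        if w.strip(".,!?") in NEGATIVE_WORDS
    )
    span_hours = max(p["hours_since_signup"] for p in posts) or 1
    return {
        "posts_per_day": 24 * len(posts) / span_hours,
        "negative_word_rate": negative / total_words,
        "avg_replies": sum(p["replies"] for p in posts) / len(posts),
    }

def likely_troll(posts, threshold=1.0):
    """Linear score over the user's first ten posts.
    The weights and threshold are made up for this sketch."""
    f = extract_features(posts[:10])
    score = (
        0.05 * f["posts_per_day"]
        + 8.0 * f["negative_word_rate"]
        + 0.1 * f["avg_replies"]
    )
    return score > threshold
```

In practice a study like this would learn the weights from labeled (banned vs. not-banned) users with a standard classifier rather than hand-picking them, but the shape of the decision is the same: a handful of early behavioral features, combined into one score.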