YouTube has announced a crackdown on harassment on its platform, building upon its existing policies for problematic videos. Though the company already removes content that violates its guidelines, the new changes broaden how it deals with harassment. The company is also rolling out a system that squashes toxic comments as they arrive.
YouTube has historically prohibited videos that dox people, contain explicit threats, or encourage viewers to harass or otherwise target a particular person. Under the new policies, however, YouTube says it is also prohibiting videos that contain ‘veiled or implied threats.’ Language suggesting someone may face physical violence falls under this category, as do videos featuring simulated violence against someone.
Videos containing ‘demeaning language that goes too far’ will also be prohibited going forward. YouTube will remove videos that contain ‘malicious insults’ based on ‘protected attributes’ such as sexual orientation and race. This policy isn’t limited to private individuals — YouTube says it will also be enforced on videos targeting public figures and other YouTube creators.
The platform is also addressing users who engage in what it considers a pattern of harassment, even when that pattern isn’t apparent from any single video. Channels found to ‘repeatedly brush up against’ the company’s harassment policies will be booted from the YouTube Partner Program, meaning they won’t be able to make money from advertisements.
Finally, YouTube is rolling out a system that flags potentially toxic comments and holds them for the creator to review before they are published. In testing, the system caught the majority of comments that users would otherwise have flagged, and it is now rolling out as a default setting on the platform’s biggest channels. Creators retain the option of opting out, however.