Twitter's Community Notes Are Getting A Massive Video Expansion

X, formerly known as Twitter, has a crowdsourced fact-checking system in place called Community Notes. Essentially, it allows approved members to attach written notes to a post that might contain misleading information. Once a Community Note is attached to a problematic post, it becomes visible to all users so that they get the "verified" context about the content they're seeing.

Until recently, the Community Notes feature was limited to text posts; X added support for images only a week ago. Today, the company announced that Community Notes now covers video content as well. Moving forward, every time a questionable video is shared on the platform, an AI-driven system will match it against the source, identify the clip, and attach the relevant Community Note so that viewers are aware of its nature.

With multimedia support now covering both stills and videos, X hopes to stem the spread of, and damage done by, "edited clips, AI-generated videos," and other forms of harmful content. Community Notes are contributed by a select group of experts from across more than 40 countries. As helpful as the feature sounds, it has a few fundamental flaws. For example, for a Community Note to become visible, it first needs to be approved by members on both sides of the discourse. That means harmful or misleading content can go unchecked for a while before it gets tagged with the proper disclaimer, if at all.

It is no panacea for bad content

Community Notes is in no way going to replace the rigorous work done by dedicated fact-checking organizations, which are faster and draw on a pool of certified experts without any of the consensus requirements that X has imposed on its system.

"Essentially, it requires a 'cross-ideological agreement on truth,' and in an increasingly partisan environment, achieving that consensus is almost impossible," says Poynter Institute in an analysis of the Community Notes system. Another critical flaw is X's dramatically disproportionate implementation of its moderation, safety, and security features.

For example, Twitter has repeatedly been called out for censoring voices critical of the government in markets like India and the Middle East, where content shared by journalists and media outlets is regularly pulled or withheld at the government's behest. As the 2024 elections in both India and the U.S. inch closer, the stakes are only going to get higher.

In effect, Community Notes shifts the onus of fact-checking onto the platform's most prolific users with a certain level of expertise, rather than a dedicated trust and safety team. Elon Musk famously gutted the company's safety team soon after he took over, though he is now rebuilding it as X opens up to political ads in its home market after banning them in 2019.