Instagram deploys AI to reduce offensive post captions

Instagram is using artificial intelligence to reduce the number of offensive captions on its platform. The company already uses this technology to encourage users to self-police the comments they leave on posts: the AI warns a user when their comment may be hurtful to those who read it (attempting to call someone an idiot, for example, results in a warning).

Instagram lets users report comments for a range of issues, including bullying. Human reviewers either dismiss or act on each report, and the resulting labeled data is used to train the platform's anti-harassment AI system.

Armed with that training data, Instagram can automatically detect new comments that resemble ones previously reported by humans. When it finds one, the AI prompts the user to review and, if necessary, edit the comment to make it less offensive.
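Instagram hasn't published the details of its classifier, but the general idea of flagging new comments that resemble previously reported ones can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the TF-IDF features, the cosine-similarity comparison, the 0.6 threshold, and the sample reported comments are stand-ins, not Instagram's actual system.

```python
# Minimal sketch: flag a new comment if it closely resembles a comment
# that human reviewers previously confirmed as harassing. All choices here
# (features, threshold, example phrases) are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Comments previously reported and acted on by human reviewers (hypothetical).
reported_comments = [
    "you are such an idiot",
    "nobody likes you, loser",
    "that is the dumbest thing ever",
]

vectorizer = TfidfVectorizer()
reported_vectors = vectorizer.fit_transform(reported_comments)

def should_warn(new_comment: str, threshold: float = 0.6) -> bool:
    """Return True if the new comment closely resembles a reported one."""
    new_vector = vectorizer.transform([new_comment])
    similarity = cosine_similarity(new_vector, reported_vectors)
    return similarity.max() >= threshold

if should_warn("you're an idiot"):
    print("Your comment may be hurtful. Would you like to edit it?")
```

A production system would use a far richer model trained on a huge volume of moderated reports, but the shape of the problem is the same: compare a new comment against known-bad examples and warn when it lands too close.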

The same technology now reviews captions on posts. If a caption trips the system, the Instagram app alerts the user that the text may be harmful and offers a chance to edit it before the post is published. Instagram doesn't require the caption to be edited, however.
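That warn-but-don't-block flow is simple to picture. The sketch below is purely illustrative: the function names and the keyword-based stand-in for the trained model are hypothetical, not Instagram's actual code.

```python
# Hedged sketch of the optional-edit flow described above.
def looks_harmful(caption: str) -> bool:
    # Stand-in for the trained model; flags a couple of sample phrases.
    return any(phrase in caption.lower() for phrase in ("idiot", "loser"))

def publish_post(caption: str, confirmed: bool = False) -> str:
    if looks_harmful(caption) and not confirmed:
        # The user is warned but not blocked: they may edit or post as-is.
        return "warning: this caption may be hurtful; edit or confirm to post"
    return "posted"

print(publish_post("you are an idiot"))        # triggers the warning
print(publish_post("you are an idiot", True))  # user chose to post anyway
```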

Caption alerts are live starting today for many Instagram users, though a full rollout will take a few months. Users can still publish a caption that drew a warning, but Instagram reminds them that they must follow the company's rules to avoid putting their accounts at risk.