How is Twitter cracking down on verified users who post negative content?

Answer: By revoking their verified badges and imposing new guidelines for users seeking verification.

November 17, 2017

On Wednesday, Nov. 15, Twitter announced via tweet that it had updated the qualification guidelines for verification on its platform, and that it would review currently verified users and revoke the badges of those who do not meet the new guidelines. The company had previously announced that it was pausing general verifications pending new rules.

The social media giant’s blue verification badges were not originally designed to be endorsements, but simply an indicator that an account owner has proved they are who they say they are. However, the badges have come to carry that unintended meaning, which became a problem for Twitter last week when it faced significant backlash for verifying the account of an organizer of the Unite the Right rally in Charlottesville, Va., in August, which left one person dead and many injured.

Under the new rules, users will lose their verified status if their accounts display or promote hate, violence or violent imagery, harassment, or threats against others or themselves, or otherwise violate the pre-existing Twitter Rules. The announcement tweet suggests that these new rules are an official acknowledgment by the company that many people view its verification badges as an endorsement.