After launching an initial test of new alert formats back in July, Twitter’s now rolling out its improved labels for misinformation, with variable messaging for different kinds of potentially misleading elements within tweets.
As you can see here, the new labels will now be displayed with different messages and alert colors, providing more context and better explaining why each tweet has been flagged.
Twitter says that its initial misinformation tags, released in February last year, were criticized for being too small and too unclear. That's why it's updated the format, to ensure that it's doing its part, where possible, to make users aware of misleading claims that don't otherwise violate its guidelines.
In testing over the last few months, with the updated format available to some users on the web version of the app, the results have been positive:
“In our tests, the new design increased the clickthrough rate on labels by 17%, from 3% to 3.5%. This number might sound low, but in many contexts, a 2% clickthrough rate is considered exceptionally good. The new label design also decreased shares by 10%, and decreased likes by 15%. Reducing sharing and engagement helps keep misleading content from propagating across Twitter.”
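It's worth noting that the "17%" figure is a relative increase, not an absolute one. As a quick sanity check on the quoted numbers (a sketch, not part of Twitter's report), moving from a 3% to a 3.5% clickthrough rate is a 0.5-point absolute gain but roughly a 17% relative increase:

```python
def relative_increase(old: float, new: float) -> float:
    """Return the relative change from old to new, as a percentage."""
    return (new - old) / old * 100

# 3% -> 3.5% clickthrough: 0.5-point absolute gain...
absolute_gain = 3.5 - 3.0  # 0.5 percentage points

# ...but about a 17% relative increase, matching the quoted figure.
print(round(relative_increase(3.0, 3.5)))  # → 17
```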
While Facebook has copped the brunt of criticism over the spread of misinformation and manipulation on social networks, Twitter, too, has played a part, with various research reports showing harmful misinformation trends often originate from the platform, before spreading to other networks.
Much of this is attributed to bot activity. In the wake of the 2016 US Election, for example, researchers uncovered “huge, inter-connected Twitter bot networks” seeking to influence political discussion, with the largest incorporating some 500,000 fake accounts. An investigation by Wired in 2019 showed that bot profiles dominated political news streams, with bot accounts contributing up to 60% of tweet activity around some major events. And early last year, a network of Twitter bots was found to be spreading misinformation about the Australian bushfire crisis, amplifying anti-climate change conspiracy theories in opposition to established facts.
Because Twitter is smaller in terms of overall users, its influence may seem less significant, but many of the most highly engaged news consumers, and conspiracy theorists, follow the latest updates via tweet, then relay that info to other networks. So while Twitter itself may only have 211 million daily active users, versus Facebook’s 1.9 billion, it still plays a key role in disseminating information, in both positive and negative respects.
Which is why it’s important for Twitter to take steps where it can to address potentially harmful misinformation.
Of course, the criticism then comes back to who decides what’s misinformation and what’s not, but Twitter, in partnership with fact-checking groups, is taking the right steps here in advancing its fact-checking alerts and efforts.