X, the social media platform owned by Elon Musk, played a “central role” in stoking last year’s racist riots, a damning report has concluded.
In news that will come as a surprise to almost nobody, analysis by Amnesty International found that X, formerly Twitter, prioritises content most likely to contain misinformation and hatred.
Their report found that this encouraged the spread of lies and fake news following the Southport stabbings last summer, in which three schoolgirls were killed.
Posts on the platform claimed the attacker, British-born Axel Rudakubana, was a Muslim or illegal immigrant who had arrived in the UK by small boat. Within 24 hours, posts making these claims had been viewed 27 million times.
The horrific stabbings became a lightning rod for far-right agitators such as Tommy Robinson, and within days of the attack there were anti-immigration and racist marches taking place across the UK, including in Southport.
The report states: “In the critical window after the Southport attack, X’s engagement-driven system meant that inflammatory posts, even if entirely false, went viral, outpacing efforts to correct the record or de-amplify harmful content – some of which amounted to advocacy of hatred that constitutes incitement to discrimination or violence.”
Amnesty’s report condemned Musk for having “dismantled or weakened” key safeguards on X after he took it over in 2022.
They said the result of all this was a “staggering amplification of hate speech and anti-immigrant sentiment.”
Sacha Deshmukh, Amnesty International UK’s chief executive, said: “By amplifying hate and misinformation on such a massive scale, X acted like petrol on the fire of racist violence in the aftermath of the Southport tragedy.
“The platform’s algorithms not only failed to ‘break the circuit’ and stop the spread of dangerous falsehoods; they are highly likely to have amplified them.”
The report found that X’s systems prioritise any content that drives conversation, regardless of whether it is false or hateful.
The ability for accounts to buy a premium subscription, which gives their posts even greater reach, has increased the risk of “toxic, racist, and false” content spreading.
Pat de Brún, Amnesty’s head of big tech accountability, said: “X’s algorithm favours what would provoke a response and delivers it at scale. Divisive content that drives replies, irrespective of its accuracy or harm, may be prioritised and surface more quickly in timelines than verified information.”
Amnesty said X “continues to present a serious human rights risk.”