Dispy


X’s Hollow Promise: Reducing Hate Content in the UK is Just a Start

X’s recent pledge to reduce hate and terror content in the UK has been met with a mixture of relief and skepticism. The move, announced in partnership with the regulator Ofcom, promises expedited reviews of offending content and bans on accounts operated by or on behalf of terrorist organizations. But given X’s track record and the context surrounding the commitment, it is hard not to read this as a half-hearted attempt to placate critics rather than a genuine effort to tackle the problem.

The statistics paint a damning picture: hate speech on Twitter (since renamed X) rose by 50 percent after Elon Musk’s takeover. The spike was driven in large part by an influx of bots, which spread hateful and extremist content at scale. X’s new promise to review such content within 24 hours therefore raises more questions than it answers.

Millions of users have been exposed to hate speech on the platform, and their experiences will inform any assessment of how well X’s new policies work. But users who have reported hateful posts repeatedly, only to see them remain online, may find little solace in X’s promises.

X has a long history of failing to adequately address hate and terror content on its platform. From its early days as Twitter to its current iteration as X, the company has struggled to balance free speech against user safety. The contrast between X’s words and its actions is striking: while the company claims to prioritize user safety and take a hard line on hate content, its owner Elon Musk continues to post racist content without consequence.

The regulator Ofcom will be keeping a close eye on X’s performance over the next year, reviewing data quarterly to ensure that the company meets its obligations. But even if X manages to reduce hate and terror content in the UK, it’s unlikely to have any significant impact on the broader global issue of online extremism.

X’s efforts may also be an attempt to deflect attention from more pressing concerns, such as Grok, the AI chatbot built by Musk’s company xAI, which is currently under investigation by Ofcom over the generation of CSAM and non-consensual intimate images. The fine imposed on the imageboard 4chan earlier this year was a small victory, but it is clear there is still much work to be done in holding social media companies accountable.

Ultimately, X’s promise to reduce hate content in the UK is just a starting point. The real test will come when we see whether the company follows through on its commitments or continues to prioritize profits over people.

Reader Views

  • CM
    Columnist M. Reid · opinion columnist

    X's promise to crack down on hate content in the UK feels like a strategic move to shift public opinion rather than a genuine commitment to reform. The company's lack of transparency regarding its moderation processes and algorithms raises questions about how these new policies will be implemented. One glaring omission from X's plan is any mention of addressing the role of bots in spreading hateful content. Without concrete measures to address this issue, it's unlikely that X's efforts will have a significant impact on reducing hate speech online.

  • RJ
    Reporter J. Avery · staff reporter

    X's latest promise to crack down on hate content is nothing more than a PR Band-Aid, designed to stem the tide of public outcry rather than genuinely address the problem. One aspect that's been largely overlooked in the coverage is the role of AI moderation in policing user-generated content. Can X truly say it has the tech and expertise in place to effectively identify and remove hate speech within 24 hours, or will this pledge ultimately rely on human moderators overwhelmed by an endless stream of reports? The company's failure to provide a clear answer speaks volumes about its commitment to change.

  • EK
    Editor K. Wells · editor

    X's latest attempt to address hate speech on its platform is a classic example of too little, too late. The real question is not how quickly X can review offending content, but whether the company has the internal mechanisms in place to prevent such content from spreading in the first place. Given the platform's history of bot-driven hate speech and Elon Musk's own track record on posting inflammatory material, it's hard to trust that X will follow through on its promises.
