Are social media platforms becoming lawless zones where freedom of expression serves as a pretext for spreading hate? The question is particularly acute for X (formerly Twitter), which has undergone a radical shift in content regulation. Between the promise of freer speech and very real abuses, the platform’s new policies raise deep concerns about user safety and the responsibility of tech giants. This shift is anything but trivial: it is redefining the boundaries of our digital public space.
Moderation on X: a radical turning point in the Musk era
Since Elon Musk’s acquisition, X has transformed its content moderation approach, moving away from established policies to embrace a philosophy of “almost absolute freedom of speech.” This shift has resulted in a drastic reduction of human moderation teams, a loosening of rules, and increased reliance on automated systems. Previously, the platform published detailed transparency reports, often around fifty pages long, providing a granular view of actions taken against misinformation, hate speech, and other harmful content. Today, communication is scarcer and reports are briefer, using new metrics that make direct comparison difficult.
This ideological turn has had concrete consequences on the types of content tolerated. For example, the platform dismantled its policy against COVID-19 misinformation. Additionally, practices such as misgendering or deadnaming (using the birth-assigned name of a transgender person without consent) are no longer systematically classified as hate speech. These decisions, made in the name of less constrained speech, have opened the door to a resurgence of problematic content, leaving many users and observers concerned about the direction the social network is taking.
This new era contrasts starkly with the previous one, in which an imperfect but genuine balance was sought between protecting users and preserving freedom of speech. The dismantling of advisory councils and the reinstatement of accounts previously banned for serious rule violations sent a clear signal: the priority is no longer rigorous content curation but minimal intervention, even if toxic discourse flourishes.
What do the numbers say? Analysis of the latest transparency report
The first transparency report published by X in two years, covering the first half of 2024, paints a troubling picture. The raw numbers reveal a massive disconnect between user reports and platform actions. On one hand, reports have exploded: more than 224 million accounts and tweets were reported by users, compared with just 11.6 million in the second half of 2021, a staggering increase of roughly 1830%.
On the other hand, enforcement has not kept pace. The number of account suspensions has risen by only around 300% over the same period, from 1.3 million to 5.3 million. The gap is even starker in critical areas such as child safety: of more than 8.9 million posts reported for endangering minors, only 14,571 were removed. On hateful content the contrast is just as sharp: the platform suspended only 2,361 accounts for this reason, against 104,565 in the second half of 2021.
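To make the scale of this divergence concrete, the growth figures can be recomputed directly from the raw totals quoted above. The short Python snippet below is just that arithmetic, using the rounded numbers cited from the reports.

```python
# Recomputing the growth figures cited above from the rounded totals
# in X's transparency reports (H2 2021 vs H1 2024).

reports_h2_2021 = 11.6e6       # accounts and tweets reported, H2 2021
reports_h1_2024 = 224e6        # accounts and tweets reported, H1 2024
suspensions_h2_2021 = 1.3e6    # account suspensions, H2 2021
suspensions_h1_2024 = 5.3e6    # account suspensions, H1 2024

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(f"Reports:     +{pct_increase(reports_h2_2021, reports_h1_2024):.0f}%")          # about +1831%
print(f"Suspensions: +{pct_increase(suspensions_h2_2021, suspensions_h1_2024):.0f}%")  # about +308%
```

Reports thus grew roughly six times faster than suspensions did over the same period, which is the gap the rest of this section examines.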
Although X attributes part of these gaps to changes in definitions and measurement methods, the underlying trend is undeniable: moderation activity has fallen sharply even as reports have soared. This fuels fears of a less safe digital environment in which the most dangerous content, particularly child exploitation and incitement to hatred, increasingly slips through the cracks.
[Image: Graph showing the growing gap between user reports and moderation actions on X]
AI at the helm: the new backbone of content regulation
To compensate for its reduced human staff, X is betting heavily on artificial intelligence. The platform says its moderation relies on a “combination of machine learning and human review,” with AI either acting directly or flagging content for further verification. This growing reliance on algorithms, however, raises fundamental questions about their ability to handle the complexity and nuance of human language.
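X has not published technical details of this pipeline, so the sketch below is purely illustrative: a minimal, hypothetical triage loop showing how a “machine learning plus human review” setup is typically wired, with high-confidence scores acted on automatically and borderline cases routed to a human queue. The classifier, thresholds, and queue are assumptions for illustration, not X’s actual system.

```python
# Illustrative sketch of a "machine learning + human review" triage pipeline.
# The classifier, thresholds, and queue below are hypothetical, not X's system.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def violation_score(post: Post) -> float:
    """Stand-in for an ML model estimating P(policy violation) in [0, 1]."""
    return 0.5  # a real deployment would call a trained classifier here

AUTO_ACTION_THRESHOLD = 0.95   # high confidence: act without a human
HUMAN_REVIEW_THRESHOLD = 0.60  # borderline: escalate to a human moderator

def triage(post: Post, review_queue: list) -> str:
    score = violation_score(post)
    if score >= AUTO_ACTION_THRESHOLD:
        return "removed_automatically"
    if score >= HUMAN_REVIEW_THRESHOLD:
        review_queue.append(post)  # a human makes the final call
        return "queued_for_human_review"
    return "no_action"

queue = []
print(triage(Post("1", "example post"), queue))  # -> "no_action" with the stand-in score
```

Whatever the thresholds, the quality of such a pipeline lives entirely inside the scoring model: tuning the cutoffs only shifts where its errors land, it does not remove them. That is the problem the next subsection turns to.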
Limitations of automated moderation
Despite real progress, automated systems remain error-prone. They struggle to correctly interpret sarcasm, coded language, and cultural context. A study by the University of Oxford and the Alan Turing Institute found that AI models for hate speech detection have significant shortcomings: some over-detect, wrongly flagging benign content, while others under-detect, letting clearly hateful speech through.
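In evaluation terms, “over-detection” and “under-detection” are false positives and false negatives, and they trade off against each other. The toy calculation below, on invented numbers, shows how a classifier can wrongly flag a large share of benign content while still letting a large share of genuinely hateful posts through.

```python
# Toy precision/recall calculation on invented numbers, to illustrate what
# "over-detection" (false positives) and "under-detection" (false negatives)
# mean for a hate speech classifier. These counts are made up for illustration.

true_positives = 700    # hateful posts correctly flagged
false_positives = 300   # benign posts wrongly flagged (over-detection)
false_negatives = 400   # hateful posts missed (under-detection)

precision = true_positives / (true_positives + false_positives)   # 0.70
recall = true_positives / (true_positives + false_negatives)      # about 0.64

print(f"precision = {precision:.2f} -> 30% of flags hit benign content")
print(f"recall    = {recall:.2f} -> ~36% of hateful posts slip through")
```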
Examples of such failures abound on other platforms:
- In 2020, Facebook’s automated systems mistakenly blocked ads from struggling small businesses.
- This year, Meta’s algorithms wrongly flagged posts from the Auschwitz Memorial as violating its community standards.
Another major problem is bias in training data. Most algorithms are built on datasets sourced primarily from the Global North, which makes them less effective at analyzing other dialects and cultural contexts, such as Maghrebi Arabic. This cultural blind spot can lead to unequal and unfair moderation.
Impact on marginalized communities
This over-dependence on AI risks disproportionately harming marginalized communities. Their language, which may include reappropriated terms or insider jargon, is often misunderstood and wrongly flagged as offensive. Meanwhile, the subtle and coded forms of hate targeting them frequently escape algorithmic filters. The result is a double bind: censorship of their legitimate expression and inadequate protection against the harassment they face. Entrusting complex moral judgments to machines risks not only infringing on freedom of expression but also amplifying the inequalities platforms claim to fight.
AI, a double-edged tool
Artificial intelligence is a powerful tool, but it is not a silver bullet. Without rigorous human oversight, diverse training data, and clear policies, automated moderation systems can worsen the problems they aim to solve, creating an environment that is both too restrictive for some and too lax for others.
Real consequences: when online speech fuels violence
The leniency of X’s moderation is not just a theoretical debate; it has tangible real-world repercussions. A recent case in the UK illustrates this dramatically. Amid riots partly fueled by misinformation on social media, a woman posted a tweet calling to “set fire to all the fucking hotels full of bastards.”
Her full message was unequivocal:
"Mass deportation now, set fire to all the fucking hotels full of bastards, I don't care, and while you're at it, take the government and traitor politicians with them. [...] If that makes me a racist, so be it."
The message was reported to X for violating its rules. The platform’s response? The tweet did not break any of them. The UK justice system saw things very differently: the author was prosecuted and pleaded guilty to inciting racial hatred. The case highlights an alarming gap between what the law treats as a serious criminal offense and what a leading global platform deems acceptable. Leaving such calls for violence online risks their translating into real acts, endangering real lives.
The responsibility of platforms at stake
When content is judged illegal by a court but allowed by a platform, the question of that platform’s responsibility arises directly. The immunity often granted to content hosts is increasingly challenged, especially when their laxity contributes to offline violence.
A challenge for the entire digital ecosystem
X’s moderation difficulties are not an isolated case. Other giants such as Meta (Facebook, Instagram) acknowledge that their algorithms often fail to correctly identify misinformation or hate speech, generating both false positives and undetected harmful content. The problem is systemic, worsened by an industry-wide push to cut costs by replacing human moderators with AI solutions that are cheaper but also less reliable.
This challenge is compounded by growing opacity. The August 2024 closure of CrowdTangle, a Meta tool that allowed researchers to monitor misinformation, and Elon Musk’s decision to charge for X’s API access in 2023 have significantly limited civil society and academic capacity to study these phenomena. Without data access, it becomes nearly impossible to assess the scale of the problem and hold platforms accountable. With major elections approaching worldwide, this lack of transparency is particularly worrying, as it hinders efforts to counter influence and manipulation campaigns.
Toward smarter and more responsible systems?
The content moderation crisis invites us to rethink how reliable, responsible systems are designed, digital or otherwise. Striking a balance between technology, human expertise, and ethical responsibility is a central challenge of our time. This same quest for optimization appears in very different fields, such as the energy transition.
In this sector, companies like Les Nouveaux Installateurs exemplify how an integrated approach can provide robust solutions. Their work doesn’t simply involve installing solar panels but designing an intelligent energy ecosystem for each home. They offer comprehensive support, from initial energy studies to remote monitoring of installations. Their solutions incorporate cutting-edge technologies, like smart control that optimizes self-consumption, charging stations for electric vehicles, and heat pumps, all managed via a dedicated app.
This parallel is instructive. Just as Les Nouveaux Installateurs combine technological performance (panels, inverters, control) with indispensable human expertise (personalized study, administrative procedures, installation by qualified RGE teams), the future of a healthy digital space likely lies in a hybrid model. A powerful AI to process massive data volumes, supervised and complemented by trained human moderators who understand context, cultural nuances, and ethical stakes. By combining the best of machine and human, we can hope to build fairer, safer systems.
Expert advice: think of the ecosystem as a whole
Whether managing energy consumption or one’s online presence, a systemic vision is essential. For energy transition, this means not just installing panels but optimizing everything with control solutions, storage (virtual battery), and efficient equipment like heat pumps. Les Nouveaux Installateurs offer a turnkey approach, ensuring maximum coherence and efficiency for your energy project.
The debate about content moderation on X reveals a broader tension between technological innovation and social responsibility. Platform choices directly affect the quality of our public debate and the safety of our societies. The current model, which leans toward heavy automation and human disengagement, is showing serious limitations. The future will likely require stricter regulation, greater transparency from tech companies, and a return to a vision where technology serves to protect users rather than expose them to harm.
FAQs about content moderation on X
What are the main changes in content moderation on X?
Since Elon Musk’s takeover, major changes include a significant reduction of human moderation teams, loosening of rules (notably on COVID misinformation and misgendering), increased reliance on AI, and a notable decrease in moderation actions (suspensions, removals) despite a surge in user reports.
Is AI a viable solution for content moderation?
Currently, AI alone is not a viable solution. Algorithms struggle to grasp human language’s nuances like sarcasm and cultural context, leading to both censorship of legitimate content and failure to detect subtle hate speech. Experts agree that a hybrid model combining AI with qualified human oversight is essential for effective and fair moderation.
What are the implications of these policies for freedom of expression?
X’s promoted vision of “absolute freedom of expression” paradoxically threatens many users’ speech. By allowing hate speech, harassment, and misinformation to proliferate, the platform becomes a hostile environment, especially for minorities and marginalized groups who may be silenced. True freedom of expression requires a safe environment where everyone can speak without fearing for their safety.