Criminalising the algorithmic amplification of “harmful” content would allow governments to “dictate what you see”, bury “opposing views”, and replace the “free exploration of ideas” with curated propaganda, Telegram’s Chief Executive Officer (CEO) Pavel Durov has warned Spanish users.

In a message posted on X for Telegram users in Spain, Durov said measures announced by the government of Spain’s Prime Minister Pedro Sánchez threaten “internet freedoms” and could “turn Spain into a surveillance state under the guise of ‘protection’”.

What exactly does Durov’s post on X say?

Durov’s X post – essentially an alert that Telegram sent to users in Spain – presents the Spanish government’s measures as a series of risks to free speech and privacy. First, it highlights plans to ban under-16s from social media using mandatory age verification that requires “strict checks, like needing IDs or biometrics”. Telegram warns this could “set a precedent for tracking EVERY user’s identity”, eroding anonymity and enabling mass data collection beyond minors.

Second, it raises concerns over proposed “personal and criminal liability for platform executives” if “illegal, hateful, or harmful” content is not removed fast enough. According to the alert, this would “force over-censorship”, as platforms would delete “anything remotely controversial”, affecting journalism, political dissent, and everyday opinions.

Finally, it points to proposals requiring platforms to track and report their “hate and polarisation footprint”, warning that vague definitions of “hate” could allow authorities to label criticism of the government as divisive and then use that classification to justify fines or shutdowns, effectively suppressing the opposition.

What did the Spanish PM announce?

In a speech at the World Governments Summit in Dubai, Spain’s head of government, Pedro Sánchez, said the country would move quickly to tighten control over social media platforms, arguing that the digital space has become a “failed state” where “laws are ignored”, “disinformation is worth more than truth”, and “algorithms distort the public conversation”. He outlined four core actions his government will implement:

  • First, Spain will change the law to hold platform executives legally accountable, meaning CEOs could face criminal liability for failing to remove “illegal or hateful content”.
  • Second, the government will make “algorithmic manipulation and amplification of illegal content” a criminal offence, insisting there will be “no more hiding behind code” and “no more pretending that technology is neutral”.
  • Third, Spain will introduce a “hate and polarisation footprint” system to “track, quantify, and expose how digital platforms fuel division”, creating a basis for future penalties.
  • Finally, the government will ban access to social media for under-16s, requiring “real barriers that work” rather than “just checkboxes”.

Furthermore, Sánchez said Spain cannot act alone and has joined forces with five other European countries in a “coalition of the digitally willing” to pursue coordinated, multinational enforcement. 

Can algorithmic amplification be criminalised?

Medianama Founder Nikhil Pahwa argues in an analysis of teen social media bans that, “The problem is platform design, not social media.” Platforms, he writes, are “optimised for high engagement, and getting us addicted,” keeping users hooked by being “designed to deliver dopamine hits”. That design choice matters because “platform algorithms shape behaviour”. 

If platforms can tune algorithms to privilege one form of content over another, they can also redesign them to reduce harm, yet current laws and platform economics push them in the opposite direction.
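Pahwa’s point, that ranking is a design choice rather than a law of nature, can be illustrated with a toy sketch. Everything here is hypothetical: real recommendation systems are vastly more complex, and the `engagement` and `harm` scores stand in for predictions that, in practice, would come from machine-learned models.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    engagement: float  # predicted engagement (e.g. click/share probability)
    harm: float        # predicted harm score from a hypothetical classifier, 0..1

def rank(posts, harm_weight=0.0):
    """Order posts by engagement minus a tunable harm penalty.

    With harm_weight=0 this is pure engagement optimisation; raising the
    weight demotes content the (hypothetical) classifier flags as harmful.
    """
    return sorted(posts, key=lambda p: p.engagement - harm_weight * p.harm,
                  reverse=True)

feed = [
    Post("calm-analysis", engagement=0.60, harm=0.05),
    Post("outrage-bait", engagement=0.90, harm=0.80),
]

# Engagement-only ranking surfaces the inflammatory post first...
print([p.id for p in rank(feed)])                   # ['outrage-bait', 'calm-analysis']
# ...while a nonzero harm penalty reverses the order.
print([p.id for p in rank(feed, harm_weight=0.5)])  # ['calm-analysis', 'outrage-bait']
```

The single `harm_weight` parameter is the whole argument in miniature: the same system produces different public conversations depending on one deliberate design decision, which is precisely what makes “the algorithm” contestable rather than neutral.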

That framing exposes why criminalising algorithms would represent a fundamental break from how internet law currently works. Across jurisdictions, platforms benefit from an intermediary safe harbour: legal protections that limit liability for third-party content. In India, Section 79 of the Information Technology Act (IT Act) shields intermediaries so long as they follow due diligence rules and act on lawful takedown orders. 

The United States takes this logic further. Section 230 of the country’s Communications Decency Act broadly prevents courts from treating platforms as publishers of user content, even when algorithms decide what users see. Additionally, the EU’s Digital Services Act (DSA) preserves safe harbour while forcing large platforms to assess and mitigate how recommendation systems contribute to illegal content. 

This is the legal wall Spain now presses against. Criminal law demands intent, attribution, and causation. Proving that an algorithm knowingly amplified illegal content, and that executives intended that outcome, is far harder than proving failure to remove a specific post.

Why this matters

Spain’s proposals matter because they cut against the direction most democracies have taken so far. Rather than tightening compliance duties or demanding greater transparency around recommendation systems, Madrid is signalling a willingness to use criminal law to intervene directly into how platforms design and deploy algorithms. That is a qualitative shift. If enacted, it would move responsibility upstream and could reshape how platforms assess legal risk across Europe.

At the same time, Durov’s warning is not frivolous. Vague or expansive definitions of “harmful”, “hateful”, or “illegal” content can incentivise over-removal, particularly when executives face personal criminal liability. Faced with uncertainty, platforms may err on the side of suppression, narrowing political debate, journalistic speech, and lawful dissent.

Yet the opposite risk also deserves attention. Leaving algorithmic amplification untouched preserves a status quo in which systems optimised for engagement continue to shape public discourse with little democratic oversight.

Ultimately, this debate is not about choosing between free expression and regulation. It is about whether law can meaningfully constrain systems that influence speech without handing governments the power to decide what speech should circulate. 
