Why Regulating Speech Is a Bad Idea
Europe’s Turn to Censorship-by-Proxy and the Legal Case for Limiting Government Control Over Speech in the EU
Preface
This article assesses the present European moment, in which policymakers and regulators across the EU are pressing to tighten control over freedom of expression. I write from a limited-government - indeed, libertarian - point of view: the state should be strong enough to secure rights, but too limited to curate truth. At the core of that view is the social contract between the state and its citizens. Each of us yields a portion of our natural sovereignty so that the state can perform essential tasks - protection through police and armed forces, courts to resolve disputes, infrastructure, and a framework of general laws that allow free people to coordinate their lives. In return, the state owes us more than services: it owes restraint. It must wield delegated power as a fiduciary, never as a censor.
From both a legal and philosophical standpoint, recent attempts to regulate “acceptable” speech in the EU strain this bargain. Legally, the social contract is instantiated in constitutions, charters, and conventions that secure expression as a default rule and allow limits only as narrow, exceptional derogations. Philosophically, the contract presupposes that sovereignty begins with the individual conscience and reason; it is because citizens can deliberate that they can consent to be governed at all. To invert that logic - to presume citizens unfit to encounter disfavored ideas unless pre-filtered by the state or its deputized intermediaries - is to repossess sovereignty that was only ever conditionally ceded.
My view is simple: it must be up to each individual to confront the full spectrum of viewpoints, weigh evidence, and form judgments. A government that narrows the field of permissible ideas inevitably drifts toward a single, state-favored narrative. Over time, citizens deprived of contesting arguments lose not only access to information but also the civic muscles of discernment. That is precisely the danger President Dwight D. Eisenhower warned against in his Farewell Address when he insisted that only “an alert and knowledgeable citizenry” can compel the proper meshing of power with liberty. The state should trust its citizens enough to let them think; citizens, in turn, can only discharge their democratic responsibilities if their informational horizon is not managed on their behalf.
This examination proceeds on the premise that regulating speech beyond the narrowest and most traditional limits violates the spirit and structure of the social contract. It substitutes administrative convenience for constitutional principle, and paternalism for autonomy. If the EU is to remain a community of free and equal citizens, the answer to bad or dangerous ideas is more argument, not less speech; stronger civic education, not broader legal silencing; a thicker marketplace of ideas, not a thinner one.
The Architecture of Speech: Two Civilizations, Two Theories of Liberty
Any honest discussion about regulating speech has to begin with a clear view of how modern democracies define it. In Europe, the touchstone is Article 10 of the European Convention on Human Rights and, within the European Union’s legal order, Article 11 of the EU Charter of Fundamental Rights. Both protect the freedom to hold opinions and to “receive and impart information and ideas,” and both immediately qualify that protection with a catalogue of permissible restrictions - public safety, the prevention of disorder or crime, the protection of the reputation or rights of others, national security, and so on. From its earliest judgments, the European Court of Human Rights (ECtHR) has described this right as covering not only inoffensive opinions but also ideas that “offend, shock or disturb.” Yet in the same breath, the Court embraced a proportionality framework that invites governments to prove that a restriction was “necessary in a democratic society.” This two-step - broad promise, calibrated limits - has defined European speech law ever since.
Once you adopt proportionality as the engine of analysis, the destination depends on what you weigh in the balance. Europe’s courts have not been shy about letting other rights or social interests outweigh speech. In cases like Jersild v. Denmark, the ECtHR protected a journalist who reported on racist remarks without endorsing them, recognizing the value of contributing to public debate. But in other lines of cases, especially those involving hate speech, the Court narrows the field sharply. The Convention’s Article 17 - often called the “abuse of rights” clause - allows judges to exclude from protection expression that aims to destroy the rights of others. That device has been used to uphold penalties for Holocaust denial and similar forms of extreme speech. The result is that in Europe, one can be punished for speech that denigrates protected groups even without any call to violence, as in Vejdeland v. Sweden, or for remarks about religion that courts view as needlessly inflammatory and threatening to “religious peace,” as in E.S. v. Austria. Europe is not monolithic: the Grand Chamber’s decision in Perinçek v. Switzerland shows that context and intent still matter, and not every historical denial will be treated as incitement. But the gravitational pull is clear. The balancing test gives states a “margin of appreciation,” and many states use it to criminalize categories of expression that Americans would consider constitutionally protected.
That approach has repercussions online. In Delfi AS v. Estonia, the ECtHR approved liability for a news portal that failed to remove manifestly unlawful hate comments quickly; in Sanchez v. France, it allowed criminal fines for a politician who left hate speech by third parties on his Facebook wall. EU legislation adds structure and force to this posture: the Framework Decision 2008/913/JHA requires member states to criminalize certain forms of racist and xenophobic speech; audiovisual media rules prohibit incitement to hatred in broadcasting; and the Digital Services Act imposes due-diligence obligations on platforms to act against illegal content, backed by risk assessments, audits, and fines. The Court of Justice of the European Union has also accepted wide takedown orders, including for “equivalent” content and, in some contexts, beyond EU borders, as in Glawischnig-Piesczek v. Facebook. In short, the European model protects expression in the abstract while building a dense thicket of exceptions and compliance duties in practice.
The American constitutional tradition proceeds from almost the opposite direction. The First Amendment does not contain an explicit limitations clause, and the Supreme Court has been reluctant to create new, free-floating categories of unprotected speech. The modern rule took shape in Brandenburg v. Ohio (1969) - the Ku Klux Klan rally case that still anchors U.S. incitement doctrine. Brandenburg holds that advocacy of violence or lawbreaking is protected unless the speech is directed to inciting or producing imminent lawless action and is likely to produce such action. The emphasis on imminence and likelihood forces the state to meet a demanding standard; it is not enough that speech is dangerous in the long run or corrosive to public morals. Later cases reaffirmed that commitment against efforts to carve out a general “hate speech” exception. In R.A.V. v. St. Paul, the Court struck down a hate-symbol ordinance as impermissible viewpoint discrimination, even though it targeted “fighting words.” And in Virginia v. Black, the Court upheld bans on true threats (like cross-burning done with the intent to intimidate) while rejecting a statute that presumed such intent from the act alone. The core idea is constant: the government may punish threats, targeted harassment, or imminent incitement - but may not silence offensive, even odious, viewpoints merely because they are offensive or odious.
These divergent architectures matter for anyone tempted by the promise of “smart” regulation. The European framework’s proportionality test appears sober and moderate, yet it shifts the inquiry away from bright-line protections toward managerial trade-offs by courts and regulators. Once the state is authorized to balance speech against an expanding set of social goods - dignity, equality, religious peace, public order - the list of justifications for restriction tends to grow, and so does the scope of compliance demanded from private intermediaries. This produces a chilling effect that is hard to measure but easy to feel: editors over-moderate; platforms suppress borderline content; citizens learn to self-censor. The U.S. model, with its insistence on narrow and historically grounded exceptions, accepts a higher level of social friction in exchange for clearer limits on state power. It keeps the government out of the business of deciding which ideas are too dangerous to be heard until those ideas cross a concrete line - incitement of imminent lawless action or a true threat.
None of this denies that speech can wound, that propaganda can mobilize, or that the internet has amplified harms. The question is whether legal regulation - especially broad, content-based regulation - actually solves those problems without creating worse ones. Europe’s experience suggests that once you legitimate balancing as the method, the balance rarely comes out in favor of unpopular speakers, and the infrastructure you build for the worst cases will be used in the close ones. Brandenburg points to a different lesson: when the state cannot suppress odious advocacy unless it is about to spark violence here and now, society must answer bad speech with better speech - counter-argument, ridicule, refusal to join. For a liberal culture that distrusts concentrations of power, that trade remains the wiser one. If we are serious about safeguarding a marketplace of ideas robust enough to withstand the shocks of the modern world, we should resist the lure of regulation and keep the bright lines bright.
A case study in how narratives harden: Amsterdam, November 2024
Consider the Europa League night of 7–8 November 2024 in Amsterdam, after Ajax hosted Maccabi Tel Aviv. Within hours, global headlines framed the unrest as a near-pogrom against Israelis; wire copy, chyrons, and push alerts stressed “antisemitic squads” and “targeted attacks,” and many outlets fronted a single, highly circulated video clip of hooded men sprinting down a central street and striking passersby. Reuters’ initial framing line for that very clip - “what Israeli authorities said was an attack targeting Israeli citizens” - was then syndicated far and wide, shaping early interpretations for newsrooms that rely on agency feeds.
But when that video is placed back in context - using the original filmer’s account and triangulation with other footage - the scene is not of Israeli fans being attacked. It shows a group that includes Maccabi supporters (several in the club’s blue-and-yellow) charging and beating a local Dutch man; The Washington Post’s subsequent visual reconstruction, drawing on multiple angles and geolocation, confirmed that this widely shared “attack on Israelis” clip actually depicted Maccabi supporters as aggressors at that spot and time. The filmer, Amsterdam photographer Annet de Graaf, later explained that she told major outlets and agencies what her material showed, yet her video continued to be packaged in ways that implied or asserted the opposite; she described this as the media “changing the whole narrative.”
The skew was not confined to a single outlet. Sky News initially aired a segment that accurately described Maccabi fans attacking locals and noted extreme and hateful anti-Arab chants - before re-editing the package within hours and publishing a note that the first version did not meet its standards of “balance and impartiality,” softening concrete attributions in the script to generic references to “hooded men.” Independent media critics documented the edit; Sky’s later coverage aligned with the now-standard framing of Israelis primarily as targets. Meanwhile, agency wires and early straight news pieces around the world emphasized attacks on Israelis and official condemnations, often omitting contemporaneous evidence of Israeli hooligan violence elsewhere in the city center that same night - evidence later compiled not only by de Graaf but also by a teenage Dutch videographer (“Bender”) whose longer footage shows Maccabi ultras arming themselves with poles and planks and pursuing locals.
None of this erases the fact - also established by court cases and official chronologies - that Israeli supporters were attacked in separate incidents that night, some with explicitly antisemitic rhetoric, and that dozens of suspects were later arrested and convicted. The point is narrower and more troubling: one flagship clip was repeatedly used to tell a story directly contradicted by its own content, even after the originator warned distributors; and as that narrative fossilized, corrections or nuance struggled to catch up. Mainstream retrospectives eventually presented a more complex picture - of Israeli hooliganism, anti-Arab chants, and, later, targeted assaults on Israelis - but by then the initial “pogrom” storyline had imprinted itself on public consciousness.
This is exactly the informational hazard that animates the rest of this article. You do not need a formal censorship law to narrow a society’s epistemic horizon; a mix of institutional incentives, reputational fear, and editorial “risk management” can produce a de facto one-sided narrative that crowds out competing facts. Once the state leans into that tendency - through pressure, guidance, or regulation - the feedback loop hardens. Fewer raw angles circulate; more “pre-interpreted” material does. Citizens receive conclusions rather than evidence. The social contract I described in the preface depends on the opposite premise: the state must trust individuals with the full range of viewpoints and primary material, and citizens must be equipped to judge for themselves. President Eisenhower’s warning that liberty requires “an alert and knowledgeable citizenry” was not a plea for curated certainty; it was a demand for access and vigilance. The Amsterdam episode is a cautionary vignette of what happens when access gives way to narrative - how truth is first tilted, then laundered, and finally enforced - and why, if we invite state-backed gatekeeping into the arena of speech, we risk institutionalizing that drift.
Cyprus: when “fake news” becomes a criminal code
Cyprus offers a live demonstration of why broad, content-based speech offences built around “misinformation” and “disinformation” are both unlawful and dangerous. In July–October 2024, the government advanced draft amendments to the Criminal Code to criminalize the “dissemination of fake news,” alongside offences for “offensive” online speech and the online sharing of “indecent” material - converting what are typically civil or platform-policy matters into crimes carrying prison terms of up to five years. The proposal drew immediate objections from domestic and international media-freedom groups, including the Union of Cyprus Journalists and the IFJ/EFJ, who warned of a serious chilling effect on reporting and debate. In response, the Justice Ministry paused the bill and convened consultations with media stakeholders in October 2024; nevertheless, the core idea - criminalizing ill-defined “fake news” - has remained on the table in Nicosia’s legislative conversation ever since.
From a European human-rights law perspective, the problem starts with legality and foreseeability. Under Article 10 ECHR, any restriction on expression must be “prescribed by law,” pursue a legitimate aim, and be “necessary in a democratic society.” The European Court of Human Rights (ECtHR) has long required that speech restrictions meet a “quality of law” test: they must be accessible, clear and foreseeable in their application (the classic statement is The Sunday Times v. UK (No. 1)). Labels like “fake news,” “misinformation,” or “disinformation” fail that test unless defined with exacting precision; otherwise citizens cannot reasonably predict what is punishable and officials enjoy open-ended discretion. The Council of Europe’s own guidance summarizes the Court’s approach: lack of clarity on what counts as “false” risks failing the legality test before proportionality is even reached.
Even if the Cypriot text were tightened, criminalization is presumptively disproportionate. The Grand Chamber in Cumpănă and Mazăre v. Romania held that custodial criminal penalties for speech exert a chilling effect rarely compatible with Article 10; Strasbourg jurisprudence repeatedly urges states to prefer civil remedies over criminal law for reputational and accuracy disputes. The Court has also signaled that Article 10 protects even statements whose truth is contested and that bans on “false news” are in tension with democratic debate - see Salov v. Ukraine in the electoral context. These are not eccentric outliers; they are the backbone of modern ECtHR doctrine on speech. Cyprus’s move to put journalists, commentators, or ordinary users under the shadow of imprisonment for alleged falsity thus conflicts with Strasbourg’s proportionality analysis.
The OSCE Representative on Freedom of the Media reached the same conclusion in an urgent opinion on Cyprus’s draft, warning that criminal defamation and “false news” offences are incompatible with international standards and will chill legitimate speech. Media-freedom coalitions (MFRR, ARTICLE 19, IPI, EFJ) likewise urged withdrawal, documenting how such laws inevitably invite selective enforcement. The Council of Europe’s Platform to Promote the Protection of Journalism and Safety of Journalists logged the bill as a formal alert; the government’s written reply invoked constitutional aims but did not cure the vagueness and disproportionality concerns. Taken together, this record would weigh heavily against Cyprus if a Strasbourg challenge were later brought.
Nor can Nicosia justify criminalization by pointing to EU secondary law. The Digital Services Act (DSA) already provides a comprehensive, non-criminal framework for tackling illegal content online through due-diligence, notice-and-action, transparency, and risk-mitigation duties for platforms; it does not create a new EU-level crime of “disinformation,” and it cautions against over-removal by requiring statements of reasons and user redress. Indeed, in 2025 the Commission initiated proceedings against Cyprus (among others) for insufficient implementation of the DSA’s institutional safeguards - hardly a mandate to leap to penal law. Meanwhile, the European Media Freedom Act (EMFA) - now in force - tilts the other way, entrenching protections for editorial independence and journalistic sources and warning against state surveillance of journalists. A separate Cypriot draft from 2025, reportedly authorizing surveillance of journalists under the guise of EMFA transposition, has already drawn criticism for contradicting the Regulation’s spirit. It would be perverse to criminalize “fake news” while simultaneously weakening reporter protections intended to ensure the very pluralism that corrects falsehoods.
What about the domestic media climate? Cypriot outlets and professional bodies have been forthright. The Cyprus Mail ran editorials warning that the bill threatens to jail journalists for errors or contested assessments; the Union of Cyprus Journalists publicly opposed the draft; and European journalist federations amplified those concerns. The government then paused the bill for consultations in October 2024 - a welcome step, but not a substantive fix. As long as the legislative aim remains “criminalizing fake news,” the constitutional defect remains: a vague offence that invites politicized enforcement against dissent and investigative reporting.
In legal terms, then, a Cypriot “fake news” crime would likely fail Article 10 ECHR on at least three grounds: (1) lack of foreseeability/quality of law (what, precisely, is “fake”? who decides, by what criteria, and with what defenses?); (2) disproportionality, given the availability of less-restrictive means (counter-speech, corrections, media self-regulation, civil defamation with robust defenses, platform procedures under the DSA); and (3) chilling effect aggravated by custodial penalties and police powers. It also sits uneasily with the EU’s systemic commitments to media pluralism under the EMFA. Philosophically - and in the spirit of the social contract outlined above - the state’s legitimate interest in combating harm does not entitle it to occupy the field of truth-telling. A criminalized “falsity” rule collapses the space where citizens test and revise beliefs; it converts government from referee into arbiter of truth. That is precisely the drift this article warns against: once the law blesses censorship in elastic terms, the path from “guarding the public” to managing the narrative is short - and well-paved.
Why limited control over speech is the lawful - and safer - path
The deeper danger of state control over speech is not merely that some opinions are suppressed; it is that law itself is bent into an instrument of viewpoint management. European human-rights law sets a higher bar than that. Article 10 of the European Convention on Human Rights and Article 11 of the EU Charter require that any speech restriction be precisely defined (“prescribed by law”), pursue a legitimate aim, and be strictly necessary - tests that were crafted to keep governments from converting generalized fears about “harmful” narratives into open-ended censorship mandates. The Court’s classic decisions in Handyside v. United Kingdom and The Sunday Times v. United Kingdom (No. 1) crystallize two pillars that are indispensable to a free society: tolerance for expression that “offend[s], shock[s] or disturb[s]” and the “quality of law” requirement of accessibility and foreseeability. Those pillars do not license paternalism; they constrain it.
Where governments revert to elastic offences - “misinformation,” “fake news,” “offensive content” - they collide with these pillars twice over. First, vague labels fail the legality/foreseeability test, because citizens cannot know ex ante what is punishable and officials enjoy boundless discretion. Second, criminal sanctions for contested speech are presumptively disproportionate because they chill legitimate debate - precisely the concern the Grand Chamber emphasized in Cumpănă and Mazăre v. Romania. Strasbourg has also warned against criminalizing alleged falsity as such, recognizing in cases like Salov v. Ukraine (election context) that democratic discourse includes speech whose truth is disputed and must be tested in the marketplace of ideas, not foreclosed by the penal code.
The digital layer magnifies these risks. The ECtHR’s Delfi AS v. Estonia and Sanchez v. France show that states may, in narrow circumstances, impose duties regarding clearly unlawful third-party content - but those judgments do not authorize states to declare entire categories of controversial opinion “illegal” and then conscript intermediaries to erase them. They presuppose a baseline in which what counts as “illegal” is itself tightly defined by law and proportionate to a concrete harm. Likewise, the Court of Justice in Glawischnig-Piesczek accepted targeted removal orders for identical and, sometimes, “equivalent” content, but that remedy still depends on a prior, exacting illegality finding - not on broad semantic nets cast over public debate. When governments use these doctrines as pretexts for narrative control, they invert their logic.
EU legislation points the same way. The Digital Services Act (Regulation (EU) 2022/2065) builds due-diligence and transparency obligations for platforms; it does not create an EU crime of “disinformation,” and it embeds safeguards against over-removal, including statements of reasons and user redress. The Commission’s readiness to enforce institutional guardrails under the DSA - up to and including infringement actions when Member States fail to implement independent Digital Services Coordinators - confirms that the EU’s solution to online harms is procedural accountability, not speech crimes. In parallel, the European Media Freedom Act (Regulation (EU) 2024/1083) entrenches editorial independence, safeguards journalistic sources, and affirms recipients’ rights to a plurality of content. Criminalizing elastic categories of speech, or pressuring editorial lines through regulatory leverage, would be flatly at odds with both instruments’ purpose and text.
The legal conclusion is therefore straightforward. A regime of “expanded control” over expression - anchored in vague offences, backed by criminal penalties, and operationalized through platform deputization - cannot be reconciled with the Convention’s requirements of legality, necessity, and proportionality or with the Charter’s guarantee of media freedom and pluralism. The jurisprudence is not an indulgence for chaos; it is a constitutional design to keep the state out of the business of curating truth. The safer, lawful course is a return to limited control: narrowly tailored rules aimed at clearly defined unlawful speech (incitement, true threats, direct criminal facilitation), civil - not criminal - remedies for reputation, and strong procedural checks that keep intermediaries transparent and contestable. That model equips the well-informed citizen by maximizing access to facts and viewpoints and trusting people to deliberate - exactly the democratic premise celebrated since Handyside and codified in Article 11 of the Charter. Anything else courts narrative monoculture under color of law - and invites the very abuses the Convention was written to prevent.
