Rethinking Digital Media Regulation
The Internet and social media have triggered a radical shift in our digital media environment. Discourse production in society has moved onto a new medium and changed in its structure and dynamics. The most fundamental features of this new environment have been, first, a shift from an offline “broadcasting” to an online “participatory” communication model and, second, the rise of dominant, privately owned digital intermediaries, the so-called “Big Five”. These intermediaries, namely, Alphabet (formerly Google), Meta (formerly Facebook), Microsoft, Amazon, and Apple, not only effectively “own” and operate the Internet, but also function as increasingly decisive arbiters of what content ultimately reaches the public sphere.
Along with the decline of traditional press institutions and their democratically vital “watchdog” function, this redistribution of power among digital media stakeholders has created new regulatory challenges and triggered profound tensions with earlier legal principles and approaches. The Internet is not only increasingly prone to problematic online content—including hate speech, image-based harassment, racism, extremism, disinformation, and fake news—but has also given rise to mounting lower-salience structural threats to democracy, manifesting in unprecedented global surveillance, censorship, and control. If regulators fail to prioritize these structural threats, they risk treating only the symptoms of our increasingly dysfunctional public sphere, rather than grasping their aetiology in the broader tensions, patterns, and interrelationships at work.
Conventional regulatory approaches have been largely unsuccessful. Neither social media platforms’ self-imposed terms of use, nor national, international, or supranational laws have kept pace with rapidly evolving communications technology and the spread of detrimental content online. Moreover, regardless of which of the two leading regulatory approaches is considered—the European Union’s predominant “notice-and-action” model or the USA’s contrasting system of “market self-regulation”—conventional online regulations exhibit a near-singular focus on restricting “problematic” online content (e.g., hate speech and misinformation), leaving the accelerating and more disquieting phenomena of mass surveillance and privatized government censorship unaddressed. This growing regulatory trend in online content moderation began in 2017 with Germany’s Netzwerkdurchsetzungsgesetz (NetzDG) and continues today with Europe’s Digital Services Act (DSA), Britain’s Online Safety Act, and Canada’s Online Harms Act, that country’s newest proposed hate speech law. Whether introduced as a “risk-assessment” or “systems-based” approach, this trend increasingly risks providing government agencies with the legislative mandates, informational transparency, and compliance authority essential for regulatory capture, a situation that represents one of the Internet’s “Big Picture” governing dilemmas.
Overall, these developments pose substantial threats to fundamental freedoms, particularly freedom of expression and press liberty, forcing social media platforms into the role of powerful gatekeepers at the threshold of human rights. Rethinking digital media regulation remains one of the greatest unmet challenges of the 21st century.
Research outcome: peer-reviewed publications (2023–2026)
Project language: English
Photo: © Lightspring/Shutterstock.com
Source: Journal MaxPlanckForschung 2022(3).