
Ness: Platform Regulation Should Focus on Transparency, Not Content

Susan Ness, co-chair of the Transatlantic Working Group on Content Moderation Online and Freedom of Expression, a project sponsored by the Annenberg Public Policy Center, where she is a distinguished fellow, argued in an op-ed published in Slate that European and American governments should work in tandem to create and enact transparent social media platform regulation.


Despite efforts by digital platforms to curb the tsunami of disinformation surrounding U.S. elections, cyberspace remains awash in conspiracy theories and democracy-damaging disinformation. Meanwhile, terrorist attacks in France and Austria have spurred European efforts to clamp down on hatred and incitement to violence online. On Dec. 15, the European Commission is slated to release a draft set of comprehensive platform regulations. These European rules could become the standard for the global net—leaving the U.S. behind.

We have seen this before. American policymakers sat on the sidelines while the EU enacted its General Data Protection Regulation, which has become the de facto global standard. If America wants to help shape the rules of the road governing online discourse, it must step up and engage now.

What if, instead of pursuing conflicting paths, Europeans and Americans collaborated on a good governance framework for online platforms, adaptable for different legal systems and societal norms?

Right now, President-elect Joe Biden is setting his administration’s policy agenda and selecting personnel. Platform governance undoubtedly is on the table. By expressly endorsing trans-Atlantic collaboration on a digital framework, Biden would underscore his commitment to the trans-Atlantic alliance and ensure that American voices are heard. Similarly, Europeans could signal that they would welcome American engagement in developing the rules of the road for digital networks.

Government regulation of online harms is a daunting challenge. Not all toxic content is illegal, and lawmakers must tread carefully to avoid infringing on free expression and due process. And while the platforms enjoy their own free speech rights to set and enforce standards for their online communities, they also must respect widely recognized free expression exclusions for illegal content such as child pornography and incitement to violence.

There are two regulatory methods to curtail online harms that might constrain freedom of expression. The first is eliminating the platform’s safe harbor from liability for user content, which is exactly what multiple proposals currently before Congress would do. Section 230 of the Communications Decency Act protects platforms from lawsuits over third-party posts, and it has become a target of both the left and the right. The bills take polar opposite stands on the problem and the solution, whipsawing platforms between demands from the left that they remove blatantly false or manipulated speech and allegations from the right that conservative voices are deliberately censored. (On Tuesday evening, President Trump tweeted that he would veto the National Defense Authorization Act if Congress doesn’t repeal Section 230.) If online companies become liable for content that users post, platforms could well choose to eliminate popular services featuring user-generated content.

The second method requires platforms to remove specific kinds of content immediately or face stiff penalties, as with Germany’s NetzDG, which incentivizes platforms to delete questionable yet legal content. This approach deputizes companies to adjudicate the legality of content without affording users judicial redress. (Indeed, the French Constitutional Council struck down a similar law for violating free expression.) Ominously, such laws also provide cover to authoritarian regimes to expand the categories of speech subject to censorship. As a Chinese academic once proudly intoned, “there is no hate speech on our internet.”

But it is possible to tackle hate speech and disinformation without trampling on free expression, if the U.S. and Europe work together: by mandating transparency, with accountability, instead of regulating content. Require social media companies to provide greater transparency about their content moderation rules and procedures, including how their algorithms influence what users see, and enforce these disclosures through robust oversight.

Excerpt published by permission of Slate.

Read the rest of Ness’s December 2, 2020, essay in Slate: “Platform regulation should focus on transparency, not content.”