Section 230 as First Amendment Rule

Section 230 of the Communications Decency Act of 1996 1 has been lauded as “the most important law protecting internet speech” and called “perhaps the most influential law to protect the kind of innovation that has allowed the Internet to thrive.” 2 The law’s tremendous importance stems from the shield it provides to websites against suits based on torts committed by users. For instance, Wikipedia cannot be held liable for defamation posted by a user. This intermediary liability protection encourages websites to engage in content moderation without fear that their efforts to screen content will expose them to liability for defamatory material that slips through. Without this protection, websites would have an incentive to censor constitutionally protected speech in order to avoid potential lawsuits. 3

But § 230 is under attack on multiple fronts. 4 From the popular media 5 to Capitol Hill, 6 some view the law with disdain. Various scholars have also heavily criticized § 230, arguing that amending the law would help reduce defamation online. 7 And, in the courts, 2016 was perhaps a nadir for § 230, as judges repeatedly adopted narrow readings of the law. 8

Against this current, this Note provides the first thorough argument that the First Amendment requires § 230’s bar on holding websites liable for the defamation of their users. The First Amendment does not, of course, “require” a federal statute; the claim is that the First Amendment rule should be the same as § 230’s rule. Under the Supreme Court’s First Amendment case law on defamation, the private censorship produced by defamation liability for internet intermediaries cannot be justified by a government interest in defamation law. Recognizing § 230’s more stable constitutional provenance explains why courts traditionally adopted a broad reading of the law, demonstrates the law’s substantive importance, and helps predict what might occur should detractors succeed in persuading Congress to amend the law.

Part I describes secondary liability for defamation and § 230. Part II explains the prevailing assumption among judges and scholars that the First Amendment does not require § 230. Part III then challenges this assumption, arguing that the Constitution protects internet intermediaries from liability for defamation committed by their users. The censorship that would result from internet intermediary liability for defamation cannot be saved by the government’s interest in imposing liability. 9 Part IV discusses this Note’s implications and concludes.

I. Defamation, Intermediary Liability, and § 230

Defamation is a common law tort that protects individuals against the publication of harmful false statements about them. 10 “Publication” includes intentional and unreasonable failure to remove defamatory material under one’s control. 11 Distributors, such as booksellers, may be held liable for defamation they transmit if they knew or had reason to know of its defamatory nature, but are not under a general duty to screen the items they retail. 12

In the 1990s, courts began to apply these doctrines to internet services. In Cubby, Inc. v. CompuServe Inc., 13 a district court held that an internet service provider was not liable for allegedly defamatory content in one of its online forums because it had “no more editorial control” than would “a public library, book store, or newsstand,” 14 and therefore was a mere distributor that did not know or have reason to know of the content. 15 Later, in Stratton Oakmont, Inc. v. Prodigy Services Co., 16 a state court held that because an owner of online bulletin boards had exercised “editorial control” over offensive content, it could be held liable as a publisher of defamatory posts. 17 This pair of cases posed a troubling choice for websites. If they took a hands-off approach to moderation, they received significant protection from liability. However, if they sought to proactively regulate content on their websites, they might face liability. 18 This dilemma “created a minor sensation.” 19

These concerns were heard on Capitol Hill when Congress enacted section 509 of the Communications Decency Act (codified at 47 U.S.C. § 230), which overruled Stratton Oakmont. 20 Section 230 provides that no website that relies on user-generated content “shall be treated as the publisher or speaker of any information provided by another information content provider.” 21 Therefore, a website cannot be held liable for defamation posted by a user even if the website knows or has reason to know of the defamatory content. 22 Of course, if an intermediary website itself created defamatory content, it could be held liable 23 — for example, if Facebook itself wrote a blog post on its website defaming the creators of Google Plus. In other words, websites are not immune from defamation claims. They are merely protected from being held secondarily liable for the defamatory statements of others.

In interpreting § 230, courts have largely followed through on congressional hopes of providing intermediary liability protection to websites for defamation claims. 24 For example, in the “seminal” 25 case Zeran v. America Online, Inc., 26 then–Chief Judge Wilkinson held that § 230 protected America Online from a defamation claim based on messages posted on its bulletin boards. 27 Judge Wilkinson explained § 230 succinctly: it “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” 28 The law bars suits that would hold websites liable for decisions about whether and how to moderate user-generated content. 29 As to congressional purpose, Judge Wilkinson identified first that the “specter of tort liability in an area of such prolific speech would have an obvious chilling effect” and second that § 230 encourages websites to moderate content without fear of liability. 30

II. The Assumption that the First Amendment Does Not Require § 230

Judges and academics are nearly in consensus in assuming that the First Amendment does not require § 230. 31 Since the enactment of § 230, courts have had little reason to reach this constitutional question. In Cubby, decided before § 230’s enactment, the court cited a First Amendment case to support its holding but did not discuss the notion that the First Amendment might provide even more protection to websites. 32 In Stratton Oakmont, the court acknowledged that the website’s moderation system “may have a chilling effect on freedom of communication in Cyberspace,” even though the court in effect required this type of website to employ similar moderation to avoid liability. 33 There too, the court did not treat the First Amendment as a constraint on liability. As one district court put it, “Section 230 reflects a ‘policy choice,’ not a First Amendment imperative, to immunize ISPs from defamation . . . driven, in part, by free speech concerns.” 34 More recently, in Gonzalez v. Google, Inc., 35 the court stated in passing that “[i]n the absence of the protection afforded by section 230(c)(1), one who published or distributed speech online” may be liable for defamation even if the website had no knowledge of the content. 36 In 2016, a First Circuit panel acknowledged that “First Amendment values . . . drive” § 230, but wrote that this rule could be amended via mere legislation. 37

Academics share the assumption that the First Amendment does not require § 230. 38 As Professor Rebecca Tushnet writes, the “First Amendment does not currently require a particular solution” for internet intermediary defamation liability. 39 In defending § 230, Professor Jeff Kosseff admits that its “immunity extends beyond intermediary protections provided by the First Amendment.” 40 And Professor William H. Freivogel puts it bluntly: “It would not be accurate to argue that the First Amendment requires Section 230.” 41 In canvassing the First Amendment options for addressing how internet platforms moderate content, one scholar does not address the possibility of § 230 as a First Amendment rule. 42 Other commentators seem to share this assumption as well. 43 Moreover, the many scholars who have criticized § 230 do not seem to believe that a response is necessary against the charge that the rule is mandated by the Constitution. For instance, two critics simply write that § 230 is “not required by the First Amendment.” 44

III. Why the First Amendment Requires § 230

This Part begins by explaining First Amendment scrutiny of defamation law and then argues that, under that case law, imposing defamation liability on internet intermediaries is unconstitutional.

A. Defamation and the First Amendment

Like § 230, the First Amendment operates as a constraint on the scope of defamation law. While some regulations of speech may be reviewed, for example, under the “generic” strict scrutiny test, 45 other types of speech are governed by specific tests devised by the Court “on a largely ad hoc basis.” 46 The specific rules that the Court devised to govern defamation law, for instance in the 1964 landmark case New York Times Co. v. Sullivan, 47 exemplify this ad hoc approach.

In New York Times, the Supreme Court held that under the First Amendment public officials alleging defamation must show the defendant acted with “actual malice” — knowledge of falsity or reckless disregard of the truth. 48 The Court reasoned that not requiring actual malice could stifle vital discourse because of the fear of civil liability. 49 “A rule compelling the critic of official conduct to guarantee the truth of all his factual assertions,” the Court feared, leads to “self-censorship.” 50 Potential defendants might worry that they could not prove in court the truth of their statements or afford expensive litigation and therefore “make only statements which ‘steer far wider of the unlawful zone.’” 51

Later, in Gertz v. Robert Welch, Inc., 52 the Court held that private individuals alleging defamation did not need to meet an actual malice requirement. 53 The Court noted that “punishment of error runs the risk of inducing a cautious and restrictive exercise of the constitutionally guaranteed freedoms of speech and press.” 54 The Court explained that the interest supporting defamation law is “the compensation of individuals for the harm inflicted on them by defamatory falsehood.” 55 This interest, the Court reasoned, emanated from the importance of protecting individuals’ reputations. 56 In resolving the “tension” between this interest and freedom of speech, the Court sought “breathing space” for the right to free speech by bestowing “strategic protection” under the New York Times standard. 57 The Court distinguished New York Times on two grounds. First, public officials and figures are better able to engage in counterspeech, whereas private individuals find it more difficult to refute published falsehoods. 58 Second, public officials and figures, unlike private individuals, voluntarily assume the risk of being subject to falsehoods. 59 Additionally, the Court “require[d] that state remedies for defamatory falsehood reach no farther than is necessary to protect the legitimate interest involved,” 60 balancing “compensating private individuals for wrongful injury to reputation” 61 against “the constitutional command of the First Amendment”; 62 it therefore held unconstitutional awards of punitive damages absent a showing of actual malice. 63

In devising the rules governing defamation claims, and in other areas of First Amendment doctrine, the Supreme Court has engaged in a methodology of constitutional reasoning grounded in optimizing practical results. As Professor Richard Fallon explains, in developing various areas of constitutional doctrine, the Supreme Court must make determinations about empirical matters that inform the rules it crafts. 64 In New York Times and Gertz, Fallon recounts, the Court did not merely “balance, in an abstract way,” freedom of speech and the interest undergirding defamation law. 65 Instead, it also made “more concrete, empirical, and predictive assessments” regarding the “proclivity of the press to engage in self-censorship under alternative liability regimes,” “the proportion of truthful and untruthful assertions that would be chilled by such regimes,” “the harms that would be done by false speech,” and “the benefits of truthful speech that would be forgone under various imaginable rules.” 66 More dramatically, Professor Daniel Farber identifies New York Times as an example of the notion that “First Amendment doctrines reflect the fear that certain laws overdeter speech and thus lead to a suboptimal amount of total information disseminated in society,” 67 in order to demonstrate that First Amendment doctrines embody “public choice theory — that is, the application of economics methodology to political institutions.” 68 Finally, implementing this policy-based method of constitutional reasoning often involves what Professor David Faigman terms “constitutional fact-finding,” the Court’s use of empirical claims to create constitutional law. 69 As Fallon agrees, New York Times and Gertz are not “atypical in their reliance on empirical, predictive calculations,” 70 and Faigman demonstrates that the Supreme Court routinely makes assumptions about empirical propositions to support constitutional decisionmaking. 71

In employing this practical optimization methodology in New York Times, the Court was comfortable calibrating a rule for public officials that intentionally “overenforce[s]” constitutional goals. 72 Indeed, as Professor David Strauss observes, in constitutional law, prophylactic rules are both ubiquitous and necessary. 73 Strauss notes that from Miranda warnings to strict scrutiny, constitutional law is replete with rules aimed at protecting rights through overenforcement. 74 Expressly building on Strauss’s foundation, Professor Daryl Levinson identifies “[d]efamation law [as] another clear example of a First Amendment prophylactic rule.” 75 Agreeing that prophylactic rules are ubiquitous, Levinson explains that constitutional rules necessarily “depend on such factors as the administrability and expense of a more precise rule and the error costs of false negatives and false positives.” 76

The practical optimization the Supreme Court employed in New York Times and Gertz to calibrate such a First Amendment prophylactic rule suggests that the constitutionality of internet intermediary defamation liability should be assessed along two dimensions that mirror the analysis in those cases: the degree to which this type of defamation liability, first, impinges on protected speech and, second, promotes a governmental interest. Those cases addressed the First Amendment constraints on setting mental states for defamation liability, whereas this Note employs their framework to derive First Amendment constraints on secondary liability for defamation. This Note contends that the censorship that would result from internet intermediary liability for defamation cannot be saved by the government’s interest in imposing liability. In contrast to scholars and jurists who have paid these First Amendment questions relatively little attention, this Note intends to demonstrate the constitutional relevance of the policy-based arguments in favor of § 230, though it does not itself engage in a full-fledged policy analysis.

B. Collateral Censorship

Without § 230 as the constitutional rule, internet intermediaries would limit a significant amount of constitutionally protected speech. The New York Times Court feared that without the requirement of actual malice, “would-be critics of official conduct” would hesitate to speak. 77 Internet intermediary liability implicates a specific variety of self-censorship — collateral censorship — which the New York Times Court explained by quoting Smith v. California 78 at length. 79 What Professor Jack Balkin has termed “collateral censorship” arises not when individuals limit their own speech based on a fear of liability, but rather “when A censors B out of fear that the government will hold A liable for the effects of B’s speech.” 80 In Smith, the Court held unconstitutional an ordinance that prohibited bookstores from possessing obscene books. 81 In rejecting that strict liability rule, the Court explained that many “legal devices and doctrines, in most applications consistent with the Constitution, . . . cannot be applied in settings where they have the collateral effect of inhibiting the freedom of expression, by making the individual the more reluctant to exercise it.” 82 While obscenity is not protected by the First Amendment, the ordinance’s lack of a scienter requirement jeopardized citizens’ access to a variety of protected speech. 83 New York Times quoted from the following key passage: 84

For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. . . . And the bookseller’s burden would become the public’s burden, for by restricting him the public’s access to reading matter would be restricted. . . . The bookseller’s limitation in the amount of reading material with which he could familiarize himself, and his timidity in the face of his absolute criminal liability, thus would tend to restrict the public’s access to forms of the printed word which the State could not constitutionally suppress directly. The bookseller’s self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered. 85

As in Smith, exposing internet intermediaries to liability for defamation communicated by their users would lead to collateral censorship.

First, content moderation to cope with intermediary liability is difficult, and therefore costly. 86 When a website confronts potentially defamatory user-generated content, it must resolve questions of both law and fact. As to questions of law, there is no national law of defamation but instead a fifty-state patchwork. 87 Therefore, websites must resolve the choice of law inquiry regarding which state’s law applies and then determine what that state’s rule is. 88 Moreover, defamation law abounds with privileges and exceptions. Even if a website determined that certain content would support a prima facie case for defamation, it would still need to determine the applicability of various privileges and exceptions. 89 Questions of fact are also difficult for websites to resolve, involving “considerable costs of investigation.” 90 For example, a statement that a business often fails to meet its commercial obligations is not easily verifiable. To the extent that it is difficult for judges and juries to determine the truthfulness of potentially defamatory statements, it is even more difficult for intermediary websites to do so. 91 Even upon receiving notice that a statement is allegedly defamatory, a website does not know whether a complainant is correct or merely hoping to illegitimately induce takedown. 92 In the copyright context, a large number of takedown requests to websites are illegitimate. 93 Some websites have experimented with artificial intelligence algorithms to moderate content. 94 However, algorithms have struggled to moderate content correctly: for example, they have difficulty differentiating between impermissible nudity and fine art. 95 It would be even more difficult for artificial intelligence to properly identify defamation and quite costly to develop that software. And humans are not happy performing the task. 96 Moreover, it is difficult to quickly determine whether certain speech is merely critical or actionable defamation. These difficulties are amplified by the volume of content websites face. As Zeran recognized about moderating “millions of postings,” 97 “[a]lthough this might be feasible for the traditional print publisher, the sheer number of postings on interactive computer services would create an impossible burden in the Internet context.” 98 Efforts to surmount these difficulties, and thus increase the accuracy of moderation to avoid intermediary liability, would be costly because those efforts require investments in labor, time, or technology.

Second, as Smith recognized, the difficulties and costs created by intermediary liability would cause many websites to engage in various forms of collateral censorship — often the least costly method of avoiding liability. 99 In general, websites would err on the side of caution, defaulting to removing allegedly defamatory content instead of engaging in costly legal and factual investigation. 100 The cost to websites of collaterally censoring is very low, whereas the cost of not censoring content is much higher because that decision risks expensive litigation and adverse judgments. 101 Websites “may be deterred from” permitting certain content, as New York Times explained, “even though it is believed to be true and even though it is in fact true, because of doubt whether it can be proved in court or fear of the expense of having to do so.” 102 Individual website employees are unlikely to face repercussions for playing it safe but could face consequences for allowing content that later leads to litigation expenses. Whether or not websites believe a potential lawsuit is meritorious, they will often default to removal because of the potential costs of litigation or an adverse result. 103 Even websites, like Facebook, that can “afford” high moderation and litigation costs would still prefer to avoid them, and this preference will likely influence their moderation. Therefore, in the words of New York Times, websites would tend to permit “only statements which ‘steer far wider of the unlawful zone.’” 104

More generally, some websites might decide not to allow entire categories of content that will be more likely to expose them to liability. For example, politically controversial speech or business and product reviews may be more likely to lead to defamation actions than more mundane content. 105 Or bloggers might decline to include a comment section. 106

Worse still, some websites might never launch. 107 For websites whose business models center on particularly controversial content, the anticipated costs of moderation and litigation could prevent them from ever securing capital or launching. 108 This issue might be termed complete collateral censorship — where an intermediary fails to come into existence because of a fear of being held liable for the speech of others. Various websites credit § 230 with their very existence. 109

Additional collateral censorship will result from mistakes. Because the imposition of liability would lead to more moderation and removal, websites are more likely to make mistakes in removal decisions. Websites may make technical mistakes (perhaps from a user’s accidental clicking of a “report” button). But given the difficulty of factual investigation, they are also likely to make fundamental mistakes about the factual basis of defamation claims — removing content based on incorrect understandings of the veracity of users’ allegations. Moreover, websites will make mistakes of law. Fearing these mistakes, websites may default to adherence to the strictest state laws, thus censoring more speech and allowing the most speech-restrictive states to govern the entire internet. If websites employ algorithms to shoulder this legal burden, they expose themselves to the inaccuracies in those programs.

Due to the problems noted above, opportunistic lawyers or other individuals will attempt to exploit websites’ vulnerabilities. Businesses and individuals that do not like posts about them on websites will request that the posts be taken down whether they are defamatory or not. 110 Individuals and businesses hoping to have material taken down will learn how to manipulate intermediaries. 111 Websites would face difficulties dealing with even good faith reports of defamation, let alone handling individuals who allege defamation as a cynical tactic to remove the content they dislike. 112 If a business wants to hide a bad review or an individual hopes to conceal a piece of truthful but unflattering information, the business or individual can notify the website that the content is false and threaten to sue. Even if a website does not immediately capitulate, it will incur large costs investigating these claims and may reach the incorrect conclusion. During the investigation period, the website may take down the content, which would also inhibit speech. For potentially defamatory posts, websites might decide to implement a delay so that they can prescreen content for defamation.

For these reasons, notice-based liability is problematic. As then–Chief Judge Wilkinson explained in Zeran, “[b]ecause service providers would be subject to liability only for the publication of information, and not for its removal, they would have a natural incentive simply to remove messages upon notification, whether the contents were defamatory or not”; thus, “liability upon notice has a chilling effect on the freedom of Internet speech.” 113

Third, the nondefamatory speech lost to collateral censorship is often valuable. In cases like Reno v. ACLU, 114 the Supreme Court has demonstrated an appreciation for the vital role internet speech plays in modern society. The Court lauded the then-nascent internet’s “vast democratic forums.” 115 It described the internet as a “dynamic, multifaceted category of communication includ[ing] not only traditional print and news services, but also audio, video, and still images, as well as interactive, real-time dialogue.” 116 It noted that “any person with [internet access] can become a town crier with a voice that resonates farther than it could from any soapbox.” 117 In addition, the Court observed that because of the tremendous scale of the internet, speech regulations that threatened liability for certain acts could limit many types of protected speech. 118 More recently, in Packingham v. North Carolina, 119 the Supreme Court held unconstitutional a statute that prohibited registered sex offenders from accessing social networking websites, like Facebook or Twitter, that allow children to have accounts. 120 The Court explained that “to foreclose access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights.” 121 It deemed the internet “the most important place[] (in a spatial sense) for the exchange of views.” 122 The Court continued that an understanding of the internet “informs the analysis” 123 of a law in question:

Social media offers “relatively unlimited, low-cost capacity for communication of all kinds.” On Facebook, for example, users can debate religion and politics with their friends and neighbors or share vacation photos. On LinkedIn, users can look for work, advertise for employees, or review tips on entrepreneurship. And on Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. . . . In short, social media users employ these websites to engage in a wide array of protected First Amendment activity on topics “as diverse as human thought.”

. . . While we now may be coming to the realization that the Cyber Age is a revolution of historic proportions, we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be. 124

The Supreme Court’s veneration of internet speech suggests special caution before permitting laws that limit it. 125

More specifically, the nondefamatory speech lost to collateral censorship will often be vulnerable speech. 126 Individuals who want certain speech taken down sometimes file illegitimate content takedown requests. 127 This dynamic allows the majority to suppress minority views or could constitute a potential heckler’s veto. 128 The speech that is the first to be collaterally censored may be the most vulnerable and least likely to appear through alternative channels. At its core, the First Amendment seeks to protect unpopular views 129 — unobjectionable views are less frequently jeopardized. As noted above, because of the cost of additional content moderation, some websites may turn to algorithms for assistance. Yet recently, algorithms have fared no better in protecting marginalized speech: Google’s artificial intelligence moderation system that seeks to highlight toxic speech accidentally flags sentences such as “I am a gay woman.” 130

Other vulnerable speech includes speech of little immediate personal benefit but that, when part of a community, provides a large public benefit — such as business reviews or Wikipedia edits. Some of the most socially beneficial forms of speech that can pose defamation concerns are consumer reviews, such as those on Yelp. These websites have flourished because of § 230. 131 Facing liability, review websites would become more cautious and manipulable, and therefore less accurate, thus decreasing competition. Nonprofits like Wikipedia also depend on § 230 to freely provide accurate content. 132

Ultimately, the threat of defamation liability will often cause websites to seek to avoid liability by overcensoring valuable user speech.

C. Interest

The second area of First Amendment analysis concerns the government’s interest underlying defamation law. In Gertz, the Court held that the “legitimate state interest underlying the law of libel is the compensation of individuals for the harm inflicted on them by defamatory falsehood.” 133 However, the Court articulated a rationale for the compensation interest that spoke to a broader purpose: each individual has the “right to the protection of his own good name.” 134 This reputational rationale is broader than the interest in compensation because it undergirds a larger swath of defamation law. For example, a reform that would increase only the deterrent effect of defamation law could not be supported by the compensation interest because that reform would not necessarily increase the likelihood of compensation; however, it would certainly promote the reputational rationale by decreasing the prevalence of defamation through deterrence.

In general, a reputational interest is a much more natural understanding of the justification for defamation law. The Court should adopt reputation protection, which encompasses deterrence and not mere compensation, as the interest justifying defamation law. As the Court explained in Rosenblatt v. Baer, 135 “underl[ying] the law of defamation [is an] interest in preventing and redressing attacks upon reputation.” 136 Would one prefer an ideal world in which every victim of defamation was compensated or one in which defamation law deterred all defamation before it took place, thus protecting all individuals’ reputations? More realistically, the objective of defamation law should be reducing instances of defamation as much as possible while compensating individuals who are nonetheless defamed. 137 Analogously, the interest underlying “battery” is not merely securing a remedy for those who have been battered but also reducing the occurrence of that tortious action. 138 This distinction matters because it expands the denominator: if one contemplates a broader interest than compensation alone, different laws may pass or fail constitutional muster. For instance, as argued below, § 230 does limit compensation, but the law mitigates this limitation because it encourages websites to remove defamation. The net effect on a general reputational interest is greater than the effect on compensation. When a legitimate interest is artificially narrowed, laws that would fail as rights infringing under the more natural, broader interest may appear constitutional. 139

Intermediary defamation liability does not serve this interest well because it would not significantly reduce defamation beyond the status quo. First, in the status quo, many websites already moderate and remove defamatory content even without the threat of intermediary liability. 140 They make this decision because of “a sense of corporate social responsibility, but also, more importantly, because their economic viability depends on meeting users’ speech and community norms.” 141 Websites have significant existing incentives to remove defamatory material. And, “[b]ecause they seek to please their customers, intermediaries are more likely than courts to develop content standards that conform to basic community values.” 142 Second, some defamation may be persistent in the face of intermediary liability. Consider, for instance, the extreme amount of copyright infringement that persists on the internet even though federal law imposes liability on intermediaries for copyright infringement committed by their users. 143 Persistent users will often be able to disseminate whatever information they want by using multiple accounts, anonymous accounts, or other websites. Certain bad-actor websites will also persist by remaining outside the jurisdiction of U.S. courts. 144 Third, intermediary liability could produce a smaller reduction in defamation because some websites will meet the “Moderator’s Dilemma” 145 posed by Stratton Oakmont by taking a more hands-off approach to content. In other words, instead of attempting to avoid liability by overcensoring their users, they will reduce the screening they engage in to avoid acquiring knowledge that might subject them to liability. 146 If they otherwise would have moderated content and removed some defamation, this choice renders defamation law less effective.

Those who have been defamed still retain various tools that may mitigate the harms of defamation. Section 230 does not prevent a defamed person from engaging in counterspeech. 147 Nor does it prevent plaintiffs from suing the party that originally defamed them. 148 In fact, an empirical study found that in a majority of § 230 cases, plaintiffs “were able to identify and sue the original source of the content that caused them harm.” 149 Additionally, the same study revealed that even if potential plaintiffs do not recover in court, they are often successful in getting the content in question removed. 150 While these options are sometimes of limited efficacy, they are at minimum marginally mitigating.

The considerable collateral censorship that intermediary liability would cause is not worth the meager benefit to the reputational interest such liability might provide. That not all plaintiffs could achieve compensation is an insufficient reason to reject this rule — New York Times has the same consequence. As the Court there explained, “erroneous statement is inevitable in free debate, and . . . it must be protected if the freedoms of expression are to have the ‘breathing space’ that they ‘need . . . to survive.’” 151 The Court creates broad prophylactic rules, “breathing space,” to protect the freedom of expression through intentional overenforcement of the constitutional right. 152 Gertz consciously devised an “accommodation of the competing values at stake in defamation suits,” 153 and “attempt[ed] to reconcile state law with a competing interest grounded in the constitutional command of the First Amendment.” 154 To this analysis must be added the Court’s more recent statements on the importance of internet speech and the need for restraint in regulating it. 155 Given the new “relationship between the First Amendment and the modern Internet,” the Court has warned that it “must exercise extreme caution before suggesting that the First Amendment provides scant protection.” 156 As to the First Amendment, intermediary liability imperils a significant amount of constitutionally protected speech through the collateral censorship explained above. Collateral censorship may be even more troublesome than the self-censorship feared in New York Times because the censored speakers do not themselves decide when to refrain from speaking. 157 As to the interest in enforcing defamation law, imposing intermediary liability will be of limited utility because websites already moderate content, much defamation will persist in the face of intermediary liability, and intermediary liability might encourage some websites to decrease their moderation. The Court must require confidence in the benefits of a defamation law, especially when the speech at stake may be so valuable. Here, the gains for defamation law are doubtful whereas the harms to speech are significant. Therefore, under the Court’s defamation, collateral censorship, and internet speech case law, the First Amendment requires the prophylactic rule of § 230.

Applying the First Amendment in the untrodden ground of (1) internet (2) intermediary (3) defamation liability combines three areas of doctrine. By (1) recognizing the value and vulnerability of internet speech (Reno and Packingham), (2) identifying the First Amendment harm — collateral censorship — that intermediary liability imposes (Smith), and (3) employing the framework the Court uses to evaluate the constitutionality of defamation laws (New York Times and Gertz), the optimal constitutional rule comes into focus. To be sure, Packingham merely lauded internet speech, Smith rejected only strict liability, and New York Times calibrated a mental state (actual malice) and not secondary liability. However, § 230’s rule is the best extension of these precedents into the new context of internet intermediary defamation, for the reasons detailed above.

D. Section 230’s Critics

By way of framing potential critiques of § 230, as Cathy Gellis brilliantly explains, “§ 230 is potentially in jeopardy of becoming a victim of its own success,” because its benefits are less salient than are particular instances of defamation. 158 As she notes, “§ 230 has done so well creating a new normalcy that it’s much harder to see just how much it has allowed to go right,” such that “when things do go wrong . . . we are always at risk of letting our outrage at the specific injustice cause us to be tempted to kill the golden goose by upending something that on the whole has enabled so much good.” 159

Some might argue that § 230 unacceptably creates a different constitutional standard for online, versus offline, speech. 160 However, the proposed rule would be equally desirable in truly analogous offline contexts. More importantly, the Court has been willing to set different rules under the First Amendment for different forms of media based on their different factual contexts. 161 The Court treats the regulation of adult content, for example, differently across different types of media such as newspapers, broadcast, and cable. 162 More broadly, much of this line drawing is based on sound factual distinctions between various types of media. Here, for instance, internet intermediary liability would be less successful than offline intermediary liability in reducing defamation and is therefore less constitutionally desirable. And, as the Court has explained, given the relatively new “relationship between the First Amendment and the modern Internet,” it “must exercise extreme caution before suggesting that the First Amendment provides scant protection.” 163

Some critics of § 230 argue that the statute has unacceptable distributional consequences. Professor Mary Anne Franks, in particular, has written thoughtfully about the concern that § 230 may shield defamation that “disproportionately burden[s] vulnerable private citizens including women, racial and religious minorities, and the LGBT community.” 164 This Note accepts this claim. However, First Amendment doctrine is not necessarily concerned with disproportionately distributed harm 165 and may be especially skeptical of laws explicitly aimed at remedying it. 166 Yet the First Amendment should be particularly wary of laws that disproportionately hurt the speech of certain marginalized groups. Intermediary liability has this potential, as it would provide a heckler’s veto to those who object to minority speech. Content moderation has “shut down conversations among women of color about the harassment they receive online,” “censor[ed] women who share childbirth images in private groups,” and “disappeared documentation of police brutality, the Syrian war, and the human rights abuses suffered by the Rohingya.” 167 Intermediary liability would increase websites’ incentive to cautiously accede to takedown requests targeting vulnerable private citizens. Liability may increase the use of moderation algorithms, and “[d]ecisions based on automated social media content analysis risk further marginalizing and disproportionately censoring groups that already face discrimination.” 168 While marginalized communities may be particularly vulnerable to online defamation, they are also particularly vulnerable to the collateral censorship that would result from intermediary liability. In addition, even if a repeal of § 230 would generally benefit defamation plaintiffs, it is unclear whether these marginalized plaintiffs would benefit. Given the cost of litigation, our most marginalized citizens are the ones least likely to be able to take advantage of a new liability regime. Most importantly, as argued above, collateral censorship is a major threat to vulnerable voices online. Therefore, it is at best uncertain which regime has superior distributional consequences.

IV. The Implications of a Constitutional Rule

Several implications flow from the idea that the First Amendment requires internet intermediary liability protection. First, regardless of whether one is an internet exceptionalist, 169 this Note demonstrates how constitutional questions regarding the internet occasionally require unique answers, if only because of dramatically changed factual circumstances. The volume of internet speech and its resistance to regulation produce a potentially surprising result for defamation law. Second, understanding § 230 as coextensive with the constitutional requirement helps explain why courts have generally taken a broad view of the statute and consistently held against defamation claims. This realization also might explain why courts at first provided broad protection under the statute and then began to grow more reluctant in cases where speech seems less directly implicated, such as failure-to-warn claims. Third, recognizing the First Amendment as requiring § 230 shows how § 230 may be reminiscent of other federal statutes that would now likely constitute the rule required by the Constitution. 170 This type of statute demonstrates how Congress can enforce constitutional law prior to the courts and also how statutory experimentation can yield enduring norms. Fourth, in new cases on the edge of § 230’s protections, this First Amendment underpinning provides a rationale, perhaps via constitutional avoidance, for interpreting immunity broadly. Fifth, § 230 covers more claims than defamation. If the First Amendment requires intermediary liability protection from defamation suits, other claims may also be implicated. Sixth, though this Note argues for shielding certain editorial decisions of websites, this legal argument should not preclude public debate regarding their practices. As discussed, many websites laudably expend resources seeking to remove defamation. But many websites could make further strides, seeking to provide a “fair opportunity to participate” and “direct accountability.” 171 Finally, if Congress amends or repeals § 230, 172 courts should be willing to step in with the First Amendment if warranted.

This Note finds for § 230 enduring constitutional footing. 173 Given the risk of collateral censorship and meager gains in stopping defamation that an alternate rule would produce, the First Amendment cannot permit holding websites liable for the defamation of their users. When and if the time comes, courts should be willing to recognize the importance of this protection and hold it provided for by the Constitution.

Footnotes

^ 47 U.S.C. § 230 (2012).

^ CDA 230: The Most Important Law Protecting Internet Speech, Electronic Frontier Found., https://www.eff.org/issues/cda230 [https://perma.cc/JN9Y-TVNT]; accord Jack M. Balkin, Old-School/New-School Speech Regulation, 127 Harv. L. Rev. 2296, 2313 (2014) (“Section 230 immunity . . . ha[s] been among the most important protections of free expression in the United States in the digital age.”); David Post, A Bit of Internet History, or How Two Members of Congress Helped Create a Trillion or So Dollars of Value, Wash. Post: Volokh Conspiracy (Aug. 27, 2015), http://wapo.st/1K9AmTh [https://perma.cc/S4LN-WE9P].

^ See infra Part III, pp. 2032–47.

^ This Note seeks to demonstrate the constitutional relevance of the policy-based arguments in favor of § 230, though it does not itself engage in a full-fledged policy analysis.

^ Restatement (Second) of Torts § 558 (Am. Law Inst. 1977).

^ Id. § 581 & cmts. d & e.

^ 776 F. Supp. 135 (S.D.N.Y. 1991).

^ Id. at 140–41. The court held there was no genuine issue of material fact as to knowledge. Id. at 141.

^ 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).

^ Id. at *4–5. The website also held itself out as engaging in moderation. Id.

^ Zeran v. Am. Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997) (noting that Stratton Oakmont created “disincentives to selfregulation [sic]”).

^ David R. Sheridan, Zeran v. AOL and the Effect of Section 230 of the Communications Decency Act upon Liability for Defamation on the Internet, 61 Alb. L. Rev. 147, 159 (1997) (citing Robert Cannon, The Legislative History of Senator Exon’s Communications Decency Act: Regulating Barbarians on the Information Superhighway, 49 Fed. Comm. L.J. 51, 62 nn.51–52 (1996)).

^ Id. at 150–51; Cannon, supra note 19, at 61–63, 62 nn.51–52.

^ 47 U.S.C. § 230(c)(1) (2012).

^ Zeran, 129 F.3d at 331–33.

^ See 47 U.S.C. § 230(f)(3) (defining “information content provider”); id. § 230(c)(1) (exempting websites from liability only for information “provided by another information content provider” (emphasis added)).

^ See David S. Ardia, Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity Under Section 230 of the Communications Decency Act, 43 Loy. L.A. L. Rev. 373, 452 (2010) (“Defamation-type claims were far and away the most numerous claims in the section 230 case law, and the courts consistently held that these claims fell within section 230’s protections.” (footnotes omitted)); see also, e.g., Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003); Green v. Am. Online, 318 F.3d 465 (3d Cir. 2003). Courts have reached inconsistent results in nontraditional cases outside defamation law. See, e.g., Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016) (holding that § 230 did not protect the owner of the website Model Mayhem from a failure-to-warn claim); Recent Case, Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016), 130 Harv. L. Rev. 777, 777 (2016) (criticizing the Ninth Circuit’s decision for “declin[ing] to adopt an alternative understanding of the statute more in line with the law’s stated policy objectives”).

^ 129 F.3d 327.

^ For courts, see, for example, Batzel v. Smith, 333 F.3d 1018, 1020 (9th Cir. 2003).

^ Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 139 (S.D.N.Y. 1991) (citing Smith v. California, 361 U.S. 147, 152–53 (1959)).

^ Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710, at *5 (N.Y. Sup. Ct. May 24, 1995).

^ Gucci Am., Inc. v. Hall & Assocs., 135 F. Supp. 2d 409, 421 (S.D.N.Y. 2001) (quoting Zeran, 129 F.3d at 330).

^ No. 16-CV-03282, 2017 WL 4773366 (N.D. Cal. Oct. 23, 2017).

^ Id. at *4 (citing Batzel, 333 F.3d at 1026–27) (ignoring First Amendment question).

^ Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 29 (1st Cir. 2016), cert. denied, 137 S. Ct. 622 (2017). To be sure, perhaps the panel would differentiate between the constitutional law of defamation and potential criminal liability for sex trafficking.

^ See, e.g., Jack M. Balkin, The Future of Free Expression in a Digital Age, 36 Pepp. L. Rev. 427, 434 (2009) (“[Section 230] is not required by First Amendment doctrine.”); Rebecca Tushnet, Power Without Responsibility: Intermediaries and the First Amendment, 76 Geo. Wash. L. Rev. 986, 1008 n.95 (2008) (“Before the CDA, the assumption in the law reviews tended to be that the [New York Times v. Sullivan] standard was the best to be hoped for as a constitutional matter.”).

^ Tushnet, supra note 38, at 988.

^ Jeff Kosseff, Defending Section 230: The Value of Intermediary Immunity, 15 J. Tech. L. & Pol’y 123, 136 (2010).

^ William H. Freivogel, Does the Communications Decency Act Foster Indecency?, 16 Comm. L. & Pol’y 17, 48 (2011).

^ Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1613–15 (2018).

^ See, e.g., James Grimmelmann, No ESC, Law.com: The Recorder (Nov. 10, 2017, 2:03 AM), https://www.law.com/therecorder/sites/therecorder/2017/11/10/no-esc/ [https://perma.cc/B2ZW-HQHJ] (referring to § 230 as “subconstitutional free speech law”). But cf. Cecilia Ziniti, Note, The Optimal Liability System for Online Service Providers: How Zeran v. America Online Got It Right and Web 2.0 Proves It, 23 Berkeley Tech. L.J. 583, 605–06 (2008) (arguing briefly that a notice-based system would be unconstitutional).

^ Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 Fordham L. Rev. 401, 419 (2017); see also Heather Saint, Note, Section 230 of the Communications Decency Act: The True Culprit of Internet Defamation, 36 Loy. L.A. Ent. L. Rev. 39, 69 (2015).

^ Richard H. Fallon, Jr., Strict Judicial Scrutiny, 54 UCLA L. Rev. 1267, 1292 (2007).