Section 230 has famously been called "the twenty-six words that created the internet." By stipulating that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," the statute insulates social media companies like Facebook and Twitter from liability for real-world harms caused by content their users post.
Politically, Section 230 certainly cannot be described as "popular." In fact, in today's gridlocked political environment, a shared hatred for Section 230 could almost be considered a flashpoint of rare bipartisan agreement. Former president Donald Trump tweeted his distaste for the provision almost 40 times between May 2020 and January 2021. In July 2021, the Democrat-proposed "Health Misinformation Act" was introduced in the Senate. The bill was designed to curb Section 230's protection of social media companies when their products are used to spread dubious information about vaccines and other health concerns. Bill co-sponsor Amy Klobuchar (D-MN) noted, "These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation."
Section 230 faces a new challenge in Gonzalez v. Google, a pivotal Supreme Court case that stands to alter the legal interpretation of the statute and, from a broader angle, the course of conduct for massive internet companies like Facebook (owner of Instagram), Reddit, and Google (owner of YouTube). The facts of the case center on 23-year-old Nohemi Gonzalez, who was killed in a brutal terrorist attack in 2015 while studying abroad in France. The suit, brought against YouTube's parent company Google, alleges that "Google selected the users to whom it would recommend ISIS videos based on what Google knew about each of the millions of YouTube viewers, targeting users whose characteristics indicated that they would be interested in ISIS videos." The radicalizing effects of that content, Gonzalez's family members and estate argue, resulted in her death.
Yasmin Khodaei, one of my esteemed law school compatriots and quite possibly the smartest person I know, wrote about this case last year in a stellar blog post entitled Unintentional Terrorists: An Upcoming Supreme Court Case Challenges the Role of the Youtube Algorithm in Terrorist Attacks. Now, the public has even more information about what the justices were considering as they engaged in oral argument. Oral argument for Gonzalez v. Google convened on February 21, with most of the justices seemingly poised to continue Section 230's protections for social media giants. But first, it is clear the justices had done their homework on the complexities of internet algorithms: arguably more so than the congresspeople questioning TikTok CEO Shou Zi Chew last month about his company generally and its implications for privacy among U.S. residents.
But even baby lawyers know that oftentimes, the questions asked at oral argument are far more important than the answers given by arguing attorneys. And that certainly seems true here. As expected, the plaintiff's attorney began oral argument by asking the Court to "decline th[e] invitation" to treat internet-based "recommendations" as inherently protected by the liability release provision of Section 230. A dispute over the nature of such "recommendations"—and the characterization of algorithmic content-serving actions—informed the remainder of the debate. The Court homed in on the idea that the content algorithms used by a given social media company would "act" the same way, whether they were serving content related to innocuous activities or problematic content. For instance, Justice Thomas clarified that companies use "the same algorithm to present cooking videos to people who are interested in cooking and ISIS videos to people who are interested in ISIS." Both Justice Thomas and Chief Justice Roberts—evidently with lunch on the brain—referenced an example internet search for "rice pilaf" (specifically rice pilaf from Uzbekistan, in Thomas's words) to imply that social media companies distribute content in a neutral manner. As Chief Justice Roberts opined, "It may be significant if the algorithm is the same across … different subject matters, because [if] they don't have a focused algorithm with respect to terrorist activities or — or pilaf or something, … then I think it might be harder for you to say that there's selection involved for which [social media companies] could be held responsible."
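To make the justices' "neutral algorithm" framing concrete, consider a toy sketch of a topic-agnostic recommender. This is purely illustrative and in no way a description of YouTube's actual system; every name and number here is invented. The point it captures is the one Thomas and Roberts pressed: a single ranking rule applied identically to every subject, with nothing in the code that singles out any one topic.

```python
# Toy illustration (NOT YouTube's actual system): one topic-neutral
# ranking function applied identically to every subject matter.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topic: str
    watch_overlap: float  # hypothetical similarity to the user's history, 0..1

def recommend(videos, user_interest, k=2):
    """Return the k candidate titles that best match the user's stated
    interest. The scoring rule never inspects WHAT the topic is; it only
    asks how closely each video matches the viewer's history."""
    matches = [v for v in videos if v.topic == user_interest]
    matches.sort(key=lambda v: v.watch_overlap, reverse=True)
    return [v.title for v in matches[:k]]

catalog = [
    Video("Uzbek rice pilaf recipe", "cooking", 0.9),
    Video("Knife skills 101", "cooking", 0.7),
    Video("Sourdough basics", "cooking", 0.4),
]

# The same function would rank videos on any other topic the same way.
print(recommend(catalog, "cooking"))
```

The plaintiffs' argument, roughly, is that even such a "neutral" rule constitutes a recommendation by the platform; the defendants' (and several justices') intuition is that a rule this generic looks less like editorial "selection" for which a company could be held responsible.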
More generally, Justice Kagan reflected on the difficulty of mapping a statute as seasoned as the federal Communications Decency Act—originally passed in 1996—onto the complexities of today's internet environment. Justice Kagan emphasized the ubiquitous role of algorithms on the modern internet, noting that they are generally used to "organiz[e] and prioritiz[e] material." "Every time anybody looks at anything on the Internet, there is an algorithm involved," Justice Kagan said during argument, later asking, "Does [plaintiff's] position send us down a road such that [Section] 230 really can't mean anything at all?" It was clear that Justice Kagan, at least, was concerned about the broader implications of corralling the power of Section 230 outside of the international terrorism context at bar. She reckoned with the fact that the Court's decision in Gonzalez could implicate "any number of harms that can be done by speech," including less blatant forms of problematic speech, like simple defamation or "content that violates some other law."
Justice Kagan expressed a widely held concern. While the Supreme Court is grappling with deplorable terror propaganda footage in this particular case, its decision could implicate more general political speech in other cases. Even more so, the prospect of increased liability for "real world" damages could incentivize social media companies to take a more hard-line stance on even "borderline" content generated by their users. After all, social media companies have long lauded Section 230's protections. For instance, when asked by the House Energy and Commerce Committee in 2021 how the controversial provision could be changed to protect internet users, Facebook CEO Mark Zuckerberg's solution was decidedly lackluster: "Platforms should not be held liable if a particular piece of content evades its detection—that would be impractical for platforms with billions of posts per day—but they should be required to have adequate systems in place to address unlawful content."
The incentive implications of the Court's limiting Section 230 protection also create a minefield: It seems logical that Facebook, for instance, would react to the increased risk of liability by over-policing online content. Ensuing constitutional complaints related to such over-policing (for instance, "you violated my free speech rights by revoking my political content!") would be rendered impotent by Section 230(c)(2)(A), which insulates internet service providers from civil liability for any good-faith actions to "restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." This implication is not a far-fetched one: Facebook is a corporation, after all, and like every U.S.-incorporated entity, its chief aim is to generate profit for its shareholders, not to enact a nebulous and ill-defined philosophical stance of a "free and open internet" at the risk of costly lawsuits and settlement agreements.
Several justices also voiced concerns about a flood of lawsuit activity that might ensue were the litigation protections of Section 230 stripped away. Justice Kagan mentioned the possibility of "a world of lawsuits" driven by an online environment in which "anytime you have content, you … have these presentational and prioritization choices that can be subject to suit." Justice Kavanaugh emphasized a rare moment of agreement (stating, "Certainly, as Justice Kagan says, lawsuits will be non-stop").
One particularly illuminating portion of oral argument came during the defendants' turn, wherein Justice Jackson, during a short conversational volley, seemed to convince counsel for the defendants to take a harder-line stance against liability on a hypothetical she proposed: whether YouTube would face liability for an algorithm-based decision to place a third-party-generated video of ISIS activities on its homepage. While counsel initially took a hesitant stance in answer to her hypothetical ("well, it depends on whether you think it's an endorsement of …"), counsel eventually stated that a homepage video placement is "absolutely" covered by Section 230's protections for content publishers. Still, while the justices' questions at oral argument certainly seem indicative of their voting intentions, nothing is quite certain until pen hits paper: or, more aptly, until a judicial opinion hits the internet waves. So set your Google Alerts and be prepared to clear your schedule at a moment's notice, because there is still a chance, albeit a slim one, that the Gonzalez v. Google decision could change the Section 230 status quo and life on the internet as we know it.
Adrianne earned her undergraduate degree from UNC-Chapel Hill, where she majored in Journalism and Religious Studies. Prior to law school, she worked for a Raleigh-based digital marketing startup. She currently serves as Vice President of UNC Law’s Innocence Project and hopes to have a career in corporate law. In her spare time, she enjoys doting on her Golden Retriever, Fargo, and scouring thrift stores for mid-century modern furniture.