Unintentional Terrorists: An Upcoming Supreme Court Case Challenges the Role of the YouTube Algorithm in Terrorist Attacks

Like many cases that challenge the status quo, Gonzalez v. Google has its origins in tragedy. In 2015, Nohemi Gonzalez was a 23-year-old American woman studying in Paris when she was murdered in a wave of ISIS terror attacks. Her family is now suing Google, arguing that Google was fully aware that ISIS used its platform and was capable of acting to stop this, but chose not to. Importantly, they argue that Google’s algorithm promoted radicalizing YouTube videos to the users it predicted would be most receptive, directly contributing to the growth of groups like ISIS that murdered Nohemi and others. This is not the first time that social media platforms have been implicated in facilitating terrorism. Multiple observers have already shown that the pipeline from TikToks or YouTube videos about the January 6th insurrection to radicalizing white supremacist content is far less theoretical than one might hope. Chances are, if you’re part of the majority of the U.S. population that uses social media, you’ve seen radicalizing content.

The legislation at the heart of this challenge is section 230 of the 1996 Communications Decency Act, which states that providers of interactive computer services, including social media companies, “shall [not] be treated as the publisher[s] or speaker[s]” of anything said on or otherwise posted to their sites. As a result, these sites cannot be held legally liable for content posted by their users and remain shielded from civil lawsuits. They also remain protected if they choose to engage in content moderation, such as editing posts or even deleting them. Notably, because the language of section 230 applies to any “provider or user” of an interactive computer service, any media site that maintains a comments section is also protected from liability.

The parties bringing suit on Nohemi’s behalf face a significant barrier in challenging this legislation, as the bulk of prior litigation under section 230 has gone against claims like theirs. In fact, a federal appeals court has already rejected their claim on section 230 grounds. On appeal, the Gonzalez family’s lawyers argue that Google’s behavior falls outside section 230’s protections because affirmatively promoting content is not one of the protected forms of content moderation.

Another formidable barrier facing the Gonzalez family is one of causation. Specifically, they must prove that Nohemi’s murderers actually consumed YouTube content that contributed to their radicalization and ultimately resulted in her murder. So far, in earlier proceedings, the Gonzalez plaintiffs have been able to show only that two of the twelve terrorists implicated in the attack used social media platforms to post links to radicalizing ISIS content.

If the Court does find Google complicit in Nohemi’s death, it could mark a massive turning point in how we interact with the internet.

Proponents of abolishing section 230 argue that, regardless of the outcome of this case, there have already been clear demonstrations of how easily individuals can be radicalized via social media. In their view, this lack of regulation has directly led to the rise of the hate groups we see today.

On the other side, however, are concerns about restricting free speech in any form. Further, if social media companies can be sued over how users use their platforms, the resulting flood of litigation would produce, at minimum, use policies wildly different from those we have today or, at most, a wave of familiar websites shutting down. Many commentators anticipate that the latter is very likely for litigation-prone forums such as 4chan and Reddit. They argue that we may even see the shutdown of more mainstream platforms such as Twitter and Yelp, which may choose to cease providing services rather than face increased litigation risk.

Ultimately, the clearly demonstrable loss of life must be addressed. At the same time, the use of these platforms is deeply ingrained in twenty-first-century life. The ideal solution would satisfy both camps, such as some form of limited regulation: algorithms could be modified to restrict extremist and hate group content in particular, while most other forms of liability shielding remain intact. The concern with such legislation, as with much similar legislation, is the immense power that would be wielded by whoever decides which groups get removed from promotion algorithms. It does not seem far-fetched to imagine a world in which social media sites ban content producers who criticize them, or in which government actors ban groups they deem unpopular. In these scenarios, free speech concerns feel alarmingly well-founded. Thus, some form of regulation, albeit with safeguards to prevent abuse, may be the only way forward. Deaths like Nohemi’s show that change is sorely needed; the form that change will take remains to be seen.

Yasmin M. Khodaei

Yasmin attended the University of Virginia, where she majored in Biology and English Literature. Since coming to law school, Yasmin has been involved in the Anti-Death Penalty Project, the National Lawyers Guild, Outlaw, and Women in Law. Yasmin is broadly interested in health law and spends her free time gardening, perusing farmers markets in the Triangle, and spending time with her cat.