With Great Power Comes Great Responsibility: How the White House’s “Blueprint for an AI Bill of Rights” Gives Big Tech the Opportunity to Better Protect Americans’ Rights

In today’s world, automated technologies are becoming ubiquitous in every aspect of our lives. From autonomous vehicles to artificial intelligence (AI) systems that can diagnose diseases in patients, automated technologies are driving innovation. Yet, along with the evolution of these technologies come threats to the rights of the American public.

To begin addressing these issues, the White House’s Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights, a non-binding white paper “intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.” Notably, the blueprint outlines five principles, each an area of protection for Americans in relation to AI:

  • Safe and Effective Systems: Automated systems should be safe and effective. Protective measures that should be taken to demonstrate that systems are safe and effective based on their intended use include “pre-deployment testing, risk identification and mitigation, and ongoing monitoring.” Systems should be independently evaluated to confirm their safety and effectiveness and the results of these evaluations should be made public whenever possible. 
  • Algorithmic Discrimination Protections: Automated systems should not contribute to unjustified different treatment of, or impacts disfavoring, people who are members of protected classes. Designers, developers, and deployers can protect against this by incorporating, throughout the application development life cycle, proactive equity assessments, use of representative data, protection against proxies for demographic features, accessibility for people with disabilities, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight.
  • Data Privacy: People should be protected from abusive data practices and should have control over how their data is used. Automated systems should be designed to include privacy by default and to collect “only data strictly necessary for the specific context.” Consent should only be used in instances where it can be appropriately and meaningfully given. In addition, consent requests should be brief, understandable in plain language, and provide individuals with the ability to control the contexts in which their data can be collected and used. Data in sensitive domains (e.g., health, work, education, criminal justice, finance, and youth) should be subject to enhanced protections and restrictions, including limits on use to necessary functions, ethical review, and use prohibitions.
  • Notice and Explanation: Designers, developers, and deployers of automated systems should provide up-to-date notice and explanation of a system and its outcomes to its users in a clear, timely, and accessible manner. People should be notified when, how, and why an automated system outcome impacts them, including instances when the system is not the sole input determining the outcome. 
  • Human Alternatives, Consideration, and Fallback: People “should be able to opt out from automated systems in favor of a human alternative, where appropriate.” If an automated system fails, produces an error, or an individual wants to appeal or contest its impacts, individuals should have access to timely human consideration and remedy via a fallback and escalation process. 

Also included within the Blueprint is “From Principles to Practice,” “a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process.”

The Blueprint comes at a critical time, as “a growing number of measures are being introduced to study the impact of the use of AI or algorithms and the potential roles for policymakers.” Indeed, in 2022, 17 states introduced generalized artificial intelligence bills or resolutions; Colorado, Illinois, and Vermont enacted such legislation and went a step further by creating task forces to study AI.

While these initiatives are a move in the right direction, they do not fill the legislative void of the AI world, which has left a slew of problems unaddressed in its wake. For example, there is currently no regulation of recruiting algorithms that systematically discriminate against candidates of particular genders, races, or religions. Likewise, despite the high-profile Congressional hearings involving Facebook (now known as Meta), no legislation has been enacted to control the role that Facebook’s AI plays in amplifying hate speech on the popular social media application.

So what can the Blueprint for an AI Bill of Rights actually do to help solve these problems? In short: sadly, nothing; as mentioned previously, the Blueprint is non-binding. Nevertheless, the Office of Science and Technology Policy should be applauded for starting what is, quite frankly, a conversation this country has needed to have for some time. Accordingly, unless the Blueprint makes its way into state or federal privacy law sometime soon, the onus shifts to the technology industry to adopt and adhere to these principles to better protect the rights of Americans. Like Uncle Ben once told Peter Parker, the Blueprint for an AI Bill of Rights now tells big tech: “with great power comes great responsibility.”

Sophia Vouvalis

Sophia attended Penn State, where she studied Information Sciences & Technology and Security & Risk Analysis. Since coming to law school, Sophia has served as a Research Assistant for Professor Deborah Gerhardt and has been involved in the Carolina Intellectual Property Law Association and Women in Law. In her free time, she enjoys thrifting/antiquing, sudoku, and spending time with her cat.