Rules for Robots: Constitutional Challenges with the AI Bill of Rights’s Principles Regulating Automated Systems

Melany Amarikwa*

Citation: Melany Amarikwa, Rules for Robots: Constitutional Challenges with the AI Bill of Rights’s Principles Regulating Automated Systems, 26 U. Pa. J. Const. L. 1176 (2024).


A few years ago, conversations about artificial intelligence (“AI”) were confined to the pages of books and the ivory towers of academia. Now, even older generations know that AI makes many of the decisions in their lives. The heightened public awareness around AI has generated exciting conversations about its potential to push society into the future, but it has also raised concerns about AI’s safety and fairness. These concerns raise the following question: Can I trust a “robot” or automated system that makes decisions on my behalf?

As the use of AI by federal agencies continues to grow, concerns have been raised about the potential for “corporate capture of public power.” Because many government agencies lack the expertise and resources to develop their own AI models, they rely on private companies to create them, raising questions about bias and privacy safeguards in automated systems. These concerns add to the larger conversation about the trustworthiness of AI decision makers in our daily lives.

***

* J.D. Candidate, 2024, University of Pennsylvania Law School; B.A., 2019, University of California, Berkeley.
