SMU Science and Technology Law Review

Abstract

This Essay addresses a growing constitutional challenge in public governance: the increasing delegation of consequential decisions to algorithmic systems that encode value trade-offs between liberty and security, equity and efficiency, and expression and control, without visibility, legal justification, or institutional oversight. We view this hidden normative choice as an example of the “Artificial Intelligence (AI) Trolley Problem.” Like the classic moral dilemma, it involves unavoidable sacrifices among competing goods. Unlike its philosophical counterpart, however, algorithmic trade-offs occur silently. They are embedded in data proxies, optimization logic, and model design, and insulated from scrutiny by claims of technical neutrality. This Essay argues that such systems are not merely tools; they are sites of public power and instruments that increasingly drive and govern public administration. In a constitutional democracy, that power must be subject to judgment. Yet current legal doctrine remains ill-equipped to identify, evaluate, or constrain algorithmic governance. With the United States Supreme Court’s 2024 decision in Loper Bright Enterprises v. Raimondo effectively ending Chevron deference, U.S. courts can no longer assume that decisions rendered by automated systems are presumptively lawful. They must now confront directly the normative structure of those systems and the automation technologies that underpin them. To meet that obligation, this Essay develops the AI Trolley Problem as a diagnostic and doctrinal framework. Our proposed framework empowers courts and regulatory agencies to detect when algorithmic systems make governance decisions, assesses whether those decisions implicate protected civil rights or interests, and evaluates whether those decisions meet prevailing standards of legal justification.
This Essay advances institutional solutions, including statutory thresholds for the deployment of AI tools, algorithmic impact assessments, rights to explanation, strengthened judicial review procedures, and structural reforms such as independent oversight and sunset provisions. The algorithm of any given AI tool may optimize, but it does not justify. That role belongs to the judiciary. Judges must determine whether the values embedded in automated decisions are consistent with constitutional protections and democratic legitimacy. This Essay offers a roadmap for that judgment, and for restoring the visibility, accountability, and reason-giving that public power, however encoded, must always entail.