SMU Science and Technology Law Review
Abstract
The stability of the constitutional order turns, in part, on a stable economy and reliable advances in technology. Political order cannot withstand economic collapse or a massive technological failure. Such crises chip away at the collective will to adhere to the social contract because they indicate the government may not have the capacity to uphold its end of the bargain—protecting individual liberty from broad threats. “Unprecedented” economic downturns, however, have a precedent of emerging from the very deliberate decisions of some actors to pursue extremely risky behavior in their self-interest at the expense of the public. Societal disruption from over-dependence on a specific technology is likewise the product of known factors. Governments have no excuse, then, for serially allowing the risky behavior of a few to imperil the political order upon which the many rely for liberty, opportunity, and stability. This is not to say that detecting and stemming such risky behavior before a crisis occurs is easy. It is not. Ignorance, though, can no longer serve as an excuse for governments not taking more seriously the systemic risks to the economy and, by extension, the political order. The transition from the Articles of Confederation to the Constitution and the text of early state constitutions make clear that both levels of government must proactively and successfully mitigate systemic risks. Overreliance on AI could cause severe economic and technological chaos in the event of a failure. If governments allow such risks to go unaddressed, they will be in violation of the social contract—a fact that mandates that state and federal officials do more than simply try to adjust old regulatory frameworks, such as antitrust law, to this novel risk.

A few policy solutions could demonstrate a good-faith effort to shield people if AI does indeed go south. None of these solutions alone will mitigate the risks of excessive reliance on a few AI labs. First, state and federal governments can insist on a diversified AI portfolio in their own procurement practices. The government’s purchasing power can induce competition in the AI space, chipping away at the dominance of the first scalers in the field, such as OpenAI. Knowledge of lucrative contracts with the federal government could be the seed of several startups that grow to become key players in the market. Second, state legislatures and Congress should explore imposing an insurance requirement on the largest AI labs. If you build it and break it, then you should pay for it. This approach has historical and modern precedent. As far back as World War I, the government took active steps to insure against worst-case economic scenarios during turbulent times. Other strategies abound and should be proposed and explored. The key is that the government should not sit on the sidelines and allow further risks to develop.
Recommended Citation
Kevin Frazier, Systemic Risk and the Social Contract, 28 SMU Sci. & Tech. L. Rev. 29 (2025).
