SMU Law Review

Abstract

Artificial Intelligence’s (AI) global race for comparative advantage has the world spinning, while leaving people of color and the poor rushing to reinvent AI imagination in less racist, less destructive ways. In repurposing AI technology, we can look to close the national racial gaps in academic achievement, healthcare, housing, income, and fairness in the criminal justice system to conceive what AI reparations can fairly look like. AI can create a fantasy world, realizing goods we previously thought impossible. However, if AI does not close these national gaps, it no longer has foreseeable or practical social utility compared to its foreseeable and actual grave social harm. The hypothetical promise of AI’s beneficial use as an equality machine, absent the requisite action and commitment to address the inequality it already causes, is fantastic propaganda masquerading as merit for a Silicon Valley that has yet to diversify its own ranks or undo the harm it is already causing. Care must be taken that fanciful imagining yields to practical realities: in many cases, AI no longer has practical social utility when weighed against the harm it poses to democracy, privacy, equality, and personhood, and its contribution to global warming.

Until we can accept as a nation that the Sherman Antitrust Act of 1890 and the Clayton Antitrust Act of 1914 are not up to the task of breaking up tech companies; until we can acknowledge that DOJ and FTC regulators are constrained from using their power by a framework of permissibility implicit in the “consumer welfare standard” of antitrust law; and until a conservative judiciary inclined to defer to that paradigm ceases its enabling of big tech, workers, students, and all natural persons will continue to be harmed by big tech’s anticompetitive and inhumane activity. Accordingly, AI should be vigorously subject to antitrust protections against monopoly, and to the corporate, contractual, and tort liability explored herein, such as strict liability or a new AI prima facie tort that can pierce the corporate and technological veil of algorithmic proprietary secrecy in the interest of justice. And where appropriate, AI implementation should be phased out until a later time when we have better command and control over how to eliminate harmful impacts that would otherwise only exacerbate existing inequities.

Fourth Amendment jurisprudence of a totalitarian tenor, greatly aided by Terry v. Ohio, has opened the door to expansive police power through AI’s air superiority and the proliferation of surveillance in communities of color. This development is further exacerbated by AI companies’ protectionist actions. AI rests in a protectionist ecology that includes, inter alia, the notion of black boxes, deep neural network learning, Section 230 of the Communications Decency Act, and partnerships with law enforcement that provide cover under the auspices of police immunity. These developments should discourage any “safe harbor” protecting tech companies from liability unless and until there is a concomitant safe harbor for Blacks and people of color to be free of the impact of harmful algorithmic spell casting.

As a society, we should endeavor to protect the sovereign soul’s choice to decide which actions it will implicitly endorse with its own biometric property. Because we do not morally consent to the use of our biometrics to accuse, harass, or harm another in a lineup, an arrest, or worse, these concerns should be seen as the lawful exercise of our right to remain conscientious objectors under the First Amendment. Our biometrics should not bear false witness against our neighbors in violation of our First Amendment right to the free exercise of religious belief, sincerely held convictions, and conscientious objections thereto.

Accordingly, this Article offers a number of policy recommendations for legislative intervention that have informed the work of the author as a Commissioner on the Massachusetts Commission on Facial Recognition Technology, work that has now become the framework for recently proposed federal legislation, The Facial Recognition Technology Act of 2022. It further explores what AI reparations might fairly look like and the collective social movements of resistance needed to bring them to fruition. It imagines a collective ecology of self-determination to counteract the expansive scope of AI’s protectionism, surveillance, and discrimination. This movement of self-determination seeks: (1) Black, Brown, and race-justice-conscious progressives holding majority participatory governance over all harmful tech applied disproportionately to those of us already facing both social death and contingent violence in our society, through legislation, judicial activism, entrepreneurial influence, algorithmically enforced injunctions, and community organizing; (2) a prevailing reparations mindset infused in coding, staffing, governance, and antitrust accountability within all industry sectors of AI product development and services; (3) the establishment of our own counter AI tech, as well as tech, law, and social enrichment educational academies, technological knowledge exchange programs, victim compensation funds, and our own ISPs, CDNs, cloud services, domain registrars, and social media platforms provided on our own terms to facilitate positive social change in our communities; and (4) personal daily divestment from AI companies’ ubiquitous technologies, to the extent practicable, to avoid their hypnotic and addictive effects and to deny further profits to dehumanizing AI tech practices.

AI requires a more just imagination. In this way, we can continue to define ourselves for ourselves and submit to an inside-out, heart-centered mindfulness perspective that informs our coding work and advocacy. Recognizing that we are engaged in a battle for the mind and soul of AI, the nation, and ourselves is all the more imperative since we know that algorithms are not just programmed: they program us and the world in which we live. The need for public education, the cornerstone institution for creating an informed civil society, is now greater than ever, but it too is insidiously infected by algorithms that digitally codify the old Jim Crow laws, promoting the same racial profiling, segregative tracking, and stigma labeling many public school students like me had to overcome. For those of us who stand successful in defiance of these predictive algorithms, we stand simultaneously as the living embodiment of the promise inherent in all of us and of the endemic fallacies of erroneous predictive code. A need thus arises for a counter-disruptive narrative in which our victory as survivors over coded inequity disrupts the false psychological narrative of technological objectivity and promise for equality.

Digital Object Identifier (DOI)

https://doi.org/10.25172/smulr.75.3.7