Contemporary privacy law does not go far enough to protect our privacy interests, particularly where artificial intelligence and machine learning are concerned. While many have written on problems of algorithmic bias and data deletion, this Article introduces the novel concept of the “algorithmic shadow” and explains the new privacy remedy of “algorithmic destruction,” also known as algorithmic disgorgement or machine unlearning. The algorithmic shadow is the persistent imprint of the training data that has been fed into a machine learning model and used to refine that machine learning system. This shadow persists even after the data is deleted from the initial training data set, which means that privacy rights like data deletion do not address the new class of privacy harms arising from algorithmic shadows. Algorithmic destruction (deletion of models or algorithms trained on misbegotten data) has emerged as an alternative, or perhaps supplementary, remedy and regulatory enforcement tool to address these new harms.
This Article introduces two concepts to legal scholarship: the algorithmic shadow and algorithmic destruction. First, the Article defines the algorithmic shadow, a novel concept that has so far evaded significant legal scholarly discussion, despite its importance in changing understandings of privacy risks and harms. Second, the Article argues that data deletion does not remedy algorithmic shadow harms and advocates for the development of new privacy remedies to address them. Finally, the Article introduces algorithmic destruction as a potential right and remedy, explaining its theoretical and practical applications, as well as its potential drawbacks and concerns.
Tiffany C. Li, SMU L. Rev.