Low-income people suffer from digital discrimination on the basis of their socio-economic status. Automated decision-making systems, often powered by machine learning and artificial intelligence, shape the opportunities of those experiencing poverty because they serve as gatekeepers to the necessities of modern life. Yet under the existing legal regime, it is perfectly legal to discriminate against people because they are poor. Poverty is not a protected characteristic, unlike race, gender, disability, religion, or certain other identities. This lack of legal protection has accelerated digital discrimination against the poor, fueled by the scope, speed, and scale of big data networks. This Article highlights four areas where data-centric technologies adversely impact low-income people by excluding them from opportunities or targeting them for exploitation: tenant screening, credit scoring, higher education, and targeted advertising. Currently, there are numerous proposals to combat algorithmic bias by updating analog-era civil rights laws for our datafied society, as well as to bolster civil rights within comprehensive data privacy protections and algorithmic accountability standards. At this precipice of legislative reform, it is time to include socio-economic status as a protected characteristic in antidiscrimination laws for the digital age. This Article explains how protecting low-income people within emerging legal frameworks would provide a valuable counterweight against opaque and unaccountable digital discrimination, which undermines any vision of economic justice.
