Abstract

This article explores the use of Artificial Intelligence (AI) in emerging eye-tracking diagnostic technology, focusing on the patient data privacy and security regulations that firms, specifically device inventors and manufacturers, may face, and on how such firms can address the developing privacy and regulatory legal challenges. In addition, we discuss the ethical considerations of algorithmic bias, the impact such biases have on society and emerging technology, and the specific actions companies should take to maximize patient outcomes. Lastly, we offer a case study of Oculogica, an emerging digital health technology company, and its medical device (EyeBOX) to illustrate how digital health firms can enhance patient outcomes while ensuring data security and privacy and simultaneously promoting responsible development of advanced algorithms for diagnostic AI.
