Publication | Open Access
Human-Centered Design to Address Biases in Artificial Intelligence
Citations: 166
References: 35
Year: 2023
Keywords: Artificial Intelligence, Engineering, Cognition, Intelligent Systems, Biomedical Artificial Intelligence, Responsible AI, Data Science, Health Care Disparities, Bias, AI in Healthcare, Public Health, Human-AI Collaboration, Health Services Research, Healthcare Big Data, Cognitive Science, Health Policy, Algorithmic Bias, Clinical Innovation, Clinical Decision Support Systems, Health Informatics
Artificial intelligence (AI) has recognized potential to reduce health care disparities and inequities, but it can also exacerbate them if not implemented equitably. This perspective identifies potential biases at each stage of the AI life cycle: data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders and applying human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in ways that benefit patients and society, reducing health disparities and inequities. By recognizing and addressing biases at each stage of the AI life cycle, AI can achieve its full potential in health care.