Publication | Closed Access
Addressing bias in artificial intelligence for public health surveillance
41 Citations · 27 References · Year: 2023
Artificial Intelligence, Engineering, Text Mining, Natural Language Processing, Algorithmic Bias, Bias, AI Healthcare, Public Health, Content Analysis, Healthcare Big Data, Bias in Natural Language Processing, Health Care Analytics, NLP Task, Medical Language Processing, Public Health Surveillance, Epidemiology, Health Data, NLP Algorithms, Health Informatics
AI, particularly NLP, has enhanced the timeliness and robustness of health data, yet algorithmic bias can misrepresent populations, skew results, and worsen health disparities, requiring careful attention from researchers. This paper investigates how data collection, labeling, and modeling contribute to algorithmic bias in NLP-based public health surveillance. The authors propose open collaboration, auditing processes, and guideline development to mitigate bias arising from these stages and improve NLP algorithms for health surveillance.
Components of artificial intelligence (AI) for analysing social big data, such as natural language processing (NLP) algorithms, have improved the timeliness and robustness of health data. NLP techniques have been implemented to analyse large volumes of text from social media platforms to gain insights on disease symptoms, understand barriers to care and predict disease outbreaks. However, AI-based decisions may contain biases that could misrepresent populations, skew results or lead to errors. Bias, within the scope of this paper, is described as the difference between the predictive values and true values within the modelling of an algorithm. Bias within algorithms may lead to inaccurate healthcare outcomes and exacerbate health disparities when results derived from these biased algorithms are applied to health interventions. Researchers who implement these algorithms must consider when and how bias may arise. This paper explores algorithmic biases as a result of data collection, labelling and modelling of NLP algorithms. Researchers have a role in ensuring that efforts towards combating bias are enforced, especially when drawing health conclusions derived from social media posts that are linguistically diverse. Through the implementation of open collaboration, auditing processes and the development of guidelines, researchers may be able to reduce bias and strengthen the NLP algorithms used for health surveillance.
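The paper's working definition of bias, the difference between an algorithm's predicted values and the true values, can be sketched numerically. The snippet below is a minimal illustration, not the authors' method: the subgroup names, prediction values, and `group_bias` helper are all hypothetical, chosen only to show how a per-group mean error can reveal systematic under-prediction for one linguistic subgroup.

```python
# Minimal sketch of bias as mean (predicted - true) per subgroup.
# All data, group names, and the helper function are hypothetical.

def group_bias(records):
    """Average (predicted - true) for each group; a nonzero value
    indicates systematic over- or under-prediction for that group."""
    groups = {}
    for group, predicted, true in records:
        groups.setdefault(group, []).append(predicted - true)
    return {g: sum(errs) / len(errs) for g, errs in groups.items()}

# Hypothetical symptom-prevalence predictions for two linguistic subgroups.
records = [
    ("dialect_a", 0.62, 0.60),
    ("dialect_a", 0.55, 0.56),
    ("dialect_b", 0.40, 0.58),  # model under-predicts for this group
    ("dialect_b", 0.35, 0.50),
]
print(group_bias(records))  # dialect_b shows a large negative mean error
```

A near-zero mean error for one group alongside a large negative one for another is the kind of disparity the paper argues can skew surveillance results when predictions feed into health interventions.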