Publication | Closed Access
Explainable artificial intelligence: A survey
Citations: 1.1K
References: 37
Year: 2018
Venue: Unknown
Artificial Intelligence, Engineering, Machine Learning, AI Foundation, AI Safety, Intelligent Systems, Data Science, Interpretability, Robot Learning, Supervised Learning, Machine Learning Model, Artificial General Intelligence, Computer Science, Deep Learning, Explanation-based Learning, Automated Reasoning, Model Interpretability, Explainable Artificial Intelligence, Explainable AI
Machine learning has achieved superhuman performance across many tasks, yet its lack of transparency hampers trust in critical domains such as healthcare and finance, driving interest in explainable AI (XAI). This paper summarizes recent XAI developments in supervised learning, examines their connection to artificial general intelligence, and proposes future research directions.
In the last decade, with the availability of large datasets and greater computing power, machine learning systems have achieved (super)human performance in a wide variety of tasks. Examples of this rapid development can be seen in image recognition, speech analysis, strategic game playing, and many more. The problem with many state-of-the-art models is their lack of transparency and interpretability. This is a major drawback in many applications, e.g. healthcare and finance, where a rationale for the model's decision is a requirement for trust. In light of these issues, explainable artificial intelligence (XAI) has become an area of interest for the research community. This paper summarizes recent developments in XAI in supervised learning, starts a discussion on its connection with artificial general intelligence, and gives proposals for further research directions.