Publication | Open Access
Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles
Citations: 44
References: 37
Year: 2021
Keywords: Artificial Intelligence, Engineering, Marine Engineering, Intelligent Systems, Autonomous Systems, XAI Needs, Intelligent Autonomous Systems, Autonomous Vehicles, Systems Engineering, Interpretability, Robot Learning, Knowledge Representation, Autonomous Surface Vehicles, Common-Sense Reasoning, Computer Science, Autonomous Driving, Underwater Robot, Reasoning, Underwater Vehicle, Automation, Human-AI Interaction, Human-Computer Interaction, Explainable Artificial Intelligence, Robotics, Explainable AI
Explainable Artificial Intelligence (XAI) for Autonomous Surface Vehicles (ASVs) addresses developers’ needs for model interpretation, understandability, and trust. As ASVs approach wide-scale deployment, these needs are expanded to include end user interactions in real-world contexts. Despite recent successes of technology-centered XAI for enhancing the explainability of AI techniques to expert users, these approaches do not necessarily carry over to non-expert end users. Passengers, other vessels, and remote operators will have XAI needs distinct from those of expert users targeted in a traditional technology-centered approach. We formulate a concept called ‘human-centered XAI’ to address emerging end user interaction needs for ASVs. To structure the concept, we adopt a model-based reasoning method for concept formation consisting of three processes: analogy, visualization, and mental simulation, drawing from examples of recent ASV research at the Norwegian University of Science and Technology (NTNU). The examples show how current research activities point to novel ways of addressing XAI needs for distinct end user interactions and underpin the human-centered XAI approach. Findings show how representations of (1) usability, (2) trust, and (3) safety make up the main processes in human-centered XAI. The contribution is the formation of human-centered XAI to help advance the research community’s efforts to expand the agenda of interpretability, understandability, and trust to include end user ASV interactions.