Publication | Open Access
Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy
16 Citations | 31 References | Year: 2020
Keywords: Artificial Intelligence, Intelligent System, Engineering, Machine Learning, Human-machine Interaction, Intelligent Systems, Social Sciences, Mixed-initiative Systems, Interactive Machine Learning, Data Science, Human-in-the-loop User Feedback, Interactive Systems, Human Computation, Human-in-the-loop, Design, User Experience, User Feedback, Computer Science, Human-in-the-loop Machine Learning, Human-ai Interaction, Human-computer Interaction, Interactive Feedback Collection, Model Accuracy
Mixed‑initiative systems let users give feedback to improve performance, yet the impact of such interaction on user trust remains largely unexplored. This study examines how providing feedback influences users’ understanding of an intelligent system and their perception of its accuracy. A controlled experiment with a simulated object‑detection system and image data was used to assess the effects of interactive feedback collection on user impressions. Participants’ trust and perceived accuracy dropped after giving feedback, even when the system’s actual accuracy improved, underscoring the need to account for trust effects when designing user‑feedback mechanisms.
Mixed-initiative systems allow users to interactively provide feedback to potentially improve system performance. Human feedback can correct model errors and update model parameters to dynamically adapt to changing data. Additionally, many users desire the ability to have a greater level of control and fix perceived flaws in systems they rely on. However, how the ability to provide feedback to autonomous systems influences user trust is a largely unexplored area of research. Our research investigates how the act of providing feedback can affect user understanding of an intelligent system and its accuracy. We present a controlled experiment using a simulated object detection system with image data to study the effects of interactive feedback collection on user impressions. The results show that providing human-in-the-loop feedback lowered both participants’ trust in the system and their perception of system accuracy, regardless of whether the system accuracy improved in response to their feedback. These results highlight the importance of considering the effects of allowing end-user feedback on user trust when designing intelligent systems.