Publication | Open Access
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Citations: 211 | References: 0 | Year: 2020
Keywords: Artificial Intelligence, Engineering, Information Security, Verification, Intelligent Systems, Formal Verification, Hardware Security, Responsible AI, Ethics of Artificial Intelligence, AI Safety Education, Verifiable Claims, Trustworthy Artificial Intelligence, Data Privacy, Development Processes, Computer Science, Trust in Artificial Intelligence, Data Security, Trustworthy AI, Automated Reasoning, AI Systems, Technology, Artificial Intelligence Ethics
With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose (spanning institutions, software, and hardware) and make recommendations aimed at implementing, exploring, or improving those mechanisms.