Concepedia

Publication | Open Access

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

Citations: 124
References: 103
Year: 2020

Abstract

With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.

