No.24
Technology Governance Policy Research Unit
Advancing AI Audits for Enhanced AI Governance
Summary
As artificial intelligence (AI) is integrated into various services and systems across society, many companies and organizations have published AI principles and policies and made related commitments. At the same time, some argue that the voluntary principles adopted by developers and providers of AI services and systems do not sufficiently address risk, and that independent audits are therefore needed. This policy recommendation summarizes the issues related to auditing AI services and systems and presents a vision for AI auditing that contributes to sound AI governance.
The issues surrounding AI auditing are diverse, and even when the same terms are used, parties often make different assumptions depending on their positions and preconditions, so discussions can easily talk past one another. It is therefore important for the parties to establish a common understanding when discussing issues such as the scope of audits and the timing for putting AI audits into practice.
This policy recommendation addresses the following six issues that arise when discussing AI auditing, presented in Chapter 2: (1) the need for AI audits, (2) the proposition to be proved, (3) the scope of AI audits, (4) the timing of AI audits, (5) requirements for AI audit practitioners, and (6) the parties and organizations involved in AI auditing.
Chapter 3 examines why AI audits are difficult to conduct from technical, institutional, and social perspectives. Chapter 4 presents the case of recruitment AI to give concrete examples of the issues related to AI audits and the reasons why such audits are difficult to conduct.
Chapter 5 presents three recommendations for promoting AI auditing:
Recommendation 1: Development of institutional design for AI audits
Recommendation 2: Training human resources for AI audits
Recommendation 3: Updating AI audits in accordance with technological progress
In this policy recommendation, AI is assumed to be AI that performs recognition and prediction on data; the final chapter outlines how generative AI should be audited.
About the AI Audit Study Group
This policy recommendation reports the results of the AI Audit Study Group, a study group under the AI Governance Project of the Technology Governance Research Unit at the Institute for Future Initiatives, The University of Tokyo. The AI Audit Study Group began its research activities in June 2022 and consists of the following members:
- Arisa Ema (Associate Professor, Institute for Future Initiatives, The University of Tokyo)
- Ryo Sato (Visiting Researcher, Institute for Future Initiatives, The University of Tokyo / Deloitte Touche Tohmatsu LLC)
- Tomoharu Hase (Visiting Researcher, Institute for Future Initiatives, The University of Tokyo / Deloitte Touche Tohmatsu LLC)
- Masafumi Nakano (Visiting Researcher, Institute for Future Initiatives, The University of Tokyo / The University of Toyo)
- Shinji Kamimura (Deloitte Touche Tohmatsu LLC)
- Hiromu Kitamura (CDLE, AI Legal (NEC) / IRCA (International Register of Certified Auditors) “Japan” Members Supporter)
The full report can be downloaded below.
Slides explaining this policy recommendation can be downloaded below.