As the deployment of artificial intelligence (AI) services and products in society expands, issues related to the reliability and transparency of AI are becoming a challenge.
Governments, enterprises, and organizations in many countries are developing approaches such as guidelines and tools to mitigate these risks.
However, these approaches are often not used sufficiently in practice because of the following problems.
- Each AI service has different significant risks.
- AI models alone may not be able to mitigate risks sufficiently and continuously.
- In some cases, it is people (including users) whose actions cause risks to materialize.
To address these issues, a practical framework is needed for considering which risks are important for an AI service or product, who is responsible for those risks, and which assessment metrics and tools should be used.
This study group is operated as a joint research project between the University of Tokyo and Deloitte Tohmatsu Risk Services Co., Ltd., and investigates specific cases using the Risk Chain Model (RCModel) developed by a research group at the University of Tokyo.
In addition, various organizations and companies, including NEC Corporation’s “Research on AI Ethics and Legal Systems” and the AI Business Promotion Consortium’s “AI Ethics WG,” provide case studies and other support for this project. The goal is to create a platform that anyone can use to consider, together with all stakeholders involved, the risks associated with the planning, development, provision, and use of AI.
This workshop is conducted as part of the Toyota Foundation’s D18-ST-0008 project, “Formation of a Platform for Ethics and Governance of Artificial Intelligence.”
How to use the Risk Chain Model
Risk Chain Model (RCModel) Guide Ver1.0
Policy Recommendation “RCModel, a Risk Chain Model for Risk Reduction in AI Services”
Practice: Risk Chain Model and Recruitment AI
Video of the event (Part A)
Video of the event (Part B)
Online event report: “A Risk-based Approach to AI Services: Risk Chain Models and Recruitment AI” (2021/7/15)
Case Studies
*The case studies below are fictional and neither raise issues about nor provide assurance for any actual company or AI service.
Case01. Recruitment AI (2021/07)
Case02. Unstaffed Convenience Stores (2021/09)
Case03. Power Line Inspection AI (2021/09)
Case04. Defect Detection AI (2021/09)
Case05. Road Guide Robot (2021/09)
Case06. Verification of Recidivism Possibility AI (2021/09)
Case07. Loan Examination AI (2022/04)
Case08. Cancer Diagnostic AI (2022/04)
Case09. Smart Appliance Optimization (2022/04)
Case10. Driverless Bus (2022/04)
Case11. Guidance in Plant Operation (2022/04)
*This case is detailed as Use Case 1 in the deliverables of the AI Business Promotion Consortium’s AI Ethics WG. It was assessed as a typical case with the cooperation of Chiyoda Corporation. (Note: Chiyoda Corporation did not assess this case directly.)
Reference (Japanese only): https://aibpc.org/?p=1523