As artificial intelligence (AI) services and products are increasingly deployed in society, the reliability and transparency of AI have become pressing challenges.
Governments, enterprises, and organizations around the world are developing various approaches, such as guidelines and tools, to mitigate these risks.
In practice, however, these approaches are often not used sufficiently, for the following reasons:
- The significant risks differ from one AI service to another.
- AI models alone may not be able to mitigate risks sufficiently and continuously.
- In some cases, risks arise through people (including users), not only through the AI model itself.
To address these issues, a practical framework is needed for considering which risks matter for a given AI service or product, who is responsible for those risks, and which assessment metrics and tools to use.
This study group is operated as a joint research project between the University of Tokyo and Deloitte Tohmatsu Risk Services Co., Ltd., and conducts case-based investigation and research using the Risk Chain Model (RCModel), developed by a research group at the University of Tokyo.
In addition, various organizations and companies, including NEC Corporation’s “Research on AI Ethics and Legal Systems” and the AI Business Promotion Consortium’s “AI Ethics WG,” are supporting this project by providing case studies and other assistance. The goal is to create a platform that anyone can use to consider, together with all stakeholders involved, the risks associated with the planning, development, provision, and use of AI.
This workshop is conducted as part of the Toyota Foundation’s D18-ST-0008 “Formation of a Platform for Ethics and Governance of Artificial Intelligence.”
How to Use the Risk Chain Model
*The case studies below are fictional and neither raise issues with, nor provide assurance for, any actual company or AI service.