No.23
Technology Governance Policy Research Unit
Towards Responsible AI Deployment: Policy Recommendations for the Hiroshima AI Process
Executive summary
Alongside the proliferation of artificial intelligence (AI) technologies, particularly machine learning, generative AI, capable of producing diverse content, has come into widespread use, raising expectations for improvements and innovations in many aspects of life and work. However, it has also underscored the need for the appropriate design, development, deployment, and utilization of AI. Given that AI evolves and operates within society, there is a pressing need to establish comprehensive AI governance that encompasses AI developers, providers, users, public institutions, and society at large. It is important to call for a shared framework that emphasizes transparency and equity, fostering innovation while mitigating risk.
Foundational policy recommendations
1. Establish Forums for Responsible AI Deployment Discussions
To uphold shared values, such as fundamental human rights and democratic principles, it is imperative to facilitate the creation of forums for agile and ongoing multistakeholder deliberation.
2. Promote “Interoperability” Between Frameworks and Mutual Respect for Discipline on AI
The concept of “interoperability,” as discussed at the G7 Hiroshima Summit, can be considered from two aspects: “standards” and “interoperability among frameworks.”
Interoperability among frameworks is an approach to achieving common objectives while respecting disciplines on AI that may vary among countries, regions, organizations, and domains. Such disciplines take various forms, and it is not always necessary to establish entirely new regulations for every emerging technology within the AI spectrum. Nevertheless, where the applicability of existing laws to AI services remains ambiguous, or where the objective is to safeguard vulnerable segments of society, appropriate measures must be considered, including the potential enactment of new legislation within the respective countries, regions, and domains.
3. Stakeholders and Measures for Responsible AI Deployment
To advance the responsible deployment of AI, it is imperative to delineate the responsible actors and formulate appropriate measures.
In particular, across the continuum of processes spanning AI design, development, provisioning, and utilization, in which many organizations and individuals may be involved, the locus of responsibility can become ambiguous. Consequently, in interorganizational transactions from AI development through provisioning, it is important to ensure appropriateness through contractual agreements.
Furthermore, monitoring mechanisms should be established to ensure that such transactions are conducted properly.
In transactions between AI providers and consumers, providers should take suitable preventive and corrective measures, while AI users, by acquiring appropriate literacy, can also exercise governance through disciplines other than regulation, such as market dynamics, investment, and reputation. Additionally, it is advisable to consider establishing remedial measures, such as compensation systems, for cases where accountability is unclear.
Considering that the AI lifecycle extends beyond national, regional, and organizational boundaries, it is essential to promote discussions that enhance transparency regarding the responsibilities and measures of these stakeholders.
The full report can be downloaded below.