Canceled: International Symposium: AI and Justice
- Date: Fri, Mar 13, 2020
- Time: 16:00-19:00
- Location: In view of the escalating novel coronavirus outbreak, and as a matter of safety and precaution, we have canceled the international symposium. We appreciate your kind understanding.
- Hosted by:
Institute for Future Initiatives, The University of Tokyo,
Fondation France Japon de l’EHESS (École des hautes études en sciences sociales, Paris)
AIR
- Co-hosted by:
Ethics Committee, Japanese Society for Artificial Intelligence
- Language:
English-Japanese with simultaneous interpretation
In recent years, AI and other information technologies have come into use by law enforcement agencies and the judiciary. As deepfakes and other abuses of AI become pressing issues, law enforcement officials need to better understand the impact of digital transformation, what AI can and cannot do, and the challenges it faces.
This symposium invites Jean Lassègue of CNRS, a philosopher of science and co-author, with the French jurist and judge Antoine Garapon, of “Justice digitale”, and Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI), which provides training and education programs for various countries on the use and misuse of AI.
In the panel, three Japanese discussants, Tatsuhiko Inatani (Kyoto University, criminal law), Takehiro Ohya (Keio University, jurisprudence) and Masahiro Kurosaki (National Defense Academy, international law), will discuss “AI and Justice” and its implications with the speakers.
This event is supported by JST-RISTEX Grant Number JPMJRX16H2.
- 16:00 Opening remarks
Sébastien Lechevalier (Fondation France Japon de l’EHESS)
- 16:05 Presentation 1: Jean Lassègue (CNRS)
“The Impact of Digital Transformations on the Rule of Law”
- 16:35 Presentation 2: Irakli Beridze (UNICRI)
“Opportunities and Challenges of Using AI for Peace and Justice”
- 17:05 Comments from designated discussants
Tatsuhiko Inatani (Kyoto University)
Takehiro Ohya (Keio University)
Masahiro Kurosaki (National Defense Academy)
- 17:45 Break
- 18:00 Panel Discussion and Q&A
Panelists: Jean Lassègue, Irakli Beridze, Tatsuhiko Inatani, Takehiro Ohya, Masahiro Kurosaki
Moderator: Arisa Ema, Assistant Professor, Institute for Future Initiatives, UTokyo
- 18:50 Closing remarks
Hideaki Shiroyama, Institute for Future Initiatives, Graduate Schools for Law and Politics, and GraSPP, UTokyo
Jean Lassègue, “The Impact of Digital Transformations on the Rule of Law”
In this presentation, I will focus on the main digital transformations that impact the making of law and the building of a legal order. These changes fall into four main categories: new forms of literacy, new social interfaces, new forms of social mediation and new forms of judgement. They all contribute to triggering a competition between the traditional (mostly textual) and the new (mostly digital) forms of legality.
1. New forms of literacy: all writing systems (either alphabetic or ideogrammatic) now depend on a backstage digital coding in order to function over digital networks. The historical origin of this digital coding lies in the combinatorial power of the alphabetic system and the positional notation of numbers. But this undercover alphabetization and arithmetization of all writing systems at a global level goes hand in hand with a complete illiteracy among most of their users: only computer scientists know how to write software, and their importance as a social group is comparable to that of scribes in Antiquity. In the case of law, this generates a rather unique form of competition between forms of legality: law professionals can no longer read legal texts the way they used to and depend on computer scientists to do so, even though they nonetheless have to produce legal judgements in the traditional way.
2. New social interfaces: examples will be given of how digitalisation disrupts the legal way of drawing a line between the private and public spheres of life and challenges the prescriptive dimension of law by replacing it with merely statistical rules.
3. New social mediations: the digitalisation of legal systems includes four new social mediations: (1) pieces of software that help reach legal decisions; (2) platforms which organize various forms of litigation; (3) databases that collect data and at the same time predict behaviors as well as legal issues; (4) blockchains on which contracts and exchanges of goods are supposedly securely written. The basic aim of these new forms of mediation is to get rid of the symbolic aspects supported by traditional legal institutions and to replace them with strictly technological solutions.
4. New forms of judgement: the predictive power of pieces of software built on large databases compels legal experts to partly delegate their judgements to algorithms. Legal judgements therefore become a collective enterprise that intertwines State (sovereignty) and private (profit) concerns.
These four main categories can help clarify the impact of digitalization on the rule of law and consequently also draw limits to this impact.
Irakli Beridze, “Opportunities and Challenges of Using AI for Peace and Justice”
This presentation will focus on the link between Artificial Intelligence (AI) and the criminal justice system, in particular the relevance of these technological advancements for law enforcement and the court systems.
AI can enable law enforcement agencies, the judiciary and legal professionals to enhance their capabilities to prevent, control, investigate and prosecute crime. For instance, machine learning, an application of AI, promises to support law enforcement in more effectively and efficiently identifying persons of interest, stolen vehicles, or suspicious sounds and behaviour, or predicting trends in criminality or terrorist action, while AI in the courts could support legal research and analysis for the identification of precedent and the automated generation of judgments.
Notwithstanding this potential, the use of AI is not without controversy, and concerns have been raised from a human rights and civil liberties perspective that such uses could undermine the trust placed in law enforcement, the court systems and the criminal justice approach in general.
At the same time, AI can also enable new digital, physical and political threats for us to grapple with going forward. While the integration of these technologies into crime has yet to be substantially identified, preparedness for the emergence of new threats and crimes must be a priority as these technologies become more accessible and pervasive throughout society. The challenges surrounding the malicious use of deepfakes are just one area of immediate concern.
Jean Lassègue
Senior Fellow at the French National Centre for Scientific Research (CNRS).
A philosopher of science by training, my research was first focused on the epistemology of computer science from the 1920s to the 1950s (Hilbert’s program, Turing’s concept of computability, machines for deciphering wartime codes, and the engineering of the first computers). I then became involved in the debates concerning the importance of computational models in human cognition, but contrary to the most commonly adopted view, it is by referring to Turing’s pioneering work that I argued against them. This led me to claim, in sync with other researchers from both the hard sciences and the humanities, that computer science should be interpreted as the latest step to date in the long history of alphabetical writing and positional number notation in the West. It then became possible to contribute to a new anthropological framework in which computer science and the gradual digitalization of human activities could be merged into a larger historical and social perspective at a global level – what we are experiencing now worldwide. By scrutinizing how the gradual digitalisation of social relationships has deep consequences on the way their normative aspects are interpreted and produced by the rule of law, I was able to describe this entirely new legal situation as a problem of literacy: the rule of law, which was up to now based on corpora of readable texts, is challenged by a new form of legality based upon unreadable coded laws implemented in various pieces of software using vast databases. It is the connection between literacy, computer science and the rule of law which is at the core of my current research.
Irakli Beridze
Head, Centre for Artificial Intelligence and Robotics, UNICRI (United Nations)
He has more than 20 years of experience in leading multilateral negotiations and developing stakeholder engagement programmes with governments, UN agencies, international organisations, think tanks, civil society, foundations, academia, private industry and other partners at the international level. Since 2014, he has initiated and managed one of the first United Nations Programmes on Artificial Intelligence and Robotics, and has initiated and organized a number of high-level events at the United Nations General Assembly and other international organizations. His work includes finding synergies with traditional threats and risks, as well as identifying ways in which AI can contribute to the achievement of the United Nations Sustainable Development Goals. Mr Beridze advises governments and international organizations on numerous issues related to international security, scientific and technological developments, emerging technologies, innovation and the disruptive potential of new technologies, particularly on crime prevention, criminal justice and security. He is a member of various international task forces, including the World Economic Forum’s Global Artificial Intelligence Council and the High-Level Expert Group on Artificial Intelligence of the European Commission. He frequently lectures and speaks on subjects related to technological development, exponential technologies, artificial intelligence and robotics, and international security. He has numerous publications in international journals and magazines and is frequently quoted in the media on issues related to artificial intelligence. Irakli Beridze is an International Gender Champion supporting the IGC Panel Parity Pledge. He also received recognition on the occasion of the awarding of the Nobel Peace Prize to the OPCW in 2013.
Tatsuhiko Inatani, Associate Professor, Department of Law, Kyoto University
His scholarship focuses on law and technology and corporate crime. He is also a visiting researcher at RIKEN, where he explores suitable governance systems for the development of AI technologies, e.g. autonomous vehicles, together with AI scientists and engineers. He has published several influential Japanese books and articles concerned with privacy protection, artificial intelligence and deferred prosecution agreements. He received his B.A. from The University of Tokyo and his J.D. from Kyoto University, and was a visiting scholar at Sciences Po Paris and the University of Chicago.
Takehiro Ohya, Professor of jurisprudence, Faculty of Law, Keio University
His main fields of research are the philosophical basis of legal interpretation and the effect of information technology on legal and political systems. After completing his undergraduate studies in law at the University of Tokyo, he served as a research fellow there for four years until he moved to Nagoya University Graduate School of Law as an associate professor. After being promoted to full professor in Nagoya, he moved to Keio in October 2015. He has been a member of the executive board of the Japan Association of Legal Philosophy since 2009 and serves on several committees of the Japanese government, especially on AI and digital ethics.
Masahiro Kurosaki, Associate Professor of International Law, Department of International Relations, National Defense Academy of Japan
He has published a range of articles and book chapters on the law of international security, the law of armed conflict and international criminal law, which include: “Toward the Special Computer Law of Targeting: ‘Fully Autonomous’ Weapons Systems and the Proportionality Test,” Claus Kress et al. eds., Necessity and Proportionality in International Peace and Security Law (Oxford University Press, 2020 forthcoming); “The Fight against Impunity for Core International Crimes: Reflections on the Contribution of Networked Experts to a Regime of Aggravated State Responsibility,” Holly Cullen et al. eds., Experts, Networks and International Law (Cambridge University Press, 2017).
Institute for Future Initiatives, UTokyo
Technology Governance Policy Research Unit
ifi_tg★ifi.u-tokyo.ac.jp (replace ★ with @)