Benefits and Risks of AI for Crime Prevention and Law Enforcement
- Venue: Conference Room, Ito International Research Institute, The University of Tokyo
- Hosted by: Institute for Future Initiatives, The University of Tokyo
- Speaker: Mr. Irakli Beridze, Head of the United Nations Interregional Crime and Justice Research Institute (UNICRI)
The “International Symposium: AI and Justice” was canceled due to the spread of the coronavirus. Instead, a small, closed workshop was held on 13 March 2020 with Mr. Beridze. Artificial intelligence (AI) and robotics technologies are now being used by law enforcement agencies such as the police and the judiciary. At the same time, as criminal use of AI becomes a problem, law enforcement agencies need to understand the changes and challenges it poses. In this workshop, Mr. Beridze presented the current situation and challenges of AI use in crime prevention and law enforcement and discussed future cooperation.
Mr. Beridze pointed out that “3 + 1” components are important for understanding AI. The 3 refers to (1) data abundance, (2) advanced algorithms, and (3) computational power. The +1 supporting these three components is investment. The AI-related market is estimated to grow to 500 billion euros (approx. 60 trillion yen) by 2025.
As the private sector advances AI development, more than 40 countries currently have national AI strategies. The United Nations is also discussing AI. The United Nations Interregional Crime and Justice Research Institute (UNICRI) Centre for AI and Robotics, headed by Mr. Beridze, was established in The Hague, the Netherlands, in 2017. The Centre discusses AI and robotics in relation to crime prevention and human rights and provides training programs and information to help law enforcement agencies understand AI and robotics. Its network is extensive, linking UN departments such as the Office of the High Commissioner for Human Rights (OHCHR) and the United Nations Office of Counter-Terrorism (UNOCT), international organizations such as Interpol, Europol, and the World Economic Forum, as well as governments, industries, and academic communities around the world.
The discussion of AI in law enforcement comes from two directions. The first is to understand how criminals use AI, which is currently discussed from three perspectives. The first perspective is “digital crime”. Crime in cyberspace is already a problem. Today, human hackers steal money and threaten security, but machines are beginning to replace them: a machine operates 24 hours a day, costs less, and learns on its own. As data volumes grow, it is expected that humans will no longer be able to cope.
The next perspective is “physical crime”. For example, one could fly a drone equipped with an AI application that recognizes images and attacks targets. The increasing accuracy of image recognition also increases the risk of such crimes.
The last perspective is “political crime”. Disinformation and fake news are not new issues, but until now they have mostly taken written form. Now, fake information can be created in video and audio form. Disinformation about politicians and corporate CEOs could ruin their reputations, or, in the extreme, false information could trigger a war. There are currently concerns about the political use of such disinformation.
Law enforcement agencies need to be trained to deal with AI-enabled crime. Specifically, it is necessary to consider scenarios covering what technology is currently available, how it is used, and how to counter it.
At the same time, law enforcement agencies are using AI to solve problems. Specifically, four technical domains (audio processing, visual processing, resource optimization, and natural language processing) are beginning to be used in a variety of situations despite the challenges.
In Western countries, there is currently a debate on how to use face recognition technology because of privacy concerns. We need to find ways to use it without entrenching existing power structures or violating basic human rights. In Norway, for example, police have introduced a non-intrusive surveillance system that automatically anonymizes images by covering people with cartoon characters so that no personal information can be seen. In addition to human rights and freedoms, ensuring fairness, accountability, transparency, and explainability (together called FATE) is important when law enforcement uses AI for monitoring and control. Adhering to FATE will earn people’s trust.
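The core of the anonymization idea above can be sketched in a few lines. This is a minimal illustration, not the Norwegian system’s actual implementation: it assumes faces have already been detected (bounding boxes are given) and simply fills each region with a flat value, standing in for the cartoon overlay. The function name and the tiny example image are hypothetical.

```python
def anonymize(image, boxes, fill=0):
    """Blank out each (x, y, w, h) box in a 2-D grayscale image (list of rows).

    Stands in for covering a detected person with a cartoon character:
    once the region is overwritten, no personal information remains.
    """
    out = [row[:] for row in image]  # copy so the original frame is untouched
    for x, y, w, h in boxes:
        for r in range(y, min(y + h, len(out))):
            for c in range(x, min(x + w, len(out[r]))):
                out[r][c] = fill     # overwrite identifying pixels
    return out

# A 4x4 "image" where a detector (not shown) found a face in the top-left 2x2.
img = [[9, 9, 1, 1],
       [9, 9, 1, 1],
       [1, 1, 1, 1],
       [1, 1, 1, 1]]
masked = anonymize(img, [(0, 0, 2, 2)])
print(masked[0])  # → [0, 0, 1, 1]
```

In a real deployment the boxes would come from a detection model and the fill would be a graphic rather than a constant, but the privacy property is the same: the output frame can be stored or shared without exposing the original pixels.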
The use of AI is challenging not just for the police but also for the judiciary. Last year, the Centre conducted a training program for judges in Dubai. Moreover, the volume of data handled by the police and the judiciary will continue to increase, and they cannot process such huge amounts of data without the help of AI. As a result, the use of AI in law enforcement is expected to grow sooner or later. But neither AI nor human judgment is perfect, so instead of completely replacing humans with AI, we need to think about how humans and machines can work together.
UNICRI is now working with Interpol and other organizations to create toolkits for law enforcement agencies. Instead of creating new principles for AI, they are creating guidebooks and best practices that implement existing principles and values. Methodologies organized and consolidated by international organizations will then move to the next step: implementation according to the circumstances of each country. Every use of AI technology involves uncertainties and risks, and different countries are assumed to have different risk tolerances and different gray zones around what constitutes a crime. That is why Mr. Beridze stressed the importance of multi-stakeholder discussion involving a variety of professions, industries, and countries.
When it comes to the use of AI in law enforcement, the discussion tends to focus exclusively on legal and criminal aspects. However, Mr. Beridze stressed that discussions should be conducted from a broader perspective that includes social and economic viewpoints as well. Crime is inherently inseparable from social and economic conditions. For example, issues such as the unemployment rate, immigration policy, global warming, and the current spread of the coronavirus in Europe are all related to rises and falls in the crime rate. Education, welfare policies, and investment are also important in preventing crime. Situations are complex, and risk scenarios need to be developed through discussions involving a wide range of people.
In the present age, when crime is becoming more diverse and technology is rapidly advancing, it is necessary to discuss problem setting and responses with various people, beyond existing barriers and concepts. The Centre for AI and Robotics, headed by Mr. Beridze, is part of UNICRI, but because it is a project-based organization, it enables flexible collaboration among various governments, industries, and academic communities. At the end of the workshop, we discussed how the University of Tokyo, in cooperation with Mr. Beridze and other institutions in Japan and overseas, could build a forum on the use of AI in law enforcement and other public sectors.
Written by Arisa Ema, Institute for Future Initiatives, The University of Tokyo