International Symposium: AI and Democracy
- Date: Thu, Mar 25, 2021
- Time: 18:00-20:00 JST / 10:00-12:00 CET
- Location: Online (Zoom webinar)
- Language: English and Japanese (simultaneous interpretation provided)
- Participation fee: Free of charge
- Registration: Prior registration is required. Please register through the event website.
As social disparities become more problematic, information technology, including artificial intelligence (AI), can be a cause of the growing divide or, conversely, a tool that contributes to democracy. How will values such as human rights and sustainability in democracy be extended when AI is designed not only to support decision-making but also to intervene in ethical and political decisions? In this symposium, we will raise questions and explore the theme of AI and democracy in depth through discussions between researchers from France and Japan.
Institute for Future Initiatives, UTokyo
Technology Governance Policy Research Unit
ifi_tg★ifi.u-tokyo.ac.jp(★→@)
Introduction
On March 25, 2021, the Institute for Future Initiatives, University of Tokyo and the Fondation France Japon de l’EHESS (École des hautes études en sciences sociales, Paris) co-hosted an international online symposium entitled “AI and Democracy.”
As social divisions and disparities become increasingly problematic, information technology, including artificial intelligence (AI), can be either a cause of the growing divide or, conversely, a tool for democracy. How will the values of human rights and a sustainable society be extended in a democracy? At the beginning of the symposium, Prof. Ema of the University of Tokyo gave opening remarks and introduced the general outline of the symposium, saying that she hoped to think through AI and democracy together with the audience via discussions between French and Japanese researchers.
Topics Presented
Artificial Intelligence and Human Rights in Democracy – Prof. Vanessa Nurock
Prof. Nurock, Associate Professor at the University of Paris VIII and UNESCO Chairholder on the Ethics of the Living and the Artificial, first introduced the Declaration of the Rights of Man and of the Citizen (Declaration of Human Rights) of 1789, the foundation of democracy in France. The Declaration was intended to anchor three main ideas: to prevent despotic political domination, to prevent discrimination, and to allow democratic representation. With this declaration as a starting point, the internationalization of human rights began after World War II, and various freedoms have been declared, including freedom of thought, speech, information, religion, education, association, and movement.
Human rights are also important in the context of AI and cyberspace, and one question is whether AI will improve or worsen awareness of human rights. Human rights could be improved if the use of AI leads to more equitable decision-making based on information rather than emotion. On the other hand, data bias against minorities, the poor, and particular genders and races is a threat to human rights posed by AI. The erosion of privacy, for example when personal data is used for cross-referencing, is also an important issue. But we should probably also go beyond the question of whether AI improves or worsens awareness of human rights.
In some countries, AI is being used for criminal justice, credit decisions, and employment decisions. When biased AI is used to make important decisions about our lives and work, it is problematic from at least three perspectives, according to Prof. Nurock. It is not the presence of bias itself that is the problem, but rather (1) mistakenly assuming that the results produced by AI from biased data are objective, (2) assuming that objective means impartial, and (3) mistakenly assuming that impartial means ethical. In other words, we take facts for norms, the objective for the impartial, and the impartial for the ethical.
Taking facts for norms is known as the ‘naturalistic fallacy’, a term introduced in the early 20th century by the British philosopher George Edward Moore in his book “Principia Ethica” and developed by the French sociologist Pierre Bourdieu in his book “Masculine Domination.” In the latter, Bourdieu points out that we call certain relationships of domination natural even when they are social, as is the case for male domination.
The same phenomenon is happening in AI, where the bias is artificial, caused by the social structure. Since this bias does not arise naturally, Prof. Nurock proposed to call it the ‘artificialistic fallacy’, by analogy with the ‘naturalistic fallacy’. Not only are we unaware of it, but we let artificial facts become moral norms and rights and forget their origin.
Prof. Nurock presented two examples based on this idea. The first is from Japan, where an AI was presented as a ‘candidate’ in the 2018 Tama mayoral election with, among other arguments, the claim that this AI would be more impartial. Politics may rely on impartiality, but it may also rely on other bases, such as empathy or an understanding of how people live. This example shows that introducing AI into politics may surreptitiously lead us to redefine what politics is or what it should be.
Another case in point is the use of moral dilemmas in driverless cars. An MIT study called “Moral Machine” deals with trolley-problem scenarios in which an AI-driven car must choose which way to swerve: for example, swerving right would run over five people, while swerving left would run over two. There are two problems with this research. The first is that it reduces morality to a dilemma. The other is that programming ‘moral machines’ leads us to believe that ethics can be reduced to an algorithm. This has the same structure as the naturalistic fallacy Prof. Nurock pointed out earlier, where facts become the norm.
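To make the critique concrete, the reduction can be shown in a few lines of code. The sketch below is purely illustrative and hypothetical (it is not code from the MIT study): once programmed, the ‘moral machine’ collapses into a single numeric comparison, which is exactly the reduction of ethics to an algorithm that Prof. Nurock warns against.

```python
# Illustrative sketch only (hypothetical; not from the MIT "Moral Machine" study).
# It shows how a programmed "moral machine" reduces an ethical question
# to a bare utility comparison.

def choose_swerve(casualties_right: int, casualties_left: int) -> str:
    """Pick the direction that minimizes the predicted casualty count."""
    return "left" if casualties_left < casualties_right else "right"

# The scenario from the talk: five people to the right, two to the left.
# The function mechanically answers "left", yet the comparison says nothing
# about fairness, consent, or context, the substance of moral judgment.
print(choose_swerve(casualties_right=5, casualties_left=2))  # prints "left"
```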
However, as the American philosopher John Rawls argued, when thinking about ethics and politics we cannot make decisions based only on what is rational. We need to think of politics and ethics in terms of both rationality and reasonableness. To have both, we need to be careful, and to do that we need to keep humans in the loop. But keeping humans in the loop may not be enough.
This brings Prof. Nurock back to the French Declaration of Human Rights. The Declaration, issued during the French Revolution, advocated that we must think for ourselves about our fate, our politics, and our ethics. From this, Prof. Nurock argued, it is neither useful nor ethical to delegate decisions about our lives or our politics to AI. If we are going to do so, we need to rethink ethics and politics.
So, should we really involve AI in politics and ethics at a decisional level, that is, where AI takes the decision? In thinking about this, Prof. Nurock suggested that we should recognize our responsibility to future generations. She then introduced the caring democracy proposed by the American political theorist Joan Tronto. Care has several elements, but attentiveness is the key. We should be attentive when developing AI in areas that involve politics and ethics. People, AI, and society interact, and our decisions will have a great impact on future society.
Prof. Nurock concluded by saying that while we should not hate or fear AI, we also should not blindly believe in it, or believe that there is a technological, AI-based solution to everything, especially in the ethical and political fields. There is no black-and-white way to say which is better or worse, and no way to say that AI is, in itself, a threat to democracy. However, AI will never replace humans, so it is important to think carefully about how to coexist with it. This was the message of Prof. Nurock’s presentation.
During the question-and-answer session, there was some discussion of bias in AI. Prof. Nurock pointed out that bias is not always a bad thing, and that because different countries have different values and social systems, which biases are accepted will vary. It is therefore necessary to discuss what kinds of bias we should accept as a society. She also said that it is important to respect cultural diversity and not to dismiss another country’s culture and customs as biased simply because they differ from one’s own. On the other hand, she noted that AI should not be used in areas that clearly lead to discrimination.
What are the real AI threats to democracy? A conceptual overview – Prof. Akira Inoue
The next presenter, Associate Professor Akira Inoue of the University of Tokyo, introduced three types of democracy: democratic instrumentalism, epistemic democracy, and relational egalitarian democracy. His aim was to highlight the “true” AI threats to democracy.
Democracy is self-governance. In a democracy, it is important for us to make decisions collectively and to be involved in political decisions. Equal voting guarantees the legitimacy of democracy.
Prof. Inoue made two assumptions in presenting this topic. The first is value pluralism: when AI is used for social decision-making, it cannot be separated from questions about the legitimacy of political procedures under value pluralism. When values are plural, we cannot dispense with debate over how to achieve coordination among them.
Second, he assumed that AI will be used appropriately. Some may disagree with this assumption; as Prof. Nurock had already pointed out, AI processes information with bias. However, Prof. Inoue said that this assumption enables us to identify the real threat of AI to democracy.
Based on these two premises, he discussed three views of democracy. The first, democratic instrumentalism, is the idea that democracy is valuable because it fulfills fundamental human rights. On this instrumental view, what matters is whether the use of AI will better protect people’s rights or undermine them. If the use of AI helps more people to have their rights fulfilled, then AI should be used.
For example, Japan currently faces a problem of erroneous provisions in bills, which the Cabinet Legislation Bureau has not been able to check adequately. From the standpoint of democratic instrumentalism, one could argue that since there is a limit to what humans can check, AI should be used. On the other hand, the instrumental view sees AI as a threat to democracy insofar as data collection by AI violates the basic human right to privacy.
The next is epistemic democracy, whose goal is to track the truth. There are two ways of thinking about this. One is the correctness theory, which holds that democracy is legitimate to the extent that it successfully tracks a procedure-independent truth. The other is the proceduralist theory, the idea that political procedures such as voting allow us to make more correct decisions. From the proceduralist standpoint, it is important to make decisions that are acceptable from all reasonable viewpoints.
According to epistemic democracy, AI may be used to support people by delivering pertinent information to those who lack it. However, there is a famous objection to epistemic democracy by the American philosopher Richard Arneson, who pointed out that we cannot dismiss epistemic arguments for deference to experts without empirical evidence. In other words, if AI is used to gather information favorable to certain people, AI may end up favoring epistocracy (the rule of experts).
Under relational egalitarian democracy, people have equal control over their relationships and do not exercise unequal power over others. On this view, democracy is justified by the fact that no one has greater power in political decisions; any hierarchy must be grounded in the ideal of relational equality. Based on this idea, AI can be useful when AI-based communication technology helps minority groups participate in political activities. It may also make it possible to bring minority-related topics into everyday politics so that they come to the attention of the majority; micro-targeting techniques on social networking services are already used in this way. For relational egalitarian democracy, information asymmetry can be a threat to democracy.
Lastly, Prof. Inoue noted that his talk took a political-philosophy perspective and was a conceptual discussion rather than a treatment of real-world cases. He concluded by stating that it is important to keep thinking, on the basis of these issues, about which AI threats are morally relevant and practically applicable.
In the question-and-answer session, to the question of whether AI is dangerous to democracy, he replied that it cannot be answered in a straightforward manner, because the answer varies with one’s theoretical standpoint and moral practices. In response to the question of how epistemic democracy should be considered in the post-truth era, he said that we have to think about whether democracy itself can be justified before asking whether AI puts democracy in danger.
Comment from Prof. Alexandre Gefen
Before the panel discussion, Prof. Alexandre Gefen of CNRS commented on the two topics presented. Prof. Gefen expressed his concern that while AI can be a tool to support democratization, if used by authoritarian countries, it can lead to human rights abuses and create inhuman governments.
Prof. Gefen noted that research has begun under the banner of “AI for humans”: based on basic principles for AI, it is important to consider which decisions to entrust to machines and to keep humans included in those decisions. To do this, we need to think from an interdisciplinary perspective about how AI can be used to improve decision-making.
Panel Discussion
Differences in Emphasis of the Topic Providers
After a short break, a panel discussion was held, moderated by Prof. Hideaki Shiroyama of the University of Tokyo. Prof. Shiroyama began by asking Prof. Nurock about the connection between the two parts of her talk, noting that the first part dealt with fundamental human rights while the latter part stressed the importance of dialogue and respect for other cultures.
Prof. Nurock explained that she began with the Declaration of Human Rights to emphasize that it is not a checklist: human beings must decide for themselves why they want human rights and what issues lie behind them. We decide our own fate by ourselves, and here equality and non-discrimination are important values; this spirit was at the heart of the French Revolution. As Prof. Inoue pointed out, there are several views of democracy, Prof. Nurock said. Based on this, she responded that what is important is to choose and decide for oneself which rights one considers important, and also to think about future generations.
Prof. Shiroyama then asked Prof. Inoue about the difference in emphasis between the two presenters: Prof. Nurock pointed to the problem of bias embedded in AI, while Prof. Inoue sought to clarify the threat of AI on the assumption that AI works appropriately. From this point of view, he asked Prof. Inoue what he thought about the problem of bias in AI.
Prof. Inoue replied that he deliberately did not address the issue of bias because which biases are practically problematic depends on the perspective from which they are viewed. Bias is not always bad and cannot be evaluated in a unitary way. He explained that his presentation therefore focused on the ideal types of democracy that matter when dealing with the problem of bias in AI.
Prof. Nurock joined the discussion, saying that there are times when bias is preferable for political reasons. For example, giving scholarships to underprivileged children can be said to embody a bias, but such an inclination may be necessary in some cases. The bias of AI, however, is a problem with the input data itself, and she pointed out that the development of AI must be watched carefully because this bias could be amplified.
What can we delegate to machines?
As the next question, Prof. Shiroyama asked what should not be delegated to machines. In response, Prof. Inoue pointed out that theories of public administration and public policy have long discussed what kind of delegation of authority and bureaucratic division of labor is appropriate for a democracy. The same applies to AI, and, as Prof. Gefen pointed out, interdisciplinary research will be important for determining what can be delegated to AI.
Prof. Nurock also noted that there are cultural differences in what can be delegated to AI and pointed out that international discussions are important. She added that we should not transfer the power of affection to AI, referring to a robot called LOVOT that she saw when she visited Japan in February of the previous year. LOVOT itself is cute and children will love it, but it should be noted that even if children love LOVOT, this does not mean that LOVOT loves them. Prof. Nurock said that she is not arguing that LOVOTs should not be created, but that they should be carefully designed and represented; for example, they should not be presented as AI capable of love.
She also pointed out that political and ethical issues cannot be delegated to AI. Humans are not perfect and we make mistakes, but we can argue and we can be compassionate. AI, by contrast, is capable of neither showing compassion to citizens nor debating or participating.
Prof. Gefen said that it is important to know who is involved in these discussions. Today, it is not only large corporations that are involved in the development of AI, but also small and medium-sized companies and start-ups. AI has been democratized and we have the tools and frameworks that make it easy for anyone to create AI that can be used for potentially dangerous purposes. He pointed out that there are issues such as how democracies will govern this, and what international and national laws they will rely on.
Post-Truth and AI
Lastly, Prof. Shiroyama returned to the post-truth question raised in Prof. Inoue’s session: AI can send targeted information to individuals. This leads to a situation where people see only the information they want to see and have less opportunity to hear different opinions, which fragments the community.
Prof. Nurock pointed out that we explore a variety of information when deciding our own fate, but now there is too much information, and post-truth is having a negative impact on democracy. However, she said that correcting wrong conclusions is a scientific way of thinking, and democracy should work like that, too.
Prof. Inoue also quoted the American philosopher Robert Talisse, who argued that we should not discuss political issues on social networking services. On the other hand, he mentioned the possibility of using AI to help people accept different opinions. Prof. Gefen likewise pointed out that AI is being used to detect fake videos even as it is being used to create them, saying that the technology has two sides.
Conclusion
Finally, Prof. Lechevalier of the Fondation France Japon de l’EHESS gave the closing remarks. Looking back on the overall discussion, he pointed out that the important message of the day was that humans can control the impact of technology. He concluded the symposium by saying that he felt from today’s dialogue that it is important to conduct interdisciplinary research to achieve this.
(Written by Arisa Ema)