International Symposium: AI Governance and Security

  • Date:
2019.12.06 (Fri.)
  • Time:
    14:00-16:00
  • Venue:
    Daiwa House Ishibashi Nobuo Memorial Hall, Daiwa Ubiquitous Computing Research Building, Hongo Campus, The University of Tokyo
  • Hosted by:

    Institute for Future Initiatives, The University of Tokyo

  • Co-hosted by:

    Science, Technology, and Innovation Governance (STIG), UTokyo
    Next Generation Artificial Intelligence Research Center, UTokyo

  • Language:

    English and Japanese, with simultaneous interpretation

  • Capacity:

    100

Overview

Artificial intelligence (AI) and related technology networks are now used for decision support in critical domains such as medical care, finance, and war. What are the responsibilities and roles of humans and machines, and what should they be?
As these technologies expand globally, discussions on AI governance have begun at the international level, for example at the OECD. In Japan, the Cabinet Office has announced the "Social Principles of Human-Centric AI," but how can the Japanese government, industry, and universities contribute to these international discussions?
In this event, we invite Professor Toni Erskine (Australian National University and University of Cambridge), who has researched moral agency and responsibility in world politics, and Professor Taylor Reynolds (MIT), an expert member of the AI Group of Experts (AIGO) at the Organisation for Economic Co-operation and Development (OECD), to discuss the issues and prospects of AI governance and security with professors of the University of Tokyo.

Program
  • 14:00
    Opening remarks

    Prof. Hideaki Shiroyama, Institute for Future Initiatives and GraSPP, UTokyo

  • 14:05
    Presentation 1: Professor Toni Erskine, Australian National University

    “Responsibility Lost? AI, Restraint, and the Consequences of Outsourcing Decision-Making in War”

  • 14:35
    Presentation 2: Professor Taylor Reynolds, Technology Policy Director of the MIT Internet Policy Research Initiative (IPRI)

    “AI Security and Governance – Challenges for countries at all stages of development”

  • 15:05
    Panel discussion and Q&A

    - Panelists: Prof. Toni Erskine, Prof. Taylor Reynolds, Prof. Yasuo Kuniyoshi, Prof. Hideaki Shiroyama
    - Moderator: Arisa Ema, Assistant Professor, Institute for Future Initiatives, UTokyo

  • 15:50
    Closing remarks

    Professor Yasuo Kuniyoshi, Next Generation Artificial Intelligence Research Center and Graduate School of Information Science and Technology, UTokyo

Abstract

Prof. Toni Erskine


The prospect of weapons that can make decisions about whom to target and kill without human intervention – so-called ‘lethal autonomous weapons’, or, more colloquially, ‘killer robots’ – has engendered a huge amount of attention among both policy-makers and scholars. A frequently articulated fear is that advanced AI systems of the future will have enormous potential for catastrophic harm and failure, and that this will be beyond the power of states, or humanity generally, to control or correct. Proposed solutions include a moratorium on the development of fully autonomous weapons and pre-emptive international bans on their use. Debates over what should be done about this possible future threat are contentious, wide-ranging, and undoubtedly important. Yet, unfortunately, this attention to ‘killer robots’ has eclipsed a more immediate risk that accompanies the current use of AI in organised violence – a risk that has profound ethical, political, and even geopolitical implications.
The concern motivating this talk is that existing AI-driven military tools – specifically, automated weapons that retain ‘humans in the loop’ and decision-support systems that assist targeting – risk eroding hard-won and internationally endorsed principles of forbearance in the use of force, because they change how we deliberate, how we act, and how we view ourselves as responsible agents. We need to ask how relying on AI in war affects how we – individually and collectively – understand and discharge responsibilities of restraint. This problem is not yet being discussed in academic, military, public policy, or media circles – and it should be.

Prof. Taylor Reynolds


The global economic system is one of sharp contrasts. We face significant economic risks such as rising poverty rates, trade frictions, and slowing global investment that threaten the living standards of billions of people. At the same time, technological leaps in fields such as machine learning (AI) and robotics deliver new and improved services, higher efficiency and lower costs to consumers. But these technologies also usher in disruptive economic and social change, as well as new security and privacy concerns. Amid these counter forces, policy makers ultimately want to deliver better economic growth and social benefit for more people.

That leaves policy makers questioning how we, as a society, can harness these benefits of computing technologies in a way that supports economic growth for all and improves social well-being. Are policy makers in both developed and developing countries sufficiently prepared to address new regulatory issues in an agile way? How can technologists be convinced to consider longer-term consequences as they build these new systems? How can governments, technologists and academics work together to achieve these goals?

Bio

Prof. Toni Erskine


Toni Erskine is Professor of International Politics and Director of the Coral Bell School of Asia Pacific Affairs at the Australian National University (ANU). She is also Editor of International Theory: A Journal of International Politics, Law, and Philosophy, Associate Fellow of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and a Chief Investigator on the ‘Humanising Machine Intelligence’ Grand Challenge research project at ANU. She currently serves on the advisory group for the Google/United Nations Economic and Social Commission for Asia and the Pacific (ESCAP) ‘AI for the Social Good’ Research Network, administered by the Association for Pacific Rim Universities. Her research interests include the impact of artificial intelligence on responsibilities of restraint in war; the moral agency of formal organisations (such as states, transnational corporations, and intergovernmental organisations) in international politics; informal associations and imperatives for joint action in the context of global crises; cosmopolitan theories and their critics; the ethics of war; and, the responsibility to protect populations from mass atrocity crimes (R2P). She received her PhD from the University of Cambridge, where she was Cambridge Commonwealth Trust Fellow, and then British Academy Postdoctoral Fellow. Before coming to ANU, she held a Personal Chair in International Politics at the University of Wales, Aberystwyth, and was Professor of International Politics, Director of Research, and Associate Director (Politics and Ethics) of the Australian Centre for Cyber Security at UNSW Canberra (at the Australian Defence Force Academy).

Prof. Taylor Reynolds


Taylor Reynolds is the technology policy director of MIT’s Internet Policy Research Initiative. In this role, he leads the development of this interdisciplinary field of research to help policymakers address cybersecurity and Internet public policy challenges. He is responsible for building the community of researchers and students from departments and research labs across MIT, executing the strategic plan, and overseeing the day-to-day operations of the Initiative. Taylor’s current research focuses on three areas: leveraging cryptographic tools for measuring cyber risk, encryption policy, and international AI policy. Taylor was previously a senior economist at the OECD and led the organization’s Information Economy Unit covering policy issues such as the role of information and communication technologies in the economy, digital content, the economic impacts of the Internet and green ICTs. His previous work at the OECD concentrated on telecommunication and broadcast markets with a particular focus on broadband. Before joining the OECD, Taylor worked at the International Telecommunication Union, the World Bank and the National Telecommunications and Information Administration (United States). Taylor has an MBA from MIT and a Ph.D. in Economics from American University in Washington, DC.