Call for Abstracts

Artificial Intelligence and Moral Learning is a one-day symposium at the AISB-20 Annual Convention 2020, which will be held at St Mary’s University, Twickenham, London, 6–9 April 2020. The convention is organised by the Society for the Study of Artificial Intelligence and Simulation of Behaviour. We are accepting submissions by extended abstract (approximately 1000 words), due Friday, January 10th. Please submit abstracts via EasyChair: https://easychair.org/cfp/aiml2020

Abstract

The more commonplace AIs become in human life, the more important it will be that human moral intelligence is intelligible to AIs, and vice versa. AIs will need to be “conversant” in ethics of the human sort, with all of its complexity, and make moral decisions in a way that is at least compatible with human moral decision making. Thus, AIs and humans will need to share at least some of the same “moral world.” Given the complexity of human moral life, how might this be achieved? One approach would be to seek a specific account of the many rules and heuristics humans use in making moral decisions. However, it isn’t clear that this can be done, and it isn’t clear what rules would actually produce the outcomes we’re looking for. One of the earliest insights in Western moral philosophy, dating back to Aristotle, is that ethics may be uncodifiable: there may be no set of unyielding or exceptionless rules that captures what it takes to be good. If this is correct, perhaps an alternative for developing AIs that humans can recognize as ethical agents, rather than as mere rule-followers, is to attempt to cultivate ethics in AIs in a similar way to the way we do in humans: through a process of apprentice-learning and habituation. This symposium seeks to evaluate and compare possible methods for AI moral learning (including AI learning methods which have not yet been applied to the case of moral learning).

Aim

Considering the various ways of facilitating moral learning in AI will require the methodological and theoretical perspectives of computer scientists, philosophers, and cognitive scientists. By bringing together this diversity of disciplinary approaches, the symposium will be an opportunity to examine, in a holistic and interdisciplinary way, how AI technologies can be developed responsibly.

We aim to take an interdisciplinary approach to artificial intelligence, moral learning, and moral decision making. We welcome and encourage theoretical and methodological perspectives from analytic philosophers, phenomenologists, computer scientists, cognitive scientists, psychologists, and others who study this topic.

We welcome submissions on the topic of artificial intelligence and moral learning, broadly construed. These may include, but are certainly not limited to:

  • How can we translate what we know about human moral learning into a machine learning problem?
  • What are some principles that can ensure that AI systems are accountable to people?
  • How can we make AI systems sufficiently morally generalizable (i.e., able to behave robustly in novel ethical situations)?
    • Specifically, given our current awareness of adversarial inputs, what directions can we pursue to ensure the reliability of moral AI systems in adversarial situations?
  • How can current efforts to make machine learning systems’ behavior intelligible to humans (e.g., visualization of image-recognition neural network layers, saliency maps) be extended to the moral landscape?
  • In different moral frameworks, there are different conceptions of moral agency. What directions does interpreting AI systems as moral agents suggest for the development of moral learning?
  • Reinforcement learning seems like a promising avenue for moral learning. What bottlenecks exist in this approach, and how can they be overcome?
  • Can AI systems develop character virtues? What would that look like?
  • How can we develop systems that can explore their own space of uncertainty and generate “questions” that can be answered by a human “moral trainer”?
  • Is it possible to create AI systems that are morally superior to humans, and what would that mean?

For further questions or details, please contact: aiml.symposium@gmail.com
Nick Smith, Darby Vickers, and Rebecca Korf (Organizing Committee)

Program Committee:

  • Nathan Fulton – PhD, Department of Philosophy (UC Irvine)
  • Jeffrey Helmreich – Assistant Professor, Department of Philosophy (UC Irvine)
  • David Woodruff Smith – Professor, Department of Philosophy (UC Irvine)
  • Emily Sumner – PhD Candidate, Cognitive Sciences (UC Irvine)
  • Kino Zhao – PhD Candidate, Logic and Philosophy of Science (UC Irvine)