
The need for moral competency in autonomous agent architectures


  • Speaker: Dr. Matthias Scheutz, Department of Computer Science, School of Engineering, Tufts University
  • Time: Wednesday 18 September 2013, 15:00-16:00
  • Venue: Mechanical Engineering, G36

If you have a question about this talk, please contact Leandro Minku.

Host: Prof. Aaron Sloman. Note: unusual time and place.

Keywords: moral decision-making, autonomous agents, moral dilemma, moral competence for agent architectures

Abstract: Autonomous robots will soon be deployed in our societies across many application domains, ranging from assistive robots in healthcare settings to combat robots on the battlefield, and all of these robots will have to be capable of making decisions autonomously. We argue that it is imperative to start developing moral capabilities that are deeply integrated into the control architectures of such autonomous agents. For, as we will show, any ordinary decision-making situation from daily life can be turned into a morally charged one, in which the artificial agent faces a moral dilemma where any choice of action (or inaction) can potentially cause harm to other agents.


This talk is part of the Artificial Intelligence and Natural Computation seminars series.




Talks@bham, University of Birmingham. Talks@bham is based on talks.cam from the University of Cambridge.