“Taking Turing by surprise? Designing ‘digital computers’ for morally-loaded contexts”


If you have a question about this talk, please contact Dr Rowanne Fleck.

There is much to learn from what Turing hastily dismissed as Lady Lovelace’s ‘objection’: ‘digital computers’ can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans lose the capacity to be surprised in that way: the cause might be fear, or it might be the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our practical reasoning: that’s fine. Yet the growing sophistication of computer systems designed to free us from the constraints of normative engagement may well take us past a point of no return: what if, through lack of normative exercise, our ‘moral muscles’ became so atrophied as to leave us unable to question our social practices?

This paper makes two distinct normative claims:

1. Decision-support systems should be designed with a view to regularly jolting us out of our moral torpor.

2. Without the depth of habit to somatically anchor model certainty, a computer’s experience of something new cannot but remain very different from that which in humans gives rise to non-trivial surprises. This asymmetry has important repercussions when it comes to the shape of ethical agency in ‘artificial moral agents’: the worry is not just that they would be likely to leap morally ahead of us, unencumbered by the weight of habits. The main reason to doubt that the moral trajectories of humans and autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry cannot but translate into increasingly different moral outlooks, to the point of (likely) unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.

This talk is part of the Human Computer Interaction seminars series.

