
Verifiable Autonomy - how can you trust your robots?


If you have a question about this talk, please contact Mohammad Tayarani.

Host: Dr Nick Hawes n.a.hawes@cs.bham.ac.uk

As the use of autonomous systems and robotics spreads, the need for their behaviour to be not only understandable and explainable, but also formally verifiable, is increasing. But how can we be sure what such a system will decide to do, and can we really formally verify its behaviour?

The concept of an “agent” is a widely used abstraction for an autonomous entity. While there are several forms of “agent”, ranging from the straightforward interpretation of an agent as a “process” all the way through to agents that “act like” humans, many practical autonomous systems are now based on some form of hybrid agent architecture. Increasingly, these more sophisticated agents are required within truly autonomous systems that must not only make their own decisions but also be able to explain why they made them.

In this talk, I will examine these “rational” agents, discuss their role at the heart of many autonomous systems, and explain how we can formally verify their behaviours. This then allows us to: be more confident about what our autonomous systems will decide to do; use formal arguments in system certification and safety; and even analyse ethical decisions our systems might make.

This talk is part of the Computer Science Departmental Series.


Talks@bham, University of Birmingham.
talks@bham is based on talks.cam from the University of Cambridge.