
Approximating Labelled Markov Processes by Averaging


  • Prakash Panangaden, University of Oxford, on leave from McGill University (Montreal)
  • Friday 04 March 2011, 16:00-17:00
  • UG40, Computer Science

If you have a question about this talk, please contact Paul Levy.

Markov processes with continuous state spaces, continuous time evolution, or both arise naturally in many areas of computer science: robotics, performance evaluation, and modelling and simulation, for example. For discrete systems there was a pioneering treatment of probabilistic bisimulation and logical characterization due to Larsen and Skou. The continuous case, however, was neglected for a time. For a little over a decade there has been significant activity among computer scientists, as it came to be realized that ideas from process algebra, like bisimulation and the existence of a modal characterization, would be useful for the study of such systems. There is also a vast literature on timed systems, hybrid systems, robotics and control theory that deals with such systems.

In a paper from 2000 a theory of approximation for labelled Markov processes (LMPs) was initiated; it was refined and extended a couple of years later. Finding finite approximations is vital to getting a computational handle on such systems. These techniques were adapted to Markov decision processes (MDPs) and applied to find good estimates of value functions. The previous work was characterized by rather intricate proofs that did not seem to follow from basic ideas in any straightforward way. For example, the logical characterization of (probabilistic) bisimulation, first proved in 1998, requires subtle properties of analytic spaces and rather painful constructions. Proofs of basic results in approximation theory also seemed to be more difficult than they should be.

In recent work we take an entirely new approach, in some ways "dual" to the normal view of probabilistic transition systems. We think of a Markov process as a transformer of functions defined on it, rather than as a transformer of states. Thus, instead of working directly with a Markov kernel that takes a state to a probability distribution over the state space, we think of a Markov process as transforming a function into a new function over the state space. This is the probabilistic analogue of working with predicate transformers, a point of view advocated by Kozen in a path-breaking early paper on probabilistic systems and logic.
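On a finite state space the two views are easy to compare concretely. The following is a minimal illustrative sketch (not from the talk, which concerns general measurable spaces): a Markov kernel is a stochastic matrix, and the dual, function-transformer view sends a function f on states to the function whose value at s is the expected value of f after one step from s.

```python
import numpy as np

# A Markov kernel on a three-state space: row s is the distribution
# over next states from state s (each row sums to 1).
P = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],
])

def transform(P, f):
    """The function-transformer view: (Pf)(s) = sum_t P(s, t) * f(t),
    i.e. the expected value of f one step after s."""
    return P @ f

f = np.array([0.0, 0.0, 1.0])   # indicator function of state 2
print(transform(P, f))          # probability of reaching state 2 in one step
```

Applied to an indicator function, the transformer recovers one-step transition probabilities; iterating it gives multi-step expectations, which is the sense in which the kernel acts on "predicates" rather than on states.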

This new way of looking at things leads to three new results:

1. It is possible to define bisimulation on general spaces, not just on analytic spaces, and to show that it is an equivalence relation with easy categorical constructions. The logical characterization of bisimulation can also be done generally, with no complicated measure-theoretic arguments.

2. A new and flexible approach to approximation based on averaging can be given. This vastly generalizes and streamlines the idea of using conditional expectations to compute approximations.

3. It is possible to show that there is a minimal process, bisimulation-equivalent to the original, obtained as the limit of finite approximants.
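The averaging idea in point 2 can be illustrated on a finite space. The sketch below is a hypothetical, simplified version of the construction: given a probability measure mu and a finite partition of the states, the conditional expectation E[f | partition] replaces f by its mu-weighted average on each block, yielding a coarser, finitely-described approximant of f.

```python
import numpy as np

def average(f, mu, blocks):
    """Conditional expectation of f given a finite partition:
    on each block, replace f by its mu-weighted average."""
    g = np.empty_like(f, dtype=float)
    for block in blocks:
        w = mu[block]                          # measure restricted to the block
        g[block] = np.dot(w, f[block]) / w.sum()  # weighted average on the block
    return g

mu = np.array([0.25, 0.25, 0.5])       # a probability measure on three states
f  = np.array([1.0, 3.0, 5.0])         # a function on states
print(average(f, mu, [[0, 1], [2]]))   # -> [2.0, 2.0, 5.0]
```

Refining the partition gives finer approximants, which is the finite-space shadow of the limit construction in point 3.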

There are a number of interesting categorical aspects to this work which I will emphasize in the talk. This is joint work with Philippe Chaput, and with Vincent Danos and Gordon Plotkin.

This talk is part of the Theoretical computer science seminar series.


