
## The unreasonable effectiveness of small neural ensembles in high-dimensional brain

- Alexander Gorban, University of Leicester
- Thursday 25 October 2018, 13:00-14:00
- Watson LTB.
If you have a question about this talk, please contact Fabian Spill.

Complexity is an indisputable, well-known, and broadly accepted feature of the brain. Despite the apparently obvious and widespread consensus on the brain's complexity, sprouts of the single-neuron revolution emerged in neuroscience in the 1970s. They brought many unexpected discoveries, including grandmother (or concept) cells and sparse coding of information in the brain.

In machine learning, the famous curse of dimensionality long seemed an unsolvable problem. Nevertheless, the idea of the blessing of dimensionality has gradually become more popular. Ensembles of non-interacting or weakly interacting simple units prove to be an effective tool for solving essentially multidimensional and apparently incomprehensible problems. This approach is especially useful for one-shot (non-iterative) correction of errors in large legacy artificial intelligence systems, and when complete re-training is impossible or too expensive.

These simplicity revolutions in the era of complexity have deep fundamental reasons, grounded in the geometry of multidimensional data spaces. To explore and understand these reasons we revisit the background ideas of statistical physics, which in the course of the 20th century were developed into concentration of measure theory. The Gibbs equivalence of ensembles, with further generalisations, shows that data in high-dimensional spaces are concentrated near shells of smaller dimension. New stochastic separation theorems reveal the fine structure of the data clouds. We review and analyse biological, physical, and mathematical problems at the core of the fundamental question: how can a high-dimensional brain organise reliable and fast learning in a high-dimensional world of data using simple tools? To meet this challenge, we outline and set up a framework based on the statistical physics of data.
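The concentration near thin shells mentioned above is easy to observe numerically. The following sketch (not from the talk; the dimensions and sample size are illustrative choices) draws standard Gaussian samples in dimension d and measures the relative spread of their norms, which shrinks as d grows — the data cloud collapses onto a sphere of radius about sqrt(d):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_spread(d, n=5000):
    """Std of the Euclidean norm divided by its mean,
    for n standard Gaussian samples in R^d."""
    norms = np.linalg.norm(rng.standard_normal((n, d)), axis=1)
    return norms.std() / norms.mean()

# As d grows, all samples end up at nearly the same distance from
# the origin: the cloud concentrates near a thin spherical shell.
for d in (2, 100, 10_000):
    print(f"d={d:6d}  relative spread of the norm = {relative_spread(d):.4f}")
```

For the Gaussian the relative spread decays like 1/sqrt(2d), so at d = 10,000 the norms differ from their mean by well under one percent.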
Two critical applications are reviewed to exemplify the approach: one-shot correction of errors in intellectual systems, and the emergence of static and associative memories in ensembles of single neurons. Error correctors should be simple; they should not damage the existing skills of the system; and they should allow fast, non-iterative learning and correction of new mistakes without destroying the previous fixes. All these demands can be satisfied by new tools based on concentration of measure phenomena and stochastic separation theory.

In brief, the stochastic separation theorems state that, for essentially high-dimensional distributions, a random point can be separated from a random set by Fisher's linear discriminant with high probability. The number of points in this set can grow exponentially with dimension. Different versions of the stochastic separation theorems use different definitions of ‘random set’ and ‘essentially high-dimensional distribution’, but the essence of these definitions is simple: sets of very small (vanishing) volume should not have high probability, even in large dimension.

The talk is based on joint work with I.Y. Tyukin and V.A. Makarov: https://arxiv.org/abs/1809.07656

https://www2.le.ac.uk/departments/mathematics/extranet/staff-material/staff-profiles/ag153

This talk is part of the Applied Mathematics Seminar Series.
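The separation statement above can be tested directly. In the minimal sketch below (an illustration, not the talk's construction; the dimensions, set size, and threshold alpha = 0.5 are assumed parameters), a random point x is declared separable from a random Gaussian set if the simple Fisher-type linear test "accept y when ⟨x, y⟩ > alpha·⟨x, x⟩" fires for no point but x itself. The success rate climbs toward 1 as the dimension grows, even though the set holds 10,000 points:

```python
import numpy as np

rng = np.random.default_rng(1)

def separable_fraction(d, n=10_000, alpha=0.5, trials=100):
    """Fraction of random points that Fisher's linear discriminant
    separates from the remaining n-1 points of a Gaussian sample in R^d."""
    data = rng.standard_normal((n, d))
    ok = 0
    for i in range(trials):
        x = data[i]
        scores = data @ x              # <x, y> for every y in the set
        thresh = alpha * (x @ x)       # alpha * <x, x>
        # Separable if the only point passing the test is x itself.
        if np.sum(scores > thresh) == 1:
            ok += 1
    return ok / trials

for d in (10, 50, 200):
    print(f"d={d:4d}  separable fraction = {separable_fraction(d):.2f}")
```

At d = 10 almost no point is separable from 10,000 others; at d = 200 almost every point is, which is the "blessing of dimensionality" behind one-shot error correctors: a single linear functional suffices to single out the mistaken input.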