Manifolds in the Age of Big Data and Deep Learning
If you have a question about this talk, please contact Hector Basevi.

Host: Prof Peter Tino
Speaker's homepage: http://personalpages.manchester.ac.uk/staff/hujun.yin/

Recent decades have seen a greatly increased demand for advanced, effective and efficient methods and tools for handling, analyzing and understanding data of ever-growing complexity, dimensionality and volume. Whether in biology, the social sciences, engineering or computer vision, data is being sampled, collected and accumulated on an unprecedented scale. Analyzing huge amounts of high-dimensional data is not a trivial task, and a systematic, automated way of interpreting big data and representing it efficiently has become a major challenge facing almost all fields. Research in this emerging area flourished until recently, when deep networks moved to the frontline.

In the age of big data and deep learning, it has become a common "misperception" that good representations no longer matter, since a deep network will "automatically" acquire whatever a learning task requires and can always beat "handcrafted" features. On closer inspection, however, the manifold concept still plays an important role in the representations learned by deep networks. It provides an underlying framework for studying these data-driven methods and learning techniques: the topological space and the metric relationships among the objects concerned can be regarded as the basis for many learning, dimensionality reduction and recognition tasks.

With big data, dimensionality reduction is often a precursor to any data analytics, learning and induction. How well the resulting feature maps capture the intrinsic properties of the data determines the performance of the deep network, and crude, protracted training does not always guarantee a good representation. For instance, the filters in the layers of a convolutional neural network can be seen as multiple manifolds or feature maps. A stack of restricted Boltzmann machines is similar: these RBMs are Markov random field models of features. A better manifold representation leads to better features and better performance. In another deep-network-related architecture, reservoir computing, the reservoir can be seen as a feature space that is learned or generated to facilitate good representation and hence learning.

Previously developed lines of manifold learning, such as eigendecomposition, multidimensional scaling, retinotopic mapping and canonical correlation analysis, can aid the design and implementation of deep networks; recent investigations have shown comparable performance and promising results. The talk will elaborate with examples and case studies, and the sketches below illustrate two of the ideas mentioned.
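To make the eigendecomposition-based manifold learning mentioned above concrete, here is a minimal sketch of Laplacian eigenmaps, a classic spectral method for nonlinear dimensionality reduction. This is an illustrative example, not code from the talk; the function name, neighbourhood size and toy data are all assumptions.

```python
# A minimal sketch of manifold-based dimensionality reduction via
# Laplacian eigenmaps (illustrative, not from the talk).
import numpy as np

def laplacian_eigenmaps(X, n_components=2, n_neighbors=10):
    """Embed points X (n_samples x n_features) into n_components
    dimensions using eigenvectors of the graph Laplacian."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Symmetric k-nearest-neighbour adjacency with binary weights.
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(sq_dists[i])[1:n_neighbors + 1]  # skip self
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)  # symmetrize
    # Unnormalized graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W
    # Smallest eigenvectors of L, skipping the constant one at eigenvalue 0.
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1:n_components + 1]

# Toy usage: a noisy 3-D "Swiss roll" style cloud embedded in 2-D.
rng = np.random.default_rng(0)
t = 3 * np.pi * (1 + 2 * rng.random(300))
X = np.column_stack([t * np.cos(t), 20 * rng.random(300), t * np.sin(t)])
Y = laplacian_eigenmaps(X, n_components=2)
print(Y.shape)  # (300, 2)
```

The embedding coordinates come from the bottom eigenvectors of the graph Laplacian, which is the same eigendecomposition machinery the abstract cites as a precursor to learning and analytics.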
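The "reservoir as feature space" view can likewise be sketched with a minimal echo state network: a fixed random recurrent map expands the input sequence into a high-dimensional state space, and only a linear readout is trained on those states. The sizes, the spectral-radius heuristic and the toy prediction task below are illustrative assumptions, not details from the talk.

```python
# A minimal echo state network sketch: the reservoir acts as a fixed,
# randomly generated feature space; only the linear readout is trained.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 200

# Fixed random input and recurrent weights, rescaled so the recurrent
# matrix has spectral radius < 1 (a common echo-state heuristic).
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_states(u):
    """Run the input sequence u (T x n_in) through the reservoir."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave via ridge
# regression on the reservoir states.
u = np.sin(0.2 * np.arange(500))[:, None]
S = reservoir_states(u[:-1])          # (499, n_res) feature matrix
target = u[1:, 0]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ target)
pred = S @ W_out
print(f"train MSE: {np.mean((pred - target) ** 2):.2e}")
```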
This talk is part of the Artificial Intelligence and Natural Computation seminars series.