
Better learning algorithms for neural networks


If you have a question about this talk, please contact Leandro Minku.

Neural networks that contain many layers of non-linear processing units are extremely powerful computational devices, but they are also very difficult to train. In the 1980s there was a lot of excitement about a new way of training them that involved back-propagating error derivatives through the layers, but this learning algorithm never worked very well for deep networks that have many layers between the input and the output. I will describe a way of using unsupervised learning to create multiple layers of feature detectors, and I will show that this allows back-propagation to beat the current state of the art for recognizing shapes and phonemes. I will then describe a new way of training recurrent neural nets and show that it beats the best other single method at modeling strings of characters.
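The idea of pre-training feature detectors with unsupervised learning before applying back-propagation can be sketched as follows. This is a minimal illustrative sketch, not the speaker's actual method (which uses restricted Boltzmann machines and deep belief nets): here each layer is pre-trained as a small autoencoder on the previous layer's outputs, and the whole stack is then fine-tuned with supervised back-propagation. All data, layer sizes, and learning rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(X, n_hidden, lr=0.5, epochs=200):
    """Train a one-hidden-layer autoencoder on X; return the encoder weights."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights (kept)
    V = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights (discarded)
    for _ in range(epochs):
        H = sigmoid(X @ W)                     # hidden feature code
        R = sigmoid(H @ V)                     # reconstruction of the input
        dR = (R - X) * R * (1 - R)             # squared-error gradient at output
        dH = (dR @ V.T) * H * (1 - H)          # back-propagated to hidden layer
        V -= lr * H.T @ dR / len(X)
        W -= lr * X.T @ dH / len(X)
    return W

# Toy data: random 4-bit patterns; label = XOR of the first two bits.
X = rng.integers(0, 2, (200, 4)).astype(float)
y = (X[:, 0] != X[:, 1]).astype(float).reshape(-1, 1)

# Greedy layer-wise unsupervised pre-training of two feature layers.
W1 = pretrain_layer(X, 8)
W2 = pretrain_layer(sigmoid(X @ W1), 8)

# Supervised fine-tuning of the whole stack with back-propagation.
W3 = rng.normal(0, 0.1, (8, 1))
for _ in range(2000):
    H1 = sigmoid(X @ W1)
    H2 = sigmoid(H1 @ W2)
    P = sigmoid(H2 @ W3)                       # predicted labels
    d3 = (P - y) * P * (1 - P)
    d2 = (d3 @ W3.T) * H2 * (1 - H2)
    d1 = (d2 @ W2.T) * H1 * (1 - H1)
    W3 -= 0.5 * H2.T @ d3 / len(X)
    W2 -= 0.5 * H1.T @ d2 / len(X)
    W1 -= 0.5 * X.T @ d1 / len(X)

accuracy = float(np.mean((P > 0.5) == y))
```

The key design point the abstract describes: the pre-training phase needs no labels, so the feature-detecting layers start from weights shaped by the input distribution rather than from random values, which is what lets back-propagation succeed on deeper stacks.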

About the speaker

Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California, San Diego, and spent five years as a faculty member in the Computer Science department at Carnegie Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years, from 1998 until 2001, setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto, where he is a University Professor. He is the director of the program on “Neural Computation and Adaptive Perception”, which is funded by the Canadian Institute for Advanced Research.

He is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He received an honorary doctorate from the University of Edinburgh in 2001. He was awarded the first David E. Rumelhart prize (2001), the IJCAI award for research excellence (2005), the IEEE Neural Network Pioneer award (1998), the ITAC/NSERC award for contributions to information technology (1992), and the NSERC Herzberg Medal, which is Canada’s top award in science and engineering.

He investigates ways of using neural networks for learning, memory, perception and symbol processing and has over 200 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. His current main interest is in unsupervised learning procedures for multi-layer neural networks with rich sensory input.

This talk is part of the Artificial Intelligence and Natural Computation seminars series.



Talks@bham, University of Birmingham. Talks@bham is based on a system from the University of Cambridge.