Neural networks are experiencing a renaissance, thanks to a new mathematical formulation, known as restricted Boltzmann machines, together with the availability of powerful GPUs and increased processing power. Unlike past neural networks, these new ones can have many layers and are therefore called 'deep neural networks'; because they are a machine-learning technique, the technology is also known as 'deep learning.'
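To make the formulation concrete, the following is a minimal sketch of a binary restricted Boltzmann machine trained with one-step contrastive divergence (CD-1), the layer-wise learning rule commonly associated with this line of work. All names (`RBM`, `cd1`, the layer sizes, and the learning rate) are illustrative choices, not taken from the talk:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    # Draw a binary unit: 1 with probability p, else 0.
    return 1.0 if random.random() < p else 0.0

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        # Small random weights between visible and hidden units; zero biases.
        self.W = [[random.gauss(0.0, 0.01) for _ in range(n_hidden)]
                  for _ in range(n_visible)]
        self.b_v = [0.0] * n_visible
        self.b_h = [0.0] * n_hidden
        self.lr = lr

    def hidden_probs(self, v):
        return [sigmoid(self.b_h[j] +
                        sum(v[i] * self.W[i][j] for i in range(len(v))))
                for j in range(len(self.b_h))]

    def visible_probs(self, h):
        return [sigmoid(self.b_v[i] +
                        sum(h[j] * self.W[i][j] for j in range(len(h))))
                for i in range(len(self.b_v))]

    def cd1(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = [sample(p) for p in ph0]
        # Negative phase: one Gibbs step back to a reconstruction.
        pv1 = self.visible_probs(h0)
        v1 = [sample(p) for p in pv1]
        ph1 = self.hidden_probs(v1)
        # Move weights toward the data statistics, away from the model's.
        for i in range(len(v0)):
            for j in range(len(ph0)):
                self.W[i][j] += self.lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
            self.b_v[i] += self.lr * (v0[i] - v1[i])
        for j in range(len(ph0)):
            self.b_h[j] += self.lr * (ph0[j] - ph1[j])
        # Squared reconstruction error, a common training diagnostic.
        return sum((a - b) ** 2 for a, b in zip(v0, pv1))

random.seed(0)
rbm = RBM(n_visible=6, n_hidden=3)
pattern = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
for _ in range(200):
    err = rbm.cd1(pattern)
```

Stacking several such layers, each one trained on the hidden activations of the layer below, is what yields the 'deep' networks described above.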
In this talk, I describe this new formulation and its signal-processing applications in fields such as speech recognition and image recognition. In all these applications, deep neural networks have yielded significant reductions in error rate. This success has sparked great interest among computer scientists, who are also eager to learn from neuroscientists how neurons in the brain work.