Deep consequences: Why syntax (as we know it) isn’t a thing, and other (shocking?) conclusions from modelling language with neural nets

With the development of ‘deeper’ models of language processing, we can start to infer, in a more empirically sound way, the true principles, factors or structures that underlie language. This is because, unlike many other approaches in NLP, deep language models (loosely) reflect the true situation in which humans learn language. Neural language models learn the meanings of words and phrases concurrently with how best to group and combine those meanings, and they are trained to use this knowledge to do things that human language users do easily. Such models beat established alternatives at various tasks that humans find easy but machines traditionally find hard. In this talk, I present the results of recent experiments using deep neural nets to model language, including the latest results from our recent paper, Learning to Understand Phrases by Embedding the Dictionary, in which we apply a recurrent net with long short-term memory (LSTM) to a general-knowledge question-answering task. I conclude by discussing the potential implications of all of this for both language science and engineering.
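To make the definition-embedding idea concrete, here is a minimal PyTorch sketch of the general approach the abstract describes: an LSTM reads a dictionary definition and is trained so that its output lands near the pretrained embedding of the word being defined. This is an illustrative assumption of the setup, not the paper's exact architecture; all names, dimensions and the cosine loss are hypothetical choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DefinitionEncoder(nn.Module):
    """Encode a dictionary definition (a sequence of token ids) into a
    single vector intended to lie near the embedding of the defined word.
    A sketch of the definition-embedding idea, not the paper's exact model."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.project = nn.Linear(hidden_dim, embed_dim)

    def forward(self, definition_ids):
        # definition_ids: (batch, seq_len) token ids of the definition text
        _, (h_n, _) = self.lstm(self.embed(definition_ids))
        # Use the final hidden state as a summary of the whole definition
        return self.project(h_n[-1])  # (batch, embed_dim)

# Hypothetical toy batch: 2 definitions, padded to length 5.
vocab_size = 10_000
model = DefinitionEncoder(vocab_size)
definitions = torch.randint(0, vocab_size, (2, 5))
# Stand-in for pretrained embeddings of the two defined words.
target_word_vecs = torch.randn(2, 256)

# Training signal: pull each encoded definition toward its target word's
# embedding; a cosine loss is one simple choice in the spirit of this setup.
pred = model(definitions)
loss = (1 - F.cosine_similarity(pred, target_word_vecs)).mean()
loss.backward()
print(float(loss))

At query time, the same encoder can embed a question or crossword-style clue, and the nearest word embeddings to the output vector serve as candidate answers.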

Date:
Speakers:
Felix Hill
Affiliation:
University of Cambridge