Title | : | On Scalable Deep Natural Language Understanding |
Speaker | : | Sarath Chandar (University of Montreal, Canada) |
Details | : | Thu, 7 Jan, 2016 10:00 AM @ BSB 361 |
Abstract | : | Recent advances in Deep Learning have shown significant improvements in Natural Language Processing compared to traditional NLP methods. Word embeddings have become a common tool in almost every NLP task. I will start the talk with a roadmap for Deep Natural Language Understanding. We will then observe that even though Neural Network based solutions perform significantly better than traditional methods, they are not as scalable. Specifically, Neural Network based solutions require more training data, and their training and inference costs scale linearly with the vocabulary size. In the second part of the talk, I will try to address both of these problems. We will see a method to generate synthetic question/answer pairs from facts in a knowledge graph, which can be used to produce large-scale Q/A data for training complex question answering systems. We will also see a Maximum Inner Product Search (MIPS) based solution for scaling Neural Networks to large vocabularies. Both can be considered significant steps towards the ultimate goal of Natural Language Understanding. |
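To make the first idea concrete, here is a minimal template-based sketch of turning knowledge-graph facts into Q/A pairs. The facts, relation templates, and the `generate_qa` helper are illustrative assumptions for this announcement, not the speaker's actual pipeline:

```python
# Hedged sketch: generate synthetic Q/A pairs from (subject, relation, object)
# knowledge-graph facts via hand-written question templates. Facts, templates,
# and generate_qa() are hypothetical illustrations, not the talk's method.
facts = [
    ("Montreal", "located_in", "Canada"),
    ("Yoshua Bengio", "works_at", "University of Montreal"),
]

# One or more natural-language question templates per relation; the object
# of the triple serves as the answer.
templates = {
    "located_in": ["Where is {s} located?", "In which country is {s}?"],
    "works_at": ["Where does {s} work?", "Which institution employs {s}?"],
}

def generate_qa(facts, templates):
    """Expand every fact into one Q/A pair per matching template."""
    for s, r, o in facts:
        for tmpl in templates.get(r, []):
            yield tmpl.format(s=s), o

for question, answer in generate_qa(facts, templates):
    print(f"Q: {question}\nA: {answer}\n")
```

Applied to a large knowledge graph, a scheme like this can yield Q/A data at a scale that manual annotation cannot match.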
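For the second idea, the sketch below shows, under stated assumptions, how MIPS can cut the cost of a large-vocabulary output layer: instead of scoring all V words (the full softmax), cluster the output embeddings offline and, at query time, score only the words in the few clusters whose centroids best match the hidden state. The sizes (`V`, `d`, `n_clusters`), the simplified spherical k-means, and the `approx_top_words` helper are illustrative, not the speaker's exact algorithm:

```python
# Hedged sketch: clustering-based approximate Maximum Inner Product Search
# over a large output-embedding matrix. All sizes and helpers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
V, d, n_clusters, top_k = 20_000, 128, 256, 10

W = rng.standard_normal((V, d)).astype(np.float32)  # output word embeddings

def spherical_kmeans(X, k, iters=10):
    """Simplified spherical k-means: assign by inner product, keep unit-norm
    centroids. A stand-in for the clustering step of a MIPS index."""
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = np.argmax(X @ centroids.T, axis=1)
        for c in range(k):
            members = X[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
        centroids /= np.maximum(
            np.linalg.norm(centroids, axis=1, keepdims=True), 1e-8)
    assign = np.argmax(X @ centroids.T, axis=1)
    return centroids, assign

# Offline: partition the vocabulary once.
centroids, assign = spherical_kmeans(W, n_clusters)
buckets = [np.flatnonzero(assign == c) for c in range(n_clusters)]

def approx_top_words(h, n_probe=8):
    """Score only words in the n_probe most promising clusters: roughly
    O(n_probe * V / n_clusters) inner products instead of O(V)."""
    best_clusters = np.argsort(centroids @ h)[-n_probe:]
    candidates = np.concatenate([buckets[c] for c in best_clusters])
    scores = W[candidates] @ h
    order = np.argsort(scores)[-top_k:][::-1]
    return candidates[order], scores[order]

h = rng.standard_normal(d).astype(np.float32)  # a hidden state from the model
words, _ = approx_top_words(h)
exact = np.argsort(W @ h)[-top_k:][::-1]       # full-softmax reference
print("approx:", words)
print("exact: ", exact)
```

The trade-off is exactness for speed: probing more clusters (`n_probe`) recovers more of the true top-scoring words at proportionally higher cost.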