Deep Learning has created a small revolution in the natural language processing (NLP) community. These methods represent words not as atomic units but as points in a high-dimensional space, allowing a much more fine-grained treatment. Deep Learning models learn these representations using non-linear functions, capturing some of the human intuition that lies behind natural language.
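As a minimal illustration of "words as points in a space", here is a sketch with toy hand-written vectors (the numbers and the 4-dimensional size are made up for this example; real embeddings are learned from data and have hundreds of dimensions). Cosine similarity then gives the fine-grained notion of closeness between words:

```python
import math

# Hypothetical 4-dimensional word vectors, for illustration only.
vectors = {
    "king":  [0.8, 0.6, 0.1, 0.2],
    "queen": [0.7, 0.7, 0.1, 0.3],
    "apple": [0.1, 0.0, 0.9, 0.8],
}

def cosine(u, v):
    """Cosine similarity: higher means the two words are closer in the space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(vectors["king"], vectors["queen"]))  # related words score high
print(cosine(vectors["king"], vectors["apple"]))  # unrelated words score low
```

With atomic word identities, "king" and "queen" would be as unrelated as "king" and "apple"; in the vector space their similarity is graded.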
Some examples of how we use these methods in our research:
Language Modeling: The ability to anticipate likely continuations of a sequence of words is central to many areas of NLP (speech recognition, machine translation, natural language generation, image captioning, dialogue). We investigate the use of Recurrent Neural Networks (RNNs) and related models (LSTMs, memory networks, etc.) for this task, and how to modify them to exploit long-distance dependencies while preserving learnability, and to focus attention on different aspects of the input depending on the current needs of the generation process.
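To make the language-modeling task itself concrete, here is a deliberately simple count-based bigram sketch over a toy corpus (the corpus and the model are illustrative only; the RNN/LSTM models mentioned above replace these local counts with learned representations that can exploit long-distance context):

```python
from collections import Counter, defaultdict

# Toy corpus, made up for this example.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: a bigram model of continuations.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continuation(word):
    """Most likely next word after `word` under the bigram counts."""
    return bigrams[word].most_common(1)[0][0]

print(continuation("sat"))  # "on" in this toy corpus
```

A bigram model can only see one word back; the appeal of RNN-family models is precisely that they condition the prediction on a representation of the whole preceding sequence.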
Applications of vector space representations: By learning vector space representations for intermediate stages of the computation, Deep Learning is able to learn very complex mappings from input to output. We have focused on several such mappings, from dialogue tracking and question answering to textual entailment, multi-word expression detection and personality profiling, to cite only a few examples.
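The simplest instance of such an input-to-output mapping is a fixed-size intermediate representation (here, the average of word vectors) fed to a linear scorer. Everything below is a hypothetical sketch (the vectors, weights, and the sentiment framing are invented for illustration); real systems learn both the representations and the mapping jointly:

```python
# Hypothetical 2-dimensional word vectors, for illustration only.
vectors = {
    "great": [0.9, 0.1],
    "movie": [0.5, 0.5],
    "awful": [0.1, 0.9],
}

def sentence_vector(tokens):
    """Mean of the word vectors: a fixed-size intermediate representation."""
    dims = len(next(iter(vectors.values())))
    return [sum(vectors[t][d] for t in tokens) / len(tokens) for d in range(dims)]

# Hypothetical linear weights mapping the representation to an output score
# (positive score = positive sentiment in this toy setup).
weights, bias = [1.0, -1.0], 0.0

def score(tokens):
    v = sentence_vector(tokens)
    return sum(w * x for w, x in zip(weights, v)) + bias

print(score(["great", "movie"]))  # positive
print(score(["awful", "movie"]))  # negative
```

The same pattern, with deeper non-linear layers in place of the average and the linear scorer, underlies the tasks listed above.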