
Details of Grant 

EPSRC Reference: EP/R021643/2
Title: eNeMILP: Non-Monotonic Incremental Language Processing
Principal Investigator: Vlachos, Dr A
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Amazon
Department: Computer Science and Technology
Organisation: University of Cambridge
Scheme: First Grant - Revised 2009
Starts: 01 October 2018 Ends: 30 June 2019 Value (£): 44,609
EPSRC Research Topic Classifications:
Artificial Intelligence; Computational Linguistics
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:  
Summary on Grant Application Form
Research in natural language processing (NLP) is driving advances in many applications such as search engines and personal digital assistants, e.g. Apple's Siri and Amazon's Alexa. In many NLP tasks the output to be predicted is a graph representing the sentence, e.g. a syntax tree in syntactic parsing or a meaning representation in semantic parsing. In other tasks, such as natural language generation and machine translation, the predicted output is text, i.e. a sequence of words. Both types of tasks have been tackled successfully with incremental modelling approaches, in which prediction is decomposed into a sequence of actions that construct the output.

Despite its success, a fundamental limitation of incremental modelling is that the actions considered typically construct the output monotonically; e.g. in natural language generation each action adds a word to the output but never removes or changes a previously predicted one. Relying exclusively on monotonic actions can decrease accuracy, since the effect of incorrect actions cannot be amended. Furthermore, incorrect actions condition the prediction of subsequent ones, which is likely to result in an error cascade.
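
Purely as an illustration (not part of the grant record), the contrast can be made concrete with a minimal Python sketch of incremental text construction; the action names, the toy sentence and the positional arguments are hypothetical assumptions for the example:

# Illustrative sketch only: hypothetical action types for incremental text generation.
# A monotonic model may only APPEND; a non-monotonic model can also undo earlier choices.

def apply_action(output, action, word=None, position=None):
    """Apply a single construction action to the partial output (a list of words)."""
    if action == "APPEND":      # available to both model classes
        return output + [word]
    if action == "DELETE":      # non-monotonic: remove a previously predicted word
        return output[:position] + output[position + 1:]
    if action == "SUBSTITUTE":  # non-monotonic: revise a previously predicted word
        return output[:position] + [word] + output[position + 1:]
    raise ValueError(f"unknown action: {action}")

# Monotonic construction: an early mistake ("cat") can never be amended.
partial = []
for act, w in [("APPEND", "the"), ("APPEND", "cat"), ("APPEND", "barked")]:
    partial = apply_action(partial, act, word=w)

# Non-monotonic construction: a later SUBSTITUTE action undoes the earlier error.
partial = apply_action(partial, "SUBSTITUTE", word="dog", position=1)
print(partial)  # ['the', 'dog', 'barked']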

We propose an 18-month project to address this limitation and learn non-monotonic incremental language processing models, i.e. incremental models that consider actions that can "undo" the outcome of previously predicted ones. The challenge in incorporating non-monotonic actions is that, unlike their monotonic counterparts, they are not straightforward to infer from the labelled data typically available for training, rendering standard supervised learning approaches inapplicable. To overcome this issue, we will develop novel algorithms under the imitation learning paradigm that learn non-monotonic incremental models without assuming action-level supervision, relying instead on instance-level loss functions and the model's own predictions to learn how to recover from incorrect actions and avoid error cascades.
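
Again purely as an illustration, a self-contained toy sketch of a DAgger-style imitation learning loop shows how action-level labels can be derived from an instance-level loss via a dynamic oracle rather than taken from the data; the copying task, the feature function and the majority-vote "classifier" are assumptions for the example, not the project's actual method:

# Hedged sketch: DAgger-style imitation learning for a non-monotonic generator
# on a toy copying task. Action-level supervision is derived from an
# instance-level loss, not read from gold action sequences.
import random
from collections import Counter, defaultdict

VOCAB = ["the", "dog", "cat", "barked"]
ACTIONS = [("APPEND", w) for w in VOCAB] + [("DELETE", None), ("STOP", None)]

def transition(output, action):
    kind, word = action
    if kind == "APPEND":
        return output + [word]
    if kind == "DELETE" and output:   # non-monotonic: undo the last word
        return output[:-1]
    return output                     # STOP, or DELETE on an empty output

def loss(output, reference):
    """Instance-level loss: mismatched positions plus a length penalty."""
    return sum(p != r for p, r in zip(output, reference)) + abs(len(output) - len(reference))

def oracle(output, reference):
    """Dynamic oracle: pick the action whose one-step outcome minimises the loss."""
    if output == reference:
        return ("STOP", None)
    return min(ACTIONS[:-1], key=lambda a: loss(transition(output, a), reference))

def features(output):
    return tuple(output[-2:])         # last two words as a crude state feature

def make_policy(dataset):
    table = defaultdict(Counter)
    for state, action in dataset:
        table[state][action] += 1
    return lambda output: (table[features(output)].most_common(1) or [(("STOP", None), 0)])[0][0]

def dagger(references, iterations=5, max_steps=8):
    dataset, policy = [], (lambda output: random.choice(ACTIONS))
    for _ in range(iterations):
        for ref in references:
            output = []
            for _ in range(max_steps):
                expert = oracle(output, ref)
                dataset.append((features(output), expert))   # label states the learner visits
                # mix expert and learner so states off the gold path are also visited
                action = expert if random.random() < 0.5 else policy(output)
                if action == ("STOP", None):
                    break
                output = transition(output, action)
        policy = make_policy(dataset)                        # refit on the aggregated data
    return policy

Because states are collected under the learner's own (imperfect) roll-ins, the resulting policy is trained on the situations it will actually encounter, including states reached after mistakes, which is what allows it to learn recovery actions.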

To achieve this goal, the proposal has the following research objectives:

1) To model non-monotonic incremental prediction of structured outputs in a generic way that can be applied to a variety of tasks with natural language text as output.

2) To learn non-monotonic incremental predictors using imitation learning, improving upon the accuracy of monotonic incremental models in terms of both automatic measures such as BLEU and human evaluation.

3) To extend the proposed approach to structured prediction tasks with graphs as output.

4) To release software implementations of the proposed methods to facilitate reproducibility and wider adoption by the research community.

The proposed research focuses on a fundamental limitation of incremental language processing models, which have been applied successfully to a variety of natural language processing tasks, so we anticipate the proposal will have wide academic impact. Furthermore, the tasks we will evaluate it on, namely natural language generation and semantic parsing, are essential components of natural language interfaces and personal digital assistants. Improving these technologies will enhance accessibility to digital information and services. We will demonstrate the benefits of our approach through our collaboration with our project partner Amazon, who are supporting the proposal both with cloud computing credits and by hosting the research associate in order to apply the outcomes of the project to industry-scale datasets.
Key Findings, Potential Use in Non-Academic Contexts, Impacts and Sectors Submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.cam.ac.uk