
Details of Grant 

EPSRC Reference: EP/R033722/1
Title: Trust in Human-Machine Partnership
Principal Investigator: Moreau, Professor L
Other Investigators:
Sklar, Professor EI; Coles, Dr AI; Parsons, Professor S; Yeung, Professor K; Keller, Mr P; Borgo, Dr R; Luff, Professor P
Researcher Co-Investigators:
Project Partners:
Hogan Lovells; Save the Children; Schlumberger
Department: Informatics
Organisation: King's College London
Scheme: Standard Research
Starts: 01 September 2018
Ends: 31 March 2022
Value (£): 1,012,658
EPSRC Research Topic Classifications:
Artificial Intelligence
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:
Panel Date: 06 Mar 2018
Panel Name: DE TIPS 2
Outcome: Announced
Summary on Grant Application Form
Interaction with machines is commonplace in the modern world for a wide range of everyday tasks, such as making coffee, copying documents or driving to work. Forty years ago, these machines existed but were not automated or intelligent. Today, they all have computers embedded in them and can be programmed with advanced functionality beyond the mechanical jobs they performed two generations ago. Tomorrow, they will be talking to each other: my calendar will tell my coffee maker when to have my cuppa ready so that I can arrive at work on time for my first meeting; my satnav will tell my calendar how much time my autonomous car needs to make that journey given traffic and weather conditions; and my office copier will have documents ready to distribute at the meeting when I arrive in the office. And they will all be talking to me: I could ask the coffee maker to produce herbal tea because I had too much coffee yesterday; and the copier could remind me that our office is (still) trying to go paperless and wouldn't I prefer to email the documents to meeting attendees instead of killing another tree?

This scenario will not be possible without three key features: an automated planner that coordinates the various activities that need to be performed, determining where there are dependencies between tasks (e.g., don't drive to the office until I get in the car with my hot drink); a high level of trust between me and this intelligent system that helps organise the mundane actions in my life; and the ability for me to converse with the system and make joint decisions about these actions. Advancing the state of the art in trustworthy, intelligent planning and decision support to realise these critical features lies at the centre of the research proposed by this Trust in Human-Machine Partnerships (THuMP) project.

THuMP will move us toward this future by following three avenues of investigation. First, we will introduce innovative techniques to the artificial intelligence (AI) community through a novel, intra-disciplinary strategy that brings computational argumentation and provenance to AI Planning. Second, we will take human-AI collaboration to the next level, through an exciting, inter-disciplinary approach that unites human-agent interaction and information visualisation with AI Planning. Finally, we will progress the relationship between Technology and Law through a bold, multi-disciplinary approach that links legal and ethics research with new and improved AI Planning.

Why do we focus on AI Planning? A traditional sub-field of artificial intelligence, Planning develops methods for creating and maintaining sequences of actions for an AI (or a person) to execute, in the face of conflicting objectives, optimisation of multiple criteria, and timing and resource constraints. Ultimately, most decisions result in some kind of action, or action sequence. By focussing on AI Planning, THuMP captures the essence of what a collaborative AI decision-making system needs to do.
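To make the idea of dependency-aware planning concrete, here is a minimal, illustrative sketch in Python that orders a few actions from the scenario above so that each one runs only after the actions it depends on, and computes earliest start times from invented durations. The action names, durations and dependencies are assumptions for illustration only; this is a toy ordering exercise, not the planner developed by the project.

# Toy, dependency-aware ordering of actions (illustrative only).
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each action maps to the set of actions that must finish before it starts.
dependencies = {
    "brew_drink": set(),
    "get_in_car": {"brew_drink"},            # don't drive off without the hot drink
    "drive_to_office": {"get_in_car"},
    "prepare_documents": set(),
    "start_meeting": {"drive_to_office", "prepare_documents"},
}

# Invented durations in minutes, purely for illustration.
durations = {
    "brew_drink": 5,
    "get_in_car": 2,
    "drive_to_office": 30,
    "prepare_documents": 10,
    "start_meeting": 0,
}

# Order the actions so that every dependency is satisfied.
plan = list(TopologicalSorter(dependencies).static_order())
print("Plan:", " -> ".join(plan))

# Earliest start time of each action, given the durations of its dependencies.
start = {}
for action in plan:
    start[action] = max((start[d] + durations[d] for d in dependencies[action]), default=0)
    print(f"{action}: starts at t = {start[action]} min")

A planner of the kind described above would also handle conflicting objectives, optimisation of multiple criteria and resource limits, which this sketch deliberately omits.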

We believe that most AI systems will (need to) involve a human in the loop and that it is crucial to develop new AI technologies such that people can use, understand and trust them. THuMP strives for complete understanding and trustworthiness through transparency in AI. We will develop and test a general framework for "Explainable AI Planning (XAIP)", in which humans and an AI system can co-create plans for actions; we will then instantiate this framework in two use cases that focus on resource allocation in two very different critical domains.

A cross-disciplinary project team of seven investigators, four collaborators and four postdoctoral research assistants will work with three project partners (a leading oil & gas services corporation, a leading international charity and a leading global law firm) to move us into this envisioned future. An ambitious and realistic programme of networking, development, evaluation and public engagement is proposed.

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: