
Details of Grant 

EPSRC Reference: EP/S023356/1
Title: UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence
Principal Investigator: Luck, Professor M
Other Investigators:
Black, Dr E
Lomuscio, Professor AR
Belardinelli, Dr F
Rodrigues, Dr OT
Magazzeni, Dr D
Criado, Dr N
Sadri, Dr F
Toni, Professor F
Researcher Co-Investigators:
Project Partners:
Amazon Web Services (UK)
Association of Commonwealth Universities
British Library
Bruno Kessler Foundation FBK
BT
Codeplay Software Ltd
ContactEngine
Ericsson
Ernst & Young
Five AI Limited
GreenShoot Labs
hiveonline
IBM Corporation (International)
Mayor's Office for Policing and Crime
Norton Rose LLP
Ocado Group
Royal Mail
Samsung
Thales Ltd
The National Archives
University of New South Wales
Vodafone
Department: Informatics
Organisation: King's College London
Scheme: Centre for Doctoral Training
Starts: 01 April 2019
Ends: 30 September 2027
Value (£): 6,865,984
EPSRC Research Topic Classifications:
Artificial Intelligence
Human-Computer Interactions
Software Engineering
EPSRC Industrial Sector Classifications:
Communications
Information Technologies
Related Grants:
Panel History:
Panel Date: 07 Nov 2018
Panel Name: UKRI Centres for Doctoral Training AI Interview Panel U – November 2018
Outcome: Announced
Summary on Grant Application Form
The UK is world-leading in Artificial Intelligence (AI), and a 2017 government report estimated that AI technologies could add £630 billion to the UK economy by 2035. However, there is increasing concern about the potential dangers of AI, and global recognition of the need for safe and trusted AI systems. Indeed, the latest UK Industrial Strategy recognises a shortage of highly skilled individuals in the workforce who can harness AI technologies and realise the full potential of AI.

The UKRI Centre for Doctoral Training (CDT) on Safe and Trusted AI will train a new generation of scientists and engineers who are experts in model-based AI approaches and their use in developing AI systems that are safe (meaning we can provide guarantees about their behaviour) and are trusted (meaning we can have confidence in the decisions they make and their reasons for making them). Techniques in AI can be broadly divided into data-driven and model-based. While data-driven techniques (such as machine learning) use data to learn patterns or behaviours, or to make predictions, model-based approaches use explicit models to represent and reason about knowledge. Model-based AI is thus particularly well-suited to ensuring safety and trust: models provide a shared vocabulary on which to base understanding; models can be verified, and solutions based on models can be guaranteed to be correct and safe; models can be used to enhance decision-making transparency by providing human-understandable explanations; and models allow user collaboration and interaction with AI systems. In sophisticated applications, the outputs of data-driven AI may be input to further model-driven reasoning; for example, a self-driving car might use data-driven techniques to identify a busy roundabout, and then use an explicit model of how people behave on the road to reason about the actions it should take.
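
To make this hybrid pattern concrete, the short Python sketch below (illustrative only, and not taken from the grant; names such as Perception, ROAD_MODEL and choose_action are hypothetical) shows a stubbed data-driven perception step feeding an explicit, human-readable rule model that both selects an action and returns the reason for it:

    # Illustrative sketch only (not part of the grant): the hybrid pattern above,
    # with a stubbed data-driven perception step feeding an explicit rule model.
    # All names (Perception, perceive, ROAD_MODEL, choose_action) are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Perception:
        scene: str          # e.g. "busy_roundabout" or "clear_road"
        confidence: float   # classifier's confidence in the label

    def perceive(sensor_frame: bytes) -> Perception:
        # Stand-in for a learned (data-driven) classifier such as a neural network.
        return Perception(scene="busy_roundabout", confidence=0.92)

    # Explicit model of road behaviour: human-readable rules mapping each scene
    # to an action and the reason for it. Because the rules are explicit, every
    # decision carries an explanation, and the rule set can itself be verified.
    ROAD_MODEL = {
        "busy_roundabout": ("yield", "vehicles already on the roundabout have priority"),
        "clear_road": ("proceed", "no conflicting traffic detected"),
    }

    def choose_action(p: Perception) -> tuple:
        if p.confidence < 0.8:
            return ("slow_down", "perception confidence too low to act on")
        return ROAD_MODEL.get(p.scene, ("stop", "scene not covered by the model"))

    action, reason = choose_action(perceive(b"sensor frame"))
    print(f"action={action} because {reason}")

Even in this toy form, the explicit model yields the two properties the grant emphasises: the decision can be explained (the returned reason) and the complete rule set can be inspected or verified for safety.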

While much current attention is focussed on recent advances in data-driven AI, such as deep learning, it is crucial that we also develop the UK skills base in complementary model-based approaches to AI, which are needed for the development of safe and trusted AI systems. The scientists and engineers trained by the CDT will be experts in a range of model-based AI techniques, the synergies between them, their use in ensuring safe and trusted AI, and their integration with data-driven approaches. Importantly, because AI is increasingly pervasive in all spheres of human activity, and may increasingly be tied to regulation and legislation, the next generation of AI researchers must not only be experts on core AI technologies, but must also be able to consider the wider implications of AI for society, its impact on industry, and the relevance of safe and trusted AI to legislation and regulation. Students' core technical training will therefore be complemented with the skills and knowledge needed to appreciate the implications of AI (drawing on Social Science, Law and Philosophy) and with exposure to diverse application domains (such as Telecommunications and Security). Students will be trained in responsible research and innovation methods, and will engage with the public throughout their training, to help ensure the societal relevance of their research. Entrepreneurship training will help them to maximise the impact of their work, and the CDT will work with a range of industrial partners, from both the private and public sectors, to ensure relevance to industry and application domains and to expose our students to multiple perspectives, techniques, applications and challenges.

This CDT is ideally equipped to deliver this vision. King's and Imperial are each renowned for their expertise in model-based AI and together provide one of the largest groupings of model-based AI researchers in the UK, including some of the world's leaders in this area. This is complemented by expertise in related technical areas and in the applications and implications of AI.
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: