
Details of Grant 

EPSRC Reference: EP/S030964/1
Title: ActiveAI - active learning and selective attention for robust, transparent and efficient AI
Principal Investigator: Philippides, Professor A
Other Investigators:
Mangan, Dr M; Graham, Professor P; Vasilaki, Professor E; Nowotny, Professor T; Marshall, Professor JAR
Researcher Co-Investigators:
Project Partners:
Flinders University of South Australia; Macquarie University; University of Queensland
Department: Sch of Engineering and Informatics
Organisation: University of Sussex
Scheme: Standard Research
Starts: 01 November 2019
Ends: 31 October 2022
Value (£): 953,584
EPSRC Research Topic Classifications:
Artificial Intelligence; Robotics & Autonomy
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:
Panel Date     Panel Name                      Outcome
07 Mar 2019    Intl Centre to Centre Fulls     Announced
Summary on Grant Application Form
We will bring together world leaders in insect biology and neuroscience with world leaders in biorobotic modelling and computational neuroscience to create a partnership that will be transformative in understanding active learning and selective attention in insects, robots and autonomous systems in artificial intelligence (AI). By considering how brains, behaviours and the environment interact during natural animal behaviour, we will develop new algorithms and methods for rapid, robust and efficient learning for autonomous robotics and AI for dynamic real world applications.

Recent advances in AI, notably in deep learning, have proven incredibly successful in creating solutions to specific complex problems (e.g. beating the best human players at Go, and driving cars through cities). But as we learn more about these approaches, their limitations are becoming more apparent. For instance, deep learning solutions typically need a great deal of computing power, extremely long training times and very large amounts of labelled training data, which are simply not available for many tasks. While they are very good at solving specific tasks, they can be quite poor (and unpredictably so) at transferring this knowledge to other, closely related tasks. Finally, scientists and engineers are struggling to understand what their deep learning systems have learned and how well they have learned it.

These limitations are particularly apparent when contrasted to the naturally evolved intelligence of insects. Insects certainly cannot play Go or drive cars, but they are incredibly good at doing what they have evolved to do. For instance, unlike any current AI system, ants learn how to forage effectively with limited computing power provided by their tiny brains and minimal exploration of their world. We argue this difference comes about because natural intelligence is a property of closed loop brain-body-environment interactions. Evolved innate behaviours in concert with specialised sensors and neural circuits extract and encode task-relevant information with maximal efficiency, aided by mechanisms of selective attention that focus learning on task-relevant features. This focus on behaving embodied agents is under-represented in present AI technology but offers solutions to the issues raised above, which can be realised by pursuing research in AI in its original definition: a description and emulation of biological learning and intelligence that both replicates animals' capabilities and sheds light on the biological basis of intelligence.

This endeavour entails studying the workings of the brain in behaving animals, as it is crucial to know how neural activity interacts with, and is shaped by, the environment, body and behaviour, and how this interplay is modulated by selective attention. These experiments are now possible thanks to recent advances in neural recordings of flies and hoverflies, which can identify neural markers of selective attention, combined with virtual reality experiments for ants; techniques pioneered by the Australian team. Together with verification of emerging hypotheses using large-scale neural models on board robotic platforms in the real world, an approach pioneered by the UK team, this project represents a unique and timely opportunity to transform our understanding of learning in animals and, through this, learning in robots and AI systems.

We will create an interdisciplinary collaborative research environment with a "virtuous cycle" of experiments, analysis, and computational and robotic modelling. New findings feed forward and back around this virtuous cycle, each discipline informing the others, to yield a functional understanding of how active learning and selective attention enable small-brained insects to learn a complex world. Through this understanding, we will develop ActiveAI algorithms which learn rapidly, are efficient in both learning and final network configuration, and are robust to real-world conditions.

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.sussex.ac.uk