EPSRC Reference: EP/P001017/1
Title: Acoustic Signal Processing and Scene Analysis for Socially Assistive Robots
Principal Investigator: Evers, Dr C
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Electrical and Electronic Engineering
Organisation: Imperial College London
Scheme: EPSRC Fellowship
Starts: 01 January 2017
Ends: 31 December 2019
Value (£): 330,105

EPSRC Research Topic Classifications:
Digital Signal Processing
Instrumentation Eng. & Dev.
Robotics & Autonomy

EPSRC Industrial Sector Classifications:
No relevance to Underpinning Sectors

Related Grants:

Panel History:

Summary on Grant Application Form |
The interaction between users and a robot often takes place in busy environments, in the presence of competing speakers and background noise sources such as televisions. The signals received at the robot's microphones are hence a mixture of the signals from multiple sound sources, ambient noise, and reverberation due to reflections of sound waves. Thus, in order to focus on stimuli of interest, the robot has to learn and adapt to the acoustic environment.
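This mixing process can be made concrete with the standard convolutive signal model from the acoustic signal processing literature (our illustration; the grant text does not state a model). For microphone m,

    x_m(t) = \sum_{s=1}^{S} (h_{m,s} \ast u_s)(t) + n_m(t),

where u_s is the signal emitted by source s, h_{m,s} is the room impulse response from source s to microphone m (capturing the reverberant reflections), \ast denotes convolution, and n_m(t) is the ambient noise at microphone m.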
The aim of this research is to provide robots and machines with the ability to understand and adapt to the surrounding acoustic environment. Acoustic scene analysis combines salient features from the observed audio signals to create situational awareness of the environment: sound sources are detected, localised, and identified, whilst the acoustic properties of the room itself are characterised. Using the information acquired by analysing the acoustic scene, a three-dimensional map of the environment is created, which can be used to identify sounds or recognise the intent of speech signals. Moreover, by moving within the environment, the robot can explore and learn about the acoustic properties of its surroundings.
However, many of the tasks required for analysis of the acoustic scene are jointly dependent. For example, localising sound sources buried in noise and reverberation is a challenging problem. Sound source localisation can be improved by enhancing the signals of desired sources, such as human speakers, whilst suppressing interfering sources, such as a television. Yet source enhancement requires that desired and interfering sources be spatially distinguished, which in turn requires knowledge of the source directions.
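As a concrete illustration of one building block behind such localisation (a minimal sketch under our own assumptions, not a method stated in the grant), the Python snippet below estimates the time difference of arrival between two microphones using the widely used GCC-PHAT cross-correlation; the function name and parameters are illustrative.

    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        # Estimate the delay of `sig` relative to `ref`, in seconds, via the
        # generalised cross-correlation with phase transform (GCC-PHAT).
        n = len(sig) + len(ref)                 # zero-pad to avoid circular wrap
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        R /= np.abs(R) + 1e-12                  # PHAT weighting: keep phase only
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        if max_tau is not None:                 # optionally bound the search
            max_shift = min(int(fs * max_tau), max_shift)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / float(fs)

    # Example: a signal delayed by 5 samples between two microphones.
    fs = 16000
    rng = np.random.default_rng(0)
    x = rng.standard_normal(4096)
    y = np.concatenate((np.zeros(5), x))[:4096]  # y lags x by 5 samples
    print(gcc_phat(y, x, fs))                    # approx. 5 / 16000 = 0.3125 ms

In practice, delays estimated for several microphone pairs are combined with the known array geometry to triangulate a source direction; reverberation and interfering sources corrupt exactly these delay estimates, which is the interdependence described above.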
The novel objective of this research is therefore to identify, and constructively exploit, the joint dependencies between the tasks required for acoustic scene analysis. To achieve this objective, the project will take advantage of the motion of the robot in order to observe uncertain events from different perspectives. Techniques will be developed that constructively exploit the motion of the robot's arms by fusing the signals from microphones attached to the robot's limbs with those from microphone arrays installed in the robot's head. Furthermore, approaches will be investigated that allow multiple robots to share their experience and knowledge of the acoustic environment.
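One possible form such fusion could take (a hedged sketch of a generic scheme, not the project's method) is a confidence-weighted circular mean of direction-of-arrival estimates from the limb microphones and the head array:

    import numpy as np

    def fuse_doa(theta_head, w_head, theta_limb, w_limb):
        # Confidence-weighted circular mean of two azimuth estimates (radians).
        # The weights could, e.g., be inverse variances of each estimator; both
        # the function and its arguments are illustrative assumptions.
        s = w_head * np.sin(theta_head) + w_limb * np.sin(theta_limb)
        c = w_head * np.cos(theta_head) + w_limb * np.cos(theta_limb)
        return np.arctan2(s, c)

    # Head array: 30 degrees with high confidence; limb microphone: 50 degrees
    # with low confidence; the fused estimate stays close to the head array.
    print(np.degrees(fuse_doa(np.radians(30.0), 4.0, np.radians(50.0), 1.0)))  # ~34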
The research will be conducted at Imperial College London, within the Department of Electrical and Electronic Engineering, with academic advice from national, European, and international project partners at the University of Edinburgh, UK; International Audio Laboratories Erlangen, Germany; and Bar-Ilan University, Israel.

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary:
Date Materialised:

Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Project URL:
Further Information:
Organisation Website: http://www.imperial.ac.uk