
Details of Grant 

EPSRC Reference: EP/T021063/1
Title: COG-MHEAR: Towards cognitively-inspired 5G-IoT enabled, multi-modal Hearing Aids
Principal Investigator: Hussain, Professor A
Other Investigators:
Abbasi, Dr Q; Ratnarajah, Professor T; Akeroyd, Professor M;
Baillie, Professor L; Sellathurai, Professor M; Renals, Professor S;
Imran, Professor MA; Arslan, Professor T; Al-Dubai, Professor AY;
Bell, Dr PJ; Buchanan, Professor WJ; Adeel, Dr A;
Hart, Professor E; Casson, Dr A
Researcher Co-Investigators:
Project Partners:
Action on Hearing Loss (RNID); Alpha Data Parallel Systems Ltd (UK); deafscotland;
Digital Health and Care Institute; NHS; Nokia;
Sonova AG; The Data Lab; UCL;
University of Manchester, The
Department: School of Computing
Organisation: Edinburgh Napier University
Scheme: Programme Grants
Starts: 01 March 2021 Ends: 28 February 2025 Value (£): 3,259,000
EPSRC Research Topic Classifications:
Biomechanics & Rehabilitation; Cognitive Science Appl. in ICT;
Med.Instrument.Device& Equip.
EPSRC Industrial Sector Classifications:
Healthcare
Related Grants:
Panel History:  
Summary on Grant Application Form
Currently, only 40% of people who could benefit from Hearing Aids (HAs) have them, and most people who do have HA devices don't use them often enough. There is a social stigma around wearing visible HAs ('fear of looking old'), they demand a lot of conscious effort to concentrate on different sounds and speakers, and they make only limited use of speech enhancement - making the spoken words (which are often the most important aspect of hearing to people) easier to distinguish. It is not enough just to make everything louder!

To transform hearing care by 2050, we aim to completely re-think the way HAs are designed. Our transformative approach - for the first time - draws on the cognitive principles of normal hearing. Listeners naturally combine information from both their ears and eyes: we use our eyes to help us hear. We will create "multi-modal" aids which not only amplify sounds but also contextually exploit information collected simultaneously from a range of sensors to improve speech intelligibility. For example, a large amount of information about the words a person says is conveyed visually, in the movements of the speaker's lips, their hand gestures, and so on. Current commercial HAs ignore this information, yet it could be fed into the speech enhancement process. We can also use wearable sensors (embedded within the HA itself) to estimate listening effort and its impact on the person, and use this to tell whether the speech enhancement process is actually helping or not.
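As a purely illustrative sketch of this kind of audio-visual fusion (not the project's actual architecture - the feature dimensions, layer sizes and choice of lip-landmark features below are assumptions), a mask-based speech enhancer might condition on visual lip features as well as the noisy audio spectrogram:

```python
# Minimal sketch: fuse per-frame audio and visual features to predict a
# time-frequency mask that suppresses noise in the spectrogram.
import torch
import torch.nn as nn

class AVMaskEstimator(nn.Module):
    def __init__(self, n_freq=257, n_lip=40, hidden=128):  # illustrative sizes
        super().__init__()
        self.rnn = nn.GRU(n_freq + n_lip, hidden, batch_first=True)
        self.mask = nn.Linear(hidden, n_freq)

    def forward(self, noisy_mag, lip_feats):
        # noisy_mag: (batch, frames, n_freq) magnitude spectrogram
        # lip_feats: (batch, frames, n_lip)  e.g. lip-landmark embeddings
        x = torch.cat([noisy_mag, lip_feats], dim=-1)
        h, _ = self.rnn(x)
        m = torch.sigmoid(self.mask(h))   # mask values in [0, 1]
        return noisy_mag * m              # enhanced magnitude estimate

# Random tensors stand in for real audio/video features here.
model = AVMaskEstimator()
enhanced = model(torch.rand(1, 100, 257), torch.rand(1, 100, 40))
print(enhanced.shape)  # torch.Size([1, 100, 257])
```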

Creating these multi-modal "audio-visual" HAs raises many formidable technical challenges which need to be tackled holistically. Making use of lip movements traditionally requires a video camera filming the speaker, which raises privacy concerns. We can address some of these concerns by encrypting the data as soon as it is collected, and we will pioneer new approaches for processing and understanding the video data while it stays encrypted. We aim never to access the raw video data, yet still to use it as a valuable source of information. To complement this, we will also investigate methods for remote lip reading that do not use a video feed at all, instead exploring the use of radio signals for remote monitoring.
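To make the idea of computing on data that never leaves encrypted form concrete, here is a toy sketch using the additively homomorphic Paillier cryptosystem. This is our own illustration, with deliberately tiny (insecure) keys; the project's actual encryption scheme and processing pipeline may differ.

```python
# Toy Paillier example (Python 3.9+): two values are encrypted at the sensor
# and their sum is computed directly on the ciphertexts, without decrypting.
import math
import random

p, q = 2003, 2011                      # toy primes - far too small for real use
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                   # valid because the generator g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

c1, c2 = encrypt(120), encrypt(37)     # e.g. two encrypted pixel intensities
c_sum = (c1 * c2) % n2                 # ciphertext product = plaintext addition
print(decrypt(c_sum))                  # 157
```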

Adding in these new sensors, and the processing required to make sense of the data they produce, will place a significant additional power and miniaturisation burden on the HA device. We will need to make our sophisticated visual and sound processing algorithms operate with minimal power and minimal delay, and will achieve this through dedicated hardware implementations that accelerate the key processing steps. In the long term, we aim for all processing to be done in the HA itself - keeping data local to the person for privacy. In the shorter term, some processing will need to be done in the cloud (as it is too power intensive), and we will create new very low latency (<10ms) interfaces to cloud infrastructure to avoid delays between when a word is "seen" being spoken and when it is heard. We also plan to exploit advances in flexible electronics (e-skin) and antenna design to make the overall unit as small, discreet and usable as possible.
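As an illustration of how tight such a budget is, the back-of-envelope calculation below splits an assumed 10 ms end-to-end target across processing stages. All per-stage figures are hypothetical, not project measurements or targets.

```python
# Illustrative latency budget for cloud-offloaded enhancement (assumed numbers).
budget_ms = 10.0
stages = {
    "A/D conversion + framing":           2.5,
    "on-device feature extraction":       1.5,
    "enhancement inference (offloaded)":  3.0,
    "D/A conversion + playback buffer":   1.0,
}
for name, ms in stages.items():
    print(f"{name:38s} {ms:4.1f} ms")
remaining = budget_ms - sum(stages.values())
print(f"{'remaining for network round trip':38s} {remaining:4.1f} ms")
```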

Participatory design and co-production with HA manufacturers, clinicians and end-users will be central to all of the above, guiding all of the decisions made in terms of design, prioritisation and form factor. Our strong User Group, which includes Sonova, Nokia/Bell Labs, Deaf Scotland and Action on Hearing Loss, will serve to maximise the impact of our ambitious research programme. The outcomes of our work will be fully integrated software and hardware prototypes that will be clinically evaluated using listening and intelligibility tests with hearing-impaired volunteers in a range of modern, noisy, reverberant environments. The success of our ambitious vision will be measured in terms of how the fundamental advances made by our demonstrator programme reshape the HA landscape over the next decade.
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: http://www.napier.ac.uk