
Details of Grant 

EPSRC Reference: EP/V001310/1
Title: Benchmarking for AI for Science at Exascale (BASE)
Principal Investigator: Thiyagalingam, Dr J
Other Investigators:
Zuntz, Dr J; Lazauskas, Dr T; Armour, Professor WG; Burnley, Dr T; Lahav, Professor O; Snow, Dr JFB
Researcher Co-Investigators:
Project Partners:
Boston Ltd; Cerebras Systems; DDN (DataDirect Network) (International); IBM Hursley; MathWorks; nVIDIA; University of Leicester
Department: Scientific Computing Department
Organisation: STFC Laboratories (Grouped)
Scheme: Standard Research - NR1
Starts: 01 June 2020
Ends: 31 August 2021
Value (£): 284,104
EPSRC Research Topic Classifications:
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:
Panel Date: 04 Mar 2020
Panel Name: Software use code development for exascale computing
Outcome: Announced
Summary on Grant Application Form
Advances in Artificial Intelligence (AI) and Machine Learning (ML) have enabled the scientific community to push the frontiers of knowledge by learning from complex, large-scale experimental datasets. With the scientific community generating huge amounts of data, from observatories to large-scale experimental facilities, AI for Science at Exascale is on the horizon.

However, in the absence of systematic approaches for evaluating AI models and algorithms at exascale, the AI for Science community, and indeed the wider AI community, faces a major barrier.

This proposal aims to set up a working group with the overarching goal of identifying the scope and plan for developing AI benchmarks that enable the development of AI for Science at Exascale in ExCALIBUR Phase II.

Although AI benchmarking is becoming a well-explored topic, a number of issues remain to be addressed, including, but not limited to:

a) There are no efforts aimed at AI benchmarking at exascale, particularly for science;

b) A range of scientific problems involving real-world, large-scale datasets, such as those from experimental facilities or observatories, is largely ignored in benchmarking; and

c) Benchmarks could usefully serve as a catalogue of techniques, offering template solutions to different types of scientific problems.

In this proposal, we will aim to address these issues when scoping the development of an AI benchmark suite. In developing a vision, a scope and a plan for this significant challenge, the working group will not only engage with scientists from a number of disciplines, and with industry, but will also engineer a scalable and functional AI benchmark, so that the practical aspects of developing an AI benchmark are learned and embedded into the vision, scope and plan. The exemplary benchmark will focus on removing noise from images, a common issue across multiple disciplines including the life sciences, materials science and astronomy. The specific problems from each of these disciplines are, respectively, removing noise from cryogenic electron microscopy (cryo-EM) datasets, denoising X-ray tomographic images, and minimising the noise in weak lensing images.
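
As an illustration only, and not part of the proposal text, the sketch below shows how a denoising benchmark task of this kind might be scored: a model's denoised output is compared against a clean reference image using a standard metric such as peak signal-to-noise ratio (PSNR). Everything here is an assumption made for the example; the function names (psnr, score_denoiser), the synthetic data and the identity baseline are hypothetical and do not come from the grant.

# Minimal, illustrative sketch of scoring a denoising benchmark task.
# Assumes paired noisy/clean images as NumPy arrays; names and data are hypothetical.
import numpy as np


def psnr(clean, denoised, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a clean reference and a denoised image."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return float(10.0 * np.log10((data_range ** 2) / mse))


def score_denoiser(denoise_fn, noisy_images, clean_images):
    """Average PSNR of denoise_fn over a set of paired noisy/clean images."""
    scores = [psnr(clean, denoise_fn(noisy))
              for noisy, clean in zip(noisy_images, clean_images)]
    return float(np.mean(scores))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = [rng.random((64, 64)) for _ in range(4)]             # synthetic "clean" images
    noisy = [c + rng.normal(0.0, 0.1, c.shape) for c in clean]   # additive Gaussian noise
    identity = lambda x: x                                       # trivial baseline "denoiser"
    print(f"Baseline PSNR: {score_denoiser(identity, noisy, clean):.2f} dB")

A benchmark task in the proposed suite would presumably replace the synthetic noise and identity baseline with real facility data (cryo-EM, X-ray tomography, weak lensing) and trained AI models, and would report further metrics and scaling behaviour; the sketch is only meant to convey the shape of such a task.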

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary:
Date Materialised:
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: