EPSRC Reference: |
EP/V028154/1 |
Title: |
C6: Correct-by-Construction Heterogeneous Coherence |
Principal Investigator: |
Nagarajan, Dr V |
Other Investigators: |
|
Researcher Co-Investigators: |
|
Project Partners: |
|
Department: |
Sch of Informatics |
Organisation: |
University of Edinburgh |
Scheme: |
Standard Research |
Starts: |
01 March 2021 |
Ends: |
29 February 2024 |
Value (£): |
494,698 |
EPSRC Research Topic Classifications: |
Electronic Devices & Subsys. |
Fundamentals of Computing |
|
EPSRC Industrial Sector Classifications: |
|
Related Grants: |
|
Panel History: |
Panel Date | Panel Name | Outcome |
25 Nov 2020 | Efficient Computing Peer Review Panel | Announced |
Summary on Grant Application Form |
About 55 years ago, Gordon Moore speculated that transistors would become smaller and more energy efficient every year. Since then, we have enjoyed exponentially increasing computer performance owing to what has come to be known as Moore's law.
However, Moore's law is coming to an end, and its slowdown has already begun to disrupt the semiconductor industry. Absent the exponential performance and energy gains of device scaling, industry has pivoted to hardware specialisation: tailoring hardware to a specific class of computation generally yields orders-of-magnitude improvements in energy and performance.
We are well and truly in the age of heterogeneous computing. A modern smartphone contains dozens of devices on a single chip, including CPUs, GPUs, and other accelerators. But efficiency hinges on reducing data movement between these devices; otherwise, the cost of moving data can seriously erode the benefits of heterogeneous computing. Sadly, an analysis of Google workloads on a mobile device reveals that, on average, more than 60% of the overall energy is spent moving data around.
One promising approach to reducing data movement is cache coherence. A cache coherence protocol, which automatically keeps replicated data consistent, enables data to be accessed locally whenever it is safe to do so. Thus, it not only minimises data movement but also does so in a programmer-transparent fashion.
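To give a concrete flavour of what a coherence protocol does, the sketch below models a toy MSI (Modified/Shared/Invalid) protocol for a single cache line shared by several caches. It is purely illustrative: the class and method names are invented for this example, and it is not the protocol-generation method proposed in this project.

# Toy MSI coherence protocol for one cache line (illustrative sketch only).
# States: 'M' (modified, dirty), 'S' (shared, clean), 'I' (invalid).

class Cache:
    def __init__(self, name):
        self.name = name
        self.state = 'I'   # every cache starts without a valid copy
        self.data = None

class MSIProtocol:
    """Keeps one line coherent across caches backed by a single memory value."""
    def __init__(self, caches, memory_value=0):
        self.caches = caches
        self.memory = memory_value

    def read(self, cache):
        if cache.state == 'I':                 # miss: fetch a shared copy
            self._writeback_any_modified()
            cache.data, cache.state = self.memory, 'S'
        return cache.data                      # local hit in M or S

    def write(self, cache, value):
        # Invalidate every other copy so exactly one writer holds the line.
        for other in self.caches:
            if other is not cache and other.state != 'I':
                if other.state == 'M':
                    self.memory = other.data   # write back the dirty copy
                other.state, other.data = 'I', None
        cache.data, cache.state = value, 'M'

    def _writeback_any_modified(self):
        for c in self.caches:
            if c.state == 'M':
                self.memory = c.data
                c.state = 'S'                  # owner downgrades to shared

# Example: two caches stay consistent without re-reading memory on every access.
c0, c1 = Cache('c0'), Cache('c1')
bus = MSIProtocol([c0, c1])
bus.write(c0, 42)          # c0 -> M, c1 -> I
assert bus.read(c1) == 42  # c0 writes back and downgrades; c1 -> S
assert bus.read(c0) == 42  # local hit: no data movement

Even this tiny protocol shows why the design space is delicate: every combination of cache states and requests must be handled, and the difficulty grows sharply once heterogeneous devices with different coherence expectations are composed, which is the problem this project targets.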
However, cache coherence protocols are notoriously hard to design and verify even for the homogeneous multicores in which they are deployed today. To make matters worse, we do not know how to correctly keep the devices of a heterogeneous computer coherent, in part because we do not yet understand what it means for such a system to be correct.
In this project, we propose an entirely new way of designing coherence protocols. Instead of manually designing them and verifying them later, we propose an automatic method to generate them correctly. Our method is based on a new foundation of heterogeneous coherence called compound consistency models, which formally answers the question of how distinct coherence protocols should compose.
If successful, the project will not only lift the major roadblock to efficient heterogeneous computing (data movement costs), it will also catalyse the burgeoning open hardware movement by democratising one of its trickiest components: cache coherence protocols.
|
Key Findings |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Potential use in non-academic contexts |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Impacts |
Description | Summary | Date Materialised |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk | | |
Sectors submitted by the Researcher |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Project URL: |
|
Further Information: |
|
Organisation Website: |
http://www.ed.ac.uk |