EPSRC Reference: EP/V024817/1
Title: Turing AI Fellowship: Interactive Annotations in AI
Principal Investigator: Santos-Rodriguez, Dr R
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Engineering Mathematics
Organisation: University of Bristol
Scheme: EPSRC Fellowship - NHFP
Starts: 01 January 2021
Ends: 31 December 2025
Value (£): 1,292,963

EPSRC Research Topic Classifications:
EPSRC Industrial Sector Classifications: Financial Services; Healthcare; Information Technologies

Related Grants:
Panel History:
Summary on Grant Application Form |
With data-hungry deep learning approaches now the de facto standard in Artificial Intelligence (AI), the need for labelled data is greater than ever. However, while there have been interesting recent discussions on defining readiness levels for data, the same scrutiny is generally still missing for annotations: we do not know how or when the annotations were collected or what their inherent biases are. Additionally, there are now forms of annotation beyond standard static sets of labels that call for a formalisation and redefinition of the annotation concept (e.g., rewards in reinforcement learning or directed links in causality).
During this Fellowship we will design and establish protocols for transparent annotations that empower the data curator to report on the collection process, the practitioner to automatically evaluate the value of annotations, and the users to provide the most informative and actionable feedback. The Fellowship will address these challenges through a holistic, human-centric research agenda, bridging gaps in fundamental research and public engagement with AI.
The Fellowship aims to lay the foundations for a two-way approach to annotations, shifting the paradigm from annotations being simply a resource to their becoming a means for AI systems and humans to interact. The bigger picture is that, with annotations seen as an interface between the two, we will be in a much better position to guide the relationship of trust between learning systems and users, in which users translate their preferences into the learning systems' objective functions. This approach will help bring about a much-needed transformation, moving potentially sensitive aspects of AI a step closer to being reliable and trustworthy.

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary:
Date Materialised:

Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk

Project URL:
Further Information:
Organisation Website: http://www.bris.ac.uk