
Details of Grant 

EPSRC Reference: EP/R033188/1
Title: DADD: Discovering and Attesting Digital Discrimination
Principal Investigator: Such, Dr JM
Other Investigators:
Hedges, Dr M
Coté, Dr M
Criado, Dr N
Tasioulas, Professor J
Vigano, Professor L
Nelken, Professor D
Researcher Co-Investigators:
Project Partners:
AI Club for Gender Minorities
Google
Department: Informatics
Organisation: King's College London
Scheme: Standard Research
Starts: 01 August 2018
Ends: 31 March 2022
Value (£): 653,777
EPSRC Research Topic Classifications:
Artificial Intelligence
Information & Knowledge Mgmt
EPSRC Industrial Sector Classifications:
Information Technologies
Related Grants:
Panel History:
Panel Date: 06 Feb 2018
Panel Name: ICT Cross-Disciplinarity and Co-Creation
Outcome: Announced
Summary on Grant Application Form


In digital discrimination, users are treated unfairly, unethically or just differently based on their personal data. Examples include low-income neighbourhoods being targeted with high-interest loans; women being undervalued by 21% in online marketing; and online ads suggestive of arrest records appearing more often alongside searches of black-sounding names than white-sounding names. Digital discrimination very often reproduces existing discrimination in the offline world, either by inheriting the biases of prior decision makers or by simply reflecting widespread prejudices in society. It may also have an even more perverse result: it may exacerbate existing inequalities by causing less favourable treatment for historically disadvantaged groups, suggesting that they actually deserve such treatment. As more and more tasks are delegated to computers, mobile devices and autonomous systems, digital discrimination is becoming a huge problem.

Digital discrimination can be the result of algorithmic bias, i.e., the way a particular algorithm has been designed creates discriminatory outcomes, but it also occurs with unbiased algorithms when they are fed or trained with biased data. Research has been conducted on so-called fair algorithms, tackling biased input data, demonstrating learned biases, and measuring the relative influence of data attributes, which can quantify and limit the extent of bias introduced by an algorithm or dataset. But how much bias is too much? That is, what is legal, ethical and/or socially acceptable? And, even more importantly, how do we translate those legal, ethical or social expectations into automated methods that attest digital discrimination in datasets and algorithms?
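As a purely illustrative aside (not the project's own method), bias in a set of decisions is often quantified with group fairness metrics such as the demographic parity difference or the disparate impact ratio. The short Python sketch below computes both on invented data; the "how much bias is too much" question then becomes where to set the threshold on such a metric.

# Minimal sketch (illustrative only): quantifying bias in a set of decisions
# with two common group fairness metrics. All data below are invented.

def selection_rate(decisions):
    """Fraction of favourable (positive) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favourable-outcome rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def disparate_impact_ratio(protected, reference):
    """Ratio of favourable-outcome rates; values below roughly 0.8 are
    often flagged under the informal 'four-fifths rule'."""
    return selection_rate(protected) / selection_rate(reference)

if __name__ == "__main__":
    # 1 = favourable decision (e.g. loan approved), 0 = unfavourable.
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # reference group
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # protected group

    print("Demographic parity difference:",
          demographic_parity_difference(group_a, group_b))
    print("Disparate impact ratio:",
          disparate_impact_ratio(group_b, group_a))

Running this prints a parity difference of 0.375 and an impact ratio of 0.5; whether those numbers constitute unacceptable discrimination is exactly the legal, ethical and social question the project raises.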

DADD (Discovering and Attesting Digital Discrimination) is a *novel cross-disciplinary collaboration* to address these open research questions, following a continuously running co-creation process with academic partners (Computer Science, Digital Humanities, Law and Ethics), non-academic partners (Google, AI Club) and the general public, including technical and non-technical users. DADD will design ground-breaking methods to certify whether or not datasets and algorithms discriminate by automatically verifying computational non-discrimination norms, which will in turn be formalised based on socio-economic, cultural, legal and ethical dimensions, creating the new *transdisciplinary field of digital discrimination certification*.
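To make the idea of automatically verifying a non-discrimination norm concrete, the sketch below shows one hypothetical way such a norm could be expressed as a machine-checkable threshold test; the norm name, threshold and rates are assumptions for illustration and do not represent the DADD formalism.

# Minimal sketch (illustrative only, not the DADD formalism): a
# "computational non-discrimination norm" expressed as a threshold check
# that a certification tool could run automatically over decision data.

from dataclasses import dataclass

@dataclass
class Norm:
    name: str
    threshold: float  # maximum tolerated disparity; an assumed value

    def holds(self, rate_protected: float, rate_reference: float) -> bool:
        """The norm is satisfied if the gap in favourable-outcome rates
        between the protected and reference groups stays within the
        threshold."""
        return abs(rate_protected - rate_reference) <= self.threshold

def attest(norm: Norm, rate_protected: float, rate_reference: float) -> str:
    """Return a simple pass/fail verdict for the given norm and rates."""
    verdict = "PASS" if norm.holds(rate_protected, rate_reference) else "FAIL"
    return f"{norm.name}: {verdict}"

if __name__ == "__main__":
    # Hypothetical norm and outcome rates, chosen only to show the check.
    equal_treatment = Norm(name="equal favourable-outcome rates", threshold=0.1)
    print(attest(equal_treatment, rate_protected=0.45, rate_reference=0.70))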

Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Impacts
Description: This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Summary:
Date Materialised:
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:  
Further Information:  
Organisation Website: