Hadley Beresford

Published 31/1/2021

Mapping Algorithmic Bias: Patterns, Consequences and Alternatives

I co-supervised Hadley with my colleague Helen Kennedy. The project was in collaboration with the Department for Work and Pensions. Hadley passed their viva in January 2024. Here is the abstract for their project:

This thesis makes an original contribution to critical algorithm studies by addressing a gap in the literature regarding the experiences and perceptions of data practitioners who use algorithmic bias mitigation methods. While such methods have been proposed, little is known about how practitioners engage with them or how their perceptions of these methods shape their effectiveness. Understanding this is crucial because practitioners’ engagement has direct implications for the success of algorithmic bias mitigation efforts within an organisational context.

The thesis makes its contribution through three empirical qualitative papers, which together investigate how practitioners in a government department might work to mitigate the impact of algorithmic bias. The research was carried out in partnership with the Department for Work and Pensions (DWP), the UK’s ministerial department responsible for implementing work and welfare services and policy. While the department intends to use algorithmic technologies for their purported efficiency gains, it was also interested in gaining further insight into how to mitigate the risks of algorithmic bias that such technologies carry.

The first paper reported on research that used semi-structured interviews to investigate how data practitioners at DWP engage with algorithmic bias mitigation methods. The findings suggest that, owing to their position as civil servants, participants relied strongly on legal frameworks in their bias mitigation work. While participants felt a strong sense of responsibility to the public, the relevant legal frameworks made them accountable to the state. Additionally, my participants’ working practices in relation to bias checking were constrained by previous research conducted by DWP and by the department’s organisational culture.

The second paper investigated how practitioners on the Aurora AI project, a Finnish AI recommender project run by the Finnish Ministry of Finance, were working towards ‘good practice’ in algorithmic bias mitigation. This research also used semi-structured interviews, conducted with Aurora AI team members and algorithmic justice advocates. It uncovered considerable disagreement among participants about what constitutes good practice in mitigating algorithmic bias and which solutions might be practically implementable. Despite the dominance of technical approaches on the Aurora AI project, participants identified socio-technical methods, such as algorithmic impact assessments (AIAs), critical thinking about data use, and Value Sensitive Design (VSD), as potential instruments for mitigating algorithmic bias.

The third paper reports on research that used workshop methods to investigate how DWP’s organisational culture might influence the adoption of mitigation approaches. It identifies three key findings: (1) it is difficult for civil service practitioners to align technologies with social justice values when serving a large, diverse public; (2) practitioners perceived a lack of clarity in legal and organisational guidance; and (3) participants perceived workforce diversity as important to algorithmic bias mitigation efforts.

Through analysis of the findings of these empirical chapters, this thesis makes three overarching contributions to knowledge. The first is that the fast-paced working practices that characterise the development of algorithmic technologies are not conducive to the slower-paced thinking needed to consider algorithmic bias through a socio-technical lens. Practitioners are often under pressure to produce results quickly, which may lead them to prioritise immediately tangible outputs such as a project’s technical deliverables. The second contribution is to highlight the importance of context and, in my case, the significance of the UK civil service context and the unique challenges that exist within it. Specifically, algorithmic technologies deployed in a civil service context are strongly shaped by political processes and build on policy decisions already put in place by government officials. Finally, because of their position as civil servants, these practitioners may be required to consider the diverse and conflicting views found among the public in a way that private organisations are not. However, the views of the public are currently missing from discussions of how the public sector should engage with algorithmic technologies, leaving practitioners to imagine what the public’s views might be.

In addition to its contributions to the emerging fields of critical algorithm and data studies, this thesis contributes to a range of disciplines interested in the role of algorithmic technologies in society, including established fields such as information studies, sociology, communication studies and organisation studies.