Dealing with uncertainty
Implementing algorithmic systems involves a great deal of uncertainty, whether from the inherent uncertainty in the systems' results, from organisational uncertainty about a project's history and aims, or from the issues an organisation might encounter once the algorithm is out in the wild. Many algorithmic bias mitigation strategies focus on risk management, but this does not change the fact that much decision making, human and algorithmic alike, happens when outcomes are unpredictable. Part of a successful algorithmic bias mitigation strategy is being aware of uncertainty and becoming comfortable managing potentially uncomfortable situations.
What types of uncertainty are important to algorithmic bias mitigation?
- Data quality: Data quality issues are often unclear unless the team collected the data themselves. They can become particularly problematic when using or combining third-party datasets, and may not become apparent until much later in the project.
- Statistical uncertainty: While models can make predictions, there is always a level of uncertainty in them that goes beyond reported errors and confidence intervals. Additionally, people working at the same organisation can be confused or uncertain about how to interpret statistical output.
- Organisational uncertainty: A project can carry a lot of uncertainty due to people leaving, projects changing hands, mismatched team co-ordination, and other organisational challenges. Sometimes people working on the same project can have very conflicting ideas about parts of the project at hand (Bates et al., 2024; Beresford, 2024).
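To make the point about statistical uncertainty concrete, here is a minimal sketch of reporting a bootstrap confidence interval alongside a single accuracy figure, so that a model's performance is communicated as a range rather than one number. The data and numbers are hypothetical, invented for illustration only.

```python
import random
import statistics

def bootstrap_ci(outcomes, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of
    binary outcomes (e.g. 1 = correct prediction, 0 = incorrect)."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        # Resample the held-out outcomes with replacement.
        sample = [rng.choice(outcomes) for _ in range(len(outcomes))]
        means.append(statistics.mean(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical held-out results: 88 correct, 12 incorrect predictions.
outcomes = [1] * 88 + [0] * 12
low, high = bootstrap_ci(outcomes)
print(f"Observed accuracy 0.88, 95% CI approximately [{low:.2f}, {high:.2f}]")
```

Presenting results in this form can also reduce the interpretation problems noted above: a range makes it harder for colleagues to over-read a single headline number.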
Worksheets
Further Resources
Bates, J., Kennedy, H., Medina Perea, I., Oman, S., & Pinney, L. (2024). Socially meaningful transparency in data-based systems: Reflections and proposals from practice. Journal of Documentation, 80(1), 54-72. https://doi.org/10.1108/JD-01-2023-0006