{"id":13,"date":"2024-10-22T13:07:47","date_gmt":"2024-10-22T12:07:47","guid":{"rendered":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/?page_id=13"},"modified":"2025-10-09T12:05:08","modified_gmt":"2025-10-09T11:05:08","slug":"what-is-algorithmic-bias","status":"publish","type":"page","link":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/","title":{"rendered":"What is algorithmic bias?"},"content":{"rendered":"\n<figure class=\"wp-embed-aspect-16-9 wp-has-aspect-ratio wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube\"><div class=\"wp-block-embed__wrapper\">\n<div class=\"video-wrapper\"><iframe loading=\"lazy\" title=\"AlgoBias Toolkit: Understanding algorithmic bias\" width=\"1300\" height=\"731\" src=\"https:\/\/www.youtube.com\/embed\/1rYi0t8kH6I?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<\/div><\/figure>\n\n\n\n<p class=\"\">Organisations are increasingly making use of algorithmic technologies, both in the form of generative AI and predictive analytics. As these tools become more embedded within our working practices and the platforms we use, awareness of \u2018algorithmic bias\u2019 has increased along with it.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What are some real-world examples of algorithmic bias?<\/h2>\n\n\n\n<p class=\"\">The <strong>COMPAS system in the United States<\/strong> \u2013 which stands for Correctional Offender Management Profiling for Alternative Sanctions \u2013 is a risk assessment system used in the criminal justice system to aid judges in making decisions about bail and sentencing. 
An investigation by ProPublica found that the system discriminated against Black defendants, rating them as more likely to reoffend than white defendants with similar criminal histories (Angwin <em>et al.<\/em>, 2016; Kirkpatrick, 2016). This has raised serious concerns about racial bias in the criminal justice system, and about how algorithmic technologies can exacerbate and further entrench prejudiced attitudes.<\/p>\n\n\n\n<p class=\"\">Due to the COVID-19 pandemic, <strong>UK A-Level students<\/strong> were unable to sit their final exams, and so their grades were calculated algorithmically using a combination of data sources, including teacher-predicted grades, each school\u2019s past performance, and previous years\u2019 subject data. This approach caused widespread controversy because students from state schools received lower scores than their peers from private schools, who \u2013 owing to small class sizes \u2013 were often awarded their teacher-assessed grades. After complaints from students, parents and teachers, all students were awarded their teacher-predicted grades.<\/p>\n\n\n\n<p class=\"\">In 2023, <strong>Dutch authorities in Amsterdam<\/strong> reformed the city\u2019s welfare system to include algorithmic methods for predicting fraud cases. This case is particularly notable given the lengths the department went to in ensuring the system was implemented in line with bias-mitigation best practice; even so, this didn\u2019t prevent the algorithm from discriminating against certain groups. Initially, the algorithm disproportionately identified men and non-Dutch people as likely to commit fraud. It was later re-weighted to correct for this, but was then found to disproportionately identify Dutch people and women as likely to have committed fraud. This case highlights some of the most challenging aspects of algorithmic bias: even with the best will and intention, algorithmic technologies often pose unpredictable difficulties. 
Find out more: <a href=\"https:\/\/www.technologyreview.com\/2025\/06\/11\/1118233\/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure\/\">Inside Amsterdam\u2019s high-stakes experiment to create fair welfare AI<\/a>, MIT Technology Review.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What causes algorithmic bias?<\/h2>\n\n\n\n<p class=\"\"><strong>Algorithmic bias<\/strong> refers to unfair outcomes that result from automated decision-making processes. These outcomes may disadvantage certain groups based on race, gender, socioeconomic status, or other characteristics, even when these characteristics are not explicitly included in the model. Algorithmic bias emerges from a complicated set of conditions. One way of understanding the problem is to consider it in terms of three possible framings \u2013 the data, the algorithm, and the socio-technical environment in which the algorithmic technology operates. These framings aren\u2019t clear-cut, and there is considerable interplay between them.<\/p>\n\n\n\n<p class=\"\">The following is a visual representation of these framings:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"904\" height=\"508\" loading=\"lazy\" src=\"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-content\/uploads\/sites\/3\/2025\/08\/where-does-bias-come-from.png\" alt=\"A visual representation of the three framings of algorithmic bias: the data, the algorithm, and the socio-technical environment\" class=\"wp-image-63\" srcset=\"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-content\/uploads\/sites\/3\/2025\/08\/where-does-bias-come-from.png 904w, https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-content\/uploads\/sites\/3\/2025\/08\/where-does-bias-come-from-300x169.png 300w, https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-content\/uploads\/sites\/3\/2025\/08\/where-does-bias-come-from-768x432.png 768w\" sizes=\"auto, (max-width: 904px) 100vw, 904px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Data<\/h2>\n\n\n\n<p class=\"\">Causes of bias which emerge in the data framing of algorithmic bias might 
include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"\"><strong>Non-representative datasets<\/strong>: Algorithms trained on datasets which under-represent certain groups will perform badly on those groups, as the algorithm has less \u2018experience\u2019 with them and is more prone to errors and biases. For example, Buolamwini (2017) found that facial recognition algorithms were more likely to misidentify Black women than any other gender and racial combination, due to the lack of Black women in the underlying datasets the algorithms were trained on.<\/li>\n\n\n\n<li class=\"\"><strong>Historically biased datasets<\/strong>: All data is historic in nature (Oman, 20xx). As such, datasets contain the social biases of the time they were captured. This can be a matter of what\u2019s in the dataset, such as the over-representation of Black men in arrest statistics due to over-policing of BAME communities and entrenched racism in the criminal justice system (Benjamin, 2021). Or it can be due to the absence of data, such as the under-representation of women in certain industries. Going further back, we could also consider diagnostic categories such as \u2018hysteria\u2019: while modern-day data scientists are unlikely to come across a dataset with a patient diagnosed with hysteria, it\u2019s worth reflecting on how categories change over time. Using data which contains the prejudices of the past makes it difficult for an algorithm to predict an equal future.<\/li>\n\n\n\n<li class=\"\"><strong>Proxy variables<\/strong>: When datasets include variables such as postcode or educational attainment, algorithmic systems can often infer sensitive characteristics such as race, socioeconomic background, or other personal information which was not included in the original dataset. 
Even when protected characteristics are not explicitly included, proxy data can still influence model outputs in problematic ways (O\u2019Neil, 2016).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">The Algorithm<\/h2>\n\n\n\n<p class=\"\">Causes of bias which emerge in the algorithm framing of algorithmic bias might include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"\"><strong>Model decisions: <\/strong>Decisions about what variables go into a model, and how these are handled, influence the types of outputs a model can produce. O\u2019Neil examines this in her book <em>Weapons of Math Destruction <\/em>using the example of an algorithm which decides what to eat for breakfast \u2013 if Pop-Tarts are excluded from the algorithm, that imposes the idea that Pop-Tarts are not a suitable breakfast food. Similarly, when recommender systems are designed to promote videos, they\u2019re prone to popularity biases.<\/li>\n\n\n\n<li class=\"\"><strong>Model purpose<\/strong>: Some models are far harder to make \u2018fair\u2019 than others, and this can come down to what the model is designed to achieve in the first place. Fraud-detection algorithms are seen as particularly contentious, in part due to their effects on marginalised groups, who often have very little recourse if they\u2019re flagged by these systems. This can lead to incredibly harmful consequences, such as families losing access to money they need for food and shelter.<\/li>\n\n\n\n<li class=\"\"><strong>Context<\/strong>: Some models are designed and trained for use in an environment which is quite different from the environments they\u2019re later deployed in. 
This can lead to emergent biases, where the algorithm is not suitable for its new context \u2013 though this may not always be obvious at first, and emergent biases may also develop due to circumstances changing after the model was first developed (Friedman and Nissenbaum, 1996).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Socio-technical framing<\/h2>\n\n\n\n<p class=\"\">Causes of bias which emerge in the socio-technical framing of algorithmic bias might include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"\"><strong>Human decision-making<\/strong>: Algorithmically calculated outputs are often passed on to humans who act as final decision-makers. This is often encouraged as a bias-mitigation mechanism \u2013 having a \u2018human in the loop\u2019 \u2013 to protect against purely algorithmic decisions discriminating against those affected by them. However, research on how humans react to machine-generated decisions is unclear: humans may be swayed by algorithmically calculated decisions, assuming there is something they \u2018missed\u2019 themselves (Eubanks, 20xx). It\u2019s important to consider how rigorous a \u2018human in the loop\u2019 approach would be in your own organisation specifically, not just in general.<\/li>\n\n\n\n<li class=\"\"><strong>Organisational and cultural factors<\/strong>: The social attitudes which influence data scientists\u2019 decisions about data and algorithms belong to the socio-technical frame. These may include assumptions, prejudices, or simply a lack of familiarity with the groups the algorithm is being designed for \u2013 particularly when the big tech industry is overwhelmingly white and male. 
A lack of diversity has been posited as one reason for the development and continued persistence of cultural biases in algorithmic technologies.<\/li>\n\n\n\n<li class=\"\"><strong>Systemic inequality<\/strong>: Algorithms may reinforce systemic inequality, particularly when these systems are used to detect benefit fraud and to decide who is allowed access to limited resources. This is an issue which strongly influences both the data and algorithm frames.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">How can we address algorithmic bias?<\/h2>\n\n\n\n<p class=\"\">A range of bias-mitigation techniques have been developed and are in use today. Many of these are technical de-biasing methods \u2013 focusing on issues such as data quality, comparative statistics, and fairness benchmarking. However, recent research has demonstrated that these technical approaches by themselves do not substantially mitigate the risks of algorithmic bias \u2013 and ultimately, of algorithmic harm to those subject to the decisions made by algorithmic technologies. It\u2019s important for organisations to adopt both technical and socio-technical bias-mitigation strategies, informed by longstanding theories and methods used in the social sciences. In this toolkit, you can learn more about the types of social science approaches which can assist your organisation in bias mitigation.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Further Resources<\/h3>\n\n\n\n<p class=\"\">Dolata, M., Feuerriegel, S., &amp; Schwabe, G. (2021). A sociotechnical view of algorithmic fairness. <em>Information Systems Journal<\/em>. Manuscript accepted for publication. arXiv. <a href=\"https:\/\/arxiv.org\/abs\/2110.09253\">https:\/\/arxiv.org\/abs\/2110.09253<\/a><\/p>\n\n\n\n<p class=\"\">O\u2019Neil, C. (2016). <em>Weapons of math destruction: How big data increases inequality and threatens democracy<\/em>. New York, NY: Crown Publishing Group. 
ISBN: 978-0553418811.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Organisations are increasingly making use of algorithmic technologies, both in the form of generative AI and predictive analytics. As these tools become more embedded within our working practices and the [&hellip;]<\/p>\n","protected":false},"author":7,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"nf_dc_page":"","footnotes":""},"class_list":["post-13","page","type-page","status-publish","hentry"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>What is algorithmic bias? - AlgoBias ToolKit<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/\" \/>\n<meta property=\"og:locale\" content=\"en_GB\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"What is algorithmic bias? - AlgoBias ToolKit\" \/>\n<meta property=\"og:description\" content=\"Organisations are increasingly making use of algorithmic technologies, both in the form of generative AI and predictive analytics. 
As these tools become more embedded within our working practices and the [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/\" \/>\n<meta property=\"og:site_name\" content=\"AlgoBias ToolKit\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-09T11:05:08+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-content\/uploads\/sites\/3\/2025\/08\/where-does-bias-come-from.png\" \/>\n\t<meta property=\"og:image:width\" content=\"904\" \/>\n\t<meta property=\"og:image:height\" content=\"508\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/what-is-algorithmic-bias\\\/\",\"url\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/what-is-algorithmic-bias\\\/\",\"name\":\"What is algorithmic bias? 
- AlgoBias ToolKit\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/what-is-algorithmic-bias\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/what-is-algorithmic-bias\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/wp-content\\\/uploads\\\/sites\\\/3\\\/2025\\\/08\\\/where-does-bias-come-from.png\",\"datePublished\":\"2024-10-22T12:07:47+00:00\",\"dateModified\":\"2025-10-09T11:05:08+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/what-is-algorithmic-bias\\\/#breadcrumb\"},\"inLanguage\":\"en-GB\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/what-is-algorithmic-bias\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/what-is-algorithmic-bias\\\/#primaryimage\",\"url\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/wp-content\\\/uploads\\\/sites\\\/3\\\/2025\\\/08\\\/where-does-bias-come-from.png\",\"contentUrl\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/wp-content\\\/uploads\\\/sites\\\/3\\\/2025\\\/08\\\/where-does-bias-come-from.png\",\"width\":904,\"height\":508},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/what-is-algorithmic-bias\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"What is algorithmic 
bias?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/#website\",\"url\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/\",\"name\":\"AlgoBias ToolKit\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/lifeofdata.org\\\/site\\\/algobias-toolkit\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-GB\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"What is algorithmic bias? - AlgoBias ToolKit","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/","og_locale":"en_GB","og_type":"article","og_title":"What is algorithmic bias? - AlgoBias ToolKit","og_description":"Organisations are increasingly making use of algorithmic technologies, both in the form of generative AI and predictive analytics. 
As these tools become more embedded within our working practices and the [&hellip;]","og_url":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/","og_site_name":"AlgoBias ToolKit","article_modified_time":"2025-10-09T11:05:08+00:00","og_image":[{"width":904,"height":508,"url":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-content\/uploads\/sites\/3\/2025\/08\/where-does-bias-come-from.png","type":"image\/png"}],"twitter_card":"summary_large_image","twitter_misc":{"Estimated reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/","url":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/","name":"What is algorithmic bias? - AlgoBias ToolKit","isPartOf":{"@id":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/#website"},"primaryImageOfPage":{"@id":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/#primaryimage"},"image":{"@id":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/#primaryimage"},"thumbnailUrl":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-content\/uploads\/sites\/3\/2025\/08\/where-does-bias-come-from.png","datePublished":"2024-10-22T12:07:47+00:00","dateModified":"2025-10-09T11:05:08+00:00","breadcrumb":{"@id":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/#breadcrumb"},"inLanguage":"en-GB","potentialAction":[{"@type":"ReadAction","target":["https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/"]}]},{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/#primaryimage","url":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-content\/uploads\/sites\/3\/2025\/08\/where-does-bias-come-from.png","contentUrl":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-content\/uploads\/sites\/3
\/2025\/08\/where-does-bias-come-from.png","width":904,"height":508},{"@type":"BreadcrumbList","@id":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/what-is-algorithmic-bias\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/"},{"@type":"ListItem","position":2,"name":"What is algorithmic bias?"}]},{"@type":"WebSite","@id":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/#website","url":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/","name":"AlgoBias ToolKit","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-GB"}]}},"_links":{"self":[{"href":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-json\/wp\/v2\/pages\/13","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-json\/wp\/v2\/comments?post=13"}],"version-history":[{"count":5,"href":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-json\/wp\/v2\/pages\/13\/revisions"}],"predecessor-version":[{"id":144,"href":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-json\/wp\/v2\/pages\/13\/revisions\/144"}],"wp:attachment":[{"href":"https:\/\/lifeofdata.org\/site\/algobias-toolkit\/wp-json\/wp\/v2\/media?parent=13"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}