DEALING WITH ARTIFICIAL INTELLIGENCE BIAS – FERNANDEZ MARCUS-OBIENE
INTRODUCTION – WHAT IS THIS ALL ABOUT?
In many small ways that we rarely stop to think about, AI is part of our everyday lives: in how we use our phones, what we do with our laptops, how we shop, and how we use many household items.

One of the many benefits of AI at a personal level is personalization. When interacting with an AI system, each of us expects an experience different from everyone else's.

For example, when I listen to music on an app, I do not expect to be served exactly the same music as everyone else. I expect that, given historical data about me and my preferences, I will be shown the kind of music I like. The same thing happens when one runs a Google search.

This work examines the nature of AI bias, the issues it raises, and the solutions that have been proposed.

WHAT IS AI?
According to Britannica, artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

Components of the intelligence commonly considered in AI research include learning, reasoning, problem solving, perception, and using language.

WHAT IS BIAS?
According to Google, bias is the inclination or prejudice for or against one person or group, especially in a way considered to be unfair.

HOW DOES BIAS IN AI OCCUR?
Bias in AI occurs when results cannot be generalized widely. We often think of bias resulting from preferences or exclusions in training data, but bias can also be introduced by how data is obtained, how algorithms are designed, and how AI outputs are interpreted.

Everybody thinks of bias in training data, the data used to develop an algorithm before it is tested on the wider world. But this is only the tip of the iceberg.

As Prof. Narayan of Stanford University notes, all data is biased. This is not paranoia; it is fact. The bias may not be deliberate. It may be unavoidable because of the way measurements are made, but it means we must estimate the error (confidence intervals) around each data point in order to interpret the results.

Suppose one were to collect data about people's heights. If you plotted all the height data on a chart, you would find overlapping groups (or clusters) of taller and shorter people, broadly corresponding to adults, children, and those in between. But who was surveyed to get those heights? Was the survey done on weekdays or at weekends, when different groups of people are at work?

If heights were measured at medical offices, people without health insurance may have been left out. If the survey was done in the suburbs, you would capture a different group of people than in the countryside or in cities. And how large was the sample?
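To make this concrete, below is a minimal sketch in Python (with invented numbers, purely for illustration) of how the choice of who gets surveyed can shift a height estimate, and how a confidence interval around each estimate helps flag the problem:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: adults around 170 cm, children around 120 cm.
adults = rng.normal(170, 8, size=8000)
children = rng.normal(120, 10, size=2000)
population = np.concatenate([adults, children])

# Survey A: a genuinely random sample of everyone.
# Survey B: taken on a weekday morning, when children are at school,
# so it silently samples only adults.
survey_a = rng.choice(population, size=200, replace=False)
survey_b = rng.choice(adults, size=200, replace=False)

def mean_with_ci(sample, z=1.96):
    """Sample mean with an approximate 95% confidence interval."""
    mean = sample.mean()
    half_width = z * sample.std(ddof=1) / np.sqrt(len(sample))
    return mean, (mean - half_width, mean + half_width)

for name, sample in [("random sample ", survey_a), ("weekday sample", survey_b)]:
    mean, (low, high) = mean_with_ci(sample)
    print(f"{name}: mean {mean:.1f} cm, 95% CI ({low:.1f}, {high:.1f})")
```

Because the weekday sample excludes children, its interval sits well above the truly random one; non-overlapping intervals like these are exactly the warning sign that the error estimates Prof. Narayan calls for are meant to surface.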

AI bias is the underlying prejudice in data that’s used to create AI algorithms, which can ultimately result in discrimination and other social consequences.
A real-life example is the 2019 research showing that a major healthcare risk algorithm contributed to the health inequities suffered by members of racial and ethnic minority groups in the United States.
Similarly, Twitter recently apologized after users called out its image-cropping algorithm as racist: when you upload an image or link to Twitter, the algorithm automatically crops the preview image, in theory centering it on a human face.

To understand AI bias better, we now turn to its main forms and types.


TYPES OF AI BIAS
While there are many types and categories of AI bias, for the purposes of clarity this work will consider only the following four (4):

  1. Reporting bias
  2. Selection bias
  3. Group attribution bias
  4. Implicit bias
  1. Reporting bias
    This type of AI bias arises when the frequency of events in the training dataset does not accurately reflect reality. Take the example of a customer fraud detection tool that underperformed in a remote geographic region, marking all customers living in the area with a falsely high fraud score.

It turned out that the training dataset the tool relied on recorded every historical investigation in the region as a fraud case. Because of the region's remoteness, fraud investigators wanted to be sure each new claim was indeed fraudulent before travelling to the area. As a result, the frequency of fraudulent events in the training dataset was far higher than it was in reality.
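A minimal sketch in Python (with invented numbers) shows how such label skew plays out: when every recorded case from the remote region carries a fraud label, even a simple model learns to treat the region itself as evidence of fraud.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Feature: 1 if the customer lives in the remote region, else 0.
remote = rng.binomial(1, 0.05, size=n)

# Hypothetical true base rate of fraud: 2% everywhere.
true_fraud = rng.binomial(1, 0.02, size=n)

# Reporting bias: investigators only travelled to the remote region for
# claims they were already sure about, so every recorded remote case
# carries a fraud label in the training data.
label = np.where(remote == 1, 1, true_fraud)

model = LogisticRegression().fit(remote.reshape(-1, 1), label)
print("Fraud score, remote customer:", model.predict_proba([[1]])[0, 1])
print("Fraud score, other customer: ", model.predict_proba([[0]])[0, 1])
# The remote customer's score comes out far above the true 2% rate.
```

Correcting the problem would start with re-labelling or re-weighting the region's records so that the training data reflects the true base rate.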

  2. Selection bias
    This type of AI bias occurs when training data is either unrepresentative or selected without proper randomization. Selection bias is well illustrated by the research conducted by Joy Buolamwini, Timnit Gebru, and Deborah Raji, who examined three commercial image recognition products. The tools were asked to classify 1,270 images of parliament members from European and African countries. The study found that all three tools performed better on male than on female faces and showed the most substantial bias against darker-skinned women, failing on over one in three women of color, all due to a lack of diversity in the training data. (A minimal sketch of the kind of per-subgroup audit that exposes such gaps follows this list.)
  3. Group attribution bias
    Group attribution bias takes place when data teams extrapolate what is true of individuals to entire groups the individual is or is not part of. This type of AI bias can be found in admission and recruiting tools that may favor the candidates who graduated from certain schools and show prejudice against those who didn’t.
  4. Implicit bias
    This type of AI bias occurs when assumptions are made based on personal experience that does not necessarily apply more generally. For instance, if data scientists have picked up on cultural cues about women being housekeepers, they might struggle to connect women to influential roles in business despite their conscious belief in gender equality, an example echoing the story of Google Images' gender bias.

HOW DOES AI BIAS AFFECT US?
In many instances, the effects of bias may not be felt seriously. In areas like medicine, however, as Prof. Narayan notes, bias in medical AI is a major problem, because making a wrong diagnosis or suggesting the wrong therapy could be catastrophic.

According to the 2021 PwC article titled “AI Bias is Personal for Me. It Should Be for You, Too,” studies have found mortgage algorithms charging Black and Latino borrowers higher interest rates, and egregious cases of recruiting algorithms exacerbating bias against hiring women. A series of studies of facial recognition software found that most misidentified darker-skinned women 37% more often than those with lighter skin tones. A widely used application for predicting clinical risk has led to inconsistent referrals to specialists by race, perpetuating racial bias in healthcare. Natural language processing (NLP) models built to detect undesirable language online have erroneously censored comments mentioning disabilities, depriving people with disabilities of the opportunity to participate equally in online discourse.

Furthermore, in the application of AI to hiring, organisations increasingly use AI-driven software to screen the resumes that reach HR departments, so any form of bias may mean that qualified candidates are never considered.

Clearly, the consequences of AI bias can be far-reaching, and serious attention needs to be paid to it, especially since, unlike one-on-one human interactions, AI tends to be applied to many people and groups at the same time, so the consequences of any error are multiplied many times over.

CONCLUSION – HOW CAN WE DEAL WITH AI BIAS?
From the above, it is obvious that tackling the problem of AI bias requires collaboration across virtually all of society, particularly tech industry players, policymakers, and social scientists.
Itrex Group provides some guidance on how we can help solve this problem. This includes:

  1. Examine the context. Some industries and use cases are more prone to AI bias and have a previous record of relying on biased systems. Being aware of where AI has struggled in the past can help companies improve fairness, building on the industry experience.
  2. Design AI models with inclusion in mind. Before actually designing AI algorithms, it makes sense to engage with humanists and social scientists to ensure that the models you create don’t inherit bias present in human judgment. Also, set measurable goals for the AI models to perform equally well across planned use cases, for instance, for several different age groups.
  3. Train your AI models on complete and representative data. That would require establishing procedures and guidelines on how to collect, sample, and preprocess training data. Along with establishing transparent data processes, you may involve internal or external teams to spot discriminatory correlations and potential sources of AI bias in the training datasets.
  4. Perform targeted testing. While testing your models, examine AI’s performance across different subgroups to uncover problems that can be masked by aggregate metrics. Also, perform a set of stress tests to check how the model performs on complex cases. In addition, continuously retest your models as you gain more real-life data and get feedback from users.
  5. Hone human decisions. AI can help reveal inaccuracies present in human decision-making. So, if AI models trained on recent human decisions or behavior show bias, be ready to consider how human-driven processes might be improved in the future.
  6. Improve AI explainability. Additionally, keep in mind the adjacent issue of AI explainability: understanding how AI generates predictions and which features of the data it uses to make decisions. Understanding whether the factors supporting a decision reflect AI bias helps in identifying and mitigating prejudice. (A minimal sketch of one such check follows this list.)
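On point 6, one simple (if limited) explainability check is to inspect the weights a linear model assigns to its features. The sketch below uses invented feature names and simulated data; a large weight on a proxy variable such as a neighbourhood code is the kind of red flag worth investigating:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Hypothetical loan-approval features.
income = rng.normal(0, 1, size=n)             # standardized income
neighbourhood = rng.binomial(1, 0.5, size=n)  # proxy closely tied to race

# Simulated historical approvals driven mostly by the proxy, not income.
approved = 0.2 * income + 2.0 * neighbourhood + rng.normal(0, 0.5, size=n) > 1.0

X = np.column_stack([income, neighbourhood])
model = LogisticRegression().fit(X, approved)

# The proxy dominating the weights signals that the model has
# effectively encoded a protected attribute.
for name, coef in zip(["income", "neighbourhood"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```

Dedicated tools such as SHAP or LIME go much further, but even a coefficient check like this can reveal a model that is quietly reproducing historical discrimination through a proxy.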
In addition to the six (6) items suggested by Itrex Group, with which the writer agrees, it is also suggested that regulators set out proper guidelines for the application of AI. Given that AI, like all technology, is constantly evolving, regulators need to engage continuously with the market and its actors and keep abreast of developments, so that regulation keeps up, as much as possible, with the state of the art.

Fernandez Marcus-Obiene is a Senior Associate at Tsedaqah Attorneys and co-founder of the legaltech startup Wekrea8.com.
