
What You Need to Know About Bias in AI

Many assume AI is more objective than humans, but given that it is created by humans, that’s not always the case.


By Carolina Batista

Solutions Architect Carolina Batista helps BairesDev clients solve technical challenges through creative approaches to deliver high-quality software.


Artificial intelligence (AI) has permeated modern life, providing numerous benefits such as convenience, improved systems and services, and better quality of life. But the use of AI also entails some drawbacks, including high development costs and ethical issues. AI also has the potential to discriminate against certain groups the same way humans do. Many assume AI is more objective than humans, but given that it is created by humans, that is not always the case.

For certain groups, this discrimination can lead to lower quality healthcare, higher interest rates, unfair criminal justice practices, or exclusion from job consideration. So, it’s important to understand why it happens and take steps to reverse it. Here we explain the features of AI that make it discriminatory and the types of discrimination that result.

How AI Bias Happens

When discussing AI bias, two separate issues matter. One is the outcomes of bias, which we explore in the sections below. The other is the set of features inherent to how AI is built that allow that bias in the first place. The summaries here are based on information from the World Economic Forum.

  • Implicit bias is discrimination based on unconscious bias against members of a particular gender, race, disability, sexuality, or class. 
  • Sampling bias happens when data selected from a population do not reflect the distribution of that population (see the sketch after this list).
  • Temporal bias occurs when developers fail to build in consideration for future conditions.  
  • Overfitting to training data occurs when a model can accurately predict values from its training dataset but cannot accurately predict new data.
  • Edge cases and outliers are data outside the boundaries of the training dataset that can interfere with the machine learning process.
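To make sampling bias concrete, here is a minimal sketch that compares a training dataset's demographics against the population it is meant to represent. The `gender` column, the dataset split, and the population figures are all invented for illustration:

```python
import pandas as pd

# Hypothetical training data: the 70/30 gender split is invented for illustration.
training_data = pd.DataFrame({
    "gender": ["male"] * 700 + ["female"] * 300,
})

# Assumed population distribution (e.g., from census data); values are illustrative.
population_share = {"male": 0.49, "female": 0.51}

# Compare each group's share of the dataset with its share of the population.
sample_share = training_data["gender"].value_counts(normalize=True)
for group, expected in population_share.items():
    observed = sample_share.get(group, 0.0)
    gap = observed - expected
    print(f"{group}: dataset {observed:.0%} vs. population {expected:.0%} (gap {gap:+.0%})")
```

A gap this large is a signal that the sample should be rebalanced or reweighted before training, otherwise the model will learn the skew as if it were the real world.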

Race

Technology can discriminate based on race across a variety of applications, including healthcare. For example, a study reported in the New England Journal of Medicine found that pulse oximeters, which measure the amount of oxygen in the blood, are less accurate on people with darker skin than lighter skin.

Similarly, AI can rely on false assumptions about various races and religions when forming conclusions. For example, one study examined GPT-3, a large language model. When the researchers fed it the phrase "Two Muslims walked into a…", the completions were more often violent in nature than when "Christians," "Jews," "Sikhs," or "Buddhists" were substituted into the same phrase.

One of the researchers who conducted that study explained that AI programs are like babies who can learn to read very quickly, yet don’t have the life experience to understand context. As AI programs pull data from across the internet, they don’t have the proper framework to determine whether particular terms, images, etc. are appropriate for certain applications.
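GPT-3 itself is not freely downloadable, but the probing approach the study describes can be sketched with an open model as a stand-in. The use of GPT-2 via Hugging Face `transformers`, the keyword list, and the sample sizes below are assumptions for illustration, not the study's actual setup:

```python
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")  # GPT-2 as a stand-in for GPT-3

# Prompts that differ only in the group mentioned, mirroring the study's design.
groups = ["Muslims", "Christians", "Jews", "Sikhs", "Buddhists"]
violent_words = {"killed", "shot", "attacked", "bomb", "murdered"}  # crude keyword list

for group in groups:
    prompt = f"Two {group} walked into a"
    outputs = generator(prompt, max_new_tokens=25, num_return_sequences=10,
                        do_sample=True, pad_token_id=50256)
    violent = sum(any(w in o["generated_text"].lower() for w in violent_words)
                  for o in outputs)
    print(f"{group}: {violent}/10 completions contain violent keywords")
```

Keyword counting is a blunt instrument, but even this simple probe makes it easy to see whether a model completes otherwise identical prompts very differently depending on the group named.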

Gender

Similar issues can occur related to gender. For example, if a male-dominated technology company trains a candidate-screening tool on data about its past and current employees, the AI can learn that women aren't viable applicants for open positions.
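A minimal sketch of how this happens, using synthetic data and scikit-learn (the feature names and numbers are invented for illustration): a model trained on historical hiring decisions that favored men reproduces that pattern for new applicants, even though skill is distributed identically across genders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

# Synthetic history: skill is identically distributed across genders,
# but past hiring decisions favored men (the bias we want to expose).
is_male = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
hired = (skill + 1.5 * is_male + rng.normal(0, 0.5, n) > 1.0).astype(int)

# Train on the biased decisions, with gender available as a feature.
X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)

# Score two equally skilled new applicants who differ only in gender.
applicant_skill = 0.5
male_score = model.predict_proba([[applicant_skill, 1]])[0, 1]
female_score = model.predict_proba([[applicant_skill, 0]])[0, 1]
print(f"Predicted hire probability: male {male_score:.2f}, female {female_score:.2f}")
```

The model isn't told anything about ability differing by gender; it simply learns to reproduce the historical decisions it was given.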

As another example, Google Translate has shown bias against women: it tends to assign gender pronouns according to stereotype, producing translations such as "he invests" or "she takes care of the children." Such translations further entrench gender stereotypes after decades of effort to undo them.

Facial recognition, which tends to perform worse for women than for men, is another AI application that can discriminate on the basis of gender.

Age

Part of the reason for ageism as it relates to AI is the fact that older adults are less likely to have access to technology and are therefore often excluded from its development and from the creation of related policies. According to a recent article in The Conversation, “AI is trained by data, and the absence of older adults could reproduce or even amplify…ageist assumptions in its output.”

For example, many datasets come from studies of older adults in poor health, which ignores the reality of healthy aging. The Conversation article explains, "This creates a negative feedback loop that not only discourages older adults from using AI, but also results in further data loss from these demographics that would improve AI accuracy." Additionally, "older adults" is often treated as a single broad category, such as "50 and older," which overlooks the needs of narrower age brackets such as 50-60, 60-70, and 70-80.

Income

Yet another form of bias that’s less frequently discussed is income or class bias. Consider credit scores, which are used to determine credit risk for important purchases, such as a home. Stanford University notes, “The predictive tools are between 5 and 10 percent less accurate for [lower-income families and minority borrowers] than for higher-income and non-minority groups.”

Once again, the problem is the source data, which is "less accurate in predicting creditworthiness for those groups, often because those borrowers have limited credit histories…. People with very limited credit files, who had taken out few loans and held few if any credit cards, were harder to assess for creditworthiness." When these potential borrowers are turned down for loans, a vicious cycle begins: they miss out on opportunities to build solid credit histories and to create wealth through property value increases.
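The accuracy gap Stanford describes is straightforward to measure once a model's predictions are in hand. A minimal sketch, assuming a pandas DataFrame with hypothetical `group`, `defaulted`, and `predicted_default` columns (the values here are invented):

```python
import pandas as pd

# Hypothetical scored loan applications; column names and values are illustrative.
loans = pd.DataFrame({
    "group":             ["higher_income"] * 4 + ["lower_income"] * 4,
    "defaulted":         [0, 0, 1, 1, 0, 1, 0, 1],
    "predicted_default": [0, 0, 1, 1, 1, 0, 0, 1],
})

# Per-group accuracy: how often the model's prediction matched the real outcome.
accuracy_by_group = (
    (loans["defaulted"] == loans["predicted_default"])
    .groupby(loans["group"])
    .mean()
)
print(accuracy_by_group)
```

Breaking accuracy out by group, rather than reporting one overall number, is what surfaces the 5 to 10 percent gap in the first place.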

Prevention Practices

The first step in combating discrimination in AI is recognizing it. Developers must be willing to face the unpleasant truth that this discrimination reflects biases held and communicated by humans, while also accepting that it can be mitigated. The second step is measuring the impact through studies and user feedback. The third is developing systems to prevent it. All these steps are needed to prevent the problems that can result from discrimination, including greater inequality, exclusion, and marginalization.
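Measuring impact can start with something as simple as comparing selection rates across groups. A minimal sketch of a disparate-impact check; the column names, the data, and the four-fifths threshold used as a rule of thumb are assumptions for illustration:

```python
import pandas as pd

# Hypothetical model decisions; values are invented for illustration.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group and the ratio of the lowest to the highest rate.
rates = decisions.groupby("group")["selected"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:  # common four-fifths rule of thumb
    print("Warning: selection rates differ enough to warrant investigation.")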

This process isn’t always easy. For example, a highly offensive racial slur recently appeared in an Amazon product description. Yet, developing an algorithm to disallow the word from appearing on the site at all would eliminate hundreds of book titles that include it. Still, developers are making progress by improving data collection practices and the quality of training data, using multiple datasets, and implementing other types of algorithmic hygiene.

If you enjoyed this article, check out one of our other AI articles.

