How Algorithmic Bias in Artificial Intelligence Hurts Business and How to Protect Yourself

July 5, 2024


With the ongoing rise of artificial intelligence (AI) in our everyday lives, Concentric's Global Intelligence team cautions you to be aware of algorithmic bias and its impact on corporations around the world. Algorithmic biases are byproducts of automated systems that unjustly disfavor people based on social, political, or economic conditions. U.S. legislators are calling for accountability and transparency from AI companies in order to prevent harm to individuals and businesses and to increase public trust.

Bias Implications for Businesses  

Algorithmic bias is a result of "unfair outcomes due to skewed or limited input data or exclusionary practices during AI development," according to DataCamp. Algorithmic biases occur when AI systems produce skewed or discriminatory decisions because they were trained on data that is not diverse, inclusive, or representative.

Based on our research into algorithmic bias, we assess it can harm businesses in several ways. In 2022, a DataRobot survey of over 350 U.S. and U.K. technologists found that organizations suffered losses from algorithmic bias: respondents reported lost revenue (62 percent), lost customers (61 percent), lost employees (43 percent), and legal fees from litigation (35 percent). Discriminatory AI systems can also damage a company's credibility and reputation, eroding the trust and confidence of consumers and the public.

  • Supply chain algorithms forecasting demand may misjudge target audiences, resulting in overlooked populations and potential profit loss. Biased algorithms also carry reputational risk: Amazon garnered negative publicity when it employed an AI recruitment algorithm that inadvertently and systematically discriminated against women.
  • According to a study conducted by Axios, public trust in companies building and selling AI tools dropped to 35 percent, down from 50 percent five years ago. Employees who witness a company's investments in AI may distrust its decision-making and avoid using the technology, and executives may then struggle to implement it and lose return on investment.
  • The U.S. Equal Employment Opportunity Commission (EEOC) began an initiative to ensure AI, machine learning, and other emerging technologies comply with federal civil rights laws. The EEOC became the first institution to settle an AI hiring discrimination lawsuit: three companies were sued for age discrimination after their algorithm automatically rejected female applicants over the age of 55 and male applicants aged 60 and up.
    • Companies may be responsible for exclusionary practices that render unfair decisions, therefore liable for unfavorable consequences to those affected. 

Lawmakers are Changing the Game

U.S. lawmakers introduced the Algorithmic Accountability Act of 2019, which would direct the Federal Trade Commission (FTC) to require entities to perform "impact assessments" of automated decision-making systems and require companies to collect data, analyze the risk of discrimination, and document their safeguarding measures. The effort aims to ensure that mitigation strategies addressing disparities in algorithmic outcomes are actually deployed.

  • FTC Chair Lina Khan launched an inquiry into generative AI investments and partnerships among companies such as Alphabet, Amazon, Anthropic, Microsoft, and OpenAI. 
  • The FTC issued a joint statement titled Enforcement Efforts Against Discrimination And Bias In Automated Systems, in collaboration with the Consumer Financial Protection Bureau, the Department of Justice, and the Equal Employment Opportunity Commission.
  • In January, the FTC prohibited Rite Aid from using facial recognition technology for surveillance after the company failed to prevent harm to consumers.

Recommendations – Considering Mitigation Policies to Prevent Distrust

As corporate reliance on and investment in AI technologies grow, businesses must develop policies that mitigate bias, both to meet legal and ethical obligations and to prevent public distrust. To build public trust, algorithmic models must be explainable, transparent, and regularly audited. Concentric's cyber and intelligence teams are here to help if you have questions about AI modeling and your training data. We also recommend considering several factors when addressing the issue, such as:

  • Understanding your training data: Ensure the data is holistic and free of labels or classes that may harm end-users. 
  • Structuring data gathering: Allow multiple labels for a single data point to make your model more flexible and avoid forcing a single standard outcome.
  • Deploying diverse teams in machine learning development: Leverage diversity in the workplace to challenge your model and pose a variety of questions to determine the outcome.
  • Maintaining quality assurance: Review results in real time to ensure consistency and identify problems early-on to find solutions with greater ease. 
  • Planning for feedback: Continuously review your model and audit for instances of bias. Improve the model’s performance, constantly iterating toward higher accuracy. 
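The auditing steps above can be sketched as a simple check on a model's decisions: compute each group's selection rate and flag large gaps between groups (a demographic parity check, one common fairness metric). Everything in this sketch is hypothetical, including the group names, the sample decisions, and the review threshold; a real audit would use your own decision logs and a metric chosen for your use case.

```python
# Minimal bias-audit sketch: compare selection rates across groups.
# All data, names, and thresholds here are illustrative, not prescriptive.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical decision log: (demographic group, was the applicant approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)   # group_a: 0.75, group_b: 0.25
gap = parity_gap(rates)              # 0.50
if gap > 0.2:                        # review threshold is a policy choice
    print(f"parity gap {gap:.2f} exceeds threshold -- flag for human review")
```

A check like this fits naturally into the "quality assurance" and "planning for feedback" steps: run it on each batch of decisions in production so disparities surface early rather than in litigation.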
