

Mitigating Bias in AI/ML Outcomes: Adding Confidence, Minimizing Risk

By Mark Robbins Aug 5, 2020 9:41:40 PM


If you’re in banking, finance, data science, or high tech, you’ve no doubt heard the tale of the Apple Card debacle, in which a high-profile technology couple applied for the same card with similar credit histories (and some financial factors even weighing in the wife’s favor), yet were granted astoundingly different credit lines. No need to point out the “winner,” but the lines differed by a factor of 20X. With the card backed by the prominent bank Goldman Sachs and the results aired on Twitter, the disparity became big news, now of almost legendary status. The situation was exacerbated by the bank’s inability to justify the rationale behind the seemingly inequitable, gender-skewed results of its AI/ML-based consumer application and approval model.

Facing Hard Facts

Let us consider some of the ramifications of this situation, none of them good, and all of them challenging:

  • Goldman Sachs, even after public denials, was left holding the bag for perceived gender bias in its data-driven credit card application and authorization process.
  • The bank was left flat-footed, with no defensible recourse, because it could not explain its consumer system’s logic or the results it produced, even while claiming the model had been thoroughly vetted.
  • Even though the input data did not include protected gender information, as Goldman Sachs claimed, academics note that simply removing a characteristic from the input data does not de facto eliminate the potential for biased results; correlated “proxy” features can carry the bias back in (see the short sketch below).
  • Restricted access to data, especially in financial services, insurance, and health care, further complicates the problem.
  • Along with negative publicity in major press outlets, the situation caused incalculable damage to the business, including a sullied brand reputation, eroded consumer trust, and the reality of a federal government investigation.

None of these are damage-control items any organization wants added to its list of AI/ML justification and remediation tasks.
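To make the “proxy” point above concrete, here is a minimal, hypothetical sketch using synthetic data (not the actual Apple Card or Goldman Sachs model) showing how a model can produce gender-skewed approval rates even when gender is never supplied as a feature:

```python
# Hypothetical illustration: a model trained WITHOUT a protected attribute
# can still produce skewed outcomes when other features act as proxies.
# Synthetic data only -- not the Apple Card / Goldman Sachs model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model): 0 = male, 1 = female
gender = rng.integers(0, 2, size=n)

# A "proxy" feature that happens to correlate with gender (e.g., a
# spending-category mix), plus a genuinely relevant credit score.
proxy = rng.normal(loc=gender * 1.5, scale=1.0, size=n)
credit_score = rng.normal(loc=680, scale=50, size=n)

# Historical approvals were influenced by the proxy, baking bias into the labels.
approved = ((credit_score - 680) / 50 - 0.8 * proxy + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([credit_score, proxy])   # note: gender is NOT a feature
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g, label in [(0, "male"), (1, "female")]:
    rate = pred[gender == g].mean()
    print(f"approval rate ({label}): {rate:.2%}")
# Approval rates differ sharply by group even though gender was never an input.
```

Even in this toy setup, the learned approval rates diverge between groups, which is exactly the failure mode that dropping the protected column does not fix.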

Apple Card: Only One Example of Broader Bias

And as high profile and technology-germane as the Apple example might be, the broad truth remains that biased results from AI/ML implementations exist in virtually every industry and across many data input dimensions, however different their intent or purpose. As more decisioning algorithms make their way into the everyday functioning of thousands of organizations, the opportunity for negative impact and increased risk multiplies as well, and we will see increased oversight and scrutiny, particularly in regulated environments. Additionally, the COVID-19 pandemic has accelerated the demand for digital interactions, magnifying the focus on trust in a new world stripped of most human interaction and now reliant on intelligent decision-making alternatives.

“Explainability” as the End Game

So, what does an organization need to do to prevent bias from occurring and to ensure that decisioning systems meet acceptable levels of risk, trust, and transparency? It is a tough question, and a hard problem to solve. Every user of intelligent systems should be able to provide evidence that their systems are (1) unbiased and fair and (2) “explainable.” There is a growing realization that controls and transparency need to be a business culture imperative, and that they should be the groundwork of every intelligent system architecture. Implicit bias, or “bias through unawareness,” is no longer an excuse or defense for skewed results from flawed AI/ML black-box learning models or poor input data.

New Challenges, New Technologies

Now there is contextual help in validating confidence in AI implementations. Innovative technology, specifically Cortex Certifai from CognitiveScale, delivers added visibility and trust in existing and planned AI/ML solutions. It delivers the desired explainability, removes the mystery from previously undecipherable learning models, and offers informed decision points for risk mitigation, all presented in graphically simple, easy-to-understand terms. Certifai shines a light on the issue of inherent AI/ML bias and the exposure that follows. It is not designed to resolve the specific problem but to provide the insight necessary to handle issues early and to validate an acceptable level of risk in, and confidence with, intelligent system outcomes. It becomes your partner in arriving at a favorable fairness score, again with visualizations and subtle decision-point highlights to assist.

Searching for and Applying a Fairness Score


Certifai utilizes open-source data to scan and create an awareness model, providing a fairness rating for review and evaluation. Classifications such as age and status (including gender, where available) are ranked accordingly and factored into the overall Fairness Overview that Certifai calculates and presents visually. Each organization must then determine the impact or acceptability of these fairness scores, and whether remediation or corrective efforts are necessary based on the results. This catches anomalies before production, avoiding the kind of ramifications experienced with the Apple Card.
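Certifai’s own scoring is proprietary, but to illustrate the kind of group comparison a fairness rating encodes, here is a minimal sketch of one widely used measure, the disparate impact ratio (the “four-fifths rule”). The function name and data below are hypothetical and are not Certifai’s implementation:

```python
# A minimal sketch of one common group-fairness measure -- the disparate
# impact ratio -- used purely for illustration; it is not Certifai's
# actual fairness calculation.
import numpy as np

def disparate_impact_ratio(predictions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates between the two groups (0 and 1).

    A value near 1.0 indicates parity; the common "four-fifths rule"
    flags ratios below 0.8 for review.
    """
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example with hypothetical model outputs (1 = approved, 0 = declined)
preds = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])
grp   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # e.g., gender, where available
print(f"disparate impact ratio: {disparate_impact_ratio(preds, grp):.2f}")
```

In practice, an organization would compute a measure like this per protected classification and then decide, exactly as described above, whether the result is acceptable or requires remediation before going to production.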

More to the Certifai Story

There’s a lot more to the Certifai story, both in seeing how it works and in understanding how it can inform your organization, keep you out of the headlines, and spare you from having to deliver a mea culpa to valued customers. My colleague Aldo Arizmendi and I recently presented an AI Tech Talk, Examining the Apple Card Gender Bias Use Case and How to Prevent It. The recorded web event (it is only 30 minutes) is now available on YouTube and is the perfect starting point to further the conversation around adding confidence and mitigating risk surrounding your AI/ML models. Or visit our website, where a free trial awaits! See for yourself how Certifai delivers practical, forward-thinking, and simple-to-implement results, without obligation.
