
CognitiveScale Blog

Systemic Bias in Financial Services: Fixing a Growing AI/ML Problem

By admin Jul 16, 2020 9:53:28 AM


The Current AI/ML Situation

As computational analytics, including Artificial Intelligence and Machine Learning (AI/ML), gain traction and near-essential status in processing massive volumes of data, their ubiquity, and their well-documented capacity for incorrect outcomes, have bred skepticism and distrust of the algorithms behind them. Nowhere is this truer than in financial services, where thousands of accurate risk assessments and fiscal decisions, such as those for credit cards and loans, are the lifeblood of banks and credit institutions serving consumer needs.

Folklore Status for Public Inequalities

While many applications based on computational analysis do deliver better operational processes for financial institutions and faster results for consumers, the highly public, blatantly biased results shared and shouted across the internet have placed these inequities in the spotlight. Consider:

  • A Black professional woman with good credit and substantial financial assets was denied a rental by a major car rental corporation, and the company had no means to explain, defend, or corroborate its decision when she challenged it. Name the bias: Racism?
  • A man and a woman applied for the same credit card and received substantially different credit limits and terms, all favoring the man. The inequity was uncovered only because they were married, and ironically, the wife had the better credit score and greater assets. Name the bias: Sexism?
  • Hundreds, perhaps thousands, of credit card and loan applications denied with nothing but an anecdotal explanation from the lender for the negative decision. Name the bias: Classism?

Regardless of the reason, these shortcomings of AI/ML in fiscal scenarios often leave financial institutions red-faced and flummoxed, and consumers frustrated and furious. Both outcomes breed consternation and erode trust, especially in today's hyper-aware climate of inclusivity and equality, where bias has no place.

Current Data Issues and Realities

As these examples suggest, heavier reliance on machine learning to automate financial processes exacerbates issues on both sides: providers need tools to better vet both existing and new systems for accuracy and fairness, and consumers need proof that the industry is working to rectify these sometimes-covert biases.

Not to absolve the financial services sector, but other industry trends and realities do make their task more challenging:

  • AI/ML implementations may have been created and trained on legacy models and data that are no longer applicable or valid.
  • Unlike a human being, a skewed AI decision-making model is often difficult to retrain effectively without substantial effort.
  • Models may be reaching conclusions from obsolete datasets, datasets that are now notoriously (and legitimately) locked down in an industry built on proprietary information.
  • Added regulations, such as the General Data Protection Regulation (GDPR), further protect access to and use of personal data, limiting the opportunity to refine AI applications with the latest records.

All these considerations add layers of complexity and difficulty to the task at hand.

Facing Negative Bias Outcomes

High-profile bias in AI/ML results, compounded by an inability to explain decisions to consumers, can only produce negative outcomes. While detrimental to the consumer, these become critical business issues for the financial provider:

  • Lack of organizational credibility and loss of customer trust and loyalty
  • Loss of business and associated revenues
  • Bad public relations and negative social media coverage
  • Legal ramifications for racism, sexism, xenophobia
  • Fines for lack of regulatory compliance
  • Inability to roll out new AI/ML applications due to lack of confidence in outcomes

A Bias-Busting Solution: AI Trust as a Service

Is there a way to ensure accuracy, transparency, and trust in AI/ML implementations for financial services and their customers? CognitiveScale has developed one: its Trust as a Service offering, Cortex Certifai.

Adopters of AI/ML are projected to share in an immense worldwide profit pool of $1 trillion by 2030, according to industry analysts. But to fully embrace an AI system, organizations must be able to trust and defend its decisions as unbiased. CognitiveScale addresses this critical need for trust in AI/ML. Its first-to-market data and AI/ML model vulnerability detection and risk management product, Certifai, evaluates models across six key dimensions: fairness, reliability, privacy, inclusiveness, transparency, and accountability. Certifai also produces an industry-first Trust Index, similar to a FICO score.
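To make the fairness dimension concrete, the sketch below computes a disparate impact ratio, one of the simplest group-fairness checks a model-auditing tool might run. This is a hypothetical illustration, not Certifai's actual method: the function, the toy loan-approval data, and the 0.8 threshold (the common "four-fifths rule") are invented for the example.

```python
# Hypothetical illustration of a group-fairness check; NOT Certifai's method.
# The data, function name, and threshold below are invented for this example.

def disparate_impact_ratio(outcomes, groups, favorable=1, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group.

    The "four-fifths rule" of thumb flags ratios below 0.8 as potential
    evidence of disparate impact.
    """
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in selected if o == favorable) / len(selected)

    unprivileged = next(g for g in groups if g != privileged)
    return rate(unprivileged) / rate(privileged)

# Toy loan-approval data: 1 = approved, 0 = denied
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 here: well below 0.8
```

In this toy data, group A is approved 75% of the time and group B only 25%, yielding a ratio of 0.33, the kind of disparity that a fairness audit would surface long before a denied applicant takes to social media.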

The Certifai solution adds confidence in AI/ML results and provides transparency through explainability of AI models and their outcomes, without requiring insight into model internals or the underlying datasets. It removes many of the major barriers to data and AI/ML adoption, helps restore business credibility, minimizes exposure to lost revenue and reputational damage, and rebuilds confidence that the AI/ML community is moving in step with the broader social movement to eliminate all forms of bias.

For more information about Cortex Certifai, visit the CognitiveScale website.
