"You can almost think of AI as a supply chain problem where the upstream work is really around data. And so data quality becomes really key." It was a treat for Matt Sanchez, our Co-Founder & CTO, to be interviewed by Intelligent Automation Radio for a podcast episode about responsible AI, Trust as a Service, and more of what we do here at CognitiveScale. AI and ML solutions must cycle through numerous iterations before they deliver trusted data, trusted decisions, and trusted outcomes. As time goes on, AI continues to learn and improve, and that's what we want to reach in order to mitigate risk of bias and unfairness in models.
Google Podcasts: https://podcasts.google.com/feed/aHR0cDovL2ZlZWRzLnNvdW5kY2xvdWQuY29tL3VzZXJzL3NvdW5kY2xvdWQ6dXNlcnM6NDkzNzA2Nzk2L3NvdW5kcy5yc3M/episode/dGFnOnNvdW5kY2xvdWQsMjAxMDp0cmFja3MvODU3Mzg3MTA3?ved=0CAkQ38oDahcKEwjo65q6ldLqAhUAAAAAHQAAAAAQAw
Guy Nadivi: Welcome, everyone. My name is Guy Nadivi and I'm the host of Intelligent Automation Radio. Our guest on today's episode is Matt Sanchez, Founder and Chief Technology Officer of CognitiveScale, an enterprise AI software company. CognitiveScale is number one in AI patents among privately held companies and number four overall since 2013, with a focus on helping clients to pair human and machine. Prior to CognitiveScale, Matt was one of the leaders at IBM's Watson Labs, which is of course, part of IBM Research, one of the largest industrial research organizations in the world. So with such an accomplished track record in the field of artificial intelligence, Matt is someone we absolutely had to have on our show. He's been kind enough to carve some time out from his understandably busy schedule to join us and share his considerable insights with our audience. Matt, welcome to Intelligent Automation Radio.
Matt Sanchez: Well thanks, Guy. I'm glad to be here and appreciate you taking the time to discuss some of these topics today. It's certainly an interesting time for us in the field of artificial intelligence, so glad to be here.
GN: So let's start by talking about something very interesting that you're an advocate for: responsible AI. Can you please define that, Matt, and explain what the components of responsible AI entail?
MS: Sure. So responsible AI at its core, for us, is about wanting our clients to have a better understanding of how to maximize the value of AI while minimizing the risk, and the risk could be to their business and it could be to society. And so we believe that you need to have tools to really handle this. It's not something that many businesses are equipped with today, and these tools need to be able to automatically detect, score and mitigate risks that come from using AI and related technologies to automate decisions or to help augment decisions that are happening in the enterprise. And so we want to make AI transparent, trustworthy, and secure by providing these tools. And responsible AI is really about leveraging those sorts of things to make sure that these systems we're creating are not just opaque learning machines, but they're actually trusted, controlled, intelligent systems that can actually benefit individuals, organizations, and society.
And we think there are really six key components to responsible AI. We call them trust factors and we really talk about it in terms of it being a trusted AI framework. And these trust factors are things like effectiveness: making sure that the AI systems and the models that we build in these systems are continually generating the optimal business value. A number of studies done recently have talked about the difficulty in making sure that AI models deliver ongoing business value. And so that continues to be a challenge, but beyond business value, there are risks that also come with it. And so these trust factors go beyond just understanding business value and look at things like explainability. How explainable are the decisions that these AI systems make to human users? We need a very simple explanation for how an automated decision was made. What about bias and fairness issues?
How do we know if there's bias in these systems? How do we test for it? How do we measure it if it's somehow hidden or learned, inferred by these AI models? Can we test for that? Robustness, making sure that these systems are secure, that adversarial attacks on these systems can be understood. We can actually test for these weaknesses in these AI systems. Data risks. Data really is the fuel for these systems and if it's tainted with bad information, right, it's a garbage in garbage out problem. We need to be able to detect that and data is constantly changing. So this isn't a one-time thing. It has to be monitored. And then finally compliance. There are legal, ethical considerations when we use automated decisioning. Many countries are actually starting to define very specific laws around automated decisions and the use of algorithms. And so compliance is going to be a continually changing landscape, but one that's increasingly important for customers using AI.
GN: Matt, you've spoken about the need for data to be "nutritious, digestible, and delicious," end quote, which by the way, is how I like to describe my wife's cooking. What did you mean by nutritious, digestible and delicious data?
MS: Yeah, so, I think somebody you had on your podcast in the past, Lee Coulter, who's a good friend of ours and someone we've actually talked with on this topic, kind of came up with this set of things and tried to create an analogy for what it means to really power an artificial intelligence system or to power machine learning. And we were really focusing on the data. So delicious really means the right variety. I need to make sure that the data I have has the right inputs. It has the right conditions. It has the right outputs. If it doesn't have that, I know I don't have complete information. So whatever I'm learning from that, it's not going to be very good. My results aren't going to be very good. Digestible means that I have to be able to actually consume the data. A lot of data that was created in enterprises is not digestible by machine learning algorithms.
So the structures that are created have to be both useful and usable by the model that we're creating with it. And it has to be free from any sort of contaminants as well that could cause the system to reject that information. And finally nutritious really means data that really is sustenance for the main purpose of that model. It contributes either positively or negatively to the inferences we're making, but it's not just noise. It's not just filler. It actually needs to be the right stuff. And nutritious means that our confidence in the predictions that these systems are making is growing over time. We call that trusted decisions. There's transparency and trust in those decisions. And then finally nutritious also means that the data itself is not poisonous. There's no leakage of private data that was unintended or biased information. And so we talk about this as a high level framework to really think about your data because without the right data, AI really cannot succeed.
GN: Matt, as I'm sure you're aware, AI projects have an unacceptably high failure rate, as much as 85% according to one report. In your experience, what are the biggest reasons AI projects fail and what can be done to reverse poor outcomes?
MS: Well I think there are three key causes of failure in these projects. One, of course, as I just discussed, is data quality. And if I can't really get a handle on data quality, then everything else downstream from that fails. You can almost think of AI as a supply chain problem where the upstream work is really around data. And so data quality becomes really key. The second part, which is really one level downstream from data, is modeling and model validation. We've heard from clients that it can take upwards of a year to build one machine learning model and get it into production. And of that time, maybe half is actually the technical work. The other half is validating that the model is actually trustworthy, that it's compliant, that it actually delivers the right business value and that we can prove that, and that can trip up these projects, essentially stalling them out in the lab. And then finally business outcomes: making sure on a continual basis that my AI systems are measurable against the business KPIs that they're designed to solve for.
That's the only way to make sure that the investments in those AI systems are actually paying off. And so these three problems really trip up a lot of projects. And what you really need to solve for this is first you need trusted data. You need data that is free from bias, that has the right nutritional value to solve the problem and is ready for machine learning. You need trusted decisions. So we need to make sure that the decisions that these systems are making have a level of transparency and explainability built into them. And then finally we need trusted outcomes. We need to know that and have full transparency from the business side that the AI systems are actually generating value.
GN: Matt, there are, as I'm sure you know, some concerns cropping up about the misuse of AI and machine learning, deep fakes being just one example. Do you see any economic, legal or political headwinds that could slow adoption of these advanced technologies, or is the genie out of the bottle at this point to such an extent that they just can't be stopped and perhaps not even effectively regulated?
MS: Yeah. So, I think this is a continual challenge in the field of artificial intelligence and in a lot of potentially other related fields. And certainly, there are a lot of opportunities for misuse of these technologies. And I think we will continue to see that by bad actors. Now, that being said, I think most corporations and governments are actually going to, or are incented if you will, to use AI in a responsible manner. And the reason for that is that it's a brand trust issue and it's a public safety issue if you're in a government agency. And brand trust has been shown to be a very costly thing to lose in terms of dollars and cents. And so at the end of the day, if the consumer doesn't trust the brand, they stop using that brand.
That results in literally trillions of dollars in lost revenue globally every year because of trust issues. And this can exacerbate those trust issues. If you're using AI in a way that is not trusted, your brand will erode very, very quickly and consumers are keenly aware of this. So that being said, I think to the extent that organizations can address the ethics question around AI, what are your principles as an organization that you're going to adhere to? Publish those principles and then have a way of actually showing that you're following them. I think that's one way organizations can sort of get around the fear, if you will, that they're somehow using AI for evil in the back office. But also from a regulatory standpoint, we're seeing more and more examples of consumer data protections. That's usually where it's starting, with things like GDPR in Europe, but now also in California. January 1st this year, the California Consumer Privacy Act went into effect.
And now consumers are gaining more control over their own data. And that's really where it starts and that's regulation and laws that are being passed. And there are many other legal ramifications for organizations that try to use AI in the wrong way, and particularly try to use data in an illegal way. And then finally, I would say on the public safety side of things, I do think there are genuine public safety concerns with things like autonomous vehicles and other sorts of autonomous technologies that will be regulated at some point. We will have to expand the regulations that exist to protect the public from these technologies, just like any new technology that surfaces in the marketplace. And then of course, there's always the bad actors who try to use technology for their own criminal purposes. And that's going to happen whether business uses AI or not. In that sense the genie is already out of the bottle. The technology is out there.
GN: Your company CognitiveScale describes its product as, "the world's first automated scanner for black box AI models that detects and scores vulnerabilities in most types of machine learning and statistical models." There are some who say that AI algorithms should be audited much like a publicly traded company's financial statements. Could CognitiveScale be a virtual operator for auditing AI algorithms?
MS: Yeah, that's a great question and, in fact, one that we talk to our customers about quite often, and as it turns out, auditing is already starting to occur. There are large organizations that have had various forms of, I'll call them, AI audits, and particularly auditors are starting to look into this from a business risk standpoint. How is the use of AI potentially introducing new risks into the organization, and are they managing those risks appropriately? You can think of banks, for example, needing to answer these questions from a regulatory standpoint. And so AI auditing, if you will, is becoming an increasingly important topic. Now how do you do it? First of all, I think you have to understand the ethical principles, regulations, laws, et cetera, that are applicable for your business and for your jurisdictions of interest. Regulations are specific to jurisdictions. In the United States, for example, if you're a healthcare insurance company and you operate in 50 states, you probably have 50 different sets of insurance codes that talk about discrimination and they don't all talk about discrimination in the same way.
And so now you have to understand how to apply that: what does it mean for your business? And a lot of organizations, what they're doing to get ahead of that is to define and publish their own AI principles. You can see large tech companies like Google, Facebook, and others who have published some of these principles, but now you can actually start to see banks and healthcare insurance companies and other types of companies starting to publish their AI principles. What are their values as a company and how are they using AI responsibly? And so that's sort of the second part. First understand what are the applicable regulations and laws? Second is define your principles. And third, have the measurements and controls in place to prove that you're being compliant with these rules and regulations.
And on that last point, this is where our product, Cortex Certified, can really help. Because one of the things we discovered is that within an organization, the technical people, the data scientists and the engineers, speak a very different language than the compliance officers and the business owners when it comes to AI. And so we needed a common language so that they could all talk. And this is something we call the AI trust index. Think of it as almost a single scoring mechanism for measuring algorithmic risk and breaking it down into those six trust factors that I discussed earlier, where we can now get a very simple score, almost like a credit score for an AI, that tells me how trustworthy it is. And so instead of just looking at the technical attributes, the statistical attributes of these systems, I now can look at the trust attributes or the ethical attributes of these systems. And this is enabling a common language to be able to then facilitate measurement and ultimately controls and audit in these organizations.
GN: Interesting. Earlier this year, Matt, there was an article in MIT Technology Review about artificial general intelligence or AGI. In that piece, the author, Karen Hao, who's been on our podcast, wrote quote, "There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist. It's just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm. Deep learning, the current dominant technique in AI, won't be enough". Matt, what do you and your team at CognitiveScale think it will take to achieve AGI?
MS: Yeah, so I'll preface this by saying that for our team at CognitiveScale, AGI is interesting and it's a topic that's worth debating. However, my view is this is not where the current opportunity in the market is. And so while it's interesting, we don't spend a whole lot of time in the AGI world. But that being said, I do have some opinions that I can share. And I think there's a couple of different ways of looking at it. One is, if AGI is supposed to be about creating a system, creating technology that can really work like the human brain, meaning think and learn the way that the human brain does, then we have a long way to go, like maybe 50 plus years of more work to do before we even get close. And I'd point to two things that humans do that machines don't do today at all, and that we don't even know how to do at the scale that the human brain can. And the first is just common sense understanding or common sense reasoning.
And this was something that I learned about when I was at IBM, because we were certainly trying to figure out how to teach Watson to have a little bit more common sense when it was answering questions, and it's challenging. Some of the things we learn as human beings, the inflections in our voice, the subtleties of body language, things that are just intuition to humans, are very difficult for machines to understand, and encoding that information, encoding that data in a way that machines can understand is really challenging. So, in that sense, I would agree with the view that we need new technologies to solve for this, because the encoding of that is still a big challenge, even with deep neural networks. And then things like emotion are also very challenging and they factor very deeply into how the human brain works. So if that's the definition of AGI, I think we've got a long way to go.
If AGI really at an algorithmic level is supposed to be about generalization, so generalizing, showing that one algorithm can solve for multiple tasks, different types of tasks without having to be explicitly retrained, if you will, or rebuilt to solve those tasks, then I think we're actually on our way. I think there's been some great advances along this dimension with reinforcement learning and some other technologies. And so there are certainly a lot of interesting advances in this space, but my view is, the definition of AGI that I've always sort of looked at really talks about it being more of this learning, really simulating, understanding and learning in a way that's similar to how the human brain can reason and learn and generalize. And I think we've got a long way to go before we are even close to that.
GN: Okay. So perhaps that's a good segue into my next question. Overall, Matt, given your vantage point, what makes you most optimistic about AI and machine learning?
MS: So the number one thing that I get excited about with AI is when I see real business outcomes. So when I see that by using AI, we can start to save a lot of money for our customers, or we can help them improve the customer experience that they want to deliver. And when I can see that in terms of dollars and cents, for me, it shows that AI is working, that it's worth the investment and it actually is something that's worth pursuing as a core capability in the business. And so I think that's, at the end of the day, what I get excited about, and we see that with a lot of our efforts and with our customers. And in fact, we make that a core part of our methodology for how we work with clients: to really focus on the outcome first and to really challenge our clients and ourselves to define what that outcome is and how we achieve it. What does good look like?
Why is it better than what you're doing today? And I think if you start from there and when you see the result and you can calculate the value, it's really exciting. And I've seen so many examples of that now over the years that I'm really excited about the future. I think we can continue to improve and to apply that. And it's really this iterative process where the first time you turn the crank and see this result, it's very exciting and it makes you want to do it again and again. And as you do that, your data gets better, your techniques get better, your infrastructure gets better, and you start to see things go faster and faster. And we're seeing examples of that today with a lot of our clients, and that's what I'm really excited about.
The second thing I'm really optimistic about is that ethical AI, and the concerns around it, are really top of mind, both for consumers and for governments. Perhaps not as much the US government, although recently there's been more movement there, but in other countries, Canada, Europe, even the Middle East, there's a lot of proactive effort from the government side to really define the principles, again at a societal level, around AI. And I think that's really encouraging because to me it means that people are starting to understand how to define this and that it's important. And I'm also seeing it at the level of CEOs and boards within very large corporations because again, they're worried about risk and they're worried about brand trust. And they know that AI has both the potential and the power to be very valuable, but also very, very risky if they aren't managing these things. So I've seen an increase in that dimension.
And I think that you can point to a few events that have occurred, very public events that have occurred, that you can kind of see why these issues are top of mind and things like data breaches, data misuse by certain organizations and social media outlets and people being put in front of Congress to explain themselves. I mean, these are things that no CEO or board wants to be a part of. So a lot of these challenges now have been recognized and organizations are starting to invest in making sure that they can get the right outcomes from these systems and do it in a safe way. So that's why I'm excited about it. I'm seeing that trend increase, both of those trends increase.
GN: That is all encouraging. Nevertheless, as a corollary to my last question, I've also got to ask what leaves you most concerned about AI and machine learning?
MS: Yeah. So a couple of things. One, I would say the first one is inflated expectations. AI is not a magic wand to solve all your past data sins is what I like to tell people. So back to my comment on garbage in, garbage out, if your data is not nutritious and digestible, AI is not going to solve that for you magically. And sometimes it really comes down to the fact that the information you believe you have is subjective. A simple example of this would be if I'm trying to solve an image classification problem, I want the machine to tell me if the image I'm looking at is one thing or another. If I put the same image in front of two different human experts and those two experts disagree on the classification, then effectively what we have is highly subjective information. And it's likely that AI is not going to provide a whole lot of value there. It might. AI could provide some additional information that could help those human experts, but it probably isn't a situation where it can automate that decision in place of human intervention.
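The expert-disagreement test Matt describes can be quantified with a standard inter-annotator agreement statistic such as Cohen's kappa. Below is a minimal, self-contained sketch; the cat/dog task and the two annotators' labels are purely illustrative, not from the interview, and kappa is a generic statistic rather than anything specific to CognitiveScale's tooling:

```python
# Two hypothetical expert annotators label the same 10 images as "cat" or "dog".
a = ["cat", "cat", "dog", "dog", "cat", "dog", "cat", "cat", "dog", "cat"]
b = ["cat", "dog", "dog", "cat", "cat", "dog", "dog", "cat", "dog", "cat"]

def cohens_kappa(x, y):
    """Agreement between two raters, corrected for chance (1.0 = perfect)."""
    labels = sorted(set(x) | set(y))
    n = len(x)
    p_observed = sum(xi == yi for xi, yi in zip(x, y)) / n
    # Chance agreement: probability both raters pick the same label at random,
    # given each rater's own label frequencies.
    p_chance = sum((x.count(l) / n) * (y.count(l) / n) for l in labels)
    return (p_observed - p_chance) / (1 - p_chance)

print(round(cohens_kappa(a, b), 2))
```

A kappa near 1.0 suggests labels objective enough to automate against; a low kappa (here 0.4, conventionally "fair to moderate" agreement) is the quantitative version of Matt's warning that the ground truth itself is subjective.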
So we have to always think of AI as not a magic wand, but it can help. It can certainly help. It can potentially uncover some of those ambiguities and actually improve upon them and make your data better and make your processes better. But inflated expectations is sort of the one thing that always has me worried. Second thing is over-hyped fears. As we said, I think we do have the ability to put the right guardrails around AI. We do have the ability to measure things like explainability and robustness and bias. And I think corporations will do this because it is a brand trust issue. It is a legal compliance issue. But the fears that corporations are going to start using AI to somehow abuse people's information, those are real situations, and a lot of the time it's unintended side effects. And I think that's the challenge. I think what we need to really focus on is that it's not necessarily ill intent, although that does happen, of course.
There are always bad actors and I think we all hope they are the exception and not the rule. But given that there is a way to measure these things, I think we have the ability to actually put the guardrails around these systems. And I think it's important for us to work with leaders, leaders in the government and business leaders, to really make sure that those practices, those controls are put in place. But those are the things that worry me the most: that the expectations are inflated and that fears are also somewhat over-hyped, sometimes in a science fiction type of way and in other ways. There are some real fears. There are some real issues that have occurred, both bias related issues, where we like to think of it as this fairness through unawareness fallacy, which basically says, "Well, of course I'm making a fair decision because the system doesn't even understand a concept like gender or age or ethnicity."
But what we actually can prove is that sometimes those systems, even though you're not explicitly giving them that information, because of the way that machine learning works, sometimes learn those patterns and can develop biases that you don't want. And so the idea that you can just be unaware of these things, and that makes you fair, is actually false. And I think it's those types of understandings that can overcome some of those fears.
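The fairness-through-unawareness fallacy Matt describes can be demonstrated in a few lines: a toy model that never sees the protected attribute still reproduces historical bias by learning from a correlated proxy feature. Everything below, the loan scenario, the zip-code proxy, and the rates, is a synthetic illustration invented for this sketch, not CognitiveScale's method:

```python
import random

random.seed(0)

# Synthetic loan data: 'group' is a protected attribute (e.g. a demographic),
# 'zip_code' is a seemingly neutral feature that happens to correlate with it.
def make_applicant():
    group = random.choice(["A", "B"])
    # 90% of group A lives in zip 1 and 90% of group B in zip 2,
    # so zip code is a proxy for the protected attribute.
    zip_code = 1 if (group == "A") == (random.random() < 0.9) else 2
    # Historical approvals were biased: group A approved 80%, group B 40%.
    approved = random.random() < (0.8 if group == "A" else 0.4)
    return group, zip_code, approved

data = [make_applicant() for _ in range(10_000)]

# "Train" a model that never sees the protected attribute: it just learns
# the historical approval rate per zip code and approves if that rate > 0.5.
rate = {}
for _, z, ok in data:
    n, k = rate.get(z, (0, 0))
    rate[z] = (n + 1, k + ok)
approve = {z: k / n > 0.5 for z, (n, k) in rate.items()}

# Audit by group: predicted approval rates still diverge sharply, because
# the model reconstructed the protected attribute through the zip proxy.
for g in ("A", "B"):
    preds = [approve[z] for grp, z, _ in data if grp == g]
    print(g, sum(preds) / len(preds))
```

The audit at the end is the key step: dropping the sensitive column from the inputs did nothing, and only measuring outcomes per group, as Matt argues, reveals the learned bias.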
GN: Matt, for the CIOs, CTOs and other IT executives listening in, what is the one big must have piece of advice you'd like them to take away from our discussion with regards to deploying AI & machine learning at their organization?
MS: Yeah. So really I would kind of say it like this. Start with the business outcome in mind. Set realistic expectations with the business around what that outcome's going to look like. Make sure you add explainability and measurement as a first class requirement to these systems. So with the business outcome in mind, also ask how are we going to measure it? Do we have the right feedback loop in the system to really measure this? Make that a requirement of the system, not an afterthought. And be prepared to iterate, to deliver incremental value, meaning you're not going to get it right the first time. You're going to have to iterate. You're going to learn a tremendous amount every time you turn the crank on these systems and they do improve over time. We'd like to say that with AI the first day is the worst day, meaning the very first version of your system probably is the worst it's ever going to be. And this is somewhat unique about AI systems. They improve with time. They improve with that feedback loop being put into operation.
And that's very unique in the IT world because most of the IT systems we build, they realize their maximum value on day one and it sort of then declines over time. AI kind of works the opposite way or it should. And so a big part of that is iterate. Think of it as an iterative process. Start small, and stair-step your way towards incremental business value.
GN: All right. Looks like that's all the time we have for this episode of Intelligent Automation Radio. Matt, it's been a real treat having a marquee name in the field of AI on the podcast today. I think you've really shed some light for our listeners on the black box that is artificial intelligence. And I suspect you provided many of them with new data points to factor into their thinking for their own AI projects. Thank you very much for being on the show today.
MS: Well, thank you, Guy, and appreciate the time and look forward to hearing the podcast when it goes live and following other topics in the space that you're interested in.
GN: Matt Sanchez, Founder and Chief Technology Officer of CognitiveScale, an Austin, Texas company. Thank you for listening, everyone. And remember, don't hesitate, automate.