Samsung SDS’s Patrick Bangert: ‘Trying to Explain How the AI Does It Is a Mistake’

Join Infinia ML’s Ongoing Conversation about AI

Episode Highlights from Machine Meets World

This week’s guest is Patrick Bangert, VP of Artificial Intelligence at Samsung SDS.

Highlights from our conversation include:


“I’ve found in my change management attempts, not all of which were successful, that trying to explain how the AI does it is a mistake. To a business audience, you simply need to present what it does.”

“If, in order to drive a car, you were required to understand how it works, you would never move it.”

“Getting the AI to explain a particular conclusion in a specific case is AI explainability, and I think that’s very important.”

“However, explaining to a business audience what the AI does in generality for all cases or all possible cases is a completely different question, a completely different complexity.”

“There are many cases where you encounter a situation that is simply unsuited to AI. AI can address it, but it shouldn’t.”

Photo by Azamat E on Unsplash

Audio + Transcript

Patrick Bangert:
I’ve found in my change management attempts, not all of which were successful, that trying to explain how the AI does it is a mistake.

James Kotecki:
This is Machine Meets World, Infinia ML’s ongoing conversation about artificial intelligence. I am James Kotecki, and my guest today is Patrick Bangert, VP of Artificial Intelligence at Samsung SDS, the IT services arm under the Samsung umbrella. Patrick, welcome to the show.

Patrick Bangert:
Hi, James. Pleasure to be on your show.

James Kotecki:
So VP of Artificial Intelligence, very cool title, very futuristic. When people hear that title, what do they think you do?

Patrick Bangert:
So first of all, people generally ignore the SDS and think I’m VP of AI for Samsung Group altogether. SDS is the IT company for the group, so we run the data centers, we write the software, and we do most of the AI. Think of the consumer electronics that Samsung Group sells: you have the natural language system called Bixby on your phone, you have a fingerprint scanner, and you have facial recognition to get access to your electronics. There’s image recognition in the medical and autonomous driving domains, and forecasting for our retail arms and the supply chain… We basically cover the entire gamut of AI. I get to be its face, and part of my job description is to be a little bit of a visionary, to see where it’s going, and to run all the projects.

James Kotecki:
What are some of the things that you’re imagining?

Patrick Bangert:
AI used to mean systems that understand the world, and then it got overtaken by machine learning, which is a numerical, neural network-based approach that learns from data and generalizes from it. Nowadays the two terms are used as synonyms, but we are hitting a glass ceiling in the accuracy of machine learning-type models. If you look at natural language models like GPT-3, which came out a few months ago, it gives very sensible responses to a cursory reader. But if you ask it a question of content, like an arithmetic question, you’re likely to get the wrong answer. It’ll be in the form of a nice sentence, but it’ll be the wrong number. These language models do not have any logical understanding of anything. They simply combine words, and if you give them enough text to learn from, the words are combined correctly, but there’s no content.

Patrick Bangert:
Pure machine learning, in cases like this, doesn’t lead to the kind of accuracy that ordinary individuals talking to their computer would expect. So we have to find a way of combining the machine learning approach we have today, which has been successful in many restricted use cases, with the logic-based approach of the 1960s and 1970s. Then I believe we’ll be in a position to get natural language processing that actually makes sense at the level of content. That’s, of course, a research project going on at multiple universities right now. Within five years or so, we can expect some tangible results, and I’m really looking forward to that.
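
To make the hybrid idea concrete, here is a minimal sketch of the kind of combination Bangert describes: questions that end in an arithmetic expression are routed to an exact symbolic evaluator, and everything else falls through to a statistical language model. This is an illustrative assumption, not Samsung SDS code; the routing rule and the `language_model` callable are hypothetical.

```python
import ast
import operator
import re

# Symbolic side: exact arithmetic via a tiny, safe expression evaluator.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def eval_arithmetic(expr: str):
    """Evaluate a plain arithmetic expression exactly (no eval())."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not pure arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def answer(question: str, language_model):
    """Route arithmetic to the symbolic side, everything else to the model."""
    match = re.search(r"(\d[\d\s\.\+\-\*/\(\)]*\d)\s*\??\s*$", question)
    if match and re.search(r"[\+\-\*/]", match.group(1)):
        try:
            return str(eval_arithmetic(match.group(1)))
        except (ValueError, SyntaxError, ZeroDivisionError):
            pass  # not clean arithmetic after all; fall through
    # Statistical side: fluent but unverified text generation.
    return language_model(question)

# `language_model` is any callable, e.g. a client for a hosted model.
print(answer("What is 17 * 24?", lambda q: "a fluent but unchecked sentence"))
# -> 408, computed exactly rather than guessed word by word
```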

James Kotecki:
Talking about change management as a big part of your job, how do you get someone to adopt this new AI-powered technology as part of their workflow? Oftentimes, the easiest way to do that is with analogies to the people you currently have: this will be able to replace a person, or augment a person, or do this specific set of subtasks that a person used to do. Do you find that a useful analogy in the business context?

Patrick Bangert:
Well, saying this AI technology will be able to replace or automate something isn’t an analogy, right? That’s a claim you make, and you’re going to have to demonstrate that the claim is correct. An analogy would be: this AI system works like the human brain in this and that fashion. At that point, you’re trying to explain how the AI works and what it does. And I’ve found in my change management attempts, not all of which were successful, that trying to explain how the AI does it is a mistake. To a business audience, you simply need to present what it does. The business audience, for very good reasons, doesn’t care and shouldn’t care exactly how it works. They should be able to observe that it works. The technical audience must deliver the technology in a way that does not require understanding. It should simply do its job, and only then is it actually really useful. If, in order to drive a car, you were required to understand how it works, you would never move it. So it essentially depends on you having no clue.

James Kotecki:
So where do you stand on issues of AI explainability, AI monitoring, AI auditing, all the different terms that are rising in popularity, especially around the concerns business people have: wait, is my algorithm a total black box? Do I have no understanding of, or even control over, what it’s doing? Is it using biased data? Is it going to get me results that eventually get me in trouble, such that I need to have some kind of eyes on it? How do you think about that?

Patrick Bangert:
Yeah, this is an absolutely crucial part of AI that’s surfacing now, and I believe it’s surfacing too late. It should have been an important topic earlier in the history of AI. The classic use case, obviously, is getting a loan at a bank. You get rejected, and now you ask: why did I get rejected? Ten years ago, a person would have met with you and explained it to you. These days, an algorithm makes the decision and you don’t get an explanation, which is very frustrating. Getting the AI to explain a particular conclusion in a specific case is AI explainability, and I think that’s very important. However, explaining to a business audience what the AI does in generality for all cases or all possible cases is a completely different question, a completely different complexity. And that is the one I was referring to earlier that I think is probably not a good idea to attempt.

Patrick Bangert:
But for a particular data point, the question of why the loan was refused in this case is absolutely crucial. And it is not a question to be answered by me or by a technical person; it must be answered by the algorithm itself. The neural network you deploy for this use case must output not only the result, accept or reject the loan, but also a reason, in a structured form that can be presented immediately.
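
As a concrete illustration of what a reason “in a structured form” could look like, here is a minimal sketch of a per-decision explanation. The loan model, feature names, and occlusion-style attribution are hypothetical stand-ins, not the Samsung SDS pipeline; a production system would more likely use an established method such as SHAP or LIME.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for a loan model; the features and labels are synthetic.
feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = X.mean(axis=0)  # a "typical applicant" used for comparison

def decide_with_reason(applicant, top_k=2):
    """Return not just accept/reject but a structured per-case reason.

    Each feature's contribution is estimated by swapping it with the
    typical applicant's value and measuring how the approval score
    moves (a simple occlusion-style local explanation).
    """
    applicant = np.asarray(applicant, dtype=float)
    score = model.predict_proba([applicant])[0, 1]
    contributions = {}
    for i, name in enumerate(feature_names):
        probe = applicant.copy()
        probe[i] = baseline[i]
        # Positive value: this feature pushed the score up for this applicant.
        contributions[name] = score - model.predict_proba([probe])[0, 1]
    reasons = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "decision": "accept" if score >= 0.5 else "reject",
        "approval_score": round(float(score), 3),
        "top_reasons": [(n, round(float(c), 3)) for n, c in reasons[:top_k]],
    }

print(decide_with_reason(X[0]))
# e.g. {'decision': 'accept', 'approval_score': 0.97,
#       'top_reasons': [('income', 0.31), ('debt_ratio', -0.12)]}
```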

James Kotecki:
Do you have a code of ethics that guides the work that you do at Samsung?

Patrick Bangert:
Definitely. Ethical AI is, I think, the most important topic within AI in recent years, and I’ve written about this on LinkedIn as well. There are many cases where you encounter a situation that is simply unsuited to AI. AI can address it, but it shouldn’t. A recent example is the grading of A-level students in the UK. An algorithm was used to assign the final grades for what is essentially the British high school qualification, the prerequisite for entering university, so if the AI gives you a mistakenly low score, your future career prospects are done. That is unethical, in my personal opinion. The population of the United Kingdom agreed, and the algorithm was withdrawn. So you have cases where the cost of a wrong decision is so high that no matter how accurate your AI system is, the few percent by which it is inaccurate are so damaging that you mustn’t use the AI at all.

James Kotecki:
This is perhaps a silly analogy, but I always kind of wondered, when I was watching Star Trek, why the computers and the robots didn’t just take over everything. Clearly you have systems in that future so advanced that they could autopilot the ship and run everything, so why do you even need people involved at all? And I guess this is one possible explanation for that vision of the future: yeah, people actually aren’t as good or as accurate as the machines in most cases, but it’s so important to have a person there for those edge cases that we just accept, as a society, the need to have people there in general.

Patrick Bangert:
And of course, Star Trek takes the view that the computers don’t take over the world, but there are plenty of movies and series where the AI does. And distinguished scientists, like Stephen Hawking, have appealed to governments around the world, saying that AI is a danger to humanity because it might very well take over the world, so let’s regulate it now. Personally, as an AI practitioner down in the weeds, I can reassure the audience that the AI we have today is so exceedingly far away from being able to take over the world that you don’t need to worry about this. If the AI can’t even do simple things correctly at the moment, like booking appointments while you speak to it on your mobile phone, or driving a car well in all circumstances, it’s certainly not going to be Skynet.

James Kotecki:
Well, we are so glad to have had you on the show today, Patrick Bangert, VP of Artificial Intelligence at Samsung SDS. Thank you so much for being here on Machine Meets World.

Patrick Bangert:
Thank you very much for having me, it’s a pleasure.

James Kotecki:
And thank you so much for watching and/or listening. You can always email the show at mmw@infiniaml.com. Please like this, share this, give the algorithms what they want. I am James Kotecki, and that is what happens when Machine Meets World.