Lockheed Martin’s Chris Benson on Machine Meets World

“AI and ML are really about people.”

Chris Benson, Principal Emerging Technologies Strategist at Lockheed Martin, talks artificial intelligence with Infinia ML’s James Kotecki.

Machine Meets World streams live on YouTube weekly. It’s also available in audio form on Apple Podcasts, Google Podcasts, Spotify, and Stitcher.

Recorded live on August 18th, 2020.

Watch Episode Highlight Clips on LinkedIn

When it comes to AI, “it is up to every citizen to be aware of what is happening.”

Why does Chris Benson believe humans will eventually figure out artificial general intelligence? Because humans themselves are computational machines.

“AI and ML are really about people.”

Show Suggestions?

Email mmw@infiniaml.com.

Transcript

James Kotecki:
And we are live from Infinia ML. This is Machine Meets World. I am James Kotecki here with today’s guest, and I’m going to make sure I get this right, because this is a new title for him, Principal Emerging Technologies Strategist at Lockheed Martin, and the cohost of the Practical AI podcast, which just celebrated 100 episodes. Congratulations and welcome, Chris Benson, to the show.

Chris Benson:
Thank you very much. It’s great to be with you, James.

James Kotecki:
That’s quite an intro, and it’s quite a title that you have. I know that it’s relatively new, but when people hear that title, what do they imagine that you do?

Chris Benson:
Usually conspiracy theories come out and stuff, and they’re thinking, He works for a defense contractor, and people’s imaginations go from there. So I get all sorts of great conversations as a result. I enjoy it. It’s fun.

James Kotecki:
And obviously you’re very deep into artificial intelligence, which is what we want to talk about today, but what does the purview of emerging technologies cover? My imagination was going right to that scene in Independence Day, where they go into that hangar in Area 51 and you see the floating alien spaceship they recovered from Roswell.

Chris Benson:
Yeah. But we don’t talk about that.

James Kotecki:
Of course not, of course.

Chris Benson:
That’s off limits. We can’t do that. So it kind of covers a lot of things. And so in a broad generalization, I think there is a large movement in the DOD at large, but also obviously in the various organizations that support the DOD in the commercial space and in the industrial complex space, recognizing that times are changing. Technology is moving really fast. There is an understanding that a lot of commercial technologies that you will find in all sorts of commercial companies we all know from our day to day lives are valuable in this space.

Chris Benson:
And so commercialization and that integration in with defense in general is a big thing right now. And my job has a lot to do with that. I work for a director who works for the CTO of Lockheed Martin, trying to get technologies into the company that we can benefit from. But also those … you talked about your space aliens thing.

Chris Benson:
The equivalent of that, the things that we do do in secret for the DOD and the intelligence community, there are potentially technologies that you could declassify entirely and find uses for, where if you remove the things that drive the classification, it would be a great technology for the world. And we might move those things out as well. So it’s a two-way door that we’re working there, and it includes kind of everything, really.

James Kotecki:
I do want to say from the outset that you are speaking on your own behalf; you’re not speaking on behalf of Lockheed Martin. But of course you do work at Lockheed Martin, and so your experiences are colored by that. And so we’re just really interested to talk to you about your take on AI specifically, because that’s what we’re here to talk about. When you talk about something like AI ethics, what does that look like at Lockheed Martin? What does it look like in the defense space? Because obviously ethical considerations get really sensitive, really fast when you talk about weapons and defense industry topics.

Chris Benson:
It does. And I will mention, as I address that, that I’ll talk about it from my perspective. As I changed roles, and you alluded to that, I just handed it off. I was the overall AI ethics lead for Lockheed Martin company-wide, and so I have done a lot of work in that space. I’ve moved into a new position and handed it off to somebody else. But as we talk about this, there is a big challenge in AI ethics, really for any company. And it’s not just in the defense world.

Chris Benson:
And that is, we’ve looked at the last couple of years and we see leaders out there figuring out what their principles are. What does AI ethics mean to each of these organizations? And they have set out a set of principles on what those are with some supporting docs. And we have done the same. I don’t think we’ve announced ours yet, but we have completed the same process.

Chris Benson:
The big challenge going forward is how do you take principles and make them real? So your company does something; it produces products and/or services. And those are informed by AI/ML models at this point, because those are becoming pervasive in everything. AI/ML is one of the great enablers. It is not a product unto itself in most cases, but rather something that helps other products and services kind of get to what they need to be, what the vision is for them.

Chris Benson:
And so AI ethics is kind of the constraint. It’s the boundary around that. And to make it real, you have to get it, in our case, into the workflow of an engineer. The person that is out there working on the product and service may have no particular expertise; they might’ve had a training class or two on AI ethics, but that’s not really their primary thing. You really need that person to have it built into their workflow and their tooling. You need, I know it sounds really boring, but you need compliance around that at a corporate or business unit level. You need policies so that your own organization’s policies on how it does business incorporate that in, so that it’s not a thing sticking off to the side.

Chris Benson:
I know in our organization, we have an ethics organization that governs our operations, and that was a big part of it. We needed to both integrate it into engineering and integrate it into our ethics organization. Those challenges are not really specific to our company; they’re really things that industry at large has to tackle. And so that’s where the hard work really comes in: figuring out how you get from principles into the workflow of someone who doesn’t have to be an expert, and yet can fully comply with what’s expected of them.

James Kotecki:
And what’s more difficult here, the principles part or the implementation part? Because it strikes me, and this has been a theme on other episodes of the show, that a lot of times when you talk about AI ethics, it just comes down to questions of ethics. Maybe leaders are having to define ethical principles for the first time, forget about AI.

Chris Benson:
Yeah, that’s true. I think we had an advantage at our organization in that we already had our ethics well figured out. We were very, very, very mature in that space. So I think there is that. A lot of organizations, especially smaller organizations, have not necessarily focused strictly on their ethics. They kind of do it as they go, and they’re very ethical in their behavior, but they may not have formalized it.

Chris Benson:
So you definitely have to kind of start with what ethics means to an organization. But after that, you kind of look at what an AI/ML model is, where it gets its data from, what its applications are, where you are going to use it. Are the people that are going to be the objects of that model in some capacity aware of it? Should they be aware of it? Should they not?

Chris Benson:
I’m in the DOD space. In some cases, we don’t want that person to be aware of it. Whereas if you’re an American company, or a company of any country operating on its own citizens, maybe in a purely commercial sense, then you certainly need to have them informed about things that other technologies may not have brought to bear. AI/ML is a tool like every other technology, but because you’re pulling in data that reflects real life, because that’s what the data input is, it is representing a model of the world upon which that model is operating.

Chris Benson:
And that can lead to all sorts of problems because we have bias all over the place. And the application that you may want to apply that model to, if you’re not being thoughtful, could break laws. It can at least break a company policy, a corporate policy. And minimally, if you have customers, even if you don’t do any of those things, you may violate their expectations. And so it can do real business damage if you don’t consider all those things.

James Kotecki:
You alluded earlier to the issue of transparency, which is a facet of ethics, and often, I think, the one that most people go right to when they think about what AI ethics might mean. Is it auditable? Can I understand what the model is doing? Is it a quote black box? Those terms and those concepts maybe take on a little bit of a different color when you’re talking about it inside of a secure environment where people have security clearances, like Lockheed Martin. So can you talk a little bit more about what that means in your context?

Chris Benson:
Sure. So I mean, the way that I would address that is that you are taking a model that is trying to make an inference about the world in which it’s operating, through the data that you’re training it on first, and then bringing that in. And the first thing that people think of, obviously, is weapon systems and stuff. If you look at the way the defense industry has approached this topic, it’s different from producing commercial products, where privacy might be a huge issue because your customers are expecting that. But that’s why we have so many different versions of AI ethics.

Chris Benson:
If you’re in defense and it’s a terrorist camp, and you’re going to send, I’m just making all this up by the way, this is not real, if you’re going to send a cruise missile and that cruise missile is somehow informed by one of these technologies, then are you going to worry about the terrorists’ privacy? That’s not really a concern. So if you’re only thinking of the commercial space, you may think privacy is a universal thing. And if you look at Microsoft and Google, they have addressed that in a very direct way.

Chris Benson:
We have cases where privacy is hugely important. I expect if I go into a Lockheed Martin facility and there are cameras, and they may have AI models that are analyzing people coming in, there is a privacy issue. I’d like to be aware of that coming in. If it is sending a cruise missile to attack a terrorist camp, it’s a nonissue altogether. So there are a lot of nuances in this space, and a lot of the standard commercial assumptions don’t necessarily apply in the same way that they would in a commercial space with customers.

James Kotecki:
The other fascinating aspect of this, or one of the most fascinating aspects of this, is that you’re working in a highly regulated space, right? Sometimes people throw out terms like the Wild West to describe AI and the lack of policymaker understanding of this space in general. But when you switch over to defense specifically, you mentioned ethics has been a longstanding concern. I imagine that maybe some of these concerns are a little bit more developed and a little bit more mature from a regulatory standpoint where you live.

Chris Benson:
Yeah. So I came from the commercial space. I’ve only been with Lockheed Martin for just under two years as we record this, so this is still quite new to me. And something that I’ve had to adjust to, most certainly, is the fact that there is an enormous amount of law and ethical guidance and outright constraint on what you can and can’t do. I came into this organization and had to learn about things that I had never thought of, like the law of war, which literally is a legal framework that governs how you do that. And there’s the same in autonomy. People are worried about, obviously, Terminator robots flying around, which is, by the way, not very real life in the sense of my day to day world.

James Kotecki:
I mean, Skynet’s the obvious joke, right? But if anyone’s going to do Skynet, people would think it would be you. So you have to actually handle that joke on maybe more of a day to day basis than the rest of us.

Chris Benson:
And I have had people actually accuse me of that. As you know, I’ll get out and do talks just like you do, and I’ve been at conferences. There was one in Switzerland where I had a lady basically go there, and then I had one in London where that happened as well. And it’s not where we are. I mean, we could talk about the limitations of current deep learning models and what they can and can’t do, but people’s imaginations run wild. It’s a great technology, but it’s just a technology. It’s a tool in the toolkit. But yes, people kind of assume that you must be doing Skynet out there, and ask what’s going to save us from the robots that are about to kill us tomorrow. And that’s not real.

James Kotecki:
So do you have any advice for the rest of us who maybe don’t get the question in that pointed of a way, but certainly are faced with Thanksgiving and dinner table conversations around … Is AI evil or malevolent or whatever?

Chris Benson:
It’s a tool. And so my biggest … I’ll say, as somebody who is a practitioner in this space, not only in AI but in the defense world as well, we have a whole bunch of governance. There’s a DOD document that’s publicly available called 3000.09, and anybody can go Google that and look it up. It governs how autonomy can be utilized. It only addresses autonomy, but that mostly overlaps in terms of topic; there’s a little bit around AI ethics that you can add on top of that. But a lot of these docs are publicly available.

Chris Benson:
I think the key with AI specifically is to recognize that like any other technology, it can be abused by people who have motivations, individuals, humans that have motivations on how they want to train a model and what they want it to do. And there are good actors and bad actors in the world. And we try to be good actors, all of us. And we encourage that.

Chris Benson:
And so I think the key there is, if people will educate themselves on the fact that there’s actually a great deal of guidance in this space, and if they have an interest, if they’re listening to this podcast and they want to go look up that document I just talked about, and they want to go look up law of war and stuff, they might be surprised as was I, as someone new to the space.

Chris Benson:
There’s a huge amount of assurance that we’re going to go do the right thing and that we’re required by law to do that. But having said that, the future’s unknown, as is who comes after us. We’re trying to do good stuff. We’re trying to follow the law. We’re trying to be ethical and moral. Someday we may have people that are not as ethical and moral, and it is up to every citizen to be aware of what is happening. They should read up on it and they should have a voice. AI will evolve. Someday we may have AGI and things like that. So it’s an evolving story.

James Kotecki:
Well, we have to get to AGI in a second, but I do want to ask, you kind of pointed to this already, but what do you actually worry about in the field of AI or the emerging technologies you look at?

Chris Benson:
I worry about people, because right now we’re seeing two things. We’re seeing these economies of scale happening in AI, and we’re seeing democratization of the tools, which is fantastic. We have open source tools, even open source data that’s now out there. And there are amazing things you can do without having to start with lots of resources; almost anyone in the world that has a laptop and an internet connection can get into this if they want to. There’s free learning. So the ability is out there.

Chris Benson:
It really comes down to what their motivations are and how they’re thinking about it. And a lot of people, I’ve discovered, really check out. When I tell them that I’ve been in AI, I can almost see their eyes glaze over if they’re not in technology. And they kind of think of you as one of those people over there, one of those elites, and things like that. And I keep my … One of the things I’m always trying to do is get them to engage.

Chris Benson:
I don’t care if you are a retired school teacher who is 70 years old and was never into technology. You taught English. This is your future. It’s the future that your grandchildren are going into. Everybody should engage and educate themselves on these topics, because this will affect them and it will evolve. And it will require that the whole world be aware of it and understand that they have a right to kind of speak toward where it should go. It should be a democratic process.

James Kotecki:
Yeah. I’m right there with you. That’s one of the reasons why I’m passionate about doing this show: to try and make it accessible to as many people as possible, at least within our corner of the world. Okay. So AGI, we’ve got to talk about that. You said you’ve been at Lockheed for a couple years, but you’ve obviously been in AI much longer. AGI, of course, for those listening, is the idea of Artificial General Intelligence, that one day computers could be smarter than people and be generally intelligent enough to do lots of different things instead of one narrow and specific thing. So I think you and I last chatted for another podcast a couple of years ago, but where’s your head at right now on whether AGI is even going to be possible within our lifetime?

Chris Benson:
So the way I typically address it is, I’m going to slightly skirt your question and say I don’t know-

James Kotecki:
It’s probably smart. It’s a crazy question.

Chris Benson:
I don’t know what the timeline is. My personal, and this is strictly me, my personal viewpoint is that we’re really good at figuring things out over time. And fundamentally, I don’t think there’s anything magical there. When we look at AGI, and this is critical to how we analyze it, we compare it to ourselves. We’re thinking about humans and human brains and thought processes. And when we’re looking at that, we often go directly to our own intelligence and our own consciousness and self-awareness, all of those.

Chris Benson:
And so the reason I think that we will almost inevitably get to AGI at some point, I have no idea when, but at some point, is that we are computational machines ourselves. We are governed by our biology, which is governed by our chemistry, which is governed by the physics around that. And physics is governed by mathematics. We are computational in every aspect of our being. And if it’s computation, we’re going to figure it out eventually. I mean, that’s why we have been so successful to date. Just look, for a brief moment: after nearly a thousand years of dark ages, we hit the Renaissance a few centuries ago, and look where we’ve come since then.

Chris Benson:
So if you look at the years, decades, and centuries in front of us, I don’t know when, but yeah, I think we’ll get there. I think the surprising thing is, I don’t think it’s going to be what we expect. With today’s essentially dumb deep learning models, which are far, far, far away from self-awareness and consciousness and such, we can tackle specific issues and problems to solve in a very narrow scope. We’re really, really good at that. If you think about computer vision and natural language processing and the things that we’ve seen in recent years, it’s amazing what you can do.

Chris Benson:
So you can have something even today that is incredibly intelligent in a very narrow scope from a model, and you can stack those models in a software stack next to each other, where each model is doing very specific, narrow things, and get tremendous capability out of that, which doesn’t come anywhere close to AGI. It’s not conscious. It’s not self-aware. It’s not that idea of AGI being able to be as versatile as human beings are. So I think it’s coming someday. I don’t think it will be what people envision in the movies today.

James Kotecki:
You are a big animal rights advocate. In fact, you’re wearing an animal rights shirt, which you showed me earlier but the camera can’t see right now. But how are we going to know if and when this future AGI deserves rights, and what will that mean?

Chris Benson:
So I will delve in there. That’s far outside of my expertise, but like everybody on the planet, I’m entitled to an opinion, and that’s all it is. We are delving into what our own sense of consciousness and intelligence are, and we are looking at artificial means of achieving that. And I’ve already said, since it’s all computation, whether it is in silicon or whether it’s biology, it’s all computation, I think we’re going to have to solve this going forward.

Chris Benson:
And so if you look at the fact that, and this is me as an amateur saying this, it’s only been about 50,000 years since we as humans became kind of special. And it’s only been in the last 12,000-ish years, roughly, that we’ve created civilization and started doing things, and that led to our domination of the planet. So in the scheme of things, that’s a really short amount of time in the history of the earth. It’s a really, really short amount of time. And so it is entirely conceivable that someday we may not be the dominant life form on the planet. It might be artificial. It could be something else. Who knows what can happen? We can’t see the future.

Chris Benson:
And my view is, if somebody had to assess us and said, Okay, well, they became the dominant life form on the planet. How was their stewardship? Were they responsible? Did they manage their ecosystem well? I would hope that they would have a good assessment. Right now, I won’t do it, but I could roll off tons of stats that are actually contrary to that. And so I made a conscious decision a few years ago as an adult, it wasn’t something I was raised thinking about, and I decided I’m going to be the steward that I would want a future assessor to expect of me.

Chris Benson:
And so after years of eating meat and everything, I went vegan. And when I think about AI, if we get AGI at some point, and I’m speaking very speculatively when I say this, let’s say that it becomes remarkable at some point far down the road and is able to take over most functions that we now look to humans to do. If that were to happen, then functionally we would no longer be the apex, especially if its intelligence far outran ours. I would hope they’d want to keep me around, or keep my grandchildren around. And so I just try to act in a way that I think is a responsible way to do that. And it ties in the animal rights. It ties in AI/ML. It ties in ethics, the whole bang. So I try to live that philosophy.

James Kotecki:
In the interim period between now and then, let’s say, I think they simulated like a worm brain or something at one point, but let’s imagine that you could get a robot dog. Let’s say one of those Boston Dynamics weird robot dogs, and the brain inside of it was like AGI but at the level of a dog, right? So it was not as smart as a human … there’s going to be this middle ground point, right, where we have stuff that’s way, way smarter than today, but still not at the level of a human brain. I guess my question is, in that middle period, does that robot deserve rights like a real dog would deserve rights?

Chris Benson:
So recognizing this is just my perspective: we currently, in almost all jurisdictions around the world, regard animals as just property of humans anyway, so that’s a minimal level. I actually recognize that animals experience fear, they experience pain, they experience love and joy, many of the same emotions. It may not feel exactly as it does for humans, because we’re at a different computational level, which may give rise to that consciousness potentially, but they are certainly experiencing those emotions. I personally think animals should be accounted for in that way. And if I see an artificial intelligence that is exhibiting that same set of characteristics and behaviors, and presumably thoughts to support those, then yes, I personally would definitely advocate for that, but we’re not even there with our biology at this point.

James Kotecki:
I love this field, and I love our conversations, because you’re able to go from the practical day to day of, like, how do you get AI implemented in a company, and you quickly get into these ethical, almost late night dorm room conversations around free will and consciousness. And it’s all related. It’s all relevant. And even though it is speculative, it’s a lot of fun. I want to bring us back to the cold, hard reality of the present for the last couple of questions. When the history of AI is written, do you think COVID-19 will get its own chapter?

Chris Benson:
I do. And obviously I don’t know where that’s going, but we’ve seen a remarkable … when I started my own podcast a couple of years ago, we were envisioning that, like other technology groups, AI/ML would have a sense of community built around it. And that was something that wasn’t really there. There was the academic world where people would go, but there wasn’t kind of a sense of we’re all in this together. And I saw so many other areas in tech where people really did create that community. Even a year ago I was saying that it seemed like a true sense of community had not happened.

Chris Benson:
I’ve seen that happen since with COVID-19, because almost immediately the call went out: how can we use this new technology to help find the answers that our medical professionals and our scientists really need to help the human populations across the earth? And so there was CORD-19, a giant data set of all the relevant papers that were being published, hundreds per day. A lot of the AI institutes started supporting COVID research.

Chris Benson:
And a byproduct of that is that it brought the AI community together, not just the academics, but people from industry. They were building things, and they might be addressing the virus itself, or they might be using AI to figure out how to create an N95 mask a little bit better, or a little bit cheaper, or one that works a little bit more effectively. And there was a beauty to that which arose out of the horror that is COVID-19.

Chris Benson:
And I say that as someone who has lost a close family member to COVID-19. So it’s definitely affected us, but at least out of the horror, something wonderful has occurred. And my hope is that as we get COVID-19 licked, if you will, at some point in the future here, that we can turn our community to other things and do AI for good in a larger scale around the world.

James Kotecki:
You mentioned your podcast. And so in the closing minute or two here, I want to talk about that. You’ve done 100 episodes. You mentioned this idea of community being built. What else have you learned doing 100 episodes of conversations with leaders in the space?

Chris Benson:
A lot, especially as we got going and got a track record, we got access to so many people that were looking for a voice. And we try to go where we find kind of the magic of a story. And sometimes we’re talking to someone who’s rather famous in the AI world. And sometimes it is a startup with somebody that’s fresh out of university, and they’re trying to do something, but they have a story to tell or they have a passion they want to relate.

Chris Benson:
And I think if there is one unifying lesson, it is that AI and ML are really about people. It’s really about a bunch of people with all sorts of motivations – most of them very good – that want to make the world a better place. And they want to be part of that story. They want to be part of a story about a world adjusting, and they want it to be by far a net good.

Chris Benson:
And so that has really given me a very optimistic attitude. I know some people, as soon as you say AI, think of all of the apocalyptic kind of viewpoints and stuff. When I hear it, I’ve heard so many people tell so many wonderful stories, and they’re accomplishing amazing things, saving lives all over the world. It might be keeping crops growing in Africa, which was an early episode we had, taking a cell phone out and ensuring that people had food and were not starving, that children were not starving to death. That’s just one instance of hundreds and hundreds.

Chris Benson:
So I think we need … AI/ML is a part of humanity. It is becoming an extension of us. It is right now, in a small way, through these deep learning tools and reinforcement learning and such. But if we get to AGI, it’ll also be a part of us. It’s a way … the two are bound together for the foreseeable future.

James Kotecki:
Well, I couldn’t have scripted a better ending for the show, Chris. Thank you so much for that. That was great.

Chris Benson:
You’re welcome.

James Kotecki:
And it’s almost like you do this all the time and speak into a microphone all the time, because you do, which is the Practical AI podcast, which people can find I’m sure, wherever fine podcasts are distributed. Anything you want to specifically plug there?

Chris Benson:
No. Our goal, I talked about community, our goal is to make AI practical, productive, and accessible to everyone. And unavoidably, sometimes we cover very technical topics, but we also try to make it understood by everybody. And I like to tell people that your grandmother, who has never done tech in her life, could listen in and at least understand what we’re talking about. And I hope she does.

James Kotecki:
I love it. That’s a fantastic goal. Lockheed Martin’s and Practical AI’s Chris Benson, thank you so much for joining us today.

Chris Benson:
Thank you for having me on the show.

James Kotecki:
And thank you so much for watching and/or listening. Please find us on LinkedIn, follow us, like us, share us, comment. You know what to do. You’ve listened to podcasts and watched videos before on the internet. So please do that for us. Thank you so much. I am James Kotecki, and that is what happens when Machine Meets World.
