Emerj CEO and Podcast Host Daniel Faggella on Machine Meets World

Daniel Faggella, CEO of Emerj Artificial Intelligence Research and host of two AI podcasts, talks (what else) AI with James Kotecki of Infinia ML.

Machine Meets World streams live on YouTube weekly. It’s also available in audio form on Apple Podcasts, Google Podcasts, Spotify, and Stitcher.

Recorded July 21st, 2020.

Killer Quote

“Executive AI fluency is the linchpin of why money is being wasted . . . . Without that grounding, we’re going to pick bad vendors. We’re going to get sold on value propositions that nobody can deliver on. We’re going to drastically underestimate the kind of teams and data infrastructure needs we’re going to have, and we’re going to blow money where it shouldn’t go.” (13:01)

More Highlights

The chasm between AI’s present and long-term implications. (5:55)

The business challenge of keeping pace with AI’s advancing capabilities. (17:30)

Why most companies could see “a net tick backwards from AI in the very near term.” (21:59)

Show Suggestions?

Email mmw@infiniaml.com.

Full Transcript

James Kotecki:
Hey, from Infinia ML, this is Machine Meets World. I am James Kotecki, your host, and we’re talking artificial intelligence today with my guest. Very excited about this guest. He’s the founder of Emerj Artificial Intelligence Research and the host of not one but two AI podcasts, the AI in Business Podcast and the AI in Financial Services Podcast. A lot to get into. Dan Faggella, welcome to the show.

Dan Faggella:
James, glad to be here.

James Kotecki:
Dan, you’re often the one asking the questions. I’m excited to be the one asking you the questions this time.

Dan Faggella:
Yeah, yeah, yeah, you’re flipping the script on me here.

James Kotecki:
Between the two podcasts, how many episodes have you done?

Dan Faggella:
Oh, man. I mean, I started in 2012. Now we’re doing three a week. We had times we were doing two a week. We had times we were doing one a week. I mean, squarely over a thousand recorded interviews, but for market research, the bulk of what we do is not going to air. If we’re doing a project for Defense or for Wells Fargo or something, we’re not going to take every phone interview and be able to put it up for free on the web. Just the stuff that’s openly available, squarely over a thousand is safe to say.

James Kotecki:
You’ve interviewed all these different AI experts in business and academia, all these technical folks, but your background is really interesting because when I read your bio, it doesn’t seem like the traditional AI background. You ran a martial arts business. You got into AI somehow from that.

Dan Faggella:
Yeah. It was, I guess, an interesting jump. I never really wanted to have a job. When I was like 20, my friends were delivering pizzas or something, and I liked training fighters. My ears are all messed up because I trained martial arts for a long time. I won a national tournament in Brazilian Jiu-Jitsu, competed all over the United States, had done seminars in the US and Brazil, and was very active there, and it was a more fun way to pay the bills than something boring. But I was really interested in psychology, and really interested specifically in skill acquisition. There are a lot of really great academics there; a fellow by the name of Anders Ericsson is really the screaming edge of skill acquisition as a science, kind of putting it on the map. He was one of the folks I talked to during my thesis at the University of Pennsylvania.

Dan Faggella:
I was doing the Ivy League thing, and while I’m doing the Ivy League thing back in 2012, there were rustles in the breeze that, “Hey, Dan, all this neuro stuff you’re doing about learning skills, they’re kind of doing that with machines.” This was the really early days of ImageNet, really early days of NLP for Twitter data. And I became reasonably well convinced when I graduated that maybe I got the wrong degree. And so just out of raw curiosity, I started interviewing the ever-loving hell out of everybody thinking about AI in the long term, artificial general intelligence, as well as applying it in the near term. By the time 2014 rolled around, there were companies that really wanted to pay for understanding the ROI impact of AI. Back then, there was essentially nobody doing it. A little bit of a roundabout path to get to where I am, but a fascination with the mind turned into a fascination with these replica minds that we now study, and I am where I am.

James Kotecki:
How have the interviews that you’ve conducted evolved? I imagine that the first few interviews you conducted with someone in AI probably started pretty basic, and now, obviously, you’re getting into the weeds and the details with all these different experts in all these different niches and applications and fields.

Dan Faggella:
It started with what are unabashedly my, to this day, my longer-term interests, which are ultimately sort of what are we turning into here, so 30 to 40 years from now, really, what are we working towards? What is –

James Kotecki:
You’re saying we as in what are people turning into –

Dan Faggella:
Yeah, yeah, hominids –

James Kotecki:
– or society turning into?

Dan Faggella:
Hominids, yeah, hominids.

James Kotecki:
Ok.

Dan Faggella:
What is to occur? What is the future of the human experience? What is the percentage of virtual versus physical that will be day to day, let’s say, 20, 30 years into the future, and how powerful would these machines be? What kind of responsibilities will they take on, ideas around sentience, ideas around strong AI, et cetera. Folks like Ben Goertzel, folks like Nick Bostrom, folks like James Hughes, all these people were on the podcast back before anybody, in all squareness, before anybody actually cared about AI at all, even a small amount.

Dan Faggella:
It was mostly that stuff back in the day. I still do a bit of that. We have Ben Goertzel back on the show in the next couple of weeks, but primarily now it’s around business. Most of my day-to-day… The United Nations doesn’t call me to talk about the 30-year consequences. It’s the very near term. A big retail bank isn’t going to pay us for really long-term stuff. Really, what they’re paying for is access to what’s delivering ROI now, so the shift has been less of an abstract focus on the future and a lot more of a focus on really pressing and prescient use cases and trends in the now, in specific industries where we focus, such as life sciences and financial services. That’s been the big shift over the years.

James Kotecki:
That makes sense. It makes total sense, but it’s a bit sad that the UN isn’t thinking about the next 30, 40 years in artificial intelligence. Maybe there is someone there who is, but this is what I think-

Dan Faggella:
Your boy’s chipping away. Your boy’s chipping away. Give me some time.

James Kotecki:
Yeah, I mean –

Dan Faggella:
Spoonful of honey helps the medicine go down, my chap. You understand?

James Kotecki:
That’s true. I understand, but it seems like there’s still this gaping gulf between folks who are thinking about, gosh, in the next 30, 40 years, if we really get anything approaching AGI, that’s going to so fundamentally rupture and change everything about what it means to be human, as you’re getting at, that we’ve got to be thinking about this now, versus, “Hey, my data’s kind of messed up. I want to do something in artificial intelligence, but I can’t because I don’t have good data. How do I get my data in the right place?” Do you see a continuum or do you see a chasm between those two different realities?

Dan Faggella:
Right now, I definitely see a chasm. Most people know me for Emerj.com, which is arguably sort of the place on the web for hard data around use cases and vendor landscapes, and again, precedents of ROI and ease of deployment for AI for specific capabilities across industries. My DanFaggella.com website is much more the long-term purpose, the bigger cause and the bigger transitions ahead stuff.

Dan Faggella:
I see a chasm in terms of the business and the policy world engaging with those spaces at all. I think it’s almost wholly considered to be science fiction in some of these other realms and really isn’t even addressed to any degree. When I did present… so I was at UN headquarters a little bit over a year ago. We did a big presentation about deepfakes in New York City, the big building there. We actually deepfaked the director of UNICRI, which is the crime and justice wing of the United Nations, one of the wings that I’ve done a decent amount of speaking for and work with and whatnot. I like those folks.

Dan Faggella:
And we basically made this woman say a bunch of stuff she never said in a video, but the video itself that I put together intentionally drew the line forward towards where that’s ultimately taking us: a programmatically generated space of whatever it is that we’d like to experience, how preferable that might sometimes be to reality, and how much control there would be over that virtual world. I didn’t go off the – as much detail as I could, but I tried to draw the line. But I will tell you, that’s an intentional effort for me. I think it’s very important to draw the line. We have a Saturday series right now called AI Futures, where we are talking to folks with the Future of Humanity Institute and the IEEE, and we had Stuart Russell as the kickoff episode, where we’re trying to, again, draw people in, like, yes, this is where it is now, but geez, let’s take a gander here. Let’s take a gander at the next 10 years, even, just 10, and just see where the heck things are. But I have to draw that line real narrow because the chasm is gargantuan.

James Kotecki:
You are, though, in a sense, a bit of a one-man survey if you’ve interviewed a thousand experts in this field about where things are going. What’s your take as you unify all the disparate opinions about AGI, which, for the audience, is artificial general intelligence: when computers are able to act in a way that’s broadly intelligent like people do, as opposed to just doing narrow, very specific things like a financial services application. There are obviously disparate opinions about this. I saw some data from a few years ago where somebody surveyed a bunch of the top experts, I think it was the top published people at NeurIPS or one of those conferences, and the answers for when we will achieve AGI or anything close to it are all over the map, from it’s coming in the next 20 years to over a hundred years away. What’s your take on it?

Dan Faggella:
We’ve done similar polls. The last poll that I ran that was exactly like this was in 2018. The big poll that I reference here is the Bostrom and Müller poll from 2014 or something. They’d interviewed hundreds of folks from different AI organizations. I think they landed somewhere in the… The question was framed in a very nuanced and specific way. It was something akin to, “With a 90% or higher certainty, when would you suspect we have human-level artificial intelligence?” It was something akin to that. People can Google “Bostrom AGI survey.” It’s going to be the first thing. It’s a PDF.

Dan Faggella:
They landed in 2065, 2060-ish, if I recall correctly, in terms of their mean. When we did a similar poll of something like 30 or 40 folks… and we actually have a lot of the answers written all the way out: “when will the singularity arrive” and then “Emerj,” E-M-E-R-J. If you put that into Google, you’re going to see the poll. You’ll see the quotes. You’ll see who the hell we interviewed.

Dan Faggella:
But we landed somewhere quite similar in terms of the 2060-ish range. Does that mean that’s when it’s going to occur? Lord knows, brother, right? I mean, I’m not prognosticating here, but we do talk to a lot of folks, and we’re landing somewhere near where Bostrom did. I personally, the way that I frame it, I’m not a dogmatist. It’s not like, “Oh, I know there’s going to be AGI.” I’ve never said that. 500 articles, 2,000 articles of mine, go find that once. You’ll never find it. My statement is I’m at slightly better than coin toss that before I die, we will see the tippy toes of what’s after people. That’s my ballgame. That’s the only thing I say. I think that our polls potentially validate that, but I have no certainty. I just have probabilities.

James Kotecki:
And of course, I think one thing that might get overlooked in this conversation, which gets back to the practical interviews that you do on use cases, is we don’t need anything close to AGI to have a fundamentally transformative experience-

Dan Faggella:
No way. No way.

James Kotecki:
… with this technology.

Dan Faggella:
Yeah. Absolutely not. I mean, we know people are already talking about OpenAI’s programmatically generated text, for example, and just how amazing it is while also being as dumb as a bucket of rocks, or whatever the case may be. And we can imagine the same sort of dumbness and amazingness in a thousand domains and dimensions. We can imagine entire industries really being pretty well transformed. And to be frank, the transformation of existing sectors is slow work. When we look at – when we help a bank or a life sciences firm or what have you, it’s real chip-away stuff. It’s real chip-away stuff to get the data infrastructure right, to get the team structure right, to get executives to understand what AI is and what it can do and screen out the bogus vendors.

Dan Faggella:
It’s a lot of chip-away work, and we’re not seeing like, “Oh, Merck and Bayer are going to be out of business next week when the magic drug discovery… ”

James Kotecki:
Right.

Dan Faggella:
I’m so far from that pole, it’s borderline ridiculous; however, I do think that the whole ‘we overestimate a year, we underestimate a decade’ thing absolutely holds for AI, in my opinion. And yeah, to your point, we don’t need AGI to have financial services as a whole, or healthcare as we think about it now, or gaming and entertainment as we think about them now, completely and radically shifted to something astronomically different than what we understand today. So yeah, I’m with you on that.

James Kotecki:
What are some other fundamental misunderstandings that you encounter when you do talks and trainings and articles and consulting to organizations?

Dan Faggella:
This is a line that I toe, and I’m actually surprised that I’m alone blowing this trumpet in the corner here, but there are a lot of ideas about why artificial intelligence has not taken off in the enterprise maybe as much as some people would have suspected. There’s plenty of traction, more in some industries than others. We tend to do work in the industries where there’s more, just because those people care more about market research. They’re spending more. But there are a lot of hypotheses here. “Well, the applications are very nascent.” “Well, data infrastructure’s rough.” All of these are, to be frank, good insights, but for me, really, the core insight is that executive AI fluency is the linchpin of why money is being wasted, why, according to Gartner’s numbers, some eighty-X percent of pilot projects are just flopping miserably. I’ll explain why.

Dan Faggella:
When we, as leadership, don’t have a bounded and strong understanding as to what AI does, and really, in very cold terms, James, why AI is different than IT… Just why is it different? Understanding that we need iteration, understanding that we need this data pipeline, understanding that we need cross-functional teams to understand the data and the features, understanding just, again, not technical stuff. I’m not a technical guy. I’ve passed Andrew Ng’s course, but don’t ask me to use Python again. I mean, I could do a little bit just to prove I’m not a complete nimrod, but otherwise, I’m a business language guy. I’m a bottom line guy. The people that pay us, they pay for bottom line. They don’t need me to tell them linear algebra.

Dan Faggella:
Just the conceptual level, why is AI different than IT? What reasonably can AI do? Also, another element of this, not just what can AI do reasonably, but what’s a reasonable bounded reality of use cases? If we’re in banking and I’m a banking leader, do I even understand outside of chatbots and, I don’t know, recommendation engines, really, really common use cases, do I even have an understanding of where within my domain AI is sprinkling its way in? Do I understand anti-money laundering? That’s a space where we at Emerj actually have a pretty big focus… or fraud or wealth management, or do I have at least a bounded grasp of what those are? So what is AI, and how is it different from IT? And what the ever-loving hell is it able to do?

Dan Faggella:
If I don’t have an idea of both of those things, which honestly is maybe a six to eight-hour education process… This is not like a “go to a bootcamp.” This is like a “just learn some fundamental stuff, man, just learn some fundamental stuff.” Without that grounding, we’re going to pick bad vendors. We’re going to get sold on value propositions that nobody can deliver on. We’re going to drastically underestimate the kind of teams and data infrastructure needs we’re going to have, and we’re going to blow money where it shouldn’t go.

Dan Faggella:
So unless we have a coherent vision of AI and what it can do and we can fit that into a broader business strategy and digital transformation and vision, stuff doesn’t work. And so our work, yes, it’s providing data on ROI, ease of deployment, vendor comparisons, whatever, yes, we provide analysis, but we’re also doing advisory to get executives smart as hell so when we’re gone, nobody’s picking the willy-nilly vendors that are making big promises and nobody’s picking applications that are outside the wheelhouse of what’ll really drive business value. That’s a big misconception. Executive AI fluency, almost not talked about. In my opinion, that’s why we’re wasting money.

James Kotecki:
Is part of the problem here that it’s difficult to put this stuff in human terms? This is a theme that came up on a previous episode I did: a lot of the way that AI is couched is as an assistant. You compare it to what a human was doing, and you say, “It can do it faster or better or at a greater scale,” but obviously, a lot of the… maybe you agree with this. Some of the best applications of AI in the future will be things that are just harder for us to conceive –

Dan Faggella:
Yes.

James Kotecki:
– as humans because this is not like… I mean, we talk about artificial general intelligence, some point down the line, but for now, it almost – it does and it doesn’t make sense to put these things in human terms because you limit your ability to understand what it really could do for your organization.

Dan Faggella:
100%. I think any “Well, AI is like X,” any individual analogy, is going to be very, very weak. It’s not just, “Well, imagine a virtual person who could . . .” It’s like, “No, that’s not really going to work for every application. It really isn’t.” But the problem is… this is why the education piece works. If this were a simple concept, I could have said, “James, it’s six or eight minutes of learning,” but I said six or eight hours, didn’t I?

James Kotecki:
Yeah.

Dan Faggella:
That’s because we do need to understand that there isn’t a single concept box here; there’s a bounded space of what these applications can reasonably do, and then we actually need to tie that abstract understanding to instances. “Oh, look at this instance of anomaly detection in cyber sec and fraud. Ah, look at this instance of what we consider to be recommendation engines for internal document search or what have you. Ah, look at this instance.”
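(For readers who want one of those instances in concrete terms, here is a minimal, illustrative sketch of the anomaly-detection idea he mentions in the fraud vein: flagging unusual transaction amounts. The synthetic data and the choice of scikit-learn’s IsolationForest are our assumptions for illustration, not something from the interview.)

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic transaction amounts: mostly routine, plus two injected outliers.
rng = np.random.default_rng(seed=0)
amounts = np.concatenate([rng.normal(loc=50.0, scale=10.0, size=500),
                          [900.0, 1250.0]])

# An isolation forest scores points by how easily they separate from the
# rest of the data; `contamination` is a guess at the fraction of anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(amounts.reshape(-1, 1))  # -1 marks an anomaly

print("Flagged amounts:", amounts[labels == -1])
```

The same shape of example, a model scoring items by how unusual they look rather than applying a hand-written rule, sits behind the fraud and cyber-sec instances he points at.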

Dan Faggella:
We need to concretize those because this is a broader space of art and science. That doesn’t mean it’s impossible. It doesn’t mean we need to go back to Carnegie Mellon and get a Ph.D. Nobody needs that. Nobody has time for that. But we do need to have a broader conceptual understanding than a sentence. So yeah, you’re right. It is hard to put it in a box. The way that we think about it is like this, James. AI is in part very challenging for businesses to stay on top of because it’s a creeping capability space. It’s constantly opening and moving out. We could imagine wireless tech being seen the same way, like, “Oh, cell phones and cell phone towers. It’s not just cell phones, man. It’s all these other things.” We can imagine that, and that is true to some degree.

Dan Faggella:
AI is like that, but in five dimensions. I mean, AI is creeping its way into so many points and fractions of so many workflows at once and is potentially creating ways to overhaul those workflows in different ways. Our job, our core research, is called the AI Opportunity Landscape, and this is essentially looking at the totality of the vendor ecosystem in major sectors like insurance, like banking, like wealth management, and then looking at the investments of the global top 20 companies in that space and mapping and ontologizing the capabilities, the new [inaudible] that we are able to do because of AI. What are the new things that AI allows us to do?

Dan Faggella:
Some of them we may have no interest in. We’re not here to play with AI toys. We only want the ones that are going to drive unique value for us, but what are those? And that space is creeping and opening constantly. In banking alone, we’ll see more money go into customer service. We’ll see some of those companies fail. We’ll see a lot more money balloon into fraud, and we’ll see that kind of creeping space of what’s possible and what’s working always moving, so our research is updated every six months. For companies that want to stay on top of this, like you said, it’s not, “Okay, I get what AI does.” It’s like, “No, unfortunately, unfortunately, you kind of need a map of the landscape.” And so the hypothesis that started this company, and what the folks who are doing subscription research with Emerj ultimately pay us for, is to stay on top of that ever-creeping capability space of how AI is shaking up their sector and what they could do about it.

James Kotecki:
And to tie this back to something that you said earlier in our conversation, if it is true that AI is one of the technologies, maybe the technology that is going to shift us from being humans to being something else beyond that maybe within or at the end of our lifetimes, then it should be hard. It should be difficult to understand. It makes sense that it’s hard to put it in analogies that are just easy to grasp in a few minutes’ time.

Dan Faggella:
Exactly. And if we think it’s hard to grasp now, in 10 years, I think it’s going to be astronomically harder to grasp. I mean, the “conversational interface,” quote-unquote, stuff, the chatbots of 10 years ago – it would be annoying to explain it to somebody, but it wouldn’t be impossible. If you were going to try to explain what OpenAI is doing now with text generation, you’d probably need a really strong applied math understanding, and you’d also probably need months of grasping the different algorithms and approaches that are interacting with each other to conjure the result that they’re now conjuring. So imagine 10 years from now. Imagine when neuro and AI start intersecting. Imagine when, who knows, maybe nano and neuro start intersecting. I mean, things are definitely going to be getting more complicated, not less. So it’s hard now, but buckle your seat belts, man. I mean, 20 years from now, we’re going to be in a different universe.

James Kotecki:
My programmatically-generated avatar will interview your programmatically-generated avatar –

Dan Faggella:
That’s exactly it.

James Kotecki:
– and we’ll save ourselves a lot of time.

Dan Faggella:
But it’ll be about an unlimited number of topics, and we’ll be astronomically more brilliant, and we’ll be able to do it in all the imaginable languages, including new programmatically-generated languages that are more efficient in terms of transferring ideas with a granularity of ideas than any current hominid language because we’re too dumb to figure it out.

James Kotecki:
Right. Yes. Oh, man. All right. So let’s bring it back to the reality, the day to day of the pandemic that we’re currently in. We haven’t even touched on-

Dan Faggella:
Yeah. Okay [inaudible] –

James Kotecki:
I can’t believe we’ve gone 20 minutes without talking about COVID-19.

Dan Faggella:
Yeah. Well, I’m actually grateful that that’s the case, but either way, yeah, we can dive in [inaudible].

James Kotecki:
But we’ve got to talk about the way that this has changed how businesses think about AI. A lot of the narrative here is around acceleration: things that were on the table are now happening faster because AI is accelerating them, and people are realizing that maybe they want to disconnect from the need to totally rely on people in call centers or what have you when they can build out AI systems to process paperwork, et cetera, et cetera. Is that what you’re seeing?

Dan Faggella:
Yeah. I have a bit of a nuanced take on this. We’ve actually written a pretty good deal at Emerj.com on this topic. I see it as sort of twofold. If you are Big Tech, I think you’re in a nice and square spot. You’ve still got lots of R&D budget. The vast majority of your revenues are still coming in. But for most enterprises, so non-Microsoft enterprises, non-Amazon enterprises, for most of them, I think this’ll be a net tick backwards from AI in the very near term. Let me explain.

Dan Faggella:
Artificial intelligence requires a number of things to be done well. We’ve written a great article called Critical Capabilities, so E-M-E-R-J dot com, Critical Capabilities, easy to find on Google. There’s an infographic that goes along with it. Basically, the prerequisites to deploying AI: we need to have some R&D budget here; we need cross-functional teams that can work together and understand the data, the problem space, the precedents of use, and where it could go; and we need data infrastructure that often requires some overhauling, to be honest, to actually make work. And we need to put those pieces together to endure, and this really is the right word, to endure the R&D and iteration hit that we’re going to have to take in order to move this stuff forward.

Dan Faggella:
Now, the benefit is we learn. A lot of companies don’t focus on retained learnings. One of the most important things companies can do with early AI projects is figure out what’s working with cross-functional teams, what’s working with making discoveries about our data, and focusing on learning is one of the core areas of ROI. That’s its own interview. I’ll leave that aside, but it’s a very important point. Those things that we really do have to buckle up for to make AI work, I think there’s going to be less of that to spread around when we’re thinking about layoffs, when we’re thinking about how much revenue is going to be coming in, when we’re thinking about limitations in how people can collaborate and whatnot.

Dan Faggella:
I think for a lot of companies, there’s going to be a crimping of the R&D that is, maybe we could say, a little bit more speculative and involves a lot more moving parts. So I think that’s that. However, in the very near term, I think RPA is going to find a way to really slide in. I forget, did UiPath just raise? What are they worth, 10 billion now or something?

James Kotecki:
This is Robotic Process Automation.

Dan Faggella:
Yeah. Robotic Process Automation.

James Kotecki:
Which is kind of like a cousin to AI. I think there’s a continuum of how people describe it, but it’s-

Dan Faggella:
Exactly.

James Kotecki:
… really its own separate thing.

Dan Faggella:
I think it should squarely be called its own separate thing, and I could go into great detail about that. I have a lot of respect for Pegasystems, UiPath, all those guys. They’re getting started with AI in some ways. I don’t really see RPA and AI as the same thing. I think they can intersect. Some people think of RPA as like the bootstrapping system to get AI off the ground.

Dan Faggella:
Absolutely the wrong way to think about it. They can interact, but they don’t inherently have to be married at all. Anyway, RPA is much more rote automation of explicit tasks. A human drags this folder from here to here and saves it with this name, so we get a machine that does that same dragging. It’s very if-then. We don’t necessarily need learning here. We just need a machine to run some macros. And I’m not trying to disparage what these companies are doing. Like I said, I respect Pega, I respect UiPath, but it’s not necessarily as complicated as generating text or what have you with OpenAI.
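(To make that “very if-then” point concrete, here is a minimal sketch of the kind of rote, rule-based automation RPA handles: no model, no learning, just an explicit macro a human used to run by hand. The folder paths and naming convention are hypothetical, purely for illustration.)

```python
from pathlib import Path
import shutil

# Hypothetical folders; RPA-style work is just explicit rules over them.
INBOX = Path("shared/inbox")
ARCHIVE = Path("shared/archive")

def archive_reports() -> None:
    """Move every PDF from the inbox to the archive under a fixed name.

    This is the "human drags this folder from here to here and saves it
    with this name" task: deterministic, if-then, no learning involved.
    """
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    for pdf in INBOX.glob("*.pdf"):
        shutil.move(str(pdf), str(ARCHIVE / f"report_{pdf.stem}.pdf"))

if __name__ == "__main__":
    archive_reports()
```

Contrast that with the AI work discussed above, where a system has to learn a decision from data rather than follow a hand-written rule.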

Dan Faggella:
That stuff, I think, is going to slide in because, yes, efficiencies are going to be the name of the game, the focus, when we think about our enterprise clients, when we poll our audience. And we have 20,000 folks reading the newsletter, so we get to blast these people and get a sense of what Europe is thinking and what the United States is thinking, what banking is thinking and life sciences. Efficiencies are the name of the game, and RPA really has the ability to slide in with much less of that iteration and those question marks, slide in and deliver some of those efficiencies and maybe streamline some of those workflows. So I suspect in the next 18 months, mostly RPA is going to be the winner here, and it won’t be until we get on solid financial footing and we’re on our uptick again and companies have some rife funds in the war chest that we really start to see AI fly in.

Dan Faggella:
Here’s another change, though, that I think’s very important to bring up for your audience. The AI that we do see flourish in this period will be AI that has a very mature understanding of how to integrate with the enterprise. Let me explain what I mean very quickly. If you’re an AI vendor and you need to plug into seven different sources of data and overhaul a workflow within an enterprise, good luck in the best of times. Full stop. Good luck in the best of times. Full stop. Now, that’s not to say you shouldn’t raise money, you shouldn’t try. Do your thing, brother. Do your thing. But that’s hard. It’s really, really hard. And it’s not to say it’s wrong. Sometimes overhauling is right. Sometimes a lot of data, a lot of complexity is right. But right now, when everybody’s working from home, we can’t get eight cross-functional people in the room to think through a lot of this stuff. Overhauling data infrastructure in one silo is hard enough, never mind six silos.

Dan Faggella:
The companies that I think are going to find their fit are companies that mostly handle the complexity of AI up in their own cloud and they find a single juncture –

James Kotecki:
Mmmmm [agreement].

Dan Faggella:
– a single workflow where they can deliver value with their already-trained algorithms by focusing on one or maybe two data streams or sources of information from the client, where they’re not altering a workflow, they’re just layering value on top of it with artificial intelligence. Those companies are going to be fit for this COVID era in terms of budget, in terms of integration requirements, and I think we’re going to see a lot more of them flourish.

Dan Faggella:
I don’t want AI to only be this small-sniping stuff. It won’t be. It’s going to expand to the bigger projects, but right now, that sniper approach, individual workflows, limited data sources, fast-as-all-hell integration, is what a lot of companies have not focused on in the last three years, and it’s what we’re seeing a lot more companies tilt towards. And I think that’s motivating. Still, I think RPA is going to be the bigger winner in the next 18 months.

James Kotecki:
I love the nuance of that answer. I think that generally drives the conversation forward from the top-line headlines of “tech is going to take over everything because of COVID-19.” I think that’s better.

Dan Faggella:
Yup.

James Kotecki:
I think that’s more thought through. So we’re running out of time here. Two lightning round questions for you –

Dan Faggella:
Let’s do it.

James Kotecki:
… whatever lightning round means. Of course, we’re just making this up as we go. What do you make of the conversation around, call it what you want, ethical AI, responsible AI, accountable AI. People have different semantic understandings of what each of those terms might mean, for example, but this general conversation around “is AI going to be something that humans can have oversight over and control and use in an ethical way.” What do you make of that, and what do you think that businesses are thinking about that?

Dan Faggella:
Yeah, well, I happen to know what businesses are thinking about, but very high level, two things. A, I think it’s heartening that we are considering the ethical and social consequences of the technology at its relatively nascent phase. I think that’s heartening. I’ve been involved in these conversations with organizations that I respect, like the OECD, who have their OECD AI Principles and are working on more hard law about that, and I’ve been a big supporter of them. I’ve been a big supporter of the IEEE, who was working on Ethically Aligned Design as far back as three or four years ago.

Dan Faggella:
So I’m actively a proponent of this general schtick. I interviewed Wendell Wallach back in 2012 about ethical AI, when you didn’t care about it and nobody listening to this show cared about it. So my track record of being supportive here is pretty clear. I do think that there is a dark side, where AI ethics becomes less of a, “Hey, how do we adhere to both the law and our own values in building the future,” which I think is virtuous, productive, fruitful. I think that’s a frame that is fruitful, virtuous, and productive.

Dan Faggella:
There’s another frame, which is using ethical nitpicking to shoot down ideas in order to convey some sort of moral superiority. A lot of AI ethics discourse can be more around, “Oh, well, how dare you think of such and such? What about this disparaged group?” or, “How dare you think of such and such? What about yada-yada?” where the goal actually isn’t a productive outcome for the business and for the customer, or thinking about AI in a way that’s going to be more adherent to the law and to values and move forward. It’s actually not about moving forward. It’s about a different kind of preening that to me is absolutely vicious, vicious in the Aristotelian sense, like it is vice. And so I think there’s a really strong amount of slide into vice, but on the aggregate, I think the conversation is fruitful. On the aggregate, I am, by and large, a proponent, and I actively support many organizations that are doing what I consider to be important work there. I think we’ve got some lines to watch, but overall I’m an optimist.

James Kotecki:
Well, thanks for using the term Aristotelian on an AI interview show. And let me ask you one last question. You’re a – I think you said you’re a Brazilian Jiu-Jitsu national champion? Is that right?

Dan Faggella:
Yeah. Yeah.

James Kotecki:
So is there a Jiu-Jitsu lesson that applies to AI in business?

Dan Faggella:
Oh, man. I think there probably are a bunch. In fact, my area of focus in grad school, as I mentioned, was skill acquisition and skill development and the way humans learn, so a lot of these terms for AI, like overfitting and underfitting, have exact correlates in skill acquisition, in the science of how you train people, for example, how you train people in environments that are too limited, where the skill can’t transfer to other things, like transfer learning, so to speak. It’s not the exact same term, but there are a lot of correlates there.

Dan Faggella:
I think for me, probably mentality-wise, Jiu-Jitsu is a great place to learn as a young competitor that if you lost, complaining or saying anything about who beat you is really the most shame-worthy thing. If you complain that this guy trained more than you, or you complain that he got started before they blew the whistle or whatever, it’s very, very looked down upon to be cowardly. If you are to run your own business, you also can’t… there’s nobody to blame, really, in the same way. When you’re moving forward with hard technology projects in AI and in the enterprise, I think, for me, responsibility and grit are probably the big lessons from Jiu-Jitsu and things that I value. [If] there’s anything that transfers for me at a personal level, it’s probably that stuff.

James Kotecki:
It makes sense. Dan Faggella, thank you so much for joining us on Machine Meets World today. It’s been great to have you as a guest. We could have gone for another half hour.

Dan Faggella:
I know it. Hey, James, it’s been a blast, man. Thanks.

James Kotecki:
And thank you so much for listening and/or watching this podcast and/or video depending on whichever one you decided to plug into. By the way, you can do the other one next time if you want. Machine Meets World is a production of Infinia ML. You can email the show at Machine Meets World, M-M-W, mmw@infiniaml.com, and hey, why not rate us and like us and do all that social media stuff? I am James Kotecki. It’s been great. I’ve been your host, and that is what happens when Machine Meets World.
