Forrester’s J. P. Gownder on Machine Meets World

J.P. Gownder, VP, Principal Analyst at Forrester, talks artificial intelligence with James Kotecki, Director of Marketing at Infinia ML.

Machine Meets World streams live on YouTube weekly. It’s also available in audio form on Apple Podcasts, Google Podcasts, Spotify, and Stitcher.

Key Quote

“. . . for AI, we need to have more comprehensive governance that includes everything from ethics to explainability to accountability to bringing together all of the operational side of a model. Is it working properly? Is it being retrained continuously? Is it getting better? Does it need to be retired? But also, wedding that to some of these broader customer-relevant and employee-relevant issues.” [15:17]

“. . . what AI is trying to do using probability is creating new conundrums that the business isn’t used to dealing with. And people aren’t very good at probabilistic thinking in the first place. So I think unfortunately, it’s one of these cases where you’ve got to bring everyone together to do this internally. My colleague Michele Goetz and I did a whole bunch of interviews around this. And we found that even some really large, sophisticated technology companies haven’t mastered every dimension of AI governance.” [16:09]

Other Highlights

A human-centered view of artificial intelligence [9:21]

The coming decade of job loss, job gains, and job transformation [24:49]

Gownder’s science fiction recommendations, which tackle inequality, climate change, and AI [27:24]

Show Suggestions?

Email mmw@infiniaml.com.

Full Transcript

James Kotecki:

Hey, and we are live from Infinia ML. This is Machine Meets World. I am James Kotecki, talking artificial intelligence today with my guest, vice president and principal analyst at Forrester Research, J.P. Gownder. Welcome.

J.P. Gownder:

Hello. Thanks for having me.

James Kotecki:

Thanks so much for being here. We are obviously in the midst of a pandemic, an AI revolution, a work-from-home revolution. You are the person at Forrester, or at least one of the principal people at Forrester, who thinks a lot about a number of these issues, so I’m really excited to chat with you about all this today. What’s on your mind these days as you think about your work at Forrester? Obviously, Forrester’s been impacted by COVID-19 as much as anybody else. It has a large events business. Everybody’s now distributed, working from home, if they weren’t before. What are things like for you personally before we get into that professional stuff?

J.P. Gownder:

You know, it’s gone quite well. I mean, at a company like Forrester, the nature of our work before this crisis involved a lot of travel, a lot of connecting remotely from hotel rooms when I’m on the road visiting clients. And so we had an infrastructure in place that was pretty friendly to remote work. There are parts of our organization, of course, that don’t work from home often, and so there was a bigger transition perhaps for them. But we had the fundamental infrastructure in place to be able to do this. And in fact, aside from the lack of travel, the work that I’ve been doing hasn’t changed much, even if the nature of some of the questions has changed.

James Kotecki:

Okay. So let’s talk about some of those questions. There’s a video that you made in September of 2019 called Are You Ready for AI and Automation? And really, for the last several years, I think a lot of business leaders have been asking themselves a version of that question. Now we’re in the middle of a pandemic. How has that changed the nature of that question, who’s asking it, and what some of the answers might be?

J.P. Gownder:

I think the first thing that comes to mind is that a lot of companies have realized that their digital transformation journey was going more slowly than it needed to. The pandemic has brought into very strong relief the fact that companies who were more digital, more automated, more remote, simply have been thriving in this environment, at least on the operations side. If your customer goes out of business, then there’s not much you can do. But can you actually keep work getting done in your organization?

J.P. Gownder:

And of course, a big part of this is automation and the related area of AI, which is to say companies that had systems in place that leveraged automation and AI could kind of keep their operations moving even as their human employees faced this huge disruption. So think of it in simple terms: you have a call center, and you’re doing tier one and tier two support using an automated system, not old-school IVR hopefully, but something that’s using natural language understanding, natural language interaction, and that can actually do some of the things that people need done on a self-service basis. Well, guess what, you’re going to be in better shape, because especially in March, when people were moving home from the call center, it was a totally new way of working for a call center or contact center agent. So that’s just one example.

J.P. Gownder:

But for many organizations, those that had a relatively higher level of automation and AI, there was a better business continuity outcome. So leaders I’m talking to now are trying to figure that out, not only for the short term, but also over the long term. Right? So business continuity planning is going to become a much more important feature of how we think about the world: secondary waves of infection, not that we’ve left the first one in the United States, but also other things like climate change, trade risks, trade wars, political upheavals. It’s all going to be there. So AI and automation are going to be elevated in terms of their importance relative to other investments.

James Kotecki:

And do you think that part of the shift here was for some companies, now understanding, and I hate to put it this way because it sounds so dystopian, but humans are in some sense a liability? If all of your call center workers get sick and you don’t have an automated system in place, then that is a liability, where maybe they were thinking … I don’t know. Maybe they were more risk averse before, and they were thinking, “Oh, if we put complex AI and automation in place, then that’s the liability.” Have you seen a shift there?

J.P. Gownder:

I have. I mean, what I’ve been seeing in my research actually says to consider your workers at all levels a source of irreplaceable value. So I don’t think of them necessarily as a liability. There is a risk.

James Kotecki:

Much less dystopian way of putting it. I appreciate that.

J.P. Gownder:

Well, and there’s a risk associated with having human beings without some technology redundancies in place that can help them along the way. So there’s a lot of operational stuff that gets done at organizations that if you had robotic process automation bots in place, the bots aren’t going to be able to do everything accurately, but maybe they could do 80% of it. And so if there is this sort of sudden disruption of the sort that we’ve just gone through, you’re going to be able to keep the lights on. Right? You’re going to be able to keep things going until the humans can adapt.

J.P. Gownder:

Again, this may turn out to be something of a one-time event, because in the future, I expect organizations will have layers of redundancy, a business continuity plan, and be prepared to change their operating model on a dime and go back home. So to some extent, we will learn from this collectively. But I do think that you raise an important point, which is that for human beings to be successful at work, there are many ecosystems that we rely upon. I don’t have children, but many of my colleagues do, and they’ve been homeschooling those kids. They didn’t buy a home that was meant to have a full-time office in it. I have an elder at home. I take care of my 87-year-old father as a sole caregiver.

J.P. Gownder:

There are many institutions that I used to rely upon to help me with that effort, and those are frayed. And so as an employee, I now have different parameters facing me. So I like to think of it in terms of employee experience. It has just shifted what’s possible, giving people more flexibility, and then thinking about the risks of not being prepared with the right technology.

James Kotecki:

This brings up an interesting point, which is, I’m sure you’re familiar with Keynes’ idea from, I don’t know, the ’30s or ’40s, that by this moment in his future, our present, we would all be working something like 15 hours a week. Right? I think it was an essay called Economic Possibilities for Our Grandchildren. And that’s been analyzed famously a number of different ways. But the question has always been: Why hasn’t all this AI and automation allowed us to work less and do other things more? What you’re talking about with caring for children or other family members, I mean, frankly, I’m speaking from my own personal experience also as a person with two young kids in the house. Even though my wife is able to help out significantly as well, and we try and tag-team it, I am probably working less than before.

James Kotecki:

I know people who don’t have kids who are reporting working more, so it’s maybe a mixed bag. But think about the ongoing childcare crisis of the fall, when school starts again, and maybe it’s still virtual or some kind of hybrid model. We may just want to, as a society, take another look at the concept of 40 hours a week, or for many white-collar professions, maybe 50 hours a week, or whatever that was. Are we finally at a moment, you think … Because we’ve had the technology. Right? We have the automation in place, but people often choose to work more and fill those hours, even if they can be automated. Do you think we’re at a kind of cultural moment where we might adopt that promise, or move toward that promise, that Keynes made so many decades ago?

J.P. Gownder:

It seems that the fundamentals on that issue are rather more economic than technological, although I would say that with regard to AI, I mean, we remain at a very nascent stage of that long revolution. You like to say, “Well, there was an AI winter in the ’50s and ’60s, and then another one in the ’80s.” And now we may be having another one. And this gets to this issue of overly inflated expectations about what AI can do. The way I think of the Keynes proposition is, if we look at the structure of our economy, there is extremely high asset inflation in things like housing and education. So while the overall inflation rate doesn’t look so bad, people actually have to work an awful lot to get ahold of the money to make ends meet on those things. And then even if you’re fortunate, like you and me, to be in the sort of knowledge-worker category, very privileged in many ways, even so, those are very competitive jobs where the expectations tend to grow rather than shrink.

J.P. Gownder:

So I don’t know, I tend to think that it would be great if we did rethink some of those things. And there are organizations that certainly have been trying this and that out. But I don’t know that this will be the inflection moment to be honest with you.

James Kotecki:

I definitely want to talk more about the potential AI winter and what AI can actually do at this moment. But I want to stay on this topic of being human-centered, because I know that’s been a big theme for you in your research. And obviously, these things intersect. Right? So there’s this idea that the technology is out there. And people might read an article about some demo, some amazing thing that AI or some other kind of automated technology can do. And then there’s the practical reality: Does that actually get implemented on a day-to-day basis by businesses? And you have said that the biggest challenge for most businesses in implementing this stuff is not the technology, it’s the human side of the equation. Can you explain more about what that means?

J.P. Gownder:

So being human-centered is a recognition that we’re not at a stage where AI is just going to take over all different sorts of categories of workflows in your organization without an awfully long process of integration and training. Most AI workloads are augmenting human labor. Humans remain in the loop. Even in highly automated situations, human talent and creativity and judgment remain important. Continuously acting as a subject matter expert remains important. Retraining algorithms is important. So over the last few years, we’ve done a lot of research here. And I’m not diminishing the challenges associated with the technology side. There are many challenges, but they are knowable and discernible. And the people in charge of solving those problems are well trained to deal with them, even when there is complexity.

J.P. Gownder:

What instead tends to be the problem is that your people, meaning your non-technology staff whose jobs are somehow transformed by their interaction with AI and automation, maybe they don’t have the skills, the inclination, the experience, or really the support to help them operate with that new system. It’s also the case that leaders often have antiquated notions. To take a quick example, as leaders we tend to expect people to succeed at the efforts they make. And when you’re in the realm of AI, you actually have to build in a pretty good failure rate. Right? The system will not be doing its job if it’s always correct. It’s either doing something really tautological, or it’s actually not solving problems.

J.P. Gownder:

And then there are the org structures. It’s great to invest in technology, but do you have a sort of governance? Do you have ethics? Do you have people who are ethicists? Do you have legal input? Are you bringing in HR to actually embed AI into the organization as if it were another, albeit different, kind of worker?

James Kotecki:

How many ethicists do you think are qualified to do this kind of work? Because obviously, it requires some kind of fundamental understanding of the technology as well. This isn’t ethics that you can do in the abstract. Right? You have to understand what is actually going on. And from my experience, colloquially, I’m sure you find this as well when you say that you do AI and automation research: most people probably completely misunderstand what you’re talking about.

J.P. Gownder:

Right.

James Kotecki:

Think of sci-fi.

J.P. Gownder:

Well, I think you’re right. I think it’s an interesting area for growth. And right now, the people who are getting into the ethics area have to be people who have started elsewhere. Probably they’ve come up through the tech ranks, or the legal ranks, or something like that. These questions do lend themselves, however, to participation from folks from other areas.

J.P. Gownder:

I’ll give you the example of bias that tends toward racism, where you’ve not used training data that actually encompasses lots of different kinds of people. Take something like facial recognition, where you’ve not used nonwhite faces. This comes up more often than one would hope. Well, ethicists who work on anti-racism and systemic bias in the workplace, look, they have a lot to contribute to that conversation, understanding that they may not be experts in how algorithms work, or how training data works, et cetera.
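
One way to make that concern concrete is to audit training data for representation before a model is built. Below is a minimal sketch in Python, assuming the examples live in a pandas DataFrame with a self-reported demographic column; the column name, file name, and 5% threshold are illustrative assumptions, not anything from the conversation.

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, column: str,
                         min_share: float = 0.05) -> pd.DataFrame:
    """Flag demographic groups whose share of the training data falls
    below a minimum threshold (the threshold is an illustrative choice)."""
    shares = df[column].value_counts(normalize=True).rename("share").to_frame()
    shares["underrepresented"] = shares["share"] < min_share
    return shares

# Hypothetical usage:
# faces = pd.read_csv("training_faces.csv")
# print(audit_representation(faces, "self_reported_race"))
```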

James Kotecki:

What do you think is more … I’ve asked this of another guest before as well, but I’m curious on your take. What do you think is more insidious? Those kinds of examples of overt racism and bias, or the much more subtle things that … And I guess this is kind of a leading question. But what about the much more subtle things, where people might not even realize that systems are biased, or that they were biased when they programmed them, or that the data they got was biased in subtle ways that aren’t as easy to catch and don’t make for flashy headlines, but still might steer AI systems toward making decisions that, if we could evaluate them, we would see as biased?

J.P. Gownder:

I think that maybe is a bit of a false dichotomy, in the sense that any bias … So look, some bias is stuff that you actually want in your model. Right? I mean, there is such a thing as positive bias, not in ethnicity, or race, or gender, or other issues of that sort; the technical meaning of the word bias, as it pertains to building a model, is different from these kinds of systemic biases. But I would argue that maybe some of the ones that seem subtle actually wind up having huge implications. So Apple had the recent problem with their credit card, where again, they made a very simple and subtle error around, I guess it was multicollinearity, where something that predicted your gender as being female was being used as a driver for a certain outcome on whether to extend credit. The bottom line is, a lot of women didn’t receive credit simply because of the way that the algorithm made its determination.

J.P. Gownder:

So it’s a seemingly pretty subtle, basic error, and it had these huge ramifications. And because it’s Apple, it got exposed. But any company could do that, and it might not get exposed, so it might not make a headline, and yet it may have these huge implications. It just gets to this issue, which is that for AI, we need to have more comprehensive governance that includes everything from ethics to explainability to accountability to bringing together all of the operational side of a model. Is it working properly? Is it being retrained continuously? Is it getting better? Does it need to be retired? But also, wedding that to some of these broader customer-relevant and employee-relevant issues.
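
A note on the proxy effect described above: one cheap first-pass screen is to check how strongly each model input correlates with a protected attribute before the model ships. Here is a minimal sketch, assuming a pandas DataFrame of numeric features; every column name is hypothetical, and simple correlation is only a screen for candidate proxies, not a full fairness audit.

```python
import pandas as pd

def proxy_screen(df: pd.DataFrame, protected: str,
                 threshold: float = 0.4) -> pd.Series:
    """List features whose absolute correlation with a protected attribute
    exceeds a threshold -- candidates for acting as hidden proxies."""
    corr = df.corr(numeric_only=True)[protected].drop(protected).abs()
    return corr[corr > threshold].sort_values(ascending=False)

# Hypothetical usage: flag features that track an applicant's gender.
# applicants = pd.read_csv("applications.csv")
# applicants["is_female"] = (applicants["gender"] == "F").astype(int)
# print(proxy_screen(applicants.select_dtypes("number"), "is_female"))
```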

James Kotecki:

And how much of that is technological? And how much of that is educational, philosophical, and even political in terms of the things that need to change?

J.P. Gownder:

I think, to your earlier point, it is a combination. I mean, I tend to think that … I’ll give you an example. If you just managed AI the way that you manage data, master data management, there would be some gaps, because what AI is trying to do using probability is creating new conundrums that the business isn’t used to dealing with. And people aren’t very good at probabilistic thinking in the first place. So I think unfortunately, it’s one of these cases where you’ve got to bring everyone together to do this internally. My colleague Michele Goetz and I did a whole bunch of interviews around this. And we found that even some really large, sophisticated technology companies haven’t mastered every dimension of AI governance. And that’s because it is a moving target. It is a new set of concerns.

J.P. Gownder:

The ROI of it isn’t always immediately obvious. It may be a risk mitigation play to have certain dimensions of governance in place. And you’ll see that these things will blow up at some point, and everyone will realize, oh, well, we need to be doing that. So there is this sort of sociological way that these things become part of our action.

James Kotecki:

I want to move on to another ethical issue, which is the issue of jobs and job loss. Are people going to lose their jobs? And it’s wrapped up in the ethical issue because there’s the question of what responsibility businesses have and what responsibilities governments have. And we talk about policies like Andrew Yang’s basic income idea, for example; that was the first time the idea really popped up on the presidential scene, but these things have been discussed before. Let’s just level-set right now. People are talking about COVID-19 as being an accelerant to automated job loss. What are you actually seeing right now, before we get into the ethics of what should happen?

J.P. Gownder:

Well, what we are seeing is a lot of companies very rapidly looking at automation and AI as sort of the next wave of investment. Some of it is very short-term. There were problems created by the COVID-19 crisis that needed to be dealt with, and dealt with in a remote and immediate fashion. I interviewed a company that makes, of all things, disposable sanitary wipes, a very good business to be in. But the problem is that when this hit, they were overwhelmed by requests coming through their channel. So they had to triage and sort of prioritize: Who are we going to send some of this to? Their ERP system didn’t do this. So within six days, they conceived, developed, and deployed an RPA bot that was able to do this triage. And that was a huge thing, because they really needed to plug this into their system.

J.P. Gownder:

But over the longer term, companies are going to reassess, and they’re going to say, “Number one, can I afford to bring back a person? If not, maybe I might work with them on a contingent basis.” So people who were previously full-time employees may find themselves contractors without benefits. That’s a problem, certainly for the people. And then the other piece will be: Do I actually accelerate my digital transformation such that I automate away processes that were done before by human beings? I have a kind of 10-year forecast of where jobs are going based on AI and automation, and this would create a sort of discontinuity, where things accelerate and jump up faster than originally thought.

James Kotecki:

And how much of that, as a business leader, should I be thinking about from an ethical perspective? I mean, I think about it like this: if I was making horseshoes manually, and then there was a factory to make horseshoes, and because I wanted to employ all my people, I just kept the manual methods and never automated, well, pretty soon I would go out of business. And the automated folks would win. Right? And all my people would be out of work. Right? So I wonder, I think sometimes in society, these kinds of issues are put onto individual business leaders: How could they possibly automate that swath of the workforce? And there may be some discussions there. It strikes me, though, that this is much more of a societal and government policy debate that needs to happen, rather than asking individual business leaders to make a choice between potentially going out of business or trying to save the people they have in the company right now with outdated methods.

J.P. Gownder:

I would say that you’re right that government policy is needed here. But I don’t think it’s all on government, for the following reason: customers increasingly take an interest in the ethical operations of your organization. You have seen, for example, a number of blow-ups over Amazon, small boycotts if you will, about the way that workers were treated in various contexts, from warehouses where, before the crisis, maybe people didn’t get breaks, and there were a lot of bad stories about that, to, during the crisis, whether people were actually being exposed and getting sick. And it’s not just Amazon. It’s across the board.

J.P. Gownder:

So you as a leader need to think about: What is your brand proposition? And if you are violating that in the way that you’re treating human beings that work for you, that’s an issue. There are alternatives, by the way, to simply laying people off, like furloughs. However, if you do furlough people, they have a reasonable expectation that they will be coming back at some point, so treat that decision ethically as well.

J.P. Gownder:

One other thing I noted in some recent research I’ve been doing is that companies that are in touch with social trends tend to see things on the horizon more quickly. That is to say, if you had a great diversity and inclusion approach at your company, you would not be surprised when Black Lives Matter became a huge issue, because you would be getting feedback about that. However, the same is true, honestly, for AI and automation. Although there have been miniature moments of hubbub, we haven’t seen a big backlash like the Luddites of the 19th century yet. And so you have this risk of engendering brand damage, as well as undercutting how customers are going to think about you.

J.P. Gownder:

But then, can you actually recruit people later? What will happen down the line when all of those people that you sort of blithely got rid of are reviewing you on Glassdoor? And the last thing I’ll say on this is that, consistently, we find that there’s a set of companies that try to over-automate employment, and they generally fail. This was true for Tesla, if you recall. They blamed some of their big production problems on over-automation, physical automation to be fair, but the same principle applies. And what you will find is that there is all of this IT knowledge, business-process knowledge, and knowledge of product and customer, and you didn’t realize that the person who didn’t make as much money as you thought was actually crucial to your business. So it needs to be done not just on an ethical basis; practically speaking, making these sweeping changes can blow back on you.

James Kotecki:

So we’re keeping that human-centered view that you’ve kind of had throughout here, which I appreciate. I wonder, though, and this is really more of a speculative question, but I wonder: does that hold in the long term? So there’s the idea that you can’t automate everybody right now, even for practical reasons, because you still need people. And I think we’ve certainly found that in our business at Infinia. People that try to over-automate, I agree with you, are kind of in for a reckoning. But the question is: How good does this technology eventually get? And can businesses eventually figure out ways to adapt it in ways that really do replace people? Right? So the argument that technology can’t replace people now is not to say that it won’t be able to do so in 10 years, 20 years, 50 years. What’s your long-term prognosis for all this?

J.P. Gownder:

Well, I think you’re right. I mean, we don’t see many elevator operators anymore. When I was a kid, you could go to New York and visit maybe a fancy old building, and there would still be an elevator operator in the ’80s or whatever. But you don’t see a lot of that. Right? You see that certain job categories simply do go away. And one of the dangers that we have right now is that it is across the board. It’s not localized to one kind of job. We find, for example, in the area of medicine, that if you are a radiologist, you need to keep a really strong eye on this in the 10-year timeframe, because we’re finding that AI can identify tumors and things like that more effectively in some cases than humans can. I don’t think it’s ready to replace the radiologist.

J.P. Gownder:

But imagine if you’re a radiologist, and you can rely on this tool, and maybe you can do the work of five radiologists; that means there won’t be future growth there. So over the long haul, the belief we have at the moment is that by 2030, 29% of today’s equivalent jobs will be lost to automation and AI. However, the automation economy also creates some new jobs, equivalent to about 13%. That still leaves a 16% gap, which is, by the way, quite a bit lower than many of the prognosticators out there.

J.P. Gownder:

Nevertheless, the bigger issue is going to be job transformation, where all of our jobs are touched in some way by passing off tasks that we used to do to an intelligent machine, and then hopefully upskilling to make ourselves better. In the US, this is challenging because of our education system and its cost structure. If you are an employee, especially below a certain level, and the onus is on you to upskill, you may not be able to afford the loans to do that. So one of the big issues we have right now is basically finding our way to a new covenant around continuous education for everyone.

James Kotecki:

And obviously digital learning, online learning, is another thing that’s exploding right now, potentially being accelerated by this. I wonder, you mentioned it’s hard in the US. Are there other countries that you think are doing this better?

J.P. Gownder:

Well, in Germany, there’s a very tight coupling between schools of various sorts, whether those are colleges and universities or the very extensive vocational training for people who are doing factory work, that sort of thing, and the companies themselves. So there’s a considerable feedback loop. There’s a lot of potential for retraining. They just have infrastructure in place that makes this very natural. In other words, if you’ve always hired out of a particular school for your factory, you have a relationship. And if you need to make that more continuous, that’s not a bad thing.

J.P. Gownder:

And then there are other countries where you could also say, “Well, look, the cost of education hasn’t inflated to the degree that it has in the United States.” And maybe in some of those places, people would have a little more access. But there’s still innovation needed in this space to match people with these evolving skills, because candidly, if we’re having six-week releases in software, guess what, there may be a need for continuous learning that goes beyond some certification.

James Kotecki:

I think that is actually a great topic for entirely another conversation. But I want to wrap up with one more question, a bit of a personal question because your Twitter bio says that you’re an aspiring SFF author, which I assume means sci-fi fantasy.

J.P. Gownder:

Correct.

James Kotecki:

What is a favorite work of fiction that you love, that you think reflects a realistic view of where we’re going?

J.P. Gownder:

Well, I’m going to give you three briefly, because I don’t think there’s just one. I think Octavia Butler’s Parable books really nailed a lot of what’s going on right now: the sort of tribalism, the income inequality, the continuing problems with racism. They’ve been very prescient. But if you look at her work, you would have a very dim view of the future.

J.P. Gownder:

A second one that’s a little brighter and a little more recent is by L.X. Beckett, an author who wrote a book called Gamechanger that takes place maybe 100 years from now, where we’re using AI as a coordinator to help us solve climate change and decarbonize the economy. And there’s an entire economic theory that has sort of taken the best of capitalism and more state-mandated direction and put them together.

J.P. Gownder:

And the last one, if you’re interested in a really interesting take on what an AI’s thinking might be like even 50 years from now, because there’s a lot of hype around the perfect AI: Kim Stanley Robinson has a book called Red Moon. And there’s an entire section of that book that takes this highly statistical, very machine-centric view of what an intelligent AI system might look like. It doesn’t just mimic human beings; it’s something completely different. So some combination of dystopia, hopeful AI, and a machine-centric view, and hopefully there’s truth in there somewhere.

James Kotecki:

We will take it. We will take whatever we can get in 2020 and look for the bright spots, but prepare for whatever’s coming. And thank you for helping us do it and think through some of these issues, J.P. Gownder of Forrester. Really appreciate you being here today.

J.P. Gownder:

Thank you so much. It was fun.

James Kotecki:

And thank you so much for watching and/or listening. Machine Meets World is an Infinia ML production. And if you’re watching the video, we have it as a podcast. If you’re listening to that podcast, hey, guess what, you can check it out as a video. You can also email the show at mmw, Machine Meets World, mmw@infiniaml.com. I am James Kotecki. That is what happens when Machine Meets World. And ending the broadcast.
