Resources / Blog

NY Times AI Reporter Cade Metz on His New Book, “Genius Makers”

Join Infinia ML’s Ongoing Conversation about AI

Episode Highlights from Machine Meets World

This week’s guest is Cade Metz, New York Times AI reporter and author of the new book Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and the World.

Highlights from our conversation include:


“AI researchers and executives . . . talk about the fact that they really want to build AI that is safe and is not harmful. And this concern about AI harming the world, even destroying the world, is something that gets talked about a lot.

And yet at the same time, these same folks are pushing the technology forward so quickly, they’re not paying attention, I would argue, to real concerns now.”


“It’s not just the weaponization of these technologies, it’s the bias that can show up in AI, the harmful bias against women and people of color. It’s the disinformation that these technologies can help generate, so-called deep fakes, which are fake images and video that look like the real thing. Fake text that looks like the real thing.

“These are all questions and issues that we’re not only grappling with as a society, but the people who built the technology are grappling with.”


“You have AI experts who believe AGI is going to happen within the next five years. And then you have equally well-educated, equally influential, equally intelligent people who will say, ‘That is just patently ridiculous.’”

From the cover of “Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and the World.”

Audio + Transcript

Cade Metz:
Concern about AI harming the world, even destroying the world, is something that gets talked about a lot. And yet at the same time, they’re not paying attention, I would argue, to real concerns now.

James Kotecki:
This is Machine Meets World, Infinia ML’s ongoing conversation about artificial intelligence. I am James Kotecki and my guest is Cade Metz, New York Times reporter and author of the new book Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and the World, which comes out on March 16th. Cade, welcome to the show.

Cade Metz:
Glad to be here.

James Kotecki:
Cade, you’ve been covering AI, Silicon Valley, technology for years. What was your inspiration to write this book?

Cade Metz:
What I tell everyone about the real inspiration for the book is essentially its main character, a guy named Geoff Hinton, who studied AI in the early ’70s at the University of Edinburgh and eventually wound up in the US and Canada. He’s an amazing person in many respects and, in a lot of ways, is personally responsible for pushing a lot of the technologies that have come to the fore in recent years into some of the biggest companies on earth. And that’s really the thrust of the book. It is a sweeping narrative, but I feel like the strength is really the characters and the people like Hinton, who believed in basically one idea and then pushed it into industry, and then what happened once they did that.

James Kotecki:
Are there major turning points for the history of AI that hinge on idiosyncrasies, specific personality traits of these characters where history of the technology could have actually gone in a different direction?

Cade Metz:
Absolutely. And one key moment is the prologue to the book: the moment when Geoff Hinton essentially auctions off his own services and the services of two of his grad students. Companies from China and the US, companies you know, vied for this technology, essentially computer vision technology that he and his two students had built.

James Kotecki:
Not giving too much away from the prologue but, of course, Hinton decides to go with Google, an American company, over a Chinese company, even though it’s possible, very possible, that he would have gotten more money had he allowed the auction to continue. Is there a central tension here between the practitioners, the people who are developing this technology, and their motivations for doing it versus these giant companies who are, of course, on a fundamental level motivated by money?

Cade Metz:
Absolutely. So the first half of the book is about these very idealistic academics who have nurtured this one idea, the idea of a neural network, a system that can learn skills on its own, whether it’s computer vision or recognizing commands that you speak into a cell phone. A lot of them, including Hinton, did have this idealistic vision for it. Google ended up working with the Defense Department, and that caused a clash with these idealists, including Hinton, who don’t want their technology used in that way, and that’s what the second half of the book is about. And it’s not just the weaponization of these technologies, it’s the bias that can show up in AI, the harmful bias against women and people of color. It’s the disinformation that these technologies can help generate, so-called deep fakes, which are fake images and video that look like the real thing. Fake text that looks like the real thing. These are all questions and issues that we’re not only grappling with as a society, but the people who built the technology are grappling with.

James Kotecki:
Was there any other way for this to have gone down where you wouldn’t have had these concerns around these ethical issues that we have today? Could it have gone in a different direction?

Cade Metz:
Not really. This is how Silicon Valley, as they call it, works. You take these ideas and you amplify them, and you push them forward as fast as you can, and you promote them, and you talk them up, and you hype them. That’s the way you attract the money and the talent you need to make them work. And you don’t necessarily worry about the consequences. What we need to think about now is, given everything that has happened over the past few years, maybe we should and can be more careful about these things. That’s what Silicon Valley is struggling with.

James Kotecki:
I want to talk for a minute about science fiction as a source of inspiration here. There’s a scene in the book where tech executives are attending a premiere of the TV show Westworld which is, of course, a show about sentient robots.

Cade Metz:
Right.

James Kotecki:
How important do you think it is to understand science fiction as a way of understanding what real people are actually attempting to do?

Cade Metz:
It’s essential. What many people don’t realize is if you walk into these AI labs, some of the top AI labs in the world run by these very large, very powerful, very wealthy companies, there are posters from science fiction films on the walls. You walk into a conference room and there is Stanley Kubrick’s 2001. At Facebook in New York, a lab run by a guy named Yann LeCun, that was not only one of his favorite movies growing up, but a seminal moment when he was growing up, and he’s got stills from the movie all over that Facebook AI lab in Manhattan. And that’s just one example.

Cade Metz:
The same thing is the case here near where I live in San Francisco, in the OpenAI lab founded by Elon Musk and another guy named Sam Altman, who was part of that Westworld event you talked about, and you have the same sorts of posters. And often the rooms are named after these sci-fi books and movies, but there’s an irony there, right? In 2001, things go wrong. They go very wrong. In a lot of ways, these AI researchers and executives see that, and they talk about the fact that they really want to build AI that is safe and is not harmful. And this concern about AI harming the world, even destroying the world, is something that gets talked about a lot.

Cade Metz:
And yet at the same time, these same folks are pushing the technology forward so quickly, they’re not paying attention, I would argue, to real concerns now. They’re looking into the future and this potential doomsday scenario, but what about all the stuff we’ve been talking about? In many cases, they look beyond that.

James Kotecki:
One of the things that people are looking to on the horizon is this idea of artificial general intelligence, AGI, AI that is as good as or better than the human brain. Is there a consensus view in Silicon Valley, the tech industry generally, about whether and when that’s going to be possible? My sense is that it’s all over the map, but I wonder if there is an emerging consensus.

Cade Metz:
You have it exactly right again. It’s all over the map. And I think that’s another thing people outside this field don’t realize: the AI community is not this monolithic thing. You often get headlines in even places like the New York Times that say, “AI Experts Say.” Well, that’s misleading. You have AI experts who believe AGI is going to happen within the next five years. And then you have equally well-educated, equally influential, equally intelligent people who will say, “That is just patently ridiculous.”

Cade Metz:
So it is a spectrum, but you do have this group of people, mostly centered around two very important labs: OpenAI, which I mentioned, in San Francisco, and a lab called DeepMind in London, which also plays a big role in the book, its founder, Demis Hassabis, in particular. Those two labs really believe in this AGI notion, and that’s their stated mission in each case: to build AGI. They believe it’s closer than a lot of the rest of the world thinks. But I think when you hear that you have to step back and say, “We’re still struggling to get self-driving cars to work. We’re still struggling to get robotics to do much more than basic tasks.” Let’s deal with that stuff first. And let’s realize when people talk about the AGI idea, that they don’t necessarily know how to get there.

James Kotecki:
DeepMind calls you and says, “Look, we’ve got it. We’ve built AGI.” What is it that they would show you that would convince you, a reporter for the New York Times who covers this beat, that they had actually done it for real?

Cade Metz:
It’s impossible to answer, right? What I have learned from years of covering Silicon Valley is that you can’t trust a demo. One of the things that has happened over the past few years is that reporters got into self-driving cars and they took a drive in a circle around Mountain View, California, and they got out and they said, “Self-driving cars are here. We’re going to have these all over the place tomorrow.” Those drives in a circle around Mountain View are incredibly misleading. They show you only a tiny picture of what the technology can do, and you’re not processing all the chaos in our daily lives that the car has to deal with.

Cade Metz:
Google demoed its self-driving car in 2010. It’s now 2021. We’re a year past when Elon Musk said these cars would be everywhere, and it’s still not going to happen for a while. And finally, I think, the tech press is realizing that this is so much further off than they thought. When I get that call from DeepMind and I go to London and walk into their office and they show me what they’ve got, I know to be skeptical, at the very least. Right?

James Kotecki:
Yeah.

Cade Metz:
It’s a great question.

James Kotecki:
Cade Metz is a New York Times reporter and the author of the new book, Genius Makers, and it comes out on March 16th. And Cade, I found it gripping, by the way.

Cade Metz:
Great to hear.

James Kotecki:
It’s so fascinating to put it in context of these people. Congratulations.

Cade Metz:
Thank you.

James Kotecki:
The book comes out on March 16th and thank you so much for being on the show.

Cade Metz:
I enjoyed it. Honestly, great questions.

James Kotecki:
Well, thank you so much. And thank you so much for listening and/or watching. You can always email the show, mmw@infiniaml.com. Please like this, share this. Give the algorithms what they want. I am James Kotecki and that is what happens when Machine Meets World.