
Glimpse Group CEO Lyron Bentovim: AI Will Soon Fool Us In VR

Join Infinia ML’s Ongoing Conversation about AI

Episode Highlights from Machine Meets World

This week’s guest is Lyron Bentovim, CEO of The Glimpse Group. Highlights from our conversation include:

“The AI element that comes into VR is bringing avatars to life when there are no humans behind them.”

“When you come in and you see an experience and you see an avatar and you don’t know if that’s a real person or an AI, that is kind of like that ‘wow’ moment where VR and AI get together.”

“When we’re in VR, our perception of reality changes, because we’re accepting cartoon avatars as real people. So AI has a lower threshold of being real in there because that whole scenario is surreal.”


Audio + Transcript

Lyron Bentovim:
You see an experience and you see an avatar and you don’t know if that’s a real person or an AI — that is kind of like that “wow” moment where VR and AI get together.

James Kotecki:
This is Machine Meets World, Infinia ML’s ongoing conversation about artificial intelligence. I am James Kotecki and my guest today is Lyron Bentovim, CEO of The Glimpse Group. They do VR and AR training software. Welcome to the show.

Lyron Bentovim:
Glad to be here, James.

James Kotecki:
AI, VR. These are great buzzwords.

Lyron Bentovim:
We’re in a very buzzword industry, but I’ve seen so many “wows,” back in the good old days when we actually had people come visit us on a regular basis. People hear you talk about it and feel that it’s interesting. But when they put the headset on, suddenly they can dream. And they want to dream with us.

James Kotecki:
So give me an example of one of those “wows,” where the surprise and delight is not just from the quality of the VR, but is also somehow related to the AI that’s a part of your software.

Lyron Bentovim:
The AI element that comes into VR is bringing avatars to life when there are no humans behind them. Once you’re in VR, even though it’s a computer-generated world, and to anyone looking at it on a screen it looks like a game, when you’re in there, your brain wants you to feel it’s real. And then when you come in and you see an experience and you see an avatar and you don’t know if that’s a real person or an AI, that is kind of like that “wow” moment where VR and AI get together.

James Kotecki:
It’s kind of like a VR version of the Turing Test, in a way, where you’re talking to something and you’re not sure if it’s a bot or not. And if it can convince you, it passes.

Lyron Bentovim:
Yeah, exactly. And that’s really where it is, because the original test was you put someone in another room, they communicate with you, and you try to figure out whether their communication is real or not. In VR, you and I can be avatars, and there could be a third avatar in the conversation with us that is not a human.

Lyron Bentovim:
We each have our own avatar, but we all look like avatars. There’s no way to distinguish them. The motion of our hands and faces is the same. The only thing that comes out is what we say. And the true Turing Test would be when that avatar can pass a long discussion and nobody would know which are the two real people and which is the AI.

James Kotecki:
And does being inside VR do something to the brain where it’s either easier or harder to pass that Turing Test? Maybe some of my skepticism, for example, is already turned off because I’m accepting this video game reality as a bit more real than I would if I were just looking at a screen.

Lyron Bentovim:
When we’re in VR, our perception of reality changes, because we’re accepting cartoon avatars as real people. So AI has a lower threshold of being real in there because that whole scenario is surreal.

James Kotecki:
If I’m sending an email to a lawyer and I get an email text back, in some ways it might as well be a bot. But if I’m in VR talking to an artificially intelligent bot, but that looks like a human, then that is actually the more real interaction to me.

Lyron Bentovim:
Yes. That’s a perception. You’re sitting in someone’s office, you perceive them as a person. And you’ll talk about them. You go back home and tell your significant other, “Oh, I was sitting with this lawyer and they were saying that we need to sign this agreement.”

Lyron Bentovim:
You’ll talk about that person rather than, “Oh, I got this legal service that’s an AI and they recommended we sign this agreement.” Then you will take that advice differently, even though it’s coming from the same source, just because you were sitting with that person. And I think that’s where VR and AR will really catapult AI to where its potential is going to be fulfilled.

James Kotecki:
We’re talking about a plausible near future here. But I want to ground this in the reality of what’s actually available today. So what’s the cutting edge of what you’re working on and what real customers of yours are using day to day?

Lyron Bentovim:
So, the limitations are obviously coming from the AI field, not from the VR field, because in VR, avatars are getting more and more lifelike, and it doesn’t matter whether a person is role-playing them or there’s an AI behind them. In terms of the AI’s ability, it’s all about learning enough from a conversation to hold a conversation with a human.

Lyron Bentovim:
So let’s say we have our scenario where there’s role-play. Someone is getting trained and someone is training the other person. What we are doing with our customers is trying to capture this dialogue, have it happen naturally between the two (or more) people doing that dialogue, and run it through AI to get to the point where I can put the AI in place of the person doing the training and it can interact enough with what’s out there.

Lyron Bentovim:
So that’s the cutting edge where we’re trying to be right now. The basic ones are more canned responses. If I know what questions you might ask, I can have the AI avatar come back with the logical responses, but obviously that’s kind of baby AI. It does its thing; it can answer basic questions.

Lyron Bentovim:
And as long as you don’t deviate from the script, at least you get the feeling that you’ve had some interaction without me needing to put someone on the other side to interact with you. The challenge is to get the AI to the point where it can deviate from the decision tree you give it and have enough behind the scenes to know how to deal with that. And that’s where we are on that cutting edge.
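[Editor’s note: a minimal sketch of the canned-response approach described above — expected questions mapped to scripted avatar replies, with a fallback when the trainee goes off script. The keywords and replies are hypothetical, not Glimpse’s actual training content.]

```python
# Hypothetical canned-response dialogue: keyword-matched scripted replies
# plus a fallback for off-script questions. Illustrative only.

SCRIPT = {
    ("price", "cost", "how much"): "Our enterprise plan starts at $20 per seat.",
    ("demo", "trial"): "I can set up a two-week trial for your team today.",
    ("cancel", "refund"): "You can cancel anytime; refunds are prorated.",
}

FALLBACK = "That's a good question. Let me get back to you on that."

def avatar_reply(utterance: str) -> str:
    """Return the scripted reply whose keywords match, else the fallback."""
    text = utterance.lower()
    for keywords, reply in SCRIPT.items():
        if any(keyword in text for keyword in keywords):
            return reply
    return FALLBACK

print(avatar_reply("How much does this cost?"))    # matches the script
print(avatar_reply("What's your favorite movie?")) # off script: fallback
```

The limitation Bentovim describes falls directly out of this structure: any utterance outside the keyword table hits the same fallback, so the interaction only feels real while the trainee stays on script.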

James Kotecki:
So it’s about scaling the ability to do training by capturing a real training, and then being able to remove one of the components of that training, which is the human trainer, and replace it with an AI. Ultimately, that’s the goal.

Lyron Bentovim:
Exactly, yes.

James Kotecki:
And how do you monitor an AI where success by definition is its ability to answer things that you never knew it was going to be asked?

Lyron Bentovim:
The test, for me, is usually not telling the person in there that it’s an AI. When they don’t ask, “Is that a real person out there or not?” and just assume it’s a real person, making comments about their personality or how they dealt with their side of the equation, that usually means you’ve done something well.

James Kotecki:
And by that definition, by your own test, what is your pass percentage right now?

Lyron Bentovim:
The AI is not there yet. We’re using what’s out there; we’re not an AI company. So we’re taking things that exist and are reasonably available, things you can pull into an engagement. It’s good, but not good enough. But it’s definitely getting there.

James Kotecki:
And what’s the timeline there? When can we expect to reach whatever the threshold would be, a 70, 80, 90% pass rate, based on your definition of success? You look at the market, you see the vendors.

Lyron Bentovim:
I believe that in the next three to five years, that would be something that is doable.

James Kotecki:
If you are able to solve the VR conversation challenge with a bot, an avatar that seems real, will that also effectively solve the issue for chatbots and other non-VR forms of business communication, or are they totally separate issues in your mind?

Lyron Bentovim:
No, they’re the same. I think the added element, which we haven’t really touched yet but which you’ll need to make it very effective, is a layer of mannerisms. It’s not only what you say. For a chatbot, a Siri-type phone service, or VR, that part is all the same: you take the text you want to convey and then speak it out loud, or just write it if it’s a chatbot.

Lyron Bentovim:
In VR, you need to have mannerisms. If I’m sitting there telling you, “You’re fired,” and I don’t even move my hands, and I don’t even look you in the eye, it wouldn’t be as . . . If you want to train someone on how to deal with that, it won’t be the same.

James Kotecki:
It’s like emotional metadata, almost, that you need to go with the actual text.

Lyron Bentovim:
But you will need the AI to learn that. And a lot of people are analyzing it, because one of the things missing from VR avatars, even for real people, is facial expression. At this point I control my voice and my hands and my head, but not my facial features, which are a very important part of what we’re doing. A lot of people are developing technologies to start capturing that and tie it to the voice, so that when I’m saying something, my avatar reflects it as it reads my voice.

Lyron Bentovim:
And then once you have that, AI can have a field day with all that data, trying to contextualize it, so that if I’m saying, “You’re fired,” the AI can not only say those words when the time is right, but actually act them out in the right way.

James Kotecki:
So much of what you’re building is a potential ethical minefield. It’s mind-boggling, there’s a lot of philosophy under this and existential things that we can’t even begin to touch on right now. What ethical principles guide you as you do this work?

Lyron Bentovim:
You want to do the right thing. And VR as a whole, when you mix AI into it, as you said, opens up a lot of those dilemmas. One of the reasons I want to be involved in this is that I want to make sure I shape this world the right way rather than the wrong way. You can use AI for good, rather than creating a world where AI is robotizing us instead of us humanizing AI.

James Kotecki:
Lyron Bentovim, CEO of The Glimpse Group. Thank you so much for joining us today on Machine Meets World.

Lyron Bentovim:
My pleasure. Fun.

James Kotecki:
And thank you so much for listening and/or watching. Maybe one day we’ll experience this in VR together as well. You can always email the show. It’s mmw@infiniaml.com. Send us your comments and suggestions. Please like this, share this, give the algorithms what they want. I am James Kotecki, and that is what happens when Machine Meets World.