Deepgram CEO Scott Stephenson on Machine Meets World

Scott Stephenson, CEO of the speech recognition company Deepgram, talks artificial intelligence with Infinia ML’s James Kotecki.

Machine Meets World streams live on YouTube weekly. It’s also available in audio form on Apple Podcasts, Google Podcasts, Spotify, and Stitcher.

Recorded July 28th, 2020.

Killer Quote

“You can have a human listen to a conversation, you can have them infer what was happening, all of that… In the future, you’ll be able to have the equivalent of a much cheaper human do it, which would be the machine. And the cost of that will drastically reduce, and that will mean that we’ll have to come up with new laws and rules and ways of operating in society to deal with that.” [16:01]

More Highlights

How working in a “James Bond lair” led to the development of Deepgram. [4:14]

How doing particle physics is like doing deep learning for speech. [6:10]

How always-on recording can change human behavior. [19:58]

Show Suggestions?

Email mmw@infiniaml.com.

Full Transcript

James Kotecki:
Hey, and we are live from Infinia ML. This is Machine Meets World, and we’re talking artificial intelligence with my guest today, the CEO of Deepgram, Scott Stephenson. Welcome.

Scott Stephenson:
Thanks for having me.

James Kotecki:
And Scott, you and I were chatting a little bit before. You’re actually in your empty office, you said, which explains the kind of office-like plants behind you.

Scott Stephenson:
Yeah, this is the Deepgram ficus. Yeah. We’re in the empty office right now.

James Kotecki:
And I think you may be uniquely suited to talk about being far away from other people by virtue of your background, which is my somewhat awkward transitional pivot. Like, the founding story of your company is very interesting. It kind of has elements of like a secret, supervillain lair to it. I don’t know, kind of like… Tell me a little bit about what it means that you started your company two miles underground, or at least the idea.

Scott Stephenson:
It definitely does. So Deepgram is a speech recognition company. We use deep learning in order to build speech recognition systems for enterprises. But we got started out back in physics world. And so I’m actually a physicist. So I built deep underground dark matter detectors. Our CTO was an astrophysicist who built a telescope that did automated searches three miles above the ground; I was two miles below the ground. But nevertheless, what we were doing there, underground, was building a particle physics experiment that would sense dark matter.

Scott Stephenson:
And it’s very similar to experiments that were done a few decades ago to sense neutrinos, which were actually found. It was the same type of experiment; put it deep underground, use different materials and things like that. But nevertheless, they made these huge water tanks, like, you could scuba dive in them and everything. It was crazy. But nevertheless, for us, we used liquid xenon, which is cryogenic, you could never swim in it. It’s very, very cold, like -100°C and everything.

Scott Stephenson:
But yeah, we were searching for dark matter underground. We built these really sensitive detectors that are kind of like a really rudimentary camera, but one that is extremely sensitive; it can sense individual photons. And then you reconstruct the event from those photons, and you can figure out, inside the detector, where the event happened, how big of an event it was, what type of event it was, et cetera. And so that’s what we were building back in the day. It turns out that it’s actually very, very similar. Like, the science work that you’re doing there is very similar, but also the online system is very similar to doing speech recognition on audio.

Scott Stephenson:
The type of cameras that we had were photomultiplier tubes, individual pixels that sort of lined the inside of this detector. And they’re analog devices, they’re not digital, and you had to digitize those as a waveform, like this noisy waveform, and figure out what was going on inside from that. And you had hundreds of channels like that, and you essentially listened or watched that waveform in real time to figure out if there’s anything interesting going on inside. And you used machine learning to do that. So that’s what we built back then. We do a very similar thing now for like recorded phone calls and meetings and things like that for enterprises.
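
To make that concrete, here is a minimal, purely illustrative Python sketch of the kind of online pipeline Scott describes: digitize analog channels into noisy waveforms, watch them window by window in real time, and flag anything that rises above the noise floor for event reconstruction. The sample data, window size, and 5-sigma threshold are assumptions for the example, not the experiment’s actual parameters.

```python
import numpy as np

WINDOW = 1024  # samples per analysis window (illustrative)

def digitize(analog_trace):
    """Quantize an analog voltage trace to 14-bit ADC counts (illustrative)."""
    return np.clip(np.round(analog_trace * 2**13), -2**13, 2**13 - 1)

def is_interesting(window, noise_sigma):
    """Flag a window whose peak rises well above the noise floor."""
    return np.max(np.abs(window - window.mean())) > 5 * noise_sigma

def watch_channels(channels):
    """Scan many digitized channels window by window, yielding candidates."""
    for ch_id, trace in channels:
        sigma = np.std(trace[:WINDOW])  # noise estimate from a quiet stretch
        for start in range(0, len(trace) - WINDOW + 1, WINDOW):
            win = trace[start:start + WINDOW]
            if is_interesting(win, sigma):
                yield ch_id, start  # hand off to event reconstruction / ML

# Tiny demo: 4 channels of noise, with a synthetic pulse injected on channel 2.
rng = np.random.default_rng(0)
traces = [(ch, digitize(rng.normal(0, 0.01, 8192))) for ch in range(4)]
traces[2][1][5000:5020] += 2000  # inject a "photon pulse"
for ch, start in watch_channels(traces):
    print(f"candidate event on channel {ch} near sample {start}")
```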

James Kotecki:
Was it an obvious connection or leap to make, to go from that to this? I understand the technology may look similar once you kind of get into the weeds, but even just conceptually, what you’re doing, you’re a physicist, you’re in the mindset of trying to uncover the secrets of the universe, and now you’re running this tech company, so explain to me the kind of mental bridge that you took to get there.

Scott Stephenson:
So from the technology side, if you ask… Like, we have physicists at Deepgram. Many people that we knew who were great at using machine learning in this type of world, we hired at Deepgram. And if you asked them, what are you doing day-to-day? It’s very, very similar to what we used to do. But yeah, the jump was not so obvious. The thing that actually led us there was we were sort of underground building detectors, doing things like that, and just kind of thinking like, “Man, this is… it really is like we’re in a James Bond lair and we’re building all this crazy stuff. And when are we going to be able to do this again? Or like, how could we document this or something?”

Scott Stephenson:
So that’s the idea that got us thinking about, “Man, can we make a backup copy of our lives and maybe just record like video all day, every day or audio all day, every day, that kind of thing?” And so we built little devices that we just like stuck in our pocket, or just clipped to our shirt or whatever, that just recorded audio all day, every day, and ended up with like around a thousand hours of audio that way. It wasn’t just one person, we made a few devices, and like cajoled our friends into doing it and all that.

Scott Stephenson:
But nevertheless, we got a whole bunch of audio that way, and still the connection wasn’t exactly made. We were like, “Hey, we have all this audio, but whoa, it’s going to take a long time to listen to it all and figure out if there’s anything good inside. Is there some way to search it and find the interesting events? Like, the joke your friend told you, or that meeting that you had about building the dark matter experiments or whatever.” And we just looked for a product out there in the world. Does anybody have anything that can do this kind of searching or understanding inside the audio? We were super underwhelmed by that, and we were thinking, “Man, we could build something better than this.” That was the real like, “Maybe we should build something better than this.”

James Kotecki:
I guess, one more probing question on that is like, did it feel weird to like give… I mean, you’re a physicist, you went two miles underground to do experiments that are kind of like fundamental, basic science sounding kind of stuff. Right?

Scott Stephenson:
Mm-hmm (affirmative).

James Kotecki:
So was there almost like a philosophical shift that you had to take to like, “I’m going to stop doing that and start doing this”?

Scott Stephenson:
Yeah, it’s really interesting. So when you’re doing the particle physics side, you’re exploring the fundamental laws of the universe. Right? But the way that we look at it is, we’re exploring the fundamental laws of intelligence or information or something. So it is not known how to do this, to do deep learning properly in speech, and just have the problem solved. It is not known yet. You know certain things, though: you should gather a lot of data from many different types of people in different domains and different situations with different accents and everything like that; you should build a model that is very expressive, that has the ability to learn many things; and then you build a hardware device, a compute cluster, in order to train that model based on the experience of that data.

Scott Stephenson:
So you know those things, but how to make it work really well, extremely well… you’re at the very beginning. I think of it like, it’s 1900 and electricity is now just finally rolling its way out into the commercial world. You can use it for lighting the streets of Paris, and that’s the consumer use case, or you could use it to help mining or something like that. But in 1900, you don’t know about the cell phone that’s going to be in your pocket, and radio is still yet to be a thing, and TV and the internet and all of that; there are so many other things that are going to come after it. So for us, we’re like, “Hey, we’re discovering this way to do something in a new area, but it’s actually very similar.”

James Kotecki:
If you asked the average business person, like, is speech recognition solved? They might say, “Yeah, pretty much. Like, Siri is in my pocket, I can talk to Siri and it understands what I’m saying most of the time.” There are automated transcription services out there that I’ve used that get me 80, 85, 90%. It’s not as good as a human, but it’s pretty good. I remember, man, back in the ’90s, there was something called… I think it may still be around, it was called like Dragon Speech, where you talk and it types, and that was like the big commercial. So this technology has been out there and evolving. And I think also, we have the experience now of talking to automated agents on the phone. And they’re not perfect, but they’re certainly less annoying than they used to be. So how much of this is still unsolved? And is it that last little part that’s actually the hardest to solve?

Scott Stephenson:
The last part is definitely hard, but it really is more complicated than it seems. There are many languages in the world, there are many accents, there’s many different domains and things that you’d like to accomplish. Right now, most people’s experience with pretty good speech recognition or a pretty good speech interface is in the command and control environment, where they’re saying like, “Turn on the lights,” or, “Get me the recipe I was looking at last night,” or whatever. It’s not a conversation, it’s an order of a very simple thing, and that works pretty well now.

Scott Stephenson:
But there are actually three steps to that type of process. The first is perception. And what I mean by that is there’s an acoustic waveform coming out of some sensor, like your ears or like a microphone, and you have to somehow convert that into, maybe not necessarily what the person meant or what you should do about it, but just, what are the words that they said? Like, what did they say? Not what did they mean, what did they say? And then the next step would be like, well, what did they mean? And then the next step would be, what do I do about it? So we think about it as perception, understanding, and action, or interaction.
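
As a rough sketch of that three-stage split, consider the toy pipeline below. Every piece is a stand-in: perceive() fakes the output of a speech-to-text model, and the keyword matching in understand() is the kind of hand-written rule Scott mentions next, not any particular product’s logic.

```python
def perceive(audio: bytes) -> str:
    """Perception: waveform in, words out (what did they say?)."""
    # Stand-in for a deep-learning speech-to-text model.
    return "turn on the lights"

def understand(transcript: str) -> str:
    """Understanding: what did they mean? (Today: mostly hand-written rules.)"""
    if "lights" in transcript and "on" in transcript:
        return "turn_on_lights"
    return "unknown"

def act(intent: str) -> str:
    """Action/interaction: do something, often answered via text-to-speech."""
    replies = {
        "turn_on_lights": "Okay, lights are on.",
        "unknown": "Sorry, I didn't catch that.",
    }
    return replies[intent]

print(act(understand(perceive(b"\x00\x01"))))  # -> "Okay, lights are on."
```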

Scott Stephenson:
Like, the perception side in well-trodden domains, so like English; that’s where most of the development happened, in the US. American English, down-the-fairway type data works pretty well, but that’s in the specific use case of command and control, not conversational audio. If you took a phone call or you took a meeting and then tried to transcribe that, then the accuracy is actually a lot less, it’s way worse. That’s partially because of the data rate; like, you can hear it as a human, it sounds worse. There’s less information contained in that recording, and so it’s not as good.

Scott Stephenson:
But actually, you can bump up that accuracy by working specifically on models that work in that domain really well, and that’s what Deepgram works on. But there are also these other steps that are happening later, the understanding and the interaction. The understanding, if you asked me, it’s unsolved, man. Like, you can write specific rules, you can hire an army of engineers to make it work, but it is not solved in a real AI sense; it’s solved in a rules-based sense. So a lot of people write rules, and you take what was said over here in a limited domain and you move it into the understanding, and you say, “Okay, well, I’m either not going to do anything and say, ‘I don’t understand you,’ or whatever, and then maybe do something.” You’re just getting the first abilities out there now.

Scott Stephenson:
But the interaction side is actually pretty good now. Like, five years ago, it wasn’t, but it is pretty good now, like the text-to-speech. So rather than speech-to-text, where you have a recording and you’re trying to figure out what the words were, you have the words and you want to have a machine say them. That’s actually pretty good now. And so there’s a lot of development, but that middle understanding part is really not done yet, and there’s a lot to do there. And this is kind of where Deepgram is really going. Like, the secret underlying thing that is happening at Deepgram is that you solve perception really well, and you get it to work really well in these customer domains, and then you move into understanding, and then you move into the interaction, and then you have the whole suite for everybody to use.

James Kotecki:
Is it possible to even measure what we mean by understanding? Because even if I, as a human, am in a meeting with somebody, there could be three or four different levels of understanding that I might have. I can understand them on a straightforward level, I can understand that they’re being sarcastic, I can understand that they’re being very nuanced in what they’re saying, and they’re actually hinting at something that I know from some other context that maybe someone else in the meeting doesn’t understand. So how do you actually measure what understanding means?

Scott Stephenson:
It’s interesting. Like, if you’re trying to get a hundred percent accuracy across like a hundred thousand different conversations, that is a mountain too tall to climb right now. But if you’re looking for like 80% or 90% accuracy of disposition on phone calls in a call center, that’s totally possible: like, is this person mad, happy, neutral at the end of the conversation? That type of thing is totally possible. So there’s a continuum here of satisfaction.

Scott Stephenson:
I like to think of it like, if a person came into a conversation with no context, an intelligent, well-mannered person came into a conversation with no context, what would they be able to do? They would be able to tell you what the words are that the person is saying, they would be able to tell you how many people are there, they would be able to tell you, like, who is speaking when, they would probably be able to tell you general topics; sports, politics, et cetera, that type of thing. They would probably also be able to tell you about the people, like, “Hey, a male in his 30s, talking about whatever.” Like, all these general things.

Scott Stephenson:
But they won’t be able to get into the specific nitty gritty of like, does that person know what they’re talking about? It’s like, whoa, that depends. Like, it might sound like they know what they’re talking about, but if you’re very good in your domain, you’d be like, “No way that person, they’re going off the reservation there.” So, anyway. It’s a difficult problem. I like to think of it like, “Hey, we’re just at the beginning.” You should get to that general understanding level first, and then what you do is you build custom systems for each customer in order to understand their specific domain. But you should never be thinking like, “I’m going to solve this problem a hundred percent.”

Scott Stephenson:
Honestly, if you ask a person to listen to conversations, even people who are really in context on these types of understanding tasks, many times they only agree like 50% of the time. It’s very common for them to agree like 70 to 80% of the time, sometimes like 50% of the time, and sometimes like 90 or 95% of the time on these understanding tasks. So yeah, even humans don’t always agree.
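
For the curious, annotator agreement like this is typically quantified with raw percent agreement and with Cohen’s kappa, which discounts the agreement you would expect by chance. The labels below are invented to mirror the mad/happy/neutral disposition example; nothing here is from Deepgram.

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items where two annotators chose the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: (po - pe) / (1 - pe)."""
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / len(a) ** 2  # chance agreement
    return (po - pe) / (1 - pe)

ann1 = ["happy", "mad", "neutral", "happy", "neutral", "mad", "happy", "happy"]
ann2 = ["happy", "neutral", "neutral", "happy", "mad", "mad", "happy", "neutral"]
print(percent_agreement(ann1, ann2))       # 0.625, in the range Scott cites
print(round(cohens_kappa(ann1, ann2), 2))  # lower once chance is removed
```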

James Kotecki:
What you’re hinting at there, with the idea that a system could theoretically know that a male in his 30s was talking and things like that, kind of hints at some ideas around metadata that are very interesting. So you mentioned earlier in this conversation that we’re kind of right at the beginning of this revolution, and we don’t quite know how these things are going to be used. I think about how things like metadata are used by advertisers, or by the cops to identify us, or whatever, where you don’t actually have to know the underlying content of that phone call or that text message or whatever, but the metadata of the location, the time, et cetera, et cetera, can all be kind of pieced together to yield some really interesting insights.

Scott Stephenson:
Yeah, absolutely. It can be. It’s a really interesting sort of legal gray area right now. Again, I kind of like to fall back to what could a human do now? Where you can have a human listen to a conversation, you can have them infer what was happening, all of that, but it’s… In the future, you’ll be able to have the equivalent of a much cheaper human do it, which would be the machine. And the cost of that will drastically reduce, and that will mean that like, we’ll have to come up with new laws and rules and ways of operating in society to deal with that.

James Kotecki:
So I’m always very fascinated by those kinds of conversations. Have you given any thought to the legal framework that you want to see under which that can happen and your company can still survive?

Scott Stephenson:
Yeah. So it’s really interesting, because a lot of the rules… Well, it’s hard to see how it would play out in history, but a lot of the rules right now are based on what’s practical, actually. So like, if you go work at one company and then you leave and go work at another company, there are laws that say, hey, you can’t literally take their IP, their intellectual property, and go over to this other company with it. And you saw that happen with like Uber and other [inaudible 00:17:10] companies recently, where that did happen, and then there were huge lawsuits and whatnot.

Scott Stephenson:
But people can leave jobs and move to another company, and they take their expertise with them. And so this is a really interesting thing to think about, like with AI models, where you have models that you could build for one customer in one industry, and then now, could you use that model for another customer in the same industry? And to what specificity, to what level? And right now, we are scientists, we think about this in the most sort of… We’re trying to look at it in a way that is really just most beneficial to society, like, assume kind of the worst in a lot of ways, and just make sure that you’re on the right side of history on this, and always ask for permission and those types of things. But yeah, it’s going to have to be actually laid out in the future. There’s always a lag here though. You’re talking probably like 10 years until this happens.

James Kotecki:
And so the interesting thing that I think you’re driving at is like, imagine a world where we can record everything that we do all the time, and we can obviously do that now, but that that data then becomes searchable and practical. We have kind of a Google for our own lives and we can… anytime I vaguely remember an idea I had, I can basically do the equivalent of Googling it and I can find it and bring that up. So I guess, imagine… I was even thinking in your IP example, imagine everybody had a personal device like that, and I walked to the office, et cetera, and then I moved to another company. Now, I have thousands of hours of audio recordings of the specific conversations that I had.

James Kotecki:
Even if I somehow delete my former colleagues’ parts of that conversation for legal reasons, they still have my part of the conversation. Like, there’s a blurry line in that case between my own expertise and my IP. It doesn’t make sense to even define that line anymore.

Scott Stephenson:
Yeah. Yeah, absolutely. You probably will define zones where that is no longer cool to-

James Kotecki:
Record.

Scott Stephenson:
There are things that you could dream up here. Like, whenever a recording is… There’s like a geo-fencing recording, and hey, if any recording happens in this area, then it goes to the company, or something, even though it’s your device or whatever. I don’t think that’s practical, I don’t think that’s necessarily good, but there are many ways you could dream this up that could technologically solve the problem. I think most of it though, is going to come down to just sort of human decency and the old fallback of, what would you want your mom to know you’re doing, or whatever.

James Kotecki:
Huh. Have you seen any of the sci-fi, read any of the sci-fi stories that kind of posit this kind of technology? There’s a Black Mirror episode that I think people… their contact lenses were cameras, and so they recorded every minute of their lives. And of course, that’s played out to twisted Black Mirror style endings. And then there was a… I think the author’s name is Ted Chiang, he had a short story about something similar to this, where people were able to record all of their personal lives and look up old things. And then a man realizes that something that he had remembered as being one thing was actually something totally different, and it changes his whole perspective on his life. Do you look to stories like that for kind of inspiration, or maybe like what not to do?

Scott Stephenson:
In a way, that’s how Deepgram started, was that type of device. And we were exploring that. And actually, it was really interesting. It changed your behavior a little bit. So when you were wearing this device, it changed it for the positive, but also you would sometimes omit things from what you would say, because you know it’s being recorded. So most of the time, that’s not the case, but sometimes you’ll be like, “I’m not going to say that.” Just an interesting-

James Kotecki:
Sure.

Scott Stephenson:
Which is an interesting thing, because that’s something humans haven’t had to deal with in the past. There are things in your life that are ephemeral; maybe somebody would remember them or whatever. But generally, five years, 20 years down the road, people don’t remember stuff all that clearly. But if you can record it, then you have it right there. And also, it could be misinterpreted and everything like that. But we also saw positive things, which were like, you could hashtag your life. You’re walking around, you’re doing whatever you’re doing, and something interesting happens, and you’re like, hashtag interesting, hashtag like, remember that, whatever.

Scott Stephenson:
And then a way to mark things in your life and go back to that moment and say like, “Hey, this is an interesting moment.” And so there’s a lot to discover there, there’s both pros and cons with many things, just… Same thing with electricity, same thing with like discovering radioactivity, same thing with many, many things.

James Kotecki:
I wonder if celebrities and politicians will weirdly have the best lessons for us and to kind of… if we’re all kind of choosing to live under a spotlight now, as they already have been. And of course, social media shows that if given the choice, we will choose to live under a spotlight. So it’s not a huge stretch to imagine people doing this. Maybe those will be the people who give us the lessons about how to behave.

Scott Stephenson:
Yeah. Yeah, absolutely. And celebrities have their private life and public life. Justin Kan had Justin.tv; that was a big, long-running experiment. You can read about what he thinks about how that went. One thing I didn’t mention before: when we had these recording devices on, you did have your kind of cherished downtime. Even though we did it 24/7, there were times where you would just take it off and just leave it in your room and just go do… It’s like, “I don’t even want to be recorded right now.”

Scott Stephenson:
It is really interesting, it’s like you’re on the clock kind of, and then it’s kind of like the semester ends when you’re at college, you can kind of chill out. Or you clock out at work and you’re like, “Okay, I’m done with that for a little bit,” and whatnot. So yeah, it’s a little… There hasn’t been something like that in the past though, that brings that like omnipresent feeling, but I think it probably will come. I don’t think it will be in the next 10 years that this will be a big thing, but maybe in 20 years, something like that.

James Kotecki:
And what are your clients doing to adopt this now? Tell me about maybe a client or two. Are they in industries where they are wearing recording devices? Are we talking about kind of an Alexa-style device in a conference room, back when we used to go into conference rooms, that would just record what goes on in the meeting? How does it play out right now?

Scott Stephenson:
So that’s some of them, but not the vast majority. The conference room is now being rethought, and it’s just Zoom meetings and things like that, which actually has been a really interesting transition for people, but also a big boon for this type of technology coming on quickly. Because previously, it was like, what are you going to do? Go instrument every meeting room in every company? Even then, many conversations don’t happen in meeting rooms and things like that. But now, with everybody working from home, everything does happen in Zoom if you’re going to have a conversation, or pretty much everything. So that’s interesting, but that’s…

Scott Stephenson:
Most of our customers are in the call center world, where they have sales and support, and the vast majority have a huge, huge amount of audio that they’re trying to… Just think about that phrase, “This call may be recorded or monitored for quality purposes,” and whatnot. These are the types of things that are happening. At least right now, it’s a very sort of basic way to analyze audio. Like, they’re trying to answer the most basic questions: why did this person call? Did they leave happy or not? And did the agent that answered the phone for the support call ask them the questions they were supposed to ask them? That sort of thing.

Scott Stephenson:
And really, it’s at a point where the goal is not to control the person or the agent or anything like that; it’s really just to help them be trained better. Because call center work is a really brutal job. It’s a hundred percent turnover. Like, if you have a hundred-person call center, you’re going to be hiring a hundred people that year, and a hundred people will be leaving. Maybe some people will stay longer than that, but you have this really high turnover. And so it’s really hard to keep a trained set of people working at the call center, and so this is one way to speed up training.

Scott Stephenson:
The way that they used to do it was to sample random calls and listen to them. So you’d have your manager listen to like 10 or 20 calls, or a QA person listen to like 10 or 20 calls a day, and it would just be literally random from the like 30 people that they’re managing. And so it’s like impossible to cover and help them. And so that’s really the type of use case that we see now: that, plus compliance training and market research in the call center world right now, where you totally have everybody’s consent; you have that “Hey, this call may be recorded and monitored” type of thing happening.

Scott Stephenson:
The way that it works, though, is it’s not on-device, it’s not an Alexa, it’s nothing like that; it’s just a third party that’s not talking to you in the conversation. The audio is just processed by a server later, and it says, “Hey, there’s something interesting that happened here.” Or in Deepgram’s case, we’re just giving the transcription and the word timings and confidences, so that other machine learning companies or the data scientists at that company can build a model in order to predict: was this a good call or not, or is this specific spot interesting or not? That type of thing. And then they go back and review it and train their personnel.
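
To ground what “transcription plus word timings and confidences” looks like to a downstream team, here is a small sketch. The record layout is hypothetical, invented for the example rather than taken from Deepgram’s actual API, but it shows the kind of output a data scientist might build a model on.

```python
# Hypothetical word-level output: words with timings (seconds) and confidences.
words = [
    {"word": "cancel",  "start": 12.40, "end": 12.81, "confidence": 0.97},
    {"word": "my",      "start": 12.81, "end": 12.95, "confidence": 0.99},
    {"word": "account", "start": 12.95, "end": 13.40, "confidence": 0.62},
]

def transcript_text(words):
    """Join word-level output back into a plain transcript."""
    return " ".join(w["word"] for w in words)

def low_confidence_spans(words, threshold=0.7):
    """Flag words worth human review, QA sampling, or retraining data."""
    return [(w["word"], w["start"], w["end"])
            for w in words if w["confidence"] < threshold]

print(transcript_text(words))        # cancel my account
print(low_confidence_spans(words))   # [('account', 12.95, 13.4)]
```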

James Kotecki:
Yeah, that’s a good point. At this point, it’s valuable enough just to get a better transcript. Right? And just to get-

Scott Stephenson:
Yeah.

James Kotecki:
… a more accurate version of what’s being talked about, so that other people can build on top of that. As we go to that far-flung future, we have to build that on a foundation of good data.

Scott Stephenson:
Yeah, exactly. If you have garbage coming in, you’re not going to get anything out of it, and this is sort of the history of speech recognition in the enterprise up until now. You had legacy companies, like Nuance and IBM, that just were not doing that well: accuracy wasn’t good, speed was slow, reliability was low, et cetera, and it cost a lot of money. But now, it’s just changing very rapidly. That’s not the case anymore. You can have it fast, reliable, accurate, tuned to your jargon and your acoustic environment, and that’s really what’s igniting this again, and the companies are seeing value from it. It’s been a big transformation. And what led to that was end-to-end deep learning being introduced into the speech world.

James Kotecki:
Final question, as we go from the practical day-to-day of data and call centers, and we zoom back out and think about your background in quantum physics, which is just, as a matter of like popular science, such a fascinating topic to a lot of folks. Is there anything about that background, as a quantum physicist, that you still take with you now as the CEO of a company, not just a scientist, but as a CEO of a company, in how you’re thinking about building a company? Quantum physics is about uncertainty, it’s about probabilities, it’s about weirdness, and you’re trying to discover how the universe really works. Do you think about that in your day-to-day, running this company?

Scott Stephenson:
For sure. And especially from the standpoint of… Like, when you’re a physicist and you’re at the edge of understanding, you’re in a very uncertain state. You have things that you trust from the past that other people have discovered, and then you have what might be there or possible in the future. But it’s very, in a lot of ways, easy to test if you’re right or wrong, because experiments will come out in the next year or two or 10 that will confirm or deny what you were thinking.

Scott Stephenson:
And so essentially, you get beat down by the equivalent of the market in academia, because the real world tells you whether this is going to work or not. In business, you would call it getting beat down by the market, where you come out with a product, you come out with an idea and say, “Hey, this is going to make a lot of money,” or, “This is going to help a lot of people,” or, “This is going to transform this industry,” and then the market’s like, “I don’t want that,” or, “I love it, but it needs these other things,” and [inaudible 00:30:24].

Scott Stephenson:
So I think always having that really healthy skepticism helps in business. And then applying this… Really, at Deepgram, what we do is science; we do research and development on deep learning, and then we apply it to the real world. And that type of company, I think, works really well with physicists in it. Because if it was already solved and engineered, then it would be okay; you just go build it. You don’t necessarily need the scientists there doing that. But if it’s a new thing that you’re trying to apply, then you need them. So we’re thinking about it all the time.

James Kotecki:
Scott Stephenson, Deepgram, on the cutting edge of deep learning for speech recognition. Thanks so much for joining us on Machine Meets World today.

Scott Stephenson:
Absolutely. Thanks for having me.

James Kotecki:
And thank you so much for watching, whoever and wherever you are, or listening, if you’re listening to the podcast version of this episode. My name is James Kotecki. You can email the show, MMW, for Machine Meets World, mmw@infiniaml.com. Thank you so much for watching, we will see you next time. That’s what happens when machine meets world.
