
A.I. Trifecta: Military, Google, & Tom Hanks

Just when you thought you understood the definition of artificial intelligence, along comes generative A.I. Confusing? Yeah. That's why we invited Thom Kenney, former SmashFly CEO and current director at Google, on the show. T.K. helps us better understand what the hell is going on in the world today, including diving into the current writers' strike in Hollywood, security concerns, and what in the world comedian Sarah Silverman is so upset about. Put your thinking cap on for this one.

TRANSCRIPTION sponsored by:

Intro: Hide your kids. Lock the doors. You're listening to HR's most dangerous podcast. Chad and Joel Cheesman are here to punch the recruiting industry right where it hurts. Complete with breaking news, brash opinion, and loads of snark. Buckle up boys and girls. It's time for The Chad Sowash and Cheese Podcast.


Joel: Oh, yeah. What's up everybody? You know what it is, it's your favorite guilty pleasure, The Chad and Cheese Podcast. I'm your co-host, Joel. Joined as always, the peas to my carrots, Chad is here.


Joel: And today we welcome Thom Kenney to the show. That's right. Former CEO at SmashFly.

Chad: Ooh. Hello.

Joel: Now Director at Google and forever a soldier, apparently. Thom, welcome to the podcast.

Chad: Yeah, forever. Jesus.


Thom Kenney: Thanks Joel. Thanks Chad.


Joel: I sleep well at night knowing that Thom is on the case.

Chad: Thom, how you been, man?

Thom Kenney: I'm doing well. And you're safe for at least two days a month for what I do with the Army Reserve. So at least you got that to cover.

Chad: Don't forget two weeks.

Thom Kenney: Two weeks, two weeks in there.

Chad: Two weeks during the summer, right? Yeah, yeah, exactly.

Thom Kenney: That's true, that's true. So Summer side to protect you...

Joel: He's calling in from an undisclosed bunker. We're not quite sure where Thom is at the moment, but we appreciate the time to sit down with us.

Chad: Dude, you've been away from SmashFly, how long? Give us a little update on that, right? We heard you were CTO, went to CEO, acquisition happened, and then you ejected. What happened after that?

Thom Kenney: So what happened after that was COVID, which was a really, really interesting time for everybody.

Chad: Now you're bringing me down. Jesus.


Thom Kenney: Well, but it's interesting, COVID changed a lot of different things, and for me it was really quite interesting. It's some time to reflect, some time to grow personally a bit. And out of the blue got this call from a venture capitalist friend of mine that said somebody needed some help with data and artificial intelligence. And having this reserve experience that I've had for so many years, the deployments that I've done overseas, it was a call from the Commanding General of Special Operations Command, a four star general. And guys like me in my rank in the reserves, don't get those calls very often.


Joel: Yeah.

Chad: No.

Thom Kenney: And he said, "Hey, come help me transform the command with data and AI." So I went and spent a couple of years with Special Operations Command. Transforming that organization, working with some of the most elite operators in the world.

Chad: Yes.

Thom Kenney: And helping the DOD in general kinda grow from a data and AI perspective. It was fantastic. Just over two years, and an experience I would never get any other way. It was really, really interesting.

Chad: So you're talking about the most elite groups, I mean, I wouldn't just say in the Army, but across the military, because there's some sharing that happens there, and now their usage of AI, is that what I'm hearing? Because that to me is the coolest fucking thing ever. We've got the guys who are physical, they're there, but then they have the intelligence, and then also AI kind of compressing all of that for them.

Thom Kenney: Absolutely. And with the Special Operations Command, you had Navy folks, you had Marine Corps folks, you had Coast Guard folks, Air Force Army. It's a whole mix of different folks inside of the military that are doing these things. And it's eye-opening sometimes how this manifests itself. So for example, we set up a capability to be able to do data mining, data science, a little bit of machine learning on both unclassified and classified platforms, in a way where you could build a model and work with it in the unclassified world, and then seamlessly move it up into the classified world. And we opened that up to everyone inside of special operations. And after about six months of that, what was really, really interesting is, the number one set of users that we had by far were non-commissioned officers from the Marine Corps Special Operations teams.

Chad: Wow!

Thom Kenney: Right, and it was really interesting to see more than 50% of the users were from that specific cohort. They were just embracing it. What they could do with it from a personnel perspective, from a tactics perspective, from a learning perspective was just incredible to watch.

Chad: Yeah.

Joel: Thom, maybe it's all the doom scrolling I do every day. Maybe it's cause I'm a parent. Obviously the world is a very volatile place with Russia, China, Iran. I'm gonna ask you a big picture question that's unrelated to AI or employment. What is your take on the state of the world? Should I be so sleepless at night? Or should I be sleeping like a baby?

Thom Kenney: This is Thom's personal opinion.

Joel: Yep.

Thom Kenney: And it's not based on anything I do anywhere else, but I don't think we have to sleep restlessly at night. I think we have to be cautious about what we do. I think we need to be deliberate about the decisions that we make and the leaders that we choose. But I don't think we have to necessarily be restless at night. And part of that is because we've created a world economy that's so intertwined, that any real significant moves that would be catastrophic on the world stage, especially from a war perspective, would impact the aggressors as much as the defenders. And we see some of that in Ukraine, we see some of that in Taiwan. And there's all the different conversations that are happening. The bigger concern from my perspective, is one of resources. And it's more a commercial risk than it is a military risk.

Thom Kenney: Where we have limited resources, who controls those limited resources has a bigger impact on economies than it does on militaries. Now, we have to be ready to stand and defend. The Army's mission is to fight and win the nation's wars, and that's part of my job in the Army Reserve. But generally speaking, we are so intertwined that it would really take some, I'm gonna say, maniacal intent, or some completely closed-minded attempt, to go after a place where you're gonna bring the entire world into a world war. That's Thom's opinion. I could be completely wrong.

Automated Voice: Alright, alright, alright.

Joel: We can all rest everybody.

Chad: Yeah, well. And I think a lot of that has to do with the amount of budget that is actually given to guys like Thom and our defense. I mean, we have more budget going to that than the next, I think, nine nations, of which I think seven are actually NATO. There you go, Joel. You should feel better.

Joel: Almost more money than Google. Where Thom is also currently an employee.

Chad: Let's talk about that.

Joel: Tell us about that, Thom.

Thom Kenney: Well, Google's got a trillion-dollar valuation, so maybe not the same scale as the Department of Defense, but Google's a really interesting place. So, when General Clark retired from SOCOM, it was a good time for me to find my next move, 'cause he and I were great partners down there. And there was an opportunity to go focus on artificial intelligence with some of the world's biggest customers, including the Department of Defense and the Department of State inside of the US. And it was a really interesting transition. Typically my history has been in small to medium-sized business, either founding them or turning them around through acquisitions. And the DOD is a very, very large organization, and Google's a very large organization. So it's really, really fascinating to work with customers like Deutsche Bank and Siemens and the Department of Defense and Department of State, as they are on their own journeys for developing capabilities around artificial intelligence, advanced data, new ways of doing business, new ways of interfacing with customers. This is really an exceptionally interesting time to be involved in technology, with how much transition we're seeing across the entire technology landscape.

Chad: Everybody is just fascinated by Bard and ChatGPT. Now everybody knows what a... And at least they think they know what a large language model is, for God's sakes. Today, obviously, Bard and you working with Google, a lot of these companies want to be able to find ways to actually integrate large language models or actually something that helps their process or helps co-pilot their employees. What are you doing around that? Is that really your major focus right now, or there are other programs and products and services that Google has that you're spending more time with?

Thom Kenney: There are so many different ways to interface with Google. I mean, we've got more than 10 products that have over a billion unique users. The planet-scale size of what Google does is almost incomprehensible. And when you think about where Google's technology investments are, we own every inch of fiber that we use around the globe. All those undersea cables, all those data centers, that's all Google-owned, and that allows us to be very, very specific with how we develop technologies that enable us to have so many products with billions of users around the world. But to your point, a lot of what people are looking at right now falls into two buckets. There's the large language model, the generative AI-type software tools that are out there, that people are trying to figure out: how can I best use it? And then there's still the data question. And in particular the data question, because we have so much information that we're creating every single day. And if you think about where we're going to be, we're gonna be calculating our data in zettabytes.

Thom Kenney: And zettabytes, it blows your mind to think that with the amount of information that we have in the world today, and that will be created in the next few years, there are not enough laptops and desktops and hard drives in the world today, that people use as consumers, to hold that much data. That's how much data we're creating. Just think of all the laptops and all the desktops that are out there today; even that isn't enough storage to hold everything the world is creating. So the impact of data may be one of those things that years ago, we were all talking about big data, and how do we get after big data?

Thom Kenney: And then we had data lakes and data streams and lake houses and all these different things that are out there. That problem is not going away. The data problem is absolutely not going away. And as you think about generative AI, you've also got to think about how the data that feeds generative AI also has to be managed. So these two things are really intertwined significantly for the world's largest customers. So you think about banks transforming the way they work with their customers, or you think about a recruiting team that's trying to find the next best candidate or write those job descriptions, or you think about autonomous vehicles and how they can leverage large language models. These are all really interesting aspects. And one of the biggest challenges with large language models, if we tie it back to data, is the ability to stay up to date. So if you go on to Bard, if you go on to ChatGPT, and something happens in, let's say, North Korea at six o'clock this morning, you're not necessarily gonna be able to use large language models to get output based on what happened that morning, because the processing time is immense for keeping that stuff up to date. So that data aspect is huge.

Chad: What about the security side of the house? Look at UKG, who just had a huge data security problem where they couldn't actually run payroll; they were being held for ransom. A lot of this data is data that could prospectively be held for ransom as well, and you're churning off new behaviors and even more data as you continue to crunch with all those GPUs, those NVIDIA GPUs. So what happens there? There's got to be a double, triple, quadruple-down on security, right?

Thom Kenney: There absolutely is. And where you look at where Google is, for instance, we're pioneers in this idea of Zero Trust. And it changes the way you think about security from basically edge security to trust no one security.

Chad: Yes.

Thom Kenney: For years we said, "Hey, we have a firewall and we've got rules on this firewall and you can't get into the firewall." Well, guess what? Once you're in past the firewall, a lot of networks have carte blanche access to a lot of different things.

Chad: It's a bounty.

Thom Kenney: So the idea of Zero Trust goes beyond the edge. It says, okay, maybe you do get past an edge, but I still don't trust you. You wanna get access to a database? Okay. But even if you've got access to the database, I still don't trust you to get access to the data. And one of the most important aspects of how data security is transforming is this idea of context. What is the context in which you're trying to access this data? For example, you are in Indiana most of your time, and you're accessing Gmail most of the time from Indiana. And then you take off and go visit a friend in Portugal, for example. Well, guess what? Somebody's gonna raise their hand on the security side and say, wait a minute, you're now in Portugal. And we see this all the time with credit cards.

Thom Kenney: Credit cards are doing this with fraud detection, right? I had this experience just recently. I'm in Kansas right now for the Army, and I'm trying to buy dinner while my daughter is in Boston, paying for her visa for her grad school in London in the fall. And all the fraud alerts are going off, because they're looking at the context in which the transactions are happening. So as we look at Zero Trust, that context becomes really, really important. So when you're talking about things like ransomware, the idea of phishing and spear phishing and the techniques that people use for social engineering to get access to networks, getting in still doesn't necessarily give you access once you're there, because context becomes really important. The nature in which you're trying to access data becomes important.
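The context checks Thom describes, location, timing, and typical behavior weighed on every access request, can be sketched in a few lines. This is a hypothetical toy with made-up user profiles and rules, not Google's actual Zero Trust implementation, which learns these signals rather than hard-coding them:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    resource: str
    country: str   # where the request originates
    hour: int      # local hour of the request, 0-23

# A trivial "usual context" profile per user; a real system learns this
usual_context = {"chad": {"countries": {"US"}, "hours": range(6, 23)}}

def decide(req: AccessRequest) -> str:
    """Zero Trust style: no implicit trust, re-check context on every request."""
    profile = usual_context.get(req.user)
    if profile is None:
        return "deny"  # unknown user: being inside the network means nothing
    if req.country not in profile["countries"]:
        return "challenge"  # e.g. suddenly in Portugal: step-up authentication
    if req.hour not in profile["hours"]:
        return "challenge"  # unusual time of day
    return "allow"

print(decide(AccessRequest("chad", "gmail", "US", 10)))  # allow
print(decide(AccessRequest("chad", "gmail", "PT", 10)))  # challenge
```

The point of the sketch is that getting "past the edge" (being a known user on the network) never grants access by itself; each request is re-evaluated against context, which is also how the credit card fraud alerts Thom mentions behave.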

Joel: Sounds like we need to go deep, Thom.

Automated Voice: Just the tip.

Joel: Can you give me a definition, a layman's definition of Generative AI? 'Cause I think a lot of our listeners are still trying to get their head around AI, and now we're throwing generative in there. Give us a definition.

Thom Kenney: So a very, very simple way to think about generative AI with things like large language models is the probability by which something will occur. And by probability in large language models, what I mean is: what's the probability that one word follows another word? And I've used this example before: if you have a sentence that says, "an airplane needs," then for that next word, what's the probability of each candidate? Now, the more data that you have, the more you can conclude with probabilistic certainty what that next word will be. But still, there's a little bit of open interpretation. An airplane needs wings, an airplane needs an engine, an airplane needs a pilot. There's lots of different words, but what's the probability of each one? And so, as you think about things, there's a term called an N-gram, and N is just a number. And it basically says: how many words am I looking at for generative AI to try to create the probability of what will come out next?

Thom Kenney: And that's how Generative AI is able to answer a question that says what is the best place for me to go on vacation, or write me an abstract for a conference paper I want to do, or write me a job description for a software engineer that I can send off to any of the job sites that exist.
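Thom's airplane example boils down to counting which words follow which in a corpus. Here's a toy bigram (N=2) sketch of that idea with an invented four-sentence corpus; real LLMs like Bard or ChatGPT work over tokens with transformer networks, not raw word counts, but the "probability of the next word" intuition is the same:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "all the data you have"
corpus = [
    "an airplane needs wings",
    "an airplane needs an engine",
    "an airplane needs a pilot",
    "an airplane needs wings to fly",
]

# Count bigrams: for each word, tally which word comes next
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

# Probability distribution over the word after "needs"
counts = follows["needs"]
total = sum(counts.values())
probs = {word: n / total for word, n in counts.items()}
print(probs)  # {'wings': 0.5, 'an': 0.25, 'a': 0.25}
```

With more data, the distribution sharpens, which is exactly Thom's point about concluding the next word "with probabilistic certainty" while still leaving room for wings, engine, or pilot.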

Chad: I mean, a lot of it has to do with probability. Okay.

Thom Kenney: Correct. All of those things are just looking at the corpus of data that already exists and the probability of those outputs. So it's not really inventing anything, it's not really creating anything new. It's using probabilities based on information that already exists to summarize data that's out there.

Joel: So ChatGPT is less than a year old, and are you surprised at how it's taken off? And if you had to predict what it looks like a year from now, Bard will still be less than a year old a year from now. I mean, could you just give us a sense of how fast this is happening and how we human beings can wrap our head around it.

Thom Kenney: So it's a misunderstanding to think ChatGPT has only been around for a year. The reality is it's been publicly available in its current form with Microsoft for a year. But OpenAI, the company that was started quite a number of years ago, has been working on this type of generative transformer for many, many years. As a matter of fact, at SOCOM in 2021, we did a hackathon with an earlier version of ChatGPT, and saw some really, really incredible results for what it was able to do even in that early version. So for me personally, I am not at all surprised. Not at all surprised. For a couple of reasons. Number one is, it really does seem like magic when you think about how this is actually working in the background. It's really kinda magical.

Thom Kenney: And second, everyone's looking for ways to do things easier, faster, simpler. But we're also introducing some really interesting new technologies too. So there are second- and third-order effects of this too. So think about ChatGPT writing a paper for school, right? You go into college and all these professors are thinking, "Oh my God, I've got all this ChatGPT stuff, and I need to know whether or not somebody's using that to write a paper."

Joel: Yeah.

Thom Kenney: Well, there's an MIT student who came up with something called GPTZero. And GPTZero uses the same sort of paradigms to detect the patterns ChatGPT produces, to determine the probability of whether or not a paper was written by ChatGPT. Okay. Now enter another tool called QuillBot. Well, if you take your output from ChatGPT and then you run it through QuillBot, guess what? GPTZero doesn't really know anymore whether it's ChatGPT.

Joel: Like a mole...


Thom Kenney: 'Cause you took the summary and the paraphrase and put it through a summarizer and a paraphraser that is now undetectable. So you think about those things, and even from my perspective, I've now gotten two invites to speak at conferences based on abstracts that ChatGPT wrote.
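The detection idea Thom describes can be caricatured with the same probability lens: machine-generated text tends to be made of highly predictable words, so score how "expected" each word is and flag text that is too predictable. This toy uses invented unigram probabilities and an invented threshold; GPTZero's real signals (perplexity, burstiness) come from an actual language model, and a paraphraser defeats it by pushing the text toward less predictable wording:

```python
import math

# Toy word probabilities standing in for a real language model
word_prob = {"the": 0.07, "cat": 0.01, "sat": 0.008, "on": 0.03,
             "mat": 0.005, "zygote": 0.00001}

def avg_logprob(text: str) -> float:
    """Mean log-probability per word; higher means more 'predictable' text."""
    words = text.lower().split()
    return sum(math.log(word_prob.get(w, 1e-6)) for w in words) / len(words)

def looks_generated(text: str, threshold: float = -6.0) -> bool:
    # Invented cutoff: very predictable text gets flagged as machine-written
    return avg_logprob(text) > threshold

print(looks_generated("the cat sat on the mat"))  # True: every word is common
print(looks_generated("zygote zygote zygote"))    # False: surprising words
```

Rewording the flagged sentence with rarer synonyms lowers its average probability, which is the QuillBot effect in miniature.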


Chad: Yeah. So, okay, let me throw this at you. This is real-world stuff right now. We've got these screen actors, we've got the writers, and they're afraid of all of this, and they should be, for a couple of different reasons. Number one, the writers: they're worried that AI is good now, but it could be great in just a few years, which means long-term they could easily be out of a job, at least a good amount of them. Secondarily, we have actors, and we've talked to Ryan Steelberg, who's the CEO of Veritone, who's actually cloned our voices. That digital clone means something, not just from a voice standpoint, but also from a visual standpoint, from a video, from a clone, from a deepfake standpoint. Where do we go from here? Does legislation just have to finally kick in? Are we gonna have to wait till Europe does something because the US is just sitting on their hands? What's going on with this? Because there are significant issues that could start today and really impact us in just a few years, or maybe even months.

Thom Kenney: Maybe even months. And there's a whole lot to unpack with that. So let me start first with the actors, the Screen Actors Guild, AFTRA, the strike that's going on right now. They do have every reason to be worried on one level, because the ease with which you can write a script today is so much greater with tools like ChatGPT or Bard or any of the other large language models...

Chad: 'Cause they're just crunching other scripts, right?

Thom Kenney: They're just crunching other scripts.

Chad: Yeah.

Thom Kenney: And you could make an argument that there are certain television channels that are known for saccharine movies, right? They've got a very predictable...

Chad: Hallmark channel.

Joel: You're saying that like it's a bad thing, Thom.

Chad: Joel's Hallmark channel. He loves the Hallmark channel.

Joel: Nothing like a good holiday romance, Thom.

Thom Kenney: I wasn't gonna say it, I wasn't gonna say it.

Joel: That's right. Hallmark Channel...

Thom Kenney: So there are repetitive ways to pull these things out, right? But on the flip side of it, there are opportunities here for folks that can take the core of something that's created with a large language model and then expand it with their own vision and their own ideas. So there's still a lot of creativity. Because part of this is, when you look at all of this data, and this is something I've chatted about with some friends and with my wife, and we sit around and think about big things: if all of the information that we start to create with large language models is only based on information that we have already created as humans, are we gonna get to a point where we stagnate with creativity? Because are we just gonna continue to regurgitate what we've already created?

Thom Kenney: Will artificial intelligence get to a point where it truly can create something purely original, just like a human would? I think that's a little bit further down the line. But we may enter into the trap in the near term that says we're just regurgitating and regurgitating and regurgitating, because that's what we're feeding the models. And then, in a kind of self-licking ice cream cone mindset, that same data that's being created by the large language models is going back into the corpus of data that's feeding the large language models. So as we do more and more large language model development, more and more of that output is being fed back into the large language models to produce more large language model output. So you can see that this just starts creating more and more and more automated content.

Thom Kenney: And what does that mean from a creativity standpoint long-term? So for the actors, for call center operators, for a lot of different occupations, there is a question of how much of this is going to remove the work that I've done as a creative person. And you're not gonna get away from the answer that says some of it. Now, is it 10%? Is it 90%? That's an open question, but some of it is. The writers and the actors who embrace this type of technology will have an advantage, in my opinion, because they can leverage the technology to build on their own creative base long-term.

Chad: Well, and also think about scalability. And again, something else that we talked to Ryan about at Veritone is that, let's say, for instance, Brad Pitt: his voice in other languages is not his voice, right? But if, again, it's a digital clone of his voice, then you can get Brad Pitt's voice, and he can also get paid on that, right? And then we start...

Thom Kenney: He can.

Chad: Then we start to get into the multimodal piece where we start taking in information that's video, pictures, all that other fun stuff. And then we start generating clones that are more than just voice. And then at that point, again, as an actor, I could prospectively scale instead of CGI. We've got, I've got my digital clone out there working for me. Now it might work for less, but from a scalability standpoint, who gives a shit?

Thom Kenney: I would give a shit.

Chad: Yeah?

Thom Kenney: And here's why.

Chad: Okay.

Thom Kenney: Because if you... Tom Hanks is one of my favorite actors.

Chad: Oh, yeah.

Thom Kenney: I really enjoy Tom Hanks. He's got a great repertoire and he's got a great range that he can do, but he's in a lot of movies.

Chad: Yes.

Thom Kenney: And you'd be hard-pressed to say that there are no actors that could have been just as good, if not better, than Tom Hanks in some of the movies that he's done. And when you think about that scalability, let's assume for a moment we can take a Tom Hanks or a Tom Cruise or a Brad Pitt, and we could use their likeness. Look at how young Harrison Ford looked in his last movie, with some of the multimodal digital effects they were able to do with artificially intelligent agents.

Thom Kenney: When you get that kind of scalability, do you then also decrease the range of creativity? If every movie that we produce only has three actors, are we really still a creative culture? Or are we just regurgitating more of the same? And that's one of the risks that I think the Screen Actors Guild and AFTRA, the scriptwriters, the actors are seeing: as this expands, to your point, maybe they get paid less, but they could scale. So if you're doing a hundred movies at a million dollars apiece, you don't have to do 10 movies at $10 million apiece, right? You're still earning the same amount of money. But are you decreasing the creativity level of Hollywood in general? I think we absolutely open the door to decreasing the creativity level. And it's not just there with the screen actors; it's across books, right? How many children's books are written by artificially intelligent agents and language models?

Chad: Oh, yeah. Yeah.

Thom Kenney: How many fiction novels are gonna be written? Are we creating this world where the majority of what is output is only based on what we've done before? We really start to narrow that creativity band, and that decreases our value, in my mind, as a human species: thinking outside the boundaries of what we already know to be true by pushing the limits of where we need to go.

Joel: Well, the good news is Thom, we could bring the dead back. So Elvis, John Wayne and Janis Joplin.

Thom Kenney: Yes.

Joel: Can all start putting out new content in a minute now.

Chad: Oh, no. No. [laughter] No.

Joel: Back to the Screen Actors Guild. And I think we focus a lot on the Tom Cruises and the big-money stars.

Chad: Oh, yeah.

Joel: It's the little guys to me that are gonna get squashed.

Chad: Oh, yeah.

Joel: The extras in the background...

Chad: And the up-and-comers who aren't gonna get that role. Yeah.

Thom Kenney: Yes, absolutely.

Joel: Talk about your take on what happens to them. And I think if we take this full circle to employment, we know many of the recruiting activities can be monotonous and can be replaced. What happens to recruiting? What happens to HR tech that can maybe be replaced? Talk about the little guy and how it impacts ultimately our business.

Thom Kenney: If we think about the little guy in the Screen Actor's Guild, for example. You lose the next Tom Cruise, you lose the next Tom Hanks, you lose the next Julia Roberts or Dame Judi Dench. You lose those folks because exactly to your point, they never have an opportunity to grow, because we're capitalizing on characters that are created by actors that we already know and are already trusted as money makers inside of the industry.

Joel: So can I push back on that and say...

Thom Kenney: Sure.

Joel: TikTok and Reels will be where the next famous people come from. We don't need Hollywood to be that incubator, because we have social media and TikTok. Isn't that where the next big stars come from?

Thom Kenney: Those are different stars.

Joel: Okay.

Thom Kenney: If you think about what it takes to actually do a movie, and Tom Hanks actually has a really interesting short documentary about this, the level of effort that goes into an actual movie is absolutely ginormous. It's huge. The effort of putting together a 30-second TikTok is actually very small. The TikTokers that have grown a huge base have done so because they've created a story arc for themselves. It hasn't been one particular video; it's been a series of videos, and they've captured the attention of folks that are there. Thom's personal opinion: I think that's different than a storytelling adventure, which is creating something out of nothing. Game of Thrones on TikTok is a lot harder to create than Game of Thrones in a production studio. It's not impossible, but I think it's a lot harder to produce.

Thom Kenney: And I think TikTok ties into, again, Thom's personal opinion, the more immediate-gratification realm that people want. Whether it's through doom scrolling or just scrolling through videos, you can move very, very quickly. It doesn't take a lot of attention to watch a 20-second TikTok video. And then you can get caught up in the links to those videos and the different marketing and advertising that can be done. But it's different than long-form content. Just like if you're gonna try to write War and Peace with ChatGPT, it may not be as easy as you think.

Chad: Yeah.

Thom Kenney: That is a tome of a story.

Joel: Or Hamlet. I could see Broadway and actual in-person actors thriving in this environment.

Thom Kenney: Absolutely.

Joel: But anyway, I sidetracked you from recruitment and how this is gonna impact that.

Thom Kenney: You did. You did.

Joel: So sorry, sorry. Go on.

Thom Kenney: As you get into the recruiting side of things, or even just HR tech in general, Conversational AI is one of those things that started as sort of a small little idea a few years ago. And conversational AI is everywhere now. And I think it's comical sometimes too. You look at the value of a company like Paradox and then the incredible work that they're doing inside of this market.

Joel: Right.

Thom Kenney: And try to say, "Oh, well, I'm just gonna create a Paradox replacement with ChatGPT." No, you're not. It's not gonna happen, right? Because there are concepts like hallucinations and zero-shot large language models. When you're thinking about it from a recruiting perspective, you've gotta be really, really careful about things like hallucinations. And I'll explain a little bit about what that is. It's basically a large language model that just makes stuff up or says stuff that's offensive.

Chad: It's predicting, right? And prediction isn't always true. Joel should know that.

Thom Kenney: Not always true, and not always DEI safe.

Joel: Yeah. There is that...

Thom Kenney: To use that kind of work.

Joel: There's that. Yeah.

Thom Kenney: We have a lot of books that were written that are incredibly offensive, incredibly racist, that are part of the way these large language models are being built up. So if we think we're just gonna throw a quick large language model in, and you're gonna have a chatbot, and that chatbot's gonna be your new recruiter, you're opening yourself up to a lot, a lot of risk. There is so much work that needs to go into ensuring that candidate experience is the best possible experience. Because we may have had some layoffs, we may have some hiccups in the economy, but the reality is we still have a fairly low unemployment rate. We still have a highly competitive market for really, really good talent. And that candidate experience is not decreasing in importance over time.

Thom Kenney: It's absolutely not decreasing. But what we can do is we can use these large language models and companies that are doing really, really good work with large language models, to enhance that experience. It doesn't mean that we have to replace it, it means that we can enhance it. Now when we think about humans versus automation, I would say when you think about high volume recruiting, great opportunity where you don't necessarily need a human to spend hours and hours and hours.

Chad: Yes.

Thom Kenney: Doing interviewing and doing background checks and doing question and answer sessions, that's a great opportunity to apply something like Conversational AI to some of that high throughput. But we also have to be careful too, because if you look at what happened with Amazon a few years ago, when they tried to automate resume reviews.

Chad: Oh, yeah. [laughter]

Thom Kenney: It didn't go exactly how they planned.

Chad: Oh, no.

Thom Kenney: So we still need.

Chad: Yes.

Thom Kenney: Yeah. Car crash, train wreck, any number of ways to look at it. So we've gotta be very, very careful about how we apply some of these technologies to ensure that we're not introducing bias, that we're not introducing prejudices, that we're not introducing any of the pitfalls that you get by letting tech run amok without any sort of constraints or measurements.

Chad: Well, that being said, let me throw this out there, 'cause Paradox is one thing. They're becoming a platform, even like core, core platform.

Thom Kenney: Yes.

Chad: Then you've got a Textio, right? That to me, today, seems more like a feature than a platform. Especially when you've got these large language models that are out there. Now, they do have domain-specific data powering it. No question. But you have other organizations out there that might be behind Textio right now, and Textio is a very expensive product, $42.5 million in funding, that might do some of it. But at the end of the day, how quickly could an organization catch up to a Textio, which is not really platform-specific? It just seems more like a feature.

Thom Kenney: It depends on what you're trying to do with the technology. And I say that very specifically because you can slap anything together very quickly today with a large language model, it really doesn't take a lot of effort. What takes effort is the psychological side of how you're interacting with candidates, for example. Just having technology slapped together with a nice little web UI, and you've got some JavaScript and you pop up this little ChatBot, it's very easy to get back to the days, 10 years ago, when people hated working with those ChatBots. Lots of major companies were out there, it's like, "How can I help you today?" "Oh, well, I've got a question about my bill." "Oh, you've got a question about Bill. Bill can be at your house in three hours," it's like, "No, that's not what I said at all."

Joel: Here's a link to search results about Bill questions.

Chad: [chuckle] About Bill...

Thom Kenney: About Bill questions, exactly. And as we think about this, the way your interface interacts with folks becomes so important now, and becomes the differentiator. So as I think about how quickly someone could create something like this, it really is about how well you're going to leverage the technology, from a psychological perspective, to enhance the candidate experience. Because it'll be very, very quick to lose the candidate experience with a really bad product, and it's really, really easy to create a very bad product today. In this particular context, when you're trying to interact live with humans, it's not gonna take much to build a bad product.

Joel: Keeping it in Hollywood a little bit, Sarah Silverman, one of my favorite comedians, is in a lawsuit right now over AI using her image, I guess, or her content. This is obviously gonna play on. It reminds me a little bit of the early days of YouTube, where people would just put up everything that was trademarked and copyrighted, and there were tons of lawsuits, and that all got settled. Does this go a similar route? Or do you see the lawsuits and how this plays out differently than maybe YouTube did?

Thom Kenney: I see this playing out a little differently than YouTube, because there is some specificity about what Sarah is suing for. She's suing over what she believes is copyright-protected information being used in the large language models. And that's an important differentiation, I think, from just using things that are publicly available. When you think about YouTube, what's copyrighted? A politician's speech? If you're recording a politician speaking in a public forum, they can't copyright that, because part of how you operate in a free society is that you can post something you videoed in the public domain.

Thom Kenney: Sarah's arguing that they used things that are not in the public domain: I sell books, I sell tickets to my comedy show, those are a trade between a customer and a provider, you've paid me to get value for something. So Sarah's argument is, you've used things that are not free to build your large language model that you are now making money from. So there's a little bit of a nuanced difference there from how YouTube kind of grew. But I would tie it in more with what happened with music 20 years ago, with all the peer-to-peer music sharing.

Chad: That's true. Yeah.

Thom Kenney: Peer-to-peer sharing, for example. That was a great example of using copyrighted material in a way that it was never intended. And there's a reason today why Apple Music is doing as well as they do: they sell, for a very modest cost, access to this huge world of information, but the music producers still get a benefit, a financial benefit, from that. In Sarah's case, she's arguing, "I'm getting no financial benefit," and it's not just Sarah Silverman. There are communities of artists, for example, that are suing some of the image generative AI tools that are producing likenesses.

Thom Kenney: So you can argue folks like Monet, it's now in the public domain, to, "Hey, take my picture on vacation in Faro, Portugal and make it look like a Monet painting." But for artists that are alive today who own the copyright, on the property of their art work, that art work that's available on a website that's being scraped and used to then create a likeness of that art is a different type of challenge that's gonna be seen in the courts. If one group of people is gonna make more money than anybody else in the world of generated AI, it's gonna be the lawyers.

Chad: Oh God, yeah. So, Ron DeSantis used a Trump AI voice, so it wasn't Trump, but it was his AI-modulated voice, let's say, his voice clone, in a commercial that's pretty much against Trump at this point. Now, usually they would use voice actors that sound like the politician. At this point we're saying, well, we generally would use voice actors, and that's okay. So why can't we use AI? When are we going to draw a line in the sand on this? Or can we even, because of past precedent?

Thom Kenney: My personal belief is we're not gonna be able to draw a line.

Chad: Yeah.

Thom Kenney: Because to draw the line, you need legislation. You need a law for crossing that line to have impact. And the challenge that we have today is that the general public's understanding of large language models is sparse. They know what large language models are, they can use them, they can employ them. But the ins and outs of the technical aspects of what they do are a fairly complex set of equations and technologies. And being at Google, I don't even understand a lot of the ways that this works. The folks that are working on this are absolutely brilliant visionaries.

Chad: Yeah.

Thom Kenney: The challenge next is, how do we get this into a legislative form that can be enforced when people cross that line? We as a society, in general, across the world, know that this is a concern. But what's the right reaction to this? The EU is taking a certain approach to this. The US is taking another approach to this. Some other countries are taking no approach to this at all and just allowing it to happen. And part of it too is thinking about how we control the information that exists in our countries. And I'll give you a quick example of this. In the United States, we have rules that protect the privacy of our citizens. We have rules that protect copyright. We have rules that protect data. So in building these large language models, even though it seems like an infinite amount of information, it actually is a small corpus of data relative to all the potential data that exists. Other countries don't have these limitations. If you look at how China operates, they don't necessarily have the same privacy protections for their citizens as we do. They don't have the same privacy protections on data as we do.

Chad: Oh, no.

Thom Kenney: So their ability to train a large language model actually has quite a large corpus of data that extends beyond what we have.

Chad: Right.

Thom Kenney: To be able to move this forward. So we're in a world where, let's say, for example, we put legislation in place, and there is a line that says you cannot use an artificially intelligent agent to mimic the voice of a sitting or former president, saying words they've never said, with the intent to deceive or to misinform, right? That's a pretty strong statement, right? And I'm not saying that Ron DeSantis did this, but that's kind of where the legislation would look to go. How do you protect the American people from misinformation or disinformation that can be easily generated by an artificially intelligent agent? But who's to say that someone couldn't build that same type of thing in another country that doesn't have the same protections for its citizens?

Chad: And use it... Yes. And then use it... And then also from our standpoint, use it against other countries. And then obviously they could use it against us because we have our own rules.

Thom Kenney: Correct. Correct.

Chad: Doesn't mean that they're gonna abide by our rules, a.k.a. Cambridge Analytica and Facebook.

Joel: That escalated quickly.

Thom Kenney: And drawing this back to Joel's question earlier, about should we sleep well at night or not? The misinformation, disinformation side of things, is probably a much bigger threat than any sort of kinetic activity, because we've already seen how powerful misinformation and disinformation can be.

Chad: Oh yes.

Thom Kenney: On the global stage.

Chad: Yes.

Joel: All right, Thom, I'm gonna end on this. It's a simpler question, but no less important.

Thom Kenney: Okay. [laughter]

Joel: How do these companies monetize? How does ChatGPT make money? How does Google Bard, which is in a really tight situation, 'cause they already have a printing press for money, [laughter] how do they monetize these businesses?

Thom Kenney: If you look at the way that ChatGPT has monetized, they've basically said, we have a free version, the freemium model that then ties you into wanting more. That gets you into a paid model. And last I checked, I think ChatGPT is $20 a month for the premium tier that gets you access to version four. For people who are doing this professionally, if you think about going back to writers, going back to recruiters writing job ads, going back to folks that are writing policy documents for the government, the ability to have a tool that helps you over time is a way to monetize this going forward. The ability to jump into ChatGPT and say, "Write me an abstract for a conference talk that highlights these three points for this particular audience," saves me a ton of time. That's worth 20 bucks a month to me, as an example, for something like ChatGPT. On the commercial side for businesses, there's a whole new area that Google is working on called Enterprise Search, which uses large language models to help you find your data much, much more easily.

Thom Kenney: There's a whole enterprise world that we are just now starting to tap, that gives you the ability to look more deeply into the information you've already got. And we'll use the government as an example. Imagine a tool that moves from Boolean search to something that's truly a generative search, that looks through every page of the US Code, which is massive. You're getting to a world where you don't necessarily need a law degree to start understanding a little bit more about the law. And you're opening up people's minds and people's abilities. And then think about it from the policy perspective for the government. If the government can use this type of technology, in the ways that Google is thinking about it long term, accessing the world's information in new and ingenious ways, now with tax codes, where you have multiple replicative and duplicative parts just because the tax code's so big, policymakers can make things simpler for the average American every day. And that opens up opportunities for monetization as well, right?

Chad: All we need is a chip in our head at this point because we are in the matrix. I can learn how to fly a fucking helicopter. This shit happens overnight.

Joel: We're in a world where I need a nonstop dose of Tylenol, Thom. [laughter]

Automated Voice: Doesn't anyone notice this? I feel like I'm taking crazy pills.


Joel: Thom, for anyone out there that wants to connect with you or learn more about what you do, where would you send them?

Thom Kenney: I would send them either to LinkedIn, Thom Kenney, or you can check me out on Twitter @tclmc5.

Joel: Chad, that is another one in the can. I learned a new word. Zettabytes. [laughter]

Automated Voice: 60% of the time, it works every time.

Joel: And with that, we out.

Chad: We out.

Outro: Wow. Look at you. You made it through an entire episode of the Chad and Cheese podcast, or maybe you cheated and fast-forwarded to the end. Either way, there's no doubt you wish you had that time back. Valuable time you could've used to buy a nutritious meal at Taco Bell, enjoy a pour of your favorite whiskey, or just watch big booty Latinas and bug fights on TikTok. No, you hung out with these two chuckle heads instead. Now go take a shower and wash off all the guilt, but save some soap because you'll be back. Like an awful train wreck, you can't look away. And like Chad's favorite Western, you can't quit them either. We out.