
AI Guru Strikes Again!

Nothing strikes fear into the hearts of employers, or the vendors that serve them, quite like the threat of a lawsuit. These days, legal challenges and opportunities to take a business to court are plentiful ... compliments of artificial intelligence. Whether it's an entire bloc of countries like the EU, or individual states like California and New York, the risk of new laws and regulations catching an organization off guard is higher than it's ever been. That's why we had to get Guru Sethupathy, co-founder and CEO at FairNow, a company that helps ensure your AI solutions are compliant and well-governed, on the show for a quick check-in on the current state of AI. Whether it's the UK adopting its own alternative approach to regulating AI, California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, NYC Local Law 144, or even India's latest regulations on hiring, we need a guru - yep, pun intended - to help us better understand what's going on. It's a must-listen if your hiring practices leverage AI or if your company provides such services to employers.


PODCAST TRANSCRIPTION


Intro: Hide your kids. Lock the doors. You're listening to HR's most dangerous podcast. Chad Sowash and Joel Cheesman are here to punch the recruiting industry right where it hurts. Complete with breaking news, brash opinion, and loads of snark. Buckle up, boys and girls. It's time for the Chad and Cheese Podcast.

[music]

Joel: Oh yeah. It's Dante's favorite podcast, AKA the Chad and Cheese Podcast. I'm your co-host, Joel Cheesman. Joined as always, the Jay to my Silent Bob, Chad Sowash is in the house, and we welcome Guru Sethupathy, co-founder and CEO at FairNow, a company that helps companies ensure that their AI solutions are compliant and well-governed. Guru, welcome to the podcast.

[applause]

Guru Sethupathy: Joel, Chad. Good to talk to you guys again. It's been a few months. How are you?

Chad: It is now. Make sure that everybody understands, give us a little Twitter bio. But first, how long did it take you to actually get your name legally changed to Guru, and how long is it gonna take to be able to get AI as your middle name? So, Guru AI.

Guru Sethupathy: That's what I'm working on.

Chad: How long is that gonna take?

Joel: I heard he's working on his doctorate in love so he could be Love Guru Sethupathy. No.

Guru Sethupathy: Do you remember McLovin from back in the day?

Chad: How long is it gonna take us?

Speaker 1: McGuru is coming.

Chad: So give us a little Twitter bio for all the listeners who didn't hear the last episode you were on. Just a little bit of love around Guru. Who is Guru?

Guru Sethupathy: Thank you, guys. Yeah, absolutely. So I am founder and CEO of FairNow. Basically, like Joel said, we help companies, especially in the HR space, but other spaces as well. Everyone's talking about AI right now. Everyone's wanting to build AI, use AI, but as we all know, it's also a little risky. And so we help make sure that what you're putting out there is tested, it's compliant, it's fair, all of that kind of stuff.

Joel: Look at you, raining on the AI parade. Danger? Risk? Oh no, don't say that.

Guru Sethupathy: Not at all. Not at all.

Chad: Now, is Workday pounding down your door, since they have a discrimination lawsuit right now?

Guru Sethupathy: Man, that is a fascinating story. I don't know how many of your listeners are familiar with this, but yeah. A year ago, a gentleman sued Workday. And apparently the situation was, he'd been rejected from dozens of jobs that he'd applied for, and said, hey, Workday was the common filter in all of these job applications. Now, what's interesting is a judge, just in January, threw out that lawsuit. Not saying that Workday was fair or not fair or biased; the judge was not adjudicating on that, but more like, hey, what's Workday's role as part of the recruiting ecosystem here? They're not actually a hiring company. But on February 20th, he came back, refiled it, amended it, and so the lawsuit's back on. And so this is gonna be one of the big test cases, Chad, in terms of what's gonna happen in this space. And so we're all waiting to see what happens.

Chad: Well, California, their first version of the regulations that they started pushing out actually had vendors on the hook. So, I mean, do you see that as just kind of a first draft that'll fall away? Or do you think that vendors are actually going to be on the hook, the vendors and also the companies?

Joel: And employers.

Guru Sethupathy: I see both sides. Absolutely. And it's not just California. If you look at the EU AI Act, which is the big deal, I know we're gonna talk about that in a few minutes.

Chad: Let's do it.

Guru Sethupathy: But that one, and New Jersey, New York State, California, all of them talk about both sides of the market. So, absolutely.

Joel: Do you service mostly the employer side or the vendor side? Like who's coming to you? Is it both or just one? Obviously both will eventually start knocking on your door. But what is it right now?

Guru Sethupathy: That's a good question, because it's not what I would've guessed initially. 'Cause initially we went a little bit more after the employer side, because of the New York City law and other things. But who's actually been knocking on our door more? It's actually the vendors, and here's why. Because they are getting these questions from their customers. And what does a vendor care about? They care about selling to customers. And so if these questions are slowing down their sales cycle, they're like, okay, we gotta find a way to answer these questions. So that's what's driving the inbound right now.

Chad: Yeah. Well, that makes a hell of a lot of sense. So listen, vendors: if you are fumbling and stumbling and bumbling over the answers around AI, you might wanna call a guy like Guru. But a question here, though, as we talked about Workday. They just acquired HiredScore, whose founder, Athena Karp, you might know. She was amazing at explaining and defending what they were doing. So do you think this is almost like Workday's way of saying, okay, we need AI 'cause everybody's getting AI, but we need what everybody's calling, here we go kids, "responsible AI." That's right, I used air quotes. Do you think that was pretty much like the bandaid? It's like, okay, we can get this covered if we get Athena, her group, and people who actually know what the hell they're doing in here. Do you think that was it?

Guru Sethupathy: I have a hunch. I think you're onto something there, both in terms of the AI journey. I think HiredScore has been around for a while and has been quite innovative in the space of AI. I actually haven't gone deep on their product with Athena, but I've talked to her about this. She and I have had multiple conversations about responsible AI, and I found her to be one of the more thoughtful people around this topic. So I would not be surprised if, from an AI roadmap perspective, HiredScore was attractive, but also Athena and how she thinks about responsible AI, I'm sure, was very attractive to Workday.

Joel: Guru, when you talk about these things, it's normally a big company, Workday, CVS. The little guys out there have to be freaking out. They don't have the resources to build responsible AI. Are you getting calls from them? And what is that conversation like?

Guru Sethupathy: Again, it's about the vendor versus the employer. So on the vendor side, we actually have small vendors who are customers, because again, they're trying to sell to big companies. And so big companies are like, who are you? And how do I know your shit isn't biased? So then they're like, all right, we gotta get through these sales cycles. So on the vendor side, it's big, small, medium, all kind of knocking on the door. Joel, on the employer side, you're right. It's the bigger companies, because they're the ones who are more likely to get sued. If you're a small or medium-sized company, you're like, yeah, I don't... I'm not gonna...

[overlapping conversation]

Joel: So are you priced for the smaller guys versus bigger vendors?

Guru Sethupathy: Yeah. Yeah. I mean, our pricing is a combination of, like, how many models do you have? How complex are your models? Those kinds of things. And so if you're a bigger vendor, you're probably gonna have a more complicated AI ecosystem.

Chad: Well, yeah. I mean, just last Friday we talked about how HireVue was an issue for CVS. So CVS is getting sued because of the use of the HireVue system. At least that's how it's drawn up. So if you're a company... Well, first and foremost, if you're a HireVue, you gotta get your shit together, and you gotta do it quick. But even more so, if you're a company, especially a brand like CVS, you have to defend that brand. I mean, you have to, you have to be...

Guru Sethupathy: You have to.

Chad: You have to do it. And you have to be ahead of everybody else.

Guru Sethupathy: That's right.

Chad: Unfortunately, they were not. So are you seeing more companies being more thoughtful about, okay, we want to be able to, we have to introduce AI into the process. Because if we don't, we're gonna be left behind. So we have to be able to do it. We've gotta figure it out. But how do we get people in who actually know what the hell they're doing?

Guru Sethupathy: So you covered a lot there. I want to hit on the Air Canada situation. Did you guys see that one? That's another good example.

Chad: Yes.

Joel: I didn't see that one.

Guru Sethupathy: So, Joel, they had a customer-facing chatbot. And a customer asked it about their discount policies, and it made something up, basically. And so the customer went ahead and bought a ticket, assuming that to be true, and then found out it wasn't. And he was like, wait a second, I was told explicitly, yeah, X, Y, Z. And then Air Canada came back and was like, oh, no, no, no, that chatbot's a separate company.

Joel: Don't.

Guru Sethupathy: And that didn't fly; the judge ruled that they were responsible and they're liable for that. So these are the kinds of trial and error that we're gonna see a lot of, where companies try things, they're gonna mess up, and then other companies are gonna be like, whoa, we gotta be careful here. So you're gonna see a lot of that, I think. But you're absolutely right. And this is one of the things we've started building into our platform: actual testing of chatbots, of LLMs, of GenAI. Testing it for bias, testing it for hallucinations. Is it saying crazy stuff? Is it saying weird stuff? We actually have some capabilities to test around that, so you can test that out before you release it into the wild.
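The pre-release testing Guru describes can start very small. Here is a minimal sketch of a known-answer check that flags chatbot responses missing a required policy fact. Everything in it is hypothetical: the prompts, the facts, and the canned `respond()` stand-in, which a real deployment would replace with a call to the live model.

```python
# Minimal known-answer check for a chatbot: each prompt has a fact the
# answer must contain; anything missing it gets flagged for human review.

POLICY_FACTS = {
    "What is the bereavement fare policy?": "within 90 days",
    "How many checked bags are included?": "one checked bag",
}

def respond(prompt):
    # Stand-in for the real chatbot; a deployment would call the model here.
    canned = {
        "What is the bereavement fare policy?":
            "Refund requests must be submitted within 90 days of travel.",
        "How many checked bags are included?":
            "Every fare includes one checked bag at no extra charge.",
    }
    return canned[prompt]

def audit(prompts=POLICY_FACTS):
    """Return (prompt, answer) pairs whose answer omits the required fact."""
    failures = []
    for prompt, required_fact in prompts.items():
        answer = respond(prompt)
        if required_fact.lower() not in answer.lower():
            failures.append((prompt, answer))
    return failures

print(audit())  # [] means every answer contained its required fact
```

A real harness would run hundreds of prompts and layer on bias and toxicity metrics, but the key design point is the same: the check runs before release, not after a customer has already bought the ticket.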

Joel: I hope Google's listening after their recent faux pas with...

Guru Sethupathy: Gemini.

Joel: Yeah. Gemini.

Chad: That's not even a faux pas. You're asking for something and you didn't get what you want, so now you're bitching, man.

Joel: No, their QA was messed up. They should test that better.

Chad: No, that's bullshit.

Guru Sethupathy: That was very surprising, guys. Very surprising for a company like Google.

Chad: They're images. I mean, come on. This is a little bit different than actual outcomes and screwing somebody over because they were a female and they played softball and the AI saw that that happened. This is hurting somebody versus not hurting somebody. A little bit different. So let's talk about the actual regulations, and the EU, go figure, is leading on this. So tell us what the EU is doing.

Guru Sethupathy: Yeah. Not surprising in a way, because the EU does operate in a more top-down fashion. It is gonna be like a multiple-hundreds-of-pages kind of legislation. They've been working on it for a while. So where we've landed since we last chatted: back in December, there was an informal agreement around the contours of the legislation. What's happening now is dribs and drabs of it have been released, and in April, I believe, they're doing a final formal vote on it. And then I think 20 days after that, it goes into "effect." I put that in quotes because companies will have time. So it's kind of a lagged effect. You'll have six months to get certain things in order. You'll have a year to get your bias testing in order.

Guru Sethupathy: You'll have a year and a half to two years to get your reporting infrastructure and governance layer in order. So there are different components to it, and there are different time lags for each of these things. But it's happening. It's happening. And the fines are huge. Up to 6% of your annual revenue. So this is not like the New York City law in any way, shape, or form. The New York City law, I wrote a blog post on this, ended up having no teeth behind it. Part of it was that the fine was $500 per violation. I mean, what, why am I buying? Exactly. And it was very narrow. It was focused on AEDTs, which are automated employment decision tools, for hiring and promotion, but only if it's automated, completely automated.

Guru Sethupathy: So you could easily come in and say, oh no, my human's in the loop, it's not automated, and get around it. The EU AI Act is much broader. It even outlines eight high-risk areas, of which, guys, just so your audience knows, HR is one. So HR is going to be in the crosshairs of this legislation. And very consistently, I think California, New York State, and others are going to have HR in the crosshairs as well. So I think HR, financial services, and health are gonna be three of the domains in the crosshairs of all of the AI regulations going forward.

Joel: One of the things that you mentioned is the importance of upskilling. So in that time window, companies should be preparing, building the skills internally. Talk about how companies should view upskilling as opposed to just maybe hiring a company like yours. Or should they do both? How do I prepare?

Guru Sethupathy: They should do both. Because we're more of a platform company. We're a technology company. You're still gonna need humans in the loop. And making kind of really thoughtful decisions around risk, reward, trade-offs. The thing that I say is, look, at the end of the day, no one is trying to reduce risk down to zero. If you were trying to take risk down to zero, you'd never leave your bed. You'd never leave your house. So that's not how we operate. That's not how humans operate. That's not how businesses operate. But you need to know enough of the calculations to understand, okay, what's the risk of this technology? What's the reward? And are we comfortable with it? So you need both people at the junior levels who understand governance, but then you need senior stakeholders who really ultimately need to make these calls at a company wide level.

Guru Sethupathy: What is our transparency policy? What is our red line? Let me give you an example. If you're thinking about something like predicting attrition at the individual level, a lot of companies have tried this. My teams have built stuff like this. It's a little bit Minority Report-esque. If I can predict that Joel will leave his company with 70% probability, okay, what do I do with that information? Do I throw more money at him? Do I try to keep him? What if Chad's about to leave with 67% probability? What's the difference from 70? I mean, you start to get into situations where you don't know how to use these insights and what to do with them, and then people can start gaming it. So there's actually a really interesting ecosystem around some of these AI insights. And so that's where you have to, as a company, say, okay, we're not gonna use it for that purpose. It doesn't make sense. It's not gonna drive value, and it's gonna break trust. So these are the really intricate conversations that you need to be having. But to do that, you need to have the talent that understands this at the detail level and at the philosophical level. And there needs to be upskilling.

Chad: Well, let's talk about risk real quick. 'Cause I think this is the risk that companies are going to care about the most. If the Workday suit was brought in the EU, 6% of their global revenue equals $420 million. For a Workday, that is a risk they care about, right?

Guru Sethupathy: Absolutely. Absolutely. I think there are two risks. One, that's a pecuniary, fine-related risk. Now, I don't think it'll be that high. It'll be up to that amount. But it'll be significant. And reputational risk matters. Like, reputational risk matters if you're losing customers or candidates because of some perception around how you treat your technology or how little you've tested it, those kinds of things. That one's a little harder to measure in dollars and cents, Chad. But it can add up to be even larger than that amount.

Chad: Yeah. Especially from a revenue standpoint. I mean, it's not only the fine, but the perspective of lost revenue. I mean, optics.

Guru Sethupathy: Lost revenue.

Chad: Obviously it all factors in there. Now, you were talking about some of the things that you're gonna have to do in the EU, like evaluating the impact of AI, and then also ongoing monitoring. What does that even look like?

Guru Sethupathy: So that's where we come in and help. Like, what does that look like? When you're talking about ongoing monitoring, if you're doing this manually, you end up needing to hire a big old data science team. And how many HR organizations have the capability to hire a big data science team? And then on top of that, they're gonna be like, wait a second, I thought you told me AI was gonna reduce my costs. Now I gotta go hire a big data science team? So this is where it gets tricky, but that's the beauty of technology like what we're building, where we can automate this. We have expertise in this. We've done this for many, many years. We know how to automate it. We know the ins and outs, we know the details. And I think that can go a long way toward helping companies, again, leverage the technology in a positive way, which, again, I'm super bullish on. I think I shared this analogy with you guys last time. AI is like cars: an incredible technology that transformed our societies. But at the same time, you gotta have brakes, you gotta have a dashboard, you gotta have rearview mirrors. Otherwise, what are you driving? You're driving a death mobile. So just put those things in place and then go fast.

Joel: You talk about the EU, and as I listen to this, I realize how complicated and complex this is going to be, and I think about Europe. Well, apparently the UK is gonna have their own sort of set of AI regulations. Talk about what they're doing, or what you expect them to do, and how it might be different from the EU and the US.

Guru Sethupathy: I think part of it is, again, one of the insights that we have is this is gonna be very patchworky. Each country and region and state is gonna have their own. In fact, they're kind of competing. I've talked to folks in various state offices who are like, oh, what's that other state doing? What's that state doing? Well, we gotta... Exactly. So there's going to be that element of some competition amongst the regulators. That being said, there are some common themes, and maybe that's what you're getting at, Joel, a little bit. The common themes are around, hey, you gotta evaluate and monitor your models. That's just the thing. And especially on a couple of dimensions. One is bias; that always comes up. So evaluate bias. The other dimensions that come up are around, hey, transparency and explainability.

Guru Sethupathy: Can you explain what your model is doing? If it's a black box, that's kind of a problem, especially in the hiring space, the lending space, healthcare. In these spaces, you can't just be like, oh yeah, we rejected your loan, good luck, we don't know why. You can't do that. So things like bias and explainability are gonna be really, really common in these legislations. And then when it comes to GenAI, you're talking about things like data privacy and security. Can someone hack into your GenAI? There's all this stuff you guys are probably reading now, like poisoning attacks. I don't know if your audience has heard of that, but that's where you penetrate the system and basically inject it with your own prompts and almost teach it to be a bad kid. And then you can do nefarious things. So there are a lot of security-related things, data-privacy-related things. And so you see some of these common themes across the different laws and legislations.

Chad: And I think it makes it much harder, because deep learning is a black box. In almost every case that's out there, if you take a look at some of these models, they're off learning by themselves. It wasn't something that was programmed. It was something that was part of the process.

Chad: The data, that's entirely different. What data are they training off of? And I think that's where you get companies like HiredScore, which was a smart buy for Workday, where they know exactly what their data sets are. They know exactly. And that's the secret sauce. So a lot of people are pointing to the AI piece, which, obviously, is going to be the output, the learning and the output. But isn't the data just as, if not more, important in this whole scheme of things?

Guru Sethupathy: It's more important. It's not even the same. It's more important. In fact, I don't know how much you follow the whole competition between Meta and OpenAI and Google and all of them in terms of the foundation models, but one of the things you might be noticing is they're all converging. Their performance, at least. And it's because the modeling techniques are known now. Amongst these top-level data scientists at these organizations, they know what the best practices are. Google published this famous paper called Attention Is All You Need, or something like that, which was a groundbreaking paper in this GenAI space. And then other companies copied it. So a lot of that is not where you differentiate yourself. Where you differentiate, and why everyone thinks eventually Google will get back on the right track and win this, is they have the most and best data of anyone in the world. And so at the end of the day, we still expect Google to be at the top of this AI race. But to your point, that's what differentiates it: your ability to know and have the best data, understand that data really well, and then be able to put it out in a way that's well governed. If you have amazing data and you don't make these mistakes, you're gonna be ahead of the game.

Joel: America's unique in that it has a 50-state setup where everyone makes their own rules. Talk a little bit about the current state of what certain states are doing. We've got California with SB 1047, New York City Local Law 144. Anything else that we should be looking into the future for, different states doing unique things around AI?

Chad: It's chaos.

Guru Sethupathy: Absolutely.

Chad: It's all chaos.

Joel: It's cats and dogs living together.

Guru Sethupathy: Each state's gonna have its own. And imagine how complicated that's gonna be. But then again, our tax system is a little bit like that. You have a federal tax system and setup, and then each state has their own specific tax policies, tax rates, tax discounts for this thing versus that thing. And so you end up building an ecosystem of people who specialize in that stuff. And that's what happens with regulation. Anytime you have a regulation, you then have an ecosystem of people, lawyers and experts and consultants, who start specializing. So I expect that to be the case here, Joel, where you're gonna have that regulatory ecosystem. So you have California. New York State is separate from the New York City local law, Joel, so there is a New York State law as well. Maryland is working on one right now. New Jersey has one. And then there's a bevy of other states: Massachusetts, Maine. Colorado has one that's focused on, it's fascinating, insurance companies right now. So that's more targeted to the domain. The other ones that I mentioned, New York State and California in particular, are much broader. So that's gonna be the interesting piece: some of these are gonna be quite broad, and some of them are gonna be domain-specific. And so we're gonna see how that plays out.

Joel: Similar to taxes, we have local taxes, we have state taxes, and we have federal taxes. And the federal trumps all of that. And we talked a little bit about politics in the green room. I was surprised to see the FCC come out with rules so quickly around robocalls. A robocall famously went out recently in Joe Biden's voice, it wasn't him, but it was saying, don't vote. So.

Chad: That was Dean Phillips, wasn't it? That was the other fucking Democrat.

Guru Sethupathy: No, no, it wasn't him. It was someone on his campaign. It was someone who used to work for Dean.

Chad: Yeah, yeah.

Joel: I'm sure there's stuff we don't even know is going on. But I was impressed to see the FCC move so fast to make this illegal, which tells me, and I commented recently, that politicians are freaking out thinking that their voice could be out there saying who knows what. So what's your take on the Feds coming down with, okay, everyone, here are the rules, all the states are gonna follow them? Are we gonna stay in this world of disparate state rules? Can the Fed come in and make sense of all this, in your opinion?

Guru Sethupathy: I think it can in certain areas. And they have. If you saw the White House EO, I think that came out right around Halloween of 2023, there are a couple of areas where the federal government does have domain. And when it comes to things like national security or things like our democracy, that ends up being the federal government's domain. And it's actually concerning, and I think they moved fast, because, we talked about this, Joel, we're in an election cycle right now. So it's not something they can wait six months to do. And you guys have seen it, this can be crazy. It's the voice stuff, it's also the image stuff, it's the videos. You can make anything now. And you saw Sora. Have you guys played around with Sora? Which is, I mean, video...

[laughter]

Joel: Isn't that the one making movies? Or you can, yeah.

Guru Sethupathy: Yeah, that's the one. And you can put people in there. I can put Chad in one of those movies and have him doing stuff he's never actually done. And so...

Chad: No ideas, Cheesman.

Joel: I won't take the podcast there. We got Guru on the show, I'm not gonna muddy it up. Don't worry. [laughter] Don't worry. Guru's too high-class for that.

Guru Sethupathy: So that stuff is happening now, and so I completely understand why they came in to try to curtail that right now. That being said, other things, like, hey, how do you wanna govern software in the hiring space or the lending space? Now, the thing is, and one other thing I do wanna say, and Keith talks about this a lot: there's already federal legislation involved in lending and in hiring and in health. There are rules already. So in hiring, the EEOC, we have Title VII of the Civil Rights Act, where you cannot be biased. And it doesn't say only humans can't be biased, or that AI can be; it doesn't distinguish. So already companies are absorbing that risk without really thinking about it. And so they should be proactive: instead of waiting for additional AI legislation, they should already be on top of this, saying, hey, we need to know what's going on.

Chad: We've talked about this, for... Shit. Years now with AI. It scales decision making.

Joel: That's right.

Chad: It scales bad decision making. It's not that humans haven't been making bad decisions. We have. It's not that humans haven't been biased. Oh, we're so biased. But now the AI is taking that bias behavioral data, some of those decisions some of the "predictors" that some of these companies use, and it's scaling it. So now if you are a company who got away with bias and all those things before, this is going to make all of that scale, and it's gonna make it much easier to identify who's fucking around.

Guru Sethupathy: 100%. I think you're right on both of those counts. It's much easier to scale bad biases and bad decision making. At the same time, it's also gonna be easier to sue. And I think that's where you were going with your second point, if I understood it correctly, in that you know exactly where the problem is. It's in that...

[overlapping conversation]

Chad: It'll be easier to see, though, because the problem's not gonna be small like it was when the scale was small.

Guru Sethupathy: Exactly.

Chad: Scale's gonna be much larger, so the problem's gonna grow with the scale, and it's gonna be much easier to identify. So therefore you're gonna have more people pointing and saying, oh, look, there's a lawsuit.

Guru Sethupathy: But I think all of that is true. It's gonna be easier to sue, easier to have a lawsuit. At the same time, though, let me say, it's gonna be easier to monitor as well. That's my point. When you have 50 different recruiters, and three of them aren't entering the data in the system, and six others are... You know what I mean? It's just hard to know what's going on. [laughter] But with a model in place, it's actually easy to monitor, and you can automate the monitoring. And the thing is, this has been going on for a while. We come from Capital One, and Capital One monitors its models constantly. It's not hard. And so HR just needs to build these practices in. They just need to have that mindset adjustment and say, hey, look, we're starting to become a technology domain, a data-driven domain, and we need to build these other practices in place along with that. It's easier to monitor a model than it is 50 humans.
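The automated monitoring Guru contrasts with chasing 50 recruiters often starts with simple disparity metrics. One common example is the EEOC's "four-fifths rule" adverse-impact check, sketched below on hypothetical selection counts; the function names and data are illustrative, not any particular vendor's API.

```python
# Four-fifths (80%) rule check: compare each group's selection rate to the
# highest group's rate; a ratio under 0.8 is a conventional red flag.

def selection_rates(outcomes):
    # outcomes maps group name -> (number selected, number of applicants)
    return {group: selected / applied
            for group, (selected, applied) in outcomes.items()}

def impact_ratios(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical monthly hiring counts, not real data.
monthly = {"group_a": (60, 200), "group_b": (30, 200)}

flags = {g: r for g, r in impact_ratios(monthly).items() if r < 0.8}
print(flags)  # group_b's ratio (0.5) falls below the 0.8 threshold
```

Scheduled against each month's decisions, a check like this is the sort of routine model monitoring Guru says banks already automate; the hard part is the mindset shift, not the math.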

Joel: Remote work, obviously, has become incredibly popular in the last few years, and we've seen platforms for managing a diverse global workforce: the Remotes, the Oysters, the Deels, etcetera. It seems to me recruiting on a global basis would open your company up to a lot of risk in terms of AI hiring, going back to these different laws in different countries and states and all over the map. How confident are you that these platforms are covering companies' asses? Do you feel like companies are reluctant to go out on a global scale because of the risk? I'm just curious about your thoughts on the global state of things in AI recruiting.

Guru Sethupathy: When you were talking, that made me think of this thing that got passed in India. I don't know if you guys saw this recently, but any company that uses AI in India for anything has to get federal approval, which is crazy to think about. Imagine something like that passing here; it wouldn't. But that's a pretty onerous thing. So to your point, Joel: what if you're a multinational company and you use AI in your hiring practices generally, and India is one of your places? Wait a second, now for India I gotta go get special approval. So this is the thing that's gonna be hard to manage. And this is what I meant: you're gonna have to hire some policy folks, some lobbyists, you're probably gonna hire legal folks, compliance. You're gonna start to have this ecosystem of people to think about how these things affect you. And so that's gonna be a bit of a challenge. I totally agree.

Chad: So if I'm a company out there, this is scaring the shit out of me. Go figure. I'm seeing Workday, I'm seeing CVS, I'm seeing all these big brands that are getting thrown out all over the place. Is it feasible that I can do my business without even using AI, especially when we're talking about hiring in a very competitive market?

Guru Sethupathy: For a short time. For today, for tomorrow, for this next month, for this next year. But at the end of the day, it comes down to how forward-thinking CHROs are. If you're a short-term, operational CHRO, you're like, hey, that's not my problem. We're gonna be fine for this year. I'm focused on this year, I'm focused on this quarter, I've gotta cut costs this quarter. So there are different types of CHROs that I've come across, Chad. The ones that are strategic, visionary, forward-thinking? Absolutely not. They know AI. In fact, there are so many opportunities in HR. HR is the perfect domain, actually. There are so many opportunities to use gen AI capabilities, natural language processing capabilities, predictive analytics capabilities.

Guru Sethupathy: I was talking to a company the other day where they still have each recruiter manually check the resumes that come in. And apparently they had 100,000 applicants last year. I was just shocked. I'm like, really, in 2024? You know what I mean? That's just super inefficient. You can't compete doing that. Again, you can do it for next week, you can do it for next quarter, but in 2025, 2026, how are you holding your costs down? How are you innovating? And things like... every single employee has questions. I remember I had so many questions around vacation days, comp policies, all of that stuff. You can automate all of that. There are so many opportunities. And this is, I think, the fun part of the conversation: hey, how can you use AI to create value in HR? There really are a lot of opportunities. So the companies and the CHROs that are forward-thinking are absolutely not gonna shy away from it.

Joel: I keep coming back to your different states, different laws, and how disparate this gets. Just curious about your industry. Typically there's a Coke and a Pepsi, and then a bunch of Fantas and Dr. Peppers. Where does this go? Do we have local accountants, where people are AI regulation experts in your local market and it's a few people, like your State Farm agent? Or are there a couple of big players, and it's all software, and we know all the answers in all the states in real time? Is it both of those things? How do you think this industry evolves, from one of the pioneers?

Guru Sethupathy: I think there are some parallels. If you think about it, there's already compliance in HR related to hiring. Put aside AI and bias; I'm not talking about that. But when you hire, if I go hire in Canada, if I go hire in Europe, if I go hire in Asia, there are local laws that I have to follow to do that hiring. Compliance, all this stuff, from how you interact with candidates to how you onboard them, the contracts, the pay, the salary, the level, all that stuff. And so you have these companies, payroll companies, and that's how they got started. It was like, hey, you're gonna go hire over there? We'll help you. We'll deal with all the compliance. We'll take care of all that for you. In a sense, that's a little bit of what we're trying to do, and others are gonna try to do as well, which is, hey, we are going to become experts. One of the things we're doing is partnering with law firms, for instance. So we're gonna be experts on that stuff and take it off your plate. That's what a Deel does, or a Dayforce. They take that off of your plate. And I think you're gonna see a little bit of that model when it comes to AI regulations, Joel.

Chad: So in talking to our friend, a friend of the show, EEOC commissioner Sonderling, he mentioned over a year ago that there's more than likely going to be an ISO standard around this, from the International Organization for Standardization. More than likely that's gonna be a standard that everybody can flow into, so it's not so chaotic. And it seems we're on our way there. Can you talk about that a little bit?

Guru Sethupathy: I can. I agree and I disagree. Where I agree is, yeah, there's gonna be a standard, and it's here. ISO 42001 was released late last year. Now, I think it's still in a process where there could be some amendments, some revisions, that kind of thing, so there's still a process to be worked through. But realize there's a difference between market standards and regulations. A market standard is great. It tells you that your product has a certain level of quality, a certain level of certification, that you can't take for granted. But a regulation is still... even if I have the standard, I still gotta go satisfy that regulation if I don't wanna get fined. If I don't wanna get...

[overlapping conversation]

Chad: Doesn't this make it easier for regulators, though, to be able to look at a standard and say, okay, that's the standard, we're gonna get behind that, you have to do that? It makes it much easier for a bunch of 80-year-old individuals who don't know a smartphone; they've got a Jitterbug phone, they don't have a smartphone. So at the end of the day, is this not a smart way to just say, oh, wait a minute, those guys seem to have their shit pulled together? Let's just go ahead and jump on that train.

Guru Sethupathy: There's absolutely truth to that. And in fact, a lot of private-sector companies are pushing for that. They're saying, hey guys, White House, whatever, let's develop market standards. Let's have a public-private partnership. Let's work together, let's develop these standards, and let those standards be the things that provide the common ground around this stuff. And there's some truth to that. If you look at the White House EO, the executive order, what it says is that it puts NIST, the National Institute of Standards and Technology, in charge of developing some of those standards. So if you look at it in the US, that's our approach. Now, look, the states at the end of the day are gonna do their own thing, so the White House can't govern that. But from a White House perspective, other than things related to national security, which are very important and which they own, that's what they're saying, Chad. They're saying, hey, let's build some standards. NIST, we give you the power to run with this; work with the private sector, work with others, and develop standards, and let's go with that. So I think there's absolutely some truth to that.

Joel: That is Guru Sethupathy. Everybody...

Chad: There he is.

Joel: Guru, for our listeners that want to know more about you, maybe they can keep up with some of this stuff. Where would you send them to learn more?

Guru Sethupathy: A couple of sources: follow me on LinkedIn, and follow FairNow on LinkedIn, both my account and FairNow's account. We're publishing a lot on these topics, whether it's a lawsuit, a company that put out something we agree or disagree with, or laws and legislation as we've been talking about. We'll be posting on there. And follow our website as well. We have a whole area where we talk specifically about legislation, about market standards, about what's coming down the pipe and what's in these things. The EU AI Act, for example: we have a whole page on who's affected, what's gonna happen, and all that stuff. So between those two spaces, you should be able to learn a lot.

Joel: Long story short, Guru's a busy guy.

[laughter]

Chad: The Chad and Cheese podcast has a bat phone to Guru, okay? So you can always listen to us. But yes, definitely go to FairNow.

[laughter]

Joel: Yeah. Thanks for keeping us up to speed, Guru. This is a complicated issue. Chad, that's another one in the can. We out?

Chad: We out.

[applause]

Outro: Thank you for listening to, what's it called, the podcast, the Chad, the Cheese. Brilliant. They talk about recruiting, they talk about technology, but most of all, they talk about nothing. Just a lot of shout-outs to people you don't even know. And yet you're listening. It's incredible. And not one word about cheese. Not one cheddar, blue, nacho, pepper jack, Swiss. So many cheeses and not one word. So weird. Anywho, be sure to subscribe today on iTunes, Spotify, Google Play, or wherever you listen to your podcasts. That way you won't miss an episode. And while you're at it, visit www.chadcheese.com. Just don't expect to find any recipes for grilled cheese. It's so weird. We out.
