
The AI Guru Talks Regulations

The word audit scares most people. Hello, IRS? But sometimes audits can be your best friend for getting a good night’s sleep, especially when A.I. recruiting is involved. With laws being passed all the time that could put your company in jeopardy, you need some solid advice. That’s why we invited Guru Sethupathy, founder & CEO of FairNow AI, to help us make sense of this A.I. audit stuff. FairNow is a platform for simplifying and automating fairness compliance. They help businesses by auditing and monitoring AI algorithms in hiring. FairNow is on a mission to help companies build trust in their A.I., and if you’re looking at giving your vendors a deep dive to make sure they keep you out of the crosshairs of law enforcement, then this episode is a must.


Intro: Hide your kids. Lock the doors. You're listening to HR's most dangerous podcast. Chad Sowash and Joel Cheesman are here to punch the recruiting industry right where it hurts. Complete with breaking news, brash opinion, and loads of snark. Buckle up boys and girls, it's time for The Chad and Cheese podcast.

Joel: Oh yeah, it's your therapist's favorite podcast, AKA, The Chad and Cheese Podcast. I'm your co-host, Joel Cheesman. Joined as always, the Kelce to my Holmes, Chad Sowash is in the house, and we are excited to welcome Guru Sethupathy, founder and CEO of FairNow. Now, with a name like Guru, that's a lot of undue pressure. Is that tough to live with, a name like Guru?

Chad: Did you ask your parents, what are you doing to me? I mean, all this weight on my shoulders.

Joel: He was born and they said he will run an AI company in 40 years. [laughter]

Guru Sethupathy: The thing about Guru is like you could put that in any context. I'm out on a tennis court and they're like, what's your name? And you're like, Guru. Oh shit, you better... This guy must know how to play tennis. So it's kind of, in any context, you kind of have that pressure. So I gotta step up.

Joel: I mean, I don't know your sexual preference, but I would think with the women, if your name is Guru, you better bring the knowhow, if you know what I'm saying.


Chad: We're gonna be packing the heat, anyway.

Joel: Yeah, You better, you can't come with some weak shit if your name is Guru.


Chad: You better know how to date and whole nine yards baby, whole nine yards.

Joel: Mm-hmm. Yeah, yeah, yeah.

Chad: All right, Guru. So to get through this craziness, let's hear a little bit about you. Obviously, we just trampled all over you, but let's hear a little bit about Guru. Do you like long walks on the beach?


Chad: What makes Guru tick? That's what we want to know. Go ahead.

Guru Sethupathy: Well, with a name like Guru, you had to guess, I started out in academia. So that's just... There you go. It had to happen.

Chad: That's smart.


Chad: Didn't have a choice.

Guru Sethupathy: Yeah, right. It was just meant to happen. So, now been... For the last 15, 20 years, guys, I started out in academia. I'm an economist by training. And, but that's where I cut my teeth at this space of both AI and human capital. That's kind of, those are the two things I've been interested in the last 15 years. And when I go for walks on the beach, I think about AI. No, I'm kidding. But seriously, so, those two things are what I've been thinking about. And so AI is really interesting. I don't know how long you guys have followed the topic. It's obviously, really sexy again. It used to be sexy in the late '90s, early 2000s, for those in the audience who remember... Who played Chess or Go, or...

Chad: Oh, Big Blue.

Guru Sethupathy: Different types of games. Yeah. There you go. Right? Deep Blue is what it was.

Chad: Oh, yeah. Deep Blue and Kasparov, right?

Guru Sethupathy: Yes. That was '97, 1997. And the headlines were like, AI is here beating... 'Cause Chess was considered the ultimate human game created. And so that was 25 years ago. And then AI went off the radar a little bit. There was other technologies. There was like e-commerce and internet and cloud.

Chad: Cloud.

Guru Sethupathy: Social media, and Facebooks of the world, and the Ubers and...

Joel: QR codes.

Guru Sethupathy: All this stuff.


Guru Sethupathy: Exactly. And so where was AI? It was under development all these years, but it was a very different methodology. And what changed was the volume of data out there. And so all of a sudden, the way that Deep Blue functioned is very different from the way ChatGPT functions. So...

Chad: Well, and how we process it though, right? I mean, processors, we had all that data before...

Guru Sethupathy: Exactly.

Chad: And we just couldn't... We couldn't process it down. We were doing it with CPUs now we're doing with GPUs. I mean, things have changed.

Guru Sethupathy: Those two things changed. The processing, exactly, the compute power and the volume of data. We had data back then, but the volume today, exponentially different. And so those things changed, and so AI is now back in the forefront, and it is like really, really having an incredible impact. And the other big thing that really was a big deal with ChatGPT, it's impressive in some really cool ways. But for the first time, regular people, your aunt and uncle, your buddy down the street who doesn't know anything about tech, for the first time, they could interact with an AI system. Because again, AI has been developing the last few decades, but the average person didn't know. Now, they can interact with it, and that's just changed the whole conversation. And we'll see how that affects regulations, and we'll talk about it in a second, but I think that's just raised the awareness of people of, oh my gosh, this is a thing. It's happening. What's the implication? What's the implication for my job, for me? Well, how's it gonna change my work? How's it gonna change my career? What should I go study?

Guru Sethupathy: Like, all of these things are super fascinating conversations. That's one thing that I think is happening. The other thing is just I've always just been fascinated by human capital, human potential. Why do some people succeed, other people less so? What determines people's careers? How do they go in and out of career? Skills, how are skills changing? All of this stuff. One of the things I've been thinking about a lot is the half life of skills. Like, if you look back at our parents and grandparents generation, they would kind of pick a thing, and they would just kind of do that for 10 years, 15 years, 20 years. Maybe they would have one other career, maybe two careers over their lifetimes. Now, people have four, five, six, seven different careers. And a lot of it is kind of warranted because things are changing so fast. Okay, you gotta go do something. And so how is that gonna evolve and what does that mean for people? I think the intersection of these two topics, AI and humans, is just kind of what I'm passionate about.

Chad: So the impetus of, talk about that. You obviously saw that there... This was happening, scale was happening faster, it was processing faster. It was moving faster. I even think that they're talking about Moore's law running much faster today than it was just even 6 to 10 months ago, for goodness sakes. So talk a little bit about that and give us a little background of why you started FairNow.

Guru Sethupathy: I think it was a couple of trends that we saw that were happening. One is I've been bullish on AI for over a decade. And so I continue to think the technology, the sophistication, the capabilities are going to continue to improve. And the ramifications of that are pretty tremendous. It's going to be playing a bigger and bigger role in making important decisions. And we make decisions all the time in our jobs. Whether it's people decisions, whether it's business decisions, whether it's all sorts of decisions, right?

Chad: Yes.

Guru Sethupathy: So that's one thing. And at the same time, we don't know how these systems work, that's the fundamental thing. Even if you go and talk to an engineer and you're like, "Hey, why did it do that?" I don't know if you've read on ChatGPT and GPT-4, but it does some weird stuff sometimes. There was one example where a person was talking to it in English and it started responding in French. And he's like, what? Like where like... And there's so many cases like that. It's really, for instance, it's really bad at math and logic, which for us, it doesn't make sense because we tend to think like simple math and logic, like third grade level math and logic...

Chad: Yeah.

Guru Sethupathy: It's easier than high school level creative writing, for instance, right?

Chad: Right.

Guru Sethupathy: But it's better at that high school level creative writing than it is at third grade level math and logic. And so in our brains, that doesn't compute. Why...

Chad: Does it...

Guru Sethupathy: Why is that? And so there's a bunch of stuff like that where we don't understand how it works or why it works or why it does what it does. That's a big deal. You have to put guardrails around this stuff. You still need humans in the loop. You have to put guardrails. You have to worry about things like explainability and fairness and transparency and all these things, these things really matter. And so I felt both of these things were going to happen. Both AI was going to be more and more important, at the same time for this to really take off, you need to build trust in these systems, because it's very easy to get scared or weirded out and then boom, the whole system can be... You could just lose trust in the system and then, who knows what happens at that point? So to me, really, those two things go together. The advancement of technology has to come along with trust. And that was the impetus for FairNow.

Chad: Well, around the how, we'll get to that here in a second, but there are so many different types or classifications, let's talk about, around AI, and it just makes everything so muddy. So you've got...

Guru Sethupathy: Confusing.

Chad: Artificial narrow intelligence, artificial general, artificial super. And then they even categorize... Like they break those out from there. So how much of your job is actually trying to educate and help organizations and literally, just human beings understand what the hell is going on out there?

Guru Sethupathy: That is a very insightful point. I would say in these early days, more than half of our job is that, Chad. Like we have a technology platform, but more than half of my job is just educating, especially... We're primarily working in the HR space, talking to executives in HR and just educating them on a couple of different things. First of all, what is AI? Like literally what is it? How do you define it? When people say AI, what do they mean? And how would you think about kind of categorizing it? And so we even talk about two big buckets of AI. One is predictive analytics, like any kind of model that can predict, Hey, should that person get hired? Hey, should that person get promoted? Those are just simple predictive analytic models.

Guru Sethupathy: And then the other one, which everyone seems to know about now is GenAI, which is generative AI. Which is these systems that can talk to you and have a conversation, etcetera. Those are the two big buckets. And then within those, there's subcategories that you can kind of cut into. But it's just good to know because for a lot of people, they just think AI, gen AI, they forget about the predictive models and those are really important too. So that's kind of one big bucket of things that people should think about.

Guru Sethupathy: Then within that, another thing we educate on is, okay, what's going on in terms of laws and regulations? And that's a whole other thing. People are like, what is even going on here? I've heard about the New York City law. What does that law say? And then what about other laws? And so in the US we have a patchwork of different states coming together with their own laws, but then in places like Canada and the EU, it's much more top-down, it's much more kind of nationally, federally led laws as well. So that's bucket two. And then we talk about governance more broadly. Okay.

Chad: Yeah.

Guru Sethupathy: How do you do it responsibly? How do you do this well from day one? So you just don't have a bunch of data scientists going around building cool models, and then you don't know how this is all going to explode a year from now. It's almost like a science lab where you put the science kids in the science lab, we're going to run these experiments, but like, hey, let's put some controls around this. We don't want to blow up the science lab. And so what does good governance look like? So those are the three topics that we're educating folks on.

Chad: So I think we get wrapped up in classifications and the how, as you talked about, but should we be more focused specifically on the what? And what I mean is the outcomes and the impact on humans. What does it do to the actual human? For these individuals who are qualified and they're getting negatively impacted, because that's the signal. That's really the end signal that we should care about the most. It's like the, how. I think we get really... We dig deep into it. It's like, well, how does this predict and well, it's like, but at the end of the day, it's who's being impacted and how are they being impacted? So talk a little bit about that because I do believe in the how. Don't get me wrong, I am a geek, I love this stuff. But I really believe the what happened and who is impacted is really the thing that most companies should be really feeling... Really focusing on mostly. What do you think?

Guru Sethupathy: I think that's a great point. In fact, one of the things, both in our product and our platform and how we educate folks, we talk about risk levels. And the risk levels are exactly related to your point around who is being impacted and how are they being impacted? So for instance, let me give you two examples. If someone is being impacted because they don't get a job or they don't get an interview, that is a very different risk level than I'm just chatting with a gen AI system and it's kind of bullshitting. Those are two different levels of impact and therefore two different levels of risk and that requires different levels of governance.

Guru Sethupathy: So our entire governance framework, Chad, is built on risk level, which is a direct function of the point you just made of like who is being impacted and how are they being impacted? So that is absolutely, absolutely, how we derive our governance logic. The other point I want to make though, and I think this gets lost sometimes is because there is a lot of attention around these AI systems and oh my God, what are they doing? Are they biased? All this stuff. Humans are biased too. And so one of the things I don't want people...

Chad: That's how the AI got biased in the first place.

Guru Sethupathy: Right? Exactly.

Chad: It didn't just... AI didn't just say, hey, we're going to be biased, let's go ahead and bias this. Humans, the human behavior and data, is what made this whole system. And here's, I think, the big thing, Guru, and tell me what you think about this. First off, there's already been bias inherent in the system 'cause it's humans. So we take that bias, we put it into AI, and the big difference around AI is AI scales better than humans do, which means we can negatively or positively, okay.

Guru Sethupathy: Exactly.

Chad: Impact.

Guru Sethupathy: Impact a lot more people in a millisecond.

Chad: Yes. Yes.

Guru Sethupathy: That's right, that's right. And that's the concern, that's the concern. But where we come from is this technology, though, if done well, can scale things positively as well. And that's where we differ from some people who are like, hey, let's stop doing AI or let's kind of put a pause on... We think this can actually have really positive implications if you're building good governance and responsible governance while you're doing it. There's a really cool study that just came out, a randomized controlled trial, so the best kind of study that you can do. And they broke up men and women and put them in different buckets. And for one audience they said, you're going to be assessed and interviewed by humans. And for the other one they said, you're going to be assessed and interviewed by AI. More women self-selected to go into the latter bucket.

Chad: Oh yeah, because they already saw the bias that was already inherent in the system.

Guru Sethupathy: They know, for instance, the biases. Any woman that you talk to knows how interviews work and how that stuff works. And they know the biases that are already inherent to that, so they believe the AI is going to be less biased.

Chad: Wow.

Guru Sethupathy: Right. If you're a company and you're not getting enough female candidates, you can actually source more female candidates by having an AI system. Now, then you need to make sure that AI system is actually less biased.

Joel: Unlike Amazon's.

Guru Sethupathy: Yes, exactly, exactly, exactly. But there are ways of doing that, so it's not impossible. There's a lot of research around this, and I do think that's the future. Again, if you do this stuff responsibly, it's almost like if you have a powerful car, if you have a Ferrari, you can go crash that thing in 30 seconds, or if you use it responsibly, you can get to your next place faster, in a cooler way, all that kind of stuff. So it's all about how you use a really powerful technology, and that's what AI is, it's a powerful tool.

Chad: That's how Guru picked up chicks, Joel. We just found out.

Joel: I'm going to say, you can use the name Guru with great responsibility, or you can abuse it. It's really up to you. How are you going to use that kind of Ferrari?

Guru Sethupathy: Touché, touché. Oh man, that was good.

Joel: Clearly our AI overlords don't want me to be on this interview. But I'm curious about the current system of auditing or what you think it's going to look like. A company looking to get an audit, where the hell do they start and where do they go?

Guru Sethupathy: Yeah. Where they can start, where they can go, they can come to us, that's what we do. And so, one of our...

Joel: And why do they trust? Why should they trust you or anyone else?

Chad: Why should they... Yeah. Why should they trust anybody?

Joel: Does the government give you a stamp? Do you have a little badge from the Boy Scouts.

Chad: His name is Guru, come on.

Joel: What is it that you have?

Guru Sethupathy: I'm Guru with a Ferrari, right? So that's how it spreads. A couple of things. So one of the key things around these laws is they are asking for an independent audit, almost every single one of them. It started with New York City. And let me say one thing before I go further. New York City law, Chad, you and I talked about this very briefly. I don't love the law. There's a bunch of things that if you want, we can get into it. There's a bunch of things I disagree with about, I would have written the law differently. But the thing it did, which I am happy about, is it raised awareness and all companies now are thinking, oh shoot. Okay, we got to slow down here. There's going to be other laws coming in. It's not just New York City, it's going to be other laws coming down the pipe. And so we just got to slow down. We got to... So it just, at least raised awareness of people are thinking about this in a more thoughtful way. So I do wanna give them credit for that.

Guru Sethupathy: Okay now coming to...

Joel: They're scared shitless is what's happening. But yeah, okay, they're aware of it.

Guru Sethupathy: Exactly. They are scared shitless. And then the second point now to your question, Joel, is okay, what is an audit? And so almost all of these laws are asking for an independent audit. So what does that mean? That means, like, if you're a vendor, you can't just go around and say, oh yeah, I just audited myself and everything looks fine, you have to go to a third party. Now, we're in the early days of this. So there is no, like, hey, this is a verified vendor who does the audit... It's not like FairNow has been verified in that way or this other company has been verified in that way. So that doesn't exist yet. But here are a couple of things I will say: if we do audits and they don't work out well, that's going to negatively affect us. So there's an incentive for us to be very, very, very careful how we do this.

Guru Sethupathy: The other thing I'll say is I've been doing this for decades, so I kind of know what an audit looks like in the topic of fairness and algorithms. The other thing that's happening now, this idea of market standards. And this is where I want to talk about carrot versus stick. So the laws and regulations are the stick. Hey, if you don't get audited by an independent third party, you got to pay a fine, blah, blah, blah. Standards are these market standards that are coming out, like ISO 42001. ISO, I'm sure your audience knows, is a very kind of common, well-known international standard around certification of a variety of things. It could be a lot of things, and a lot of it is technology oriented, and now they're doing it in the AI space. And 42001 is specifically about certifying that your AI systems are responsible and they have a bunch of stuff.

Guru Sethupathy: And in fact the NIST frameworks, the National Institute of Standards and Technology, those frameworks are embedded in this standard. And there's going to be other standards. I'm sure SOC is going to come out with some standards. You guys have probably heard of SOC. The EU AI Act is going to have its own standards. So it's going to be a bunch of standards, right. And these standards, Joel, to your question, those are well known, kind of almost agreed-upon standards of saying, okay, gosh, if Joel's company has ISO 42001 certified AI, I know they're responsible. And so what we're doing in our product is saying, hey, we'll audit you, but at the same time, we'll embed these standards in there. And so if you pass these standards, boom, you don't have to trust us, trust ISO 42001. So that's kind of how we're working on this as well.

Joel: So we're in the early stages of this, I assume I could be Joey Bag o' Donuts, throw up a website and say I'll audit your AI and approve you, whereas, they're probably... It feels like the background check business to me. You got your mom and pop doing stuff, you got like your multi-global, like huge corporations doing it. Talk about the competitive landscape, how does this thing shake out? Or is there a Coke and a Pepsi? And then everyone else is like, good luck with that. Like how does this thing shake out?

Guru Sethupathy: I think you're right on a few fronts. The barriers to entry just for doing an audit, not that high. You got to know a little bit about how to do it. You got to understand the laws. You have to understand EEOC laws, the existing laws, something Keith Sonderling talks about often. And I think you... I remember actually listening to your podcast with Keith. There's already laws on the books. The EEOC has employment discrimination laws around just human bias and discrimination and stuff like that.

Guru Sethupathy: And so those laws still apply. It's not like those laws have gone away. So you do want someone who has expertise in this space at the intersection of employment law, discrimination law, bias, and how do you conduct an audit? For example, the four-fifths rule. Not everyone knows about the four-fifths rule. So there's some subject matter expertise here. So I don't want to say it's completely simple, but there's some subject matter expertise. So, there are gonna be folks who have that subject matter expertise that are gonna come into this space. So that's part one. Part two though, is, beyond just the bias audit, there's a whole level of governance, Joel, that I believe is important. So let me give you an example.

Joel: Yep.

Guru Sethupathy: We've been chatting with customers. HR executives, dozens and dozens. One of the first things I ask them is, hey, how many models do you have in your HR organization? Not one of them has got the right answer. Meaning, they don't even know how many models they have in their own organization. And I happen to know it 'cause I'd be talking to their folks in their organization and I'd ask them, hey, how many models here? How many models here? And they didn't know. And so the first level of governance, forget bias audits, just forget that. The first thing you have to do is just know what you have. You gotta know. You gotta inventory, you gotta know what you have. You gotta know what these models are doing, why they're doing it, who has access to these models, what data they're using. That's just basic governance 101. Then, as you get into more high risk models, then you do things like, okay, now we have to develop validation reports.

Guru Sethupathy: We have to do bias audits, we have to do explainability reports. Then we have to do compliance with different laws. So we actually have a framework for different levels of governance, depending on the risk level of the model. And so, this to me... So yeah, you can do a quick bias audit. To me, a bias audit is like a bandaid. So if you go see someone and they're like, oh yeah, you have a scratch, let me put a bandaid on there. Sure, there's gonna be dozens of those.

Joel: Doesn't make you a doctor.

Guru Sethupathy: It doesn't make you a doctor. And I think that's what we bring to the table. And the reason we are able to do that is, me and some of my colleagues, we're ex-Capital One folks. The whole financial services lending space, if I can just digress for a second, there have been laws on the books there against discrimination in lending for decades now. And so any bank or any institution has had to be very, very well governed around the models that they use in lending and banking and so on. And when I was at Capital One, I learned about that and I said, wait a second, that's gonna come everywhere. That's not gonna be just in banking, that's gonna come to HR, that's gonna come to health, that's gonna come to insurance, it's gonna come to all these spaces and these spaces don't know how to do this governance.

Joel: Yeah.

Guru Sethupathy: And so to me that's kind of what's the different, that's gonna be the differentiator between just an auditor versus someone who's gonna, like you said, a doctor in this space.
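For listeners who haven't met it, the four-fifths rule Guru mentions above is concrete enough to sketch: compare each group's selection rate to the most-selected group's rate, and flag anything below 80%. A minimal illustration in Python, with made-up applicant counts (these numbers are hypothetical, not from any real audit):

```python
# Four-fifths (80%) rule of thumb from the EEOC Uniform Guidelines:
# a group's selection rate below 4/5 of the highest group's rate is
# generally treated as evidence of adverse impact.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, applicants)."""
    return {g: sel / apps for g, (sel, apps) in outcomes.items()}

def four_fifths_check(outcomes):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Impact ratio: each group's rate relative to the most-selected group.
    return {g: {"rate": r, "impact_ratio": r / best, "flag": r / best < 0.8}
            for g, r in rates.items()}

# Hypothetical audit numbers: 48/100 selected vs 30/100 selected.
results = four_fifths_check({"group_a": (48, 100), "group_b": (30, 100)})
for group, stats in results.items():
    print(group, round(stats["impact_ratio"], 3),
          "FLAG" if stats["flag"] else "ok")
```

Here group_b's impact ratio is 0.30 / 0.48 = 0.625, under the 0.8 threshold, so it gets flagged. Real audits layer statistical significance tests on top of this ratio, which is part of the subject matter expertise Guru is describing.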

Joel: Yeah. We have a lot of vendors listen to our show. A lot of people that are building products, cool things, that aren't necessarily thinking about bias or how it could impact or break the law. Should vendors reach out to you just as employers do and say, like, hey, we've already been vetted by FairNow, we've already been vetted by A, B and C top, whatever. Are they doing that, and should they do that, and what would it cost?

Guru Sethupathy: Yeah. We're already working with vendors. So we're actually working with both kind of enterprise customers who are using these things, as well as vendors who are building these things. And if you look at the laws, New York City is focused on the end customer but other laws like New Jersey, there's a law that's coming down the pipe that's focused on vendors. But if you look at most of the other laws, California, Massachusetts, the EU AI Act, like Canada, all of them are focused on both sides of the market. Both the end user and the company that builds the technology. So vendors should absolutely be concerned about this. And yes, absolutely. And we're already working with them on this. So yeah, they can come talk to us, we're already working with vendors. And I think one of the things there that's gonna be coming down the pipe, and we're already seeing this, is their customers are asking them these questions, Joel.

Joel: Yeah.

Guru Sethupathy: Their customers are starting to come and ask them, hey, wait a second. I didn't ask you last time when I signed the... Signed a contract. But now I wanna know what data are you using? What's your training data? Is there a bias? Have you had a third party look at it? They're starting to ask those questions and so that's where we can help.

Joel: Here's the thing though, Guru, is that I can have, I'm a vendor and I go through all of the hoops, right?

Guru Sethupathy: Yeah.

Joel: But the problem is, as soon as I put that in front of a biased set of data, and then a company takes over and they try to make it more customized/bespoke, depending on what side of the pond you're on, then that fucks the entire system up. So you can be as, much like we've said for years, applicant tracking systems worked perfectly until companies came in and they started customizing the hell out of... Oh no, we wanna do it this way, that way, and the other thing. And now everything is bottlenecked, it's not happening efficiently like it should. Now, in this case, a vendor can go through all of those hoops, but a company starts to push all this infected, let's say, data into the large language model, that's not the vendor's fault. Who is responsible there? Because California wants the vendor to be responsible in some cases.

Guru Sethupathy: And this is where... We're in the top of the first inning, Chad, right?

Joel: Yeah.

Guru Sethupathy: There's gonna be so much that we have to sort through and figure out both from like a business perspective and from a regulatory perspective. So on this point around training data, so models, so when I say a model, I'm actually referring to two things, an algorithm and the training data that that algorithm was trained on. So sometimes the same algorithm could be five different models because every time there's a different training data, that to me, that's five different models. And four of those could be good, and the fifth one could be really biased data to your point, right?

Joel: Mm-hmm.

Guru Sethupathy: So that's point number one. Point number two is, we actually have come up with a pretty cool technique where we've developed a synthetic dataset, a proprietary synthetic dataset that then runs A/B tests on your model, it doesn't even use that dataset. I'm gonna say, Hey, forget about that dataset. Let's use this dataset and run a simulation on your model and test your model for that. And the laws actually allow for that. So the New York City law, there's actually a section there that says, hey, if you can't train on a historical training dataset for a variety of reasons, you can use an alternate test dataset. And so we've created that test dataset, so that way a vendor can say, hey, look, I don't know what training data you have, customer A, but based on this kind of canonical test data, our model is fine. And that's a valid thing, at least from the point of view of the New York City law, etcetera. The other thing that vendors should be aware of is how are their customers using their technology? It used to be the case that you're like, I don't care how you use it. Like, good luck to you, user error is on you, kind of thing, right?

Joel: Yeah. You're on the hook.

Guru Sethupathy: You're on the hook. And they still are on the hook, but where it starts to affect a vendor is now reputation. So now if, for instance, five different customers of theirs have been sued or audited. And they're all using this technology, their vendor technology. Again, they're probably not on the hook, Chad, but it starts to look bad. I don't know if you saw this Workday lawsuit.

Joel: Yeah, yes.

Guru Sethupathy: There's some debate there around whether that was a real legit lawsuit or if it was kind of contrived, but we can debate that. The point being, it still looks bad on Workday, right?

Joel: Yeah.

Guru Sethupathy: And so Workday now has an incentive to make sure their customers are using their technology in the right way. And that's another way, again, where we can help, where we can say, hey, if we're auditing a vendor, we can also say, who are your customers? Let's go audit them. And we can give them a score, so the vendor can know how each of their customers is doing. And if they see one of their customers is scoring a D, for instance — all right, they're probably not using it correctly, let's go talk to them. So those are the kinds of things we wanna solve.
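The audit technique Guru describes — scoring a fixed synthetic test dataset with a hiring model and comparing selection rates across groups — can be sketched roughly as below. This is an illustrative sketch only, not FairNow's actual method: `score_candidate`, the group labels, and the test records are all made-up stand-ins. The impact ratio computed here is the metric that NYC Local Law 144 bias audits report.

```python
def score_candidate(candidate):
    # Placeholder model: a real audit would call the vendor's model here.
    return 1 if candidate["years_experience"] >= 3 else 0

def impact_ratios(candidates):
    """Selection rate per group, divided by the highest group's rate."""
    totals, selected = {}, {}
    for c in candidates:
        g = c["group"]
        totals[g] = totals.get(g, 0) + 1
        selected[g] = selected.get(g, 0) + score_candidate(c)
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

# Synthetic records standing in for a proprietary audit test dataset.
test_set = [
    {"group": "A", "years_experience": 5},
    {"group": "A", "years_experience": 2},
    {"group": "B", "years_experience": 4},
    {"group": "B", "years_experience": 6},
]
print(impact_ratios(test_set))  # ratios near 1.0 suggest parity across groups
```

The point of running a canonical dataset like this, rather than the customer's historical data, is that the same test can be applied identically to every model a vendor ships.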

Joel: Gotcha. I wanna switch real quick to the top-down versus bottom-up, states versus federal.

Guru Sethupathy: Yeah. Yeah.

Joel: And so, going back to your example, we did see financial institutions — the government relaxed lending laws, and we broke the entire globe. The United States screwed the entire world because everything was relaxed from the feds going down. We don't really have anything in place federally; the EU is putting things in place. So how much risk do we have of breaking the entire damn world again if we don't put guardrails in place federally? I know the states are pushing, and I appreciate that, but does it really matter unless we do a federal mandate from the top down?

Guru Sethupathy: Yeah. Really interesting. A really interesting analogy, comparing it to the financial system and what happened in the late 2000s. A couple of things. One is just... I just wanna put this out there. We're just much more of a states' rights type of place in America. And this isn't to get into politics or anything like that, but that's just how we were founded. I'm just going back to our founding and how we got started. We were a bunch of states, and this is part of our...

Chad: That was over 200 years ago.

Joel: This is America, Chad.

Chad: That was over 200 years ago. Back in the horse and buggy days, we... Continue, continue.

Guru Sethupathy: No, no, But my point is, that's not the...

Joel: He said, no politics, Chad, I know where you're going with all this.

Chad: No, that was just a historical reference. Had nothing to do with politics, Joel.

Joel: I know what you meant. Guru, go on, save us.

Guru Sethupathy: It's just in our DNA. Our DNA is just very different from that standpoint, from Europe and other places. And so, I do think we are going to have, states coming out with their own laws. And they're gonna say we wanna regulate. Now, in some sense, that's good. The thing I like about that kind of stuff is experimentation. Hey, let California try what they wanna try. Let New York try, let Illinois try. And let's see what's working well. The whole idea of states is a laboratory for trying to see what's working and what's not. So that part is good. The part that sucks kind of is like, if you're a national company, your head is gonna explode with 20 different laws across so many different states. And you're like, oh my God, your head is literally gonna explode.

Chad: Well, this just makes business good for you, Guru, because now it is so goddamn complex. Nobody can keep up with it, dude. If there was a blanket law, it'd make it a little bit easier...

Joel: He's gonna bet on that garage for a few more Ferraris.


Chad: Maserati, just continue.

Joel: Yeah. Yeah.

Chad: Aston Martin.

Guru Sethupathy: So that is, I think, going to be the plus and the minus of how the US is gonna flip. Now, the thing that is interesting here is you're already starting to see some federal guidelines. They're not specific laws, but NIST — which is, again, an organization that we are members of and are working and partnering with, the National Institute of Standards and Technology — they've come out with the NIST AI Risk Management Framework. And that framework is supposed to be something that states can borrow from, that companies can borrow from, and that they can start building into standards and into governance practices.

Guru Sethupathy: So again, I don't think it's going to be all or nothing — hey, is it only states doing stuff or is it only federal doing stuff? I do think in this case, Chad, you're gonna see both. You've already seen the White House put out a blueprint. Congress is already talking about guidelines and guardrails. And so I do think you're gonna see high-level guardrails at the federal level, and I do think you're gonna have specific laws at the state level. That's how I think it's gonna proceed in the US.

Joel: So let me throw another curveball at you. The globe — every country and municipality is gonna have their own laws as well. That company with that satellite office in Belgium is thinking, what the hell? How are we gonna cover our ass on this one? What advice would you give them? And does your company cover a global footprint? Is there an association in the offing that will sort of keep track of all this stuff? Is lobbying in the future?

Guru Sethupathy: One of the things we're seeing kind of in the early days is companies just turning off some of their AI systems.

Joel: Yes.

Guru Sethupathy: One thing I've seen is, as it gets complicated, they're like, you know what? Screw it. I'm just gonna turn off my hiring AI in New York City or in California, or whatever. Now look, that's not a sustainable answer, that's not how you build a business, that's not how you continue to do things over time. But that's kind of been a quick reaction from some companies. And so that's part one of my answer to you. Part two is, yes, we help. In fact, that's a big part of our value prop: there's gonna be 50 different laws, you are gonna have 50 different models, that's 2,500 different combinations, and we're gonna keep track of that for you. So that is part of our value prop, Joel, because otherwise, like I said before, your head is just gonna explode. And 100% there's gonna be lobbying. When is there not lobbying when it comes to any law? So there's gonna be a huge amount of lobbying. And I think a lot of lobbying is actually gonna come from the big players.

Joel: Yeah.

Guru Sethupathy: I think it's the Googles and the Microsofts of the world that are heavily involved in shaping some of these things. In fact, Chuck Schumer, I think, is convening a meeting on responsible AI. And if you look at the names on the list of people that are in the room, it is a who's who of people we've all heard of — the CEOs of Microsoft and Google and all these really big technology companies. So there's absolutely gonna be lobbying, there's absolutely gonna be reshaping of this. And in fact, that's part of where it comes back to this carrot and the stick idea. There's gonna be laws on the books, but at the same time, there's gonna be some amount of — not self-regulation, but market regulation. And so the analogy I'm gonna give you is SOC 2 compliance. And I think, Joel, you were shaking your head, you didn't know that one before, but Chad, I think you might. But SOC 2 is a thing.

Joel: Thanks for throwing me under the bus Guru. I appreciate that.


Guru Sethupathy: I just was making sure. If you were like, no, no, I know this stuff, I would've just skipped on. I would've just skipped on.

Chad: Guru, everybody knows Joel is just the pretty face on this podcast. Okay, so it's okay.


Guru Sethupathy: Well in 5 seconds, Joel, you're gonna know as much as I want.

Joel: I was just doing sit ups, is why my head is nodding. Doing some crunches.


Guru Sethupathy: But SOC 2 compliance is basically a standard for making sure that you are handling your customers' data properly — their documents, their information. It's called InfoSec compliance. SOC 2 is not a regulation, but it is now so prevalent because any time you're a vendor and you wanna sell a SaaS product to someone, they will ask you in their procurement process, are you SOC 2 compliant? It just kind of covers your ass. And so my expectation is that will become a thing here too. There'll be standards out there that you'll have to follow. Again, it could be ISO 42001, maybe it's some SOC 10 compliance, something. There's gonna be something. And that is something that all companies will just have to follow. And that's gonna simplify things — from, oh my God, 30 laws in 30 states, to, hey, if I follow the standard, that should cover 90% of what I need to do. So I think that's also something you're gonna see in the future. And that's also something we help with.

Chad: Get ready kids, it's gonna be on RFPs all over the place when that happens. Guru, I want your commitment, my friend. This is just the start of this conversation. We need to have you back to continue this discussion because it is constantly changing — state to state, country to country, from GPT to Google Gemini. Jesus, it's all over the place.

Guru Sethupathy: And both sides are just... Both parts of the equation are changing so rapidly.

Chad: Yes.

Guru Sethupathy: What I mean by both parts of the equation is, the technology is evolving at such a fast pace. And so if you ever wanna have me come back and just talk about the technology and what it's doing, and what it's capable of and what it's not capable of, I can go into that too. What are some of the things it sucks at? We can even nerd out on that a little bit. So that side of things — the technology and how fast it's changing and where it's headed — that's one piece. And then the laws and the standards and that stuff is also changing very quickly.

Chad: Because they go hand in hand, kids. They go hand in hand. Guru, if somebody wants to find out more about FairNow and/or they want to connect with Guru — 'cause who the hell doesn't want to connect with a Guru, for goodness sakes — where would you send them?

Guru Sethupathy: Thank you. Yeah. So our website is the first and best place, and you can learn a lot more about our company and what we do there. You can reach out to us directly on the website, or email me. Happy to chat. Like you said in the very first question you asked, Chad, we're educating people on this stuff too, right? It's not just our technology. We're educating people, helping them think through the complexities here and how they can get started down this journey. So you can email me at

Joel: Love Guru, AI Guru or just plain Guru. Chad, another one is in the can, we out.

Chad: We out.

Outro: Wow. Look at you. You made it through an entire episode of The Chad and Cheese Podcast, or maybe you cheated and fast forwarded to the end. Either way, there's no doubt you wish you had that time back. Valuable time you could have used to buy a nutritious meal at Taco Bell, enjoy a pour of your favorite whiskey. Or just watch big booty Latinas and bug fights on TikTok. No, you hung out with these two chuckleheads instead. Now, go take a shower and wash off all the guilt, but save some soap because you'll be back like an awful train wreck, you can't look away. And like Chad's favorite Western, you can't quit them either. We out.
