
Bias: AI vs Human - Mobley v. Workday

  • Chad Sowash
  • Aug 5, 2025
  • 22 min read

Joel, Jeff, and Chad recording the Bias: AI vs. Humans episode

AI is less biased than humans—shocker, right? Turns out the robots might be better at hiring than Chad after three Old Fashioneds. Seriously? Doubtful...


This week on The Chad & Cheese Podcast, Jeff Pole from Warden AI drops data that’ll make your head spin and your DEI team weep. We’re talking real-world audits, AI bias stats that beat out human stupidity by 45%, and yes… the Workday lawsuit that could make vendors soil their Terms of Service.


👉 Is Workday’s "AI" really just glorified RPA wrapped in marketing glitter?

👉 Are vendors about to get sued into the compliance stone age?

👉 Will HR finally care about bias beyond race and sex?


Spoiler alert: probably not. But hey, at least Jeff’s Scottish, so the truth sounds charming.


Tune in or get audited. 🎧


PODCAST TRANSCRIPTION


Joel Cheesman (00:29.742)

Yeah, you know what time it is. The Chad and Cheese Podcast is here. I'm your co-host, Joel Cheesman. Joined as always, Chad Sowash is in the house as we welcome Jeff Pole. He's the co-founder and CEO of Warden AI. Jeff, welcome to HR's most dangerous podcast.


Chad (00:47.708)

And he's not Latin at all. Let's just go. Let's just get that out there right now. He's not, nobody's calling him Poppy. Okay. Let's just.


Jeff Pole (00:53.902)

(laughs)


Joel Cheesman (01:00.046)

You may be right. You may be right. You may be right. But I love the name. I love the name. Jeff, welcome to HR's most dangerous podcast. For all listeners that don't know you, what should they know other than co-founder and CEO of Warden AI?


Jeff Pole (01:14.446)

Hi guys, good to be here. Well, I'm originally Scottish, as you can probably tell from my accent, based in London, but moving to Austin, Texas. I'm originally from Scotland and I'm still Scottish, yeah, very much so, despite moving to Austin. Absolutely not, never. It only gets stronger over time. The further you get away from Scotland, the more Scottish you become, actually.


Joel Cheesman (01:21.762)

You're still Scottish, right? Once you are, you always are, you never leave it, right? Yeah. Okay. Good.


Chad (01:27.836)

It's like you're trying to leave the Scots behind, Jeff. That's what it sounds like. Okay. Okay. Okay.


Joel Cheesman (01:32.076)

No, no, no.


Joel Cheesman (01:41.688)

I have heard that. I've heard that.


Chad (01:43.194)

Yeah, so a little bit more about you. London? Who, Edinburgh? Yeah. They're a solid six, aren't they? Or something like that.


Jeff Pole (01:45.382)

Sure. Heart of Midlothian, actually. Edinburgh team called Heart of Midlothian. Yeah. Not a great team, I don't think, anymore. Yeah. I'm not up on football anymore, but not the best. But yeah, my background's in AI, even before it was cool, and building startups, particularly in regulated industries, which led me to...


Joel Cheesman (01:46.316)

Rangers or Celtic?


Chad (02:12.924)

Mm.


Jeff Pole (02:14.318)

to Warden AI and what we're doing here.


Chad (02:17.18)

So what brings us here today? We're actually talking about the state of AI bias in talent acquisition. I mean, we've been talking about this for years now. We've had Keith Sonderling, who at the time was an EEOC commissioner, on stage with us at compliance events. And this is the thing in everybody's head: what...


kind of bias is actually happening in talent acquisition due to AI versus what we've been used to with humans for so long, right? So you guys actually did this data-driven review of AI bias, compliance, and responsible AI practices in TA. Give us, let's start hitting some of the key findings. We'll go one by one.


Jeff Pole (03:07.598)

Sure, so we put together a pretty comprehensive report, lots of different angles to go down. We found that concern about bias is one of the top concerns from HR and TA leaders when they're considering adopting AI and evaluating vendors. It was second only to data security and data privacy. So a big, big issue, which is maybe not surprising given all the headlines and just the sensitive nature


Chad (03:11.782)

Mm-hmm.


Chad (03:20.38)

Mm.


Jeff Pole (03:35.734)

of AI and HR, but that was what we found from people's responses.


Chad (03:40.518)

So when we talk about, and in the actual survey, you did some AI scores versus actual humans. Let's go ahead and hit this square in the face because everybody believes that AI, and there's no question, AI could take human bias and it could scale it very fast, right? Because we don't scale well as human beings. So there is this fear that, hey, we're already biased and we're doing biased things.


but it's very little bias because only a dumb human like us can do that kind of stuff, right? You put it in AI's hands and it just, it erupts. It explodes with bias, right? So what did you find out in this survey?


Jeff Pole (04:24.621)

Right. And actually, the survey was just one part of it. We actually had real data on it. The core findings we wanted to share about the state of AI bias versus human bias were based on audits from our customers sharing their data, but also on what we could find from published audits online as well. So we had data on real-world AI systems in TA. And we also had human benchmarks of human bias from various academic studies.


Chad (04:28.804)

Okay. Ooh, I like real data. Yeah.


Huh?


Chad (04:51.462)

Gotcha. Okay.


Jeff Pole (04:52.686)

So we can go into more of that in detail if you wanna double click on it. What we found is that, yes, AI bias is real. The models can be shown to be biased, and real systems in 15% of cases had a bias against one or more protected groups that fell below commonly used fairness thresholds. So a real issue. However, when you flip that 15% around,


Joel Cheesman (05:10.552)

Mm-hmm.


Jeff Pole (05:17.858)

you have 85% of real-world AI systems in talent acquisition that met fairness thresholds for all the protected groups that were tested. So already, you know, probably less bias than you would think. And then we compared that to human benchmarks using the same metrics. Please jump in.


Chad (05:38.08)

So real quick, I want to be able to hit on this. Only 15% of AI systems failed to meet the actual fairness metrics or benchmarks. Is that what I'm getting? OK, OK. OK, gotcha. But then you did a head-to-head analysis. Human versus the AI. This is what we've been waiting for, Joel. This is what we've been waiting for. OK, go ahead, Jeff. Lay it on us.


Jeff Pole (05:48.642)

That's right, on one or more, one or more benchmarks.


Jeff Pole (06:02.67)

So when you look at the data and compare, on average, between human bias and AI bias in the same use cases, we find that AI performed more fairly, by up to 45%. The numbers would be up to 45% better, on average, if an AI was used than the benchmark we found for human bias in the same use cases.


Chad (06:06.49)

Mm-hmm.


Chad (06:29.155)

So.


Joel Cheesman (06:29.422)

But there's still significant, I guess, skepticism, cynicism about AI, but your numbers are saying the machines won. They're less biased. We should trust them more than humans. Is that what I'm hearing?


Chad (06:35.271)

yeah, of course.


Chad (06:42.608)

or at least in his sample set, right?


Jeff Pole (06:44.43)

Right, obviously we always caveat things with the basis of the data set, and we go into great detail on that in the report. But yes, that's what we've found based on this particular metric. We're essentially looking at disparate impact, at impact ratios, which is a way that you can evaluate human selection processes as well as AI selection processes.


Chad (06:50.94)

Mm.


Jeff Pole (07:05.806)

And then when we looked at data for humans based on academic studies, so there may be question marks about the validity of academic studies, but we looked at many and aggregated them into a blended benchmark. And then we looked at all the AI bias audits that we've done and could find online across all vendors. The average differences between them were up to 45%.
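
[Sidebar: for the curious, here's a rough sketch of the impact-ratio math Jeff is describing. Purely illustrative, not Warden AI's actual methodology; the group names and numbers are invented, and the 0.8 cutoff reflects the EEOC's familiar four-fifths rule of thumb.]

```python
# Disparate-impact sketch: compare each group's selection rate against the
# most-selected group's rate. Invented data; the 0.8 cutoff is the EEOC's
# common "four-fifths rule" of thumb, not Warden AI's methodology.

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio per group: selection rate / highest group's rate.

    `groups` maps a group name to (selected, total_applicants).
    A ratio below 0.8 is commonly flagged as potential adverse impact.
    """
    rates = {g: sel / total for g, (sel, total) in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (selected, total applicants) per group.
outcomes = {"group_a": (48, 100), "group_b": (35, 100)}

for group, ratio in impact_ratios(outcomes).items():
    flag = "FLAG: below 0.8" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

The same check works whether the "selector" is a recruiter or a model, which is why this metric lets you compare human and AI selection processes head to head, as Jeff describes.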


Joel Cheesman (07:25.934)

So this is really timely, Jeff. Workday's in a little bit of a pickle. They're in a case right now, Mobley versus Workday. Give us sort of a summary of that case. And I guess if you were advising Workday, how do you see the case unfolding? This is going to be a huge case in terms of precedent for cases on AI in hiring down the road, yes. Talk about your take on that case.


Jeff Pole (07:35.374)

I've heard of it.


Jeff Pole (07:53.27)

Yeah, that's a big topic. You know, I guess the key things are that, as you guys probably know, it's only at the allegation stage, right? So there isn't yet any evidence brought to the table about whether the AI in this case was biased or not. So I can't comment on that. But I think the biggest implication, as I see it, is the impact on liability that this will have, particularly for vendors. Before this, it's been,


because AI and vendors are a new thing to legislators, that employers are on the hook for any potential discrimination they might bring about. But what I think will happen, and should happen because of this, is that it'll be a bit like a car manufacturer, where if there's a systemic fault in the car that leads to whatever damage is done out there,


Chad (08:39.324)

Mm-hmm.


Jeff Pole (08:42.446)

You don't blame the drivers, right? There's a recall, there's a fix for the systemic fault. If there's not a systemic fault and certain drivers drive in a reckless way and cause damage, then they're liable. And that, I think, is a simple analogy, but it's the way that we should be thinking about AI systems, which are very powerful and can be used incorrectly, but also there's...


Chad (08:42.46)

Class action, yes.


Chad (09:02.748)

Mm-hmm.


Jeff Pole (09:05.782)

There's a need for the developers to be liable if they have done something systemically wrong, which I think is what we will see. We'll find out.


Chad (09:11.28)

Have you dug into this from a technical standpoint? Have you had a chance to at least take a look at it, or at least dig into the case to see whether it was even AI or not? Because to be quite frank, in a lot of cases there was already a ton of bias in just basic filtering, which could have been RPA or even something less technical. Have you been able to dig into this to see if it was really an AI scenario, or was it just


a shitty process by not just Workday, but also the companies using the technology.


Jeff Pole (09:47.298)

Right. So obviously, Workday is not a client of ours, so I can't comment from that perspective. Yeah. And I've heard mixed rumors as to which parts of Workday may or may not have been at play here. Some people are saying that it wasn't even an AI system, and that was one of the defenses they gave to try and dismiss the case. But then, you know, quite interestingly, we found


Chad (09:51.132)

Not yet, knock on wood.


Jeff Pole (10:11.906)

that the judge said, well, you've got AI all over your marketing material. So it's at least potentially AI, which I think is also a good takeaway from this: you know, make sure you call it what it is. Although I also think the definition of AI is probably, rightfully, getting quite broad in a good way. It basically almost means any automation, which some people really rail against because it's not technically accurate, from the perspective of


Joel Cheesman (10:15.629)

Mm-hmm.


Chad (10:16.09)

Ha ha!


Chad (10:19.804)

What you get?


Chad (10:28.571)

Mm-hmm.


Jeff Pole (10:39.842)

the world and risks and so on. But what does it really matter if it's truly a clever AI or just an automated system? If the impact's the same, then it should be treated in the same way. So I wonder if that's not as significant a point as people might think it is in this particular case.


Joel Cheesman (10:43.203)

Mm-hmm.


Joel Cheesman (10:56.974)

One of the things that I find interesting is they're suing Workday and not the companies using Workday. So they made the connection that it's the technology, not necessarily the employers. And we saw a trend early on with the likes of HireVue and their terms of service that was basically trying to shield them from any wrongdoing, right? Like, we're just a technology. You choose to use us. This is on you, not on us. What are you seeing? 'Cause I know you talk to a lot of vendors.


Are they really nervous about this Workday case because it puts them on the hook? Does it create a heightened level of sort of activity and interest in what you do? Or is it sort of whistling past the graveyard? They think this is nothing, nothing to fear. What's your take?


Jeff Pole (11:41.826)

I think that most responsible vendors do overall take this issue seriously. And one of the findings in our report, looking beyond just the AI bias stats into responsible AI practices and governance, is that there is investment there and promising progress. I personally think


they still obviously try, as you mentioned, to put as much liability as possible on the end employer. And that's essentially one of the things Workday has tried to do in this case, and the court essentially said that might not be the case. And I think people underestimate, vendors underestimate, how significant this could be


for them, and how much they can't just say, well, it's down to user error, kind of thing. There is material responsibility on them to get things right and demonstrate it.


Joel Cheesman (12:29.816)

So if Workday loses, there could be a lot of hammers coming down on a lot of vendors. Is that what you're saying?


Jeff Pole (12:36.78)

I think so. I think the hammers take different forms, right? They take the form of customers asking more difficult questions, which then hurts adoption and sales. And they also take the form of, is this going to be the only AI bias lawsuit we ever see against a vendor, or even an employer? Probably not, right? Like, one of the other stats from our study: we looked at all the discrimination claims in employment, basically,


Joel Cheesman (12:37.621)

Yeah.


Joel Cheesman (12:55.758)

No.


Jeff Pole (13:05.208)

from the EEOC over the last five years, roughly 100,000 a year. And about 14 in the last five years have been somehow related to AI or automation, if I remember the numbers correctly. That's less than, sorry, more than 99.9% were not based on AI. So if you just imagine, if even just a small fraction of those human-related claims now apply to AI, we've got at least a thousand a year, if not more than that, right? So yeah, interesting to see what happens next.
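
[Sidebar: the back-of-envelope arithmetic behind Jeff's projection, using the figures as he recalls them on air; rounded numbers, not verified EEOC statistics.]

```python
# Jeff's on-air figures, rounded: ~100,000 EEOC discrimination claims a year,
# ~14 claims over the last five years tied to AI or automation.
annual_claims = 100_000
ai_related_last_5_years = 14

ai_share = ai_related_last_5_years / (annual_claims * 5)
print(f"AI-related share of claims so far: {ai_share:.4%}")  # ~0.0028%

# The hypothetical: if even 1% of human-driven claims start implicating AI...
projected = annual_claims * 0.01
print(f"Projected AI-related claims per year: {projected:,.0f}")  # 1,000
```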


Joel Cheesman (13:23.438)

Yeah. Pain.


Chad (13:23.867)

Yes.


Joel Cheesman (13:28.632)

Pain. Yeah. Yeah.


Chad (13:32.38)

Well, it seems first and foremost, and we talk about this, I mean, marketers put AI all over everything, which, you know, Workday is getting kicked in the nuts with now, because they were marketing that all this stuff was AI and now they're like, no, no, no, it wasn't AI. Well, you said it's AI. So there's the whole truth-in-advertising aspect of it. We'll put that over here for a second. But the thing that matters is the outcome of the solution. And it doesn't matter how you got there.


Joel Cheesman (13:45.784)

Mm-hmm.


Chad (14:02.298)

Right? Whether it's AI, whether it's just your shitty process methodology. And in this case, I can see where the government could come down on Workday, because if there are many companies that are using literally a standard process methodology that Workday puts in place, well, that...


is Workday, right? That's SOP, that's standard operating procedure. Right. So you could see where that happens. But we also need to go down the road of, there is a shared responsibility here. And when we say it's either the employer's fault or it's the vendor's fault, I call bullshit. This is a shared responsibility. The vendor needs to make sure that their shit is tight before they go out to market, without all this, you know, AI marketing bullshit. Either way,


they've got to make sure that their shit's tight. And then on the employer side, they have to do the appropriate due diligence to make sure that they can remain compliant. So there's a shared responsibility here. I think Workday, yes, should take a hit, because if there are a multitude of employers that have used their standard operating procedure and that got them in hot water, two things.


Workday, they should be nailed, no question, but so should every single one of those employers, because it was their job to perform the appropriate due diligence to make sure that it wasn't all fucked up. So from your standpoint, who are you mainly working with? Are you working directly with the vendors to make sure that their


go-to-market is where it should be? That it's tight, and it's not going to be something that's going to hurt employers? Are you working with employers as well, to be able to do that due diligence? Where do you guys sit?


Jeff Pole (15:58.882)

Yeah, so our approach works equally for both sides, right? At the end of the day, there's an AI system, as we call it, in a high-risk use case like talent acquisition, that has these legal risks, these discrimination risks, et cetera, and those should be audited, and we have a technical solution to help do that. As of today, most of our customer base is in talent acquisition, with vendors who have,


Chad (16:04.955)

Mm.


Jeff Pole (16:28.802)

I guess, like we're saying, the challenge and responsibility, and who are facing the concerns from their customer base to demonstrate that their shit's in order, ultimately. And so we help them. Without going into too much detail, we're essentially helping to evaluate and audit and certify a wide range of


scenarios that the AI system might be used in. But obviously we can't test every single potential candidate, you know, potential scenario, potential job in the world as part of that auditing process, right? There's a limit to the volume we can do for one vendor. And that's where our solution is to work with each company that deploys one of our customers, one of the vendors, on a real applicant pool,


on real scenarios, on real jobs, on, you know, what have you. That should also be audited. And that is something that we're starting to do. There's less demand for that in our world at the moment than with vendors, but it's very much our direction of travel.


Joel Cheesman (17:24.45)

Jeff, you mentioned budgets in one of your answers. And I'm curious, are budgets getting freed up for this stuff? You talk to both sides of the equation. I'm not hearing a lot of budgetary enhancements for AI and bias. Like, what are you seeing on the front lines?


Jeff Pole (17:47.246)

Yeah, well, we're an early-stage business, a year and a half old and growing well. So we're happy with our growth. I mean, when you look at the macro amount of capital deployed into AI compliance and AI bias compared to other issues out there, it is still tiny. Like, it's very small, and we think it's going to grow hugely. Obviously that's why we're here. But right now, yeah, it is.


It's a new budget, it's a new market, it's a new budget item, but it's not something that has had a historic budget. And let's be honest, who wants to spend money on compliance, right? Who's building cool software in AI and saying, I want to spend lots of money on compliance here? So it is something that's coming gradually.


Joel Cheesman (18:27.032)

So I'm hearing growing, but maybe not as quickly as you would like. And certainly as a businessman, I can appreciate that, which leads me to the political aspects of your business. When Biden was in office, we saw state regulation on a regular basis in California, New York, Illinois. A new administration: Trump signed the executive order "Restoring Equality of Opportunity and Meritocracy"


Chad (18:33.858)

It should.


Joel Cheesman (18:56.931)

in the first few months of his administration. How does politics play into what you do? And how do companies look at politics in terms of are we going to make a decision to do this or not?


Jeff Pole (19:11.182)

It's a great question, and tangibly, we've not... we've only seen a steady uptick over the last year or so. So it's not, in any discernible way from our point of view, changed since the administration came in. And I think there's a couple of factors. One is that a big part of what we do really looks at human laws, right? Like, in the case of discrimination, we've got plenty of civil rights regulations about this.


While it's important to add in the new AI regulations, Colorado, New York City, California, et cetera, and the EU AI Act, there's still plenty to go on even without those. And what's really interesting about the diversity question is,


it still relates to discrimination, right? Without getting too into it, and we don't get into politics here, but that's still saying that there shouldn't be positive discrimination. Well, that's a form of discrimination. So you need to measure your level of discrimination, or not, in your process. And in an AI system, it's actually quite straightforward to do that kind of measurement and demonstrate it either way.


And so yeah, what we do doesn't really hinge on those political decisions. We're not going to get involved in what fairness should be, whether it should be positive discrimination or not. We're just saying, here's how this AI technically behaves and here's what it looks like in terms of AI bias.


Chad (20:30.246)

So as you dig in deeper into the survey and the results and all the data and audits that you performed, what really stuck out to you other than the AI performing better, way better than the humans? What else stuck out to you?


Jeff Pole (20:49.006)

So that was the biggest finding. Although there's a caveat that was quite striking, which is that 95% of the audits that we looked at were only actually looking at sex and race as protected characteristics.


Chad (21:04.24)

Bah. Mm.


Jeff Pole (21:05.998)

So that's out there. One of the reasons for that is the 5% that did more were, I think, mostly our audits, and a few others, or some of our customers, who take on more than that. We can do up to 10 different protected characteristics at the moment. But most people are not doing that. They're doing the minimum, which the New York City law kind of requires: sex and race to be looked at.


So they're missing quite a big part of discrimination. A good example of that is disability, sorry, age and disability, two of the others that are in there, and age in particular. That wasn't even covered by the majority of the audits we were able to access. So there could be much bigger gaps in there. And we found that quite striking. The fact that people are like, this New York City law says you must audit AI systems for sex and race? OK, we'll do that if we absolutely have to.


Chad (21:46.832)

Yeah.


Jeff Pole (21:55.628)

But then there are, I think it's six others under the civil rights law, and other jurisdictions have others. And people are like, meh, not sure we're worried about that. Even though it's a 50-year-old law. But it is changing, and people are coming around to it, and we're getting a lot of appetite now for those.


Chad (22:14.524)

So I seem to remember earlier this year, it might have been a survey from late last year that was published for 2025, one of those lists, where 80% of CEOs said that they are looking to institute some form of AI into their workflows, into their systems and whatnot. But yet, what I'm hearing from you is that they're not pushing budget that way


to ensure that these systems are actually working the way they should. So give me a little bit of background around that. What do you think is gonna have to happen? Is Workday gonna have to get smacked really hard, possibly with a class action suit, and then everybody starts running for, you know, Katy bar the door? I mean, get ready, Jeff, bar the door. I mean, Jesus, is that what it's going to take? Yeah, my brother's gonna shit.


Joel Cheesman (22:56.641)

Yep.


Joel Cheesman (23:05.602)

My brother's gonna shit. If he's gonna shit, then he's gonna kill us. Sorry.


Chad (23:10.108)

Is that what it's going to take? Is it going to take one of those big companies getting slapped really hard before people start taking this seriously?


Joel Cheesman (23:22.764)

And is the opposite true? If Workday doesn't get pinched, does everyone go, sweet, we're good. Workday won, we're going to win too. Yeah. Yeah, true that.


Chad (23:28.176)

Yeah, until the next administration comes in.


Jeff Pole (23:33.71)

Yeah, good question. I think there's a couple of different... like, overall, this is a bit about adoption of an unsexy thing, right? Adoption of compliance. And even if you've got AI thrown in there to make it sound a little bit more cool, it's still compliance. And that's always a slow thing that happens. Regulations come around after risks emerge, after technology is doing exciting things.


Chad (23:57.925)

Mm-hmm.


Jeff Pole (23:58.19)

And then once regulations are in place, actual material compliance with the regulations lags behind the regulations being put into place. So enforcement, and just people becoming aware of it, the budgets coming, and so on. So there's just that trajectory, modest but steady, that's new and is going to continue for a long time. The analogy there is, if you look at how, and maybe this isn't a world you're close to, but in software,


Chad (24:04.284)

Enforcement, yeah.


Jeff Pole (24:25.674)

how intense the scrutiny is on data security and data privacy, even for, you know, tiny systems with barely any data or any real risk to the data. IT teams digging into loads of stuff on that, loads of money spent around the world on data security and privacy, which is a real risk, but even then slightly modest, I would argue. But it took a long time to get there, and software and the internet have been around for over 25 years. It's a similar trend.


Then the other, I think more exciting, vector of growth here is litigation. So, you know, we're not originally from the US, but when I said to investors, what makes us big? Is it the AI regulation? Not really. It's actually US litigation. US litigation is what is going to make this a much bigger problem. And we're seeing the first of that with the Workday lawsuit, but I think that's the first of, you know, if not a landslide, then


at least a common issue in the world, where there'll be a regular stream of litigation that involves AI in some capacity.


Joel Cheesman (25:31.97)

Jeff, you talked about race and sex being sort of the highlight, or where people are really focused. We seem to be falling down on disabilities, on age, on religion, like some of the other things that you talked about. Why is that? Is it just that there's less money at risk? Like, why are we falling down on so many angles around bias?


Jeff Pole (25:54.754)

I think it's to do with the enforcement and the regulations. So people, I think, are doing the minimum, which is often the way with compliance. And yeah, the minimum is that New York City law, which is actually currently the only law in effect in the HR and TA space, this one law in one city, not even the state, right, of New York City, about bias auditing, and it prescribes auditing to be done on sex and race


Joel Cheesman (26:10.552)

Yeah.


Jeff Pole (26:21.838)

and nothing else. So that's the main driver. Why they've chosen that is partly because, you know, equal employment opportunity surveys tend to collect that over any other data point. Sometimes you may have a couple of others. So that's the data that is more readily available. And I suppose, I'm not an expert on EEOC stuff per se, it's because those are two of the most, arguably the most important protected characteristics. Not that we need to get into a ranking game on that.


Joel Cheesman (26:34.542)

Got it.


Joel Cheesman (26:50.906)

So it's the government's fault, basically. The rules they have say sex and race are important and the others not so much. So companies are going to focus on that.


Jeff Pole (26:51.521)

yeah.


Chad (26:58.554)

Well, I mean, take a look at history. I mean, that was a good place to start, know, sex and race. was white dudes who pretty much had to lay the land, got to do whatever the hell they wanted. And it was like, no, it's time to change. So yeah, then individuals with disabilities came later, then veterans came later. And then it was kind of like just a stepping stone.


Jeff Pole (26:58.786)

They push that more. Yeah, that's true. Yeah.


Joel Cheesman (27:03.671)

Yeah.


Jeff Pole (27:19.864)

Right. Right.


Joel Cheesman (27:20.75)

But with all the old people in government, you'd think age would be a bigger issue with them. What's... yeah, I guess so. I guess so. Screw everyone else my age that doesn't have what they want. Jeff, I want you to look into your crystal ball. You know, we talked to FairNow years ago, you guys, not to speak for Chad, but I'm surprised there aren't half a dozen to a dozen services like this. I don't know if it's because the expertise isn't there


Jeff Pole (27:23.534)

(laughs)


Chad (27:24.7)

Yeah, but they've got everything that they want. So they're good. Yeah.


Jeff Pole (27:27.79)

They're fine.


Jeff Pole (27:32.75)

Yeah.


Joel Cheesman (27:48.216)

or the demand isn't there. I think the Workday case is going to be a huge driver, depending on how that goes, for businesses like yours. But get the crystal ball out. What does bias and AI look like in the future? Do we rely more on AI? Are we more scared of it? Like, what do you think the future looks like?


Chad (27:54.566)

Yeah.


Jeff Pole (28:07.278)

And before I answer that question, I think the reason for that is because compliance is so boring, right? It's like, no cool entrepreneur is waking up like, I'm going to start a compliance business. It's just boring people, Scottish people like me, who love boring industries like compliance and come from that background.


Chad (28:25.786)

I don't know any boring Scots, just so that you know.


Jeff Pole (28:33.94)

Anyway, to answer your actual question, I think it will be a big issue, not dissimilar to data privacy and security, as per my previous analogy. And I think, though, there's a great opportunity, and that's what we found the early signs of in the report, a great opportunity for AI to be better. So it's not just about faster, more efficient. It's also potentially, if you get this right as an industry, as a society,


Chad (28:52.027)

Mm-hmm.


Jeff Pole (29:00.992)

about improving on outcomes like fairness. And I do genuinely believe that if we crack this, it will actually be the next step change for fairness and equality in the world, because we can monitor it. And it goes beyond HR and TA: generally, with AI, tools like ours can constantly monitor all these AI systems that will increasingly do what humans do,


in a way that we don't really monitor humans, right? We don't want to be Big Brother, monitoring every HR person's day-to-day inputs and outputs, but we can do that with AI, and we can make sure it's actually fair, make sure it's actually compliant, and so on. So I do, perhaps overly optimistically, see a world in which we get through this and it's actually a better outcome for everyone, but it'll take some time to get there.


Chad (29:48.636)

So Jeff, back in the day when you were in diapers, we had this thing called VEVRAA and Section 503, and then the Bush administration put a big push on enforcement. And then it became a much larger cottage industry. And I'm going to make a prediction, here it comes, that within the next, at least, year to 18 months,


there's going to be a revving up of the engines, not just from an enforcement standpoint, but just from an awareness standpoint. So get ready. Like I said, Jeff, bar the door. But in the meantime, if kids want to find out where to hook up with you, to connect with you, and/or get this wonderful advice, and also the survey and data, and find out about audits,


Jeff Pole (30:36.907)

(laughs)


Chad (30:45.456)

Where would you send them?


Jeff Pole (30:47.283)

So they can go to warden-ai.com to learn more and download the report for free. You don't need to sign up or anything. It's open to everyone. I'm personally on LinkedIn, and if anyone wants to email me, they can do so at jeff at warden-ai.com.


Chad (30:50.331)

Okay.


Joel Cheesman (31:03.63)

Chad, that's Jeff Pole. I think you're saying you're predicting a big push by the Bush with your last comments, Chad. I'm not sure. Well, maybe we'll dig into that a little bit later. And old people, and old people. Chad, that's another one in the can. We out.


Chad (31:11.845)

You said... we were talking about race and sex, okay? That's all I have to say.


We out.


