
A.I. Hiring Tools: The Wild West?

The Wild Wild West wasn't just a hit song from Kool Moe Dee in the '80s; it's also how workers' rights expert Matt Scherer describes the current state of A.I. in the world of work. With lawsuits, state-by-state regulations and minefields as far as the eye can see, this interview is a must for employers looking to buy A.I. recruiting tools, vendors building A.I. tools, and everyone in between. Are we any closer to taming the Wild West?

Gotta listen to find out.


Matt (1s):

The bad press that you occasionally start to see for these vendors is only going to get worse.

INTRO (9s):

Hide your kids! Lock the doors! You're listening to HR's most dangerous podcast. Chad Sowash and Joel Cheeseman are here to punch the recruiting industry right where it hurts! Complete with breaking news, brash opinion and loads of snark, buckle up boys and girls, it's time for the Chad and Cheese podcast.

Joel (31s):

Oh yeah, what's up everybody. It's your favorite guilty pleasure, the Chad and Cheese podcast. I'm your co-host Joel Cheeseman, joined as always by the Robin to my Batman, Chad Sowash, and today we are, man, super excited to welcome Matt Scherer, Senior Policy Counsel for Workers' Rights and Technology Policy at the Center for Democracy and Technology.

Joel (55s):

Oh, that's a mouthful. Another example of someone way smarter than us on the show. Matt has a JD from Georgetown. Matt, welcome to the podcast.

Matt (1m 6s):

Glad to be here. We'll see about the smarter part.

Joel (1m 10s):

You're still here on the show, so I'm not quite sure you are that smart, but let our listeners get to know a little bit about Matt. What makes you tick, man? Give us a Twitter bio.

Matt (1m 20s):

As my lengthy job title kind of suggests, I work at the intersection of the workplace and technology. I started off as an employment lawyer who kind of did AI-related stuff as a side hustle. I found a way to combine the two by doing a lot of AI-related legal work on employment at Littler Mendelson, which is, as I'm sure you and many of your listeners know, the world's largest labor and employment law firm, but they're exclusively management side. Yeah. And then I decided to make the jump over to civil society and take up the mantle of workers' rights on these issues.

Matt (2m 4s):

And that was about a year and a half ago, when I joined CDT and left the billable hour and the big law firm life behind. If you've seen Michael Clayton, that is not what my life is like, but I do enjoy civil society and my work at CDT much more.

Chad (2m 27s):

Well, that's good. I'm sure the paycheck's a little bit different, but hey, you know, we all do things for different reasons. And for that reason, I would like to know: why get into the swamp of hiring technology? I mean, it is really a swamp out there. We talk about politics being a swamp, but this is entirely different.

Matt (2m 47s):

I would call it more the wild west than a swamp. That's how it's always felt to me. And that's kind of the case with a lot of stuff whenever you're talking about artificial intelligence and automated systems. The way I always describe it, our entire legal system is kind of premised on the idea that humans are the ones who make important decisions. And in many ways, if you read enough of our laws, you start to understand just how deeply that's baked in. One of my favorite examples is even just with driverless cars. If you read the highway safety transportation regulations, they have all these things like, oh yeah.

Matt (3m 31s):

Well, when you build a car, you have to have a foot-operated brake. And, you know, the idea that there could be something without a foot operating a car just didn't occur to anybody at the time that these regulations were made.

Matt (3m 47s):

So it's no different than that in the world of HR. If anything, it's even more kind of wide open and unclear how automated systems fit into the existing laws that we have.

Chad (3m 59s):

So it's the wild west, really?

Matt (4m 3s):

Exactly. That's my preferred metaphor: it's the wild west, and that kind of vacuum I just find fascinating and troubling.

Joel (4m 14s):

For how long will it be the wild west? Are we talking decades? Are we talking years?

Matt (4m 19s):

Your guess is as good as mine. I think that we're going to see some state-level regulations starting to happen in the next few years. Whether or not there are federal regulations, whether in the form of new EEOC guidelines or a federal statute, that's really tough to predict. And certainly federal legislation I don't expect to be happening anytime soon, which means that we're probably going to have a kind of balkanized system of how these hiring tools and other uses of AI in the workplace are treated on a state-by-state basis.

Chad (4m 55s):

Gotcha. Gotcha. So back to that point, back in November, you penned an article entitled NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools. How did that start out? Because it sounds like it was promising at one time and then it wasn't, and it sounds like we're going to see more of this state by state, somewhat promising, not so promising regulation actually happening.

Matt (5m 25s):

I think that's right. And first, let me just quickly correct the record: it was co-authored by me and my colleague Ridhi Shetty. Ridhi and I are always kind of each other's sidekicks on these sorts of projects at CDT. But that piece we put together in about 48 hours after the New York City bill got passed, and for both of us, it was a lot of rage writing. And the reason is that the bill did start off, certainly not as perfect, and that's why I used the word promising, but, you know, there were definitely good aspects to it. The big thing is, as originally written, just when you come down to the substance of it, it would have required companies to examine the potential for all forms of discrimination before they released a hiring tool. If they did not do so, there were actually some teeth in the bill. Not the strongest teeth, kind of more like heavily worn 85-year-old dentures, maybe, than sharp incisors, but there were at least some teeth to the bill, some consequences for companies that didn't do those sorts of in-depth examinations for discrimination.

Matt (6m 42s):

The bill that got ultimately rammed through, and it really was rammed through and we can talk about that in a second, stripped away the 'you have to check for all forms of discrimination' requirement. And they did it in a kind of coy, subtle way; if you read the bill, you can't even tell what's going on. But what they ended up doing is they said, all you have to do is check for adverse impact on race, sex, and national origin. And here's the thing: you already have to do that under federal law, and in fact, you have to do that and other stuff under federal law, right? So it ended up actually setting the bar lower than what already existed under federal law, in terms of employers' obligations.

Matt (7m 27s):

And then it took away any teeth or consequences, really, that would ensue if you failed to do that. It was promising initially, and it ended up being, frankly, a joke that introduced no meaningful new obligations on employers.
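For context on the adverse-impact check Matt references, federal practice uses the Uniform Guidelines' "four-fifths rule" of thumb: compare each group's selection rate to the most-selected group's rate, and a ratio below 0.8 flags potential adverse impact. A minimal sketch with invented counts (not from any real audit):

```python
# Sketch of the four-fifths (80%) rule-of-thumb adverse-impact check.
# All applicant/selected counts below are made up for illustration.

groups = {
    # group: (applicants, selected)
    "group_a": (200, 60),   # 30% selection rate
    "group_b": (150, 30),   # 20% selection rate
}

# Selection rate per group, then each group's ratio to the highest rate.
rates = {g: sel / apps for g, (apps, sel) in groups.items()}
highest = max(rates.values())
impact_ratios = {g: rate / highest for g, rate in rates.items()}

# Ratios under 0.8 are flagged as potential adverse impact.
flagged = [g for g, ratio in impact_ratios.items() if ratio < 0.8]

for g in groups:
    print(f"{g}: rate={rates[g]:.0%}, impact ratio={impact_ratios[g]:.2f}")
print("Potential adverse impact:", flagged)
```

Here group_b's ratio is 0.20 / 0.30 ≈ 0.67, so it gets flagged, which is the kind of race/sex/national-origin check employers already owe under federal law even without the New York City bill.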

Chad (7m 39s):

So how did that happen? I mean, how did it get derailed, and how were some of those aspects dropped so that it was pretty much teeth-free?

Matt (7m 50s):

We are not the NSA; we did not have bugs planted on the inside of the New York City council buildings. Sometimes I wish we did, but we did not in this instance, so we don't know for sure. But it is no secret that the bill was originally drafted and backed by Pymetrics, which is one of the big vendors in this space.

Matt (8m 12s):

It was largely a vendor-backed proposition. And there has been a trend nationally for vendors to back these bills, but unlike a lot of industry-backed bills in other spaces, they often go to progressive legislators and bill themselves as: hey, we are the pro-civil-rights, progressive folks in this space, and what we are asking you to do is help us regulate ourselves. We're the good guys, help us be the good guys. So even though I don't know what happened behind the scenes, I think that it is safe to say that the vendors who had their hand in at the beginning also had their hand in the watering down of the bill before it got to final passage.

Chad (9m 7s):

I mean, if they can write it, then they know what they want the outcome of it to be. Right?

Joel (9m 13s):

Sounds like the fox in the henhouse.

Matt (9m 15s):

Yeah. That's exactly right. And also the way that it happened: the revised version of the bill was released publicly less than 24 hours before it was put before a committee vote. The revisions were never given a public hearing, and within 24 hours of their publication, the bill had gone through a committee hearing, committee vote, and passage by the full council. All in 24 hours, no opportunity for public input.

Matt (9m 45s):

That sort of coordinated, you could almost call it ambush, doesn't happen without some significant behind-the-scenes coordination. So even though I don't know exactly how it happened, it's safe to say that this was done in a way to ensure that a more vendor-friendly version of the bill was passed without any meaningful opportunity for pushback from groups that would have concerns about the changes.

Joel (10m 12s):

So Pymetrics is not Google. They're not, you know, Meta. I wouldn't put them in the category of having an army of lobbyists that can go to the government and make these sorts of efforts. So what, from your perspective, is it? I mean, is it like a secret war chest that we don't know about, or what?

Matt (10m 30s):

No, no. The way that I always describe it is, everybody's the hero in their own story. And I have no doubt that the vendors in this space, in fact, having had conversations with them off the record, genuinely view themselves as the good guys. They think that they have found a fairer, less discriminatory way to do things. They think that if you take the humans out of the hiring process and give it to their machines, you end up with a fairer, less discriminatory and more effective hiring process. I think that that's a load of something, but that is their view.

Matt (11m 13s):

And what I think happened is that message of 'hey, we live in a discriminatory system', well, that initial premise is something that's going to get a lot of progressive heads nodding, because it's true: we do live in a system where hiring is rife with discrimination, both overt and covert, systemic at every level. The idea that there is a better way to do things is inherently appealing. And the thought that it could actually get worse doesn't often cross people's minds when they see the many flaws that exist in the current system. I just think that they are successful in what I would frankly call co-opting progressive legislators, because they tell those legislators what they want to hear, which is that there's a way to help reduce the impacts of systemic discrimination and these long-existing patterns of bias.

Matt (12m 2s):

And that's just not the case in reality.

Chad (12m 5s):

Yeah. I mean, we've seen vendors write regulations for years and hand them over to, you know, their progressive counterparts or their conservative counterparts.

Joel (12m 15s):

In my research on AI audits, there are no standards. You mentioned the wild west; there really is no sort of 'here's the baseline for what AI is and what it should and shouldn't be.' Is that correct? Like, what are your takes on AI audits? What's wrong with them? I know Pymetrics, HireVue and Humanly have been sort of highlighted in articles around the audit system and those companies being audited, but there are no standards for what makes good AI.

Matt (12m 44s):

Right. And I would not call what any of those vendors have done, on their best day, an audit. An audit implies some sort of unbiased review of information, driven by an independent source. When in reality, in all of the audits that these companies have conducted, they gave highly curated and limited information and a limited scope to these auditors to do the review, and just like the New York City bill, what happened was a joke. None of them were audits. Which goes to your point that there are no standards for what it means to have an audit in this space, for AI tools.

Matt (13m 24s):

There are regulations, as I'm sure you and many of your listeners know, when it comes to selection procedures more generally; they're called the Uniform Guidelines on Employee Selection Procedures. I don't want to say that those are a dead letter, but other than by the OFCCP, they are not often used as the basis for strong enforcement action. And even with the OFCCP, their resources are stretched so thin that even federal contractors, which is who the OFCCP governs, don't often get pinned down on the specifics of the sorts of auditing processes that are supposed to happen before they introduce a tool.

Matt (14m 11s):

And when it comes to the unique characteristics of AI, that actually makes the Uniform Guidelines inadequate to really deal with some of the types of discrimination and the types of harm that can arise. So the Uniform Guidelines are there, they're better than nothing, but they're not adequate to the task. And yeah, right now there are no standards that say: this is what a good audit of a modern hiring tool should look like.

Chad (14m 37s):

Gotcha. And again, if you don't have the enforcement, then you really don't have the teeth. But when we take a look at actual, let's just say, OFCCP audits, when they do go through the audits, and there are standards that they have now, wouldn't those standards still be the same no matter whether it was a machine or a human? Because it's all around outcomes and how you treated the actual candidate pool. So wouldn't they still be usable for AI, as they are today with humans?

Matt (15m 15s):

Yes. And again, in my view, if the Uniform Guidelines were interpreted and enforced consistently, that would be a good thing. It wouldn't be enough, but it would be enough, I think, to significantly limit the ability of these tools to be deployed as extensively as companies are deploying them at this point. And I also agree, we shouldn't necessarily hold these tools to a special standard that doesn't apply to more traditional types of hiring tests. There are plenty of problematic and concerning non-AI-based selection tools out there. Companies' frequent use of personality tests, that is deeply troubling.

Matt (15m 56s):

Yes. Some of my colleagues at CDT and other civil rights organizations have done plenty of work on this. You know, those very frequently discriminate against people with disabilities. There's lots of evidence that they discriminate against Black candidates, female candidates, et cetera. So just because it's not AI doesn't mean that it should get a free pass or be held to a lower standard necessarily.

Matt (16m 22s):

But the Uniform Guidelines were written 50 years ago now. You know, they went into force about 45 years ago, but they were actually written in the early 1970s.

Matt (16m 34s):

They were out of date sometime around the time I was born, in the Reagan administration. You know, so they are dramatically out of step with modern social science. And one of the ways in which they are, and this is the most technical I'll get in this podcast, is that they allow you to establish that a tool is basically okay to use by showing that there is a correlation between the tool's recommendations or scores and job performance. The problem is, it's incredibly easy to establish that sort of correlation when you're putting thousands of candidates through a tool, which is what you can do in the age of big data.

Matt (17m 15s):

That's not something you could do back in the seventies and eighties, when you were developing and testing these tools; it wasn't easy to establish that kind of correlation. Now it is. So you can have these tools that do, frankly, a pretty crappy job of matching candidates to job performance, but you can claim, 'well, the Uniform Guidelines say, as long as you can show that there's a correlation, that's enough, and technically there's a correlation.' It's a very weak correlation, but it's a correlation.

Matt (17m 48s):

That would no longer be possible under modern social science. But because the Uniform Guidelines were not updated to keep pace with modern social science, a lot of these tools, you know, I've read validation reports for these. To be clear, in case anybody from my old job is listening, the ones I'm talking about are not covered by attorney-client privilege, don't worry. But I've read validation reports, even since I've come to CDT, where some of these tools do, I kid you not, 2% better than randomly picking names out of a hat in terms of picking good candidates for a job.

Chad (18m 26s):

And that's on a validation study?

Matt (18m 29s):

And that's from their own validation studies.

Matt (18m 32s):

2% better than picking names out of a hat, but that's enough, technically under the uniform guidelines! Right?

Matt (18m 40s):

Because it's a statistically significant 2% better than names out of a hat. So that's kind of the reason that new guidelines and new audit standards are needed in this space.
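Matt's point generalizes: with enough candidates, even a trivially weak correlation clears the "statistically significant" bar. A hypothetical sketch with made-up data (not from any vendor's validation report), using a large-sample Fisher z test:

```python
import math
import random

# Hypothetical illustration: a tool whose scores barely track job performance
# still yields a "statistically significant" correlation once n is large.
random.seed(0)

n = 100_000          # candidates scored -- easy to reach in the big-data era
true_r = 0.05        # the tool explains only ~0.25% of performance variance

scores = [random.gauss(0.0, 1.0) for _ in range(n)]
# Performance is almost entirely noise, plus a tiny contribution from the score.
performance = [
    true_r * s + math.sqrt(1.0 - true_r ** 2) * random.gauss(0.0, 1.0)
    for s in scores
]

# Pearson correlation coefficient, computed by hand.
mean_s = sum(scores) / n
mean_p = sum(performance) / n
cov = sum((s - mean_s) * (p - mean_p) for s, p in zip(scores, performance))
var_s = sum((s - mean_s) ** 2 for s in scores)
var_p = sum((p - mean_p) ** 2 for p in performance)
r = cov / math.sqrt(var_s * var_p)

# Two-sided test of r = 0 via Fisher's z and the normal approximation.
z = math.atanh(r) * math.sqrt(n - 3)
p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

print(f"r = {r:.3f} (near-useless), p = {p_value:.2e} (highly 'significant')")
```

With these invented numbers the correlation hovers around 0.05, explaining well under 1% of the variance in performance, yet the p-value is vanishingly small, which is exactly the "technically there's a correlation" loophole being described.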

Chad (18m 54s):

Oh, hell yeah. Do you see vendors like Pymetrics getting in front of legislation or regulation, or what have you, by being a part of it or writing it? What about HireVue's AI explainability statement? They're trying to get in front of this by going public, putting it out there for everybody, and actually having a quote-unquote "explainability statement". Have you seen this thing?

Matt (19m 17s):

I have, and stay tuned on CDT's website. There'll be a blog post dropping about that at some point in the next few weeks.

Chad (19m 28s):

So this comes from the actual document, quote: "since HireVue's platform does not make recruitment decisions, if the candidate wishes to query the decision-making in the recruitment process, then the challenge needs to be made to the hiring company, which uses HireVue's platform and ultimately makes the final recruitment decision" end quote. So what they're saying is: look, we don't make the decisions, this has nothing to do with us. But yet they have the algorithms. And then in California, they're looking to put new regulations on the books that pretty much keep the vendor on the hook around training data. So again, we were just talking about New York, now we're talking about California. Where do you see this actually going?

Chad (20m 12s):

Because I feel like there are going to be vendors that are on the hook, and if California or New York does something like this, every vendor that wants to do business in those states is going to have to comply. Right?

Matt (20m 25s):

Right. Well, first off, I love that particular passage that you quoted from their explainability statement, which was a punt that would've made Ray Guy jealous. I mean, that sort of 'hey, it's not me, it's them' is what you see between vendors and employers in this space. The first thing that I think you should do when you write regulations is take away their ability to do that, by saying that they're jointly responsible for compliance. And sure, you can make a contract that says who actually has to pay at the end of the day, but you're both on the hook, and then you can sort out later between yourselves who gets to pay if you do something wrong. That's my take.

Matt (21m 6s):

Having had conversations with civil rights organizations in this space, that finger pointing, which we've seen a lot of already in Europe, that was the purpose of that passage that you read in the HireVue explainability statement. It was saying: under European law, we are not the data controller. I think that's the term, but I might be getting it wrong. But: we're not the people you should talk to, it's the employer.

Matt (21m 28s):

That sort of finger pointing we need to avoid here, if and when these regulations start getting passed in the United States, and the way to do that is to say: you're both responsible, you can't simply point the finger at the other. It doesn't do you any good; you're still on the hook. And then if you end up being liable down the road, who actually has to pay between the two of you is a matter for you to sort out between yourselves, but you're both on the hook for it. So I think that the regulations need to be aimed at both, so that both have an incentive to police each other and to ensure that the proper validation and discrimination checks are done.

Joel (22m 7s):

We have a lot of employers, as I'm sure you can imagine, that listen to our show. And in addition to our show and everything else that they're hearing, at every conference they go to, they're hearing: AI is great, AI is going to solve all your problems, AI is something you need to buy immediately. And then they also read about Amazon, you know, ditching their AI recruitment technology. They're hearing about these court cases. What tips would you give an employer that's looking to buy an AI solution, in terms of what to be careful of? Maybe what questions to ask a vendor to help protect yourself? Because I think there's a lot of fear out there, and fear often breeds inactivity, which isn't necessarily what a good employer should do.

Joel (22m 50s):

What tips would you give them to do it safely and to do it correctly and keep yourself out of court?

Matt (22m 58s):

For sure. The first thing is to have more faith in your HR team than the vendors want you to have. And I admit I'm biased: my mother was and is a career HR person, so I'm somewhat predisposed to not throwing HR people under the bus, needless to say. But genuinely, yes, human decision-making is flawed. That does not mean that we've come up with a better way to do things, though. And I'll get to the 'here's what you should do' as far as specific questions, but just to give one example of why automating the process is not necessarily a good idea.

Matt (23m 42s):

One thing that these vendors love to do is say: well, yes, whatever problems you might identify are problems, they might be there, but humans do it even worse. That is frequently the response you get when vendors try to sell these tools to companies. And one great example of that is the technology that underpins a lot of this AI: voice-to-text transcription, or speech-to-text, where the computer hears, as it were, what a candidate is saying during an interview, during some sort of game-based assessment, et cetera.

Matt (24m 23s):

And it transcribes what they're saying. And the error rates on those transcriptions are sometimes, you know, relatively high, and they're much higher in particular for candidates who have a disability that affects their speech, or who have an accent because they speak English as a second language, or who speak a non-standard English dialect.

Matt (24m 42s):

Well, these companies often say: well, yes, there's sometimes a high error rate, but so do humans. Here's the difference. If a human doesn't understand what a candidate just said, they can say, oh, I'm sorry, can you repeat that? You know, that's the difference between a human and a machine.

Matt (25m 0s):

Or that's one difference: a human has the ability to slow down and go at things a different way and adjust on the fly, in a way that an automated, one-size-fits-all assessment, which is what these vendors invariably are selling, does not. And that is a dangerous thing to introduce into an HR process, that kind of rigid, one-size-fits-all, no-second-chance approach. If you want an accommodation, the accommodation is: you don't use this process and you do something completely different. And that's often what these vendors are selling.

Matt (25m 36s):

And if that sort of inflexible tool is what's being presented to a company, they should be very, very careful before proceeding, because that sort of rigidity is just not going to have the flexibility that's needed to fill a lot of positions; positions are not one-size-fits-all like that. They should ask the vendor what steps they've taken for candidates whose credentials are not obvious, or who have difficulty showing what they have to offer in the context of a resume screener, a game-based assessment, a video interview, whatever it is, whether because of a disability or some other aspect of their physical makeup or their background, whatever it is that might make the tool less familiar with what they have to offer.

Matt (26m 31s):

How's the tool going to pick up on that, and how will our recruiters know that they need to look more carefully or deeper? Those are the sorts of questions they should be asking first and foremost, and they should pay careful attention to what the response is and make sure it's not just, 'well, that's, you know, something that we don't do; that's your recruiters' job to pick up on.' Well, if that's their answer, then guess what: why do you need the tool in the first place?

Joel (26m 57s):

Yeah, in addition to employers, we have a lot of vendors that listen to our show, and it sounds like a lot of mistakes have been made on their end. And personally, I don't think that there are a lot of evildoers in the vendor space. I think it's a lot of people that, you know, have a quote-unquote "good idea". They build this thing, and then they don't see the sides of it that you do. If you could say something to the vendors out there that are looking to build something new or to do something cool around AI, what kind of things would you caution them about before they spend the time and money to make a new feature in their product?

Matt (27m 36s):

The first thing I'd do is ask them to proactively engage with groups that they don't want to engage with as they develop these tools: with civil rights organizations, and particularly, a major area of focus for my organization, the rights of disabled people and disabled workers. That is, frankly, an area where these vendors get one of the worst failing scores I can possibly imagine for them. They just do not design their tools with disability and the need for accommodation in mind.

Chad (28m 8s):

They like to think they do, but they do not. You are correct.

Matt (28m 14s):

Yeah. They do not. And frankly, the answers that they give about how they try to build robustness into it make disability advocates angry. It's things like: well, we had five different people with disabilities test it and they did okay. Or: we talked to the parents of disabled people and they liked our approach. Never mind that parents and parental organizations relating to people with disabilities are very much not the same as the disabled people themselves.

Matt (28m 44s):

Or: well, if there is an accommodation issue, that's ultimately the employer's responsibility, not ours. All of those are inadequate responses in my view. So that's the first thing. And then they should take a similar approach when it comes to racial justice and when it comes to gender disparities in these tools. Don't just try to paper over these issues; instead, engage with these civil rights organizations. And instead of having your own in-house review, what you want to pass off as an audit, where you provide carefully curated information, have it done by somebody who actually has an incentive to check, to make sure that issues that actually are present get resolved.

Matt (29m 30s):

If you're not willing to do that, then there's frankly going to continue to be an adversarial relationship between vendors and civil rights organizations. The bad press that you occasionally start to see for these vendors is only going to get worse.

Chad (29m 43s):

Okay, so I'm gonna hit the other side. The candidate black hole has been the bane of the application process for candidates, and the easy-apply process that has been created in that timeframe has more candidates applying quicker for jobs, thus dumping more candidates into this incredibly horrible black hole. Right? Where nobody gets back to them, because it's more of a human experience: they're using technology, but there's nothing that's actually automated around it. It's horrible for the brand, because all of these candidates could be customers too, but they could also be great candidates that companies just can't get back to, because they're human and they aren't as scalable.

Chad (30m 28s):

So today, tech is trying to actually fill that gap: to provide a much better candidate experience, to be able to get back to those individuals, and to be able to, in some cases, sift through them and see if they score well. My question for you is: where's the equilibrium? Because as technology starts to grow, and Moore's law is still rolling strong, we're going to have technology pushing forward and just jamming great candidates into a system where human beings can never get back to them. So how do we fix this? Where's the equilibrium?

Matt (31m 1s):

So the way that I always describe it, and this was the case even back when I was in private practice, is that the best way to use technology in this space is to increase the number of candidates who come under consideration for a job, and to increase the number of people who get serious consideration for it, rather than to contract it. It should be used to identify and lift up candidates who may otherwise be overlooked, not to screen out candidates who may otherwise have gotten serious consideration. That's the way that I always try to frame it.

Matt (31m 42s):

The problem is, and this is why the equilibrium that I suggest is one that does not come easily, that it cuts against the goals of the C-suite, where HR is often viewed as purely a cost center, or really a cost sink. It doesn't generate revenue. So every dollar that is spent on HR is viewed by the C-suite, as, again, many of your listeners I'm sure know, as a dollar that should be saved if possible. So the approach that I suggest, of expanding the number of candidates that are considered for positions, that's the opposite of what vendors try to sell. The vendors try to sell you on: we're going to cut costs.

Matt (32m 25s):

We're going to reduce the number of candidates that your recruiters have to review. The problem is, when you are doing that, you are increasing the risk of discrimination, and you are, frankly, from a business perspective, lowering the chances that you will find the candidates that are the best fit for your position. You are increasing the likelihood that you will find an at least adequate candidate quickly, but you may be overlooking the best candidates for a role. If you're okay with that, then maybe the use of an automated process makes sense on some level, but you have to understand, again, the costs associated with the risk of discrimination. Where the equilibrium is ultimately going to lie depends on where regulation in this space goes.

Matt (33m 9s):

I would say that we are headed for a world where there is eventually going to be regulation of these tools. We are not in an environment that is friendly to technology in the policy world right now, if you haven't noticed, so there will be regulation eventually. And this is a rare instance where, you know, conservatives are skeptical of big tech companies, and in the specific case of HR tools, civil rights groups are very, very skeptical of the technology. Civil rights groups are not convinced by the argument that human bias, as it exists now, is a sufficient reason not to regulate these tools.

Joel (33m 50s):

It's time to tame the wild wild west.

Matt (33m 54s):

I think that that's exactly right. And I think that eventually it will be, and the equilibrium that I hope we ultimately reach is the one that I just described, where technology is used to help us find diamonds in the rough, to help us find the people that were often overlooked in the past, and to expand the possibilities for people in the labor market, not to contract them.

Joel:

Matt Scherer, everybody. Matt, that was awesome, dude. For those listeners who want to know more about you or the organization, where would you send them?

Matt (34m 32s):

You can find me on our privacy and data team, and keep an eye out for a lot of future work from us in this space. And then the other major issue we work on is technology used for surveillance and privacy in the workplace.

Joel (34m 44s):

Yeah, let us know when that blog post goes up. We might have to have you back on the show for a little summary of that.

Chad (34m 53s):

Hell yeah.

Joel (34m 55s):

All right, Chad, another one in the books. We're all a little bit smarter. Thanks to Matt.

Chad and Cheese (35m 2s):

We out. We out.

OUTRO (35m 47s):

Thank you for listening to, what's it called? The podcast with Chad, the Cheese. Brilliant. They talk about recruiting. They talk about technology, but most of all, they talk about nothing. Just a lot of shout-outs to people you don't even know, and yet you're listening. It's incredible. And not one word about cheese. Not one: cheddar, blue, nacho, pepper jack, Swiss. So many cheeses and not one word. So weird. Anyhoo, be sure to subscribe today on iTunes, Spotify, Google Play, or wherever you listen to your podcasts; that way you won't miss an episode. And while you're at it, visit, just don't expect to find any recipes for grilled cheese. It's so weird. We out.
