
The Algorithm: How AI Decides Who Gets Hired


Artificial intelligence and the world of work are strange bedfellows, especially when it comes to recruiting. It's OK if you're a bit confused; trust us, you're not alone. We're pretty lost too, particularly when you start imagining where things could go as AI develops at lightning speed. That's why we invited Hilke Schellmann, NYU professor, Emmy Award-winning technology journalist, and author of The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now, to the podcast. Together, we cover a broad range of topics, from her take on HR Tech, to bias on steroids, to rampant sexism, to vocal biomarkers (say what?). It's a must-listen for navigating the ever-evolving minefield that is AI in recruiting.



Intro: Hide your kids, lock the doors. You're listening to HR's most dangerous podcast. Chad Sowash and Joel Cheesman are here to punch the recruiting industry right where it hurts. Complete with breaking news, brash opinion, and loads of snark. Buckle up, boys and girls. It's time for The Chad and Cheese Podcast.

 

[music]

 

Joel Cheesman: Ohhh, yeah. It's President Biden's favorite podcast, aka The Chad and Cheese Podcast. I'm your co-host, Joel Cheesman. Joined as always, the salt to my pepper, Chad Sowash is in the house. And we are just giddy...

 

S?: Push it.

 

Joel Cheesman: Giddy to welcome Hilke Schellmann, NYU professor and Emmy Award-winning technology journalist.

 

Chad Sowash: What?

 

Joel Cheesman: And author of the book, "How AI Decides Who Gets Hired, Monitored, Promoted, and Fired, and Why We Need to Fight Back Now". Hilke, was AI used in creating the title of that book? 'Cause it's a mouthful.

 

[laughter]

 

Joel Cheesman: It's a mouthful.

 

Hilke Schellmann: I know, I know, it was a mouthful. It wasn't created by AI, but we did try to use AI for the book cover. I ran it through all the AI image generators, and you know what, it's very sad. It's a lot of robots. It was a lot of blue and zeros and ones, and I was like, "I don't want that. This is about humans, humans." So we came up with orange and yellow and a face, sort of fractaling how humans are reflected by AI and seen by AI. That's what we wanted to convey. So I worked with a human on that.

 

Joel Cheesman: And we love humans on the show.

 

Hilke Schellmann: Ohh. [laughter]

 

Joel Cheesman: Before we get to all the AI, let our listeners know who Hilke is. What did I miss in the intro? What do you like to do in your personal life? Give us an insight into Hilke.

 

Hilke Schellmann: Yes, yes. I'm Hilke Schellmann. I have a funny name; I'm originally from Germany. It sounds very German, but it turns out the people of Germany have also never heard of my name. So...

 

Joel Cheesman: Ye gut?

 

[laughter]

 

Hilke Schellmann: Sehr gut.

 

Joel Cheesman: Sehr gut.

 

Chad Sowash: Sehr.

 

[laughter]

 

Joel Cheesman: What part of Germany?

 

Hilke Schellmann: Oh, you know what I'm actually from the northwest. It's a town called Bielefeld. It's between Hanover and Cologne. It's one of those... A fine town, mid-level size. I would not recommend you visit it.

 

Joel Cheesman: Good place to raise a family, I guess.

 

Hilke Schellmann: Totally. Totally.

 

Joel Cheesman: Yes.

 

Hilke Schellmann: But apparently, according to German myth, Bielefeld doesn't exist. But I'm living proof that it does exist. Although now I live in New York City. I'm a reporter here, and I'm also a professor at NYU; I teach students about being a reporter. And everything in my life has been driven by curiosity. So I got really curious about AI a few years ago. And I also always loved math and I guess missed that. I was like, "Whoa. There is a technology that supposedly quantifies humans. I wanna know more." And the origin story is, I took a Lyft ride in 2017, at a conference in Washington DC, which was with consumer lawyers. So nothing... It had nothing to do with AI. But I took a Lyft ride from the conference to Union Station to take a train back to New York, and I talked to the driver and asked him, "Hey, how was your day?" And he was like, "It was really weird." And I'm a reporter, so I'm like, "Oh yeah, tell me more. Why was it weird?"

 

[laughter]

 

Hilke Schellmann: And everyone else would be like, "Ohh." And I was like, "Oh yeah, tell me more. Why was it weird?" And so he was like, "I had a job interview with a robot." And I was like, "What? A job interview with a robot? Tell me more." It turns out he had applied for a baggage handler position at a local airport and he got some pre-recorded phone message on his phone that asked him three questions, and I had never heard of that. This was like six years ago. And I was like, "What?" So I made a note like "Robot interviews," and I forgot about it. Until I went to an AI conference, and there was a sparsely populated panel where somebody who had just left the Equal Employment Opportunity Commission was talking about algorithms going through people's calendars and finding how long people are absent. This was in early 2018. And she's like, "I'm worried that this will harm mothers and people with disabilities. There could be bias and discrimination here." And I was like, "Oh, I must look into this." And I talked to some people, went to my first SIOP conference, and I was just blown away by the technology, and have been fascinated by it ever since. And I've published about it a bunch. And then finally I was like, "I think somebody needs to write a book about this and look at the plethora of all these exploding tech tools and maybe dig a little deeper."

 

Chad Sowash: Well, AI's all over the place. It's not just hiring. So why did you pick this space? It's everywhere. Why did this fascinate you?

 

Hilke Schellmann: It's really hard to tell. At the time I was also investigating facial recognition and other technologies for the Wall Street Journal.

 

Chad Sowash: Ah.

 

Hilke Schellmann: But I really felt like, "Wait, no one is looking at AI in HR." And I went to HR Tech, and I know you've been there many times and, my head was hurting.

 

[laughter]

 

Hilke Schellmann: I was on the floor for a few hours and I was like, "Oh my God, this is incredible, all this technology." And you know what? I didn't see a lot of reporters and I was like, "Wait a second, it seems like HR is really changing with this AI technology and other automated tools coming into the industry, and we really aren't talking about it." So I was like, "I think we really should talk about this, and we really need to know how these tools work and what's happening." I got really fascinated by it. And I do think that reporters and the public spend a lot of time thinking about high-stakes decision-making: facial recognition, who gets to board an airplane, how long are you gonna be sentenced to go to prison? Those are all high-stakes decisions that we use algorithms for.

 

Hilke Schellmann: And I would also put hiring in that category, or using AI at work, because it matters if I get the job. I understand that people get rejected all the time. And we often don't get the job more often than we do get a job. But I'm nervous before I go to the job interview because it matters to me if I get the job or not. And it's high-stakes for me and I can provide food for my family and a roof over our head. And a lot of people have a lot of their identity tied to their jobs and they would really like to have a job that they love. So we gotta make sure if we use technology to sort people, and reject people and put them into the next round, that that technology works.

 

Joel Cheesman: Our heads hurt at HR Tech as well.

 

Hilke Schellmann: Yes. [laughter]

 

Joel Cheesman: But it's usually a morning-after hurt that our heads encountered...

 

Chad Sowash: Oh. That's usually your head that hurts.

 

Joel Cheesman: Yeah.

 

Chad Sowash: Yeah. Your whole body hurts. Anyway, so getting into the research, how did you perform the research? Did you work directly with vendors on this research? Give us a little background of the research first and foremost, and then start diving into the book.

 

Hilke Schellmann: I'm a reporter. My longtime home has been the Wall Street Journal. I did a couple of stories for them, so as a reporter I'm in a glorious position to call up vendors and ask them questions, and they show me their technology. And I got really curious and wanted to know, "Okay, what's under the hood? How does this work?" You can also sign up for free trials, so some of the software I tested myself. I like to test it and understand, "Okay, if I was a job applicant, what does this feel like? What does it feel like to play an AI game? What do I have to do? What does it feel like to do a one-way video interview?" And I think what's also interesting: I'm a professor, I teach undergrads and graduate students, and you know what, they all know HireVue. If I talk to anyone over 35, they give me a blank stare, like, "What is that?" All my students know all about it, because for the jobs that they often apply for, video interviews are super ubiquitous, and entry-level career jobs often now have AI screens. They know all about it. We have some great discussions.

 

Joel Cheesman: I assume it's second nature. They don't think it's weird or, "Why am I talking on a vid... " It's second...

 

Hilke Schellmann: Oh, they totally think it's weird. They totally think it's weird.

 

Joel Cheesman: Oh, they think it's weird?

 

Chad Sowash: Oh yeah?

 

Hilke Schellmann: Yeah.

 

[laughter]

 

Hilke Schellmann: They think it's like, "Why am I speaking to myself in this video presentation?" And I think also, a lot of them really care about jobs and they wanna be doing a really meaningful job in a meaningful company. And a lot of them feel... They don't actually get to talk to the companies, and they don't get to ask questions about the company, they don't get to see the culture of the company a little bit. If you go to an office, you get a glimpse and you get to ask questions. And most of them really do not like it. And some do love it. There're some people who really love to be in their own room and it's quiet and they can just riff off why they're good at things.

 

Chad Sowash: So it's a one-way conversation and they don't like it.

 

Joel Cheesman: That's great insight.

 

Hilke Schellmann: Yeah. It's like... And this is totally anecdotal, the slice of Hilke students. But I've also talked to a bunch of folks who run career centers at universities, and they heard similar sentiments from their students: most of them really don't like it, and some actively run away from it. And some love it. But I think especially students who have a disability are really worried that hiring managers will see this and they don't get to explain why they maybe look off camera or something like that, or they're worried that they have a speech impairment and the AI doesn't fully grasp what they are saying. And I think what's interesting is, with prior assessments, if I applied for, like, a fire department job, I knew how much weight I have to lug around from A to B, and I can train for that.

 

Hilke Schellmann: But now, with these assessment screens, or these AI screens, a lot of people... We just don't know how we are being screened. A lot of folks that I've talked to didn't even know that they were possibly screened by AI. They just thought they'd do a one-way video interview and some poor HR person has to watch the thousands of video interviews; they didn't know that maybe AI is even being used on them. We don't know how we're being scored. And I think that actually gives a lot of people anxiety. They're always asking me, "Well, what can I do?" And I'm like, "I don't necessarily know that either."

 

Chad Sowash: We didn't know how we were getting scored before, but a lot of times we would just go into a black hole. It just seems like there's bias inherent in the system no matter what. Now we're adding AI. Do you expect it to get worse than what it was?

 

Hilke Schellmann: I hope not.

 

[laughter]

 

Hilke Schellmann: But I'm hoping that we wouldn't replicate the human biases. I'm actually not advocating to go back to human hiring. I actually think that's very flawed as well. And you all know this, that we are happier when we have a full belly, yaddi yaddi yaddi. We all know that. We shouldn't go back to that. But my idea is like, wait, now that we are digitizing this and using AI to do a lot of the hiring, let's do this thoughtfully. Let's not replicate the biases of the past, build them into these new systems, and objectify them through the technology, bringing new machine bias into the system as well. Let's really think about it thoughtfully and talk about, "How should we do hiring? Should we really base hiring on past successful employees currently in the company? If we wanna diversify, is this a good idea? How can we make sure that we don't pick up what's special to these 50 people that we are looking at versus the real skills that we needed on the job?"

 

Hilke Schellmann: And I think we are not quite there yet. In my work, so I get to talk to the largest vendors, I go to HR Tech and I'm very curious how these things work. And I'm glad, I'm grateful to all the vendors who show me their technology, showcase it. I test it out, and often I get a trial run and start it. One thing I did, it was a company that I met at HR Tech when it was during the pandemic, when it was all online, and it's called Curious Thing. And they, at the time, were marketing their AI technology...

 

Joel Cheesman: Hilke, before we get in the weeds too much with every vendor at HR Tech, trust me, we don't wanna get too far in the weeds on that. Let's talk about the book. What was the inspiration for it? What was some of the research that went behind it? What are some things that shocked you as you were going down this journey? Let's get into the book a little bit.

 

Hilke Schellmann: Yeah. What was interesting to me, I felt like... I'm actually kind of a podcaster and a radio person and I did videos, but I actually felt like...

 

Joel Cheesman: We won't hold that against you.

 

[laughter]

 

Hilke Schellmann: But I actually felt like, "Wait a second, how am I gonna explain a resume screening tool in video? I actually need words for that." So I went down the rabbit hole and had everyone and their mother explain everything to me, how their technology works, all the different ones. I played AI games, I did video interviews, I tested my resume, I worked with people who have disabilities and also asked them to go through the screens. And then I did a lot of work on how we are being tracked at work. I'm really interested in things like, "Oh, how do flight risk calculations work? What can you monitor and what can you predict out of my work habits?" I tracked myself for a couple of weeks and figured out, "How productive am I?" Those kinds of things. I'm really interested in the idea that we can quantify human beings, and how good we are at that; those are the driving questions in all of this. I was blown away when I walked into one of my first SIOP panels in 2018, and somebody was showing me how, back in the day, facial emotion recognition, the intonation of our voices, and the words that we use were all calculated to figure out how successful you would be at the job. And I really was like, "Wow, this sounds like magic. Maybe we have found the key to hiring people better."

 

Joel Cheesman: Or it's total bullshit.

 

Hilke Schellmann: Well, it turns out when you look into the tools, it's a little bit more complicated. Facial emotion recognition, we moved away from that, because there isn't a whole lot of solid scientific underpinning for it. Once I started digging deeper, looked at things, and talked to folks who get to look into the black box a little bit, things came up that I think show you that maybe some of the tools do more harm than they actually do good. And there was a little bit of an a-ha and an awakening moment where I felt like, "Wait a second..."

 

Chad Sowash: Any examples?

 

Joel Cheesman: Yeah.

 

Hilke Schellmann: Yeah, totally. I talked to a couple of folks who get to come in when large companies test these tools; when they wanna bring in vendors, they have pilot phases. And I talked to a couple of folks who looked at online resume screeners. They look at the technical reports and the keywords, and one of them found out that one of the tools used the word "softball" to down-weight applicants and the word "baseball" to give them a little bit more weight in the tool.

 

Chad Sowash: Ouch.

 

[laughter]

 

Chad Sowash: Ouch.

 

Hilke Schellmann: And...

 

Joel Cheesman: What?

 

Hilke Schellmann: That is probably gender discrimination, right? 'Cause the people that put "softball" on their resume in the United States are more often than not female, and males maybe put "baseball" on their resume. And this job had nothing to do with baseball or softball or whatever; it was just a statistical prediction. So they found that and told their clients, "I really wouldn't use that tool, because you have a gender discrimination lawsuit waiting for you." That's the kind of problem that I kept discovering and discovering. There was another online resume screener that predicted on first names, like Thomas. Same thing: it picked up hobbies...

 

[overlapping conversation]

 

Chad Sowash: What?

 

Hilke Schellmann: Hobbies, basketball. Apparently, if you had the word "Syria" and/or "Canada" on your resume, that was a predictor of success. And these are just...

 

Joel Cheesman: Well, I agree with that.

 

[laughter]

 

Joel Cheesman: Canada is a definite... Whether success or not. Are you finding that sexism or racism is a bigger issue with AI?

 

Hilke Schellmann: It's hard to tell. Probably, maybe a little bit more sexism, because I think it's easier to check: when you do the four-fifths rule, you check men versus women. And so it comes up pretty easily; it's pretty easy to gauge if there's something wrong. And that, I heard over and over again. There's another lawyer who looked into an AI game vendor. He was like, "We tried to do it in the pilot phase, slice and dice it every different way." It was always discriminating against women. And what's striking to me is that we as the public don't know about this. I don't know how many HR managers know about this. And actually, I know there are some who do; obviously people talk, and somebody who's part of a pilot phase talks to the next person and shares this, but we don't have this public knowledge. I think what happens a lot is that we don't push vendors and people who build this technology to do better, because we don't publicly test; we have very little transparency around this.

 

Hilke Schellmann: So I do think, if a tool works, "Show me the tool, great, and let's build on that; that would be great." But more often than not, what I found when I did the tests myself or talked to other people, there were really striking things built into it that did not make it as fair as we wanted it to be. And I think another thing that people don't look at often enough is intersectional fairness: if you are part of two groups, if you're maybe an African American woman. That came up a couple of times as well; vendors and companies don't always look for that, because they feel like, "Well, the EEOC is not really mandating that." And I think it's debatable if you should do that or not. And for a couple of companies that did check, it did not turn out in their favor. I think we have a lot of work to do, because we know from other examples that this is a problem. And as we've seen again and again with AI, these problems keep replicating, and if we don't take a closer look and monitor these systems constantly, biases creep in all the time.
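The four-fifths rule that comes up in this exchange is simple to compute: take each group's selection rate, divide by the highest group's rate, and flag anything below 0.8. A minimal sketch in Python; all the numbers are made up purely for illustration:

```python
# Four-fifths (80%) rule check for adverse impact in a screening step.
# All figures below are hypothetical, for illustration only.

def four_fifths_check(groups):
    """groups maps a group name to (selected, applicants).

    Returns (ratios, flagged): each group's selection rate relative to
    the best-performing group, and the groups falling below 4/5 of it.
    """
    rates = {g: sel / apps for g, (sel, apps) in groups.items()}
    best = max(rates.values())
    ratios = {g: rate / best for g, rate in rates.items()}
    flagged = sorted(g for g, r in ratios.items() if r < 0.8)
    return ratios, flagged

# Hypothetical pilot: 48 of 80 men advance (60%), 22 of 70 women (~31%).
ratios, flagged = four_fifths_check({"men": (48, 80), "women": (22, 70)})
print(ratios["women"])  # ~0.52, well under the 0.8 threshold
print(flagged)          # ['women'] -> possible adverse impact
```

This is why a gender disparity surfaces so easily in a pilot: the check needs nothing more than counts of who advanced at each stage.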

 

Chad Sowash: Were you able to go down-funnel and better understand not just the front end, let's say, for instance, keywords, but also look down at the slates of candidates and watch, just literally, the percentage of females and, let's say, individuals of color who were actually filtered out due to the AI? Because, to be quite frank, most of these AI models are trained on past behavior. And past behavior is...

 

Hilke Schellmann: Exactly.

 

Chad Sowash: Bias. None of this surprised Joel or myself because we keep seeing this over and over.

 

Hilke Schellmann: Yes.

 

[laughter]

 

Hilke Schellmann: Probably no surprise to anyone in the industry. But the question is, if we know that, why do we keep using these tools and why do we train them on past employees or current employees?

 

Chad Sowash: Yes.

 

Hilke Schellmann: And that's a question... And I don't know what you mean further down the funnel. I did test video interviews...

 

Chad Sowash: On the top of the funnel is the questions that they ask early on for the pre-assessment. And then down-funnel is the actual outcome. Who were the gold medalists, silver medalists and bronze medalists?

 

Hilke Schellmann: Oh.

 

Chad Sowash: Were they all three white dudes? And then what did that pool look like prior to getting to those last three, let's say?

 

Hilke Schellmann: No one has opened their hiring funnel to me like that.

 

Joel Cheesman: I'm shocked.

 

Hilke Schellmann: But I actually would love to do this kind of inquiry. And I'm working with sociologists and computer scientists so we can do larger sample sizes than I can do alone as a journalist. I do a sample size of two or five, and I can say, "Something is weird here." But more often than not, we need these larger sample sizes. And I would love to do a long-term study and understand: "This AI tool has labeled this person as a no-hire. Let's hire them, and let's hire the people chosen with another AI filter and in a traditional way, and let's follow them for years, thousands of them, and figure out who is actually successful." I think that would be a real benefit to society.

 

Joel Cheesman: A real benefit, but also a really nice fantasy, I think, in most cases. You mentioned monitoring this. We've talked everything from having an audit system, whether that's government or private sector that comes in. You talk... Whistleblowers, because this is bias at scale.

 

Chad Sowash: Oh God, yeah.

 

Joel Cheesman: This isn't the individual...

 

Hilke Schellmann: Yes. Yes.

 

Joel Cheesman: Hiring manager. This is mass bias.

 

Hilke Schellmann: And I think that's where the risk lies, right, compared to traditional hiring.

 

Chad Sowash: Oh, yeah.

 

Joel Cheesman: Vendors don't wanna necessarily open the kimono. Employers don't wanna open their windows to what's going on. What's your thoughts on monitoring this? What's a common sense approach? What has worked or what do you see working in the future?

 

Hilke Schellmann: I think what might be helpful is if everyone could be a little skeptical and ask questions about accuracy. If the AI claims 90% accuracy, what was the training data? How did you come to that 90% accuracy? Was it on a holdout of your own training dataset? Because if you don't even hit 90% accuracy on a holdout of the same dataset, that's very bad; your tool probably doesn't work. Have you tested it in the wild? What other data have you used over time? How have you monitored the system? Those are pretty easy questions to ask, and I think they give you a little bit of insight. And then I think everyone should go through a pilot phase and really figure out, "Okay. If I use biased datasets in here, do I get biased outcomes? What happens if I use synthetic data?" And then I also think: I test these tools, and I always tell people, "Steal my methods." Obviously I want people to read my book, but I also want them to read the book and steal my methods, like, how did I test the tool?
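The holdout question raised here can be made concrete: a model can score perfectly on a holdout of its own training data and still fall apart on a differently distributed population. A toy sketch with synthetic data; everything in it is invented for illustration:

```python
import random

random.seed(0)

# Synthetic "training distribution": label is 1 when the feature exceeds 0.5.
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(1000))]
random.shuffle(data)
train, holdout = data[:800], data[800:]

def predict(x, threshold=0.5):
    # Trivial "model" tuned to the training distribution.
    return int(x > threshold)

def accuracy(rows):
    return sum(predict(x) == y for x, y in rows) / len(rows)

print(accuracy(holdout))  # 1.0: flawless on a holdout of the same data

# "In the wild" the population differs: the label boundary sits at 0.7.
wild = [(x, int(x > 0.7)) for x in (random.random() for _ in range(1000))]
print(accuracy(wild))     # roughly 0.8: the holdout number didn't transfer
```

The vendor questions above ("Was it a holdout of your own training dataset? Have you tested it in the wild?") are exactly about the gap between those two numbers.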

 

Hilke Schellmann: One of the tools says it can find out how well people speak English. This is for call centers abroad; when you hire people abroad, a core competency is English. So when I did it, I did the interview; I spoke English and answered all the questions in English. I got an 8.5 out of 9 for English competency, very competent actually. I was very proud of myself. I was like, "Oh man, English is my second language. This is great. What a great AI tool." And then I spoke to it in German, 'cause I was sure I'd get an error message: all the vendors that I talked to always talked about a threshold that you have to overcome; if you have a speech impairment or there's silence, you get an error message. I was like, "I will get an error message." And so I spoke to it in German, sent it off, and then I got a 6 out of 9 for English competency.

 

[laughter]

 

Hilke Schellmann: It was all German. In fact, I read the Wikipedia entry on psychometrics in German. It's called "Psychometrie." There was not one English word, but I got a competency score in English. And I did that with a couple of other interviews and other tools, and one actually gave me a transcription. And it was just gibberish. It didn't even make sense at all in English. But I got a 73% match score for the job.

 

Chad Sowash: Now you talked to one of these vendors after this, did you not? And...

 

Hilke Schellmann: I talked to all of the vendors afterwards.

 

Chad Sowash: And they gave you reasoning behind it. What was their reasoning?

 

Hilke Schellmann: For the one where I was 6 out of 9 English competent, it was very... It was like a higher-math interview. They were telling me that this happens in a 5D space, and in this space German and English were close by, and that's why this confusion in the AI tool...

 

Chad Sowash: Like total bullshit.

 

Hilke Schellmann: And I was so confused that I was like, "I don't know what a 5D space is and I'm just so confused."

 

[laughter]

 

Hilke Schellmann: And in the end it was just like: if you're in front of a judge and you have to explain why I was rejected or got the job... made it into the next round, can you just say how that happened? And it was another round of 5D and spaces that I don't understand, and I was like, "I'm not sure if you, the developer of the tool, understand what your tool predicts upon." And I think that's where I feel like we really need to be skeptics about it, and really understand: wait, we make high-stakes decisions here about whether people get a job or not. We need to know what we are predicting upon and how these tools do the job, and how we can actually have a printout, old-school speak, at the end, to understand how the tool arrived at these outcomes and these predictions and what was taken into consideration. 'Cause we don't want proxies, baseball, basketball, or the way I speak or the way I look, to be part of that, and we have to monitor these systems.

 

Hilke Schellmann: I also talked to the company that gave me a 73% match score for speaking German, the one that came up with this gibberish transcript in English, and they told me that it wasn't the words that I used, because clearly their transcription was gibberish. They were like, "No, no, no, the AI tool knew that." But it was the intonation of my voice that they checked that was a 73 match score to the job. And they congratulated me on having great intonation. I was like, "I don't even know what science that is built on. We really shouldn't use this for hiring. We should use methods that work."

 

Chad Sowash: Well, they were assessing for English language. So therefore your intonation, it doesn't matter.

 

Hilke Schellmann: Well, but they were saying they also checked the intonation of voices, and my voice, apparently, was the one thing that stood out to the AI tool.

 

Chad Sowash: But to qualify, you have to be able to speak English and you did not speak English, so therefore you didn't qualify. Although they're saying you did qualify because of intonation. It makes no sense.

 

Hilke Schellmann: It makes no sense. But those are some of the tools that I have encountered in the space. And so that makes me think, if I, somebody who has really no complex technical training or knowledge, can play with these tools and they break on impact, we really have more work to do. And do you want me to read what came out of the transcription when I talked about Psychometrie?

 

Chad Sowash: Yeah.

 

Hilke Schellmann: What came out of the transcription was... So these are the words I spoke in German, and the English transcription was, "So humidity is desk a beat-up. Sociology, does it iron? Mined material nematode adapt. Secure location, mesons the first half gamma their fortunes in." It goes on and on and on. But as you can tell, it's total gibberish, and I got...

 

Joel Cheesman: Crazy pills...

 

Hilke Schellmann: A 73% match to the job. I think, I want people to use these methods too and test these tools...

 

Joel Cheesman: Hilke, when I knew we were gonna talk to you today, I felt really confident that you were gonna clarify all this stuff, that I was gonna leave this interview...

 

[laughter]

 

Joel Cheesman: And be totally in sync with what's going on. But I feel less secure. I feel like the cat is out of the bag. I feel like this stuff is advancing so quickly. And also, a lot of these are tools for individuals. We have no control over the rogue recruiter that says, "I'm gonna throw this video in some tool that isn't okayed by my company and see what it says." We talked about Google Gemini on the show last week, and being able to do AI from voice, sight, sound, text, everything. We're going into a full monitoring mode of our employees, not just in hiring, but while they're on the job.

 

Hilke Schellmann: Oh, totally. At work, yeah, totally.

 

Joel Cheesman: Our raises, our promotions, are gonna be driven by what AI says about us. I'm just overwhelmed by this and it sounds like you are too. Give me some hope. Give me some hope as we...

 

[laughter]

 

Joel Cheesman: Wind this down, that, I don't wanna jump off the ledge.

 

Chad Sowash: Give Joel a hug.

 

Hilke Schellmann: No, no, no, don't jump off the ledge. And in fact, I'm very hopeful that I feel like we're at the beginning of this.

 

Joel Cheesman: Oh, good. Phew.

 

Hilke Schellmann: It has taken over a lot of the HR space, and we see 8 out of 10 of the largest companies in the US use some form of monitoring, and a lot do. But I actually think, now that we talk about it and we humans still have control over these systems, let's actually talk about it and have a conversation about how we can do this better. So I actually feel this is exactly the time we need to talk about this. The same with audits. I've reported on audits that companies have done themselves, where they paid a third-party entity to do the audits. And strikingly enough, the tools work as advertised. I wonder why.

 

Joel Cheesman: Phew.

 

Hilke Schellmann: Because maybe there's a conflict of interest at play. And also, we don't even really have audit standards; what are we auditing for? One of the audits was basically like a roundtable discussion. It didn't even look at the algorithms to figure out, "What is going on here?" The other one was a conflict of interest: people who were leaders of the company were on the scientific paper that was published about the audit. That is not a clear delineation of church and state and an independent assessment. I don't actually believe that is a good way. I think what would help is transparency. And we need not-for-profits or universities or who knows, testing these tools at large scale and publishing that. And also building tools, so then we actually maybe have an online resume screener that works, and we can tell people, "Here's the GitHub. Take that code and build your own and manage it and monitor it." Don't just build black boxes and put them out there without sharing, because transparency is the only way that we will actually get better and understand how these tools work.

 

Hilke Schellmann: But I agree with you that some of it is really scary. For the book I tested vocal biomarkers, which claim to detect from our speech stream whether you might develop Parkinson's, or are maybe already sliding into Parkinson's. But they can also supposedly find out whether we're anxious or depressed. And I feel like, I don't know if I want my boss to know that. But anything can be used. We could take any audio and run it through these vocal biomarkers. And in fact...

 

Joel Cheesman: Ooh.

 

Hilke Schellmann: I've done that, because there are apps you can download from these startups. I pulled random people's YouTube audio and ran it through a vocal biomarker app, and it gives you a score for how depressed and how anxious you are. So anything can be done with these technologies, and we don't actually know if they work. We know from science that there's something in our voices, but how exactly it works, well, 30 seconds of audio from a single day is probably not a good basis for understanding...

 

Joel Cheesman: No.

 

Hilke Schellmann: ...how depressed or anxious you are. Maybe you just ran up a flight of stairs and you're out of breath, and that's why there's arousal in your voice. It's really hard to tell, except maybe over a long period of time.

 

Joel Cheesman: Some of us don't need AI to figure out that fat, drunk and stupid, is not something that needs to be analyzed by AI.

 

Chad Sowash: Or a way to go through life, Cheesman. So it's scary...

 

[laughter]

 

Chad Sowash: It's muddy, but it is the algorithm. Can you tell us how AI decides who gets hired, monitored, promoted, and fired, and why we need to fight back now? How do I buy this book, Hilke?

 

Hilke Schellmann: Oh, how do you buy it?

 

Chad Sowash: Yes.

 

Hilke Schellmann: You just go online and buy it. There's also an audio version, which I narrated myself. I think it's really fun because I got to recount my own experiments.

 

Chad Sowash: Nice.

 

Hilke Schellmann: You can also listen to it, and you can tell me all about it. I have social media feeds, and I would love to hear people's feedback on my methods and what I did. I'm very open to critical feedback because I wanna know what we can do better, and that includes me. Tell me what you think and what else we need to look at in this space.

 

Joel Cheesman: The book...

 

Hilke Schellmann: And I'm also interested in looking at these tools myself and finding ones that really work. I'm really open to that. I just need to look under the hood. I'm not gonna buy the marketing language that this stuff works as advertised, because we know way too often that it doesn't. I'm inviting everyone to look under the hood with me.

 

Joel Cheesman: She is a whirlwind, everybody. Go read the book. Listen to the book. She's also very active on X, the artist formerly known as Twitter. Chad, that's another one in the can. We out.

 

Chad Sowash: We out.

 

Outro: Thank you for listening to, what's it called, podcast, The Chad and Cheese. Brilliant. They talk about recruiting, they talk about technology, but most of all, they talk about nothing. Just a lot of shout-outs to people you don't even know. And yet, you're listening. It's incredible. And not one word about cheese. Not one cheddar, blue, nacho, pepper jack, Swiss. So many cheeses and not one word. So weird. Anywho, be sure to subscribe today on iTunes, Spotify, Google Play, or wherever you listen to your podcasts. That way you won't miss an episode. And while you're at it, visit www.chadcheese.com. Just don't expect to find any recipes for grilled cheese. It's so weird. We out.
