Chad Sowash

Can A.I. Fix Hiring?


It's finally 2021. Can artificial intelligence finally take over hiring?


HiredScore's founder and CEO Athena Karp is one of the world's experts on A.I. and its impact on recruiting. Is it really the panacea for biased recruiting? Is all A.I. created equal? And will there still be human beings in recruitment 10 years from now?


The boys cover all this and much more on this NEXXT exclusive. Enjoy!


PODCAST TRANSCRIPTION sponsored by:


INTRO (1s):

Hide your kids! Lock the doors! You're listening to HR's most dangerous podcast. Chad Sowash and Joel Cheeseman are here to punch the recruiting industry right where it hurts, complete with breaking news, brash opinion, and loads of snark. Buckle up, boys and girls, it's time for the Chad and Cheese podcast.


Joel (21s):

We've got the Greek goddess of war on the show today, everybody. This is Joel Cheeseman, and you are listening to the Chad and Cheese podcast as usual. I am joined by my cohost, Chad Sowash.


Chad (36s):

Well, hello.


Joel (38s):

And we're joined today, honored to introduce Athena Karp, founder and CEO of HiredScore. Athena, are you still there?


Athena (49s):

I am. Thank you so much for having me.


Joel (51s):

We haven't scared you off. So for the listeners who don't know you, give us a little Twitter bio, and then give us the elevator pitch for HiredScore.


Athena (1m 1s):

By the way, one correction: Athena is the goddess of victory in war, gentlemen.


Chad (1m 7s):

Whatever.


Joel (1m 8s):

What a flex. Yeah, that was, that was a major flex, because War is not enough.


Chad (1m 15s):

The Goddess of War isn't enough, it's gotta be the Victory in War.


Joel (1m 18s):

You got olives according to what she eats.


Chad (1m 20s):

That's what her mom told her. She's like, remember: Victory. We're not going to take anything less.


Joel (1m 24s):

Go ahead and give everyone your siblings' names.


Athena (1m 27s):

There's a Ulysses, a Penelope, and an Ananda. We've got some Indian in there as well.


Chad (1m 34s):

Did you have a tiger mom? Was she one of those?


Athena (1m 37s):

No, never. A tiger dad, actually. Anywho, I'm Athena Karp, CEO and founder of a company called HiredScore. We power hiring for internal, external, permanent, and flexible workforces across the global Fortune 500. We're live today at over 20% of the Fortune 100, powering efficient and fair hiring decisions. And I'm always a fan of the Chad and Cheese podcast, and of Joel and Chad, both of you. So it's really a pleasure to be on.


Chad (2m 13s):

Very smart, sucking up to the hosts already; that's already getting points. So, quick question right out of the gate: can AI fix stupid humans? That's what I think we're going to be really focused on this show.


Joel (2m 29s):

And stupid podcast hosts. Can they fix that?


Athena (2m 32s):

Maybe in future deployments we'll tackle podcasts? No. So I think a lot of the discussions are around how far AI can go. You know, we hear a lot of "garbage in, garbage out," but there's a much deeper discussion here that I think is really important to have, which is: what is it you're asking the system to solve, either partially or fully? And going back to when we think about artificial intelligence, it's far less, you know, "replace the humans and now the AI takes over." We're not building that; most of those I know building AI are not building that, or, you know, that's not being deployed.


Athena (3m 14s):

What we're really trying to do is say: what do we augment the human doing with an AI? And what processes aren't done today, or are too cumbersome or impossible for a human to do at the speed and the amount of data involved, where we would need process automation so the human can then continue on with the rest of the process? So I think when we break the world down into what we are augmenting or automating for process efficiency, we can then have a clearer discussion. And then, what data sits below that for us to do machine learning and understand what the best human being would do in these situations?


Athena (3m 56s):

And how can we replicate that with technology? You know, when we think about fixing human decisions, there are also a number of data science techniques you can deploy that correct for things a human might consciously or subconsciously be doing, things we would never want replicated, number one, and that we would want a human to stop doing if we knew that data, right? So in our world of identifying candidates that are relevant to the job, when we talk about correcting humans, I think one of the big fallacies is this: if I were to read a job post that says you need to have a degree in a related field, as an example, I as a recruiter might only know the related fields that I've come across in my time at that company, or that I've come across in the ethnicity groups or the gender groups that I'm used to hiring.


Athena (4m 55s):

I might be far less good at understanding that a degree in African-American studies, a degree in literature and comparative writing, and all these other areas are actually really good fields of study for a content manager, for example. So there's a lot we can detect from data that is generated by humans: what degrees have they selected when they looked for "a degree in a related field"? Does that degree selection generate a bias against any ethnicity group, gender group, or other groups, like veterans? Maybe I'm looking for relevant experience for a sales manager role and I keep going back to territory managers or sales managers at companies like ours, but I don't understand military taxonomies, that these sergeants actually have great experience managing, building, and growing teams.


Athena (5m 55s):

But to me, that doesn't speak my language. So there's a lot of testing we can do around bias: understanding what generates the bias, and whether it's acceptable and job-related, defined by a req. For example, if it's because you required a PhD for this researcher role, and we know that generates a bias, then we can at least have a data-backed discussion: do we actually need a PhD? Is that truly required for this role? Or if we dropped to a master's and three years of experience, how much would that open it up? So I think people maybe tend to underestimate how much those working at the cutting edge here can actually diagnose, detect, and choose what goes in and what doesn't, or at the very least have data-backed discussions about what we've learned from the data before it's implemented, to decide if that's appropriate, if it will have a bias we want to have, or if we want to undo that, remove it, and go broader.
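
To make that concrete, here is a minimal Python sketch of the kind of data-backed check described above. It is illustrative only, not HiredScore's implementation, and the field names and the `passes_req` predicate are hypothetical:

```python
def requirement_pass_rates(candidates, passes_req, group_key="group"):
    """For one screening rule (say, 'requires a PhD'), compute the share
    of each group that clears it, so the team can discuss whether the
    requirement is job-related enough to justify any gap it creates."""
    totals, passed = {}, {}
    for cand in candidates:
        g = cand[group_key]
        totals[g] = totals.get(g, 0) + 1
        if passes_req(cand):
            passed[g] = passed.get(g, 0) + 1
    return {g: passed.get(g, 0) / totals[g] for g in totals}

# Hypothetical usage: does requiring a PhD screen groups unevenly?
pool = [
    {"group": "A", "degree": "PhD"},
    {"group": "A", "degree": "MS"},
    {"group": "B", "degree": "MS"},
    {"group": "B", "degree": "BA"},
]
rates = requirement_pass_rates(pool, lambda c: c["degree"] == "PhD")
# rates == {"A": 0.5, "B": 0.0}: data for the "do we really need a PhD?" discussion
```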


Chad (6m 59s):

Who makes that determination, though, Athena? Because going back to the veteran example: in many cases veterans have the training and work experience for the job, but they're passed over. Historically, military veterans' experience is not understood and they're totally undervalued. As a matter of fact, for the most part they're accepting positions that are several pay grades under their training and experience. So an individual who doesn't understand any of this could be making those definitions, those decisions, based on prior behavior, saying, hey, we're hiring a lot of military veterans.


Chad (7m 39s):

What they don't understand is they're actually hiring individuals way below the level of their capability. Who makes that decision? That's the hardest piece.


Athena (7m 50s):

Yep. Well, and for our listeners here, this is actually one of the exciting places where I think AI has a starring role, because a lot of companies, as you're saying, Chad, only know what they've ever done, which may or may not be good or right, right? But then you start to say, well, there's also a general understanding of a better way or a different way, right? A great example: we have a very large global client that is really good at promoting from the lowest ranks of the org into manager roles and into very high ranks of the org. The majority of the company is run by people who started truly at the bottom, and this continues today.


Athena (8m 37s):

And you know, it's not that their competitors couldn't implement similar models, you know, from blue-collar workforce to white-collar workforce; they're just not used to those career trajectories happening. And Chad, similarly, companies that aren't used to hiring Air Force pilots might not know that an Air Force pilot can be so much more at your company than the company pilot. You know, they might not know about all these other roles. But you can take examples of companies doing it well and bring those models into companies that have the desire to increase the presence of military veterans within their orgs, increase their ranks, increase the diversity of role types they recruit them for.


Athena (9m 24s):

You can leverage a more general understanding that helps your org shift in a way it hasn't shifted to date, for any number of reasons, and de-risk it, because you're saying, well, if it's been possible here, you can bring these logics and share them in more places. Or, if these orgs didn't discriminate and require this specific degree type or this specific job training, we saw instead that this other training course and all these different experiences actually led to success in the role. So I think a lot of this comes down to how much we can democratize the bright spots, where we see these positive trajectories and shifts and positive inclusion models, to more companies.


Chad (10m 15s):

That's the hard part, though, because for the most part that's all hiring data that companies are incredibly protective of, whether it's aggregated, anonymized, what have you. To be able to get to that level, you have to be able to grind on a lot of data, not just for one organization but for a multitude of organizations, to get that underway.


Athena (10m 36s):

Yes, it definitely takes a company that's been at it for a decade-plus and is very diligent in the level of anonymization and aggregation, meeting, you know, all the global standards required to do that, and never, you know, being identifiable by company or by individual, for sure. There's also, you know, a lot of learning that we see companies themselves could get from their own data, but because these systems don't give them easy-to-understand ways of plotting that without massive data science and data engineering teams, they themselves lose the insights into how things might be done differently, even within their own org.


Athena (11m 19s):

So I think it's both the democratization of the bright spots, of course, where that's never traceable back to individuals or corporates in any way, shape, or form, and then, number two, helping companies understand where non-traditional things are already happening successfully in their company today that they've just never shone a light on, to make sure it happens more and more.


Joel (11m 45s):

I want to back up for just a second, Athena. I didn't go to Georgetown, and you're talking to the Chad and Cheese audience. So I want to back up for one second: explain to me, in layman's terms, your definition of AI. Because I think a lot of people think of AI as, you know, because I watched a Hitler documentary on Netflix, it recommends Saving Private Ryan to me. But I assume your definition is much more sophisticated than that. So can you do that for me real quick?


Athena (12m 14s):

Absolutely. It's quite simple, I think. To us, artificial intelligence is computers, machines, learning how, with great human examples, to replicate tasks and processes, or augment certain parts of work, in a sophisticated, intelligent, efficient manner. And I think the new definition of AI also requires a layer called transparent, explainable, tested. You know, it has to be viable in terms of results, but almost as important, if not in many places more important, especially when we talk about hiring decisions, hiring decision support, internal mobility, redeployment, and all these things that algorithms are supporting, is the second piece. From an efficacy standpoint, which I think is what you mentioned, it's how do you teach it what humans do in an effective, replicable manner. But it's also: how do you do that in a way that the system can fully explain itself, can log everything it thought through, how it came to its decisions, and the weights it put on the different parameters? The weights are so important. And then that last piece: that it can test itself.


Athena (13m 35s):

Are there any pieces of its decision that might adversely impact a certain group of individuals? And if so, by design, before even going live, exclude that. And then we talk a lot about how AI learns after it goes live. That can be completely controlled learning that's fully tested before it's deployed. Algorithms built the right way don't just learn unsupervised on their own and get bolder and bigger and grow over time. You can always control it; almost think about it like updates: you can always control how it gets updated and how the update gets tested in advance.


Athena (14m 18s):

If the update fails, then you can limit the pieces of it that failed so you don't have to deploy those, or hold back the update itself until it passes those tests, and control that learning over time.
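
As a rough illustration of that controlled-learning loop, here is a Python sketch in which a retrained model only replaces the live one if it passes a fairness audit first. This is an editorial example under assumptions, not HiredScore's code; the audit shown is the EEOC-style four-fifths rule, and `model` stands for any callable that returns True when it would advance a candidate:

```python
def four_fifths_audit(model, holdout, group_key="group", threshold=0.8):
    """Score a held-out candidate set and check that no group's selection
    rate falls below 80% of the highest group's selection rate."""
    selected, totals = {}, {}
    for cand in holdout:
        g = cand[group_key]
        totals[g] = totals.get(g, 0) + 1
        if model(cand):  # True means the model would advance this person
            selected[g] = selected.get(g, 0) + 1
    rates = {g: selected.get(g, 0) / totals[g] for g in totals}
    top = max(rates.values())
    passes = top > 0 and all(r / top >= threshold for r in rates.values())
    return {"rates": rates, "passes": passes}

def gated_update(live_model, candidate_model, holdout):
    """Controlled learning: the retrained model replaces the live one
    only if it passes the audit; otherwise the update is held back."""
    report = four_fifths_audit(candidate_model, holdout)
    return (candidate_model if report["passes"] else live_model), report
```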


NEXXT (14m 35s):

We'll get back to the interview in a minute, but first we have a question for Andy Katz, COO of Nexxt. What kinds of companies should be leveraging programmatic? Every Fortune 1000 company out there, anybody with an extreme volume of jobs. If you're recruiting for 20 positions a year, you don't need programmatic; you can go to a recruitment marketing agency or a job board to do a direct email with your company only, so you're not in with another 20 companies in a job alert, and you're not just on a career site or a job board. You could do banner advertising, buy premium placements. So programmatic, again, is one piece of the puzzle. It's never going to be the end-all, be-all. And I do believe all the programmatic platforms out there have ancillary services to support that, knowing that you can't just survive on a one-trick pony. For more information, go to hiring.nexxt.com.


NEXXT (15m 28s):

Remember, that's Nexxt with the double X, not the triple X: hiring.nexxt.com.


Joel (15m 40s):

I'm curious about the level of consumer understanding of this. I'm guessing that when you field sales calls and get RFPs, people, at least at this point, don't think, well, this AI is the same as this AI is the same as this AI. So to answer the question of how your company's AI is different from another company's, whether it's more advanced or just different, I think you alluded to that when you talked about, you know, how your finger is on the scale and to what degree you're doing that. Talk about what you would say to a customer when they ask, how is this AI different from that AI?


Athena (16m 13s):

First of all, just like any spectrum, there are companies that need, you know, power tools and companies where you just need a hammer or something basic to get the job done. I don't think one is lesser than the other, and I have a lot of respect for most technologies across the spectrum, as long as they're actually doing what they say they do, verifying for bias and building responsible technologies. But on that spectrum, we sit in a place with the Fortune 500, Fortune 1000, and especially, you know, government contractors. They need something that has the highest level of explainability, is fully tested, understands what OFCCP and EEOC mean, not just for a human but for a technology that helps a human, and holds itself at minimum to the best human standard and at maximum to ways a human could never match: explaining all the different levels of decision and testing out all those different levels.


Athena (17m 17s):

So, you know, when we think about our solutions, it's definitely for those larger companies that need that level of explainability, auditing, and vetted solutions, number one. Number two: global. You know, global is often swept under the rug with "Oh, I used Google translation," or "I translated from the native language to English and then I did the match." That will never work, because take countries like China, Brazil, the UK, and the US. I'll give you a great example: in the UK, they don't ask for a number of years of relevant experience; in China, they have foreign language certificates; and in Brazil versus the US versus France, the way they describe their degrees is all very different.


Athena (18m 10s):

So you can't just take the local language, drop it into this magic box called "make it English, make it Western," and then match on it. Over the last nine years we've gone through a lot of painstaking work, I think I have some gray hairs over this thing called global localization, because almost every country, or at the very least every region, you know, where the countries are small enough, needs to be looked at in its local taxonomies: a local understanding of how they post jobs, how people express their professional data, and how you match between those. So that global lens is really important. And then the last piece: connectivity across the ecosystem.


Athena (18m 53s):

Where companies have a few recruiters and an ATS and they just need something to, you know, surface a few people, that's probably less important. But what we see is a lot of companies expanding the ecosystem. So they have an ATS, a CRM, an HCM, a chatbot, you know, some companies would kill me for calling them that, the conversational layer, the scheduling automation, the assessment tests, different for tech jobs versus campus recruiting versus others. So when you start to look at these seven- to ten-point ecosystems, all generating data on people, we take a lot of pride in pulling all of that together and wrapping our AI around it.


Athena (19m 43s):

We never assume that we're the only decision parameter; there are a lot of other data points, and at certain points in a candidate's or employee's journey they might be much more important than my score in who progresses and who doesn't. So incorporating that holistically is kind of that third piece that's important for us.


Chad (20m 1s):

So one of the things, I mean, this is just human behavior: the silver bullet, and in this case with AI it's more of a set-it-and-forget-it behavior. You mentioned auditing a little earlier, and to be quite frank, I don't know that I've actually heard someone in an AI discussion say anything about auditing. So big applause for that, because you need to understand where you're going wrong, kind of like the whole Amazon situation: they had an algorithm that obviously started to become biased against females. Well, there was no audit happening, and they were just going off the behavior of dumb humans.


Chad (20m 46s):

What do you expect? Right. My question is going into a white-box versus black-box conversation. Two years ago, when we started having a lot of these conversations, most vendors were saying white box isn't doable, pretty much that humans aren't smart enough to understand what's going on. You talk about explainability, which I like to call defendability, because that's exactly what a federal contractor does: they defend against the OFCCP. Can you talk about that? What's the difference today between white box and black box, and what should an employer expect when they're buying tech today?


Athena (21m 27s):

Yeah, it's a great discussion. I'll just give a plug for a nonprofit, the World Economic Forum. We've been working over the last year on a global framework for ethical AI in HR. That will be coming out soon and people should definitely subscribe to it. It's a cross section of all different views included in that working group.


Chad (21m 48s):

Nice.


Athena (21m 48s):

But one of the things we've been working a lot on, and I assume a plug for a nonprofit or an NGO or two is okay, is: what should you expect, and what can you get from vendors? We tend to see it as either "I've got to take all the risks and go black box for the sake of innovation and efficiency and business impact," or "I've got to restrict it to nothing because nothing can meet that." And we like to talk first, I think, about the spectrum of where you're applying an algorithm in HR and recruiting. Specifically in what we do, how you prioritize people, or even surface people who applied in the past, I think you need that explainability and justification that it was done for every single person in a fair, consistent manner.


Athena (22m 49s):

Maybe if you're talking about chatbots or some other things that are lower risk, if I say "hi" to you versus "hello" versus something else, then, you know, who really cares? So look at the spectrum of where you're planning to deploy this. I can answer that in the specific area we work in, which is, you know, candidates and jobs, passive candidates and opportunities, and employees and jobs, which I think takes real vigilance, if you will, over not having a black-box technology. I can tell you, for example, we see some of our most innovative clients launching ethical AI reviews.


Athena (23m 34s):

So, you know, even a good warning here: there are a lot of vendors that come in as one thing and then just flip the AI switch because it's a free feature for you. That's not going to get you out of a government audit, or a bias claim, or an Amazon-like investigation, or, you know, the FTC recently investigating our space over AI video, for example, just because you turned on the AI feature. I think there's often a misnomer there; it's just as important to vet anything and everything that's helping you make decisions across the spectrum. So I can tell you, for example, the standard that we hold ourselves to, and our clients hold us to, is that before we go live, we've fully tested the system to understand: are there differences between groups?


Athena (24m 27s):

And if there are differences, they have to trace back to job requirements, basic requirements that are actually specified on each individual job. If there's anything else in the algorithm, the treatment, or the progression logic that creates a difference across any group, then that's not acceptable for launch, right? So, number one, that audit and that explainability: what's creating a difference? Is it acceptable? How do we define it? Is it defensible? That should be provided before you even go live. We also provide explainability that the data we've learned from has been balanced.
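
A toy version of that pre-launch trace-back, with hypothetical field names and a placeholder tolerance (the real audit is surely richer): compare group score gaps across the whole pool with the gap among only the candidates who meet the posted requirements. If a gap survives among qualified candidates, something other than the job requirements is creating it, which by the standard above blocks launch:

```python
from statistics import mean

def gap_by_group(pool, score_key, group_key):
    """Largest difference between any two groups' mean scores."""
    scores = {}
    for cand in pool:
        scores.setdefault(cand[group_key], []).append(cand[score_key])
    means = [mean(vals) for vals in scores.values()]
    return max(means) - min(means)

def launch_gate(pool, meets_reqs, score_key="score", group_key="group",
                tolerance=0.05):
    """Pass only if the score gap among candidates who meet the posted
    job requirements is ~zero; a surviving gap means the model is
    differentiating on something other than the requirements."""
    qualified = [c for c in pool if meets_reqs(c)]
    if not qualified:
        return False  # nothing qualified to audit; fail closed
    return gap_by_group(qualified, score_key, group_key) <= tolerance
```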


Athena (25m 9s):

So I think this is a really important point. Chad, you mentioned how we make sure that, whether we're doing the same things over and over or we want to start doing things differently, we're not barred by the technology we've deployed. So we test the data we're learning from, before learning, to make sure it's fair and representative. A great example: in technology roles, we see on average men apply at a rate of four to one compared to women. And when we start to look at highly technical roles, women on average have six years less relevant experience than men. So when you start to think about it, if I studied the interview rates and offer rates with machine learning and called that what good looks like, I can end up with an Amazon problem, unless instead of studying "here's what good looks like," I'm balancing that data.


Athena (26m 1s):

Even if men have a four-to-one likelihood of getting an interview or an offer, that doesn't mean I need to, or should, learn that four-to-one in the learning I do want to do. I can balance that down to equal, so I only take 25% of that male learning data if that's all the complementary female data I have, so that the system is always learning balanced and even. And then, number three, after launch, I need monitoring that shows: is the system learning? What is it learning? Do I want to deploy that? Is there any trade-off in efficiency and effectiveness versus fairness? And if so, is that valid?


Athena (26m 43s):

Is that explainable? Is that justifiable? If not, you know... So all those different levels, the pre, the learning, and the post, should all have that level of audit, explainability, and testing, aligned with however you run your adverse impact testing.
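
The balancing step described a moment ago, learning from only 25% of the male examples when men outnumber women four to one, reduces in the simplest case to downsampling every group to the size of the smallest one. A minimal sketch, assuming each training example carries a group label; a real pipeline would balance along multiple dimensions at once:

```python
import random

def balance_learning_data(examples, group_key="group", seed=0):
    """Downsample each group to the size of the smallest group, so a
    4:1 male-to-female training set becomes 1:1 and the model never
    learns the imbalance as if it were 'what good looks like'."""
    rng = random.Random(seed)
    by_group = {}
    for ex in examples:
        by_group.setdefault(ex[group_key], []).append(ex)
    floor = min(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rng.sample(rows, floor))
    rng.shuffle(balanced)
    return balanced
```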


Chad (26m 59s):

Yeah, I think it's interesting. And that's for all those employers listening who always think AI should be perfect when humans aren't even fucking close. So I'm going to go ahead and lighten things up really quick. My last question to you: what is scarier to you, the Netflix documentary The Social Dilemma, or Elon Musk putting microchips in human brains?


Athena (27m 26s):

Because I'm not a doctor, I would say the chips, because I'm not sure what else is happening once we've received chips, and the god modes of biology are always scarier to me than things I think are far more controllable, testable, and explainable, which are algorithms, the right algorithms, right? But I guess I have a bias.


Joel (27m 53s):

I'll let you out on this with a somewhat more intellectual question, but not much more than that. I want you to get your crystal ball out for a little bit and paint a picture for me post-COVID. Chad and I talk quite a bit about how everyone who was on the fence about AI in their recruitment technology is going to come off the fence and choose that over hiring more recruiters. I'm curious about your opinion on that. And then just in general, you know, what does recruitment technology look like, let's say, 10 years from now?


Athena (28m 22s):

So, on the discussion of technology versus humans: I just got off a call with a client who is hiring a few dozen recruiters in Q4, you know, because there's so much work to be done and there are such challenges. I don't see it as a clear "the tech or the human." I think it depends on what work you need done as things open up. And in this crystal ball, and I really, truly believe this, one of my favorite client questions is: what's your view of recruiters of the future? I've never heard anyone say, well, there are none in 10 years because we've got technology. Technology doesn't build human relationships with candidates or hiring managers.


Athena (29m 6s):

It doesn't; there are a lot of limitations to it. So I think what it looks like is the recruiter function really being able to be the workforce advisor: what are our needs, and what's the best way to meet them? Is it contingent talent? Is it permanent talent? With a snap of the fingers and a press of the button, having that automatically delivered to you across internals and externals, passives and actives, and employees who have never done this before but are taking courses online and showing promise in the area. I think we're going to have a lot more opening up of opportunity based on less frequently used and increasingly reliable, de-risked factors: if people are showing promise in tests, or in new ways of assessing potential, or in other skills they're learning, how can we include them in new job trajectories across the company?


Athena (30m 8s):

So I get really excited about the ability to open the funnel of job opportunities and reevaluate people in new ways. And I think recruiters are going to be driving those changes across the org by explaining to people why they're doing things differently, that it's okay and not so risky, and why it's necessary for the org. Number one, I also think there are a lot of exciting opportunities for technology that people maybe thought about before, that we built some processes for but hadn't implemented, like talent redeployment: bringing people back from furlough in a fair, consistent manner, where humans would be riddled with bias and keywords and who they know and who the manager liked.


Athena (30m 55s):

So I think there's a lot of democratization of opportunity because of those deployments of technology, and that does really excite me.


Chad (31m 4s):

Well, excellent, Athena.


Joel (31m 6s):

We're excited you joined us today.


Chad (31m 8s):

We were happy. Yeah. Obviously you classed up the show, you smartened up the show. Thank you for taking the time, joining us here, and talking about all this smart stuff. If somebody wants to find out more about you, or, I don't know, maybe HiredScore, where would you send them?


Athena (31m 23s):

If any of these topics are interesting to any of the listeners, you can find me, Athena Karp, on LinkedIn or on Twitter, I do have social media accounts, or even at Athena@hiredscore.com.


Chad (31m 34s):

Excellent.


Joel (31m 35s):

Chad, we out.


Chad (31m 37s):

We out.


Athena (31m 38s):

Thanks guys.


OUTRO (32m 2s):

Thank you for listening to the podcast with Chad and Cheese. Brilliant! They talk about recruiting. They talk about technology. But most of all, they talk about nothing. Anyhoo, be sure to subscribe today on iTunes, Spotify, Google Play, or wherever you listen to your podcasts. We out.


