Thank heavens for tenure... How else would The Chad & Cheese's brash, challenging, and no-bullshit attitude get in the same room, hell the same solar system, as NYU and UNC academics?
Yup, tenure must be the answer.
Basically, earlier this year, NYU’s Institute for Public Knowledge, the 370 Jay Project, and the NYU Tandon Department of Technology, Culture and Society hosted a new discussion in the series “Co-Opting AI”, which included this humble podcast.
The event examined how AI intersects with recruiting and with gaining access to the labor market, taking a deep look into the industry and providing insights on the HR tech sector.
The players:
- Ifeoma Ajunwa is an Associate Professor of Law with tenure at UNC School of Law.
- Mona Sloane is a sociologist working on design and inequality, specifically in the context of AI design and policy.
... and The Chad & Cheese of course :)
Props to the Co-Opting AI event series and Mona Sloane. The series is hosted at IPK and co-sponsored by the 370 Jay Project, the NYU Tandon Department of Technology, Culture and Society, and the NYU Center for Responsible AI.
TRANSCRIPTION SPONSORED BY:
Mona Sloane: Dr. Ifeoma Ajunwa is an Associate Professor of Law with tenure at UNC School of Law, where she is also the Founding Director of the AI Decision-Making Research Program. She has been a Faculty Associate at the Berkman Klein Center at Harvard Law School since 2017. Her research focuses on race and the law, law and technology, and, importantly, employment and labor law. Her work has been widely published in many high-impact journals, among them the California Law Review, Cardozo Law Review, Fordham Law Review and many more. She is also an avid public scholar. She has op-eds out with The New York Times, The Washington Post, The Atlantic and so on. She recently testified before the US Congressional Committee on Education & Labor and has spoken before governmental agencies such as the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission. And she has a forthcoming book called The Quantified Worker: Law and Technology in the Modern Workplace, which I'm also very excited about. Now, my other two guests kind of come together: Joel Cheesman and Chad Sowash.
Joel: My God.
Mona Sloane: Joel Cheesman is a recruiting industry tech geek, his words not mine, from the late '90s, when he worked at eSpan, one of the world's first job boards, as well as JobOptions, CareerBoard, Jobing and Recruiting.com. He also did partnership stuff at EmployeeScreenIQ, but you probably know him from his days as Cheezhead. Joel likes to tinker, which means he has started a variety of businesses, and his latest venture is a market sentiment platform called Poach. He is the co-founder of The Chad & Cheese Podcast, which is the thing the two do together. And Joel does this together with Chad Sowash. Chad, if you could wave too, say hello.
Mona Sloane: Chad has actually worked... Hello, in the HR talent acquisition and HR tech space for over 20 years, consulting hundreds of Fortune 500 companies. He is a former Army Infantry drill sergeant who cut his teeth in online recruitment in '98 with an outfit called Online Career Center, before it launched in '99 as Monster.com. He went on to build the startup DirectEmployers Association, steer RecruitMilitary toward revenue as CXO, and build the first military veteran hiring program at Randstad, one of the biggest staffing agencies. He is also a professional podcaster today and is doing The Chad & Cheese Podcast together with Joel. Welcome everybody. Thank you so much for joining me today. I am handing it over to Dr. Ajunwa, who will set the scene for us. Ifeoma, AI deployment in 2023, what does that look like?
Ifeoma: What does AI in HR or recruiting look like? Slightly bleak is probably my answer. So as you mentioned, I have a forthcoming book titled The Quantified Worker. As the title suggests, I take a critical view of the role of AI in the workplace, and one of my chapters... Actually, two of my chapters are devoted to automated hiring and the role that AI plays in it, be it in terms of sorting applicants through applicant tracking systems (ATS) or through the use of automated video interviewing and the sort of pseudoscience that can come from that. So with the book, I'm really very interested in the role that AI is already playing in the workplace, which I think is somewhat getting subsumed by the conversation around ChatGPT, DALL-E, all this generative AI that's supposedly going to put workers out of work and just take over the workplace. And then unbeknownst to us, we've already had AI... To be quite frank, I don't like the term AI. I actually prefer the term automated decision-making, because that's what these things really are.
Ifeoma: But we already had automated decision-making in the workplace long before ChatGPT and any of this generative AI. Currently, we have automated hiring systems that are very actively affecting workers' lives: they are impacting who's considered a candidate by recruiters, and they are impacting how the candidate experiences the hiring process. Even just putting in your application, you basically have to run the gauntlet of automated hiring algorithms that may be looking for certain keywords, or may be designed to eliminate candidates based on gaps in employment, and things like that that might actually have a discriminatory impact on workers. So I write about that in the book, so please pick up the book, The Quantified Worker, coming out soon. Available for pre-order. But I think what I really want to talk about with Joel and Chad is what trends they're seeing with AI in HR, ignoring the... Well, not totally ignoring, but getting past the ChatGPT hype, getting past all the generative AI putting people out of work. What is AI doing currently in the recruitment space?
Joel: Well, first, I wanna thank Mona for having us on, and I particularly wanna thank tenure, because without tenure, I'm not sure a couple of clowns like us would be on a podcast or a webinar like this. I wanna thank tenure and thank Mona for having us on. And I think before we talk about current state and where we're going, which is very important, I think it's important also to look back a little bit at the history of AI. And in our intros, everyone knows that Chad and I are pretty old. And when Chad and I first got into the business world, you'd literally send a paper resume with a paper cover letter to an employer. In that world, it's very hard to apply to many, many jobs. The internet changed that. The internet enabled people to shotgun their resume to literally thousands of employers, and when that happened, employers were like, "Oh hell, how do we control the flow of all these resumes?" And the early days of filtering that was like pre-screening questions, it could be as easy as, "Are you 18 or over? Do you have a driver's license?" The filtering started in simple format, and it was generally not discriminatory against race and age and things like that.
Joel: As technology evolved and we got away more from posting a job online, it became finding people online, and we can thank LinkedIn and others for that. So in addition to posting jobs, recruiters were then told, "Okay, find people." And then services came out that tried to match these people that are online with jobs that are online, and the early iterations of that, as Chad and I know, were really, really bad and didn't work very well. So technology evolved, databases of people, how do you search them, how do you find them, all these things have evolved into what it is today and what we're talking about today. But it's also important, I think, to highlight that there's no conspiracy to screw over people in this process. These are engineers that sit in a room and go, "Wouldn't it be cool if we could look at someone's face and how they answer questions, and are they more likely to be lying about this than otherwise?"
Joel: There was no conspiracy to screw over people of diversity and diverse candidates. But it's kind of turned into that, unfortunately, and I think that's what we're talking about today. But I wanted to kinda set the table historically with how this thing was sort of born, and it's become a bit of a Frankenstein's monster that we're trying to figure out, "Okay, how do we fix this?" Government's involved, employers are involved, there's a lot of fear and uncertainty and doubt in this, and hopefully we can set some of that fear, uncertainty and doubt to rest as we talk about some of these issues, but that's sort of my take on the current state of things, looking at the past to explain the current day and future.
Chad: Well, and a couple of things Joel was talking about, that's all scale. And that's what we're gonna be talking about today. But that's scale, when you had to actually mail or... You couldn't email at the time, you faxed. You faxed your resume, or you hand-carried it in. Well, imagine that day versus the next day when you got flooded with hundreds of resumes, what would happen. Most of those went into a black hole because recruiters didn't have time to get to them because they couldn't scale. That's the big change. Now, what we didn't ask then, and what we should ask now, is what Joel was alluding to with regard to facial recognition and all those things. It's not, "Can we?" It's "Should we?" Should we actually go that way? And talking about bias, the history of bias is rooted in humanity, not AI. The difference is scaling bias. Humans don't scale well. The best that we can scale is through training. We can train people to be biased, and then you have armies of people going out to be biased, but AI scales in an instant.
Chad: So some of the best and most powerful systems can carry bias, and if we don't pay attention to the outcomes and feverishly audit the algorithms, we're going to have bias. Regulations do not distinguish between human and AI bias. Bias is bias. It's just that bias can scale much faster with AI. That's where we need to focus, even as we're hearing all these glitz-and-glamor things about ChatGPT, which I think is demonstrating that we're getting closer to the promise of AI than all of this vaporware we've been talking about for years. But then we talk about... And we know the history from Amazon, building an algorithm. This is the "Can we?" Yes, they can. They should have asked, "Should we?" And they shouldn't, because they automatically failed their sourcing and hiring experiments by knocking out women. Why? Because they were feeding the algorithm, the machine, they were training it on data that was what? It was biased. That was human behavioral bias. And what happened? The machine spit out bias. So as we start to have these conversations, it's the can we, should we, but it's also understanding that we have to innovate. So as we innovate, we also need strong regulatory entities to ensure that the enforcement and the standards are clear, which we just don't have today.
Ifeoma: Yeah, I think that's such a great... So many great points, Chad. And also to go back to Joel's point first about thinking about the historical development of automated hiring, it's funny that you brought that up because a co-author and I actually researched how did automated hiring come about, how was it even advertised when it first came out such that companies could take it on, and it was this idea that a lot of it was trying to find talent away from your geographical location, and it was actually the advent of software development that drove a lot of automated hiring development, which is that we need a software developer. We don't necessarily have them all concentrated in Silicon Valley, like you have now, let's find them wherever they are. And first, people would mail in their resumes on like a CD-ROM. They would do all their little coding tests on a CD-ROM and send that in, and that was like, "Okay, that's not so efficient. What if we had a centralized thing now that we have the internet where people can just access it from anywhere and do that?" And then when the first automated hiring programs are being developed, to go to Chad's point, one of the slogans, one of the advertising slogans was, "Clone your best worker." So just sit and think about that slogan, "Clone your best worker."
Ifeoma: So automated hiring programs were never really meant to diversify the workplace, which is somewhat how they're viewed now, or used now, by a lot of corporations. They were really meant to replicate exactly what you already had. So you take your best worker and you clone them. So think about historical bias, historical human bias, such as workplaces where women have been shut out, workplaces where there are not really a lot of minorities. When you have an automated hiring program then, what you're doing is not so much eliminating the bias, as Chad mentioned, but actually exacerbating it, because you're just basically trying to clone your best workers. So unless you're very careful and thoughtful and deliberate, an automated hiring program is just going to come in and replicate bias. And do it at scale. That's also important, as Chad mentioned, because as I mention in my book, one biased human manager can maybe affect, I don't know, thousands of resumes in their tenure as a recruiter or a manager. But a program that's written in a way that's biased, or trained in a way that results in bias getting included, can impact millions of people. It can impact millions of people.
Ifeoma: As you touched on, Chad, it's just so important to start thinking about regulations. Thus far, I feel that automated hiring has really been a wild, wild West. Really, anybody can create an automated hiring program, and we've seen this. I know you've seen this in industry. And they just make claims about what this automated hiring program can do, and a lot of the claims are about, "Oh, it'll just find you more diverse workers. Oh, it'll just create all these avenues of talent for you." A lot of it is snake oil, and the question is, why is that allowed to exist? Why do we not have regulations to curb misuse of automated hiring? Acknowledging that it can have its uses, like Joel mentioned before, it can allow for this efficiency in finding talent, efficiency in applying, efficiency for the applicant, but we need regulations. So my next question for you guys is... I mean, I have my thoughts on what regulations should be in place, but as industry players, what are your thoughts about what regulations we need?
Chad: So any vendor that says that their tech is compliant and not biased shouldn't be trusted right out of the gate, because it's not the tech that's biased, I think we've already established that. It's the humans driving the tech, meaning the developers on the vendor side, but also the hiring companies. As we take a look at regulations today, it's focused on outcomes, and I don't think that changes. We have to take a look at outcomes, but again, we're scaling outcomes differently than we did just five, 10 years ago. So being able to take a look at the frameworks that we currently have, 'cause we know that government doesn't move as fast as technology and/or business, we can take some of those frameworks that already exist and work, and we just need to enforce them. Now, moving past that to some of the regulations and standards around auditing, like in New York City, I believe their move forward is smart. It sends a signal, sends a message, but they are going to have to work toward ensuring that there are frameworks and standards in place so that companies aren't throwing their hands up in the air saying, "Well, I don't know what to do." They do need direction, there's no question, although the direction right now is current regulation and outcomes.
Joel: I think ultimately, there are going to be some features that are gonna have to be outlawed. I think automation with video, I don't know how that gets like rubber-stamped approved. There are so many pitfalls in that, so something like that, I could see like your technology cannot do that. Now, other things like with people with disabilities that can't speak in a more fluid manner, like things that are gaps in speech, can't be a feature that eliminates someone from getting to the next level of an interview. So to me, it's like ultimately certain features of the tech that pre-screen or get someone to the actual human-being interview are gonna have to be outlawed. Many things in automation are great. I have a 16-year-old son that just got his first job, and I can tell you that applying through McDonald's with a conversational chatbot versus sending in a paper resume at your local Subway, the McDonald's experience is far better. And then in terms of bias, it's like you pass the main stuff, we'll schedule an interview, you can manage all that with automation. That's great.
Joel: But I think a lot of these core things that discriminate are gonna have to be outlawed, and vendors won't be able to create those features and employers won't be able to leverage those features going forward. I think we're seeing that a little bit at the local level and state level. Illinois has a great case with facial recognition and a company called HireVue that a lot of people know. So these things are coming out at the state and local level, but eventually, on a federal level, these things are gonna have to come into play. I think a real challenge though is everyone's working from home, it's a global workplace, we're hiring people everywhere, and then that creates a contractor versus an employee situation. So are there loopholes around this, or around where you hire? So again, it becomes really complicated, but here in the US, I think there's gonna have to be an effort to say, "Look, these features, we're not gonna stand for it because they're discriminatory."
Ifeoma: I so wholeheartedly agree. I'm so glad you brought up the video interviewing issue. That's also something I write about in the book, and in my research for that, what I noted in speaking with people who had been subjected to it was just a great potential for accent discrimination. The great potential for the use of pseudoscience, where supposedly algorithms are able to accurately determine somebody's emotion, or even determine somebody's trustworthiness or if they're lying. It's just rife with all kinds of potential abuses and known abuses. So I just think that video interviewing, especially when purporting to read emotion or do facial analysis, just has to be banned. But this also brings me to a point that Chad made. So in my book, I talk about ex-ante versus ex-post regulations, and Chad touched upon audits as a type of ex-post regulation, which is looking at outcomes and seeing, are these outcomes good? Are these outcomes something we want? And if they're not, we need to change them, or we need to change how we got to them. So that seems very ex-post, where you've already launched the automated hiring, but I also want to push on this: we shouldn't forget ex-ante regulations, and outright banning is an ex-ante regulation.
Ifeoma: It's like, we just know that's bad, we're just gonna ban that, we're not gonna try to do an audit, we're just not gonna do automated video interviewing, but should we also be thinking about ex-ante regulation in the form of design features? So for example, one design feature that I'm proposing, and I'm trying to push the EEOC to mandate is the idea that you actually keep a record. So right now, automated hiring programs are not required to keep a record of all the applications, whether they pass through to be interviewed by an interviewer or not, a human interviewer, and even the interview attempts. So I actually think we might need an ex-ante regulation, which is the design of the automated hiring has to allow or actually mandate record-keeping, where every person that applies there's a record of that. And even failed applications because with the research I was doing, I was finding that some automated hiring platforms, they were actually preventing people from completing the application. So they were already culling people even before they could complete the application. So just to give you an example of that, I know it's hard to imagine.
Ifeoma: I tried to, as part of my research, fill out an application for a major retailer, this is a, think major huge clothes, groceries, everything in one place kind of retailer. And in the application, I pretended to be different types of people, and one type of person was somebody who has a limited amount of time to work per day, so somebody that could fit the profile of, say, a stay-at-home mother, most of the time, like somebody that would need to pick up their kids from school at 12:00, 2:30 or 3:00. So I put my availability from 9:00 to 2:00. What I found: I could not actually complete the application. Even though I had checked that I was applying for a part-time position and my availability was certainly enough for a part-time position, I could not actually complete the application. The application just would not refresh to the next page until I checked that I had unlimited availability.
Chad: Okay. So the question around that is, is that a technical issue from the vendor standpoint or is that a corporate issue? Right? Because the company could be dictating that. There's a separation between vendor and process and standard operating...
Ifeoma: But the program is allowing that. The program has to be programmed to allow that. Right?
Ifeoma: So should we have basically design mandates, like you can't do certain X types of programs, do you see what I'm saying? So whether it's a corporation mandating it or the vendor, if the vendor says, "We can't actually legally design that for you," You see?
Chad: Well, yeah. The question for me is, I mean, first and foremost, that's what audits are for, and that's what standards are for. So being able to actually point out where they're going awry with regard to standards. And if you can't apply, because at that point you don't become a candidate, you don't become a candidate, then you're not in the audit and you're not in the talent pool, right? That's a step that you need to take to be able to actually be "part of the record keeping process." So they were stopping that. I don't think that is the vendor's fault, number one. I think it's the difference between adding seat belts to a car and allowing somebody to actually take a left or right, whether they're taking the wrong directions or not. So we have to be careful around what we actually dictate vendors to do. Is that their responsibility, or is that the responsibility of an organization who could be following EEOC or OFCCP, being a government contractor rule?
Chad: So I think personally, from my standpoint, being in the OFCCP space for a very long time, that's on the employer. Now, are there some aspects where the vendor should definitely stay away, much like the HireVue instance? Yes, I think, again, that's the can we, should we kind of conversations, and that's pretty much where Illinois, they stood up, they pointed directly at HireVue and said, "You're the problem." So we need to start pointing out platforms and features and issues that are the problem, and again, do that through regulation, and at that point, nobody's gonna buy it. And we are a capitalist country, so therefore people are not going to buy it if it is against the law.
Joel: And there's a little bit of a buyer-beware highlight here, not to pick on HireVue, oh, what the hell, let's pick on HireVue.
Chad: We always do.
Joel: So HireVue in the last six months or so has updated their terms of service to essentially say that, "Hey, employer, if something happens legally, it's your fault, not ours." You're gonna see more and more vendors try to immunize or vaccinate themselves against legal issues and put the blame on employers. So if you're an employer, make sure the vendors you use, what kind of indemnity or threats or dangers might there be if their tech is discriminatory because you're probably on the hook if their tech is discriminating against candidates that you're interviewing and hiring.
Chad: And I think that was in direct response to California, because California is trying to push their regulations that actually start to hold the vendors responsible for some of these issues. So again, we're seeing some play here, some gamesmanship, but again, there is a huge buyer beware, not to mention we talk about auditing, who should audit? Who was credentialed to audit, and who was just audit washing? We saw, I think Mona actually published a study around Pymetrics where they were audit washing, they were paying an organization to say everything was fine, everything is good, and this is I think before they were even acquired, which brings up some other issues, legal issues. At the end of the day, there are many of these steps that we need to think about. We need to be incredibly intentional and thoughtful about putting frameworks and standards in place so that we don't have organizations going off the rails like we've seen, vendors going off the rails and hiring companies going off the rails.
Mona Sloane: Thank you for those contributions. I'm so glad that we kind of got to the juicy bits right away, which is regulation and enforcement of regulation.
Chad: It's what we do.
Mona Sloane: Yeah, so on that, on the kind of capital-A audit question, you brought up some very concrete issues and questions here, all three of you, thank you for that. With colleagues from data science, journalism and psychology, we actually conducted a stealth audit of two personality assessment tools that are used in the hiring space, Crystal and Humantic AI. And we found some instabilities in there that kind of show that these instruments are not really fit for purpose. For example, we found that one of the tools would predict a different personality type for the same person depending on whether the resume was uploaded as raw text or as a PDF. So non-job-relevant elements skewed the result here.
Mona Sloane: Now, that work was extremely cumbersome. It was interdisciplinary, and I think that that is definitely needed, that we kind of have not just a purely technical approach here and end up with a statement that says, "Oh, we just need to make the algorithm better," but there are reasons behind the fact that some of these technologies can't work, because they are snake oil, as Ifeoma said. So that was a long process and it was kind of a bootstrapped project, there isn't necessarily funding available for that, and so my big question for the two of you is how should we actually design that whole regulation enforcement process vis-à-vis audits. Who is gonna pay for that? We can all say we all need independent audits, we need stealth audits, but then what? Who is gonna do it and who is gonna pay for it?
Chad: I have a shortcut. There are hundreds of thousands of government contractors out there today who receive hundreds of millions, if not billions, of dollars, and they can easily, if they want the government's money, which they obviously do, be required to go through a battery of tests for the tech stack that they use. And Ifeoma was actually talking about the applicant tracking system earlier, which is really a relic of our past. We now work in a tech stack, where there's more than just one piece of tech, where before we just had an applicant tracking system, and that was our record-keeping process.
Chad: Today, we are much more advanced, and there are some incredibly powerful systems out there today that are stacked, all the way from programmatic outreach to the chatbot application process that Joel talked about, dynamic screening, matching, engagement, there are so many different things that actually happen. What we need to do is find easy ways to at least start the process, and it's very simple: I'm a taxpayer. If you want my money, you have to go through this battery of tests to be able to get those hundreds of millions and/or billions of dollars in contracts. That, to me, seems like the easiest way forward to get this moving.
Joel: The answer Mona, and Chad touched on this, to all of your questions, is money. So when you look at the problem... Employers are driven and technologies are created in large part because of supply and demand, and features are built because employers say, "We want that." Right? So if you create a system where an employer says, "I'm not gonna buy your tech unless you've got the seal of approval from blank," then that vendor is then encouraged, if they wanna stay in business, to get that badge, whatever that looks like in order to sell their product to employers. Now, does that badge come from a government agency? I prefer it not to.
Joel: I'd prefer private companies to create an audit system that is approved by the government, so that they can say, "Hey, we're gonna run a fine-tooth comb through your tech, we're gonna do an audit that's approved by the government, and we're gonna give you our seal of approval that is approved by the government agency or body that we've been audited ourselves by to provide this badge." If you created an ecosystem where the employer felt confident buying from the vendor, and the vendor felt confident selling it because they've been audited by an approved auditor, then you've got a winner. Now, how we get there, somebody smarter than me is gonna have to figure that out, but that's, I think, the environment that you have to create for everyone to be comfortable buying and selling products and services.
Chad: It's our current process now for OFCCP in distribution of jobs, it's our current process. We set a standard, the OFCCP has education and enforcement, then there is a layer of what Joel talked about, these organizations who know what the standards are and they help the companies abide by the standards, it's something that we already have in place. So it's not recreating the wheel, it's a process methodology that we already have, it's just new tech that we have to be able to credential.
Ifeoma: Yeah, I fully am so on board, especially...
Joel: Then we're done.
Ifeoma: We're done, we're done. It's so funny, 'cause I actually wrote a paper about this in 2020, about creating something called The Fair Automated Hiring Mark, and this would be a certification, and I likened it to how you have certified green buildings. So it could be, like Joel mentioned, a third party certifying that these automated hiring programs do meet the required standards of the EEOC, and I'm not sure that the EEOC can't get involved. It seems like you're very much against that, Joel, but in my paper I'm more open-ended. It could be the EEOC actually getting involved in issuing the certification, or it could be a third party. I see pros and cons either way, right?
Ifeoma: So with the EEOC obviously, they are a government agency, they have a lot of other things on their plate as well. With the third party, that's a new market. There's going to be people who will want to enter that market and provide that service, but there's also the con of, could it be co-opted? Could we get certifications that are not necessarily on the up-and-up? But I tend to agree with you, Joel, that it is about money, so I don't think that most third-party agencies, I mean third-party certification programs, would get away with certifying things that don't work, because they will get found out sooner or later.
Joel: They're a business, yeah.
Ifeoma: And then, yeah exactly, then no one will go work with them, so that's exactly what I argued in my paper. So I tend to very much agree with you. I also see that audits are part of this process. So we had talked about audits as being an ex-post regulation, but I actually think it could be an ex-ante regulation where you audit the algorithm or you audit the program before it's even launched and that audit is what allows it to be certified. Now, would I say that audit is enough? Would I say that's the only thing you have to do and then that's it? No. So in my paper, what I actually say is that even with this certification, the EEOC should still mandate that the employer do internal audits on a timely basis, I don't know what the time frame for it would be, that's something they can work out, but that they are required to do these regular audits and also keep the results of the audits because then if there is a lawsuit, they would be required to provide the results of the audit.
Chad: Let me interject real quick on that one, 'cause I don't think that being able to audit it beforehand makes any sense, because when we're talking about AI, it's all about the information the algorithm is trained on. So if it's trained on nothing, then, I mean, you're really auditing nothing, you're auditing behavior. Again, the AI itself is not generally the problem. Amazon's AI wasn't really the problem; it was the information that was fed into it. The machine is what it's fed, it's what it learns, right? So I do agree...
Ifeoma: Right, but there are training models out there you can use. Right now there's been a big move for training models to be made available on an open-source basis, so there is a lot of training data that can be used for audits. Now, will it be specific to perhaps the specific corporation? No. So that's why I say that first audit can't be the be-all and end-all; the individual corporation still has to do the internal audits, because then they're using their own specific data. But I still believe that you can audit the program before it's launched, just using all the training data that's out there free of charge.
Chad: It's gonna be an entirely different animal depending on the data that it's trained against, right?
Ifeoma: Sure. Well, it's possible.
Chad: So it's taken us 49 minutes to get to ChatGPT. ChatGPT could actually be an entirely different animal if it was trained on current data. I understand what you're saying from a basic philosophy standpoint, but to be quite frank, it can grow into using those standards moving forward. The biggest point of audit for me is after it starts eating that data.
Ifeoma: Sure, but even using the ChatGPT example, not to be pedantic, you're seeing the problems already, because what is ChatGPT right now if not, basically, a huge audit, right? Everybody using ChatGPT is beta testing for them; they're doing the audits for them right now.
Chad: Yeah, it's genius.
Ifeoma: So you have all this wide variety of data and you're still seeing problems, different types of problems, depending on who is interacting with ChatGPT and what type of data they're putting in, but you are seeing the genres of problems already. So I think that's useful. Anyway.
Mona Sloane: So before we go further down the ChatGPT rabbit hole, just another call to the audience that we'll be starting to ask audience questions very soon, so feel free to drop them into the Q&A box to the right of your screen. I wanna shift gears just a little bit and talk a little bit more about the HR tech industry per se. We have some very concrete ideas about regulation, how this could be done, who should pay for it, but as people who are not in that sector, in that industry, it would be very useful to understand: what does that sector look like? Who are the players? What are the specific tools, especially the "AI-driven tools," that are out there? And what are the tools that recruiters really like to latch on to? 'Cause I know there are some that they're just being mandated to use and others that they really love. So maybe Joel and Chad, you can give us a little bit more inside knowledge into that whole space, which is quite hidden from public view, to be honest.
Joel: Hidden from public view, the dark web of recruitment. We talk a lot about automated recruiting, and we talk about augmented recruiting, and I feel like the augmented stuff is catching on much more with recruiters than the automated stuff. So you've got conversational AI, basic questions, 24 hours a day, ask a chatbot on your phone or on whatever career site you're on; that's embraced. Conversational AI is not going anywhere. Scheduling, automated scheduling: scheduling is a pain in the butt, so if you can augment that for a recruiter, that's gonna be popular. So you have companies like GoodTime, and most chatbots or conversational AI solutions have scheduling as well, so people can control that. The other one is probably sourcing. In other words, we have a huge database of people, here's my job, now go find me people who qualify for that job. And the augmentation, or the robot, says, "Okay, here's everyone in this database that we think you should be talking to or recruiting." So those three kinds of things, or anything that is a real pain in the butt or time-consuming, is being replaced and embraced by recruiters, from my perspective.
Chad: Yeah. And my favorite, and what I think has the most promise, is this: companies today spend hundreds of millions of dollars on recruitment marketing alone, and that's just pushing jobs out to be able to pull people in. Then they build these candidate databases, which we call the black hole, where candidates go. Well, now there are these amazing matching technologies that give you an opportunity to use that database you spent all that money on to draw those individuals back in to apply for similar positions, or positions they could prospectively be more qualified for. That's just an invite, and it's a lot of heavy lifting from a data standpoint, but if you have the behavior of the candidates, and you have their past, and you know what jobs they meet the requirements for, that's an easy match, just inviting them back to apply for another job. My favorites, and obviously with the Chad & Cheese podcast we have sponsors, but my favorites: you're talking about programmatic job distribution, you're talking about matching, you're talking about conversational AI.
Chad: So the thing that I think ChatGPT has brought to us is, first and foremost, demonstrating that the promise of AI is finally being met today. We've been seeing vaporware for years, and vaporware is pretty much just a promise that never came true. Today, I think the promise is starting to come true with some of these products, and that's pretty powerful. And then being able to have transparency, which ChatGPT provides. If we see more of that from our industry, in recruiting and outsourcing and all these things, then we aren't trying to look into a black box; we have more transparency, and we understand what's happening and how it's part of the process itself. Because, to be quite frank, building a tech stack for most talent acquisition professionals is like trigonometry, for goodness sakes.
Joel: Yeah, it's a hard question to answer because there are so many providers. It's not like the days of, "Where do I post my job to make sure everyone in the US can see it?" "Well, go to Monster, go to CareerBuilder." Your best tool is either a really good recruitment ad agency that knows these tools, or go to a G2, go to a Product Hunt, look at reviews, ask your colleagues on LinkedIn who they use. It's really dependent on your needs, your location, what you're hiring for; there are so many variables, it's a hard question to ask or answer. And Chad and I get that question all the time: "Who should we use to solve all of our problems?" And unfortunately, there's no silver bullet that we can recommend that puts everyone on easy street with their recruiting automation tools.
Mona Sloane: I'm gonna ask a question that I get often when I'm asked about my own research on this topic, which is: how do you trick the system as a candidate? And that's also something that, of course, is interesting for our students. Ifeoma's laughing. We get that question, "How do I tweak my CV?" We have this kind of urban myth of, "Oh, you just put the keywords in white font on your CV, and then the system picks it up and pushes you up in the ranking." So I hear these myths, and then when I interview recruiters and sourcers for my research, they categorically deny that that's even a thing and say, "Absolutely not, you cannot trick the system and you cannot trick me." And I'm so curious to hear your responses to that whole space, that whole question: candidates and AI.
Chad: It's funny that you say the white text...
Joel: How much time do you have? How much time is left? [chuckle]
Chad: It's funny you say the white text, 'cause that's like circa 1998.
Joel: SEO, yeah.
Chad: Yeah, that was one of the things that we were doing. It was all keyword search back then.
Joel: Yeah, the meta tags.
Chad: And we were actually teaching employers how to embed some of those keywords in HTML at the bottom of their job descriptions as well, so that they would rank higher. So it was happening on both sides, and it will continue to happen on both sides, the gaming. And we talk about ghosting, and how candidates ghost employers. Well, they ghost employers because employers ghosted them first. It's a learned behavior. So for me, the hardest one to believe is the psychobabble that happens out there in all these pseudosciences, which I call psychobabble in most cases, because I can go through and answer questions differently, as I think they want me to answer them, to try to trick the system into seeing what I think they want versus who I really am.
Chad: And not to mention... I mean, if you take a look at a lot of the data that's out there, women won't apply for jobs unless they're 100% qualified or even more, versus men, who could be like 20% qualified and we don't care. It's like, "Ah, I can do the job," right? That's almost like tricking the system itself: I see what the requirements are, but I'm gonna go ahead and push past that and click apply, because I think I'm qualified for the job, whether they think I am or not. So it's been happening, and as Joel said, there are so many different ways for a candidate to trick the system, but there are even more ways for employers to trick the system.
Joel: I love this question, because we talk so much about automated employer tools, but job seekers have automated tools too. To get to the tricking-your-resume question: yes, that white text on a white background thing, really, don't do that. However, there are some sound SEO strategies that you should still be using, like a good title, like a clean structure. This is for real: the robots that are scraping your resume are dumb, so make it as easy as hell for a robot to read your resume. Go to Google Docs or wherever, get the most basic formatted resume, and use that as your resume. Don't get fancy with columns or images or graphs or anything like that...
Chad: That will break it.
Joel: Straight text, man. Make it as simple... Like, imagine a robot is reading your resume; make it as easy as possible for it to get your content. Now, back to some of the other stuff you were talking about: we had a story on our podcast recently about an ad agency that vetted candidates for a copywriter job. And one candidate answered every question with ChatGPT. They didn't even answer on their own; they answered through ChatGPT. And they actually got through to the final round of interviews using ChatGPT, and then, of course, they were sort of like, "Hey, this was an experiment."
Joel: But employers need to be aware that job seekers are gonna get really good at applying to a lot of jobs as if they're a human being going through the automated interviewing process, and how do you police that? How do you really cut through the best candidates if they're all using a natural language processor to answer your interview questions? It sounds like a sci-fi movie, but we're basically almost at a point where robots are interviewing robots to figure out who actually gets to speak to each other face-to-face. And that does nobody any good because it's not really who they are. So it's getting a little bit weird out there, we'll see how it shakes out, but the job seeker side of this equation is real, and it's something that we need to be aware of.
Chad: And Mona, about the experiment you mentioned earlier with PDF versus Word documents: some parsers will break on PDF documents, so you will get different results if you're using a PDF versus Word. That's happening, and it has been happening. Most of the more advanced parsers don't have that issue, but not all.
Joel: Text only, baby.
Mona Sloane: Text only, which is really interesting, 'cause it really is talking with the machine and to the machine, and sort of, how do we trick the machine so we can get around it quicker, so we can actually talk to a human, whether that's a recruiter or a candidate.
Joel: You're not really tricking it. I don't like the word trick; I like the word optimize. You're making it as easy as possible for a robot to read your resume, index it, and then make it searchable, scannable, bring it together with whatever algorithms are sourcing that candidate. I think trick is not a good word. I know English isn't your first language, but optimization or standards, I think, are better framings than teaching people how to trick the robots.
Mona Sloane: Yeah, be good data.
Chad: Well, and tech is like a three-year-old at this point, you're doing baby talk to it versus trying to use PhD level...
Joel: Oh yeah. Some of this shit, I don't know if I can cuss or not on this webinar, but some of this stuff is homemade. Some of these recruiters are like hackers, self-taught programmers, and they're making up their own stuff. So like that's super simple. So yeah, make sure your resume is as simple as possible kids.
Mona Sloane: Be good data. I want to take the last 10 minutes that we have together to bring in the audience, who have actually been extremely active in the Q&A; we have a ton of questions. So I'm gonna throw specific questions to one of you. I'm gonna start with one that I found really interesting, which asks, "Focus has been on AI in hiring, but should we pay more attention to AI in the firing process?" Ifeoma, I think that one is for you.
Ifeoma: Yeah. So I'm so glad somebody asked that. Actually, I have been looking at that and thinking of writing a paper on it. Here is the problem from a legal standpoint: you have more recourse when you are not hired and you should have been hired than when you've already been hired and then fired, as long as you were not fired for an explicitly discriminatory reason. So just to simplify that, most of the US is employment at will. That means you can be fired for any reason, as long as it's not about your race, your gender, your religion, right? So it's a little more tricky for workers to have recourse when they're fired by algorithm. I'm calling my paper Firing by Algorithm.
Ifeoma: So it really comes down to, well, okay, maybe you're not gonna be able to get your job back if you're fired by algorithm, but should there still be some sort of regulation about treating humans that way? Do humans deserve some sort of explanation, some sort of human contact when they're getting fired? I think so, right? But I'm also not a CEO of a major corporation with thousands of workers. So I'd love to hear from you industry people, like what do you feel about this trend towards firing people by algorithm as we've seen so many companies doing now, whether it's algorithm or just an email or some people got an automated text that they were fired. What do you think of this trend?
Joel: I think...
Mona Sloane: Joel, you can take that, and then we'll move on to the next questions 'cause we have a whole bunch.
Joel: Oh, yeah, I'll take that one. I think some of it is country-specific. As I say on the podcast, this is America, Jack, and we're really good at firing people. There was a time when that was really taboo; I think we're getting away from that. Getting fired via email, getting fired via text, we're either becoming numb to it or we just accept it. Now, country by country, that's gonna be different, but in America, I think it's gonna happen. Look, again to ChatGPT: we did a show the other day about the black hole and not getting a "Thanks, but we've moved on to another candidate," right? You can go to ChatGPT today and say, "Hey, write a letter to a candidate who didn't get the job, saying you're sorry, blah, blah, blah, for this job description."
Joel: And ChatGPT will create a really nice little form letter that could go out automatically to these candidates. And that can also happen with employees. So you could really easily create a natural language processing strategy where letters go out to people that sound really nice and sound really human-esque to let people go. So I think, yes, it's the future, whether you like it or not. Corporate America doesn't give a shit. That's the way that it's gonna go because again, it's efficient. You don't have to have the uncomfortable conversation face-to-face about thanks for playing, but we're moving on. It's gonna be automated as well. And people will just take it like most of the things that they take in the workforce.
Mona Sloane: Thank you for that. I think we should have a whole Co-Opting AI session on that one, maybe later in the semester. I'm gonna move us on and combine two questions, from Sig Silberman and Heather Moffett. It sounds like in five to ten years we should expect to see a lot of technical standards from bodies such as ISO, ANSI, NIST, and IEEE become important in this space. So the question is, is that plausible? And I'm gonna tag on Heather's question, which asks: what if there were a sample data set that could be used by this standards body? Wouldn't companies feel more confident running through it to ensure no adverse impact prior to launch, and wouldn't that help with the adoption of these technologies? Chad, this one's for you.
Chad: Yeah, I think...
Joel: That's good 'cause I don't have enough degrees to even understand that question.
Chad: Any organization that has standing, yes, that would be wonderful. We're in the Wild West right now, right? So I think the best we can do is start to create standards around the regulations that we're trying to press as it is. We have laws on the books that are already supposed to be in play, like in New York City, where we've now gotta wait till April. Again, these are signals, and companies should be taking these signals, but the vendors who already do compliance audits and those types of things should also be taking these signals, building their own standards, and working with local, state, and federal governments to apply them. If they have IEEE or what have you to back them, that is wonderful. I think those types of partnerships make sense.
Mona Sloane: Thank you for that. I'm gonna combine a couple of questions on audits, and I know Ifeoma has one she wanted to tag on here as well. So Jiahao Chen, who I'm glad to see in the audience, is saying that all of you brought up ex-ante auditing as a possible requirement for employment AI. Those audits are not only possible but already exist in other verticals, such as consumer finance. But one key difference is that employment AI systems are decision-support tools, not usually used in fully autonomous business processes. So he's curious to hear what you think about how AI that aids decisions should be regulated differently from autonomous decision-making AI.
Mona Sloane: And then we have another question from Nilesh, who is saying, "Well, there perhaps also is bias in other parts of the process, in sourcing for example, by reaching out to candidates on LinkedIn who worked at major consulting firms and went to Ivy League schools. So isn't the process already biased?" That's part of the AI auditing question too. And then Ifeoma, I know you had one about addressing that via the contractor idea that Chad articulated earlier. So I'm gonna toss it to you, and you can decide who gets to answer these three questions.
Ifeoma: Yeah, so I think this is for Chad, and Joel can definitely jump in, of course. But yeah, Chad, you touched on a subject that made me perk up immediately, 'cause it's something I've written and thought about, which is the fact that federal contractors have these higher standards imposed on them for making sure they have, frankly, a diverse workforce, including disabled workers, for example, standards that other corporations don't have. And you mentioned that this could be a start for regulations and maybe for audits. I just wanted you to touch on that a little bit more, because I think we're still struggling with the idea of what a meaningful audit would be. What would it actually look like? For other industries, like the financial industry, we have Sarbanes-Oxley, which lays out what your audit needs to look like. We now have established industries of auditors; there are actually certified auditors who will come and do the audits. But we don't have that yet in the automated hiring space. So let's say we start with the contractors, which I think is good low-hanging fruit: what would a meaningful audit look like?
Chad: Well, first and foremost, there is a robust set of contractors and advisors in that space, so if they're not already putting together solutions for this, I would be surprised. Because obviously, when you're talking about OFCCP, you're talking about Section 503 and individuals with disabilities, you're talking about VEVRAA and veterans hiring, you're talking about the whole scale of diversity and the things you have to do as a federal contractor. And once again, these are higher requirements because you are taking money from the federal government, so the federal government wants to ensure that you're meeting these higher standards. So again, I think this is very simple and could pretty much ride along with OFCCP regulations. The thing is, they have to. And this is one of the things that the EEOC did not do in their last webinar: they did not bring vendors on, they did not bring practitioners on. They had only academia. That is literally a tenth of what they needed, because the work happens mainly with vendors and practitioners.
Chad: Academia is there as advisors, and I think that is amazing: research, advice, those types of things. But we need to ensure that we pull the community together, and we have a vehicle with which to do that. The vehicle is money, and that money is government contracts. So I really believe we could pull that together. And again, that could be part of OFCCP and the current standards and outcomes that contractors have to abide by. There will be additional work to do around understanding how these outcomes happened through the scaling of an algorithm, but you still know what the outcome is. This is not something we don't already see, because the hiring is there, the talent pool is there. All of that is the same. Nothing's changed. So now what we have to do is dig into the algorithm to understand where it's going wrong.
Mona Sloane: Thank you for that. We are at time, so I wanna ask my closing question to all three of you. Joel, I'm going to start with you: where do you see this space in five years?
Joel: This space being recruiting or the AI?
Mona Sloane: AI and recruiting and...
Joel: Yeah, it's gonna happen. I think the guardrails, safety nets, whatever metaphor you wanna use, are gonna be put in place, 'cause there's simply too much money to be made or saved in automating the recruiting process. Even now, we're seeing companies laying off thousands of people, and recruiters and HR professionals are part of those layoffs. They're not being brought back, or they're being brought back as contractors more than people would have thought. And most companies are looking for these automated tools, these platforms, to manage everything from recruiting to payroll to onboarding to offboarding. Everyone's looking for technology to save money and create efficiencies. So it's going to happen. The legal asks, the issues around bias, they might not all get worked out, but they'll get worked out to the point where people aren't afraid to buy services and create new companies and sell services. Guardrails will be created, and this space will be off to the races in terms of AI.
Mona Sloane: Thank you for that. Chad?
Chad: Recruiting, also known as talent acquisition in our space, is literally the beating heart of every company. No product or service gets ideated, developed, sold, or serviced, and no customer gets retained, without the actual talent that is acquired through recruiting. It doesn't exist. But it's incredibly underfunded, much like Joel said. Being underfunded means you're flooded with tasks, and many of those tasks can be carried out by robotic process automation or AI. Within the next five years, I think we will see more augmented recruiters, where at least 70% of their tasks are actually handled by RPA or AI. And then those individuals, depending on the organization and its care for candidates, can actually use their people to be more human. Today, recruiters can't be as human, because they're doing all these stupid little tasks. If you give them their time back so they can actually give it to the candidates and be more human, then we can put the human back in human resources.
Mona Sloane: That's a great...
Joel: By the way, imagine a world where you apply to a job at Tesla, and Elon Musk is actually the one interviewing you on your screen for a job at Tesla.
Chad: I'd hang up. [laughter]
Joel: Yes, politics aside, however you feel about Elon, we're heading to a world where you can't really tell the difference between a video of a human and an actual human. And Elon will speak different languages based on where you're located in the world. This is where we're going. Five years is a long time with how fast the tech is moving. I think you'll be gobsmacked by what it looks like in five years.
Chad: I'll give you a great example. We have a podcast that we put out in English, and we have had our voices cloned, Joel's voice and my voice. It is now also in four other languages: German, French, Spanish, and Portuguese. The AI has translated our episodes and cloned our voices in those different languages.
Joel: Yeah. All it needs is the text.
Mona Sloane: I will audit the German one.
Joel: You sure?
Mona Sloane: I'm gonna toss it over to Ifeoma for the last word before I close it off. We're a little over time already.
Ifeoma: Yeah, I mean, I really can't add too much more, just to say that I do concur with this idea of a move towards more augmented decision-making rather than some sort of wholesale transition to AI. To be more colloquial, I basically see half of the people that have been fired in favor of GPT getting hired back, essentially, because we're gonna realize that it's not the same. We still need the humans. So, yes, I do think there will be more helper AI, more augmenting, and more making the mundane tasks efficient, but we're still gonna need that human decision-making with the final say.
Outro: Wow. Look at you. You made it through an entire episode of the Chad and Cheese Podcast. Or maybe you cheated and fast forwarded to the end. Either way, there's no doubt you wish you had that time back. Valuable time you could have used to buy a nutritious meal at Taco Bell. Enjoy a pour of your favorite whiskey. Or just watch big booty Latinas and bug fights on TikTok. No, you hung out with these two chuckle heads instead. Now, go take a shower and wash off all the guilt, but save some soap because you'll be back. Like an awful train wreck, you can't look away. And like Chad's favorite Western, you can't quit them either. We out.