The Chad & Cheese continue the well-pedigreed conversations, this time with a CEO/cofounder reppin' Harvard, Dartmouth, and MIT. Pymetrics' Frida Polli joins the boys on a quest for a better understanding of personality tests, matching of soft skills to job openings, and the threats that A.I. backlash poses to businesses like hers. There are even tips for female entrepreneurs.
Anything this smart has gotta be powered by Sovren, with AI so human you'll want to take it to dinner, right?
PODCAST TRANSCRIPTION sponsored by:
I just don't want people to get the impression — because I just don't believe in it — that there's such a thing as people that are always going to be top talent. I think that's kind of a myth, and it leads us to a lot of unproductive places, I think, in the world of recruiting.
Hide your kids! Lock the doors! You're listening to HR’s most dangerous podcast. Chad Sowash and Joel Cheeseman are here to punch the recruiting industry, right where it hurts! Complete with breaking news, brash opinion and loads of snark, buckle up boys and girls, it's time for the Chad and Cheese podcast.
Aw yeah, we're getting real cerebral on a Friday, kids. Welcome to the Chad and Cheese podcast. I'm your cohost Joel Cheeseman, joined as always by Chad Sowash, and today, holy shit, we're getting big-brained, everybody. Okay, we got Frida Polli, co-founder of Pymetrics, and that's just the tip of the iceberg. She's a smarty pants: Harvard MBA, Dartmouth undergrad, PhD from Suffolk. My brain hurts already. Frida, welcome to the podcast.
Frida (1m 3s):
Thanks you guys.
Joel (1m 4s):
Let me apologize ahead of time for everything that's about to go down.
Frida (1m 7s):
No, I, this is a great way to spend a Friday morning.
Joel (1m 10s):
Unwind on a Friday.
So what should we know about you that I didn't cover? You don't like Irish people. What else?
Frida (1m 18s):
I mean, and they don't let them go drinking, on a hundred percent.
Chad (1m 28s):
Not cool Frida, not cool.
Frida (1m 30s):
I don't know what you want to know about me. I was born in Italy. I'm not from this country, although I sound like it. I'm not an army brat, but I moved around a lot because my dad was in management consulting. More importantly, I guess, for the show, I spent 10 years at Harvard and MIT becoming a smarty-pants cognitive scientist, and that's where a lot of the science that we use at Pymetrics comes from. And then, you know, I went to the MBA program at Harvard, and that's where I saw recruiting firsthand and thought to myself, Ooh, all that science we've been using to look at people's soft skills could really come in handy in this problem of people-to-job matching.
Frida (2m 13s):
And that's how Pymetrics was born. Because really, what I saw at HBS was, it's not that people were confused about what was on people's resumes — that was pretty clear. What they wanted to do was figure out who they were as a person, who they were as an individual human being. And they were trying to, like, tea-leaf read off a resume. They'd be like, Oh, you know, Chad played sports — must be a team player. Or, I don't know, Joel had a side job in college — must be hardworking. And I don't know if that's true or not, but the point is, you can actually —
Joel (2m 44s):
I would fail Pymetrics.
Frida (2m 48s):
This is what recruiters are doing — trying to figure out who you are as a human being right from your resume, when you can actually measure a lot of those things directly. And that's what we were trying to do.
Joel (2m 58s):
You started this thing in 2013, right?
Frida (3m 1s):
We started the science part of it in 2013, that's correct. But we didn't have a product in market until 2016.
Joel (3m 7s):
Talk about being ahead of the game. What the hell did you look at in 2013 and say, this is the wave of the future, that allowed you to create this? And how has it evolved in the last eight years?
Frida (3m 19s):
Yeah, well, again, it was my experience at HBS. At HBS, tons of companies come and everybody's looking for a job, right? And so it was watching that process and realizing that people were still relying on coffee chats and resumes and all these pretty outdated things to understand, Hey, does this person have what it takes to do the job? And then also seeing the flip side of that, which is some colleagues wanting to go into investment banking when all their friends were like, but you like to sleep 15 hours a day — is that really a good fit? And then they'd get the job, and two days later it'd be, I hate my job. And so, just realizing that the matching was not working that well. And a lot of it was because the resume stuff is clear; it's what's not on the resume — the soft skills — that people really want to understand.
Chad (4m 1s):
Is it really something that we should be doing, looking at people's personalities and saying that person's perfect for this job, versus the skills they have? Because I mean, soft skills versus the hard skills — certifications.
Frida (4m 16s):
So I would argue absolutely. First of all, it's already what people are doing, right? Take the recruiting situation at any college or school: the resumes honestly all look the same. They've been formatted the same way, everyone's coming from the same school — it's very hard to distinguish. So they're already asking, how can I determine what makes this person unique? Soft skills are what make people unique. And they're not about categories, like pigeonholing people. It's actually about understanding what really makes them truly unique and enabling them to be that unique person. That's the way that we view soft skills, in any case. So it's not about homogenizing people.
Frida (4m 57s):
It's really about bringing out their diversity. The second piece about soft skills: unlike hard skills, they are far more equally distributed. So I always say you could take the same person with a particular soft skill profile and have them raised in an impoverished environment versus an elite environment. You're going to have very different resumes, as we can all imagine, but that person is fundamentally the same person, right? So if we want to start talking about equalizing gaps — whether they're socioeconomic gaps or racial gaps or gender gaps — soft skills are the way to do that. If you're going to continue to rely on hard skills — hard skills are way more unevenly distributed, because of different advantages that different groups have in life.
Frida (5m 37s):
And not only that — if we rely too much on hard skills, that's when we get into, you guys all remember the Amazon resume parser fiasco, right? Well, what was wrong with that resume parser? It was learning that their top folks went to certain colleges. Like, some of the things they had that were different: you know, women go to Bard — well, because they had more male engineers, and no male engineer had ever gone to Bard, right? Cause it's a female college, right? Women play softball, men play baseball. So if you are literally training it off of a resume that has so many proxy variables for gender, race, socioeconomic status, you are almost invariably going to have some pretty big issues in terms of creating algorithms that are not biased, or creating processes that are not biased. Versus soft skills are actually very equally distributed, so they are a good equalizing factor.
Frida (6m 26s):
We have this way that we look at people in terms of, are you sort of more biased to action and impulsive, or are you more thoughtful and really attentive to detail? Well, I have, since the age of zero, always been a little bit on the impulsive side, which makes me biased to action. That's a soft skill, right? Like where you fall on that continuum. It doesn't make one end of the spectrum better or worse; it just means that I'm going to be more predisposed to jobs where I can do stuff. And somebody who falls on the other side of the spectrum is going to be better suited to jobs where they can think, consider, and be more deliberate and thoughtful. Does that make sense?
Joel (7m 4s):
Yeah. And isn't it fair to say, as automation comes more and more into the hard skills, that the soft skills are really going to be what separates the top talent from everyone else?
Frida (7m 15s):
So it's funny that you use the word top talent. From Pymetrics' point of view — in my view — top talent is top talent for that particular job at that particular company. Like, I think Frida Polli is top talent for Pymetrics. I don't think Frida Polli's top talent for everything. You know what I mean? It's all about matching. It's like Netflix, or like dating apps, right? It's Rotten Tomatoes versus Netflix. Rotten Tomatoes assumes some movies are always good and some movies are always bad. Netflix doesn't assume that. It says, Hey, you know, Joel and Chad, you like these kinds of movies, Frida likes those kinds of movies, and we're going to optimize so that people end up in the right sort of places, where they are going to perform better.
Frida (7m 55s):
So if you're thinking about top talent as top talent for that job and company, a hundred percent. I just don't want people to get the impression — because I just don't believe in it — that there's such a thing as people that are always going to be top talent. I think that's kind of a myth, and it leads us to a lot of unproductive places, I think, in the world of recruiting. So —
Chad (8m 14s):
Amen, sister. So let me talk a little bit more about soft skills, because whenever I hear a company say soft skills, I'm always thinking to myself: how in the hell are they going to defend that against an OFCCP audit?
Frida (8m 30s):
Sure. Yeah. Well, when you think about O*NET, right, and the knowledge —
Chad (8m 32s):
No, I don't want to think about O*NET.
Joel (8m 34s):
Here we go. That's a mess.
Frida (8m 38s):
You know, we all don't want to think about O*NET, but unfortunately O*NET is an important thing in the world of employment. And so when you think about O*NET, or any way to think about jobs, there's knowledge, skills, and abilities, right? Knowledge is primarily more the hard skill domain, but skills and abilities can be things like attention to detail, for example — things that are what we're calling soft skills, right? And again, people talk about these things differently, but a lot of the things that we measure are actually very directly related to the KSAs of a particular occupation.
Frida (9m 20s):
So they're very defensible. And then on top of that, one can do job analysis — including interviews with subject matter experts and all sorts of other methods — to ensure that what you're measuring is related to the job, right? So you have the O*NET codes, you have a job analysis that you're conducting to confirm, or tweak, the initial idea that you based on O*NET. And then you're building models that presumably have good concurrent validation. And then last but not least, you're doing some sort of validity analysis on the backend. So I think there's a lot of ways that you can validate these things, and we have OFCCP clients that use our products.
Frida (10m 5s):
So I think it has great defensibility.
Chad (10m 8s):
And you've been up against that and defended against it? That's the big question, right? And that's the big stamp, I think, for any organization.
Frida (10m 17s):
I mean, to be completely honest, we haven't done that. And again, I know that OFCCP can audit you for a variety of different reasons. They're typically going to audit you if they see signs of disparate impact.
And so — again, I think I've mentioned this to you, but we have a way of creating our algorithms called fairness optimization, where we don't only optimize for performance, we optimize for performance and lack of disparate impact. So our platform actually won't release an algorithm if it has disparate impact. So the likelihood that we're going to get flagged by an OFCCP audit looking for disparate impact is extremely low, because —
Chad (10m 53s):
That's the answer right there.
Frida (10m 53s):
That's the answer right there: we don't have any algorithms that we release that have disparate impact. But I think historically, there's been this sort of dichotomy. It's like, Oh, either you can have these very predictive cognitive tests — but they've got a lot of disparate impact, so make sure your defensibility evidence is strong — or you can have these personality tests that don't predict a whole hell of a lot, but they don't have adverse impact, so you're good to go. So it's sort of been like the steak that tastes good but is not good for you, and the vegan burger that tastes awful but is not going to kill you. Right? And along comes the Impossible burger, and it's like, Hey, tastes really good, and Oh, by the way, it's vegan, so it's good for you. Right? And by the way, Chad, I've been using that analogy everywhere. But I think that's what more modern techniques can actually yield: something that is predictive.
Frida (11m 41s):
We have predictability, but it lacks adverse impact. And I think we have treated that as impossible for so long, and, come to find out, with some more modern approaches it's actually quite possible.
Joel (11m 57s):
Alright, so we agree this is a trend that's happening. And we also agree that there are pitfalls and minefields and whatnot. And I think for a lot of our listeners, or employers that are looking at integrating this and implementing it — everyone's doing matching now. I mean, everybody. So you have what I would consider like the Briggs and Stratton stuff, the Wonderlic stuff, what you guys do. And then we get into, you know, every ATS has matching, every job board has matching. So for someone that says, how do I make sense of all this? Do I use multiple solutions? Is there one solution to rule them all? What's the answer for someone that's looking at this landscape?
Frida (12m 36s):
Yeah. Well, look, everyone's obviously going to come up with their own answer, right? What I would say is, both your experience — your hard skills — and who you are innately as a human being — your soft skills — are equally important, right? And you might need to focus more on experience in certain cases, and more on aptitude in other cases. But I think both are critical. And generally speaking, when you're looking at most platforms that are doing some kind of matching, whether it's an ATS or a job board or whatever, they're generally relying on hard skills, right, cause they're using resumes and job postings. I think when you start going into the soft skill space, then you're thinking more about what's traditionally thought of as an assessment, right?
Frida (13m 18s):
So then you are looking at more traditional things, like Wonderlic cognitive testing, or SHL, or a Hogan personality test, or some sort of personality test. And I would say there are newer platforms, like Pymetrics, that combine both cognitive and sort of socio-emotional attributes, but also achieve this Holy Grail of lacking disparate impact while also having validity. So again, I would be biased, obviously, towards something that can be predictive in that space without having adverse impact. Cause I think it's pretty critical, not just from a defensibility standpoint, but also because it aligns with what employers are looking for. I mean, we all know the Zeitgeist of the time is diversity and equality.
Frida (14m 6s):
And I think it's a little bit counter to the times to continue to rely on tools that have a lot of disparate impact. That's my personal view.
Chad (14m 15s):
Well, let's talk about adoption really quick. It really doesn't feel like corporate America truly wants to change its biased patterns and processes. I mean, seriously, we put a human on the moon in 1969, for God's sakes, and we're still trying to tackle this issue. So why are employers so reticent to adopt tools like Pymetrics that would help make the process less biased and their workforce more diverse?
Frida (14m 43s):
So I guess I would say a couple of things, right? We work with employers who really are at the forefront of diversity. I mean, I can't name all our employers, but if you were to take a look, you would see employers that are truly pretty cutting edge when it comes to diversity, including some large OFCCP contractors, who are probably the ones really thinking about this the most carefully. So I do believe that there are sections of corporate America that really care. That's one thing, right? I think the second thing is there's a little bit of a disconnect: you have the CEO of the company being like, I want more diversity, and then somewhere down deep in a different part of the organization, you have somebody who's green-lighting a tool that has adverse impact.
Frida (15m 32s):
And those two parts of the organization are not talking to each other. And why is somebody green-lighting a tool that has adverse impact? Well, for a variety of reasons: because they think cognitive tests are super predictive, or they don't know that this tool has adverse impact, or whatever. So there's just a lot of lack of transparency and understanding of these types of things. I mean, I've personally seen this, right? I've personally seen debates between the business that's saying, Hey, we should be adopting this tool, and then other folks in the business being like, no, no, no, this cognitive test has been validated and defended in court. And you're like, sure, it has been defended in court, but we all know they have really bad adverse impact — they select three African-Americans and five Latinos for every 10 Caucasians.
Frida (16m 22s):
I mean, you're not going to get a diverse workforce using that kind of testing, right? And I think it's like 60% of companies still use them. So I think there's a bit of a disconnect between what people are saying they want and the comfort level with new things among folks that are used to evaluating hiring procedures, when these older tools, to your point, have been battle-tested, right?
Joel (16m 45s):
Will small businesses ever use tools like these? I'm guessing that your portfolio of companies are big ass companies. Right?
But do you have any thoughts on, does this go downstream into small companies or maybe even gig platforms?
Frida (16m 57s):
I think it does. I mean, we've focused more on large companies just because, you know, we don't only do recruiting. We also do mobility. We do re-skilling. We do L&D. I mean, anywhere where you need to understand somebody's fit to a role, right?
Joel (17m 10s):
They also have money.
Frida (17m 10s):
So there are a variety of different reasons, but we focused on those companies mostly because there's just so many things you can do within those companies, not just recruiting. We've been focused almost entirely on recruiting so far, but we do a tremendous amount more. I mean, the mobility and re-skilling space, I would say, has completely taken off. Because think about it, right? Especially with COVID, where maybe I've had to put a pause on hiring, but now my business has to respond to the environment in a completely different way. And everybody's like, Oh my goodness, I need to re-skill ASAP. Right? And so they're trying to understand — again, soft skills is a really good way of understanding what someone can be re-skilled into, which is not possible with a resume, because a resume just tells you what they've done, not what they could do.
Frida (17m 52s):
Right. And so we see a lot of demand for Pymetrics, both from the private sector as well as the public sector, in re-skilling opportunities. It's a no-brainer. And then obviously also a lot of the L&D market. So the point is simply that we've primarily focused on sort of the Fortune 2000, but I think there is applicability beyond that.
Joel (18m 14s):
Sort of a self-serve model at some point, maybe.
Frida (18m 17s):
Definitely, possibly. Joel,
Chad (18m 19s):
Joel just wants to know if he can go do the test. That's all he wants to do.
Frida (18m 23s):
I will be happy to send you a link. I mean, at some point I think it'd be really cool to have like a direct-to-consumer play as well. You know what I mean? Because we get that question all the time. Like, Oh, I want to learn more about myself, this is such a cool application, you know, blah, blah, blah.
Joel (18m 38s):
You guys do job descriptions. You guys do candidate-to-job matching as well, right? For the job seeker?
Frida (18m 44s):
Only in some cases — we have some instances where we do that. Yes, absolutely. So where we would do that, for example: we're working with the state of Ohio to re-skill workers. So in that case, absolutely, we do that. Also, the reason I say Pymetrics is an optimization engine is because, Hey, you go through Pymetrics once, you're applying to job A, and hypothetically, for whatever reason, you don't get job A — we actually can help you and match you to other jobs at that company where you would be a good fit. So we can do that, right? So that's matching within the employer. And then if for whatever reason you get dispositioned out of that company's process, we can actually match you outside of that company, to other jobs at other companies that you're a good fit for.
Frida (19m 25s):
So the reason I say this is because it's not the first way that a candidate would interact with us, probably, but it's oftentimes a secondary or tertiary way that somebody would experience us.
Joel (19m 37s):
It's commercial time.
SOVREN (19m 39s):
You already know that Sovren makes the world's best resume CV parser, but did you know that Sovren also makes the world's best AI matching engine? Only Sovren's AI matching engine goes beyond the buzzwords. With Sovren you control how the engine thinks with every match the Sovren engine tells you what matched and exactly how each matching document was scored. And if you don't agree with the way it's scored the matches, you can simply move some sliders to tell it, to score the matches your way. No other engine on earth gives you that combination of insight and control. With Sovren, matching isn't some frustrating "black box, trust us, it's magic, one shot deal" like all the others. No, with Sovren, matching is completely understandable, completely controllable, and actually kind of fun. Sovren ~ software so human you'll want to take it to dinner.
Chad (20m 39s):
It's show time. Have you seen the new documentary on HBO Max, Persona? Okay. Okay, so, I mean, this is almost like an hour-and-a-half, hour-45-minute campaign against, at least, Myers-Briggs.
Frida (20m 58s):
Personality tests. It's against personality tests, really, I think. Yeah. Yeah.
Joel (21m 3s):
A hit job.
Chad (21m 3s):
If you read into it. And obviously a lot of people have their, like, little cultish love of their four letters in the Myers-Briggs. Yeah, yeah. Overall, though, it kind of lumps personality tests all into this one bad group. What did you think while you were watching that? Were you cringing the entire time, or were you throwing stuff across the room, or what?
Frida (21m 28s):
No, I wasn't, actually. So they were really talking about — half the movie was about the Myers-Briggs, the other half was about the Big Five, right? So they're totally different tests. Let's forget about the Myers-Briggs. Their big point with the Myers-Briggs is it was developed by somebody who was racist. I was like, Oh, well, okay. I mean, that's unfortunate for sure, but I don't think they were as focused on bad uses of the Myers-Briggs. The bad uses, I think, really was more focused on the Big Five, right? And it was following the story of Kyle Behm. And look, I've had the pleasure of speaking with his father on numerous occasions. And quite frankly — I mean, look, this may be an unpopular opinion, but this was sort of what I was saying about — I think traditional testing often ranks people on a unitary scale, right?
Frida (22m 14s):
So IQ says, you know, smart people are always better for jobs than less smart people. And unfortunately, with the Big Five, there are lots of articles saying that people that are emotionally stable do better across all jobs, and people that are conscientious, and so on. And that is just not fundamentally the way that Pymetrics looks at things. Meaning we don't think of what we measure as unitary; we think of it as multi-dimensional, right? Like, for example, take the Big Five: nobody's going to be like, yeah, you should take somebody who's neurotic, not conscientious, and something else. And that's anti-Pymetrics, in the sense that everything we measure isn't unidirectional — it's multipolar.
Frida (22m 57s):
Right? So either end of the spectrum — like the attentive-to-detail versus the biased-to-action is a perfect example, and all of the things that we measure are like that, right? So I actually don't subscribe to the belief, sort of held by traditional practitioners of the Big Five, that, Hey, we should always be pointing in one direction. Which, again, I think is the issue that that movie is trying to raise, which is: you see these tests have been created so that they always recommend sort of "normal" people, right — people that are emotionally stable, people that are not neurotic, people that are conscientious, people that are agreeable. And, Oh, by the way, when you look at the opposite end of that spectrum, you find people that have mood disorders, right?
Frida (23m 39s):
Because people that have mood disorders are not necessarily particularly agreeable, because they're depressed, right? So even though it's not a medical test per se, I think the broader point that Roland Behm was trying to make is that you're still going to put people that are more likely to have mood disorders on the unhireable end of the spectrum. Does that make sense? And that's very un-Pymetrics, in the sense that, again, it's back to what I said before: either end of the spectrum could be fine. Everybody has many matches in the Pymetrics system. And we have, I don't know, 700 algorithms — job models — at this point that we've built, so there isn't this sort of creation of great employees versus not-great employees.
Frida (24m 22s):
It's, again, it's matching. It's very similar to the idea that we're Netflix, and this other system is more like a Rotten Tomatoes ranking system.
Chad (24m 31s):
That seems very meaningful to me. Did you reach out to Kyle's dad? I mean, what was that all about? Was that before or after? Do you have a relationship?
Joel (24m 40s):
Who's Kyle again?
Frida (24m 41s):
Kyle is his son. So I'll be totally frank: I have a personal history in my family of someone that I'm very close to who has very severe mental illness. And I'll be totally honest, too — I have in my lifetime suffered from depression, right? When I was in college, and it was pretty significant, and I had to leave school for a while and get treatment for it. And I guess the reason I'm telling you this is, in thinking about how to build these systems, I sort of came to it with this idea of, Hey, we want to make sure that these systems — again, we can never be a hundred percent sure, but we want to be as sure as possible that we're not creating a system where people with mood disorders, or something else, might be excluded.
Frida (25m 25s):
Right. And I say that because I think having had that lived experience — again, I'm not saying it's perfect; I'm sure there's a lot of things we can improve on. But I think the fact that everybody has many matches at Pymetrics means that we're not creating a class of unemployable people. We know that for a fact: nobody that goes through Pymetrics is rejected from everything. Everyone has fits, right? And we know that because we've looked at the data — we've looked at all the millions of people that have gone through it, and we know that every one of them has multiple matches. Versus if you use this more unitary scale, where everyone's using the same algorithm and it's based on certain ends of the Big Five spectrum, you will be creating a system where people will be permanently excluded from things.
So again, whether it's my lived experience that led me to build it this way, or just — it's also that cognitive science is just different from traditional personality theory, in the sense that it views the brain as very modular, with these different things that you can measure. And again, it's this idea that neither end of the spectrum is good or bad; it just makes you better suited for certain things than others. So it's just a very different philosophy of people that leads, I think, to a different design thinking.
Joel (26m 40s):
I feel like there's a little bit of — well, there is a backlash on a lot of this stuff. I mean, the movie sort of represents that, but we're also seeing facial recognition get banned and questioned, we're seeing DNA tests — what's going on with this? Maybe your business isn't doing any of this stuff, but do you ever feel like it's a threat that you're going to just sort of be thrown in with this group, and companies will be afraid to use you, and there'll be lawsuits for companies that do use these services? Or do you not worry much about it?
Frida (27m 9s):
Yeah, look, so, I mean, we absolutely think about it. I'd say we think about it more than we worry about it; we think, and then we do, rather than worry. So, first of all, I don't know if you know this, but we hired two people from the EEOC, like, years ago to come and help us build the platform. Right. So we really have built Pymetrics with compliance in mind. And to be honest with you, I was very nervous when I did that, because I was like, oh my God, they're going to find something and we're not going to know, and then we'll be blacklisted. But we really put complying with federal regulations, and just general fairness, at the core of our product. And so that doesn't mean that we've solved every problem or anything like that, but it does mean that a lot of those things that you've brought up, I think, we thought about ahead of time.
Frida (27m 55s):
Now, do people have concerns about using artificial intelligence? Yes, people do, for sure. And I think rightly so, because we do see these examples of it being used in ways that are not helpful. But then you always have to remind yourself that, hey, these older tools also have issues, as we just talked about with the movie. So my only view is that no type of technology is always going to be, you know, good versus bad. You really have to investigate each particular technology platform differently and sort of make a decision.
Joel (28m 32s):
Can we play a little science fiction real quick? You mentioned Netflix versus Rotten Tomatoes, and you've talked about sort of the customization. A story came out recently about Netflix that I found really interesting: it's not only the recommendations of movies, but also the artwork that they show people.
Chad (28m 50s):
Joel (28m 51s):
So an easy example is there's a white guy and a black guy in a movie, right? It's Lethal Weapon. So if I'm a black male, I might see Danny Glover. If I'm a white male, I get Mel Gibson. Right. So that sort of customization is a little bit scary, but also really powerful. Do you envision a day where job descriptions are custom in that way? Or, we're seeing automated videos and deep fakes, where you're talking to a "person," but it's actually a machine, where the person you talk to is customized to who you are, sort of like what Netflix shows you. Do you envision a future like that, or am I way off base?
Frida (29m 30s):
So if we just leave it in the hiring context, I think that would be trickier, only because it's a regulated space. Right. And I think it was Facebook that got into trouble recently because they were only showing job ads to people in certain age ranges. So I think that because of regulation in certain industries, you know, housing, employment, financial lending, you're going to be more constrained, and with good reason, you know what I mean? Otherwise, if it was a free-for-all, you probably would be seeing that. Right. But there are regulations that, to the best of my knowledge, sort of prevent that type of, what I would call more unnecessary, personalization from happening.
Frida (30m 15s):
'Cause you're talking about personalization there at an individual level. My only point was simply to say that, historically, we've thought about jobs as being quite unitary, and the things people need for jobs as being quite unitary. Right? I mean, go Google this, you'll find five articles saying everyone that's smart, hardworking, and conscientious is always good for a job. Right. We just have this philosophy, and then all these Talent Wars, and there's talent winners and talent losers. And to me, it's so 1950s. Everybody has value, everybody has potential, everybody has their right fits and their wrong fits, you know? And again, HBS was a perfect testing ground for that theory, because if the theory is that you need to be smart, hardworking, and something else, that means HBS kids would be, hypothetically, well suited for everything.
Frida (31m 2s):
And they're just not. I mean, I saw that firsthand. Right. And it's just common sense, really. So I think it's about personalizing, not to the extent of, hey, I'm going to show you, Chad, a different picture. It's more just around not assuming every job is the same and that these three characteristics are going to make you well suited for every job. You know?
Chad (31m 22s):
So Pymetrics goes at the process with gamification. Why did you choose to go with gamification? Was it to make it more of a pleasurable experience, because tests suck? What went into that decision, and how much more research did you have to do to get that right?
Frida (31m 43s):
Yeah. So, people call it gamification because it's a broad category, but all of the Pymetrics tools are actually tools that I used, and that tens of thousands of cognitive scientists across the globe use every day, in research. So those are actually research tools, just as an FYI. That's not true of all gamified platforms, but the Pymetrics exercises are actual scientific assays. I mean, I used these in experiments during the 10 years I spent as a cognitive scientist; I used all of the exercises that we use for talent assessment in research situations. So that's the core premise of Pymetrics: taking new science, because it's a whole new scientific field.
Frida (32m 28s):
And we actually wrote a paper, a research brief, with MIT on this recently, basically explaining how these tools work. Like, if you take the Big Five, right, it's a personality inventory that has these five constructs. It's very different from what we've done, which is to select individual exercises that look at particular modular functions within people, whether it's attention or planning or sequencing or risk preference or effort expenditure, things like that. So all I'm trying to say is that when I saw the problem of recruiting firsthand and knew that we had this whole new scientific way of looking at people through their behavior, rather than through self-report or, you know, doing math problems, the light bulb was like, oh wow, that's such a powerful new scientific approach to this problem that's historically been challenging.
Joel (33m 20s):
We like to highlight diversity on the show whenever we can, and particularly on the founder side of things. So for the female founder out there that wants to start a business, that's looking to do this: what advice would you give her to make that leap? And in particular, what challenges did you face as a woman raising money that might be unique to that situation, that might be helpful to hear?
Frida (33m 45s):
Yeah, sure. So I was not only a woman, I was a single mom at the time. So it was kind of an interesting time period in my life. Look, I think at the end of the day, you will experience more challenges. I mean, the data don't lie: women raise less money, and so on and so forth. And that's just true. You can't ever pin a particular episode as being, oh, that was an example of it, but in aggregate, that's the case. And so I think you just have to mentally be prepared for that. But I also wouldn't say to focus on that, right? You don't want that to be the only thing you have going into this process. And you have to also realize that there are people out there that are very supportive of women, and you have to find those advocates, those people that are going to be really supportive and really help you along your path.
Frida (34m 37s):
So again, I guess I'm trying to strike a balance. You can't be a Pollyanna and be like, oh, sexism doesn't exist. Well, unfortunately, it does. But at the same time, I think you can't let that be the reason. If you're passionate about something, you just have to go ahead and do it.
Joel (34m 54s):
How important, or unimportant, was your education? 'Cause I think something intimidating about you is that you've got degrees up the wazoo, and I assume you're going to say you don't need that to be an entrepreneur.
Frida (35m 4s):
No, I mean, well, I would absolutely say if I had been starting a shoe company, that would have done diddly for me, you know what I mean? I think what it is, is domain expertise, right? So I wanted to start a science company. I had a lot of fancy degrees from good scientific institutions, so people were like, oh, she probably knows what she's talking about. If I wanted to start a shoe company or a toothpaste company, they would have been like, you have no idea what you're doing, we're not going to fund you. Right. So I don't think you need fancy degrees to be an entrepreneur; I just think domain expertise is super helpful, no matter what. And I think, unfortunately for women, it's more important, because there's been research on this too. For a guy, it's like, oh, he has so much potential; he's never done it, but I'm sure he can figure it out. Right?
Frida (35m 44s):
For a woman, it's like, has she done it nine times before, and was she successful each of those nine times? Okay, then fine, maybe we'll fund her. You know what I mean? So there's definitely, and again, there's research to back that up. So I think domain expertise is more important for women. I don't think that's a bad thing, right? I mean...