The Dark Side of A.I.


Artificial intelligence (#AI) is running amok! Someone has to police this out-of-control technology. Enter Miranda Bogen, a Senior Policy Analyst who focuses on the social and policy implications of machine learning and artificial intelligence, and the effect of technology platforms on civil and human rights. Plus, she really classes up the joint.

Enjoy this Talroo exclusive.

PODCAST TRANSCRIPTION sponsored by:

Disability Solutions helps companies find talent in the largest minority community in the world – people with disabilities.

Chad: Talroo is focused on predicting, optimizing, and delivering talent directly to your e-mail or ATS.

Joel: So it's totally data driven talent attraction, which means the Talroo platform enables recruiters to reach the right talent at the right time, and at the right price.

Chad: Guess what the best part is?

Joel: Let me take a shot here, you only pay for the candidates Talroo delivers?

Chad: Holy shit, okay, so you've heard this before. So if you're out there listening, in podcast land, and you are attracting the wrong candidates, and we know you are, or you feel like you're in a recruiting hamster wheel and there's just nowhere to go, right, you can go to talroo.com/attract, again, that's talroo.com/attract, and learn how Talroo can get you better candidates for less cash.

Joel: Or, just go to chadcheese.com and click on the Talroo logo. I'm all about the simple.

Chad: You are a simple man.

Announcer: Hide your kids, lock the doors, you're listening to HR's most dangerous podcast. Chad Sowash and Joel Cheesman are here to punch the recruiting industry right where it hurts, complete with breaking news, brash opinion, and loads of snark. Buckle up boys and girls, it's time for the Chad and Cheese Podcast.

Chad: Oh yeah.

Joel: Friday, Friday, Friday. Feeling spunky. TGIF.

Chad: More than spunk happening today my friend, today, today it's going to be incredibly spunky. We're going to be talking about AI, AI, AI, and some policy with Miranda Bogen, who is Senior Policy Analyst at upturn.org.

Joel: The trend of people much smarter than us continues as guests on our show.

Chad: That's not hard to do by the way. She's also co-chair of the Fair, Transparent, and Accountable AI Expert Group. Oh my God.

Joel: Oh my God, we're in trouble.

Chad: So Miranda, before you jump into Upturn and this crazy Expert Group, just so everybody knows out there, I heard Miranda on a podcast, I don't know, about a year or so ago, I said, "Yeah, she sounds pretty smart." Then I saw her on stage at Jobg8 in Denver, and I'm like, "The listeners need to hear from Miranda." So Miranda?

Joel: I'm glad you said Jobg8 and not some other stage.

Chad: No, you weren't on the stage Joel. So Miranda, welcome to HR's Most Dangerous Podcast, what did I miss in the intro? Fill in some gaps for the listeners.

Miranda: Well, thanks for having me first of all. Yeah, you got some of the high points, but just to give a little bit of background. I work at Upturn, which is a non-profit organization based in Washington DC, and our mission is to promote equity and justice in the design, governance, and use of digital technology.

Chad: Wow.

Miranda: So that kind of means everything, right?

Joel: Yes, damn.

Miranda: Hiring tech, recruitment tech is just one part of what I do, but we also study things like Facebook, social media platforms, police technology, credit scoring, anything where technology is intersecting with people's lives, and especially when it intersects with things like civil rights.

Chad: Yeah.

Joel: Yeah, and even though it has nothing to do with the content of our show, the "hidden world of payday loans," which is a blog post on your site, I'd love to dig into that at some point. Maybe as a bonus at the end.

Chad: Yeah, maybe.

Joel: Hidden world of payday loans.

Chad: So Upturn's website says, "We drive policy outcomes, spark debate through reports, scholarly articles, regulatory comments, direct advocacy." I mean, you do a lot of stuff. Conferences, workshops, so on and so forth. Are you guys a lobby group?

Miranda: No, we're a research and advocacy group, we're also only seven people.

Chad: Wow.

Miranda: So we do do a lot, but we really kind of come in where we're most needed. And oftentimes that's just explaining how technology works to other advocacy groups, to policy makers, to journalists, so that when they're talking about it they have a sense of what's really happening, and they can make sure that the policies they're thinking about are actually going to solve some of the problems that we're seeing.

Chad: How did it start?

Miranda: So we actually started as a consulting group. My two bosses founded it about eight years ago, out of grad school, to help again, sort of other advocacy groups, foundations, philanthropies, really get a handle around technology, which, eight years ago, was still newer than it feels now, and now it's really ubiquitous. But at the time, it was like, "What is going on here, how is technology changing civil rights? How is it changing policing? How is it changing how people have access to opportunity?" And people needed help, so they founded this consulting firm. I joined when we were still a consulting firm, but we decided to switch over to become a non-profit about two years ago, because we wanted to be mission driven. We wanted to go after issues that we were seeing, and kind of scope out some of these gnarly policy issues before other folks were maybe thinking about them. But then work together with other national and local civil rights groups, global policy groups, to help direct policy efforts in the right places.

Joel: So here's a softball for you, give us your definition of AI.

Miranda: That's a-

Joel: Aha.

Miranda: It's a trick softball question. If you read any paper or article about AI, the very first line is always like, "There's no definition of AI." When we talk about AI... We actually don't use AI to frame our work, because we don't find it a super helpful frame, because it means everything and nothing. Often what we're really talking about is machine learning, uses of data,

Chad: Algorithms.

Miranda: ... data analysis, and mostly when we talk about AI, we're mostly talking about just finding patterns in data. So that's what I think of when I think about AI.

Chad: Well, that being said, one of the things that really caught me, and I think it caught the entire audience in Denver, is when you talked about Netflix and their algorithm, and how the algorithm picked thumbnail pictures to be able to attract viewers to some of the movies. Can you talk a little bit about that research, or at least some of the information that you provided on stage about Netflix and how that worked?

Miranda: Yeah, this was a crazy story. It wasn't something we were working on directly, but I was scrolling through Twitter one day, and saw that people were talking about this situation where a woman was a blogger, or a podcaster, I can't remember. She was scrolling through Netflix, now this woman was black, and she was seeing that the little thumbnails she was being shown for the movies were featuring actors from the movies who were also black, but they weren't the main actors in those movies. And she was like, "What's up, I tried to watch these movies, and the characters that I was shown in the thumbnails barely have any lines at all. Why am I being misled?"

Chad: Wow.

Miranda: And so she started asking around, and asking were other people seeing the same thing, and she heard that they were. Other folks who were black were seeing these supporting actors who maybe didn't play a big role, but they were being featured in the thumbnails, while some white users were saying, "No, I'm seeing the main actors." And that was kind of crazy, because Netflix, in follow-up articles that journalists were writing, made a statement saying, "We don't collect race. That's not something we're using to personalize the experience."

Chad: Wait a minute, wait a minute, wait a minute, wait a minute. How in the hell was this happening then?

Miranda: Yeah, so it's really surprising. So many times you hear, "We're not collecting any sensitive data, we're just giving people what they want." And so that was really interesting to me, so I wanted to figure out what was going on. So it turned out that a year or so ago, Netflix introduced this new feature where it would dynamically pull images from the movies and TV shows it was showing, kind of dynamically pick out compelling images, and show them to people. And it would show you the image that it thought you were most likely to click on to watch that movie. So if you like romantic comedies, then if a movie had a scene of sort of two romantic leads, it might show you that. But if you like just straight up comedy, then it might show you an actor who's also a comedian, even if that's the same movie. So different people would be seeing different images in these previews, even if it was for the same movie.

Miranda: So what was happening here in this case was, Netflix was predicting that the black users were maybe more likely to click on these movies or shows, when they were shown these black supporting actors, even if they weren't main characters in that movie, and it was just learning that through behavior. It wasn't saying like, "Oh look, this user is... has these demographics, therefore, show them this image." It was all super dynamic, all just learning from people's clicks, and their online behavior. And the thing about AI, and about when you have all this data at your disposal, is you'll end up finding patterns that reflect people's personal characteristics, even if you don't collect them explicitly, or put any sort of label on them.
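[Editor's note: here's a rough, hypothetical sketch of the mechanism Miranda is describing. Nothing below is Netflix's actual system; the users, features, and click rates are invented for illustration. The point is just that a model trained only on clicks can end up choosing thumbnails along a demographic line it never sees, because another observed signal, here a viewing-history feature, acts as a proxy.]

```python
# Hypothetical sketch (not Netflix's actual system): every number below is simulated.
# A thumbnail picker trained only on clicks can split users along a demographic
# line it never sees, because another observed feature acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users = 5000

# Hidden group attribute: never collected, never given to the model.
group = rng.integers(0, 2, n_users)

# Observed feature: share of a user's watch history in some genre.
# Assume, purely for this simulation, that it correlates with the hidden group.
genre_share = np.clip(rng.normal(0.3 + 0.4 * group, 0.15), 0, 1)

# Two thumbnail variants for the same title; true click rates differ by group.
def click_prob(variant, g):
    return np.where((variant == 1) == (g == 1), 0.30, 0.10)

variant_shown = rng.integers(0, 2, n_users)          # logged exploration data
clicked = rng.random(n_users) < click_prob(variant_shown, group)

# Learn P(click | genre_share, variant). No demographic inputs at all.
X = np.column_stack([genre_share, variant_shown, genre_share * variant_shown])
model = LogisticRegression(max_iter=1000).fit(X, clicked)

def predicted_ctr(variant):
    Xv = np.column_stack([genre_share, np.full(n_users, variant),
                          genre_share * variant])
    return model.predict_proba(Xv)[:, 1]

# Serving rule: show each user whichever variant the model expects them to click.
chosen = (predicted_ctr(1) > predicted_ctr(0)).astype(int)

for g in (0, 1):
    print(f"hidden group {g}: shown variant 1 on {chosen[group == g].mean():.0%} of impressions")
```

In this simulation the two groups end up being shown different thumbnails for the same title the vast majority of the time, even though the group attribute never enters the model, which is the dynamic Miranda is pointing at.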

Joel: So Miranda, does your organization take a stance on these issues, or do you just present them as information? Like, are you saying AI is bad because of this, or are you just saying, "Hey, we've studied this and here are our findings?"

Miranda: I don't think we have a position on what Netflix did per se, but the issues I think, and why I was talking at Jobg8, and why I've been talking on these other podcasts is, that same thing is happening in areas like recruiting, because the technology that's underneath Netflix is the same technology that's underneath any website that says, "We're going to recommend you things that you like," or, "We're going to find you the best people." They're all just called recommender systems, and if we saw that Netflix was working in this way, I was like, "Oh, man, this is a problem. This is probably happening on these other sites that are much more important to how people are finding jobs and livelihood."

Joel: So you do take a position that this is a bad thing?

Miranda: It's bad if we don't know it's happening, and it's going rogue, and it's ending up directing... it's ending up having the same effect as traditional discrimination in the job market, which is what our fear is.

Joel: Got you. Well, let's jump into Facebook. I know that you are well aware, and our listeners are too, that Facebook was presenting targeted advertising for jobs in relation to age, sex, education, all kinds of different ways that Facebook targets. Talk about that, and then talk about sort of what your feelings are on Facebook's solution to this, do you feel like it's a good solution, a competent solution? Or, do you feel like it's more like putting lipstick on a pig?

Miranda: Yeah, so this is something I've been working on for a couple of years now, Facebook ad targeting, ad discrimination. We were involved in some of the earliest conversations of civil rights groups that kind of pointed out the fact that advertisers could target ads based on gender, age, or what they called ethnic or multicultural affinity at the time, which was sort of a proxy for race. And then there were a bunch of lawsuits that were filed against Facebook, and part of the... all those lawsuits settled. And what Facebook agreed to, for job ads, but also for housing ads and credit ads, was to take out some of those targeting categories, to say, "Advertisers, you no longer have the tools easily at your disposal to exclude people from seeing opportunities. Or you can no longer..."-

Chad: It's discriminating.

Miranda: Yeah, "you can no longer discriminate," right? That's certainly something we support. But, when we were involved in these conversations, we supported all the efforts to make sure that advertisers themselves weren't discriminating, but Facebook is kind of like Netflix, it wants to show people what they "want to see." Ads that they will like. And so, we started getting into this. We started to do some research and say, "What if an advertiser didn't target an ad at all, or they just targeted it to anyone in the US?" Maybe it's a job, they don't really care who comes to work for them, they just wanted to get it shown to as many people as possible. I know that's not a realistic situation, but just hypothetically.

Miranda: So we ran a couple of job ads, and we didn't target them at all, and we used Facebook's own tools to see who they ended up getting shown to by gender, and by age. And for example, we ran a job ad for a nanny position, we didn't target it at all, and it ended up being shown to over 90% women, just because... Yeah. So the same thing was happening, and we didn't target it to women, but Facebook was making this prediction that women would be more interested in that type of job. And maybe that was true, maybe women were more likely to click on that job ad. Most childcare providers are women, but is that any different from an advertiser saying, "only show this to women," just because the platform is using AI to predict who's most likely to click?
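[Editor's note: the same caveat applies here. This is a made-up sketch, not Facebook's actual delivery system, and the numbers are invented. It only illustrates how an ad targeted at everyone can still reach a heavily skewed audience once the platform picks, for each impression, whichever ad it predicts that person is most likely to click.]

```python
# Hypothetical sketch of ad delivery skew (not Facebook's actual system; numbers invented).
# The advertiser targets everyone, but the platform decides which ad each person
# sees by predicted click-through, so delivery can still skew heavily by gender.
import numpy as np

rng = np.random.default_rng(1)
n_users = 10000
is_woman = rng.random(n_users) < 0.5   # the targeted audience: literally everyone

# Two untargeted job ads compete for each impression. Assume the platform's
# click model, trained on past behavior, predicts a higher CTR for the nanny
# ad among women (e.g., because most past clickers on similar ads were women).
predicted_ctr_nanny = np.where(is_woman, 0.04, 0.01) + rng.normal(0, 0.005, n_users)
predicted_ctr_other = np.full(n_users, 0.02) + rng.normal(0, 0.005, n_users)

# Delivery optimization: show whichever ad has the higher predicted CTR.
shown_nanny = predicted_ctr_nanny > predicted_ctr_other

share_women = is_woman[shown_nanny].mean()
print(f"Nanny ad delivered to {share_women:.0%} women despite zero gender targeting")
```

In this toy run the nanny ad's impressions skew past 90% women with no gender targeting at all, which is the pattern the Upturn experiment surfaced.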

Chad: Nope.

Miranda: And so that really floored us, and made us think that the remedy for all these lawsuits, and j