Transcript
CATHY O'NEIL: Certainly, models are used more if you include in that not just economic models, like we're discussing, but things like algorithms. Think about all the VC-funded big data and AI companies nowadays. They are all based on the assumption that predictive analytics, AI, or whatever you want to call it will be able to solve problems. They're sort of inherently assuming that modeling works, or works well enough, for their company-- Uber, all those companies that are using the algorithm as a basic business model.
DEE SMITH: What do you think? Do you think that-- give me your opinion of that.
CATHY O'NEIL: Full disclosure, I have an algorithmic auditing company. The point of my company is to poke at that assumption. The very short answer is it's a very narrow view. Most algorithms work narrowly in the way that the company that built them and is deploying them wants them to work, but they probably fail in a lot of other ways. Those failures might be unimportant to that company, but they might be important to the targets of the algorithm, or there might be some other side effect of the algorithm.
Just going back to the way that economists thought about derivatives-- the way they talked about it, the way they thought about it-- it "worked" for them. Put that in quotes, because that's what you'll find over and over again with models: models working for the people that are using them, whether that's because the data looked good and they weren't looking at other data, or because it worked for them politically, or because they kept on getting better and better jobs when they talked about how great these models were.
You could even think of it as a corruption in a certain sense, because working politically for them is still working for them. I'm just saying that that is a very narrow view. The real question isn't "does this work for you?"-- because yes, it does; you wouldn't be doing it if it didn't. The real question is, for whom does this fail? For whom does this fail? That's the question that isn't being asked, wasn't being asked then, and still isn't being asked now.
DEE SMITH: Also, the definition of working for you can be misleading. What does it mean to be working? It worked for you for the moment-- maybe you got a promotion or something-- but if it brought down your company, is that really working for you?
CATHY O'NEIL: If you got another job-- that's the thing. People don't quite understand how cynical the world of finance was at that time. I would talk to people about that. Like, oh, this model seems flawed; as a business model, it seems dangerous for the company. Oh, but I'm just going to jump ship when it fails, and I'll get another job. That was the assumption.
It's a very, very narrow perspective. Yeah, working, in the case of many of those models that we saw fail during the crisis, simply meant short-term profit. I mean, it was very simple. It was very money-based. The kind of algorithm that I think about now-- let's talk about the Facebook news feed algorithm-- works for Facebook, again, and it ends up translating into money, but the short-term, more direct definition is engagement: keeping people on Facebook. We're going to privilege the news feed items that keep people on Facebook. We're going to demote the items that people tend to leave Facebook after reading or seeing.
Just that one thing-- of course, it is aligned with profit, because the longer people stay on Facebook, the more they click on ads, and the more money Facebook makes. That's their narrow definition of working. They're like, this is working because we're making more money. It's very clear that that's their incentive, but what we've seen in the last few years-- and it was pretty predictable, actually, looking back at it-- is that that also privileged things that we find outrageous. Why do we stay on Facebook? To argue about things that we find divisive. Why do we stay on Facebook? To get outraged, to fight with each other.
DEE SMITH: To be part of a group that excludes others.
CATHY O'NEIL: Yeah, or to even be radicalized, and to find your people, your new radicalized people. There's all sorts of stories we've heard. What it doesn't privilege is thoughtful discussion that makes you go and do your own research at a library. Like, we all know that, right? That's not happening. We've seen that when Facebook optimizes to its bottom line, its definition of success, which is profit, it's also optimizing directly away from our definition of success, which is being informed and not fighting.
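To make the mechanism Cathy describes concrete, here is a minimal sketch of an engagement-only feed ranker in Python. The item fields, numbers, and example headlines are hypothetical illustrations of the idea, not Facebook's actual system.

```python
# A minimal sketch of engagement-based feed ranking. The fields, weights, and
# example items are assumptions for illustration, not a real platform's model.

from dataclasses import dataclass

@dataclass
class FeedItem:
    headline: str
    predicted_minutes_on_site: float  # how long users historically stayed after similar items
    predicted_exit_rate: float        # fraction of users who left right after this kind of item

def engagement_score(item: FeedItem) -> float:
    # Privilege items that keep people on the platform, demote items people leave after.
    return item.predicted_minutes_on_site * (1.0 - item.predicted_exit_rate)

def rank_feed(items: list) -> list:
    # The narrow definition of "working": maximize engagement, nothing else.
    return sorted(items, key=engagement_score, reverse=True)

feed = [
    FeedItem("Calm, nuanced policy explainer", 1.5, 0.60),
    FeedItem("Outrage-bait argument thread", 9.0, 0.05),
]
for item in rank_feed(feed):
    print(round(engagement_score(item), 2), item.headline)
```

Under this definition of success, the outrage-bait item ranks first, which is the alignment problem being discussed.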
DEE SMITH: There's an even further irony to it, which is that by optimizing to that narrow definition of working for them, they've put their company in the crosshairs. In other words, it may not work for them. It may be an extremely corrosive thing for the company itself in the longer term, in the bigger picture, and yet they've been focused on the very short term-- and short-termism, of course, is one of the great problems of our age.
To back up for a minute, I want to ask you about what an algorithm is, because it is a term that's thrown around all the time. I'm a music and math person too, and there is a beauty and crystalline structure to mathematics that makes people think that it doesn't lie. Mathematics indeed does have proofs, and mathematics itself doesn't lie, but the assumptions behind mathematics most certainly can lie. Walk me through a definition for people who maybe don't understand, when they throw around the word algorithm, what an algorithm is.
CATHY O'NEIL: Okay. I'm just going to back up and just disagree with one thing you just said, which is I feel like axioms in mathematics, if stated as axioms, they're not lies. They're just assumptions. The thing that we're going to see in my explanation of what an algorithm is, is that it's not mathematics at all, actually.
What is an algorithm? When I say algorithm, I really mean predictive algorithm. Because taken straight up, an algorithm just means a series of instructions. That's not what I mean. I mean a predictive algorithm. And in almost every algorithm I will discuss, what I mean by that is predicting-- and not just predicting something, but predicting a person. Most of the examples I talk about predict a person.
Are you going to pay back this loan? Are you going to have a car crash? How much should we charge you for car insurance? Are you going to get sick? How much should we charge you for health insurance? Are you going to do a good job at this job? Should we hire you? Are you going to get rearrested after leaving prison? That's your crime risk score.
It's a prediction. It's a scoring system. It's even more precise than that: it's a scoring system on humans. Like, if your score is above 77, you get the job. If it's below 77, you don't get the job-- simply that kind of thing. More generally, a predictive algorithm is an algorithm that predicts success. Success is the thing I've just been mentioning in those examples. Like, are you going to click? Are you going to get in a car crash? Those are the definitions of success-- a specific event.
The reason you have to be so precise about that is because you train your algorithm on historical data, so you go back 10 years, 20 years. This is what I did when I was working as a quant in finance. You look for statistical patterns. You're looking in particular at initial conditions that later led to success. People like you got raises. People like you got hired. People like you got promoted in this company. We're going to hire you because we think your chances of getting a raise, getting promoted, and staying at the company are good.
DEE SMITH: Because you match the pattern of people who have had that happen.
CATHY O'NEIL: Exactly. The inherent thing is things that happen in the past, we're predicting will happen again, but we have to define what that means. Like, what particular thing is going to happen. That's the definition of success. Really, to build an algorithm, a predictive algorithm, you just need past data and this definition of success. That's it, and then you can propagate into the future patterns from the past.
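As a rough illustration of that recipe-- past data plus a chosen definition of success, with a cutoff of the kind mentioned earlier-- here is a minimal Python sketch. The records, attributes, and threshold are all invented for the example.

```python
# A minimal sketch of "past data + a definition of success": find people like
# you in the historical records and propagate their outcomes into the future.
# The records and the cutoff are hypothetical.

historical_hires = [
    # (degree, referred_by_employee, got_promoted_within_2_years)
    ("engineering", True,  True),
    ("engineering", True,  True),
    ("engineering", False, True),
    ("design",      False, False),
    ("design",      True,  False),
]

def success(record) -> bool:
    # The owner's definition of success: a promotion within two years.
    return record[2]

def predicted_success_rate(degree: str, referred: bool) -> float:
    # Look for "people like you" in the past and return their success rate.
    matches = [r for r in historical_hires if r[0] == degree and r[1] == referred]
    if not matches:
        return 0.0  # no historical pattern to lean on
    return sum(success(r) for r in matches) / len(matches)

def hire(degree: str, referred: bool, cutoff: float = 0.5) -> bool:
    # Above the cutoff you get the job; below it you don't.
    return predicted_success_rate(degree, referred) >= cutoff

print(hire("engineering", True))   # True  -- people like you got promoted
print(hire("design", False))       # False -- people like you didn't
```

Nothing more is needed to build a predictive algorithm of this kind, which is exactly why the choice of historical data and the choice of what counts as success carry so much weight.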
DEE SMITH: How does that play out into-- I've heard you give a wonderful real world example of your kids.
CATHY O'NEIL: Yeah. I talk about this, because I really do think it's a no-brainer. It's not complicated. It's something we do every day. Sometimes we give the example of getting dressed in the morning. Like, what am I going to wear. You have a lot of memories.
It doesn't have to be formal. It doesn't have to be in a database. It's just like memories in your head-- things I wore in the past. Was I comfortable? If that's the definition of success for you today, being comfortable, you have a lot of memories to decide what to wear if you want to be comfortable. If you want to look professional, then you have memories to help you look professional.
Another example I'd like to give, though, that shows more of the social structure of predictive algorithms and how things can go wrong is cooking dinner for my family. I cook dinner for my three sons and my husband and I want to know what to cook. I think back to my memories of cooking for them. This guy likes carrots, but only when they're raw. This guy doesn't eat pasta, but he likes bread. Then I cook a meal. Of course, it depends on what ingredients are in my kitchen. That's data I need to know. How much time do I have-- also data I need to know.
At the end of the day, I cook something. We eat it together, and then I assess whether it was successful. That's when you need to know: what was my definition of success? My definition of success is, did my kids eat vegetables? I say this because I want to contrast it against my youngest son, Wolfie, whose only goal in life is to eat Nutella. Like, his definition of success, if he were in charge, would be, did I get to have Nutella?
The two lessons to learn from that are first of all-- well, the first thing is it matters what the definition of success is because I'm not just asking to know. I'm asking because I'm going to remember this was successful, this wasn't successful. This was. In the future, I'm going to optimize to success. I'm going to make more and more meals that were successful in the past because I think they'll be successful again.
That's how we do it. We optimize to success. The meals that I make with my definition of success are very different meals than I would make off of my son's definition of success. That's one really important point.
The other just as important point is that I get to decide what the definition of success is because I'm in charge. My son is not in charge. The point I'm trying to make is it's about power. Predictive algorithms are optimized to the success defined by their owners, by their builders, by their deployers, and it's all about the power dynamic.
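A toy version of the dinner example, to show how the same memories optimized under two different definitions of success produce very different "best" meals-- and that whoever holds the power picks which definition the optimization uses. The meals and numbers below are made up.

```python
# The same history, two definitions of success, two different "optimal" meals.
# All data is invented for illustration.

past_meals = [
    # (meal, vegetables_eaten_by_kids, nutella_servings)
    ("roast carrots and bread", 3, 0),
    ("pasta with hidden zucchini", 2, 0),
    ("nutella crepes", 0, 3),
]

def parent_success(meal) -> int:
    return meal[1]  # did the kids eat vegetables?

def wolfie_success(meal) -> int:
    return meal[2]  # did I get to have Nutella?

# Whoever is in charge decides which definition of success gets optimized.
print(max(past_meals, key=parent_success)[0])  # roast carrots and bread
print(max(past_meals, key=wolfie_success)[0])  # nutella crepes
```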
When we're scoring people, the people who are being scored might not agree with the definition of success, but they don't get a vote. The people who are owning the algorithm, the scorers, are the ones who say, here's what I mean by a good score. That could seriously be different for the person who's being scored.
For that matter, many of the examples I wrote about in my book, they're just unfair. They're simply unfair. Never mind the definition of success. They might even be defined in a reasonable way but the score is actually computed in an unfair way and the people who are being scored have really no appeals system.
The typical situation is a real power relationship-- and most of the examples I just gave, insurance, credit, getting a job, or sentencing to prison, are like that. All of those are examples where the standard setup is that the company that uses the scoring system licenses the predictive algorithm, the scoring system, from some third party. That third party basically scores the person and tells the big company what the score is.
The person being scored can't ask any questions, because even the company doesn't know how it works. Typically, there's a licensing agreement that stipulates that the big company will never get to see the secret sauce of the scoring system.
DEE SMITH: Because it's a trade secret.
CATHY O'NEIL: It's a trade secret. It's really opaque, and it's often unfair. There's just nothing that the person being scored can really do about it. At the same time, they're missing out on really important financial opportunities, or job opportunities, or even going to prison.
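A minimal sketch of that arrangement: a third-party vendor computes the score, the company that licenses it sees only the number, and the person being scored sees even less. The names, features, and scoring logic here are hypothetical stand-ins.

```python
# The company receives only a number from the vendor; neither the company nor
# the applicant can inspect how it was produced. All logic below is invented.

def vendor_score(applicant: dict) -> float:
    # Third-party "secret sauce": opaque to the licensing company and the applicant.
    return 50 + 10 * applicant.get("years_experience", 0) - 20 * applicant.get("employment_gap_years", 0)

def company_decision(applicant: dict, cutoff: float = 70.0) -> str:
    score = vendor_score(applicant)  # the company only ever sees this number
    return "interview" if score >= cutoff else "reject"

applicant = {"years_experience": 3, "employment_gap_years": 1}
print(company_decision(applicant))  # "reject" -- with no explanation and no appeal
```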
DEE SMITH: Well, this is really a problem. Because people, when they see that something's been done by a computer and that there's an algorithm, this magic word involved, they think, oh, well, that's objective, because the computer did it. People didn't do it. It's number crunching, and it must be right. I don't like the result, but it must be right.
Then the people who are in charge of it feel like they've, in some way, moved the responsibility for the decision over to some black box that has some kind of magic secret sauce, whatever you want to call it. But the idea that because it's mathematical and algorithmic, it's objective-- that is just not true, is it?
CATHY O'NEIL: It's really not true, but you said it well. That is the assumption going in, and that's the blind trust-- I call it the blind trust-- that I'm pushing back against. That's what I do now. I push back against this blind trust that we have.
Now, it's true that it's not idiosyncratically favoring certain friends. It's not nepotistic. Like, you could imagine a hiring manager who just simply lets their buddies get a job, even though their buddies aren't qualified under the official rules. Algorithms don't do that. They're not particular to a specific person but they are inherently discriminatory in as much as the data that they're trained on is discriminatory. For example, if we train an algorithm to look for people who in the past were given promotions in a company where men got promoted over women and tall men got promoted over short men, then that algorithm would be trained on that data and would replicate that bias.
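A small sketch of how that replication happens: a model that simply learns "people like you got promoted before" from biased records reproduces the bias, with no explicit instruction to discriminate. The records below are invented to illustrate the mechanism.

```python
# If the historical process was biased, a model that faithfully learns its
# patterns reproduces the bias. These records are hypothetical.

past_employees = [
    # (gender, height_cm, was_promoted)
    ("male",   185, True),
    ("male",   183, True),
    ("male",   170, False),
    ("female", 178, False),
    ("female", 165, False),
    ("female", 172, False),
]

def predicted_promotion_rate(gender: str) -> float:
    # "People like you" reduced to group membership, as learned from the data.
    group = [e for e in past_employees if e[0] == gender]
    return sum(e[2] for e in group) / len(group)

# The algorithm hasn't been told to discriminate; it simply learned that men,
# especially tall men, got promoted before.
print("male:  ", predicted_promotion_rate("male"))    # ~0.67
print("female:", predicted_promotion_rate("female"))  # 0.0
```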
There's no sense in which it's actually objective. To be objective would mean something else. It would be closer to something like, what does it mean to be good at your job. Let's measure the person's ability to do their job. That would be closer to objective. That's not what we typically see with algorithms. We're training on success, but success is really a very bad proxy for underlying worth. It's actually difficult to measure somebody's underlying worth, underlying ability.
You can try to get there through various means, but most of those means that we've developed are known to be biased, biased against minorities, biased against women, biased against the usual suspects. What we end up doing is replicating that bias in the algorithms. In some sense, you could just say, okay, we're just doing what we used to do-- no better, no worse. I would argue that it's actually a little worse, because now we are also imbuing this stuff with our blind trust.
We think we've done our job and we think, okay, great. Now, it's fair. We don't have to worry about it. In fact, we do have to worry about it.
DEE SMITH: There are at least two ways that this goes off-track. One is the people who are selecting the tools and the elements that are going to feed into this-- deciding what's important and what isn't. But even if you didn't have that, and you were just looking at past patterns-- whatever those patterns are, if it's machine learning, the machine is learning from them itself. It's still going to reflect the fact that the company only ever hired men.
CATHY O'NEIL: Yeah. You're distinguishing-- and I was kind of conflating-- those two things. I think it's really important to distinguish: there are lots of different ways algorithms can mess up. One of them is bad data. Like, garbage in, garbage out. If it's biased data, it's going to propagate the bias. That's what we just discussed. But then there's also the definition of success itself. You can take perfectly good data but turn it into a garbage algorithm by defining success the way Facebook defined success-- making everyone worse off and making democracy dissolve.