Technology, Incentives and Cognitive Bias
Featuring Dee Smith and Raoul Pal
Published on: May 22nd, 2019 • Duration: 10 minutes

Dee Smith of Strategic Insight Group sits down with Raoul Pal to discuss the confluence of behavioral economics and technology. The principles of behavioral economics combined with machine learning and algorithms can lead to amazing results, but what happens when human bias bleeds into the very algorithms we believe protect us from it? This video is excerpted from a piece published on Real Vision on September 7, 2018 entitled “Modern Manipulation: Behavioral Economics in a Technological World.”
DEE SMITH: There is an emerging narrative that is claiming that computer programs do have biases, and the biases are based on the people who write programs.
One of the interesting topics in that, of course, is confirmation bias, one of the most deadly of the behavioral economic biases: we look for information that confirms what we already believe, instead of information that could falsify it. And falsification is what the scientific method is based on. You try to falsify; you don't try to verify.
The whole American adventure in Iraq was based on the intelligence finding that there were weapons of mass destruction, which was in large part based on ignoring evidence that there weren't. It was simply selective use of intelligence, which is confirmation bias. It can be incredibly problematic. There's some kind of a switch that flips, and you decide it's an aha moment: I see it, I got it, I understand it now. Let me find all these things that tell me I'm right.
RAOUL PAL: Because humans are so delusional. I mean, I fall into that bias all the time, as everybody does. And this is why the machine is so powerful and why we have to be actually truly concerned. Not flippantly concerned, but truly concerned, because there is no bias. And it's the massive ability to process data in ways that the human brain can't.
We can process incredible amounts of data: everything we're seeing now, all the colors. Machines are nowhere near that, nowhere near our cognitive abilities in certain ways. But we cannot process a fixed type of data in the quantity that machines can without a bias, because we need patterns to fill in the blanks.
DEE SMITH: Yeah.
RAOUL PAL: And it's all pattern recognition that causes our problems.
DEE SMITH: It is our pat, it's both our bless, it's a double-edged sword.
RAOUL PAL: Exactly.
DEE SMITH: Classic double-edged sword.
RAOUL PAL: Yeah.
DEE SMITH: There is an emerging narrative claiming that computer programs do have biases, and the biases are based on the people who write the programs. And there's been some interesting work on this: you think it's unbiased, but when you actually reverse engineer it-
RAOUL PAL: That's an interesting point. So therefore, is AI a function of its environment too, like a human is?
DEE SMITH: Exactly.
RAOUL PAL: So behavioral economics applies to machines in the end too.
DEE SMITH: In some way that we don't understand.
RAOUL PAL: Although the interesting one was the DeepMind experiment with Go and chess and all of these things. The machine got so good at beating every grandmaster and every computer that ever existed in all of these games, but what really blew people away was that it was developing moves that nobody had ever used before.
DEE SMITH: Right.
RAOUL PAL: Because humans had never made those moves. So I don't know where that came from, because one machine was learning from another. So I don't know whether those biases necessarily carry through, because there's been clear evidence, in that particular experiment, that the machines learned their own way, which hadn't been used before.
DEE SMITH: They did, and that is very interesting. And yet another topic we don't have time to go into. But one of the most fascinating things about the argument about artificial intelligence is that computers are great at calculation. And they're great at things like Go and chess, which require calculation. They are not great at things that even animals can do, which require something that seems to be beyond calculation. There's a lot of debate now about what that is, and whether computers will ever get to it.
RAOUL PAL: That's right.
DEE SMITH: And whether computers will ever become conscious. There's this thing called Mary's Room, a thought experiment. I can't recall the name of the philosopher who came up with it, but Mary is a very bright girl who has learned everything possible about color. She's learned how the eye receives color and the cones, she's learned all about the electromagnetism of color, and of the color red particularly. She's learned everything that you can learn from reading about the color red.
But she has done it her entire life in a gray room on a gray screen, and she's never seen any color, except for gray. So the question is, when the door finally opens and she walks out and she sees a red apple, does she have new information? Is the actual visceral experience of that something that can't be accounted for in her vast knowledge of the color red?
And that's one of the thought experiments they use to ask, will computers ever have consciousness? Will they ever actually be able to do the things that humans can do so easily, like looking around a room and saying, those are chairs and that's a box, but I can turn it over and use it as a chair? All these things that we can just do without thinking about it, and many, many more.
And so, I agree with you, we need to be very worried about it. But it's still up in the air whether artificial intelligence is really intelligence, or whether it's really consciousness, and where the divide comes: what makes us human, or even what makes biological intelligence different from silicon-based intelligence.
RAOUL PAL: Yeah. And then he who controls the machines in that world, rules the world.
DEE SMITH: Yeah.
RAOUL PAL: And again, it goes back to whoever has the capital to run these things essentially can run the world.
DEE SMITH: Well, and this is going to become, I think, one of the biggest issues. And again, political solutions are going to have to emerge, because it's simply not going to be feasible otherwise. It's going to be a negative externality, if you want to put it that way, for the people who own these things. There are going to be too many people who are simply not going to be satisfied for them to have all-
RAOUL PAL: But the thing is, Silicon Valley thinks it has an answer in universal basic income. I've never seen that, which is an inverse incentive system-
DEE SMITH: Yeah.
RAOUL PAL: Actually creating happiness amongst humans, because they have no sense of purpose. And they're not incentivized to be productive. Maybe they don't need to be productive. But how do they then live productive lives that give them a sense of being?
DEE SMITH: Yeah.
RAOUL PAL: That's very-
DEE SMITH: Meaningful lives.
RAOUL PAL: Meaningful lives. That's very complicated, and it requires a whole new kind of government. Maybe if you look at the experiment Bhutan did with gross national happiness, maybe that's what the government's role is in the end: not just endlessly driving GDP for no purpose, but having a happy population. The question of what government is and what its purpose is, is something that is never really addressed.
DEE SMITH: No, that's true. And we've had lots of theories about it. And I think another interesting thing about the present moment is that all of the systems that we've had seem to not be working. And so again, we're at this point that I like to call the horizon problem, meaning that, by definition, you can't see what's over the horizon.
RAOUL PAL: But what's odd, is that horizon is hurtling towards us.
DEE SMITH: Exactly, exactly. That's exactly right.
RAOUL PAL: I think the world is becoming almost ungovernable in the way that we understand it. I produced a piece on Real Vision for the Macro Insiders guys that most people haven't seen, but I think we'll put it together with this to go out, because I talked about the tribalism stuff that you and I have talked so much about.
And how can you construct a societal moral code, or ethics code, or set of societal norms, when we're physically located in one society, right now the United States, but we can do anything online? We can be anybody. We can have whatever ethics code we want, and we are basically not restricted by any government in what we do. So we have the code that we have in this country, but that is now an increasingly small part of our lives.
DEE SMITH: Yeah.
RAOUL PAL: Because increasingly, more and more of our lives are lived in the global sphere of the internet. And I don't know how government can even operate in that system. How do you even get people to do things? Which is why I think behavioral economics is part of it. And also, the problem with tribalism is that it creates rifts everywhere you go.
We've just seen the story break about the Russians and vaccines: they've gone online looking for contentious issues, like vaccines, and then pushed both the pro and anti arguments, created big fights, and splintered people.
DEE SMITH: Yeah.
RAOUL PAL: Again, using basically behavioral economics principles on what drives people's emotional behavior, and creating a rift where none existed.
DEE SMITH: Right. And-
RAOUL PAL: The online tribalism creates rifts everywhere.