Big Tech’s Tangle with Data, Privacy and Trust
Featuring Roger McNamee
Published on: May 29th, 2019 • Duration: 8 minutes

Roger McNamee, who manages private equity firm Elevation Partners with U2's Bono and other investors, talks about the challenges new technologies pose to society. McNamee breaks down hacking, data breaches and artificial intelligence. This video is excerpted from a piece published on Real Vision on February 22, 2019 entitled "Facebook and the Future of the Social Internet."
ROGER MCNAMEE: So, you have the worry of data leakage, you have the worry of hacking for all those devices. And we're about to repeat the same mistakes we made with Facebook.
So, let's just talk about what the future holds for these guys. Let's remember that Facebook is the greatest advertising platform ever created. They essentially have everybody with any disposable income in one network with an almost perfect high-resolution view of each one of them, with emotional triggers and all these other things. You know their birthday. You know where they live. Everything. You've got their credit card information. You've got all their location information, because you buy it from the cellular company. So, you know everything there is to know, the targeting is magnificent. That's why the numbers are still great.
But the other thing you can see is that they're having to load a lot more ads into people's newsfeed. In my case, every fifth or sixth post is an ad now. And that's probably doubled in the last year. And that's because usage is declining. That is, they say that the number of members is about steady. But Nielsen says that the minutes of use are down, I don't know, 20% or 25%. But even so, the present is very strong for them fundamentally.
They're not going to lose the ability to be a great advertising platform relative to newspapers or television or magazines. But if they lose the trust of people, they're going to lose their attention. And if they lose their attention, then the future for Facebook and Google and Instagram is not going to be as good. We're still early in that process. I don't think that's going to happen anytime soon. And the reason I wrote the book was to give people, hopefully, a much better understanding of why they should be concerned now. Why they really need to get out of their chair and just take control.
When I look at what's coming in tech, these smart devices, Alexa-based, Google Home-based, are going to come in a gazillion categories. And they may fill the hole left by the peak and now decline of smartphones. So, that at one level is a really exciting category. What I would like to see there is a set of rules that just determine what kind of data you can gather, what you do with it, and what you have to do to protect against hacking.
There was a story just last week about Google's Nest division, which makes both thermostats and security systems: somebody hacked a Nest device and convinced people that there was a nuclear missile coming their way. Well, nobody got hurt. But that could have been a disaster. And so, you have the worry of data leakage, you have the worry of hacking for all those devices. And we're about to repeat the same mistakes we made with Facebook.
And then you've got the whole issue with AI. AI may be the single most promising thing to come along since the microprocessor. And it should make the world a much better place. But the early approaches were like I described before: they shipped the product as soon as they could get it to work, without really thinking through the negatives. And if you look at the early applications in real estate, like mortgages, there's this concept in real estate lending called redlining, where they would not let people of certain religions or races into certain neighborhoods.
All the people who made those AIs trained them with data from the real world and didn't correct for those implicit biases. So, they created a black box that has all the flaws of the old world and none of the benefits. Because how do you challenge an AI? You can't figure out how it made the decision, you can't appeal it. It's just final. Well, that's terrible. And it's totally unnecessary.
The same thing is true with jobs. These AIs that read resumes inherited gender bias and racial bias. So seriously, I think with AI, you have to treat it like it's a new pharmaceutical. You've got to have proof of safety and efficacy, and incrementally, you've got to get rid of implicit bias. The good news is you're not going to spend 10 years in clinical trials: we're going to create standardized software modules that are embedded in every AI. We're going to create standardized data sets for testing for implicit bias. We're going to apply it to everything.
And it might take a year or two to develop those things. But then you have a standard that everybody can use. And now, everything's better. That's what we did in chemicals. That's why you can have really dangerous chemicals and not have to worry when you go outside. Because we've got rules. And I think you have to protect society that way.
So, I think the future for tech is bright on opportunity. But tech is so pervasive and so important in life that we have to start to subject it to the big-boy rules that apply to every important industry. This whole thing that boys will be boys and, okay, so they blow up a country, no problem? I don't see that working going forward.
BRIAN PRICE: Who leads the charge?
ROGER MCNAMEE: Well, I don't know. What I'm really hopeful for is that we will start the next big thing in tech, and go back to Steve Jobs' notion of bicycles for the mind, of technology that empowers us, right? The problem with AI is that the three most visible use cases are getting rid of jobs; filter bubbles, the things that Facebook and Google do to tell you what to think; and then recommendation engines to tell you what to enjoy and consume. And you're like, wait a minute. I'm going to have a computer take away my job, tell me what to think and tell me what I like and enjoy? Those are three things that are really important to our identity. That's what makes you Brian and me Roger. Not the only things. But those are three important ones.
That's not a bicycle for the mind; that's replacing you. And it doesn't have to be that way. We should be using this to make doctors more successful, we should be doing this to make engineers more successful. There are so many things you can do with AI that are really valuable. Let's do that.
And if the whole world dedicates itself to going back to human-driven technology as opposed to human-disabling technology, I think we're all going to be really happy. And that's just a matter of people deciding to do it. We're going to need some regulatory help, because Facebook and Google are choking off the sunlight everywhere, but we can get at that.