Comments
-
MZ: Another useless skit on an old interview. Why does RV produce these time-wasting, content-filling vids when people could go back and watch the original video if they so desire? As a founding member, it pisses me off.
-
lD: Yeah, I never thought about AI like that, inheriting the old biases. & yo! that is one calming voice you got there, so I will listen to more hahah
-
GF: I think the idea that regulation is going to fix the problems in Facebook and Google is hopeless. Facebook, Google, and their advertisers are going to buy far more influence with Congress than private individuals will ever exert, and none of the gatekeepers have any interest in "bicycles for the mind". They want their users addicted to their platforms and unable to resist being guided where the corporate AIs are programmed to guide them. I am part of that small minority that is not on FB. I signed up for an account, spent about 30 minutes looking around, and signed out, never to return. It was clear on that brief tour that it is a time sink and, while it might have some fun features, it would not enhance my happiness or quality of life. And I don't need or want any more time-sinks in my day. It might be a workable fix to prohibit FB from selling advertising or selling any information to advertisers. Put it strictly on a subscription model where the users have to pay a small monthly or annual fee, and that is where it derives its income. But the chances that such legislation could ever be passed are ZERO. Of course, all other social media platforms would have to be placed under the same rules. It would be a different world, and hard to see where it might go.
-
Transcript

ROGER MCNAMEE: So, you have the worry of data leakage, you have the worry of hacking for all those devices. And we're about to repeat the same mistakes we made with Facebook.
So, let's just talk about what the future holds for these guys. Let's remember that Facebook is the greatest advertising platform ever created. They essentially have everybody with any disposable income in one network with an almost perfect high-resolution view of each one of them, with emotional triggers and all these other things. You know their birthday. You know where they live. Everything. You've got their credit card information. You've got all their location information, because you buy it from the cellular company. So, you know everything there is to know, the targeting is magnificent. That's why the numbers are still great.
But the other thing you can see is that they're having to load a lot more ads into people's newsfeeds. In my case, every fifth or sixth post is an ad now. And that's probably doubled in the last year. And that's because usage is declining. That is, they say that the number of members is about steady. But Nielsen says that the minutes of use are down, I don't know, 20% or 25%. Even so, the present is very strong for them fundamentally.
They're not going to lose the ability to be a great advertising platform relative to newspapers or television or magazines. But if they lose the trust of people, they're going to lose their attention. And if they lose their attention, then the future for Facebook and Google and Instagram is not going to be as good. We're still early in that process. I don't think that's going to happen anytime soon. And the reason I wrote the book was to give people, hopefully, a much better understanding of why they should be concerned now. Why they really need to get out of their chair and just take control.
When I look at what's coming in tech, these smart devices, Alexa-based, Google Home-based, are going to come in a gazillion categories. And they may fill the hole left by the peak and now decline of smartphones. So, that at one level is a really exciting category. What I would like to see there is a set of rules that just determines: what kind of data can you gather? What do you do with it? And what do you have to do to protect against hacking?
There was a story just last week about Google's Nest division, which has both thermostats and security systems: somebody hacked a Nest device and convinced people that there was a nuclear missile coming their way. Well, nobody got hurt. But that could have been a disaster. And so, you have the worry of data leakage, you have the worry of hacking for all those devices. And we're about to repeat the same mistakes we made with Facebook.
And then you've got the whole issue with AI. AI may be the single most promising thing to come along since the microprocessor. And it should make the world a much better place. But the early approaches to it were like I described before: they shipped the product as soon as they could get it to work, without really thinking through the negatives. And if you look at the early applications in real estate, like mortgages, there's this concept in real estate lending called redlining, where they would not let people of certain religions or races into certain neighborhoods.
All the people who made those AIs trained the AIs with the data from the real world and didn't correct for those implicit biases. So, they created a black box that has all the flaws of the old world and none of the benefits. Because how do you challenge an AI? You can't figure out how it made the decision, you can't appeal it. It's just final. Well, that's terrible. And it's totally unnecessary.
The same thing is true with jobs. These AIs that read resumes inherited gender bias and racial bias. Well, seriously, I think with AI, you have to treat it like it's a new pharmaceutical. You've got to have proof of safety and efficacy, and, incrementally, you've got to get rid of implicit bias. The good news: you're not going to spend 10 years in a clinical trial. We're going to create standardized software modules that are embedded in every AI. We're going to create standardized data sets for testing for implicit bias. We're going to apply it to everything.
And it might take a year or two to develop those things. But then you have a standard that everybody can use. And now, everything's better. That's what we did in chemicals. That's why you can have really dangerous chemicals and not have to worry when you go outside. Because we've got rules. And I think you have to protect society that way.
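The standardized bias-testing modules McNamee describes don't exist yet, but a toy version of the idea can be sketched. The Python below (all names hypothetical, not any real library) takes a screening model's hire/no-hire decisions together with a protected attribute for each candidate, computes per-group selection rates, and applies the "four-fifths rule" that US employment law actually uses as a screen for disparate impact: the lowest group's selection rate should be at least 80% of the highest group's.

```python
# Hypothetical sketch of a standardized implicit-bias check for a
# decision-making AI, using the four-fifths rule from US employment
# guidelines as the pass/fail criterion.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return {group: fraction selected} from paired lists of
    0/1 decisions and group labels."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        selected[group] += 1 if decision else 0
    return {g: selected[g] / total[g] for g in total}

def passes_four_fifths(decisions, groups, threshold=0.8):
    """True if every group's selection rate is at least
    threshold * (the best group's rate)."""
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Example: a resume screener that selects 60% of group A but only
# 20% of group B fails the check (0.20 / 0.60 = 0.33 < 0.8).
decisions = [1, 1, 1, 0, 0] + [1, 0, 0, 0, 0] * 2
groups = ["A"] * 5 + ["B"] * 10
print(passes_four_fifths(decisions, groups))  # False
```

A real standardized test suite would run checks like this against shared benchmark data sets, and over more attributes and more nuanced fairness metrics, but the basic shape, a reusable module plus a standard threshold, is what the pharmaceutical-style approval analogy implies.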
So, I think the future for tech is bright on opportunity. But it's so pervasive and so important in life that we have to start to subject it to the big-boy rules that apply to every important industry. This whole thing that boys will be boys, and okay, so they blow up a country, no problem? I don't see that working going forward.
BRIAN PRICE: Who leads the charge?
ROGER MCNAMEE: Well, I don't know. What I'm really hopeful for is that we will start the next big thing in tech and go back to Steve Jobs' notion of bicycles for the mind, of technology that empowers us, right? The problem with AI is that the three most viable use cases are getting rid of jobs; filter bubbles, the things that Facebook and Google do, where they tell you what to think; and then recommendation engines that tell you what to enjoy and consume. And you're like, wait a minute. I'm going to have a computer take away my job, tell me what to think, and tell me what I like and enjoy? Those are three things that are really important to our identity. That's what makes you Brian and me Roger. Not the only things. But those are three important ones.
That's not a bicycle for the mind, that's replacing you. And it doesn't have to be that way. We should be using this to make doctors more successful, we should be doing this to make engineers more successful. There's so many things you can do with AI that are really viable. Let's do that.
And if the whole world dedicates itself to going back to human-driven technology as opposed to human-disabling technology, I think we're all going to be really happy. And that's just a matter of people deciding to do it. We're going to need some regulatory help, because Facebook and Google are choking off the sunlight everywhere, but we can get at that.