17.05.2017, 16:23 | #6298
I found another very interesting TED talk in which Sam Harris urges us to start thinking seriously about how a superintelligence, which in his view will inevitably emerge, could be controlled:
Can we build AI without losing control over it?
Some excerpts:
Quote:
So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
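The 20,000-years figure in the first excerpt is simple arithmetic: one wall-clock week at a million-fold speedup amounts to a million weeks of human-level work, roughly 19,000 years, which Harris rounds to 20,000. A quick sanity check of that claim (the million-fold speedup is his assumption, not a measured number):
Code:
# Back-of-the-envelope check of the "20,000 years per week" claim from the talk.
speedup = 1_000_000                    # assumed electronic vs. biochemical speed ratio
weeks_of_work = 1 * speedup            # human-equivalent weeks produced in one real week
years_of_work = weeks_of_work / 52.18  # average weeks per calendar year
print(f"~{years_of_work:,.0f} years of human-level work per week")  # prints ~19,165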
Quote:
Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it."
Quote:
The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." Would we just be counting down the months until the mothership lands? We would feel a little more urgency than we do.