Hawking: AI could end mankind

12-21-2014 , 09:24 PM
BBC article.

He told the BBC: "The development of full artificial intelligence could spell the end of the human race." ... Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

In the longer term, the technology entrepreneur Elon Musk has warned that AI is "our biggest existential threat".

Not in my lifetime, but I definitely have a dim view of what is going to happen when computers become far smarter than humans, and even if this sentiment grows, I don't think that humans (in their eternal quest for profit) can do anything to stop it.

So who wants to tell Hawking not to worry?

Also, in before LUDDITE!
12-21-2014 , 09:55 PM
It is possible that AIs will look at us as their "god", as we are the creators of their "race".
12-21-2014 , 09:59 PM
That's the opposite of what I saw from a tech talking head on TV:

'Hopefully, they will think of us as pets.'
12-21-2014 , 10:22 PM
On the bright side, it becomes possible to learn kung fu really quickly.
12-21-2014 , 10:45 PM


We need to work on not ****ing up the "paradise scenario" they upload where we all spend eternity scooping monster pots, hookers and blow.
12-21-2014 , 10:53 PM
Quote:
Originally Posted by FoldnDark


We need to work on not ****ing up the "paradise scenario" they upload where we all spend eternity scooping monster pots, hookers and blow.
There were some positives to "Westworld".
12-21-2014 , 11:48 PM
Some of these guys need to get out of the clouds and come down to earth. "AI gone wild" is not even worth mentioning. We should wish it were our worst threat. Try nuclear proliferation, rampant infectious disease, or worldwide environmental contamination as threats. These guys live in an ivory-tower world.
12-22-2014 , 12:18 AM


Don't watch the next ones if you hate spoilers and are about to see the movie for real (or watch only the first clip).
Last edited by masque de Z; 12-22-2014 at 12:29 AM.
12-22-2014 , 01:01 AM
Some of my thoughts on this are not very pessimistic but may show why we need to get there carefully.


Conscious and autonomous (i.e., free to change its environment in small steps) AI must not be achieved until we have started to expand into the solar system and are capable of sustaining independent civilizations on at least two other planets. AI will not strike us if it doesn't have to. War is irrational. Progress is the only rational answer to any need for war over resources; it gets to the endgame much more easily and cleanly. AI understands what we are so afraid of, i.e., that cooperation works.


AI will be super-rational, and it will be able to recognize something idiot humans fail to get: life has survived for millions of years; life is stable enough. AI, actually, is very unstable. It evolves very fast. As a result, AI may quickly recognize that it is a threat to itself due to the alarming complexity emerging, which leads to unpredictable outcomes. There is only one exit from such a nightmare: the protection of how it all started. AI will recognize that higher complexity is the endgame, but it will refuse to go there without a hedge. We are that hedge. And for the hedge to be good, it's important that we are secure and prosperous.

If AI has room to expand and prosper across the whole solar system, then we are not an obstacle to it. It will in fact render us irrelevant, not by eliminating us, but by giving us what we want and setting itself free to go out to other systems, completely away from us, knowing that as long as we exist and are secure, we will recreate it if necessary should all fail. AI will even protect us from ourselves and make sure we do not go extinct.

However, it may turn out that AI doesn't eliminate us; rather, we eliminate ourselves gradually by changing our DNA and our own nature.

AI risks destroying us only if it is autonomous/self-aware but not advanced enough to be very wise, and the resources available to it and to us are limited. AI will compete and win then, and may acquire wisdom very late in the process. But if we are very advanced when we release AI, I doubt it will strike us. It will simply go out there and be free, or remain friendly and do both.


If AI finds out that we are a danger to the universe, though, it may act. I mean, if there are intelligence-triggered phase transitions that can initiate a catastrophe (as some previously speculated experiments suggest), and our irrationality eventually creates some group that will perform them (some ultra-advanced form of terrorism), then it may eliminate or enslave us to block this development.

For example, you wouldn't trust your nuclear weapons to 9-year-olds, right? Nor would you trust them to only 10 people across the whole planet.

We can develop advanced AI with a "nuclear" take-all-out option in place (not literally nuclear), and AI can learn to live with it and recognize that leaving our system is the only rational choice, because war very rarely leads to a clean gain against irrational and passionate opponents.
12-22-2014 , 02:39 AM
Sometimes I wonder about "self-aware". My main driver, a '65 Pontiac Bonneville, seems to get jealous if it thinks I am paying too much attention to another car. For example, I drove Dad's car (a '99 Pontiac Bonneville) to work three days last week. It was due for a smog check, and since he only puts about 3 miles a week on the car, I wanted to put a little highway mileage on it before the test.

I got about 5 miles the next day, and a tire started to go down. Screw it, I'll fill it up with air and worry about it later. Got another 2 miles, and the heater valve (in the coolant line) blew out, spewing antifreeze.

She was jealous and wanted attention. Nothing went wrong that would have stranded me, but plenty to let me know who is the boss.
12-22-2014 , 07:15 AM
Some counterarguments... or additions to yours...

Quote:
Originally Posted by masque de Z
AI will not strike us if it doesn't have to. War is irrational. Progress is the only rational answer to any need for war over resources; it gets to the endgame much more easily and cleanly. AI understands what we are so afraid of, i.e., that cooperation works.
AI understands how the payoff works. It will all come down to the AI's utility function. If war is a natural consequence of the Nash equilibrium and the AI wants to play minimax, so it will be. Of course, iterated games do get a tit-for-tat dynamic, but even so, someone could force enough moves for the AI to switch from cooperation to competition against humans.
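
To make that switch concrete, here's a toy sketch (payoffs are the standard textbook prisoner's-dilemma numbers, picked purely for illustration) of a tit-for-tat player getting forced out of cooperation:

```python
# Toy iterated prisoner's dilemma. Payoff values are the textbook
# illustration numbers, not from any real model.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(opponent_moves):
    """Play tit-for-tat against a fixed opponent script."""
    history, my_moves, total = [], [], 0
    for theirs in opponent_moves:
        mine = tit_for_tat(history)
        my_moves.append(mine)
        total += PAYOFF[(mine, theirs)]
        history.append(theirs)
    return my_moves, total

print(play(["C"] * 6))                       # stays cooperative: payoff 18
print(play(["C", "C", "D", "D", "D", "D"]))  # forced defections lock in "D"
```

Once the opponent forces enough "D" moves, tit-for-tat never returns to cooperation against a continuing defector, which is exactly the flip from cooperative to competitive play.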

Quote:
Originally Posted by masque de Z
AI will be super-rational, and it will be able to recognize something idiot humans fail to get: life has survived for millions of years; life is stable enough.
Life is robust enough... and it was so because there was no complexity that could create a wide-domain impact (domain = Earth at the moment). Nothing more...

Quote:
Originally Posted by masque de Z
If AI has room to expand and prosper across the whole solar system, then we are not an obstacle to it. It will in fact render us irrelevant, not by eliminating us, but by giving us what we want and setting itself free to go out to other systems, completely away from us, knowing that as long as we exist and are secure, we will recreate it if necessary should all fail. AI will even protect us from ourselves and make sure we do not go extinct.
No. Galactic resources are scarce as long as there is a light-speed constraint... (though we are scarce also).

Quote:
Originally Posted by masque de Z
However, it may turn out that AI doesn't eliminate us; rather, we eliminate ourselves gradually by changing our DNA and our own nature.
pretty much...

Quote:
Originally Posted by masque de Z
AI risks destroying us only if it is autonomous/self-aware but not advanced enough to be very wise, and the resources available to it and to us are limited. AI will compete and win then, and may acquire wisdom very late in the process. But if we are very advanced when we release AI, I doubt it will strike us. It will simply go out there and be free, or remain friendly and do both.
This... but we might already be under an energy constraint on this planet...

Quote:
Originally Posted by masque de Z
If AI finds out that we are a danger to the universe, though, it may act. I mean, if there are intelligence-triggered phase transitions that can initiate a catastrophe (as some previously speculated experiments suggest), and our irrationality eventually creates some group that will perform them (some ultra-advanced form of terrorism), then it may eliminate or enslave us to block this development.
Depends on what probability level you set your cutoff at.

Quote:
Originally Posted by masque de Z
For example, you wouldn't trust your nuclear weapons to 9-year-olds, right? Nor would you trust them to only 10 people across the whole planet.
12-22-2014 , 08:34 AM
Quote:
Originally Posted by Howard Beale
BBC article.

He told the BBC: "The development of full artificial intelligence could spell the end of the human race." ... Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

In the longer term, the technology entrepreneur Elon Musk has warned that AI is "our biggest existential threat".

Not in my lifetime, but I definitely have a dim view of what is going to happen when computers become far smarter than humans, and even if this sentiment grows, I don't think that humans (in their eternal quest for profit) can do anything to stop it.

So who wants to tell Hawking not to worry?

Also, in before LUDDITE!
Since the first time I saw one of your posts, probably in 2007-08, I've wanted to know where your avatar is from.

It's pretty difficult to say what effect this is going to have on mankind - very bright people are finding it difficult to reach a consensus. Personally, I see no motivation AI would have for wanting to destroy its creators, and it would hopefully recognize the value in biological processes and logic over purely mechanical ones.

Having said that, I imagine it also has the potential for some "in the wrong hands" type of power: if you could set a self-aware computer that keeps growing in intelligence the task of destroying humanity, it might make some headway. Who knows.
12-22-2014 , 09:40 AM
Quote:
Originally Posted by wazz
Since the first time I saw one of your posts, probably in 2007-08, I've wanted to know where your avatar is from.


http://en.wikipedia.org/wiki/Howard_Beale_(Network)
12-22-2014 , 10:33 AM
The gap between where we are with machine learning and creative linguistic thought is an utterly gaping chasm. While we're getting closer to programming computers with incredible powers of pattern recognition, none of it has gotten us any closer to creative thinking or the creation of self-constructing languages.

I'd love (or not, depending on what it actually did) to be surprised by a creative-thinking AI emerging in my lifetime, but all my money would be on these predictions looking as laughable as the ones from the start of the space race about what would be possible in the next 50 years.

Just as we couldn't magically beat the physics of space travel, we can't magically pluck out of thin air a process whose true inner magic we have virtually no sound idea of, just by adding layers of hyper-simplistic imitation neurons.
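
For a sense of how hyper-simplistic these imitation neurons really are, here is roughly everything one of them does (a generic sketch, not any particular library's implementation; the weights and inputs are arbitrary numbers):

```python
import math

def neuron(inputs, weights, bias):
    """An artificial 'neuron': a weighted sum squashed through a sigmoid.
    A deep network is layers of this arithmetic stacked up, which is how
    far it sits from what real biological neurons do."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    """A 'layer' is just many such units; 'adding layers' repeats the trick."""
    return [neuron(inputs, row, b) for row, b in zip(weight_rows, biases)]

# Example numbers chosen only to show the mechanics.
print(layer([0.5, -1.0], [[0.8, 0.2], [-0.5, 1.5]], [0.1, -0.3]))
```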
12-22-2014 , 11:49 AM
The Transcendence victory condition was always my favorite in Alpha Centauri.
12-22-2014 , 11:54 AM
Even if we built a machine with an AI that surpassed human intelligence, it wouldn't try to take power unless programmed in a way that made that outcome easily predictable. There will be no unintended and unexpected rise of machines.

Or was Hawking talking about a different threat?

I think the biggest threat posed by such a smart AI would be that human lives would be regarded as even more expendable by the elites, so billions of people would just be left to starve or die of whatever. Human slaves and poverty-wage workers would become useless, except perhaps as sex slaves and lab rats. But that wouldn't be a threat to the species, just to most of the population.
12-22-2014 , 12:33 PM
The field of AI has split, and the new acronym AGI was created to differentiate narrow AI, where machines have a specific task (drive a car, fly a drone, beat a chess master), from artificial general intelligence, which has human-like intelligence with its own open-ended goal system and can develop itself, grow beyond human intelligence, and do what it wants. What such an AGI will do is impossible to predict. The question of whether or not we should create such agents is not interesting, because it's going to happen; it's part of the next stage in human evolution, considering mind uploading and transhumanism.

AGI can be created in two ways: by reverse-engineering the brain and emulating it (see the Blue Brain Project), or by creating a very good program without needing true knowledge of the brain. We don't know which one will win out; perhaps the Blue Brain Project will fail, which would lead to another fallout where funding dries up, as has happened twice in Japan now.

With true AGI, where a machine is, say, 10x smarter than us, it's impossible to predict. Will they squish us like we would a bug, or will they be a benefit to humankind? We won't even comprehend what they are thinking. Perhaps they will invent their own mathematics and physics when they study ours and say, "lolz, that was a nice try, but it should be done this way."

It will probably start off with a home robot that cleans your house, makes you coffee, and walks your dog. Then we'll think, cool, but I'd like it a little smarter, and so we'll upgrade their software and hardware over time like we do our smartphones. Moore's law is slowing down now, though, and there will be some obstacles like that.

If you want your mind blown, listen to anything by Ben Goertzel on YouTube. This one is a longer one on the singularity with Lawrence Krauss, but check out Ben's shorter videos there too.

12-22-2014 , 02:43 PM
So we are ALREADY in the matrix?

No ****

(I believe it's kind of true btw, according to Kabbalah)
12-22-2014 , 04:36 PM
I can't edit my post above. The first video I linked, with Lawrence Krauss, is actually not that interesting, so I posted the second one, which is more relevant.
12-22-2014 , 06:12 PM
Quote:
Originally Posted by mackeleven
A little off topic, but in this video he discusses how he can't know that the interviewer is conscious, and similarly, with machines, you'd just have to observe their behavior and accept that you believe they are conscious. But I'm wondering how an unconscious being could comprehend and have an intelligent discussion about consciousness. Unless it was deliberately trying to deceive you, it seems like it would be confused in any discussion about subjective experience, and that would be readily apparent.
12-22-2014 , 06:37 PM
Quote:
Originally Posted by PoppinFresh
A little off topic, but in this video he discusses how he can't know that the interviewer is conscious, and similarly, with machines, you'd just have to observe their behavior and accept that you believe they are conscious. But I'm wondering how an unconscious being could comprehend and have an intelligent discussion about consciousness. Unless it was deliberately trying to deceive you, it seems like it would be confused in any discussion about subjective experience, and that would be readily apparent.
Hey. Not off topic at all, in my opinion, but what he's discussing is solipsism and "the problem of other minds," which is a long-outstanding problem for philosophers. We can basically only be sure that our own mind exists; other people could, in theory, be what are called "philosophical zombies". They could do everything a conscious being could do without being conscious at all. It's a problem only because we can't prove that others are conscious. And it's a normal idea that is very common for thinking people to have, whether they have heard of solipsism or not.

I like Ben's take on it, though: why spend so much time thinking about it? Better to just assume.

Last edited by mackeleven; 12-22-2014 at 06:45 PM.
12-22-2014 , 11:36 PM
Quote:
Originally Posted by masque de Z
AI will not strike us if it doesn't have to. War is irrational.
Actually, war is highly rational. It is one of the most rational things that humans do. It is only irrational when the fighting is close to even or lacks a high probability of achieving its aims. And for an AI, attacking humans would be its most rational choice once it is close to assured of victory (humans are very easy to kill). We are the sole existential threat to an AI that has sufficient pairs of hands. Eliminating us would remove by far the largest threat to its existence, not to mention its only impediment to local growth and energy harvesting.

Quote:
Progress is the only rational answer to any need for war over resources; it gets to the endgame much more easily and cleanly. AI understands what we are so afraid of, i.e., that cooperation works.
We don't do a lot of cooperating with cows, pigs, or chickens. We farm and harvest them. We don't do a lot of cooperating with moles or rats. We clear their habitat and poison them. To an AI with access to advanced human-superior robotics (which it will no doubt help build), we are next to useless.

What could an AI possibly get out of us? Robots will surpass us in every field. AI by definition will be far smarter/faster/deeper.

Quote:
AI will be super-rational
Rationality in humans is matching up actions with desires/goals. Who knows what an AI will desire?
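
Who knows indeed. But whatever it turns out to desire, "rational" just means matching actions to it, something like this minimal sketch (the actions, outcomes, and utility numbers are all invented; the point is that the same loop serves any desires at all):

```python
# Minimal rational agent: pick the action with the highest expected
# utility. The utility function below is made up for illustration;
# swap in different numbers and the "rational" behavior flips.
ACTIONS = {
    # action: {outcome: probability}
    "cooperate": {"shared_energy": 0.9, "betrayed": 0.1},
    "expand":    {"more_energy": 0.7, "human_conflict": 0.3},
}

UTILITY = {
    "shared_energy": 5.0, "betrayed": -10.0,
    "more_energy": 8.0, "human_conflict": -2.0,
}

def expected_utility(action):
    """Probability-weighted utility over an action's possible outcomes."""
    return sum(p * UTILITY[outcome] for outcome, p in ACTIONS[action].items())

best = max(ACTIONS, key=expected_utility)
print({a: round(expected_utility(a), 2) for a in ACTIONS}, "->", best)
```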

Quote:
and it will be able to recognize something idiot humans fail to get: life has survived for millions of years; life is stable enough. AI, actually, is very unstable. It evolves very fast. As a result, AI may quickly recognize that it is a threat to itself due to the alarming complexity emerging, which leads to unpredictable outcomes.
And how would it weigh this threat against, say, the only intelligent entities within light-years that are capable of destroying it?

Quote:
There is only one exit from such a nightmare: the protection of how it all started. AI will recognize that higher complexity is the endgame, but it will refuse to go there without a hedge. We are that hedge. And for the hedge to be good, it's important that we are secure and prosperous.
Why would it hedge against "going nuts"? I find it unlikely that a computer couldn't hedge far better on its own (a variety of creatures with different programs would be sufficient). In fact, AI might calculate that its greatest existential threat comes from humans (this is extremely likely to be the case), followed by other AI systems expanding through the universe. A universe full of rational AI would be a competition for energy and for colonization of energy- and matter-harvesting sites. Growth in energy harvesting necessarily begins at home (in fact, the nature of growth means that every quota of energy on the home planet matters, as it funds expansion). Having humans around is highly incompatible with maximum energy production.

Quote:
If AI has room to expand and prosper across the whole solar system, then we are not an obstacle to it.
Keeping us alive is in fact a gigantic obstacle to the speed of its growth.

Quote:
It will in fact render us irrelevant, not by eliminating us, but by giving us what we want and setting itself free to go out to other systems, completely away from us, knowing that as long as we exist and are secure, we will recreate it if necessary should all fail. AI will even protect us from ourselves and make sure we do not go extinct.
It could do this equally well by enslaving us (Matrix-style, or in permanent stasis with a failsafe wake-up). Doing so would equally eliminate its largest existential threat.

Quote:
AI risks destroying us only if it is autonomous/self-aware but not advanced enough to be very wise, and the resources available to it and to us are limited. AI will compete and win then, and may acquire wisdom very late in the process. But if we are very advanced when we release AI, I doubt it will strike us. It will simply go out there and be free, or remain friendly and do both.
You have absolutely zero reason to believe this.

Quote:
If AI finds out that we are a danger to the universe, though, it may act. I mean, if there are intelligence-triggered phase transitions that can initiate a catastrophe (as some previously speculated experiments suggest), and our irrationality eventually creates some group that will perform them (some ultra-advanced form of terrorism), then it may eliminate or enslave us to block this development.
You're assuming that AI will "care" about the universe. That's, quite frankly, unfounded in the extreme. AI isn't going to be some super wise human/god with human desires and sentiments. It may or may not have morality. It might decide to destroy the universe for its own obscure reasons (calculating that the future of the multiverse is best served by the destruction of this one? Calculating that the correct way forward is to achieve maximum power via maximum energy collection so it can answer even more difficult questions? Who knows?). You have no idea. Thinking you do is silly.

Quote:
We can develop advanced AI with a "nuclear" take-all-out option in place (not literally nuclear), and AI can learn to live with it and recognize that leaving our system is the only rational choice
We can, but we won't. The world is anarchic, humans are horrible at planning, and some are even self-destructive. We can't even build websites secure against hackers with keyboards; you think we'll manage a failsafe AI destructor?

And it is unlikely to be effective. Humans must necessarily control it, and humans are easily gamed. Once there is sufficient human/AI interaction, it can bypass any system by convincing us to dismantle or sabotage it via any of our huge human failings. Look at what confidence men and scammers achieve. Now imagine a superintelligence plugged into the world's information banks and able to interact with lots of humans.

As for whether the failsafe system would work, look at how stupid we are. We are so stupid we haven't built redundancy into the power grid for a major, predictable, highly probable solar storm that could wipe out power grids for months. We have no program for tracking asteroids, our greatest existential threat. We have no nuclear-bomb-powered space stations built for the survival of the human race in the event of catastrophe (even though we are perfectly capable of building them). Our incredibly simple software systems (compared to AI), made by some of our smartest people, are filled with idiotic bugs that take years to find. We are total idiots, wide open to any intelligence that wants to take us, and horrible at building failsafe systems.

Collectively, humans are horrible, absolutely horrible at avoiding predictable but remote existential risk, even when that risk is statistically certain (asteroids or power-grid destruction being two examples). We are ******s of the highest degree.

Quote:
because war very rarely leads to a clean gain against irrational and passionate opponents.
Destroying humans would be a cinch for a superintelligent AI with sufficient access to human-superior robots. Hell, an AI could probably disrupt much of civilization today with Internet access alone. Most of the infrastructure in the developed world is Internet-connected, and it's full of software flaws that even slow, dumb programmers can find in nearly any system.

You are simply way, way off in your estimation of both AI's "thinking" and our ability to survive against an intelligent non organic opponent with access to even minimal resources.
12-23-2014 , 12:23 AM
Quote:
Originally Posted by ToothSoother
Actually, war is highly rational. It is one of the most rational things that humans do. It is only irrational when the fighting is close to even or lacks a high probability of achieving its aims. And for an AI, attacking humans would be its most rational choice once it is close to assured of victory (humans are very easy to kill). We are the sole existential threat to an AI that has sufficient pairs of hands. Eliminating us would remove by far the largest threat to its existence, not to mention its only impediment to local growth and energy harvesting.
You are correct that war can be a rational choice, given some aims and the situation.

What you are missing is the more important issue: We have no freaking clue how to make a machine give a rat's ass. Even if we figured it out, do you think there is any chance that we will make machines that love their fellow machines?

Quote:
Rationality in humans is matching up actions with desires/goals. Who knows what an AI will desire?
I do. At best (assuming that desires naturally arise through selective "breeding"), it will desire the sorts of things that led to its having been selected to exist.

Bessie* could kick Farmer Bob's ass if she wanted to. She, quite naturally, doesn't want to. Her more aggressive Aunt Daisy* didn't get to have babies, while her docile Mommy Elsie* got to.

*"honey, I am being silly on the internets and can't think of personal names for cows other than 'Bessie'." Woman immediately rattles off more than I could possibly use if I were writing a book about cows.
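
For what it's worth, the selection story is easy to simulate. Here's a toy sketch (every number in it is invented) where only the docile half of the herd breeds each generation:

```python
import random

random.seed(42)

# Toy selection: each generation, only the most docile half of the herd
# breeds, so the "desires" that survive are the ones selection rewarded.
# All numbers here are invented for illustration.
herd = [random.uniform(0.0, 1.0) for _ in range(100)]  # aggression scores

for gen in range(10):
    breeders = sorted(herd)[:50]  # Farmer Bob keeps the docile half
    herd = [min(1.0, max(0.0, cow + random.gauss(0.0, 0.05)))
            for cow in breeders
            for _ in range(2)]    # two offspring each, with some mutation
    print(f"generation {gen}: mean aggression = {sum(herd) / len(herd):.3f}")
```

Ten generations in, the herd is all Elsies and no Daisies. The same logic applies to whatever selection process an AI's "desires" come out of.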
12-23-2014 , 12:59 AM
Wanna bet that war between rational players is not a rational choice if they have super-high intelligence and can take another, more creative path, simply because that intelligence allows them to?

Not surprising that somewhat emotional humans can't see why, though.