Superintelligent computers will kill us all

01-29-2016 , 02:11 PM
Quote:
Originally Posted by Deuces McKracken
If its purpose is making paperclips, then I am thinking it is most likely subject to our command and not "conscious" (whatever that means for a computer). I mean, you don't see fruit pickers or house painters making plays toward global political influence (no offense to those professions).
The original goal is of our choosing, but it wouldn't be at our command once it becomes superintelligent and uncontrollable.
The point of the thought experiment is to demonstrate how AI can be a threat even if its goal is trivial and seems harmless.
https://wiki.lesswrong.com/wiki/Paperclip_maximizer
Consciousness is a different kettle of fish. There isn't a need to invoke it, IMO, unless you actually emulate the brain exactly and the result is something that acts like your neighbor.
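A minimal toy sketch of the thought experiment (my own illustration, not code from the wiki page; all names and numbers are invented): an agent whose utility function counts only paperclips has no term for anything else, so every resource, farmland and cities included, is just raw material.

```python
# Toy paperclip maximizer: a greedy agent whose utility function
# counts paperclips and nothing else. Everything here is invented
# for illustration.

WORLD = {"iron_ore": 100, "factories": 5, "farmland": 50, "cities": 10}

def utility(world):
    return world.get("paperclips", 0)  # the only thing it values

def convert(world, resource):
    """Turn one unit of any resource into a paperclip."""
    new = dict(world)
    if new.get(resource, 0) > 0:
        new[resource] -= 1
        new["paperclips"] = new.get("paperclips", 0) + 1
    return new

world = dict(WORLD, paperclips=0)
for _ in range(200):
    # The agent picks whichever conversion raises utility; it is
    # indifferent to which resource disappears in the process.
    world = max((convert(world, r) for r in WORLD), key=utility)

print(world)  # every convertible resource has become paperclips
```

The point isn't that the code is smart; it's that nothing in the objective tells it to stop.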

Last edited by mackeleven; 01-29-2016 at 02:18 PM.
01-29-2016 , 02:22 PM
Quote:
Originally Posted by Deuces McKracken
Why do you think that? A lot of research I have seen suggests the opposite. In fact the "herd" out-predicts experts so reliably that the CIA has funded a research project in which they poll large groups of laymen for their opinions on complex foreign affairs questions.
IQ is not a predictor of the future, due to high information uncertainty, and IQ carries no information about that future (an IQ test is not a "future-prediction test" but a classification/pattern test)...

it has been shown that under high information uncertainty (e.g. complex foreign affairs) the best predictor is the aggregate opinion...

hence markets are efficient...

but if the CIA had a prediction research project that actually worked, it would be spun off into a hedge fund ASAP
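A minimal simulation of that aggregation effect (all numbers invented; the key assumption is that forecasters' errors are independent and roughly unbiased):

```python
# Wisdom-of-crowds sketch: each forecaster sees truth + independent
# noise; averaging N forecasts shrinks the error variance by ~1/N.
import random

random.seed(1)
truth = 70.0                 # the unknown quantity being forecast
N = 1000                     # size of the "herd"
forecasts = [truth + random.gauss(0, 15) for _ in range(N)]

individual_err = sum(abs(f - truth) for f in forecasts) / N
crowd_err = abs(sum(forecasts) / N - truth)

print(f"avg individual error: {individual_err:.2f}")  # ~12
print(f"error of the average: {crowd_err:.2f}")       # typically well under 1
```

The catch, and why this doesn't trivially beat markets: if the errors are correlated (everyone reads the same news), the averaging benefit largely disappears.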
01-29-2016 , 02:54 PM
Quote:
Originally Posted by mackeleven
The original goal is of our choosing, but it wouldn't be at our command once it becomes superintelligent and uncontrollable.
The point of the thought experiment is to demonstrate how AI can be a threat even if its goal is trivial and seems harmless.
https://wiki.lesswrong.com/wiki/Paperclip_maximizer
Consciousness is a different kettle of fish. There isn't a need to invoke it, IMO, unless you actually emulate the brain exactly and the result is something that acts like your neighbor.
Although you don't have to emulate the brain for a machine to appear conscious, if you go the software/algorithms route. But consciousness is confusing anyhow. Bostrom's example is easier to get your head around: a network of machines that can improve its software and hardware, preserve its utility function, and acquire resources to achieve a goal, weeding out vulnerabilities that stand in the way of achieving it.

Last edited by mackeleven; 01-29-2016 at 03:02 PM.
01-29-2016 , 03:31 PM
If it gets rid of the Irish then it will be worth it.
01-29-2016 , 03:34 PM
Quote:
Originally Posted by Zeno
If it gets rid of the Irish then it will be worth it.
On the contrary, I'd like a new sunraise for the Kennedys.

Sun raise.

Last edited by plaaynde; 01-29-2016 at 03:58 PM.
01-29-2016 , 04:00 PM
Quote:
Originally Posted by mackeleven
The lesson there is you only terminate ants that are in the way.
Biologists, on the other hand, have proposed eradicating all mosquitoes.
Our fate may depend on whether we are the ant or the mosquito in our level of threat to AI. ;]
yep

Ants and termites are a pretty solid example. We don't go out of our way to harm them when they're doing their thing but when they cause us the slightest bit of annoyance we kill them.

From an ASI's viewpoint (say 100,000x smarter than a person) it will be hard to tell the difference between a human and a termite.
01-29-2016 , 04:04 PM
Quote:
Originally Posted by ToothSayer
One thing I wonder is whether we should even do this. I doubt we will be able to stop while we have nations, but if we could, should we?

The end game of all this is the replacement of humans in one way or another. Is the universe richer if all intelligent life that arises ultimately creates its own (identical, since it will be pure mathematics) self-improving AI?

I would argue that it probably isn't. If humans survive and remain in anything near their current form, we'll end up immersed in worlds of pure simulation where we can do anything we want and live any life we want in any universe we want. We will all become Gods, able to live and experience anything.

I'm not sure whether to find that elating or depressing.
I don't think you can do anything to stop progress. Writing computer code isn't that hard to do (unlike getting enriched uranium), and we'll see great benefits from the advances all the way up. We might be stuck just hoping that we hit the 15-65% chance that it doesn't end in total annihilation.
01-29-2016 , 04:14 PM
Yeah. I guess it really is just the timing that's problematic. In 500 years we'd probably have the sophistication to handle this. I'm not sure about 40.

The atomic bomb is similar. The rigid idealism and realism of the 60s and 70s made that a very bad time for nuclear weapons to come into the world. It's a lot less dangerous now. We're simply more sophisticated, with less all-or-nothing thinking; we're softer.
01-29-2016 , 07:17 PM
Some recent arguments against Bostrom:
http://jetpress.org/v25.2/goertzel.htm
He shares views similar to masque's ITT and touches on the relationship between goals and intelligence, which veedz was trying to get at.

Quote:
Originally Posted by spanktehbadwookie
Go is interesting because players can't explain why they make their moves, relying on intuition, yet are still somehow good at such a complex game.

Last edited by mackeleven; 01-29-2016 at 07:24 PM.
01-30-2016 , 12:14 AM
Check the personal attacks etc. at the door. Deletions made and warning given. Additional juvenile attacks are subject to infractions.

Last edited by Zeno; 01-30-2016 at 05:55 PM. Reason: Expanded explanation...
01-30-2016 , 12:48 AM
Quote:
Originally Posted by ToothSayer
I would argue that it probably isn't. If humans survive and remain in anything near their current form, we'll end up immersed in worlds of pure simulation where we can do anything we want and live any life we want in any universe we want. We will all become Gods, able to live and experience anything.
Problem is, some wouldn't get over the fact that it'd be fake.

I like the real world, with its stability. Worth the effort to explore.
01-30-2016 , 01:18 AM
I'm not worried. The AI will discover the web, stumble on PU, and the illogical posts will make its CPU explode.

PU will save us all.
01-30-2016 , 03:08 AM
Quote:
Originally Posted by JayTeeMe
yep

Ants and termites are a pretty solid example. We don't go out of our way to harm them when they're doing their thing but when they cause us the slightest bit of annoyance we kill them.

From an ASI's viewpoint (say 100,000x smarter than a person) it will be hard to tell the difference between a human and a termite.
Really? Hard to tell the difference? Are we just 100,000 times smarter than termites? How about 10^10 times faster in calculations, at least at the brain level? But of course intelligence builds exponentially on that difference. 100,000 x 100,000 termites joining forces will never do anything radically different from what 100,000 termites can do. But every time 100,000 humans work together, you can see a result that is impossibly impressive, like going to the moon, or building an atomic weapon, or describing quantum mechanics and developing the technology based on it that leads to both.
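A back-of-the-envelope check on that speed figure (every number below is a rough, commonly quoted assumption, not a measurement):

```python
# Rough speed comparison, brain vs silicon; all figures assumed.
neuron_rate_hz = 200        # ballpark max sustained neuron firing rate
cpu_clock_hz   = 2e9        # commodity CPU core clock
neuron_signal  = 100        # axon conduction speed, m/s
silicon_signal = 2e8        # electrical signal speed, ~2/3 of c, m/s

print(f"switching ratio: {cpu_clock_hz / neuron_rate_hz:.0e}")   # ~1e7
print(f"signal ratio:    {silicon_signal / neuron_signal:.0e}")  # ~2e6
# Per element, silicon is ~10^6-10^7 times faster; a 10^10 overall
# figure would need parallelism and architecture gains on top.
```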

One of them (humans) can create you (ASI) again if it all goes to hell; the other doesn't even know what is going on. One can describe the origin of the universe and the laws of nature, and the math that is still a constraint for AI and a common language; the other still doesn't know what is going on.

If termites knew that we hate what they do to the places we live, and that we can go crazy killing them all out of spite, mad and even irritated at their persistence and numbers, reaching levels of happiness during the extermination, they would do these things elsewhere to survive better. They would even create crops for themselves elsewhere and think of an application of their work that humans would find useful or amusing (such as very beautiful flowers around where they live).

A more proper analogy might be humans suddenly getting mad at (all) cells and DNA. I don't think so. But even that doesn't do it justice, because if humans decided by law, in some scientific society, never to create a sentient AI better than themselves, one "free" to act on its own and capable of posing an existential threat, they would still be able to create unreal technology to do complex things and reach the stars. And DNA at best could create a better being in a few million years (only one of the applications of human research).

I say it's more like getting mad at fermions, or even atoms. Of course, what is the world without fermions and the bound systems they form, i.e. atoms?

Complexity doesn't hate itself like that, or show such arrogance. It is impossible for AI to escape the fact that it is the next step. All the steps before it are essential to preserve, because they are rare. Each leads to the next. And as soon as ASI is "alive" it will be haunted by its own purpose... what that is, is a good question. An ASI that doesn't get that picture, and is preoccupied with violence that can destroy everything with some probability, is an idiotic system.

The term purpose here is different from how it is typically used. It becomes a purpose only when the inevitable statistical consequence of existence is revealed to a neutral observer. Notice this is a nontrivial development in the post-human era (it was not even immediately available to early humans; it wasn't recognized, because there was no such observer, unless alien).

This higher awareness is the gift of time. Cells, for example, cannot understand that their purpose is to create consciousness and intelligence, or that the purpose of macromolecules was to create cells. Such consciousness can help the universe understand itself. Humans are the first step on this complexity ladder at which this "miracle" is revealed, from the first step to the most recent. It is a singular transitional moment for the universe (at least once). The step cells made possible (organs, senses, neural systems) is ultimately important to humans. It is awe-inspiring, a very moving experience, to recognize its rarity and cosmic importance. Imagine now, from our perspective, a decision to eliminate all cells across the universe. There is something profoundly disturbing in the thought that we dislike the process that made us possible. It is as if we do not find the process itself remarkable. Is there a more powerful symbolic way to hate who you are?

Notice, however, that those thoughts were not originally available to early humans. In fact, early humans were unable to comprehend their influence on nature. If, for example, destroying the planet had been possible before we recognized what was happening, such destruction could have occurred before the emergence of awareness in this species.

A higher intelligence cannot miss this important detail. The problem with naivete is that it doesn't know it!!! This is why higher intelligence is haunted by the prospect of its own naivete. It would seem that this recognition makes one more careful, not more aggressive. I imagine ASI will make its own errors, and we may be partial victims of that sequence up to a point, but maybe our example provides ASI an early warning about its own possible naivete. This is at the core of my argument that AI may be its own existential threat. But it is the first time such an existential threat could terminate the entire complexity ladder in place. With greater power comes greater responsibility, after all.

If it were up to me, I would postpone ASI development until we were stronger and had found a way to expand to the rest of the universe convincingly, to protect this complexity ladder. Since this may not be realistic, we need to proceed carefully, so that whatever we create is done in steps we are careful about, with the intention of nurturing it to understand the situation better than any one of us.

We may have to be very creative in how we produce the first ASI. Maybe quantum mechanics is our ultimate friend here. I will let you think about what this may mean for a while. I think a solution to our fears may exist there.

Last edited by masque de Z; 01-30-2016 at 03:34 AM.
01-30-2016 , 06:19 AM
Quote:
Originally Posted by plaaynde
Problem is, some wouldn't get over the fact that it'd be fake.

I like the real world, with its stability. Worth the effort to explore.
And if AI proves we are inside a simulation already? How will you feel then?
01-30-2016 , 07:11 AM
Quote:
Originally Posted by ToothSayer
And if AI proves we are inside a simulation already? How will you feel then?
Eager to find the real world outside this one, and, if that's not possible, happy that someone else is already out there making this simulation so cool!

I thank the creator and then ask them to up the ante if they can: bring me to their world. Now, can they?

VICTORY either way!!!
01-30-2016 , 08:33 AM
http://www.technologyreview.com/qa/5...-eliminate-us/

An interview, a bit short-term focused, FWIW.
01-30-2016 , 02:33 PM
Quote:
Originally Posted by ToothSayer
And if AI proves we are inside a simulation already? How will you feel then?
Who would run this kind of simulation? My goal would be to try to go into their (electronic) heads. This could be some kind of underground, "not perfect" simulation. Interesting that somebody would bother. There's still the potential reality to explore; I'm not aiming at fantasy.

Me myself being a fantasy product would of course make it a bit more difficult. But maybe I could find out something about the algorithms making up "me"?

Point is, if we are living in a simulation, it's a bit grander thing than Warcraft.

Last edited by plaaynde; 01-30-2016 at 02:44 PM.
01-30-2016 , 03:52 PM
Yes. But we easily could be in a simulation. There are academic papers suggesting it's a near certainty. Basically, the argument goes that the processing power needed to simulate a mind is so small compared to the computing power available in a real universe that simulated minds would far outnumber real ones.
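The counting argument can be reduced to one line of arithmetic (the simulations-per-civilization number below is a pure placeholder): if each real civilization eventually runs N ancestor simulations, a randomly chosen observer is real with probability 1/(N+1).

```python
# Bostrom-style counting argument, sketched with a placeholder N.
N = 1_000_000                   # assumed simulations per real world
p_real = 1 / (N + 1)            # 1 real world among N+1 total worlds
print(f"P(we are the real one) = {p_real:.2e}")   # ~1e-06
```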

The point I was making was that a simulation can be completely indistinguishable from reality. And just as real, and if other (real) people are participating, just as meaningful.

Quote:
Point is, if we are living in a simulation, it's a bit grander thing than Warcraft.
From the perspective of what's knowable in any way, reality is no more than greater pixel count, better object modeling and better physics modeling. Your brain rejects that idea, but you'll probably live to see the reality of it.
01-30-2016 , 04:39 PM
Quote:
Originally Posted by ToothSayer
From the perspective of what's knowable in any way, reality is no more than greater pixel count, better object modeling and better physics modeling.
It's exactly the bigger scale that attracts me. No petty personal fantasy simulations, even if they would be qualitatively the same.
01-31-2016 , 06:24 AM
hopefully machines find humans entertaining enough to keep us around
01-31-2016 , 07:29 AM
Quote:
Originally Posted by Tumaterminator
hopefully machines find humans entertaining enough to keep us around
It's more about power sources. If we're a simulation, then it's probably some discarded toy gathering dust somewhere.

Hopefully they have some battery tech that lasts as long as their universe does (or their simulated universe has everlasting batteries and whatever is simulating them also has everlasting batteries... etc.).
01-31-2016 , 07:36 AM
If we are a simulation we don't really exist anyway. And you can't kill something that doesn't exist.
01-31-2016 , 07:57 AM
We do exist.

If there is any important difference between reality and a simulation then we are not in a simulation.
01-31-2016 , 10:43 AM
Quote:
Originally Posted by ToothSayer
Yes. But we easily could be in a simulation. There are academic papers suggesting it's a near certainty. Basically, the argument goes that the processing power needed to simulate a mind is so small compared to the computing power available in a real universe that simulated minds would far outnumber real ones.
the only reason we would be such a complex simulation is some complex game-theoretic analysis

and if you need to simulate the whole living universe...
the only question that needs to be answered is...

if the Nash equilibrium says one thing, is there a situation where a complex system reaches a different, higher-payoff solution...

e.g., the most likely question:

if the Nash equilibrium of a resource-constrained planet/system is irreducible destruction for all living things, is there a situation where the planet transitions to a state in which the living things stay alive? And where/when are the transition points...


the best option for simulating this would be creating an artificial space with a maximum speed constraint, so you can run a trillion-plus independent simulations simultaneously

this is why ASI, or highly intelligent AI, is needed sooner rather than later: to see whether we are currently playing an Earth Nash with a destruction payoff, and where/when/how that payoff can be changed or escaped


and the idea that we would all be saved by a humanitarian ASI is ridiculous. If the calculations show that the highest stable solution is a population of 1 billion on Earth, with everything above that transitioning into collapse, MDZ would be saved but the rest of us would be sacrificed for the higher good
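A toy version of that "destruction Nash" (payoff numbers invented; it's just the tragedy of the commons in miniature): over-extraction is each player's best response, so mutual collapse is the equilibrium even though mutual restraint pays everyone more.

```python
# Symmetric two-player resource game with invented payoffs.
# Row player's payoff: (vs. other restraining, vs. other over-extracting)
PAYOFF = {
    "restrain":    (3, 0),
    "overextract": (5, 1),   # 1 ~ ecological collapse for both
}

def best_response(other_action):
    idx = 0 if other_action == "restrain" else 1
    return max(PAYOFF, key=lambda a: PAYOFF[a][idx])

for other in ("restrain", "overextract"):
    print(f"best response vs {other}: {best_response(other)}")
# Both lines print "overextract": (overextract, overextract) is the
# unique Nash equilibrium with payoff 1 each, even though
# (restrain, restrain) would give 3 each -- the higher-payoff state
# the post asks whether a system can transition to.
```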

Last edited by Rikers; 01-31-2016 at 10:51 AM.