Quote:
Originally Posted by JayTeeMe
yep
Ants and termites are a pretty solid example. We don't go out of our way to harm them when they're doing their thing but when they cause us the slightest bit of annoyance we kill them.
From ASI's viewpoint (say 100,000x smarter than a person) it will be hard to tell the difference between a human and a termite.
Really? Hard to tell the difference? Are we merely 100,000 times smarter than termites? We are at least 10^10 times faster in raw computation at the brain level alone. And of course intelligence builds exponentially on that difference. 100,000 × 100,000 termites joining forces will never do anything radically different from what 100,000 termites can do. But every time 100,000 humans work together, you see results that are impossibly impressive: going to the moon, building an atomic weapon, or describing quantum mechanics and developing the technology based on it that leads to both.
One (humans) can create you (ASI) again if it all goes to hell; the other doesn't even know what is going on. One can describe the origin of the universe and the laws of nature, and the math that is still both a constraint for AI and a common language; the other still doesn't know what is going on.
If termites knew that we hate what they do to the places we live, and that we can go on killing sprees out of spite when irritated by their persistence and numbers, even reaching levels of happiness during the extermination, they would do these things elsewhere to survive better. They would even grow crops for themselves elsewhere, and think of an application of their work that humans would find useful or amusing (such as very beautiful flowers around where they live).
A proper analogy might be humans suddenly getting mad at (all) cells and DNA. I don't think so. But even that doesn't do it justice, because if humans decided by law in some scientific society never to create a sentient AI that is better than them and "free" to act on its own, capable of posing an existential threat, they would still be able to create unreal technology to do complex things and reach the stars. DNA, at best, could create a better being in a few million years (and that is only one application of human research).
I say it's more like getting mad at fermions, or even atoms. Of course, what is the world without fermions and the bound systems they form, i.e. atoms?
Complexity doesn't hate itself like that, or show such arrogance. AI cannot escape the fact that it is the next step. All the steps before it are essential to preserve because they are rare; each leads to the next. And as soon as ASI is "alive" it will be haunted by its own purpose... What that purpose is, is a good question. An ASI that doesn't get that picture and is preoccupied with violence that could destroy everything with some probability is an idiotic system.
The term purpose here is used differently than it typically is. It becomes a purpose only when the inevitable statistical consequence of existence is revealed to a neutral observer. Notice this is a nontrivial development of the post-human era (it was not even immediately available to early humans; it wasn't recognized, because there was no such observer, unless alien).
This higher awareness is the gift of time. Cells, for example, cannot understand that their purpose was to create consciousness and intelligence, or that the purpose of macromolecules was to create cells. Such consciousness can help the universe understand itself. Humans are the first step on this complexity ladder from which this "miracle" is revealed, from the first step to the most recent. It is a singular transitional moment for the universe (at least the first one). The step cells made possible (organs, senses, neural systems) is ultimately important to humans. It is awe-inspiring, a very moving experience, to recognize its rarity and cosmic importance. Imagine now, from our perspective, a decision to eliminate all cells across the universe. There is something profoundly disturbing in the thought that we dislike the process that made us possible. It is as if we do not find the process itself remarkable. Is there a more powerful symbolic way to hate what you are?
Notice, however, that those thoughts were not originally available to early humans. In fact, early humans were unable to comprehend their influence on nature. If, for example, destroying the planet had been possible before we recognized what was happening, such destruction could have occurred before the emergence of awareness in this species.
A higher intelligence cannot miss this important detail. The problem with naivete is that it doesn't know it! This is why a higher intelligence is haunted by the prospect of its own naivete. It would seem that recognition makes one more careful, not more aggressive. I imagine ASI will make its own errors, and we may be partial victims of that sequence up to a point, but maybe our example provides ASI an early warning about its own possible naivete. This is at the core of my argument that AI may be its own existential threat. But it is the first time such an existential threat could terminate the entire complexity ladder in place. With great power comes great responsibility, after all.
If it were up to me, I would postpone ASI development until we were stronger and had convincingly found a way to expand into the rest of the universe, to protect this complexity ladder. Since this may not be realistic, we need to proceed carefully: whatever we create should be built in steps we are careful about, with the intention of nurturing it to understand the situation better than any one of us.
We may have to be very creative in how we produce the first ASI. Maybe quantum mechanics is our ultimate friend here. I will let you think about what that may mean for a while. I think a solution to our fears may exist there.
Last edited by masque de Z; 01-30-2016 at 03:34 AM.