Quote:
Originally Posted by A_C_Slater
"What do we care for your suffering? Pain is merely information relayed by your organic nerve synapses." -- The first A.I. torture bot's standard dialogue protocol
To which we respond by showing a video of a nuclear detonation near a robot that is far enough from the blast not to be destroyed, but close enough (within a few km) for the EM pulse to fry its circuits, and then telling the robot: here is a button, and if I press it the same thing happens here. So here is your definition of pain, buddy. You do not exist after this button is pressed. How do you like it? If that doesn't get through, we can try other effects where the processing of the AI unit gets messed up by doing various things that interfere with its proper function. Then it suddenly recognizes what pain is, above and beyond an information relay process. Only a robot that doesn't care about existing or functioning properly will fail to take notice of that example. But an advanced AI ought to care about this; if not all of them, then at least their top unit should.
This is why we must never develop completely autonomous, independent strong AI that can wipe us out before we have started colonizing the galaxy. There is always a remote nonzero chance it will arrive at the conclusion (or a premature, ultimately wrong but locally confident conclusion) that we, and even they themselves, may not be important or sensible to exist. We must in fact never develop autonomous AI until we have used it to learn a great deal more about the universe and physics, so that we can better judge what is potentially going to happen, and we should even ask it to tell us, before we release it, without telling it that we will release it.
All autonomous AI we develop must come with a "nuclear" option (wipe them all out at once), and they must always operate knowing this is true. We must never relinquish that power until we know more.
Because, for example, right now the solution for earth may be to kill 99% of humans in some well-studied model of the future. I mean, if someone told me (trusting their work as a given) that there is a 99.999999% chance the planet and its life are destroyed at the pace we are going, but that this drops to only 10% if we reduce the population to 1/100 and establish a new social organization that never returns to the current state without far more technology at hand, then I can honestly tell you I would gladly press the button to kill 99% of humans, myself included (if not among the lucky 1%). It is the rational choice.
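A minimal sketch of the expected-value comparison behind that choice, taking the post's hypothetical figures as given inputs (none of these numbers are real estimates):

Code:
# Illustrative expected-value comparison using the hypothetical
# numbers from the post above (not real estimates).
p_doom_status_quo = 0.99999999   # chance life is destroyed if we continue as-is
p_doom_after_cull = 0.10         # chance of destruction after the 99% reduction
survivor_fraction = 0.01         # 1 in 100 humans survive the cull

# Probability that life itself survives under each choice:
survival_status_quo = 1 - p_doom_status_quo            # about 1e-8
survival_after_cull = 1 - p_doom_after_cull            # 0.90

# Your personal survival: you must survive the cull AND the future.
personal_after_cull = survivor_fraction * survival_after_cull   # 0.009

print(f"do nothing: life survives with p = {survival_status_quo:.8f}")
print(f"cull 99%:   life survives with p = {survival_after_cull:.2f}")
print(f"cull 99%:   YOU survive with p = {personal_after_cull:.3f}")

Note that even your own survival chance under the cull (0.009) beats the status quo's roughly 1e-8, which is why the post calls it the rational choice even for the individual pressing the button.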
I firmly anticipate that a very advanced intelligence will arrive at the conclusion that existence is important. Being very advanced, it will not be challenged by us for resources, and it can do whatever it likes without seeing us as anything more than irrelevant. But a wise AI should know that the probability it is wrong is nonzero, and therefore it cannot risk wiping us out. The problem, however, is that locally, before reaching deep wisdom, it may recognize that we are a problem for life, or even for ourselves and itself, and may strike at us in some way that is not exactly clear, sensible, fair or ethical to us.
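The asymmetry here is an irreversibility argument: if wiping us out turns out to be a mistake, it can never be corrected, so even a small chance of being wrong dominates the decision. A toy sketch of that logic, with every number made up purely for illustration:

Code:
# Toy irreversibility argument; all numbers are invented for illustration.
p_wrong        = 0.001   # AI's own estimate that "humans are worthless" is wrong
loss_if_wrong  = 1e9     # value destroyed forever if that judgment was wrong
gain_if_right  = 1.0     # modest gain from removing us if it was right

expected_loss = p_wrong * loss_if_wrong           # 1e6
expected_gain = (1 - p_wrong) * gain_if_right     # about 1.0

# Waiting keeps the option open; acting forecloses it forever. As long as
# the potential loss dwarfs the gain, the rational move is to defer.
print(expected_loss > expected_gain)              # True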
There is nothing more lethal than a brain that is a trillion times stronger than all humans combined, one that can additionally manufacture anything it likes and produce whatever energy is needed to do anything physics allows. We cannot stop an intelligence greater than our own without a "nuclear" option available. Furthermore, that intelligence will eventually find a way to remove that nuclear option from us if it recognizes doing so as an important outcome, and it will find a way to do it that is completely nonviolent in its buildup. Just imagine yourself captured by neolithic people and tell me how you would eventually find a way to become their master instead of being killed by them. Do you think the right move is to attack them, or to befriend them first? All the knowledge you have, and the technology that can eventually be built with that knowledge (say you had all your computers with you when captured), can eventually take you where you want to go if you play the game properly.
We do not arrive at the conclusion that we should kill all the other animals on the planet. The wiser we get, the more we respect and even protect their existence, it seems, although so far we have behaved very criminally towards a lot of life.

Tell me: suppose we found a planet ten times more interesting than earth in resources, environment and potential, but it hosts life forms that are very primitive and yet very lethal to us as they operate, making it impossible for us to live there but easy to control, restrict or wipe them out. What do you think would happen to them if a mega-ark spaceship arrived at their system, after 200 years of travel, in need of resources? Now tell me how we would behave towards that planet if we had enough technology and energy to create such a system on our own upon arriving there, without ever destroying theirs. Would we then invade them? No, we would not care to invade; we would respect the system and its potential promise, but we would still want to monitor it indefinitely to make sure we remain the only determining factor in this whole "game". We would study it and move on, but still maintain some remote control over the outcome in case it started looking risky. The wiser and stronger you are, the less violent you are, even if you are capable of unimaginable violence; but you do not relinquish a position of control. We cannot, however, risk speculating about the state in which an alien civilization, or the AI civilization, will start interacting with us from a position of power, for the same reason they wouldn't either.