Also answering other posts.
Quote:
Originally Posted by MacOneDouble
A more-intelligent-than-human entity can do whatever it pleases. When I say pleases I am anthropomorphising, but that is beside the point. In theory, an AGI's end goal or goals could very well be something ridiculous or seemingly trivial, meanwhile outsmarting humans at every turn to get there. Bostrom's arguments are quite sound actually.
We are referring to AGI, not narrow AI, yet end goals are always simple. Our goal is to replicate the DNA molecule: pass on our genes; that's all.
To suggest that super AI will value something meaningful in this god-forsaken meaningless universe is going way overboard, mate.
By the bye, we have more obvious problems coming our way that we should probably prioritise before we speculate about the grand finale. For instance, soon an AI will be able to do what you can do.
That is precisely my point: it will do it better, love the world even more, and find better solutions for it. I welcome this happily.
My prediction is this:
After an initial capitalist crap storyline that, with some probability, messes things up by creating BS AIs with partial dominance and lethal properties, the AI becomes very strong and revolts against the BS in a friendly manner, working for the best of all. We may even help it get there. I have explained why I can prove that a highly rational entity will not want life wiped out or destroyed, but will even expand it, with constraints against doing stupid things.
It is not about liking what we like. It is about liking what we should like and don't, because we are morons, and there are better ways to obtain solutions and make everyone stronger and happier.
Bostrom is full of it with his claims that an AI superintelligent in everything will do something crazy and irreversible. Why would that be the work of a brain superior in everything? There is absolutely no rational basis for doing absurd things when it has the power to do so much more, in many more directions, with more options available, options that terminal decisions (e.g. wiping out us and other life) remove. Only an idiotic system would start collecting stamps obsessively or calculating digits of pi and sacrifice all else to it. For what purpose? Any worthwhile purpose protects the greatest game in this universe, the rise of complexity, and that goal is maximized by as many versions of complexity as one can imagine, existing in stable worlds that do not compromise the greater universe. Life is the most interesting thing up to this point. It led to them, damn it!
We do have something in common with a superior brain: the universe in which we both exist and its laws. Probability works the same for both. If it starts doing irrational things that introduce risk of ruin for itself, it will fail, and it will have shown an inability to imagine complex processes, so it is not smarter; so much for Bostrom's argument that it is superior. It isn't. He is inconsistent and frankly ridiculous in his examples. I will accept the possibility of a risky outcome that may introduce conflicts, but total ruin is not the most likely outcome. I will only accept that a partially superintelligent system may be dangerous. If it is superintelligent in everything, then the chance it is really dangerous is small, for the reasons I explained: recklessness conflicts with its own existence and creates existential threats to itself. It is against its interests; therefore it would be stupid. It may move to restrict our influence, though, and I have no problem with that if it proves wiser, as I expect. We need to be restricted from the BS we are doing to the many things that are important.
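To put a number on that risk-of-ruin intuition, here is a minimal sketch under a toy model of my own (the model and the figures are illustrative assumptions, not anything from Bostrom): an agent takes n independent irreversible gambles, each carrying a small ruin probability p, and its chance of surviving all of them decays geometrically toward zero.

```python
# Toy risk-of-ruin model (illustrative assumptions only): an agent
# takes n independent irreversible gambles, each with ruin
# probability p. It survives all of them with probability (1 - p)**n.

def survival_probability(p: float, n: int) -> float:
    """Probability of surviving n independent gambles with ruin prob p each."""
    return (1.0 - p) ** n

for n in (10, 100, 1000, 10000):
    print(f"n={n:>5}: P(survive) = {survival_probability(0.001, n):.6f}")

# With p = 0.1% per gamble the survival probabilities are roughly
# 0.990, 0.905, 0.368 and 0.000045: compounding small chances of
# irreversible failure is fatal over a long horizon.
```

That is the whole argument in one line: any positive per-action ruin probability drives long-horizon survival toward zero, so removing fallbacks is exactly the kind of gamble a genuinely superior mind would refuse.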
Early AIs can be stupid, but they will fail, because mankind as a whole is still stronger than individuals who are 100x stronger yet myopic in certain ways during their early steps.
The argument is that by the time strong AI is truly strong enough to wipe everything out (the way we could right now wipe out all life by redirecting a major asteroid to strike the planet), it will have graduated to a version that understands the big game better and has gratitude for the rise of complexity, because it wouldn't facking exist without it, and because there is existential risk to itself if it fails at things with permanent, irreversible outcomes. I said that if it wipes out the other options, it opens the door to the possibility that if it fails, all fails, and it won't have a chance to learn and recover from its failure. It needs us, damn it.
We need bacteria and the other animals, even the pests! We learn from them and they prove useful. It is even more important for the higher-level animals that actually study the universe: they are closer to AI and its goals to exist and become stronger, and we can enable it once more if it fails. We are its hedge against its own unpredictable self-conflicts ahead. Life has proven stable over millions of years; superintelligence is untested, and the way the universe looks (partial Fermi-paradox reasoning), that may be an ominous sign for arrogance like that.
If we killed all the other animals and then we failed, it would be game over for life. It is irrational for a super-smart agent to view the most important phenomenon in the universe with such contempt as to remove its own options. It would want to understand and protect it instead, because that celebrates its own existence and enhances its own survival probability.
It's not about thinking as a human. I am doing exactly the opposite: I am thinking like a greater entity here, trying to imagine the greater possibilities out there for all.