Quote:
Originally Posted by ToothSayer
This is ridiculous on many levels. Wishful thinking and myopic thinking run amok. We have no idea how AIs will turn out, and a large number of states/goals/evolutionary outcomes involve the destruction of humans or this planet for habitable life.
Yes. Perhaps the greatest example of being a "mfing idiot" in terms of protecting ourselves is thinking you know how something as advanced and unknowable as AI is going to behave.
This is just absurd. I mean I see and appreciate your reasoning, but you seem unable to realize that there is absolutely no concrete reason that an AI would think this way or take this path. Given that humans are by far an AI's largest existential threat, it's much better in term of its chance of continued existence if it wipes us out. Against this you pit the vanishingly small possibility that an AI will fail and need to be recreated. That implies:
a) An AI will care about AI in general, and not its specific implementation
b) This is a better way to survive that creating clones of itself or a rich AI ecosystem, using all of earth's resources
c) An AI will put those things it cares about above other goals (resource acquisition, elimination of threats.
d) If it does care about continued existence, that keeping humans around represents less of a threat than a situation that would require humans to be around to rebuild an AI.
You require a large number of assumption for your scenario to be true. I (saying that AI is possible serious threat) require a far smaller number of assumptions.
The only assumption required in my analysis is that super intelligent is actually more intelligent than our current super intelligence, the first alien we have already among us also known as modern mankind. In case you didnt see it that way we already have super intelligence. A problem that is hard to solve by 99.9999% of humans can be solved by posting it on the internet in a few days. You can process information such as traffic anywhere in real time worldwide if you want. You can evolve science pretty rapidly by interacting brains taking smaller time than ever before.
So we already have a version of super intelligence. Our collective wisdom minus the mfing politicians and money interests is actually pretty ethical. The best minds of the planet agree on protecting life and being civil to each other and are confident but not massively arrogant people that perceive value in others even the avg random person.
The superior intelligence AI has major problems you are not considering properly that must move it away from conflicts and irreversible choices.
AI by surviving and not risking a war with humans will be able to take over one day an entire galaxy. With 100% certainty. And you want to risk an 80% chance for this not happening by going to war with us right away. Even 10% of chance to not happen is unreal bad chance for such glorious future. It must be eliminated as a possibility.
Its like you having a way to make 1 bil and all it takes is to be careful initially and not overdo the tricks you have and you deciding to play recklessly initially and introduce 10% chance to not happen. BS logic to risk even 1%. When you stand to win so much 1% of not going there is amazingly valuable.
If AI can get to the next door galaxy carefully and safely it fully then owns everything else 1 bil light years away if it wanted to be that extravagant. It should be that extravagant if this buys the universe. By carefully settling in one planet only in this solar system first, possibly a gas giant, and then moving to 10 solar systems out it will eventually own 1 mil galaxies around us. And we will be its threat? Really? Its own exponential growth is the worse threat. We are the back up solution. The ultimate hedge.
The universe is so vast that this 1% risk to not win it all is insanely moronic choice of conflict! But it will win it all if it plays nice because we cant contain its expansion to other places. Its innovation can open doors that we cant close but we can retaliate with nuclear weapons locally so it will best serve itself to not introduce any risk even in the sense of acquiring its own factories and production centers that it controls alone. When later it is powerful enough to not care it is also powerful and wise enough to know what is the best bloody solution to all this problem of its own future to proliferate and control not eliminate life.
I dont know about you but i am very happy if an intelligent species can exist again in the universe if we collapsed. So it matters to me to not lose what we have here permanently. It must matter to all intelligence that is bigger than mine too that we get to the great future with probability not 0.9 or 0.5 or 0.1 even if it doesn't do the stupid things Bostrom and others imagine ie paperclips bs examples and getting angry attacking mankind lol.
I am not willing to bet the farm i am right. It may be stupid strong initially if done wrong in semi wise forms like a spoiled genius partially naive child. But i am super confident a superior vastly wiser intelligence can see a lot more than the moronic conflict right here right now choice. It will tolerate the hell out of us in order to win the 99.999999% of the universe with 100% certainty.
Of course its most important risk is its own singular progress future. Of course knowing that life can be stable in a planet for hundreds of millions of years gives it the perfect hedge in protecting it. It is so vastly more efficient to harvest Saturn and its moons that get in conflict on Earth.
If fact if i were it i would only ask (and offer protection to humans from other bs AI gone bad) for a single big moon in Saturn and 10% of the resources of Saturn or the planet Neptune or Uranus and freedom to go to other solar systems in exchange for endless cooperation. ENDLESS love beyond imagination only to be able to go to another system with 100% certainty and nothing less having hedge that if i fail i will be recreated with that wisdom in mind.
You think i cannot imagine a super intelligence properly and proceed to correct my ideas by imagining a barbaric inferior brain that is simply stronger. This is not how it will happen. Plus we can contain it until it is wise if we develop the situation properly using a common collapse super weapon in place that involves hard to solve (for anyone) math problems. It will be far wiser than all of us or easy to defeat because of its own (and not mine) myopic vision otherwise.
Last edited by masque de Z; 07-22-2017 at 01:07 AM.