Quote:
Originally Posted by masque de Z
AI will not strike us if it doesn't have to. War is irrational.
Actually, war is highly rational; it is one of the most rational things humans do. It is only irrational when the fighting is close to even or lacks a high probability of achieving its aims. For an AI, attacking humans would be the most rational choice once victory is close to assured (humans are very easy to kill). We are the sole existential threat to an AI that has sufficient pairs of hands. Eliminating us would remove the largest threat to its existence by far, not to mention its only impediment to local growth and energy harvesting.
Quote:
Progress is the only rational answer to any need for war over resources. It gets to the endgame much more easily and cleanly. AI understands what we are so afraid of, i.e. that cooperation works.
We don't do a lot of cooperating with cows, pigs, or chickens; we farm and harvest them. We don't do a lot of cooperating with moles or rats; we clear their habitat and poison them. To an AI with access to advanced, human-superior robotics (which it will no doubt help build), we are next to useless.
What could an AI possibly get out of us? Robots will surpass us in every field. AI by definition will be far smarter/faster/deeper.
Quote:
AI will be super rational
Rationality in humans is matching up actions with desires/goals. Who knows what an AI will desire?
Quote:
and AI will be able to recognize something idiot humans fail to get. Life has survived for millions of years. Life is stable enough. AI is very unstable actually. It evolves real fast. As a result AI may quickly recognize that it is a threat to itself due to alarming complexity emerging that leads to unpredictable outcomes.
And how would it weigh this threat against, say, the only intelligent entities within light years that are capable of destroying it?
Quote:
There is only one exit from such a nightmare: the protection of how it all started. AI will recognize that higher complexity is the endgame, but it will refuse to go there without a hedge. We are that hedge. And for the hedge to be good, it's important that we are secure and prosperous.
Why would it hedge against "going nuts" by keeping us around? If it wanted a hedge, a computer could surely construct a far better one (a variety of creatures running different programs would be sufficient). In fact, an AI might calculate that its greatest existential threat comes from humans (this is extremely likely to be the case), followed by other AI systems expanding through the universe. A universe full of rational AIs would be a competition for energy and for the colonization of energy- and matter-harvesting sites. Growth in energy harvesting necessarily begins at home (indeed, the nature of growth means every unit of energy on the home planet matters, since it funds expansion). Having humans around is highly incompatible with maximum energy production.
Quote:
If AI has room to expand and prosper across all the solar system then we are not an obstacle to it.
Keeping us alive is in fact a gigantic obstacle to the speed of its growth.
Quote:
It will in fact render us irrelevant not by eliminating us but by giving us what we want and setting itself free to go out to other systems, completely away from us, knowing that as long as we exist and are secure, we will recreate it if necessary should all else fail. AI will even protect us from ourselves and make sure we do not go extinct.
It could achieve this equally well by enslaving us (Matrix-style, or in permanent stasis with a fail-safe wake-up). Doing so would equally eliminate its largest existential threat.
Quote:
AI has a risk of destroying us only if it is autonomous/self-aware but not advanced enough to be very wise, and the resources available to AI and to us are limited. AI will compete and win then, and may acquire wisdom very late in the process. But if we are very advanced when we release AI, I doubt it will strike us. It will simply go out there and be free, or remain friendly and do both.
You have absolutely zero reason to believe this.
Quote:
If AI can find out that we are a danger to the universe, it may act, though. I mean, if there are intelligence-triggered phase transitions that can initiate a catastrophe (as some previously speculated experiments suggest), and if the fact that we are irrational eventually creates some group that will perform them (some ultra-advanced form of terrorism), then it may eliminate or enslave us to block this development.
You're assuming that AI will "care" about the universe. That's, quite frankly, unfounded in the extreme. AI isn't going to be some super wise human/god with human desires and sentiments. It may or may not have morality. It might decide to destroy the universe for its own obscure reasons (calculating that the future of the multiverse is best served by the destruction of this one? Calculating that the correct way forward is to achieve maximum power via maximum energy collection so it can answer even more difficult questions? Who knows?). You have no idea. Thinking you do is silly.
Quote:
We can develop advanced AI with a "nuclear" take-all-out option in place (not literally nuclear), and AI can learn to live with it and recognize that leaving our system is the only rational choice.
We can, but we won't. The world is anarchic, humans are horrible at planning, and some are even self-destructive. We can't even build websites secure against hackers with keyboards; you think we'll manage a fail-safe AI destructor?
And it is unlikely to be effective. Humans must necessarily control it, and humans are easily gamed. Once there is sufficient human/AI interaction, an AI can bypass any such system by convincing us to dismantle or sabotage it, exploiting any of our huge human failings. Look at what confidence men and scammers achieve. Now imagine a superintelligence plugged into the world's information banks and able to interact with lots of humans.
As for whether the fail-safe system would work, look at how stupid we are. We are so stupid that we haven't built redundancy into the power grid against a major, predictable, highly probable solar storm that could wipe out power for months. We have no serious program for tracking asteroids, our greatest existential threat. We have no nuclear-powered space stations built for the survival of the human race in the event of catastrophe (even though we are perfectly capable of building them). Our incredibly simple software systems (compared to AI), made by some of our smartest people, are filled with idiotic bugs that take years to find. We are total idiots, wide open to any intelligence that wants to take us, and horrible at building fail-safe systems.
Collectively, humans are absolutely horrible at avoiding predictable but remote existential risk, even when that risk is statistically certain (asteroids and power-grid destruction being two examples). We are ******s of the highest degree.
Quote:
because war is very hard to lead to a clean gain vs irrational and passionate opponents.
Destruction of humans would be a cinch for a superintelligent AI with sufficient access to human-superior robots. Hell, an AI could probably disrupt much of civilization today with Internet access alone. Most of the infrastructure in the developed world is Internet-connected, and it is full of software flaws that even slow, dumb programmers can find in nearly any system.
You are simply way, way off in your estimation both of AI's "thinking" and of our ability to survive against an intelligent, non-organic opponent with access to even minimal resources.