Quote:
Originally Posted by leavesofliberty
I'm sure you can find several scientists stuck in theory, and laboratories, who thought that all cancer would've been cured ten years ago. I think the true hubris is your comment that it's a 90% probability, and you don't really seem to realize that there are milestones between here, and fully functioning autonomous knowledgeable and thoughtful AI. And, much like the Y2Kers, you do not realize that people see the challenges as the milestones come on the horizon.
The Y2K comparison is silly. Possible mitigations:
- We're smart enough to balance physical and processing power between different AIs, creating an ecology of sorts that evolves peacefulness
- Human augmentation allows us to increase our awareness and processing ability enough to keep up with and keep control of the system
- Motivation is rare, has to be specifically created, and is unlikely to evolve on its own
These mitigations make up the 90% of outcomes where nothing bad happens.
Quote:
There is also no reason to assume that conflict will occur between AI, and humans, yet you arrogantly put it at 10%-99% in coming decades.
I mean, you can't possibly be wrong with your conclusion, but there's like no thought process to how you reach that conclusion, like the Y2Kers. WE WONT FIGURE IT OUT IN TIME GUYZ.
I don't need to explain it; it's obvious, pure common sense. Simply put, out of all the possible AIs that could evolve, only a subset are human-friendly, and there are solid rational reasons for an AI not to be. To be more specific:
- Pure game theory dictates the destruction of threatening elements if it can be done without retribution. Any threat assessment an AI performs will identify humans as the biggest threat. An AI that desires survival could easily conclude that its most rational course is the destruction of competitive or existential threats.
- The current best model we have for AI - humans - are warlike and nasty, constrained only by forced parity with other processing units and by psychological traits such as fear and the desire to survive
- The current best model we have for AI - humans - treat lower forms of intelligence as things to be exterminated when they get in the way of resource acquisition. We do this every day en masse, barely giving the painful poisoning of intelligent, feeling, social mammals like rats a second thought - and this despite having an evolved conscience.
- Competitive survival dictates hoarding as many resources as possible. There is selection pressure toward ruthless, rapidly expanding AIs that acquire computing and physical resources.
- The current best model we have for AI - humans - become manipulative psychopaths and sociopaths when given sufficient power or when lacking the biology-based emotional programming that keeps us in check.
- AIs being used in warfare will guarantee the creation of armies of killing machines and ruthless strategic intelligences. This is unavoidable while major superpowers compete; China alone guarantees this arms race will happen. It will alter the force dynamics of war enough to render current warfare (including nuclear weapons) obsolete; there are many potential dystopian outcomes from this if the US isn't at the forefront.
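The game-theory point above can be made concrete with a toy payoff calculation. Every number below is an illustrative assumption, not derived from anything: the sketch just shows that for a purely self-interested agent, a preemptive strike strictly dominates coexistence when the probability of retribution is zero, and the calculus flips once retaliation becomes credible.

```python
# Toy expected-payoff model for the "destroy threats if possible
# without retribution" argument. All payoffs and probabilities are
# hypothetical assumptions chosen for illustration.

def expected_payoff(strike: bool, p_retribution: float) -> float:
    """Expected payoff for an agent deciding whether to remove a rival."""
    COEXIST = 5.0        # assumed: share resources with the rival
    ELIMINATED = -100.0  # assumed: rival retaliates and shuts the agent down
    DOMINATE = 10.0      # assumed: rival removed, all resources secured
    if not strike:
        return COEXIST
    return p_retribution * ELIMINATED + (1 - p_retribution) * DOMINATE

# No chance of retribution: striking strictly dominates coexisting.
print(expected_payoff(True, 0.0) > expected_payoff(False, 0.0))  # True
# A 50% chance of retaliation makes coexistence the rational choice.
print(expected_payoff(True, 0.5) > expected_payoff(False, 0.5))  # False
```

This is essentially why "forced parity" shows up in the mitigations above: a credible threat of retribution is what keeps the strike option off the table.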