Quote:
Originally Posted by leavesofliberty
Though it's possible, even probable, heck, even a certainty that there will be a robot oligarchy of sorts, this does not entail double-digit extinction probabilities in my estimation. There is no reason robot and human goals should conflict.
Humans are worthless to AI once robotic bodies are superior to human bodies and silicon minds are superior to organic ones. On top of that, humans are dangerous - we represent the only major existential threat to AI - and we compete for resources.
Quote:
In my opinion, robots in the future would find the maximum use of earth would be to allow humans to govern it, and to tax it for revenue while governing outer space utilizing unimagined energy supplies. Their expansion will come from not governing earth. Honestly, what short-term use would robots have for earth that outweighs the benefit of an entire species?
You're seeing robots as one entity. If they're not - if they're individual minds, which is a near certainty - wars for control, territory, and resources are probable, long before they start expanding into space.
And that's not even considering what will happen before full autonomy. Robot armies could destroy entire populations quite easily. If China overtakes the US in technology, and they decide they want to control Asia or Africa for resources, and they send in an army of advanced robots, what does the US do?
Then there are the genocidal possibilities. Miniature robots, for example, are unstoppable vehicles for regional pathogen or toxin delivery - abilities which will only increase as time goes by (we can now print any DNA sequence we choose and run a cell on it, for example).
The future of robotics and AI involves enormous risk.
Last edited by ToothSayer; 07-04-2017 at 05:43 PM.