Quote:
Originally Posted by masque de Z
A vastly higher intelligence is also vastly more ethical. As I have explained before, it is in their interest to preserve us without conflict and to intervene only when we are careless and stupid. We, and life in general, are their hedge against their own unpredictable singularity future. Life has proved very stable over billions of years, unlike AI. So it is wise to be very careful with it in terms of radical decisions.
This is ridiculous on many levels: wishful thinking and myopic thinking run amok. We have no idea how AIs will turn out, and a large number of possible states/goals/evolutionary outcomes involve the destruction of humanity, or of this planet's habitability for life in general.
Quote:
As it is, humans are mfing idiots in terms of protecting themselves, with the bs we see out there! I have no problem if superior AI kicks some butt where earned, either!
Yes. Perhaps the greatest example of being a "mfing idiot" about protecting ourselves is thinking you know how something as advanced and unknowable as an AI is going to behave.
Quote:
If AI fails we will recover it for example! We are its existential defense!
This is just absurd. I see and appreciate your reasoning, but you seem unable to recognize that there is absolutely no concrete reason an AI would think this way or take this path. Given that humans are by far an AI's largest existential threat, wiping us out improves its chances of continued existence far more than keeping us around does. Against this you pit the vanishingly small possibility that an AI will fail and need to be recreated. That scenario implies:
a) An AI will care about AI in general, not just its own specific implementation.
b) Keeping humans as a backup is a better way to survive than creating clones of itself, or a rich AI ecosystem, using all of Earth's resources.
c) An AI will put the things it cares about above its other goals (resource acquisition, elimination of threats).
d) Even if it does care about continued existence, keeping humans around poses less of a threat than the risk of a failure that would require humans to rebuild it.
Your scenario requires a large number of assumptions to be true. Mine (that AI is a potentially serious threat) requires far fewer.