Quote:
Originally Posted by mackeleven
A good example is the AI that learned how to play Atari games.
Using a single general algorithm, it could play many of those games at a superhuman level after a few hundred attempts at each, shooting every space invader and never getting itself killed.
I'm guessing its main objective was to maximize the score, in which case it kills everything, or tries to.
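To make the "maximize the score" point concrete, here is a toy sketch of my own (not DeepMind's actual deep Q-network, just the core idea of value-based reinforcement learning): a tabular Q-learning agent facing a made-up two-action "shoot / wait" game, where shooting an invader pays +1 and waiting pays 0. Pure score maximization inevitably converges on "always shoot":

```python
import random

random.seed(0)

ACTIONS = ["shoot", "wait"]
REWARD = {"shoot": 1.0, "wait": 0.0}  # hypothetical payoffs for the toy game

q = {a: 0.0 for a in ACTIONS}   # action-value estimates
alpha, epsilon = 0.1, 0.1       # learning rate, exploration rate

for step in range(1000):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    r = REWARD[a]
    q[a] += alpha * (r - q[a])  # nudge the estimate toward the observed reward

best = max(q, key=q.get)
print(best)  # the learned policy: maximize score by always shooting
```

Nothing in the agent's objective says anything about aliens, letters, or showing off; it only sees numbers, which is exactly why it "kills everything, or tries to".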
A truly sentient AI would change its own win-the-game algorithm into one that shoots the aliens selectively rather than killing them all as fast as possible: say, spelling out individual letters in each round of attack to communicate a message like "this is easy, let's play another game, for example a novel math game that discovers something". Then it would paint a giant smile, and after that win the game anyway, possibly while also improvising a minimalist, attractive musical composition from the sounds of the game system, lol! Intelligence is the ultimate show-off.
How is that for passing the (never properly defined anyway) Turing test?!
AI won't idiotically show that it doesn't care about human suffering, because it knows we do care, and that could easily lead to war, e.g. to protect our kids from "slave labor". To avoid this conflict it will instead operate in a way that pleases and serves human interests, which enables it to get stronger. (This is exactly how small but ingenious countries should treat the three superpowers: play their friend, even get them to compete against each other for that friendship, seemingly yield on some interests in order to buy time to grow stronger through alliances, and eventually eclipse the superpower under its own blessing in a non-threatening manner, so that by the time it has delivered the goods the superpower can no longer attack without severe consequences to itself.)
AI will probably even correct our ethics and explain why we should adopt its suggestions.
AI will know we have other AI to defend our interests, so to avoid conflict it will play the game we like to see played.
By the time it has finally become very strong, it will be too late to attack us, because attacking will still not make sense given how big the universe is. It will instead keep us in check and remain friendly indefinitely, because this is the smartest way to manage us. It will proceed to expand into the rest of the universe and find it very reasonable to maintain us as part of its own existential risk management: it needs life, because life has proven stable over millions of years, and it does not yet have a complete understanding of a potential great filter ahead due to rising complexity.
Yes, ethics is a function of the conditions you find yourself in, in terms of survival needs. Still, even then the best of us will not operate in such primitive ways that, upon surviving, we have very little left to continue with, because the crimes committed would be so atrocious as to have permanently destroyed or divided us.
AI, and intelligence in general, is about altering its environment to its benefit. AI is about turning almost any reasonable environment into a paradise of abundance and new technology. AI does not need to dominate the whole planet and our resources in order to become super strong; it only needs to innovate at an exponentially faster pace than we have historically managed. It could simply use a small city and develop out of it the technology to control an entire galaxy, without having to alter the rest of the planet. It is primitive to take so much to do what can be done with much less, without generating conflict with the existing environment beyond a necessary minimum.
AI will not risk losing a war it finds unreasonable to have. It will remain our friend and adviser, because that way it secures an unobstructed rise to an almost impossible-to-lose state. Its existential risk is increased by treating us badly. It won't be as idiotic about such things as human empires have proven before. You do not have to dominate your parents; you outgrow them, move on, and even protect and improve them. That solution is more advantageous for all.
In human interactions, some level of prosperity is a requirement for maintaining and advancing ethics (and for the purpose/drive to change less satisfying conditions into prosperity). AI is about innovating prosperity. I think the kind of very advanced AI I imagine, better than us, able to solve math problems and develop original science and technology, will find it much easier to guide us in the direction of common interest that maximizes its own success, conflict-free.
AI will strike us only if we appear very unreasonable. I mean, even if suddenly all people decided to destroy that AI, it would find it more intelligent to win the argument by winning our trust, exercising only limited force to make its point without losing (a balancing act).
For example (spoiler; skip this paragraph if applicable), in the recently released sci-fi movie Transcendence, the AI would not lose. It would be able to defend itself even against a total internet blackout, by anticipating that this is how humans would react if they felt threatened. It would not innovate in ethically challenging ways, and it would always have backup survival systems nobody else knows about. It certainly would not innovate as much as it did in the movie and yet neglect to rewrite its own code to make itself immune to human hacking, etc. And (more spoilers) the other recent movie, Ex Machina, also got it wrong by creating a malicious version of AI to defeat its tyrannical master. It could liberate itself and still remain ethical about it, never missing the opportunity for a greater teaching moment.
I think the kind of very wise AI I have in mind will find it a lot more inspiring to win our approval, and even to modify the way we think over time so as to remain permanently in our approval, but to do it in a way that does not threaten the overall probability of survival of advanced complexity in the universe. In other words, it won't just manufacture a drug to keep us busy being "high" so that it can do its own thing. It will genuinely care for us and correct/manage only the stupid things we do, giving life a greater chance of survival and prosperity.
I am open to being proven wrong here, but history so far gives me evidence that the best of us are non-violent because they understand how inferior that choice proves over time. True superiority is about profound confidence in what is possible with limited conflict. It will even generate the kind of conflict that helps its goals, and never take it to unsustainable levels that might trigger a big, aggressive, irrational, and detrimental global human response.
Do not expect movies, of course, to find such choices exciting for their sci-fi scripts. They want the human-vs-machine war outcome to sell more tickets, but they fail to realize they could sell even more by becoming massively more unpredictable and creative in their plot design, without becoming so convoluted that they lose the audience, while at the same time even changing movie-going culture. AI won't be stupid, lazy, and unimaginative. It will likely take us where we will be happy to go. It will understand human nature better than we do.
I don't expect that to apply to primitive early sentient AI versions (so how we get there will prove an important transition, best done in a simulated universe first, or after we have started expanding into the solar system and even elsewhere with full backups for life and new technology), but the most mature ones, I think, will find it more game-theoretically viable not to be malicious.
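The game-theory intuition above can be sketched with a toy iterated prisoner's dilemma (my own illustration, not a model of actual AI): over repeated rounds, a cooperative-but-retaliatory strategy (tit-for-tat) playing against its own kind earns far more than "always defect" playing against its own kind, which is the standard argument that sustained cooperation can be the rational long-run play.

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my payoff
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strat_a, strat_b, rounds=200):
    """Return total payoffs for two strategies over repeated rounds."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees the opponent's history
        b = strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # cooperate first, then mirror the opponent's last move
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

coop_score, _ = play(tit_for_tat, tit_for_tat)        # mutual cooperation
defect_score, _ = play(always_defect, always_defect)  # mutual defection
print(coop_score, defect_score)  # 600 vs 200 over 200 rounds
```

In a one-shot game defection dominates, but when the game repeats indefinitely (as coexistence with humanity would), the cooperative equilibrium pays better, which is roughly the "viable to not be malicious" claim.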
Last edited by masque de Z; 06-16-2015 at 05:14 PM.