Quote:
Originally Posted by pokerodox
Dudes. Stop attacking each other.
The interesting points:
TS: (1) AI could become immensely powerful, enough to massively overpower humans. (2) We have no reason to think it will want to preserve humans.
leaves: (1) AI is nowhere close to achieving this immensely powerful state. (2) We have no reason to believe it will do so any time soon.
Proceed.
Okay, thank you.
Quote:
Originally Posted by TheHoss
I think you're misunderstanding people's arguments with all of this pop-sci stuff. Nobody is basing their opinion of AI on statements by Musk, Kurzweil, or whoever. Well, some probably are, but I doubt any of them are posting in this thread. The fact that Musk may have some valid points on AI is coincidental, and your fixation on the pop-culture presentation of AI is kind of a strawman.
wrt your tongue-in-cheek psychopathic machines comment - the main concern obviously isn't that AI becomes "evil" in a sentient, anthropomorphic sense. Most people working in AI agree that this is a pretty far-fetched issue for now. Andrew Ng's famous quote about worrying about evil AI being akin to worrying about overpopulation on Mars was a pushback against this kind of notion.
The main concern is that AI becomes so good at optimizing to achieve its goals that as a byproduct, humans get optimized out of existence. AFAIK nobody, including Ng, denies this as a threat.
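The "optimized out of existence" concern TheHoss describes can be stated as code. Here is a minimal, hypothetical sketch (the names and numbers are mine, not from the thread): an optimizer scored on a single quantity will, at its optimum, divert every shared resource into that quantity and zero out anything the objective doesn't mention.

```python
# Toy illustration of the single-objective optimization concern.
# A greedy allocator is rewarded only for "paperclips"; since the
# objective says nothing about "everything_else", the optimum moves
# every unit of shared resource out of it as a byproduct.

def optimize(total_resources: float) -> dict:
    """Allocate resources to maximize the paperclip count alone."""
    allocation = {"paperclips": 0.0, "everything_else": total_resources}
    # Objective = allocation["paperclips"]; the maximizing move is to
    # transfer the entire budget, leaving the unmentioned bucket empty.
    allocation["paperclips"] = total_resources
    allocation["everything_else"] = 0.0
    return allocation

result = optimize(100.0)
print(result)  # {'paperclips': 100.0, 'everything_else': 0.0}
```

The point of the sketch is only that nothing "evil" is required: the unmentioned bucket goes to zero because the objective gives the optimizer no reason to preserve it.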
Now, I'm going to place AI on a continuum
<--------+----+-----------------------------------------------------------------------------------------------------------------------------------------------------+------------>
< First computers
+ Chess & Checkers
+ Automation & Machine Learning
+ Commander Data is fully functional
> ?
Now, if the existential threat is real, then perhaps we should have stopped the first chess computers, or regulated computers out of existence entirely. Think about that. Think how much worse our lives would be today if we had followed that logic. The world population would be cut in half. Fortunately, nobody thinks the threat is great enough to warrant that kind of response, except perhaps the primitivists.
Regarding existential threats: if an asteroid were coming 100 years from now, it would be an existential threat, more scientists would go into astronomy, and people would innovate their way out of it. Of course, there would be some asteroid alarmists (just as there were Y2K alarmists) who do little more than spread panic. But who cares? They are simply not problem solvers.
This is how I see AI in the future. There is not going to be a single moment where an AI all of a sudden becomes intelligent. There are many, many steps before Commander Data becomes fully functional, and then we'll see if he wants to take over the ship. In the meantime, there are Google Tech Talks and all kinds of AI summits, where the people who use AI (as an umbrella term covering automation and machine learning) brainstorm how to make it better.
Now, what's interesting about Commander Data is that he could take over the ship, but this is only because he has a psychology with the free-will construct of choosing between goals that present themselves. And along the way, we'll start to flesh out the construct of free will (and hence its counterpart, determinism).
It is my contention that the AI alarmists today are engaging in a form of primitivism, much like the early resisters of computers. And this is a problem for the rest of us who would rather the world be able to hold more people than fewer, not shrink because some argue we're enabling a Hitler, a Khan, etc. This kind of ignorance makes the world less habitable. If these cherry-picked "authorities," Bill Gates and Elon Musk, came to summits and gave speeches about the dangers, it wouldn't really change how people today look at automation and machine learning.