Murder, I think most of what you wrote is intelligent and well-founded, and the answer is simply: "Yes, that's a huge concern, and yes, Kurzweil addresses it in his book. He talks about possible pitfalls and agrees that it's not all 100% certain. He gives a lot of details, facts, data, reasoning, examples, etc. that show why his optimism isn't poorly founded."
Quote:
Originally Posted by MurderbyNumbers123
People see stuff like cloning sheep and say "oh, society will do anything in the name of progress". But things like that are meant to meet legitimate demands on behalf of humans (health, etc.). Some of the crazier achievements Kurzweil describes seem like they are not driven to fit any demand on behalf of society, or even the AI itself for that matter. Does AI automatically feel the "need" to advance itself all the time? Could it not be content at a certain level which fits its required needs and society's incentives? (much like a real intelligence, for the most part). It's just all so presumptuous, and there are so many things, an unlimited number of things, that could, and will, stand in the way.
This paragraph on the other hand shows a lack of understanding IMO.
There are many scientific research projects with no immediately tangible goal. The LHC at CERN was built largely to confirm the existence of the Higgs boson, which in and of itself has very little tangible value. The idea is that it will lead to a better understanding, and that will indirectly lead to things that do have tangible value. It's not as if "I want to invent A, so I invest in research project B" is the only way to go about these things; it's rarely that straightforward.
Quote:
Does AI automatically feel the "need" to advance itself all the time?
Yes, because we will program it with that in mind. Many people ask, "will we be able to control the AI?" To that I like to answer, "WE WILL BE the AI." (neural implants, etc.)
To me there are very few questions that need to be answered to logically "almost prove" the singularity.
A) Will we be able to avoid killing ourselves before we get to the point of mastering C?
B) Setting aside current human knowledge for a second, is a great deal of further scientific progress even possible (i.e., super-bountiful energy sources, various nanotechnology dreams, or the crazy **** that happens in sci-fi movies)?
C) Is it possible to create consciousness artificially?
If the answer to all of these questions is yes, I don't see how creating AI won't inevitably lead to a singularity.
That being said, I think there is at least a logical argument for why the answer to any of the above questions could be no, but personally I am relatively confident the answer to all three is yes.