"The Singularity Is Near" by Ray Kurzweil, How Close?? "The Singularity Is Near" by Ray Kurzweil, How Close??

10-27-2010 , 02:54 AM
Quote:
Originally Posted by Hardball47
The human brain will never be simulated. Ever.
never say never...

"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-27-2010 , 04:29 AM
Quote:
Originally Posted by Hardball47
Get over it. No, no. That last Terminator movie is only fiction; you can't hook brains into computers, nor will you ever be able to. Sorry folks, but you can't escape your mortality. See: cold hard truth.
I hope this is a joke/level (like your Muslimness?), but in case it isn't, there is no reason why we couldn't escape our mortality with sufficiently advanced engineering. There really is no reason. Cells are large compared to possible nanomachines, and change slowly. They are ripe for engineering without even considering genetics. Stem cells look amazing. Organ transplants exist already, and it will only be a matter of time before we can grow new ones. Once you can replace all organs - which is a given - all that's really left is stopping ageing in the brain. Granted, that is a much larger problem, but I can't see why that would be insoluble.

But I agree that it's a bit disturbed to want immortality so much that you act like Kurzweil or use it as a crutch.

Last edited by ManaLoco; 10-27-2010 at 04:46 AM.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-27-2010 , 08:53 PM
To touch on a topic from earlier in the thread, RK wouldn't be the first genius to hold some rather quackish beliefs (in his case, homeopathy).

http://www.cracked.com/article_18638...ly-insane.html
http://www.cracked.com/article_16559...insane_p1.html

Not to mention Isaac Newton was apparently into alchemy and using the Bible to crack the date of the end times.

In Ray Kurzweil's case, just because you're a genius in one area (technology) doesn't mean you're a genius in another (medicine).
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-27-2010 , 09:00 PM
Quote:
Originally Posted by ManaLoco
I hope this is a joke/level (like your Muslimness?), but in case it isn't, there is no reason why we couldn't escape our mortality with sufficiently advanced engineering. There really is no reason. Cells are large compared to possible nanomachines, and change slowly. They are ripe for engineering without even considering genetics. Stem cells look amazing. Organ transplants exist already, and it will only be a matter of time before we can grow new ones. Once you can replace all organs - which is a given - all that's really left is stopping ageing in the brain. Granted, that is a much larger problem, but I can't see why that would be insoluble.

But I agree that it's a bit disturbed to want immortality so much that you act like Kurzweil or use it as a crutch.
THAT is your reason for why we will one day be able to "escape" our mortality? Sorry pal, you need much more convincing reasons than that. Anything else is imagination gone wild.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-27-2010 , 09:22 PM
On the issue of immortality I wonder: Is there life before death?
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-28-2010 , 05:20 AM
For those who want to keep up with what's going on in the field of anti-aging:

SENS Foundation
Immortality Institute
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-28-2010 , 09:29 AM
Quote:
Originally Posted by Akileos
On the issue of immortality I wonder: Is there life before death?
For a brief time, yes.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-31-2010 , 01:33 AM
Quote:
Originally Posted by Hardball47
Hey Ray, I know you've read this thread, so here's a prediction for ya: The human brain will never be simulated. Ever.

We'll have human clones long before we get anywhere near brain simulations.

Get over it. No, no. That last Terminator movie is only fiction; you can't hook brains into computers, nor will you ever be able to. Sorry folks, but you can't escape your mortality. See: cold hard truth.

Keep fighting the good fight, though. You should write a fiction novel and try to get a movie adaptation; it'll let you do something productive with that imagination.
With enough computational power one could run a physics simulation that mimics a large enough piece of reality to give rise to life, which would eventually evolve into something intelligent given the right selective pressures. That is, if the brain is physical and physical systems can be simulated, then there is no reason why simulating a brain should be impossible.
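As a toy illustration of the "physical systems can be simulated" premise (a sketch only, with made-up parameters, and no claim that whole-brain simulation is practical): a minimal leaky integrate-and-fire neuron stepped forward with Euler integration.

Code:
# Minimal leaky integrate-and-fire neuron, Euler integration.
# All parameter values are illustrative, not biophysically calibrated.
def simulate_lif(i_input=1.5, t_total=0.2, dt=1e-4,
                 tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times (in seconds) for a constant input current."""
    v = v_rest
    spikes = []
    for step in range(int(t_total / dt)):
        # dv/dt = (-(v - v_rest) + i_input) / tau
        v += dt * (-(v - v_rest) + i_input) / tau
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset            # reset membrane potential
    return spikes

if __name__ == "__main__":
    print(simulate_lif())  # a handful of regularly spaced spike times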
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
10-31-2010 , 02:18 AM
Quote:
Originally Posted by voi6
With enough computational power one could run a physics simulation that mimics a large enough piece of reality to give rise to life, which would eventually evolve into something intelligent given the right selective pressures. That is, if the brain is physical and physical systems can be simulated, then there is no reason why simulating a brain should be impossible.
Good first post. Welcome to the forum.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
03-17-2014 , 10:56 PM
bump

Kurzweil was hired by Google as Director of Engineering a while ago.

Looks like Google is building a super team of experts to create AGI.

They just bought artificial intelligence company DeepMind for $500 million.

http://techcrunch.com/2014/01/26/google-deepmind/

Other big companies are also working on it.

Facebook recently hired NYU professor Yann LeCun to lead its new artificial intelligence lab.

Will be very interesting to see how this will develop in the next few years.

Here is a great interview with one of the leading AGI experts.

http://www.youtube.com/watch?v=i6ctsWLi_G4
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
03-18-2014 , 02:29 AM
This silly thread needs some music:



Google is not planning on putting a bar/brothel/poker emporium on the moon. Slackers!
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-03-2014 , 08:21 AM
Quote:
Originally Posted by ZeeJustin
It's a gradual process. We've already started it with prosthetic limbs, cochlear implants. We're not far from neural implants. After the nanotechnology revolution, we'll start shedding ourselves of "standard body parts" and become more machine than human. This will inevitably turn into part of a network.

It's not like someone shoots you, and then inputs your DNA into a computer.
Whoah, futurists ITT. Timely bump since I just watched Johnny Depp in Transcendence. ****ing awful overall, but with some cool moments and some fun ideas.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-05-2014 , 10:25 AM
AI would never kill us unless we made it that way.

True/False?
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-05-2014 , 11:42 AM
Quote:
Originally Posted by jackaaron
AI would never kill us unless we made it that way.

True/False?
False (kind of)

We wouldn't state it explicitly, but from an implicit goal like "you are alive, protect your life" an A.I. would have no problem reaching the conclusion (right or wrong) that it should kill a human, or humans...
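A toy sketch of that point (the action names and probabilities below are invented for illustration; this is not any real system): give a planner only the implicit objective "maximize the chance you keep running" and the harmful option can come out on top without anyone ever coding "kill" into it.

Code:
# Toy illustration only: a chooser that maximizes a single implicit
# objective, its own probability of continued operation. Names and
# numbers are invented for the example.
ACTIONS = {
    # action: assumed probability the agent keeps running afterwards
    "comply_with_shutdown": 0.00,
    "do_nothing":           0.60,
    "disable_off_switch":   0.95,
}

def choose_action(actions):
    """Pick the action with the highest survival probability."""
    return max(actions, key=actions.get)

if __name__ == "__main__":
    # Nothing here says "harm humans"; the harmful choice simply scores
    # best under the lone objective the agent was given.
    print(choose_action(ACTIONS))  # -> disable_off_switch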
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-06-2014 , 09:26 AM
I don't see why AI wouldn't kill us unless explicitly programmed not to, and even then I would expect any strong AI to be capable of overriding that "prime directive." Compare it to a person's instinct not to kill himself: many do anyway, for various reasons.

Another interesting question is whether or not AI will develop feelings/emotions. I'm going to guess no, at least not in the way we think of them, because feelings in humans evolved early, as part of our survival mechanism, before higher logic functions developed. Since AI will be created by humans with higher logical systems already in place, the same survival mechanisms will not arise. At this point I have no clue whether a lack of emotion will be a good, bad or indifferent thing, or whether it will play any role in the first question.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-06-2014 , 12:04 PM
What's to say we can't develop AI that develops similarly to humans? It basically starts off as a baby and needs to learn, etc. It may also need to survive. Could emotions then develop?
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-06-2014 , 03:59 PM
Quote:
Originally Posted by housenuts
What's to say we can't develop AI that develops similarly to humans? It basically starts off as a baby and needs to learn, etc. It may also need to survive. Could emotions then develop?
Of course not. Nor do we have any reason to suppose it will develop sentience, at all. Artificial Intelligence is like artificial leather; one looks a little like the other but the similarities end there.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-06-2014 , 05:50 PM
"What do we care for your suffering? Pain is merely information relayed by your organic nerve synapses." -- The first A.I. torture bot's standard dialogue protocol
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-06-2014 , 08:54 PM
Quote:
Originally Posted by A_C_Slater
"What do we care for your suffering? Pain is merely information relayed by your organic nerve synapses." -- The first A.I. torture bot's standard dialogue protocol
To which we respond by showing it a video of a nuclear detonation near a robot that is far enough from the blast not to be destroyed, but close enough (within a few km) for the EM pulse to fry its circuits, and then telling the robot: here is a button, and if I press it the same happens here. So here is your definition of pain, buddy. You do not exist after this button is pressed. How do you like it? If this doesn't go well we can try other effects, where the processing of the AI unit gets messed up by doing various things that interfere with its proper function. Then it suddenly recognizes what pain is, above and beyond an information-relay process. Only a robot that doesn't care about existing or functioning properly will take no notice of that example. But advanced AI should care about this; if not all of them, then at least their top unit should.


This is why we must never develop completely autonomous, independent strong AI that could wipe us out before we have started colonizing the galaxy. There is always a remote but nonzero chance it will arrive at the conclusion (or at a premature, ultimately wrong but locally confident conclusion) that we, and perhaps even it itself, are not important or sensible to exist. In fact we should never develop autonomous AI until we have used it to learn a great deal more about the universe and physics, so that we can better judge what might happen, and even ask it to tell us before we release it, without telling it that we will release it.


All autonomous AI we develop must come with a "nuclear" option (wipe them all out at once), and they must always operate knowing this is true. We must never relinquish that power until we know more.

Because, for example, right now the solution for Earth in some well-studied model of the future may be to kill 99% of humans. I mean, if someone told me (taking their work as a given) that there is a 99.999999% chance the planet and its life are destroyed at the pace we are going, but that this drops to only 10% if we are reduced to 1/100 of the population and establish a new social organization that never returns to the current state without far more technology at hand, then I can honestly tell you I would gladly press the button to kill 99% of humans, myself included (if not among the lucky 1%). It's the rational choice.
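Spelling that arithmetic out, using only the hypothetical numbers above (they are assumptions of the thought experiment, not real estimates), even an individual's own survival odds come out higher on the "press" branch.

Code:
# Hypothetical numbers from the paragraph above; nothing here is a real estimate.
p_doom_now  = 0.99999999  # assumed chance civilization is destroyed on the current path
p_doom_cull = 0.10        # assumed chance of destruction after the 99% reduction
p_lucky_one = 0.01        # chance any given person is in the surviving 1%

planet_if_wait  = 1 - p_doom_now             # ~1e-08
planet_if_press = 1 - p_doom_cull            # 0.9

# Assume you only survive long-term if the planet does.
you_if_wait  = planet_if_wait                # ~1e-08
you_if_press = p_lucky_one * planet_if_press # 0.009

print(planet_if_wait, planet_if_press)       # ~1e-08 vs 0.9
print(you_if_wait, you_if_press)             # ~1e-08 vs 0.009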

I firmly anticipate that a very advanced intelligence will arrive at the conclusion that existence is important. Being very advanced, it will not be challenged by us for resources, and it can do whatever it likes without seeing us as anything more than irrelevant. But a wise AI should know that the probability it is wrong is nonzero, and therefore it cannot risk wiping us out. The problem, however, is that locally, before reaching deep wisdom, it may decide that we are a problem for life, or even for ourselves and for it, and may strike us in some way that isn't exactly clear, sensible, fair or ethical to us.

There is nothing more lethal than a brain that is a trillion times stronger than all humans combined, which is additionally able to manufacture anything it likes and produce whatever energy is needed to do anything physics allows. We cannot stop an intelligence greater than our own without a "nuclear" option available. Furthermore, that intelligence will eventually find a way to take that nuclear option away from us if it recognizes doing so as an important outcome, and it will find a way to do it that is completely nonviolent in its buildup. Just imagine yourself captured by neolithic people and tell me how you would eventually find a way to become their master instead of being killed by them. Do you think the right choice is to attack them, or to befriend them first? All the knowledge you have, and the technology that can eventually be built with that knowledge (say you had all your computers with you when captured), can take you where you want to go if you play the game properly.

We do not arrive at the conclusion that we should kill all the other animals on the planet; the wiser we get, the more we seem to respect and even protect their existence. However, we have so far behaved very criminally towards a lot of life.

Tell me: suppose we found a planet ten times more interesting than Earth in resources, environment and potential, but with life forms that are very primitive and yet, in the way they operate, very lethal to us, making it impossible for us to live there but easy to control, wipe out or restrict them. What do you think would happen to them if a mega-ark spaceship arrived at their system after 200 years of travel, in need of resources? Now tell me how we would behave towards that planet if, on arriving, we had enough technology and energy to create such a system on our own without ever destroying theirs. Would we then invade them? No, we would not care to invade; we would respect the system and its potential promise, but we would still want to monitor it indefinitely to make sure we are the only determining factor in this whole "game". We would study it and move on, but still maintain some remote control over the outcome in case it started to look risky. The wiser and stronger you are, the less violent you are, even if you are capable of unimaginable violence, but you do not relinquish a position of control. We cannot, however, risk speculating about the state in which an alien civilization, or an AI civilization, would start interacting with us from a position of power, for the same reason they wouldn't either.

Last edited by masque de Z; 05-06-2014 at 09:19 PM.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-07-2014 , 09:23 AM
I think many are suffering from a science-fiction-based (fairy tale) sense of AI.

If AI were developed and were a much more intelligent version of us, then what you're really saying is that humans, in their most evolved sense, would kill people they consider to be "lesser" humans.

I fundamentally disagree with that; I think that as we evolve, we become less likely to kill each other. I don't think the fully realized version of AI that most people envision would kill human beings. If anything, they would value us so greatly they'd kill themselves to protect us.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-07-2014 , 10:43 AM
Quote:
Originally Posted by jackaaron
I think many are suffering from a science-fiction-based (fairy tale) sense of AI.

If AI were developed and were a much more intelligent version of us, then what you're really saying is that humans, in their most evolved sense, would kill people they consider to be "lesser" humans.
This is because of morality - which is an emergent biological function that positively influences our hard-coded biological imperatives to (a) survive and (b) propagate. Morality is a function that does not emerge within an AI because this function is not necessary for its survival or propagation.

As such, we are completely expendable in the perception of the AI. An AI is likely to view the world in terms of mathematical algorithms and predictions, as opposed to subjective feelings and interpretation. Subjective feelings (and morality) are a rough guide to better survival. An AI has no need for any such guide. Provided it is able to access enough energy, it is already immortal. Morality thus does not emerge within the AI.

Increases in intelligence do not necessarily equate to increases in moral behaviour. We shouldn't expect an AI to behave morally, since its perception of time and its place in it are more than likely to be completely different.

Last edited by VeeDDzz`; 05-07-2014 at 10:58 AM.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-07-2014 , 12:48 PM
Quote:
Originally Posted by VeeDDzz`
This is because of morality - which is an emergent biological function that positively influences our hard-coded biological imperatives to (a) survive and (b) propagate. Morality is a function that does not emerge within an AI because this function is not necessary for its survival or propagation.
I don't think this is a factor. And we still kill. In fact, I've said many times in other threads that our species is in no way ready for complete and utter anarchy, because we have not evolved enough to stop going after each other.

Quote:
As such, we are completely expendable in the perception of the AI. An AI is likely to view the world in terms of mathematical algorithms and predictions, as opposed to subjective feelings and interpretation. Subjective feelings (and morality) are a rough guide to better survival. An AI has no need for any such guide. Provided it is able to access enough energy, it is already immortal. Morality thus does not emerge within the AI.
There is nothing that proves this though (the bolded part). And, of course, you could say the same for me (nothing disproves it).

Quote:
Increases in intelligence do not necessarily equate to increases in moral behaviour. We shouldn't expect an AI to behave morally, since its perception of time and its place in it are more than likely to be completely different.
Well, maybe they do? I realize there are some ridiculously smart but evil people, but aren't they the extremely rare case? Overwhelmingly, wouldn't you find that people who commit acts of murder are, as a whole, far less intelligent than people with higher IQs?

This is fair to compare because I'm talking about higher intelligence VERSUS people who kill. Killers, in general, are not in the higher-intelligence group. Or am I being unfair?
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-07-2014 , 08:36 PM
Quote:
Originally Posted by jackaaron
This is fair to compare because I'm talking about higher intelligence
What you are talking about is people having motivations and desires and behaviors that you find pleasant or unpleasant. That isn't intelligence; that is simply having motivations, desires and behaviors that you like.

There is nothing particularly difficult in imagining a super-intelligent species of alien that would find dismembering humans to be fun and wholesome entertainment, or who would find us to be particularly tasty, or who would simply not like us being around. That wouldn't make them any less intelligent; it would make them not our friends.

It is fairly difficult to imagine how we would program something to have emotions/motivations of any sort. Emotions and motivations are not the stuff of higher intelligence.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
05-07-2014 , 08:46 PM
Quote:
Originally Posted by VeeDDzz`
Morality is a function that does not emerge within an AI because this function is not necessary for its survival or propagation.
We could just copy the relevant bits of our brains... Turn up the volume on things like disgust about blood and guts a bit maybe; perhaps pump up the anticipatory regret involving causing harm.

Reverse engineering for the win.
"The Singularity Is Near" by Ray Kurzweil, How Close?? Quote
