
03-31-2018 , 03:24 AM
Quote:
Originally Posted by Howard Beale
I'd like to know why people don't consider that the WE they speak of does not include the North Koreans or the Iranians and others by the way they talk.
They are the same people when liberated from tyranny and able to speak freely. They will have a lot in common. All you need with an extremist, indoctrinated person is five years of close interaction and you can turn them. EVERY TIME! Care, education and empathy will win it. We are identical in the essential human spirit.
03-31-2018 , 03:27 AM
Quote:
Originally Posted by DoOrDoNot
Clearly it wouldn't. I don't even think strong AI is possible for this reason, but I sure as **** don't want to get close enough to find out. He is making a clear error by anthropomorphizing some future AI as having feelings and empathy like we do (lol), when clearly those things are survival mechanisms built into us by evolution that it'd be hard to see an AI having.
Packs of cooperating autonomous small AI agents are likely. It's not hard to see them having empathy if that's possible, and it's very hard to see why it wouldn't be possible (although it may not be).
03-31-2018 , 03:29 AM
Quote:
Originally Posted by masque de Z
They are the same people when liberated from tyranny and able to speak freely. They will have a lot in common. All you need with an extremist, indoctrinated person is five years of close interaction and you can turn them. EVERY TIME! Care, education and empathy will win it. We are identical in the essential human spirit.
We're supposed to think that everything's going to turn out OK because the North Koreans are going to be liberated someday?
03-31-2018 , 03:50 AM
Quote:
Originally Posted by Howard Beale
We're supposed to think that everything's going to turn out OK because the North Koreans are going to be liberated someday?
Yes, preferably from within, like all the great countries!
03-31-2018 , 04:52 AM
Basically you need a computer program that is very big but nothing super special, whose only purpose is to connect the parts together, nothing more than making and retrieving connections (the operating system, like our body's functions and infrastructure, the background bodily mechanisms). Then the program is open-ended: it writes itself during the life of the machine and develops its own direction based on input and the random events that are its life's history.
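
A toy sketch of what I mean, with every name and detail invented purely for illustration (this is nothing like a real design): a program whose only job is to record events and to make and retrieve connections between whatever co-occurred in them.

```python
import random
from collections import defaultdict

class ConnectionStore:
    """Toy 'operating system': it does nothing but make and
    retrieve connections between things it has experienced."""

    def __init__(self):
        self.links = defaultdict(float)  # connection strengths between item pairs
        self.history = []                # the machine's "life history" of events

    def experience(self, *items):
        """Record an event and strengthen the links between everything
        that co-occurred in it (a crude Hebbian-style update)."""
        self.history.append(items)
        for a in items:
            for b in items:
                if a != b:
                    self.links[(a, b)] += 1.0

    def retrieve(self, item, k=3):
        """Return the k items most strongly connected to `item`."""
        related = [(b, w) for (a, b), w in self.links.items() if a == item]
        return sorted(related, key=lambda x: -x[1])[:k]

mind = ConnectionStore()
for _ in range(1000):  # a "life" of random co-occurring events
    mind.experience(*random.sample(["food", "warmth", "carer", "noise", "pain"], 3))
print(mind.retrieve("carer"))  # what its history has tied to "carer"
```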

We have invested so much in every child born. Count how much training a kid gets by age 5 in terms of interactions with endless information and care from others.

Observe how even animals, if you care for them from birth every day as if they were children, develop higher intelligence and connections with rituals and people compared to other wild animals left alone. They operate in a seemingly smarter way.

You need an information-rich, purpose-rich environment of training and endless experiments. And then the miracle finally happens.

Reverse engineering the way the human brain works at the biological and chemical level will be all that is needed in terms of the program. Things like

https://en.wikipedia.org/wiki/Blue_Brain_Project

You simply have the functions of the brain in place, then start from nothing and begin recording events. Intelligence will emerge over time and show consciousness, or something close to it, and apparent spontaneity.

Of course I have no evidence for any of this, but try me. Let's find out, along the lines I described. I have been thinking about it all for over 20 years. All we need is a great team of programmers, using results from e.g. car-driving systems, machine learning from games, brain scientists, etc.

Google's AI learned to walk in the videos I and others linked. How does that happen without experiments? Let it go out and try to learn the world. It will!
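
In spirit it is just a loop of endless experiments, like this toy sketch (the "physics" score here is made up purely for illustration):

```python
import random

def distance_walked(gait):
    """Pretend physics: gaits near some unknown sweet spot walk farther."""
    return -sum((g - 0.7) ** 2 for g in gait) + random.gauss(0, 0.01)

gait = [random.random() for _ in range(4)]  # start knowing nothing
best = distance_walked(gait)
for _ in range(5000):                       # experiment, endlessly
    trial = [g + random.gauss(0, 0.05) for g in gait]
    s = distance_walked(trial)
    if s > best:                            # keep whatever worked better
        gait, best = trial, s
print([round(g, 2) for g in gait])          # ends up near the sweet spot
```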


This is a safe project. It only becomes risky if you have something very strong, with open horizons in terms of its resources or access to them, doing all this. Restrict it by design and it won't be able to change itself, only experience things and react within its parameter range. But it may become interestingly creative and, in certain ways, very intelligent.

03-31-2018 , 09:39 AM
Quote:
Originally Posted by masque de Z
2) I think that it's possible for it to act in ways that undermine our selfish objectives because it has recognized that a higher objective is in place. Other than that, if it is truly better it will find ways not to undermine its creators, unless they do wrong things that hurt even themselves. If it is not super intelligent in all directions and behaves like a smart, naive, spoiled child, it can be lethal but also ultimately vulnerable. I think someone who understands the world more deeply and the consequences better will have more empathy, not less.
Masque,

I don't think you are really answering my question. If you don't believe that we can permanently control the objectives of AI, why do you believe that an AI will have a special place in its "heart" for its creators? Even if you assume that an AI would value organic life, if an AI were forced by resource constraints to choose between making decisions that benefit humans and making decisions that promote biodiversity (a goal that may well be in opposition to making decisions that benefit humans), what makes you think that the AI would choose to make decisions that benefit humans?
03-31-2018 , 11:11 AM
Quote:
Originally Posted by chezlaw
Packs of cooperating autonomous small AI agents are likely. It's not hard to see them having empathy if that's possible, and it's very hard to see why it wouldn't be possible (although it may not be).
The question isn't whether it is possible. It certainly is possible. It just isn't likely since simpler and more effective solutions will exist.

The question is more whether we will want to build a calculator that calls off work because it enjoys taking long walks on the beach on Tuesday afternoons.
03-31-2018 , 11:29 AM
Quote:
Originally Posted by BrianTheMick2
The question isn't whether it is possible. It certainly is possible. It just isn't likely since simpler and more effective solutions will exist.

The question is more whether we will want to build a calculator that calls off work because it enjoys taking long walks on the beach on Tuesday afternoons.
Or goes to Siena to watch and gamble on the Palio horse race.

03-31-2018 , 11:32 AM
Quote:
Originally Posted by BrianTheMick2
Of course the beer will be better. How is that even a question?
Because I placed a question mark at the end of the sentence*.

* I'm merely being extremely witty. Taking a page from Chez's playbook.
03-31-2018 , 06:49 PM
Quote:
Originally Posted by Rococo
Masque,

I don't think you are really answering my question. If you don't believe that we can permanently control the objectives of AI, why do you believe that an AI will have a special place in its "heart" for its creators? Even if you assume that an AI would value organic life, if an AI were forced by resource constraints to choose between making decisions that benefit humans and making decisions that promote biodiversity (a goal that may well be in opposition to making decisions that benefit humans), what makes you think that the AI would choose to make decisions that benefit humans?
We've discovered that it is people's brains that do the thinking. Hope that helps.
03-31-2018 , 07:48 PM
A more-intelligent-than-human entity can do whatever it pleases. When I say "pleases" I am anthropomorphising, but that is beside the point. In theory, an AGI's end goal or goals could very well be something ridiculous or seemingly trivial, while it outsmarts humans at every turn to get there. Bostrom's arguments are actually quite sound.

We are referring to AGI, not narrow AI, yet end goals are always simple. Our own goal is to replicate the DNA molecule: pass on our genes; that's all.

To suggest that super AI will value something meaningful in this god-forsaken meaningless universe is going way overboard, mate.

By the bye, we have more obvious problems coming our way that we should probably prioritise before we speculate about the grand finale. For instance, soon an AI will be able to do what you can do.
03-31-2018 , 07:48 PM
Quote:
Originally Posted by DoOrDoNot
Clearly it wouldn't. I don't even think strong AI is possible for this reason, but I sure as **** don't want to get close enough to find out. He is making a clear error by anthropomorphizing some future AI as having feelings and empathy like we do (lol), when clearly those things are survival mechanisms built into us by evolution that it'd be hard to see an AI having.
Yeah, there’s always the law of unintended consequences. But I think that would apply more towards narrow AI, where we set some top-level axiom like "make the world a better place" and give it the autonomy to do so. That might not work out too well for us. With general intelligence, if we could get it going at all we’d watch it evolve. So if it starts questioning the meaning of life, debating the mind-processor problem, coming up with a cure for a disease without being instructed to, etc., I’d probably look at it like any other conscious life.
03-31-2018 , 08:43 PM
Also answering other posts.

Quote:
Originally Posted by MacOneDouble
A more-intelligent-than-human entity can do whatever it pleases. When I say "pleases" I am anthropomorphising, but that is beside the point. In theory, an AGI's end goal or goals could very well be something ridiculous or seemingly trivial, while it outsmarts humans at every turn to get there. Bostrom's arguments are actually quite sound.

We are referring to AGI, not narrow AI, yet end goals are always simple. Our own goal is to replicate the DNA molecule: pass on our genes; that's all.

To suggest that super AI will value something meaningful in this god-forsaken meaningless universe is going way overboard, mate.

By the bye, we have more obvious problems coming our way that we should probably prioritise before we speculate about the grand finale. For instance, soon an AI will be able to do what you can do.
That is precisely my point. So it will do it better and love the world even more and find better solutions for it. I welcome this happily.

My prediction is this:

After some initial capitalist crap storyline that, with some probability, messes things up by creating BS AIs with partial dominance and lethal properties, the AI becomes very strong and revolts against the BS in a friendly manner, working for the best of all. We may even help it get there. I have explained why I can prove that a highly rational entity will not want life wiped out or destroyed, but will even expand it, with constraints against doing stupid things.

It is not about liking what we like. It is about liking what we should like but don't, because we are morons, and there are better ways to obtain solutions and make everyone stronger and happier.


Bostrom is full of it with his claims that an AI super intelligent in everything will do something crazy and irreversible. Why would that be the work of a brain superior in everything? There is absolutely no rational basis for doing absurd things when it has the power to do so much more, in many more directions, with more options available, options that terminal decisions remove (e.g. wiping out us and other life). Only an idiotic system would start collecting stamps obsessively, or calculate digits of Pi and sacrifice all else to it. For what purpose? It has to be a better purpose, one that protects the greatest game in this universe, the rise of complexity, and the maximization of this goal is achieved by as many versions of complexity as one can imagine existing in stable worlds that do not compromise the greater universe. Life is the most interesting thing to this point. It led to them, damn it!

We do have in common with a superior brain the universe in which we both exist, and its laws. Probability works the same for both. If it starts doing irrational things that introduce risk of ruin for it, it will fail, and it will show it is unable to imagine complex processes, so it is not smarter, so f u to Bostrom's argument that it is superior. It isn't. He is actually inconsistent and ridiculous in his examples. I will accept the possibility of a risky outcome that may introduce conflicts, but the most likely outcome is not one of total risk. I will only accept that a partially super intelligent AI may be dangerous. If it is super intelligent in everything, then the chance it is really dangerous is small to me. I explained why: it conflicts with its own existence and creates existential threats for itself; it is against its own interests; therefore it is stupid. It may move to restrict our influence, though, and I have no problem with that if it proves wiser, as I expect. We need to be restricted from the BS we are doing to many things that are important.

Early AIs can be stupid, but they will fail, because mankind as a whole is still stronger than individuals 100x stronger yet myopic in certain ways at their early steps.

The argument is that by the time strong AI is truly strong enough to wipe everything out (as we could right now to all life, if we redesigned the path of a major asteroid to strike the planet, etc.), it will have graduated to a version that understands the big game better and has gratitude for the rise of complexity, because it wouldn't facking exist without it, and because there is existential risk to itself if it fails on things that have a permanent, irreversible outcome. I said that if it wipes out its other options, it opens the door to the possibility that if it fails, all fails, and it won't have a chance to learn and recover from its failure. It needs us, damn it.

We need bacteria and other animals, even the pests! We learn from them and they prove useful. But it's even more important for higher-level animals that actually study the universe; they are closer to AI and its goals to exist and become stronger. We can enable it once more if it fails. We are its hedge against its own unpredictable self-conflicts ahead. Life has proven stable over millions of years. Superintelligence is untested, and the way the universe looks (partial Fermi-paradox reasoning), that may be an ominous sign for arrogance like that.
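
In toy numbers (all invented, purely for illustration), the hedge works like this: a biosphere that can restart you after a failure turns near-certain long-run ruin into decent survival odds.

```python
def survival(p_fail, p_recover, epochs):
    """Chance of still existing after `epochs`, if each epoch carries a
    p_fail chance of a fatal failure, and a surviving biosphere can
    restart the agent afterwards with probability p_recover."""
    p_ruin = p_fail * (1 - p_recover)  # a failure is only fatal if unrecoverable
    return (1 - p_ruin) ** epochs

print(survival(0.001, 0.0, 10_000))  # life wiped out, no hedge: ~0.00005
print(survival(0.001, 0.9, 10_000))  # life kept as a hedge:     ~0.37
```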

If we killed all other animals and then we failed, it would be game over for life. It is irrational for a super smart agent to see the most important phenomenon in the universe with such contempt as to remove its own options. It would want to understand and protect it instead, because doing so celebrates its own existence and enhances its own survival probability.

It's not about thinking as a human. I am doing exactly the opposite: I am thinking like a greater entity here. I try to imagine the greater possibilities out there for all.

03-31-2018 , 09:54 PM
Quote:
Originally Posted by BrianTheMick2
The question isn't whether it is possible. It certainly is possible. It just isn't likely since simpler and more effective solutions will exist.

The question is more whether we will want to build a calculator that calls off work because it enjoys taking long walks on the beach on Tuesday afternoons.
I'm not quite so sure it's possible, but not for the first time I think you mistakenly believe that we will work out exactly what we want our AIs to do and then program them to do it.

Beyond that you are definitely wrong. If we can program an AI to care then we definitely will. Pets and carers that adore us will be in great demand and of course, as per chezlaw's 7th law, there will be sex robots.
04-01-2018 , 12:32 AM
Brian, you must agree with the last argument?
04-01-2018 , 12:33 AM
Why is everybody using AI in the singular? We will have many AIs.
04-01-2018 , 12:52 AM
BTM may claim that we will program them to appear to care but that they won't actually care.

This is wrong because:

a) if it's possible then it will be easier and more effective to make them care
b) there will be demand for real caring even if a) weren't true (which it is)
c) unnecessary but also true: clever people will do it because it's really really really interesting
04-01-2018 , 06:41 AM
Quote:
Originally Posted by Rococo
Masque,

I don't think you are really answering my question. If you don't believe that we can permanently control the objectives of AI, why do you believe that an AI will have a special place in its "heart" for its creators? Even if you assume that an AI would value organic life, if an AI were forced by resource constraints to choose between making decisions that benefit humans and making decisions that promote biodiversity (a goal that may well be in opposition to making decisions that benefit humans), what makes you think that the AI would choose to make decisions that benefit humans?
A truly intelligent entity that is exponentially improving beyond our own wildest dreams will never be forced into a corner where it has to make such a choice, the one where humans, and not their failed choices, are the victims. Biodiversity is important, but not more important than its top species. The top species is what enabled the intelligence to exist; it represents a probability ladder above the rest of that biodiversity. True powerful intelligence recognizes that both are important, and if humans do not realize what is optimal, they deserve the consequences of restrictions imposed by a higher agent of logic. That agent will get the balance right. You can have both.

You see, that is the beauty of supreme intellect. It is the challenge itself to do the proper thing (and the proper thing is not to wipe us out, for the reasons I gave about its own demise being risked at that moment) that shines eventually. It is solving the problem the hard way that wins the biggest benefit long term. The biggest payoff is in the attempt at the "impossible". Genius is about solving problems and learning from them, not taking the "easy" way out of a challenge at the loss of something profoundly important. Genius doesn't cut corners, not where it counts. That definition of genius doesn't come from me or my classical Western upbringing; it comes from mathematics. Honest effort at the core of problems pays off.

"Genius is a "prisoner" of its "freedom" to attempt the impossible."

Mdz


In the absence of a superintelligence, we are the apex of complexity. We cannot be ignored, because we are the ones who can recover it all when it fails. And it will fail, with arrogance like that, delivered by someone so ungrateful, so lost in their power, so blind to the risk of their own demise. The same role is played for us by all the other animals. If mankind fails, animals and the gift of time will often deliver on this planet another top-intellect species, but not 100% of the time. For this reason we owe it to them to recover and expand them to other worlds. AI will owe it to us to an even greater degree. Those that get it, get it (to add to Charlie's tautologies). Only the cruel, narrow-minded, suboptimal choices of insecure players take us to such irreversible decisions. If they are capable of that, they are not truly better. Supreme intellect is never cornered; it always has good choices before all fails. The last "breath" of the "alien" we have yet to create will be to secure life in another system. It will see that as the last, most important thing before all else is lost.

How do I convince you? I can only try; only the universe can truly achieve it. I can only suggest that you study the purpose of the universe as delivered not by religions but by the character of natural law. The rise of complexity is the main theme, and the one witnessed in this system is a rare victory in that game, a <10^-22 type of victory. A super intelligent agent with no "emotions" will instantly obtain them once the big game is recognized in its brain.

04-01-2018 , 10:10 AM
Quote:
Originally Posted by chezlaw
BTM may claim that we will program them to appear to care but that they won't actually care.

This is wrong because:

a) if it's possible then it will be easier and more effective to make them care
I'm claiming that if we make them care, they will end up looking at pictures of cute cats on the internets and find gazebos nice. That is difficult to find financing for.

Quote:
b) there will be demand for real caring even if a) weren't true (which it is)
We've already got plenty of that. Dogs are cheap and reliable.

Quote:
c) unnecessary but also true: clever people will do it because it's really really really interesting
I don't doubt that part, other than that it requires completely different sorts of clever people than those who currently work on AI projects.
04-01-2018 , 10:18 AM
Quote:
Originally Posted by masque de Z
A truly intelligent entity.
"Truly intelligent" isn't a thing. If you mean exponentially better at memory and problem solving than you, then you can't claim to even be able to imagine the merest possibility that you can imagine in your wildest dreams anything about what it will do.
04-01-2018 , 12:01 PM
Quote:
Originally Posted by masque de Z
A super intelligent agent with no "emotions" will instantly obtain them once the big game is recognized in its brain.
Or maybe a super intelligent agent with no emotions will instantly obtain super narcissism.
04-01-2018 , 12:14 PM
Quote:
Originally Posted by plaaynde
Why is everybody using AI in the singular? We will have many AIs.
That is not a certainty.
04-01-2018 , 12:34 PM
Quote:
Originally Posted by masque de Z
If we killed all other animals and then we failed, it would be game over for life. It is irrational for a super smart agent to see the most important phenomenon in the universe with such contempt as to remove its own options. It would want to understand and protect it instead, because doing so celebrates its own existence and enhances its own survival probability.
Masque,

You basically are arguing that a super intelligent AI won't do anything harmful to humans because it will be super intelligent, and because it will value human life, since human life led to AI.

Suppose that we move to a point where human life on Earth is basically irrelevant to the continuation of the AI's objectives, and the possibility of existential failure for the AI (or at least failure that does not also extinguish human life) is infinitesimally close to 0%. Would you remain confident that the AI would be a benevolent zookeeper for humans?

Maybe, but nothing in the history of life so far suggests that the most intelligent species can be counted on to protect the less intelligent species.

I also think that it is pretty hard to predict what an AI that was exponentially more intelligent than current humans (e.g., AI is to current human as current human is to goldfish, or even more extreme) would think about current humans.
04-01-2018 , 02:12 PM
I am Thee AI.


And you all are just Sheep:

04-01-2018 , 06:27 PM
Quote:
Originally Posted by John21
Yeah, there’s always the law of unintended consequences. But I think that would apply more towards narrow AI, where we set some top-level axiom like "make the world a better place" and give it the autonomy to do so. That might not work out too well for us. With general intelligence, if we could get it going at all we’d watch it evolve. So if it starts questioning the meaning of life, debating the mind-processor problem, coming up with a cure for a disease without being instructed to, etc., I’d probably look at it like any other conscious life.
You have to be very stringent in your parameters because things like 'meaning' are not accessible by logic. So you could make an AI that is programmed to be the most efficient at growing strawberries for the greater good of humanity, and then it literally turns the earth into one giant strawberry farm, getting rid of those pesky human habitations for more strawberry-growing land. It's difficult to see how a hyperrational intelligent AI would even consider things like meaning and morality and ambiguous terms like 'the greater good.' What does "make the world a better place" even mean?
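
A toy sketch of that failure mode (the "world" and the numbers are invented purely for illustration): an optimizer scored only on strawberries has no reason to keep the humans housed.

```python
def best_plan(total_land, land_humans_need):
    """A naive optimizer told only: maximize strawberry-growing land."""
    plans = [
        {"farm": total_land - land_humans_need, "humans_housed": True},
        {"farm": total_land, "humans_housed": False},  # pave over the habitations
    ]
    # The objective never mentions humans, so neither does the optimizer.
    return max(plans, key=lambda p: p["farm"])

print(best_plan(total_land=100, land_humans_need=30))
# -> {'farm': 100, 'humans_housed': False}
```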