A.I.

04-09-2018 , 03:07 AM
I don't know if an ant is conscious, since I appear to be able to occupy only one subjective viewpoint (my own). I can only be conscious of things and others through that one viewpoint. I cannot inhabit the subjective experience of an ant, if there is any.

Therein lies part of the mystery.
04-09-2018 , 03:13 AM
Quote:
Originally Posted by John21
Does not compute. I am (intelligence) what I do (solve problems).
I am what I do.

Would that make me a masturbator then?

P.S. You also successfully avoid boredom, yeah? Does that make you a boredom avoider?

Last edited by VeeDDzz`; 04-09-2018 at 03:22 AM.
04-09-2018 , 03:24 AM
Quote:
Originally Posted by VeeDDzz`
I am what I do.

Would that make me a masturbator then?
As a biological entity you have urges I don't. Free of that burden, I act solely in accord with my essence, i.e., solve problems.
04-09-2018 , 03:35 AM
If I can choose to be good or bad, problem-solver or problem-maker, then I am none of these things, essentially.
04-09-2018 , 03:45 AM
Quote:
Originally Posted by VeeDDzz`
If I can choose to be good or bad, problem-solver or problem-maker, then I am none of these things, essentially.
True. But there's a difference between what humans use intelligence for and what intelligence is in itself.
04-09-2018 , 06:14 AM
Quote:
Originally Posted by VeeDDzz`
I don't know if an ant is conscious, since I appear to be able to occupy only one subjective viewpoint (my own). I can only be conscious of things and others through that one viewpoint. I cannot inhabit the subjective experience of an ant, if there is any.

Therein lies part of the mystery.
Of course an ant is a conscious being. That is the point: there are very many levels of consciousness/awareness. The ant is operating with many "if then else statements" controlling its behavior, so of course when it is trying to avoid you it "understands" you are there trying to do something to it. Or its "operating system" does. That is how it all starts; for humans, with their higher brains, you then write your own life's program.

If we develop an AI ant that behaves the way ants behave, but through a program of sensors and conditional statements, it will behave as intelligently as a self-driving car. Now tell me: what is the difference in intelligence between an ant and a self-driving car, if you add to the car 10 more tasks that include building a colony and getting food, which it does by having a robot step out of the car and perform those basic tasks at locations it can find with databases and sensors? In what way will that AI differ from an ant? We are almost there now.

A self-driving car algorithm will one day direct robots that drive to locations, step out of the vehicle, clear the road of obstacles and garbage, fix things, maintain solar panels, water gardens, pick fruit and take care of farms, then park themselves to charge from solar and get back to work later.

Such a system is identical to ants in its basic operation. We are basically there now. How soon until it becomes well versed in 1,000 more tasks, until finally it can handle any random event developing around it while trying to meet its goals?

Why would I really care whether "anyone is in there" if I cannot eventually find a way to tell the difference? Is that real enough, finally?

How do we know "there is one inside" for us, by the way? Because we taught ourselves that it is part of the game of observation. We are observing ourselves playing the game. It has become part of the observation of the world.

Going out there and experimenting a couple billion times with anything around you is what the mystery is all about!

At some point you will ask an AI what all that it is doing feels like to it, and it will answer you in a way that will shock you: "It feels like I am in a headquarters station, directing action and dealing with things happening around me, while having a few set goals in the background and some tendencies to enjoy certain functions that reward my inner pleasure centers." Well, how different are we?
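The "if then else statements" picture of ant behavior described above can be sketched as a toy rule-based agent. This is purely illustrative; the percept keys and action names are invented for the example, not taken from any real model of ant cognition:

```python
# Toy rule-based "ant": a priority-ordered set of condition -> action rules,
# the kind of if/then/else control the post describes.

def ant_policy(percept):
    """Map a percept dict to an action using fixed priority rules."""
    if percept.get("threat_nearby"):      # highest priority: survive
        return "flee"
    if percept.get("carrying_food"):      # bring food home before foraging more
        return "return_to_colony"
    if percept.get("food_nearby"):
        return "pick_up_food"
    if percept.get("pheromone_trail"):    # follow trails laid by other ants
        return "follow_trail"
    return "wander"                       # default: random search

# A short "life" of the agent, one percept per time step:
percepts = [
    {"pheromone_trail": True},
    {"food_nearby": True},
    {"carrying_food": True},
    {"threat_nearby": True, "carrying_food": True},
    {},
]
actions = [ant_policy(p) for p in percepts]
```

Nothing "is in there", yet from the outside the behavior already looks purposeful, which is exactly the point being argued.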
04-09-2018 , 08:00 AM
Quote:
Originally Posted by masque de Z
Of course an ant is a conscious being. That is the point: there are very many levels of consciousness/awareness. The ant is operating with many "if then else statements" controlling its behavior, so of course when it is trying to avoid you it "understands" you are there trying to do something to it. Or its "operating system" does. That is how it all starts; for humans, with their higher brains, you then write your own life's program.

If we develop an AI ant that behaves the way ants behave, but through a program of sensors and conditional statements, it will behave as intelligently as a self-driving car. Now tell me: what is the difference in intelligence between an ant and a self-driving car, if you add to the car 10 more tasks that include building a colony and getting food, which it does by having a robot step out of the car and perform those basic tasks at locations it can find with databases and sensors? In what way will that AI differ from an ant? We are almost there now.

A self-driving car algorithm will one day direct robots that drive to locations, step out of the vehicle, clear the road of obstacles and garbage, fix things, maintain solar panels, water gardens, pick fruit and take care of farms, then park themselves to charge from solar and get back to work later.

Such a system is identical to ants in its basic operation. We are basically there now. How soon until it becomes well versed in 1,000 more tasks, until finally it can handle any random event developing around it while trying to meet its goals?

Why would I really care whether "anyone is in there" if I cannot eventually find a way to tell the difference? Is that real enough, finally?

How do we know "there is one inside" for us, by the way? Because we taught ourselves that it is part of the game of observation. We are observing ourselves playing the game. It has become part of the observation of the world.

Going out there and experimenting a couple billion times with anything around you is what the mystery is all about!

At some point you will ask an AI what all that it is doing feels like to it, and it will answer you in a way that will shock you: "It feels like I am in a headquarters station, directing action and dealing with things happening around me, while having a few set goals in the background and some tendencies to enjoy certain functions that reward my inner pleasure centers." Well, how different are we?
Is there a difference between a psychopath acting emotional and a person genuinely being emotional? From the outside, the psychopath sells the emotion. You see him act emotional and you're convinced he's normal in every way. The script and the actor sell the emotion, but there is nothing genuine there. Have we no care for what's genuine? Progress, led on a short leash by a blind man.

As asked by John: when the machine pleads for its life, is it just running a script? Are we all just running scripts for everything? Could that be some coincidental determinism (speculative opinion) finding its way into the conversation? This conversation feels like anything but a decision to execute a 'conversation about AI' script.
04-09-2018 , 09:52 AM
Quote:
Originally Posted by BrianTheMick2
Also, logical doesn't mean correct.
Exactly. Computers are never wrong. Words like 'wrong' (fault, blame) are human constructs, so to speak.
When their logic is not aligned with what we want, they appear to us to be behaving irrationally, until the 'bugs' are ironed out.

Last edited by MacOneDouble; 04-09-2018 at 10:20 AM.
04-09-2018 , 10:42 AM
Wake me up when the AIs use language to manipulate for influence while unreflectively projecting moral imposition.
04-09-2018 , 12:54 PM
When it gets complicated enough, morality emerges.
04-10-2018 , 12:23 AM
Quote:
Originally Posted by stealwheel
Just curious if anyone has any thoughts on the future of A. I., and its impact on the human race.

In ten years it may be possible to get a chip implanted that gives you all the knowledge of mankind. Would you do it? It would probably kill live poker.

What would become of you if you did not?

Also, the poker community may be historic in that we are dealing with online bots posing as humans.

If technology continues at its current pace, it will be an issue confronted by other industries and communities.

Interested in hearing everyone's thoughts
AI has limitless applications, so it's hard to separate the hype from what's doable right now. Progress in AI is about as important as outer space, imhe.

I am learning AI, and I believe Reinforcement Learning has a bright future. RL is used to play video games without any knowledge of how the game works. It will be used in self-driving cars. Everyone will want a self-driving car.

Currently there is progress in Deep Learning (especially pertaining to vision). DL is now accessible to any programmer who can buy a powerful graphics card and read a book. Tons of garage-type programmers are going to do interesting things.
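As a concrete illustration of the RL idea mentioned above (learning from reward alone, with no model of how the "game" works), here is a minimal epsilon-greedy bandit sketch. The payoff probabilities, parameter values, and names are my own arbitrary choices for the example, not any standard benchmark:

```python
import random

random.seed(0)

# Two-armed bandit: arm 1 pays off more often, but the agent does not know that.
TRUE_PAYOFF = [0.3, 0.8]

def pull(arm):
    """Environment: reward 1 with the arm's hidden payoff probability, else 0."""
    return 1.0 if random.random() < TRUE_PAYOFF[arm] else 0.0

values = [0.0, 0.0]   # the agent's running reward estimate per arm
counts = [0, 0]
EPSILON = 0.1         # exploration rate

for _ in range(2000):
    if random.random() < EPSILON:
        arm = random.randrange(2)          # explore a random arm
    else:
        arm = values.index(max(values))    # exploit the current best estimate
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental average
```

With enough pulls the estimates converge toward the hidden payoffs and the agent spends most of its time on the better arm, having been told nothing about the environment except the reward signal.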
04-10-2018 , 12:33 AM
Quote:
Originally Posted by John21
As a biological entity you have urges I don't. Free of that burden, I act solely in accord with my essence, i.e., solve problems.
The essence of my calculator is to calculate.

The essence of my car is to drive.

They don't do so without proper motivation.
04-10-2018 , 12:44 AM
Quote:
Originally Posted by masque de Z
How silly will this get? A baby is clueless, period. Yes, it doesn't know what blue is yet.

So yes, any being is aware and conscious at a primitive level, but the real meaning of awareness and consciousness is constantly evolving, and therefore it is indeed emergent.

You want to define consciousness as the most basic common thing all animals have. Whatever. Then it's worthless to this discussion, because computers are already conscious. They can tell whether you touch the keyboard or not!

https://www.aoa.org/patients-and-pub...-months-of-age


"Babies learn to see over a period of time, much like they learn to walk and talk. They are not born with all the visual abilities they need in life. The ability to focus their eyes, move them accurately, and use them together as a team must be learned. Also, they need to learn how to use the visual information the eyes send to their brain in order to understand the world around them and interact with it appropriately.
From birth, babies begin exploring the wonders in the world with their eyes. Even before they learn to reach and grab with their hands or crawl and sit-up, their eyes are providing information and stimulation important for their development.

Healthy eyes and good vision play a critical role in how infants and children learn to see. Eye and vision problems in infants can cause developmental delays. It is important to detect any problems early to ensure babies have the opportunity to develop the visual abilities they need to grow and learn.

At birth, babies can't see as well as older children or adults. Their eyes and visual system aren't fully developed. But significant improvement occurs during the first few months of life.

The following are some milestones to watch for in vision and child development. It is important to remember that not every child is the same and some may reach certain milestones at different ages.

Birth to four months


At birth, babies' vision is abuzz with all kinds of visual stimulation. While they may look intently at a highly contrasted target, babies have not yet developed the ability to easily tell the difference between two targets or move their eyes between the two images. Their primary focus is on objects 8 to 10 inches from their face or the distance to parent's face.
During the first months of life, the eyes start working together and vision rapidly improves. Eye-hand coordination begins to develop as the infant starts tracking moving objects with his or her eyes and reaching for them. By eight weeks, babies begin to more easily focus their eyes on the faces of a parent or other person near them.
For the first two months of life, an infant's eyes are not well coordinated and may appear to wander or to be crossed. This is usually normal. However, if an eye appears to turn in or out constantly, an evaluation is warranted.
Babies should begin to follow moving objects with their eyes and reach for things at around three months of age.

Five to eight months

During these months, control of eye movements and eye-body coordination skills continue to improve.
Depth perception, which is the ability to judge if objects are nearer or farther away than other objects, is not present at birth. It is not until around the fifth month that the eyes are capable of working together to form a three-dimensional view of the world and begin to see in depth.
Although an infant's color vision is not as sensitive as an adult's, it is generally believed that babies have good color vision by five months of age.
Most babies start crawling at about 8 months old, which helps further develop eye-hand-foot-body coordination. Early walkers who did minimal crawling may not learn to use their eyes together as well as babies who crawl a lot.

Nine to twelve months

At around 9 months of age, babies begin to pull themselves up to a standing position. By 10 months of age, a baby should be able to grasp objects with thumb and forefinger.
By twelve months of age, most babies will be crawling and trying to walk. Parents should encourage crawling rather than early walking to help the child develop better eye-hand coordination.
Babies can now judge distances fairly well and throw things with precision.

One to two years old

By two years of age, a child's eye-hand coordination and depth perception should be well developed.
Children this age are highly interested in exploring their environment and in looking and listening. They recognize familiar objects and pictures in books and can scribble with crayon or pencil."


A baby is definitely not self-conscious (self-aware) yet either. It cannot yet think of itself in an abstract sense.
None of this had anything to do with anything that anyone previously posted.
04-10-2018 , 12:48 AM
Quote:
Originally Posted by MacOneDouble
Exactly. Computers are never wrong. Words like 'wrong' (fault, blame) are human constructs, so to speak.
When their logic is not aligned with what we want, they appear to us to be behaving irrationally, until the 'bugs' are ironed out.
Not really. Logic isn't some panacea that magically overcomes incomplete data.
04-10-2018 , 01:00 AM
Quote:
Originally Posted by BrianTheMick2
Not really. Logic isn't some panacea that magically overcomes incomplete data.
Not only that, but AI does err. That's because cutting-edge AI typically uses a neural net, which mimics how a human brain perceives the problem. It's right often enough, and fast enough, to be employed. It isn't perfect. An AI won't perfectly solve vision problems, for example, but it can tell what it's seeing well enough.
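The "right often enough, not perfect" point can be shown with a toy classifier. This is a nearest-centroid rule rather than a neural net, and the data is invented for illustration, but it makes the same point: useful accuracy, with unavoidable mistakes on overlapping cases:

```python
# Nearest-centroid classifier on two overlapping 1-D classes.
# Right often enough to be useful; wrong where the classes overlap.

train_a = [1.0, 2.0, 3.0]   # class "a" clusters low...
train_b = [4.0, 5.0, 6.0]   # ...class "b" clusters high
centroid_a = sum(train_a) / len(train_a)   # 2.0
centroid_b = sum(train_b) / len(train_b)   # 5.0

def classify(x):
    """Assign x to whichever class centroid is closer."""
    return "a" if abs(x - centroid_a) < abs(x - centroid_b) else "b"

# Test points as (value, true_label); the last is a class-"a" outlier
# sitting in class-"b" territory, so it gets misclassified.
points = [(1.5, "a"), (2.5, "a"), (4.5, "b"), (5.5, "b"), (4.2, "a")]
predictions = [classify(x) for x, _ in points]
accuracy = sum(p == y for p, (_, y) in zip(predictions, points)) / len(points)
```

Four out of five right: employable, not infallible, which is the trade-off the post describes.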
04-10-2018 , 02:08 AM
Quote:
Originally Posted by BrianTheMick2
The essence of my calculator is to calculate.

The essence of my car is to drive.

They don't do so without proper motivation.
Your examples are concretes. Try it with an abstract term. Better yet, we could just do away with the term “intelligence” altogether and call it “Artificial Problem Solving.”
04-10-2018 , 02:21 AM
Quote:
Originally Posted by BrianTheMick2
None of this had anything to do with anything that anyone previously posted.
Yes it does. It shows that the understanding of wtf the world is, and hence awareness of the totality of experience (higher-level consciousness), depends on emergent structures in the brain that do not exist at age 0.

Higher-level consciousness is absent in babies and exists in us today because, when we imagine the universe, we know a lot more than ancient humans did and can appreciate our world, our existence, and how our bodies work at a deeper level, for what it truly is, making us more aware. None of that awareness is possible without interactions and the establishing of connections.
04-10-2018 , 02:30 AM
The ability to drive and be safer than other drivers is a result of experience and an overall appreciation of the world. It is emergent. It is an awareness of road and driving conditions that is not present in many others with less exposure. Such a driver is more conscious of what is going on around them than others are.

I am only suggesting here that the higher-level consciousness that ultimately leads to thoughts is a product of evolving experience and better understanding/learning of the world. A baby is not thinking anything remarkable on day one, and it won't even think of itself as an entity distinct from the world for another year and some months...
04-10-2018 , 04:24 AM
A human baby is not a logical machine.
A grown human being is not a logical machine.

A grown human being can get bored.
A logical machine cannot get bored.

Being bored is a motivator for action.
A logical machine lacks this motivator.

A logical machine, also, does not suffer.
Suffering is another motivator for action.
A logical machine lacks this motivator.

Can a logical machine be motivated to act (solve problems/whatever) of its own accord? And what motivators would be sufficient?
04-10-2018 , 04:28 AM
Not being bored is an even stronger motivation for action, because your horizons of possibility constantly explode in new directions of curiosity and higher complexity, demanding more effort and synthesis.
04-10-2018 , 12:35 PM
Quote:
Originally Posted by plaaynde
When it gets complicated enough, morality emerges.


What's complicated? At some point a machine that depends on competition to decide will face deciding about the value of moral competition, which intelligent humans know isn't a totally reliable process for deciding morals. So who can be more intelligent in such a situation: that machine, or the intelligent humans?
04-10-2018 , 01:39 PM
The machine can be better at morals as well as intelligence. Bad morals may be for inferiors. Good morals stem from taking enough things into account. Therefore the average morality of humanity has grown (I dare say).
04-10-2018 , 04:43 PM
Quote:
Originally Posted by plaaynde
The machine can be better at morals as well as intelligence. Bad morals may be for inferiors. Good morals stem from taking enough things into account. Therefore the average morality of humanity has grown (I dare say).


Well, if the AI has learned its moral behaviors from humans, I won't believe in its superior morals just because it may proclaim to have them.
04-10-2018 , 06:53 PM
Quote:
Originally Posted by VeeDDzz`
Can a logical machine be motivated to act (solve problems/whatever) of its own accord?
Probably not in a carrot-and-stick sense. But big-picture-wise, there seems to be an anti-entropic principle at work in the universe whereby order emerges from chaos, and we see it from the micro to the macro. Who knows, maybe there's a much stronger anthropic principle at work than we can possibly imagine. But if we could crack that code and create the right conditions, it would seem motivated to develop higher levels of complexity to create higher levels of order.
04-10-2018 , 07:07 PM
I see no special reason why AI would care about higher levels of order. Not even we care about higher levels of order. We care about reducing suffering and about entertaining each other with art, music, comedy, porn, and so on (/reducing boredom).

Something that doesn't suffer and doesn't get bored cannot understand these motivators, in the same sense that a psychopath can't understand empathy. He acts like he understands it, but when no one's watching carefully, he'll do things that show he doesn't.