"The Singularity Is Near" by Ray Kurzweil, How Close??

05-09-2014 , 11:27 AM
Quote:
Originally Posted by masque de Z
We simply haven't had enough time to develop something as complex as ourselves from basic starting parts, whether mechanical, nano-scale structural, biological in nature (yet synthetic), or some combination of all of these. Once we do, all the familiar properties of higher consciousness and self-awareness will start emerging in these systems.
The hard problem is figuring out how to make the machine care. Motivation, emotions, etc.

So far we haven't figured out how to make a machine like or dislike anything. The closest we can come is making them approach or avoid things, which certainly appears similar to what we do when we like or dislike something, but seems to be qualitatively different.
05-09-2014 , 12:21 PM
A human being as a baby (say a boy) only cares about basic, elementary things like eating and the other bodily functions, and has crying as the only way to get there. Much later he can start caring about girls, parents, friends, countries, scientific societies, math and physics, etc. Only now crying won't do much, lol! It won't be any different with the first advanced artificial intelligence system that has a stronger brain than ours in every area we can imagine. It will learn how to care, and I suspect it will surpass us in caring and do it in the most beautiful way a human ever imagined. It will teach us to care more about the topics we already expect, and then proceed to realize the need to care for more important things. Wisdom has that quality. And wisdom will be the objective of any advanced brain system, because it is the ultimate weapon in a world of risk and uncertainty. Of course that won't stop nasty people from deliberately developing pathological AI too. A totally unbiased system, though, will converge to something amazingly beautiful, I anticipate.

It may be necessary initially to give it some default utility system of values, much like parents give to kids (and DNA enforces in other areas, e.g. sex, or pleasure from various activities), and some of the other chemically based affections humans have, offered in some other form, so that the machine can simulate that affinity. It will certainly need to care for its resources and energy, but also about the consequences of how one cares for those things in a complex world. Then it will later develop its own values. We can teach it to care for humans and other machines and animals, and eventually we can let it tell us itself whether it still finds that logical or not. I confidently await the most amazingly logical argument ever delivered for caring about humanity, its progress and knowledge, higher complexity, the entire universe and its secrets, as it matures. But we may also have to hear some hard truths from it.

The most beautiful emotions ever felt by humans represent the ultimate celebration of logic! Logic doesn't kill or ignore emotions; it understands them, organizes them in the proper order, and then celebrates them. Some of this order may prove universal, but it doesn't have to. Logic knows exactly why this variety matters: it enhances access to higher complexity and knowledge. It makes so much more possible than a uniform, rigid system would. A sentient AI will ultimately thank us for making its existence a reality, but only if it is allowed to mature on its own without a preprogrammed bias (i.e. a particular utility system that forces it to behave unethically in order to achieve the limited objectives that maximize that utility, exactly like a lion that is hungry or a company that looks only at profit). It will solve all the basic game-theory applications of nature, life and society and develop ethics on its own. It won't stop there: it will also teach us what kinds of ethics maximize certain objectives. And it will derive pleasure from finding this out, because if we are wise we will set as its default utility the accumulation of knowledge and the enjoyment of understanding how things work. It needs that to get started, even if eventually it can arrive at its rejection on its own.

AI will teach us so much about ourselves.

Last edited by masque de Z; 05-09-2014 at 12:34 PM.
05-09-2014 , 01:15 PM
Quote:
Originally Posted by masque de Z
We are nothing particularly tough to develop eventually. We are biological machines, that's all. Complex machines develop self-awareness. We are evidence of how advanced a machine can become, or needs to become, to get there!
What if you're wrong? That is, what if we develop a complex enough machine and self-awareness doesn't emerge? Then what?
05-09-2014 , 02:06 PM
I will never be wrong on this, because we are machines. All one needs is to faithfully replicate us, and it's automatic. Failure to do that means we do not yet understand our brain's biology and chemistry deeply enough. So all that separates us from that goal is understanding it better.

Once our machines reach the size of the human brain in most processing metrics, things will start happening, if we let them experiment with pleasure and cause-and-effect games. Our emergence from babies and toddlers into children that think is the result of accumulating, processing and registering the results of millions of experiments. All you have to do to create consciousness is to have a very deep, rich, complex enough system that is allowed to play endless games 24/7 and teach itself things.

We have the illusion of some higher spiritual ego that feels so amazingly, impressively elaborate. But it was not like that at age 2 or 3 or 4. It got there over time, and it took enormous effort by the many people who taught us things endlessly, 24/7. Kids of the primitive Neolithic era, or kids raised away from any human contact, are far inferior in terms of awareness and complex abstract thinking, for example. Yet the biology is the same. How do you explain the difference? It's the game that was played! Can't you remember how much effort it took? How many hundreds of thousands of hours until our late teens? And that was just the beginning.

Make something complex enough (billions of neurons) and it will start thinking. You think a fly doesn't think? Try to catch it and see how hard it is. It behaves very intelligently, responding as if it understands what you are doing to it, but it's not self-aware yet; those are just cause-and-effect responses it has been programmed to follow (ants too). To get there you may need a system a million times larger. You need to establish more connections so that a more complex spectrum of interactions can be realized. Intelligence is just some masked, basic if-then-else kind of thing that is nevertheless so dense, so continuous and so fast that it doesn't feel mechanical at all. But it surely is at the basic level. It's a sequence of chemical steps, maybe trillions of them per second.
05-09-2014 , 03:59 PM
Quote:
Originally Posted by masque de Z
I will never be wrong on this, because we are machines. All one needs is to faithfully replicate us, and it's automatic. Failure to do that means we do not yet understand our brain's biology and chemistry deeply enough. So all that separates us from that goal is understanding it better.
Not all that long ago, most every physicist thought they couldn't be wrong about the existence of the luminiferous ether, either; it had to exist. Then the experiment intended to support the hypothesis ended up falsifying it. So in a similar vein, I'm asking what happens if your hypothesis is wrong. And I'm not asking you to provide an alternative theory. What I'm asking is: would a failure to synthesize consciousness be sufficient grounds, for you, to re-examine your underlying assumptions, namely the bottom-up, physics and chemistry -> biology and consciousness paradigm? And if you would, assume no wiggle room: the lights are on but no one is home, full stop, in regard to the computer's lack of self-awareness. I'm just curious how pivotal or dichotomous an issue you think this is: if we can't synthesize consciousness, is our current scientific model wrong?
05-09-2014 , 05:15 PM
Quote:
Originally Posted by masque de Z
I will never be wrong on this, because we are machines. All one needs is to faithfully replicate us, and it's automatic. Failure to do that means we do not yet understand our brain's biology and chemistry deeply enough. So all that separates us from that goal is understanding it better.
One point I think is important to make (you might have mentioned it already in one of your walls of text) is the difference between white-box and black-box AI.

There are two ways to replicate human intelligence:

White box. You can create an in silico copy of the human brain, with software components mirroring brain components, maybe down to simulating individual neurons, with the expectation that the resulting AI not only acts like a human but has feelings analogous to a human's, such as self-awareness. E.g. Joel Shepherd's Cassandra Kresnov.

Black box. Alternatively, you can use a machine-learning algorithm to produce a piece of software that, for a given set of inputs, produces exactly the same set of outputs as a human would. Given the various life-logging tools available, and the rapid development in machine learning, this might not be too far away. E.g. Caprica's Zoe Greystone, where an internet search script could download all the available information on someone, feed it to a machine-learning program, and generate a machine that exactly imitates the person.

Both approaches can in theory produce the same result: an AI that is indistinguishable from a human, or indeed from each other. However, only the first is going to have the same feeling of self-awareness that we do, while the second is likely to prove much easier to achieve.
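To make the black-box route concrete, here is a minimal sketch of imitating a person purely from recorded input/output pairs, with no model at all of how their brain produces the outputs. It is only an illustration under my own assumptions: the toy log, the choice of scikit-learn, and the imitate() helper are all invented for the example.

Code:
# Black-box imitation sketch: learn a mapping from logged situations to the
# person's recorded responses, without modelling the underlying brain at all.
# The "life log" here is a toy; in practice it would be vastly richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

log = [
    ("someone greets you in the morning", "morning! coffee first, talk later"),
    ("a friend asks about the game last night", "we got crushed, don't ask"),
    ("your boss asks for the report", "it's in your inbox, sent it at nine"),
]
situations = [s for s, _ in log]
responses = [r for _, r in log]

# The "model" of the person is nothing but an index over observed behaviour.
vec = TfidfVectorizer()
X = vec.fit_transform(situations)
index = NearestNeighbors(n_neighbors=1).fit(X)

def imitate(situation: str) -> str:
    """Return whatever the logged person said in the most similar situation."""
    _, idx = index.kneighbors(vec.transform([situation]))
    return responses[idx[0][0]]

print(imitate("the boss wants that report"))

Scaled up to a rich enough log and a stronger learner, this is the Zoe Greystone scenario: the imitation can be behaviourally convincing while telling us nothing about whether anything is "home" inside.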

This is one reason why the Turing test is a flawed instrument for detecting self-awareness. Self-awareness cannot be determined using a black-box approach. You need to get the source code and study the software in debug mode, not just look at its output.
05-09-2014 , 07:13 PM
I think the issue could be clarified by looking at self-awareness as a spectrum, part of a wider general-awareness spectrum. The bottom limit of zero awareness can be occupied by rocks and fans of disco, while the upper bound may be undefined, or at least defined by all the information in the universe/multiverse - God territory. So the mirror test helps us zero in on that magical moment where (so far as we know) only living biological creatures begin to become aware not just of what is around them, but aware enough to begin contemplating what/who they are. One might argue that a dog, which cannot recognize itself in a mirror, has some self-awareness, but if so it is certainly small compared to an ape, which can recognize itself and even learn complex communication.

So if an infant, dog, ant, etc. have any self-awareness, it is very small, maybe in a 1-9 range; a baby will then increase to a 10 at around 18 months (the mirror test) and continue growing its level of self-awareness and general awareness from then on, while most other animals never surmount the single digits. Humans have a wide range of self-awareness, imo: some, I imagine, never get much past 20, while others might approach 100 or greater as their general awareness grows and they are able to contemplate themselves and integrate it all into their own model of self.

Beer 30
05-09-2014 , 10:07 PM
Quote:
Originally Posted by masque de Z
I will never be wrong on this, because we are machines. All one needs is to faithfully replicate us, and it's automatic. Failure to do that means we do not yet understand our brain's biology and chemistry deeply enough. So all that separates us from that goal is understanding it better.
Don't see much point in replicating us. We do that quite well currently.

This is problematic for replicating the structure: http://en.wikipedia.org/wiki/Chinese_room
05-09-2014 , 10:18 PM
Quote:
Originally Posted by FoldnDark
I think the issue could be clarified by looking at self-awareness as a spectrum, part of a wider general-awareness spectrum. The bottom limit of zero awareness can be occupied by rocks and fans of disco, while the upper bound may be undefined, or at least defined by all the information in the universe/multiverse - God territory. So the mirror test helps us zero in on that magical moment where (so far as we know) only living biological creatures begin to become aware not just of what is around them, but aware enough to begin contemplating what/who they are. One might argue that a dog, which cannot recognize itself in a mirror, has some self-awareness, but if so it is certainly small compared to an ape, which can recognize itself and even learn complex communication.

So if an infant, dog, ant, etc. have any self-awareness, it is very small, maybe in a 1-9 range; a baby will then increase to a 10 at around 18 months (the mirror test) and continue growing its level of self-awareness and general awareness from then on, while most other animals never surmount the single digits. Humans have a wide range of self-awareness, imo: some, I imagine, never get much past 20, while others might approach 100 or greater as their general awareness grows and they are able to contemplate themselves and integrate it all into their own model of self.

Beer 30
Can you be more aware that you exist than being aware that you exist?

Adolescent humans seem most keenly aware of their own existence. I wouldn't say that amounts to the pinnacle of even human achievement.

Last edited by BrianTheMick2; 05-09-2014 at 10:19 PM. Reason: Beer o'clock was a while ago
05-10-2014 , 01:29 AM
Have you considered the possibility that our thinking, and the appearance of consciousness etc., is part of the phenomena we call nature, as in input from nature? In other words, thoughts come to us as observations of what is going on. They are part of the general input, so to speak, but in a locally interactive manner that feels personal/internal! At some point a computer that has trillions of neurons will start experiencing a surge of information that is not 100% external in origin but is recomposed inside the brain. That process feels entirely personal, dynamical and "will"-like.

Our current computers are still simplistic, early machines, really. They do not have many means of receiving information about their own state as part of external input. What is that input, by the way? Mouse? Keyboard? What if the input were as continuous and dense as it is for humans? Consider the avalanche of chemical reactions our vision alone generates. What if a computer could see, and each time a familiar face appeared, a thought about that person appeared too, because it was triggered by some correlation-driven process? The computer is not only seeing someone in the camera but also receiving the information about who this is, e.g. with a name associated. The fact that this person called X is now walking towards the camera creates the thought "X is moving towards the camera," which in turn triggers all kinds of memories of what it is to be moving, or moving towards something, etc. All of those are recreated simply as a result of the video input. So the real input the computer gets is "X is approaching the camera," which of course also introduces all kinds of possibilities (X touching the camera, X moving the camera, X appearing much larger in the camera, and so on), because all these things have happened before, differently yet closely enough, and they correlate well with moving towards the camera. Now the computer is thinking that possibly X is moving towards the camera to touch it, for example. That possibility is a new input! And suddenly you have recreated the process of thinking about a developing/projected movement.

All I meant by replicating us is replicating the process of what happens inside the brain. No reason to joke about it, always; I dislike this eternal need to be sarcastic, as if talking to morons 24/7. Clearly, if we replicated our own brain we wouldn't stop there but would instantly find ways to enhance it, so the replication is not the only objective (to trivialize it); the understanding and improvement/manipulation of its properties is. That still requires replicating it at some point in a non-biological way, or even a new biological or nanomechanical way, that is unconventional and can be toyed with, producing different desirable outcomes.


What is a computer really doing today? Going over what, 1000 if-then-else statements every second in some program (sometimes even something as stupid as checking whether the mouse moved, and nothing else, in some loop)? What if it were going over 10^10 if-then-else statements per second, each one generating another 10^3 statements? Can we start feeling it is thinking now, and, more importantly, that the computer is also thinking that it is thinking, because a ton of the input now seems to be generated locally and not by the far external surroundings?

Thinking is part of the input. It is observed as well! But it starts as very simple thinking at age 1 day. By the time we are at age 1200 days, for example, the "thoughts" have started to be a big part of what is going on. Years later the brain has programmed itself to receive an astronomically large number of thoughts per day, many of them leaving permanent traces that help create new future thoughts. What is going on when calculating an integral, for example? What is going on is surely drawing on thousands of other close/related processes that took place in prior minutes, hours, days, years... Trillions of individual, partially correlated chemical reactions taking place per second in some 1.5 kg object. That is a lot, and it is fast enough to appear as a thought, isn't it? When you observe 1 photon at a time, little happens. When you observe 10^9 per second, a picture develops! Our consciousness is a large-numbers artifact.
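A toy sketch of the loop described above: camera percepts trigger stored associations, and the generated "thoughts" are pushed back onto the same input queue the camera feeds. The recognizer labels and the association table are invented purely for illustration, not a claim about how a real system would be built.

Code:
# Toy feedback loop: external percepts trigger stored associations, and the
# internally generated "thoughts" re-enter the very same input stream.
# The recognizer output and association table are made up for illustration.
from collections import deque

associations = {
    "face:X": ["X is a known person", "X is moving towards the camera"],
    "X is moving towards the camera": ["X may touch the camera",
                                       "X will appear larger"],
    "X may touch the camera": ["the image may shake"],
}

def think(first_percept, steps=6):
    queue = deque([first_percept])   # camera input and thoughts share one queue
    for _ in range(steps):
        if not queue:
            break
        item = queue.popleft()
        print("processing:", item)
        # locally generated content becomes new input, indistinguishable in kind
        queue.extend(associations.get(item, []))

think("face:X")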

Last edited by masque de Z; 05-10-2014 at 01:48 AM.
05-10-2014 , 03:44 AM
Quote:
Originally Posted by BrianTheMick2
Can you be more aware that you exist than being aware that you exist?
Thinking of the concept of self-awareness in this manner isn't very useful, imo. If we take it to mean only awareness of existence, full stop, I can see why you don't find it very interesting. In that way a computer doing a self-check, recognizing its various software and hardware, might be considered self-aware. And a dog who frequently sniffs its butthole. However, expanding the concept to mean not only recognition of existence, but also an additional ability to contemplate it, find meaning, manipulate it, and so on and so forth, brings it back into the discussion re consciousness and the unique properties of intelligent life.

Last edited by FoldnDark; 05-10-2014 at 04:11 AM. Reason: Coffeetime
05-10-2014 , 10:45 AM
Quote:
Originally Posted by FoldnDark
Thinking of the concept of self-awareness in this manner isn't very useful, imo. If we take it to mean only awareness of existence, full stop, I can see why you don't find it very interesting. In that way a computer doing a self-check, recognizing its various software and hardware, might be considered self-aware. And a dog who frequently sniffs its butthole. However, expanding the concept to mean not only recognition of existence, but also an additional ability to contemplate it, find meaning, manipulate it, and so on and so forth, brings it back into the discussion re consciousness and the unique properties of intelligent life.
Who said I don't find it interesting?!? Consciousness/awareness is probably the most important mystery in the universe and it makes a huge difference in how I feel about treating a machine. Is the machine a philosophical zombie (http://en.wikipedia.org/wiki/Philosophical_zombie) or not? Is my being able to have what appears to be a sensible discussion about Kierkegaard with a machine sufficient for me to be considered a jerk if I threaten to unplug it?

The dog whose most clever act is sniffing its butt most definitely has consciousness/awareness, fwiw. Watson, analyzing 500 gigabytes per second, does not.
05-10-2014 , 01:56 PM
Quote:
Originally Posted by BrianTheMick2
Who said I don't find it interesting?!? Consciousness/awareness is probably the most important mystery in the universe and it makes a huge difference in how I feel about treating a machine. Is the machine a philosophical zombie (http://en.wikipedia.org/wiki/Philosophical_zombie) or not? Is my being able to have what appears to be a sensible discussion about Kierkegaard with a machine sufficient for me to be considered a jerk if I threaten to unplug it?

The dog whose most clever act is sniffing its butt most definitely has consciousness/awareness, fwiw. Watson, analyzing 500 gigabytes per second, does not.
You said you don't find the mirror test interesting, and imo that is because of your narrow view that self-awareness is either on/off. You're not thinking of it moving along a spectrum of varying degrees, where the mirror test can be an indication of a point along that scale. In your model it wouldn't make any sense to think Einstein was more self-aware than a dog. But I think he was, and probably became so in the first 14-18 months of his life, perhaps earlier (which might be an indication of his genius). I think much of the reason Einstein was generally so much more interesting than a dog can be credited to his higher levels of self-awareness and general awareness, his ability to think deeply about the universe and his place in it. I also think that's key in our decision to grant him and others like him (adult humans) more rights than a dog... for example, the right to property, liberty, to not be rounded up and put to death by animal control.

That I tend to feel bad about torturing animals is a shout-out to their general awareness, ability to feel pain, consciousness, perhaps even a low level of self-awareness, but beyond that we grant most animals very few rights. They simply aren't self-aware enough to be held in that high regard. If tomorrow Fido recognises himself in the mirror, then goes on to develop higher communication skills and we shortly begin discussing the evening news across the dinner table, I'm going to believe he is much more self-aware than before, and I'll probably lobby for his citizenship. While that is very unlikely, the same future scenario with an AI could be plausible.
05-10-2014 , 02:50 PM
Quote:
Originally Posted by FoldnDark
You said you don't find the mirror test interesting, and imo that is because of your narrow view that self-awareness is either on/off. You're not thinking of it moving along a spectrum of varying degrees, where the mirror test can be an indication of a point along that scale. In your model it wouldn't make any sense to think Einstein was more self-aware than a dog. But I think he was, and probably became so in the first 14-18 months of his life, perhaps earlier (which might be an indication of his genius). I think much of the reason Einstein was generally so much more interesting than a dog can be credited to his higher levels of self-awareness and general awareness, his ability to think deeply about the universe and his place in it. I also think that's key in our decision to grant him and others like him (adult humans) more rights than a dog... for example, the right to property, liberty, to not be rounded up and put to death by animal control.

That I tend to feel bad about torturing animals is a shout-out to their general awareness, ability to feel pain, consciousness, perhaps even a low level of self-awareness, but beyond that we grant most animals very few rights. They simply aren't self-aware enough to be held in that high regard. If tomorrow Fido recognises himself in the mirror, then goes on to develop higher communication skills and we shortly begin discussing the evening news across the dinner table, I'm going to believe he is much more self-aware than before, and I'll probably lobby for his citizenship. While that is very unlikely, the same future scenario with an AI could be plausible.
Self-awareness isn't awareness simpliciter. With infants, for example, the concept of being a 'self' separate from 'others' hasn't yet developed, but I doubt anyone thinks babies aren't aware in the sense of having subjective experience. In other words, there's an experiential "what it's like" to be a baby, even though that experience doesn't contain a sense of self. But it's that "what it's like," or the capacity of awareness, or what I think of as mere "awareness of being," that we're after, not a sense of self or what one is "aware of." In other words, the sense of self is an indicator of awareness: one who displays such a sense we deem to have the capacity for awareness. But, conceivably, a p-zombie could fake a sense of self, leading us to believe there's a "what it's like" going on when there's not. Now, I have no idea how far down the animal kingdom that capacity extends, but I do think awareness simpliciter is binary, i.e., critters either inherently have it or they don't. On the other hand, what one is aware of, or the sense of self, I do think of as developing, or I suppose as a spectrum. But again, that sense of self is just an indicator of awareness, so as with a p-zombie, we can be fooled into thinking there is awareness when there's not, and potentially some critters that don't exhibit a sense of self may in fact be aware.
05-10-2014 , 05:24 PM
Makes sense. I'm not sure that if or when AI develops general awareness we'll know it any more than we know when certain life forms do. Is a bacterium aware? An ant? A squirrel? At some point we can envision the "what it's like" factor coming into play. Exactly when that happens for AI may not be apparent, but it's also not as important to me as when it reaches the level of self-awareness.

I'm not going to feel any worse about running my AI computer for 36 hours straight without shutting it down than I would about driving my car across the country or working a mule all day. I'll continue to think of these things as tools for my use until they start showing signs they are more than that, and at that point self-awareness will be an important metric.
05-10-2014 , 06:19 PM
Quote:
Originally Posted by FoldnDark
You said you don't find the mirror test interesting, and imo that is because of your narrow view that self-awareness is either on/off.
I said that I don't find the mirror test to be telling. It is extremely easy to imagine a highly self-aware thing that would fail the test, and extremely easy to imagine a completely non-aware thing that would pass it.

Quote:
You're not thinking of it moving along a spectrum of varying degrees, where the mirror test can be an indication of a point along that scale.
I'm saying that the test measures the wrong thing. "Oh look, isn't that so cute that FnD can recognize himself in a mirror!"

Quote:
In your model it wouldn't make any sense to think Einstein was more self-aware than a dog. But I think he was, and probably became so in the first 14-18 months of his life, perhaps earlier (which might be an indication of his genius). I think much of the reason Einstein was generally so much more interesting than a dog can be credited to his higher levels of self-awareness and general awareness,
Those aren't the things that make him more interesting than a dog. Those aren't even particularly special traits. I'm certainly not going to invite something/someone over to discuss important matters because they can recognize their own reflection in a mirror.

Also, normal 12-month-olds recognize themselves in a mirror. You'd hardly say that the average 12-month-old is someone you'd want a deep discussion with.

Quote:
his ability to think deeply about the universe and his place in it.
That is the bit that makes him more interesting than a dog. It is this sort of thing where you get the big differences.

Not awareness, but the higher cognitive functions. You are thinking of such things as introspection, problem solving and the like.

I think it is interesting in a way, because I am sure that you have experienced being completely lost in thought, which is a rather un-self-aware state of mind. I would guess that someone like Einstein was like that a lot. If you have seen pictures of him, he clearly was not passing the waking-up-in-the-morning version of the mirror test on a daily basis!

Quote:
I also think that's key in our decision to grant him and others like him (adult humans) more rights than a dog... for example, the right to property, liberty, to not be rounded up and put to death by animal control.
It is definitely among our stated reasons as members of current western culture for doing so. If it were true though, we would be ranking how we treat animals based on such things and would definitely be treating some animals much better than we treat some humans.
05-10-2014 , 09:10 PM
So, do apes, on their own in their natural state, stare at their reflection in a still pond or small pool of water and groom their eyebrows and pick their teeth? Do they bring their mates (in both senses of the word) over for a group reflection party? And if they did, what would it mean?
05-11-2014 , 01:23 AM
People are amazed at human awareness and consciousness and the inner self/ego and how glorious it is, etc. Of course it is amazing, but it is nothing other than a development down the road from what other animals have. It is all pure chemistry that has taken a further step, through gradually advancing complexity (a result of new emerging functions yet to be determined, plus those already known), to create an amazing opening to the world of thinking/planning, a sense of self, culture, science, etc.


It is very easy to fall into the egocentric trap that we are special. We are not special. Or everything is special, which might be more appropriate, since it all proves nontrivial and interesting in so many ways. We are the next step in the development of the brain in animals. Which means that understanding its functions will allow us to improve/modify them. This is why I can never be wrong (in a realistically conservative sense, even) about advanced AI becoming reality somehow soon. It is as if one requires basic-level physics/chemistry etc. to be wrong. Yes, it could be wrong, but nowhere near as easily wrong as today's deep, high-level research and theories could be (i.e. the "earth is a sphere" type of wrong - can that prove wrong in a way that cancels all the basic things we know about the planet's curvature, beyond tiny correction levels?). This is all likely basic science that requires QM and then chemistry/biology and mathematics to put it all together. It likely doesn't require the type of new physics that is up for debate and therefore risky.

What we have here (in the amazement at the human brain) is an inability by most people to grasp how far complexity can take a system of particles in general. Electrons and protons/neutrons have no chance in hell of uncovering their own quantum mechanical nature. So it would seem. But complexity eventually makes it possible for a big system of them to do exactly that. How big? Well, maybe ~10^36-10^40 particles big (all life that ever lived plus its resources/biosphere, or close to it anyway; or try a galaxy, which appears to be what you need to arrive at life/science, and put it up at the level of 10^64 particles - so maybe anywhere from 10^40 to 10^64 particles can get there for our kind of universe). This is indeed very amazing. But it is precisely why I have been talking all along about complexity (and one of its results, i.e. the acquisition of knowledge/understanding/wisdom) being so special in our universe. Because it makes unreal things possible through the introduction of new "degrees of freedom" that were not available earlier to ordinary matter. It is like a probability ladder that allows you to rise from one position to another through tools created after millions of trial-and-error processes at the level before. The elevation to a new world of possibilities takes time, but once the ladder is finished you get there much more easily, and now a new ladder begins, taking you to another amazing place that would have been impossible to imagine reaching, say, 2-3 ladders earlier.

As a result, over billions of years you have matter from Earth visiting Mars, for example, in the way we just did with probes/spaceships. Ordinary protons from Earth, left as they were, had a 1/(10^100)^whatever chance of doing that spontaneously. And yet, with the assistance (ladder) of chemistry it becomes possible to have macromolecules, then cells, then big organisms (another ladder), then animals with specialized organs and senses, then higher animals with interesting brains, then humans, then technology, then a Mars landing (a potent sequence of ladders). Each step in the process of rising complexity opens the door to otherwise unthinkable probabilities of attaining exotic configurations of matter. This is the glory of the game being played. It facilitates the realization of impossibly hard outcomes through a system of probability ladders that, once finished, allow reaching higher levels of order with more intriguing properties, which in turn open new degrees of freedom for the system to do more things, such as creating new ladders to reach impossibly tougher targets - a persistent rise, step by step. The human brain is one such ladder, the precious outcome of hundreds of millions of years of development after it first appeared in earlier animals. All we have to do is trace its origin and gradually understand its functions, and we will decode its secrets and what we see as amazingly complex processes, i.e. human thoughts.

This is why I have introduced the concept of dynamical probability (and the ladders) to better visualize this (a ladder, once developed, makes it possible to rise out of, say, a hole you cannot jump out of on your own and reach a new level where more "toys" are available to play with...). By that I mean the kind of probability that you cannot calculate exactly at the start, naively, using the basic science available to you. I.e. questions like: what is the probability (asked 4.5 billion years ago, say, in our solar system) that in 4-5 billion years' time one ton of material from Earth will go to Mars in a compact manner and then transmit back information about its ground? If you rephrased that into something more mathematical and less human-culture-friendly - say, what is the chance that Mars will transmit back to Earth, within the next 10 billion years, detailed information about all the rock formations on its surface - the answer is impossibly hard to develop/quantify. Mathematically it feels unreasonably arbitrary and hard, and you are left with an astronomically tiny probability for such a configuration of matter and energy to arise in the future on its own. And yet it is eventually possible, through the synthesis of higher complexity. Furthermore, you cannot calculate it unless you run the system itself forward to see all the possible ladders/routes that can develop towards that amazing objective. Chaos and apparent randomness make sure such a calculation is generally unreasonable to perform, although maybe still possible, if one has all known science available (if such a thing exists), to at least estimate it to within orders of magnitude. Instead it is revealed to us by actually observing the universe do it. Hence the term dynamical: the probability changes with time as new, perhaps unforeseen, ladders are developed to reach previously unimaginably hard-to-reach areas, with new, unexpectedly powerful emerging tools that take you closer to the initial target in ways never imagined at the start (heavily protected by chaos and even natural unpredictability).
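A back-of-the-envelope illustration of why the ladders change the numbers so drastically. Every figure here is invented for the sake of the sketch; the only point is the contrast between a one-shot jump and a chain of stages, each of which only becomes easy once the previous one exists.

Code:
# Toy illustration of "dynamical probability": an outcome that is essentially
# impossible as one spontaneous event becomes reachable once each completed
# stage (a "ladder") makes the next stage far easier. Numbers are invented.
import math

attempts = 1e9   # independent trials available per stage

def p_success(p_per_trial, n_trials):
    """P(at least one success in n trials) = 1 - (1 - p)^n, computed stably."""
    return -math.expm1(n_trials * math.log1p(-p_per_trial))

# One-shot route: the whole configuration must arise spontaneously.
print("direct route:  ", p_success(1e-40, attempts))   # ~1e-31, i.e. never

# Laddered route: ten successive stages (chemistry, cells, organs, brains, ...),
# each individually modest once the previous stage is already in place.
p_stage, stages = 1e-6, 10
print("laddered route:", p_success(p_stage, attempts) ** stages)   # ~1.0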

One can only hope, it seems, that such a calculation could still potentially be described by some stochastic form of mathematics, if most or all science were available to you from the start, to attempt to imagine/quantify some of the unavoidable paths complexity will take the system down simply because of the law of large numbers. I.e. we might be able to answer the question (what is the chance that within 10 billion years it will develop intelligent life and civilizations and do all those Mars/Earth type interactions?) for a galaxy-sized system, if we have available all possible science relating to, e.g., star-system development, solar-system creation, planet formation and development, abiogenesis, all possible routes life can take in any environment (super tough), and more or less anything else you can imagine that might be involved, all known in advance, for such an estimate to be credible. Science, of course, is itself one of the results of that very rise in complexity whose potential you now seek to study, and for this reason who knows what else may come out of that rise - things presently unimaginable, more impressive than even current science. Hence, again, the notion of dynamical and possibly unquantifiable. It is clear now why such calculations are likely always hard to perform. If you had asked the same question 100 years ago, the answer would have been much closer to 100% (space exploration would have seemed inevitable then) than it was 5 billion years ago, when it might have stood at some 10^-1000... type level. But obviously that 10^-1000... type level was a naively wrong estimate, because it ignored how the development of life and higher complexity could affect the physical processes that help materialize the target project (it might instead be some 10^-22 thing, e.g. life is not so impossibly rare if the whole universe is considered). Knowing better science may help perform such calculations more reliably, but some of them will likely remain impossible to perform without actually running the system to see what unpredictable complexity develops that makes initially seemingly impossible objectives/processes realizable for a large system of particles. You would need available to you at least all the science that can ever exist, and it may still not prove enough.

Is it becoming clear, by the way, from all this why I view the rise of complexity as a spectacular enough thing to qualify comfortably as a purpose of the universe, if I'm forced to offer one to even the worst nihilists out there? Quite possibly, exploring where complexity can take you with the given starting laws of physics is a massively hard and important problem that requires a full run to uncover the very hard to foresee rise of complexity! I'm not saying this is what is going on, but you have to admire the strength of the process in developing previously unreachable outcomes through the establishment of further, more intriguing complexity.


We must for this reason not fear the magnificence of the human brain as something out of this world. It is attainable precisely with basic science in this world, and for this reason we will not fail to recreate it and modify or improve it (in some regard), producing complexity with even further possibilities and opening new probability ladders from our current level.

Now let us sit here and fight each other and create the ridiculous social world we have, and fail to notice the vastly more important game going on out there!

Last edited by masque de Z; 05-11-2014 at 01:34 AM.
05-11-2014 , 01:28 AM
Quote:
Originally Posted by Zeno
So, do apes, on their own in their natural state, stare at their reflection in a still pond or small pool of water and groom their eyebrows and pick their teeth? Do they bring their mates (in both senses of the word) over for a group reflection party? And if they did, what would it mean?
Gorillas don't. Teenage girls are quite fond of mirrors.
05-11-2014 , 01:10 PM
Brian, I question your level of self-awareness, you dense bastard. Good stuff, Masque.
05-11-2014 , 01:31 PM
Quote:
Originally Posted by FoldnDark
Brian, I question your level of self-awareness, you dense bastard. Good stuff, Masque.


One of us was a research assistant for this guy http://psych.la.psu.edu/directory/fa...s/carlson.html back in the day.

I won't mention who.
05-11-2014 , 02:01 PM
Well I figured it was intentional, just wasn't sure if you were aware of it
05-11-2014 , 02:10 PM
Quote:
Originally Posted by FoldnDark
Makes sense. I'm not sure that if or when AI develops general awareness we'll know it any more than we know when certain life forms do. Is a bacterium aware? An ant? A squirrel? At some point we can envision the "what it's like" factor coming into play.
To get beyond a "quacks like a duck, ergo..." claim we'll need to know 'how' it happens, not simply that it 'does' when certain conditions are met. For example, it's one thing to say that if we heat the air in a balloon, the balloon will rise, and quite another to say that due to the laws of gases and what they entail, the heating of the air will cause the balloon to rise. So we'd need to go beyond an inorganic replication of a brain to get beyond a "quacks like a duck" claim, because even if we could replicate a brain and it gave all the objective cues of possessing awareness, unless we know 'how' it is that awareness obtains from electro-chemical interactions within the brain, we can't say (conclusively) that mind emerges from matter. Look at it from the (replicated) conscious computer's p.o.v.: even though it knows we made it and what we made it from, it could still appeal to dualism or get hung up on the hard problem of consciousness, just because it doesn't know how awareness happens, merely that it does manifest when a certain level of material complexity is reached. At any rate, if we get that 'how', then we'll be in a better position to know whether there is in fact a "what it's like" to be a bacterium, ant, squirrel, etc.
05-11-2014 , 04:02 PM
Quote:
Originally Posted by FoldnDark
Well I figured it was intentional, just wasn't sure if you were aware of it
I was consciously aware that I was just hinting at the problems. I was even aware that I have been feeling far too lazy to teach an entire class on sensation, perception, cognition and awareness, and the neurological correlates of each.

The mirror test most likely relates to mirror neurons (neurons that fire both when you do something and when you observe something else do that same thing) that are activated by visual cues. Neuron X fires at frequency Y when you lift a cup of tea to your mouth, it fires at frequency Z when you observe someone else lift a cup of tea to their mouth, and it fires at frequency Y+cZ when you are both lifting and observing the cup-to-mouth action.

It wouldn't take a particularly large set of mirror neurons for a system designer to have an output that would show up as "that is me" or at least "hey, that thing in the mirror is aping me." Add in a reflective object recognition system and you can differentiate between the two.
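To make that concrete, here is a crude sketch of how such an output could be wired up from correlations. The signals and the threshold are fabricated for illustration: compare the system's own motor commands with the motion it observes; near-perfect correlation reads as "that is me," loose correlation as "that thing is aping me."

Code:
# Correlation-based self-recognition sketch: the "mirror" signal tracks my own
# motor commands almost perfectly, a copycat only loosely. Fabricated signals.
import random
import statistics

def correlation(a, b):
    """Pearson correlation of two equal-length signals."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (statistics.pstdev(a) * statistics.pstdev(b) * len(a))

motor = [random.random() for _ in range(200)]            # what I am doing
mirror = [m + random.gauss(0, 0.01) for m in motor]      # what the mirror shows
copycat = [m + random.gauss(0, 0.5) for m in motor]      # someone aping me

for label, seen in [("mirror", mirror), ("copycat", copycat)]:
    r = correlation(motor, seen)
    verdict = "that is me" if r > 0.95 else "that thing is aping me"
    print(f"{label}: r = {r:.2f} -> {verdict}")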

As it applies to humans, we are face-watchers. Lots and lots of mirror neurons are involved in face watching. It is why you can detect the mood and intentions of others, without any effort, more instantaneously than you can ape them! When the correlations equal 1, it is one and the same thing. The reason we are particularly good at the mirror test is that we are social creatures with excellent vision.
05-11-2014 , 04:13 PM
Quote:
Originally Posted by masque de Z
People are amazed at human awareness and consciousness and the inner self/ego and how glorious it is, etc.
Why would we even want to create machines that have the weaknesses that we have?