Turning to Turing

06-20-2008 , 11:16 PM
Quote:
Originally Posted by chezlaw
Not magically but yes that's the idea - just like the brain did.
How do you propose we get started on creating the conditions for this to happen? What would the hardware and software look like? What limitations have to be overcome?
06-20-2008 , 11:27 PM
Quote:
Originally Posted by Phil153
How do you propose we get started on creating the conditions for this to happen? What would the hardware and software look like? What limitations have to be overcome?
We need enough computing power to:

a) simulate a large population of computation devices, each with something similar to the brain's power of computation.

b) simulate (or provide exposure to) a suitably complex environment

c) provide a means to communicate/interact with the environment and each other.

d) provide a fitness function that pushes them in the direction we want them to go in (some sort of hunter-gatherer model, I'd guess).

In practice we 'cheat' and wherever possible build up expertise in visual/speech recognition/etc systems and then give the above a head start in where we want them to go.

[This is a bit of an over-simplification, but not in the direction of having to understand how stuff like protein folding or algorithms work. A rough sketch of the a)-d) loop follows below.]
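Roughly, the a)-d) loop amounts to something like the toy Python sketch below. Everything in it is a made-up stand-in - the genome vectors, the "environment" vector and the dot-product fitness score - so it only shows the shape of the loop, not a claim about how a real system would be built.

Code:
import random

POP_SIZE = 100        # (a) population of computation devices
GENERATIONS = 1000
MUTATION_RATE = 0.01
GENOME_LEN = 64

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome, environment):
    # (d) made-up "hunter-gatherer" score: how well the genome's responses
    # line up with what the environment (b) rewards when they interact (c)
    return sum(g * e for g, e in zip(genome, environment))

def mutate(genome):
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

environment = random_genome()                  # (b) stand-in for a complex world
population = [random_genome() for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    ranked = sorted(population, key=lambda g: fitness(g, environment), reverse=True)
    parents = ranked[:POP_SIZE // 2]           # selection pressure from the fitness function
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g, environment) for g in population))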
06-21-2008 , 12:08 AM
Quote:
Originally Posted by chezlaw
We need enough computing power to:
OK, this is good stuff. Trouble is:

a) simulate a large population of computation devices, each with something similar to the brain's power of computation.
You can't simulate a population of computation devices of non-trivial complexity - they have to physically exist, for the same reason that linear algorithms can't simulate non-trivial parallel algorithms. Complexity increases too fast.

b) simulate (or provide exposure to) a suitably complex environment
ok.

c) provide a means to communicate/interact with the environment and each other.
But what does communicate/interact mean? Human brains - even the most basic brains - arose out of billions of parallel trials in billions of generations of billions of creatures that had nerve cells interacting with motor cells, which in turn interacted with a huge variety of environmental stimuli. The architecture itself was designed by this process. Creating an analog is a pretty big challenge.

d) provide a fitness function that pushes them in the direction we want them to go in (some sort of hunter-gatherer model, I'd guess).
This is problematic. We're not talking one variable or ten variables or even hundreds - we're talking a huge number of variables, some conflicting, some complementary, some groups blocking expression of other groups, some which require others first. I agree that once you have a monkey brain, you can probably do this, but how do you get to a monkey brain?

Quote:
In practice we 'cheat' and wherever possible build up expertise in visual/speech recognition/etc systems and then give the above a head start in where we want them to go.
But these are all specific, structured algorithms that solve very narrow problems. Even the best expertise in recognition systems won't scale to intelligence, because it's a fundamentally different problem.
06-21-2008 , 12:12 AM
Quote:
You can't simulate a population of computation devices of non-trivial complexity - they have to physically exist, for the same reason that linear algorithms can't simulate non-trivial parallel algorithms. Complexity increases too fast.
??? Unless you're claiming the brain is more than a set of neurons and connections, of course it can be simulated.
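To spell that out: a serial program can reproduce a parallel update exactly, just more slowly, by computing every unit's next state from a snapshot of the current states. A toy sketch (random weights, tanh units, all invented numbers):

Code:
# Serial simulation of N "neurons" that conceptually all update in parallel:
# every unit's next state is computed from the *old* state (double buffer),
# exactly as if they had all fired at once. One parallel step costs N serial
# updates - a linear slowdown, not an impossibility.
import math, random

N = 200
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]  # toy connectivity
state = [random.uniform(-1, 1) for _ in range(N)]

def parallel_step(state):
    new_state = [0.0] * N
    for i in range(N):                                  # updated one by one here...
        total = sum(w * s for w, s in zip(weights[i], state))
        new_state[i] = math.tanh(total)                 # ...but each reads the same old state
    return new_state

for _ in range(10):
    state = parallel_step(state)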

Quote:
but what does communicate/interact mean? Human brains - even the most basic brains - arose out of a billions of parallel trials in billions of generations of billions of creatures that had nerve cells interacting with motor cells which in turn interacted with a huge variety of environmental stimuli. The architecture itself was designed by this process. Creating an analog is a pretty big challenge.
It's trivial; we already produce computing devices that communicate/interact with the environment.

Quote:
This is problematic. We're not talking one variable or ten variable or even hundreds - we're talking a huge number of variables, some conflicting, some complementary, some groups blocking expression of other groups, some which require others first. I agree that once you have a monkey brain, you can probably do this, but how do you get to a monkey brain?
Start with a cat's brain.

Quote:
But these are all specific structured algorithms to solve very narrow problems . Even the best expertise in recognitions systems won't scale to intelligence because it's a fundamentally different problem.
Yes, this is just a shortcut to guide the brain to do the sort of stuff we do.
06-21-2008 , 03:02 AM
Quote:
Originally Posted by chezlaw
Start with a cat's brain.
We can't even get a slug brain yet. Not even close.
06-21-2008 , 03:22 AM
Quote:
Originally Posted by Phil153
hahahahahahahaha

The experts in the field (the Japanese) believe that we'll have a better-than-human soccer-playing robot by the late 30s. They don't believe that we'll have anywhere near something capable of being close to a human domestic servant, let alone a functioning being of human intelligence. Why do you think that is?


The extremely parallel and integrative nature of the processing required for intelligence. You can't code that crap, it's orders of magnitude too complex. And you can't just stick a blob of learning goo (assuming we can discover how to make such a thing easily, which is very very unlikely) in a pile and feed it instructions or learning experiences...the human brain has a number of very specific ways in which it generates language understanding, context sensitivity, awareness, and so on. It's an *unbelievably* intricate system with hardware encoded learning capabilities that we can't even begin to understand. Psychologists have a history of misunderstanding the complexity of the brain, which seems bizarre since they're the ones actually studying it...
You seem to be assuming this modelling of neurons is done to create the intelligence (or model the behavior) of a human being. It really isn't - that would be an incredibly ineffective path to take.

Such projects are usually undertaken by those who desire to learn more about our neural systems, who want to fit the incredible array of data in one place, and who want to be able to conduct 'ethical' experiments on it.

As for face-in-crowd recognition, it is many years since that was resolved theoretically and the first systems put in place - as always, they were not so good in the beginning. Now you're seeing the first approaches that outperform humans. If you want to debate this issue, you might want to keep yourself updated - it moves fast.

Last edited by tame_deuces; 06-21-2008 at 03:40 AM.
06-21-2008 , 03:36 AM
Quote:
Originally Posted by madnak
We can't even get a slug brain yet. Not even close.
Researchers at Lausanne reached 10,000 neurons a few years back, which would be enough for the entire neural network of an Aplysia californica - which is indeed a slug. A meager start, but a start, and a big victory too.

As I understand it, their next goal is to model a rat's brain within the foreseeable future. They proved the skeptics more than wrong on the last go, so we'll see how they fare now.

But this isn't really a question of intelligence. The most incredible of human brains couldn't even model a single neuron with 'conscious' thought in real time - we're still calling them pretty intelligent. And the ability to do so is certainly not a requirement to have intelligence.

The complexity argument 'of simulation' has never really been an argument against artificial intelligence. It has only been misunderstood to be so. You and I are not able, within our brains, to purposefully and accurately run complete bottom-up simulations of a small computer running the slightest of AI programs. I still think we're pretty smart.

A truly smart computer will think in bigger patterns, like we do.

Last edited by tame_deuces; 06-21-2008 at 04:00 AM.
06-21-2008 , 04:55 AM
Quote:
Originally Posted by tame_deuces
You seem to be assuming this modelling of neurons is done to create the intelligence (or model the behavior) of a human being. It really isn't - that would be an incredibly ineffective path to take.

Such projects are usually undertaken by those who desire to learn more about our neural systems, who want to fit the incredible array of data in one place, and who want to be able to conduct 'ethical' experiments on it.
I have no idea what this has to do with anything I said.

Quote:
As for face-in-crowd recognition, it is many years since that was resolved theoretically and the first systems put in place - as always, they were not so good in the beginning. Now you're seeing the first approaches that outperform humans. If you want to debate this issue, you might want to keep yourself updated
Nice dig, but no yam. I'm well aware of the current state of biometrics. What you fail to understand is that facial recognition is computer science 101. There aren't any advances here - it's the simple mapping of unique traits to a database, from inputs of varying reliability. Basically, it comes down to extracting a set of measurements that can uniquely pinpoint an individual, and mapping those to a database. It's biometrics 101. The difficult part is dealing with images of variable quality and angles.

Think about how such an amazingly simple problem took so long to solve (and still isn't solved), and how it required a large chunk of hard-coding. Why didn't they run an evolutionary algorithm to find the best face matching algorithm? The answer is because they can't, even for something as unbelievably straightforward as correctly matching finite known inputs with finite known database elements. That's a big plus for my position, and a big negative for yours.
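The matching step, stripped of the messy image handling, is something like this (a toy sketch - the measurements, threshold and distance metric are all made up, and the genuinely hard part is waved away inside extract_features):

Code:
import math

def extract_features(face):
    # Stand-in for the genuinely messy part (pose, lighting, image quality);
    # here a "face" is just a dict of measurements already taken.
    return [face["eye_spacing"], face["nose_to_chin"], face["jaw_width"]]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(face, database, threshold=0.5):
    """database maps name -> enrolled feature vector; return the closest match or None."""
    probe = extract_features(face)
    name, vector = min(database.items(), key=lambda item: distance(probe, item[1]))
    return name if distance(probe, vector) <= threshold else None

database = {
    "Alice": [6.2, 11.0, 13.5],   # made-up enrolled measurements
    "Bob":   [5.8, 12.1, 14.2],
}
probe = {"eye_spacing": 6.1, "nose_to_chin": 11.1, "jaw_width": 13.4}
print(identify(probe, database))  # -> Alice (distance is about 0.17, under the threshold)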

Quote:
- it moves fast.
Like a rocketship:
Quote:
* 1965, H. A. Simon: "Machines will be capable, within twenty years, of doing any work a man can do"[35]
* 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[36]
Don't worry, you're in good company.
06-21-2008 , 04:56 AM
Quote:
Originally Posted by tame_deuces
The complexity argument 'of simulation' has never really been an argument against artificial intelligence. It has only been misunderstood to be so. You and I are not able, within our brains, to purposefully and accurately run complete bottom-up simulations of a small computer running the slightest of AI programs. I still think we're pretty smart.
I think you've misunderstood those comments.
06-21-2008 , 04:59 AM
No, I didn't misunderstand those comments. But I wanted to address them, because they really have little to do with what we are debating.

Apart from that, they are currently developing software that can recognize people from written descriptions, so it moves steadily forward. Is it intelligence? No - I don't think so. But what intelligence IS is actually being challenged by these fields as they move along.

As for evolutionary computation and swarm intelligence applications, and why they are not used - this is because they are in their meager beginnings. This approach requires a lot of crunching power, though the results can be run in far simpler systems. But I think starting to think in terms of emergence instead of hardcoding is a very exciting approach.

Also, over the last few years you've had enormous progress in building mathematical languages which can bridge the vast separation between the fields involved in this issue. It is looking quite sweet.

Maybe my timelines are off, and touched by 'hopeful' estimations - that can very well be. But my method is not.
06-21-2008 , 05:08 AM
So is it your position that self aware intelligence can be realized in current computer architecture, assuming much higher processing speeds?
06-21-2008 , 05:24 AM
Quote:
Originally Posted by Phil153
So is it your position that self aware intelligence can be realized in current computer architecture, assuming much higher processing speeds?
Self-awareness is hazy, I think, confounded by the huge variety of people who are using it, so I really don't know; the concept isn't solid enough - as it stands, it is more a philosophical concept than a hard one.
06-21-2008 , 05:59 AM
Quote:
Originally Posted by madnak
We can't even get a slug brain yet. Not even close.
It was a reductio ad absurdum response to Phil's objection.

But if we can't get close (and I'm not sure we can't), we will be able to once we have computers with the power of the human brain.
06-21-2008 , 06:06 AM
Quote:
Originally Posted by Phil153
So is it your position that self aware intelligence can be realized in current computer architecture, assuming much higher processing speeds?
is that addressed to me?

Yes, in principle, if the only factor is computation. A Turing machine could do it if the ticker-tape moved fast enough. (Not in practice, because current architecture couldn't possibly provide the processing density.)

But I don't know if computation is sufficient for self-aware intelligence.* If I was betting I'd plump for it not being enough, but for the purposes of discussion I'm assuming it is.

* I don't like the term self-aware because self-awareness is trivial imo; the real problem is experiencing stuff (qualia), which I assume is your meaning.
06-21-2008 , 09:25 AM
Quote:
Originally Posted by Phil153
Think about how such an amazingly simple problem took so long to solve (and still isn't solved), and how it required a large chunk of hard-coding.
It is not an amazingly simple problem, it just seems amazingly simple by human standards. We are able to recognize the face of a friend amongst a crowd of people in less than a second. Teaching a machine intelligence to recognize faces is like teaching a blind person to recognize faces by listening to sonar waves. These are cognitive processes that are ingrained in our senses and mind.
Quote:
Why didn't they run an evolutionary algorithm to find the best face matching algorithm? The answer is because they can't, even for something as unbelievably straightforward as correctly matching finite known inputs with finite known database elements. That's a big plus for my position, and a big negative for yours.
Playing and strategizing a chess match is many degrees more straightforward than visual facial recognition. If we were to build it according to your view (input - conceptualize - match with database - output), it would not act as we do in observing and recognizing faces. It is like speaking Chinese by quickly looking everything up in a dictionary. We would still call it intelligent. Would you?

Neural networks don't really work with genetic algorithms, but with optimisation of punishment rules and a teacher who rewards valid output. We are able to put a pumpkin, a bag of Oreos and a face in front of a computer, and have it recognize a face (classify an object as having the properties of a face). This is already a fair accomplishment. That a computer has trouble filtering input down to what is meaningful for humans is unavoidable. That we can filter out a human voice in background noise says more about our discriminatory skills/bias than about the (non-)ability of computers to do so.
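As a toy illustration of the "teacher who rewards valid output" idea - not a claim about how real recognizers are trained - here is a bare-bones perceptron whose weights are only nudged when the teacher's label says the output was wrong (the samples and labels are invented):

Code:
def perceptron_train(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature vectors; labels: 0/1 answers from the 'teacher'."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            output = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - output                     # teacher's correction signal
            if error:                                   # wrong output gets "punished"...
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error                         # ...correct output is left alone
    return w, b

# invented toy data: "face-like" vs "not face-like" feature vectors
samples = [[1, 1], [1, 0], [0, 1], [0, 0]]
labels  = [1, 1, 0, 0]
w, b = perceptron_train(samples, labels)
print(w, b)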
I'll bet for every branch of science there are predictions from the 50s and 60s that, when looked back upon, will look silly, but those quotes certainly are not silly. I'd even argue they have already come to fruition.

Quote:
* 1965, H. A. Simon: "Machines will be capable, within twenty years, of doing any work a man can do"
In 1985, machines were capable of doing any work a man can do. (If you don't view being a scientist as work, and within reasonable bounds: because porn stars are workers too.)

Quote:
* 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
The problem is mostly solved, by any definition of "artificial" and "intelligent" within the field of cognitive artificial intelligence.

One must remember this is not just building robots that act humanlike. This is a valid and booming science. I've had courses on the subject of intelligence alone, and from many viewpoints (behavioral, computational, philosophical).

It is strange I suddenly need to defend self-awareness as a measure for artificial intelligence. Your arguments against AI sound like me telling Max Raker that superstring theory is elusive and laughable because the universe seems so complex to me.
06-21-2008 , 09:52 AM
Quote:
Originally Posted by 46:1
It is not an amazingly simple problem, it just seems amazingly simple by human standards.
I'm not comparing it to human standards, I'm comparing to other kinds of problems in AI.

Quote:
Playing and strategizing a chess match is many degrees more straightforward than visual facial recognition.
Chess algorithms are very easy, I agree. That was a low hanging fruit waiting on processing power, like speech processing which remarkably is still not up to scratch.
Quote:
If we were to build it according to your view (input - conceptualize - match with database - output), it would not act as we do in observing and recognizing faces. It is like speaking Chinese by quickly looking everything up in a dictionary. We would still call it intelligent. Would you?
Not sure what you're asking. I'm not suggesting that conceptualizing should be an intermediate step. I'm suggesting that the process of biometric matching is a simple linear computer science problem that isn't algorithmically hard, merely messy. And much of the mess itself was solved by hard-coding, not evolutionary algorithms. It's making a joke of AI.

Quote:
Neural networks don't really work with genetic algorithms, but with optimisation of punishment rules and a teacher who rewards valid output. We are able to put a pumpkin, a bag of Oreos and a face in front of a computer, and have it recognize a face (classify an object as having the properties of a face). This is already a fair accomplishment.
It's really not. Object classification doesn't interest me since it's just a broader version of an input filtering system. Cognition and meaningful response is an entirely different class of problem.

Quote:
In 1985, machines were capable of doing any work a man can do. (If you don't view being a scientist as work, and within reasonable bounds: because porn stars are workers too.)
In 1985, I couldn't have got a machine to organize domestic duties for me around the house. The simplest of tasks. I couldn't have gotten a car to navigate the road. The simplest of tasks that doesn't even require AI, merely good algorithms. I could go on and on. What you're suggesting simply isn't true. Furthermore, the people who made these predictions were talking in exactly the sense of AI that people are hopeful for today - something that can pass a Turing test and do the kind of complex and interpretive tasks we typically associate with humans, like cleaning the loo. I can pull up a psychologist convention's worth of predictions made by experts in the field. Without putting too fine a point on it, the whole thing has been an embarrassment.

Quote:
The problem is mostly solved, by any definition of "artificial" and "intelligent" within the field of cognitive artificial intelligence.
All I can do is laugh at that.

Quote:
It is strange I suddenly need to defend self-awareness as a measure for artificial intelligence. Your arguments against AI sound like me telling Max Raker that superstring theory is elusive and laughable because the universe seems so complex to me.
One is a case of finding simple rules that underlie chaos, the other is a case of building a machine that's extraordinarily complex using methods that aren't yet understood. One is a monstrous engineering problem, where the parameters aren't even defined; the other is a search for a theoretical basis for something. I don't see the similarity.
06-21-2008 , 10:27 AM
Quote:
Originally Posted by Phil153
I'm not comparing it to human standards, I'm comparing to other kinds of problems in AI.
Yes, and I am saying this comparison is not an argument against AI. It merely states, and rightfully so, that human intelligence is in some instances far more complex to model. We have a human bias toward intelligence, because we are human. While most human cognitive tasks come naturally to us, we can't expect or demand that AI act like us. Why would we even want to program this highly complex human bias into intelligence?

Quote:
Chess algorithms are very easy, I agree. That was a low hanging fruit waiting on processing power, like speech processing which remarkably is still not up to scratch.
It is because of the human bias again. If you smash a coffee cup in front of a speech recognizer, it will try to parse the input. It does not know humans are not capable of producing such sounds if it has not either met all/many humans or been pre-programmed. A manual for airplane engineers stated: "If component A is broken, take out component A and replace." Now how many humans would quickly take out component A and place the faulty unit back in place? That the computer does the latter is because these are ambiguous rules that require common sense. It would be fair for computers to rewrite the manual as "and replace with a new component A". Unfortunately humans don't speak in formal terms all the time. Is it the fault of a formal logical system if it can't comprehend people who are (intentionally) vague? Absence of formality in human speech has been a roadblock for philosophers, mathematicians and logicians for centuries now.

Quote:
Not sure what you're asking. I'm not suggesting that conceptualizing should be an intermediate step. I'm suggesting that the process of biometric matching is a simple linear computer science problem that isn't algorithmically hard, merely messy. And much of the mess itself was solved by hard-coding, not evolutionary algorithms. It's making a joke of AI.
I was saying that facial recognition nowadays does not rely that much on genetic algorithms but on neural networks, and that when we are able to have a machine mimicking our brain, it will still be fundamentally different from us. The cognitive processes of "sensory perception --> familiar face output" don't need to be the same for the output of both to be correct. If it outputs correctly, then we call it intelligent. We won't have modelled human cognitive processes to perfection. I wanted to ask if you would still call it intelligent if the output just happens to coincide with human output.

More bluntly, you could have a scanner or radar that picks up on RFID chips in passports and identity cards. This sensory input can easily be matched to a name. You could then have a computer telling you: I recognize your face, your name is: John Doe. We would have created a facial recognition system that acts validly intelligent, but bypasses any human bias of perception.
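A sketch of that, with made-up tag IDs - the "recognizer" never looks at a face at all, yet produces the same sentence:

Code:
# Hypothetical "facial recognition" that never looks at a face: the sensor reads
# an RFID tag and the name comes from a plain lookup. Same output, different process.
PASSPORT_DB = {"3F29A1": "John Doe", "77C0D4": "Jane Roe"}   # made-up tag IDs

def greet(tag_id):
    name = PASSPORT_DB.get(tag_id)
    return ("I recognize your face, your name is: " + name) if name else "Unknown visitor"

print(greet("3F29A1"))   # -> I recognize your face, your name is: John Doe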

Quote:
It's really not. Object classification doesn't interest me since it's just a broader version of an input filtering system. Cognition and meaningful response is an entirely different class of problem.
Facial recognition is just a broader version of an input filtering system too.

Quote:
In 1985, I couldn't have got a machine to organize domestic duties for me around the house. The simplest of tasks. I couldn't have gotten a car to navigate the road. The simplest of tasks that doesn't even require AI, merely good algorithms. I could go on and on. What you're suggesting simply isn't true.
Well, you have too strict a definition of artificial intelligence. You don't exactly need to call your toilet flushing system intelligent, but in a lot of senses it is.

Quote:
Furthermore, the people who made these predictions were talking in exactly the sense of AI that people are hopeful for today - something that can pass a Turing test and do the kind of complex and interpretive tasks we typically associate with humans, like cleaning the loo.
That cars get built by robotic arms says more about our economy and goals than it does about our ability to have a robot arm clean our toilet.

Turing tests have been passed since the sixties.
Quote:
I can pull up a psychologist convention's worth of predictions made by experts in the field. Without putting too fine a point on it, the whole thing has been an embarrassment.
The field of study "Artificial Intelligence" did not really exist until the early 80s.
The people making predictions were the philosophers, mathematicians, logicians and computer scientists of that time - a time in which we predicted the moon to contain deep layers of moon dust that would engulf anyone landing on it.

Quote:
All I can do is laugh at that.
Yeah, I agree it is quite laughable you have to study AI for at least a year, to understand what AI really is and isn't, and to lose some of the heavy media-induced bias attached to AI.

Quote:
One is a case of finding simple rules that underlie chaos, the other is a case of building a machine that's extraordinarily complex using methods that aren't yet understood. One is a monstrous engineering problem, where the parameters aren't even defined; the other is a search for a theoretical basis for something. I don't see the similarity.
Well, for a machine all chaos is 'meaningful' input, and humans have attached very individual, very muddy and complex rules to this chaos. But I was not really referring to the similarity or difference between superstring theory and AI, but to the fact that both have a media counterpart that people who are not too versed in the subject take for real. And the people that ARE searching for models of human intelligence do so within a search for a theoretical basis of our mind.

Your vision of successful AI seems to be a computer that becomes grumpy when you turn it on. Intelligence from machine-state functionalism does not care for human minds or human bodies.

Last edited by 46:1; 06-21-2008 at 10:42 AM.
06-21-2008 , 10:40 AM
Actually, facial recognition is a frightfully complicated process, and that is from a biological perspective. We have specific regions of the brain taking care of facial recognition toward humans, while other recognition takes place elsewhere - and facial recognition is heaps more advanced than recognition in general.

(This leads to the possibility of an interesting affliction, 'prosopagnosia', where an individual is not able to recognize faces. In one instance a farmer could not recognize his family when he saw them, but could remember all the names of his farm animals when seeing those.)

Anyways... awesome Google thingy I figured out when looking for information on this: if you search Google Images, add "&imgtype=face" to the end of the URL in your browser (without quotes) and it will only display human faces. Awesome.

Maybe before long we can search Google for specific facial features - scary and groovy. And this could mean that searching in general based on a description is not too long a wait... ooohooohooo.
06-21-2008 , 10:57 AM
A classmate of mine wrote his thesis about facial emotion recognition, and his application was able to identify emotions ranging from frightened to happy to angry to sad within a reasonable error margin. These techniques could be, and probably already are, implemented by Google.

A psychopath is still intelligent, even if that person cannot distinguish between a crying woman and a happy woman.

For general facial recognition I don't think we will be able to model this process exactly as humans recognize faces, unless we are able to create brains in vats. If we not only mimic the output, but also try to mimic the process underlying it, we are restricting ourselves, but in these less effective models we learn a lot about the cognitive processes that might underlie human cognition.
06-21-2008 , 11:00 AM
Quote:
Originally Posted by 46:1
Turing tests have been passed since the sixties.
Pass the pipe please.

Quote:
The field of study "Artificial Intelligence" did not really exist until the early 80s.
You have no idea what you're talking about.

Quote:
In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and above all, by the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.[30]

The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[31] Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing:[32] computers were solving word problems in algebra, proving logical theorems and speaking English.[33] By the middle 60s their research was heavily funded by the U.S. Department of Defense[34] and they were optimistic about the future of the new field.
Quote:
The people making predictions were the philosophers, mathematicians, logicians and computer scientists of that time - a time in which we predicted the moon to contain deep layers of moon dust that would engulf anyone landing on it.
The two quoted were both cognitive scientists. The lack of appreciation of the complexity by over eager losers wasn't just limited to AI - it happened in many fields where algorithmically hard solutions were required. Computer programming is another beautiful example - the search and optimism in the 80s for the magic bullet of programming proved to be a complete joke. You can't wish away complexity, or have it build on itself so simply. It just doesn't work like that, but people are slow to learn the lessons of history as this thread demonstrates.

Quote:
Yeah, I agree it is quite laughable you have to study AI for at least a year, to understand what AI really is and isn't, and to lose some of the heavy media-induced bias attached to AI.
When have any of my comments suggested media-induced bias? The bias in both the media and among AI researchers is that it's "just around the corner" and that amazing things are going to happen...that's been the prevailing belief for half a century. And my POS computer STILL can't do something simple like recognize my speech adequately, let alone do any of the things that were grandly claimed for decades.

What we have now is a very narrow set of linearly programmed tasks that don't relate to intelligence in any meaningful way. Any other view of the current state of AI is nerd fantasy land.

Anyway, this is a pointless debate, so I'm out.
06-21-2008 , 11:07 AM
Quote:
Originally Posted by 46:1
For general facial recognition I don't think we will be able to model this process exactly as humans recognize faces, unless we are able to create brains in vats. If we not only mimic the output, but also try to mimic the process underlying it, we are restricting ourselves, but in these less effective models we learn a lot about the cognitive processes that might underlie human cognition.
Yes, this is it precisely. Also, technically speaking we would encounter 'architecture problems', and that means you have to virtually simulate the architecture within a different system - and this is hugely inefficient. (If someone wants an easy-to-understand example, imagine virtually simulating all the processes of a calculator within your own brain to calculate 765*1456 - very ineffective compared to utilizing what you already have in there.)
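A small illustration of that overhead: the toy "calculator" below is emulated instruction by instruction inside a host that could simply have done the multiplication natively (the instruction set is invented):

Code:
# Emulating another architecture step by step vs. using the host's native operation.
def run(program):
    stack = []
    for op, arg in program:          # dispatch loop: bookkeeping the host wouldn't need
        if op == "PUSH":
            stack.append(arg)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

program = [("PUSH", 765), ("PUSH", 1456), ("MUL", None)]
assert run(program) == 765 * 1456 == 1113840   # same answer, far more bookkeeping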

This is also what lies at the heart of ignoring the Turing test: it binds intelligence to being human, and dismisses any displayed intelligence as 'machine-like'. Which is really just believing in souls under a different name.
06-21-2008 , 11:26 AM
Quote:
The two quoted were both cognitive scientists. The lack of appreciation of the complexity by over eager losers wasn't just limited to AI - it happened in many fields where algorithmically hard solutions were required. Computer programming is another beautiful example - the search and optimism in the 80s for the magic bullet of programming proved to be a complete joke. You can't wish away complexity, or have it build on itself so simply. It just doesn't work like that, but people are slow to learn the lessons of history as this thread demonstrates.
The mistake was thinking we could somehow do stuff like the brain does when we had approximately 0% of the computing power. Did no-one ask them why large brains were required for intelligence? The big issues here are: are large brains enough? and when will we be able to produce them?

It's frustrating and silly imo to keep citing people who were over-eager and happened to be wrong. It's no more useful than citing people who were over-cautious and were wrong. My mum grew up in the 50s and 60s and a common saying was 'you've no more chance of doing that than putting a man on the moon'.
06-21-2008 , 11:29 AM
Quote:
Originally Posted by Phil153
Pass the pipe please.
I have no pipe in my inventory. Why do you suggest I pass the pipe, please?

Quote:
You have no idea what you're talking about.
Really, Artificial Intelligence as a multi-disciplinary academic study did not exist until the early eighties. Of course people before that were working in what we nowadays would call the field of AI. "The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956". Did that mean that people in 1957 were able to get a degree in Artificial Intelligence? No, they went on to study biology with a preference for programming, or became computer scientists with a knack for biology.

Academic degrees and combined courses in AI were first available in 1983 in Holland.

Quote:
The two quoted were both cognitive scientists. The lack of appreciation of the complexity by over eager losers wasn't just limited to AI - it happened in many fields where algorithmically hard solutions were required. Computer programming is another beautiful example - the search and optimism in the 80s for the magic bullet of programming proved to be a complete joke. You can't wish away complexity, or have it build on itself so simply. It just doesn't work like that, but people are slow to learn the lessons of history as this thread demonstrates.
But you point to human complexity (such as human facial recognition) as something that should be captured and appreciated before AI is possible. This shows ignorance on the topic of AI, an ignorance that can be explained by your stubbornness or by media bias. Read the original Turing article I posted, and you will see why your demands are not met, and why they are not valid demands at all, but just "This is how I imagine AI to be, and it isn't".

Quote:
When have any of my comments suggested media-induced bias? The bias in both the media and among AI researchers is that it's "just around the corner" and that amazing things are going to happen...that's been the prevailing belief for half a century. And my POS computer STILL can't do something simple like recognize my speech adequately, let alone do any of the things that were grandly claimed for decades.
I won't ever deny that AI research profits from the ignorance and media bias. If you are asking for a million-dollar research grant, of course your predictions are going to be positive. That we have a knack for presenting the future in a positive light, I will grant you that, but failed predictions made for profit have no bearing on successful AI.

Quote:
What we have now is a very narrow set of linearly programmed tasks that don't relate to intelligence in any meaningful way. Any other view of the current state of AI is nerd fantasy land.
This is plain wrong, and biased. Not necessarily biased by the media (though I don't think you went further than reading the Wikipedias and popular science on this subject), but maybe biased by the media in your stubborn mind. Your replies in this thread show no sign of being informed by the source document from Turing that explains it all.

Quote:
Anyway, this is a pointless debate, so I'm out.
Meh, this is the default position I take in layman discussions about intelligence and robotics. It's kinda ironic you deem this debate to be pointless on much different grounds.
06-21-2008 , 12:02 PM
Quote:
Originally Posted by chezlaw
It's frustrating and silly imo to keep citing people who were over-eager and happened to be wrong. It's no more useful than citing people who were over-cautious and were wrong. My mum grew up in the 50s and 60s and a common saying was 'you've no more chance of doing that than putting a man on the moon'.
Of course it's useful...it shows a powerful inability of AI "experts" to appreciate complexity relating to intelligence. Look at 46:1's comments and you'll see that such utter stupidity is alive and well in the expert community today. He just doesn't get it, and even goes as far as to state that computers have passed a Turing test, which no one today comes close to believing.

Here's what wikipedia has to say on the subject:

Quote:
Turing predicted that machines would eventually be able to pass the test. In fact, he estimated that by the year 2000, machines with 10^9 bits (about 119.2 MiB) of memory would be able to fool 30% of human judges during a 5-minute test.
hahahahahaha. More good stuff from wikipedia:
Quote:
As of 2008, no computer has passed the Turing test as such. Simple conversational programs such as ELIZA have fooled people into believing they are talking to another human being, such as in an informal experiment termed AOLiza. However, such "successes" are not the same as a Turing Test. Most obviously, the human party in the conversation has no reason to suspect they are talking to anything other than a human, whereas in a real Turing test the questioner is actively trying to determine the nature of the entity they are chatting with.
So forgive me for not taking the predictions of these guys seriously, or pointing out their past stupidities.
06-21-2008 , 12:13 PM
Quote:
Originally Posted by Phil153
Of course it's useful...it shows a powerful inability of AI "experts" to appreciate complexity relating to intelligence. Look at 46:1's comments and you'll see that such utter stupidity is alive and well in the expert community today. He just doesn't get it, and even goes as far as to state that computers have passed a Turing test, which no one today comes close to believing.
Okay then: people predicted chess couldn't be played by computers.

hahahahaha
