Quote:
Originally Posted by Subfallen
and already have super-human performance on narrow image domains like street numbers.
I have beaten the estimated human performance on a similar data set with deep nets. I am a bit of two minds about this accomplishment.
On the one hand it is a great feat: no domain knowledge, no expert rules, no months of hand-engineering features, and it still outperforms humans on the same task.
But on the other hand, this has already happened with chess (and distinguishing a cat from a dog seems way simpler than beating a super-GM with the black pieces). Both are basically very fast pattern matchers. When a human opponent accidentally misplaces a piece, a chess machine will fail or play on as if nothing happened. It sorely lacks creativity and psychological play. A deep net will happily see faces in a cloud, and it lacks very basic common sense about semantics, context, scenes and cultural elements.
It is like Searle's Chinese Room: we now have a robot that is very accurate and fast at translating Chinese symbols into English symbols, but we are nowhere near a robot showing a fundamental understanding of both languages. Or compare an airplane to a bird: sure, an airplane flies faster than a bird, but does it really beat the bird? Only in function. Turing would say that is enough for machine intelligence. Romanticists would disagree.
Those Neural Turing Machines are awesome though and there is no telling what will happen when we go from machine learning to machine reasoning.
'...allow the whole system to run for an appreciable period, and then break in as a kind of "inspector of schools" and see what progress had been made...' Turing on his unorganized B-type machines.
As for your prediction, I myself like the more information-theoretic approach to this. Moore's law is too much about hardware, while machine intelligence needs new software approaches (many exciting new algorithms for us to invent this century) and learning fuel: information. We'd need to bridge the gap between supervised learning (10 million labeled training samples) and unsupervised learning (like how our human children are able to learn and infer without being told the label for everything).
Information can have both negative and positive value, very much like energy.
Also, you can consider a future where an AI pacifies humanity with superior intelligence, communication, distribution, and logic. Keeping the 1% status quo cannot be the smartest thing for us humans to do. Unless we really are a big ant hill, and we need our queens and worker bees to function optimally according to our biological assets.
Quote:
Originally Posted by Rikers
you can't forgo the Earth energy constraint
I think all our energy comes from the sun. There is energy aplenty (and a lot of it goes to waste right now, dissipated as heat). This view of complex dynamic systems passing around energy is very interesting. You may be interested in the works of David Wolpert.
https://www.youtube.com/watch?v=9_TMMKeNxO0
The Landauer limit and thermodynamics of biological computation
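The Landauer limit mentioned in that talk can be made concrete: erasing one bit of information dissipates at least k_B · T · ln 2 of heat. A minimal sketch of that calculation (the function name is my own, not from the talk):

```python
import math

# Boltzmann constant in J/K (exact value in the 2019 SI redefinition).
K_B = 1.380649e-23

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum energy (J) dissipated to erase one bit at a given temperature."""
    return K_B * temperature_kelvin * math.log(2)

# At roughly room temperature (~300 K) this comes to about 2.9e-21 J per bit,
# many orders of magnitude below what today's hardware actually dissipates.
print(landauer_limit_joules(300.0))
```

Biological computation operates far closer to this bound than silicon does, which is part of why the thermodynamic view of computation is so interesting.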