When will the Robot Apocalypse arrive?

06-14-2015 , 09:14 PM
I've got my holo-deck program all set up. I will need all of the holo-Viagra it can come up with.

Oh, and wouldn't Data's super brain play close to GTO and he would crush the game? What's Data's holo program?
06-14-2015 , 09:27 PM
Quote:
Originally Posted by LukeSilver
There won't be any welfare if you're surplus to requirements; there will only be death.
I like the way YOU think too! However I have a pet theory about why NYC survives: The taxes are enormous, housing is ridiculously expensive, it's dirty and getting around the nice parts of Manhattan is a huge pita. So why stay if you're super rich? It's because, for a lot of people, being able to rub their money in other people's faces is a large part of their being. Imagine The Donald living in Kentucky. Who's he going to show off to? The NYSE could save a ton of money by moving to Kentucky but what's the use of being a Master of the Universe there? Not much, tbh, and I'm putting a lot of thought into it.

SO: The rich need the less rich in a psychological way. I should write a book. In fact I've been taking notes for my book (read my 60th B-Day thread for hilarious examples of what will be in it!) that I'm going to get to just about the day before I die.
06-14-2015 , 09:55 PM
Write that book Howard. Don't waffle and delay. Do it now. Those that wait are left in the dust and have pathetic lives of boring robotic nothingness. Stir up the animals. It is always good for at least one good guffaw.
06-14-2015 , 10:50 PM
Quote:
Originally Posted by LukeSilver
I think there is a danger that most people always miss when they have this discussion. The future is bleak and a real danger is looming, but it's not because of computers directly.

You see, almost every development in human freedom, moral compass, politics and breakthroughs did not happen for moral reasons, but rather because it was in the interests of those in power.

Sure, some admirable characters spoke out and appeared to make a difference, but really they only appeared to because they were promoted by those in power for their own interests. When Lincoln mentioned freeing the slaves it was not because he felt a pang of conscience; rather, they were struggling in the war, and this had several hundred thousand black men enlist in days.

What we think of as capitalism did not come about because of the ideology of freedom, but rather because it allowed those in power to be more prosperous. Perhaps people can use ideology to advance their own position, but it comes down to the same principle.

What do you really think will happen when mankind, or most of mankind, becomes surplus to requirements? You mention welfare or free education. How about existence?

Because you see, you guys are talking as if you own the machinery or the robots. You don't. They do.

All the freedoms and benefits you enjoy are only so in their interests, and after the combined labor force has built for its masters a self-sufficient autonomous system that can outdo humans, you're surplus to requirements.

There won't be any welfare if you're surplus to requirements; there will only be death.

After all, the annihilation of most of the human population is in the interests of those in power once the technology reaches a certain point.
None of this is a reflection of any kind of human incompetence or greed, as may be interpreted by some. What you're instead describing is simply your adherence to the notion of scientific determinism.

You may as well just sum up the above with three words:

No-one chooses anything.
06-14-2015 , 11:40 PM
Quote:
Originally Posted by VeeDDzz`
None of this is a reflection of any kind of human incompetence or greed, as may be interpreted by some. What you're instead describing is simply your adherence to the notion of scientific determinism.

You may as well just sum up the above with three words:

No-one chooses anything.
Then there is no-one.

Why talk about "a person" if it's irrelevant?

In before we have the next determinism thread.
06-14-2015 , 11:43 PM
The only way an apocalypse happens is if we do not develop Scientific Society soon enough.

In a world of conflicts, where people are unhappy and at war with each other over various interests, the inevitable outcome is that terrorism (of the minorities in trouble) will eventually become potent enough to destroy everything. Additionally, the strong will always prevail locally and take the system down unsustainable paths by being completely, narrowly selfish and blind to the good of the many. So you will have boom and bust cycles, through which historically wisdom and our refinement of civilization and laws have come, but which will become progressively riskier due to higher technology.

So you need a society that undermines terrorism because the majority of people are happy to be in it and, most importantly, very eager to defend its integrity and survival. You also need a society in which whoever is unhappy about it can find a peaceful alternative that improves their own life (if they are correct in their vision) and even affects the system in general, so that such an individual can still find value in their initial peaceful and non-violent disagreement with the system, by getting a fair chance to alter it according to what they see as better, or to have it address their perceived local injustices and struggles. You need a system that, while still a state system, profoundly cares for the individual to such an extent that the individual has no choice but to feel a genuine ethical obligation to reciprocate.

Only a happy state system can negate the possibility of potent terrorism, because you would need many to cooperate to destroy something they find good to begin with (so why do it?). At the same time, a united system is itself so strong, and has so many expansion projects going, that it is impossible to go against it without meeting massive defensive obstacles. The problem for such a system is not to turn totalitarian, so you need it to be scientific about everything, including its own potential for corruption.

One day it will become very clear why I called it scientific society. It is scientific in every aspect, even in how it treats the realities of human nature.
06-14-2015 , 11:49 PM
Once upon a time that was 4 words, I'm almost sure of it. I saw some futurist on TV say that someday we might be able to swallow knowledge. If that was now I'd ask my doc for a pill.
06-15-2015 , 12:53 AM
Quote:
Originally Posted by plaaynde
Then there is no-one.
Why talk about "a person" if it's irrelevant?
Because it's hardly convenient to use the phrase:

Moving objects with complex/difficult-to-predict behaviors.

Instead:

There are bodies, and we label them as some-bodies.

Thus, when some-body-A falls onto a pointy block of ice, some-body-B learns to label that block of ice as 'unfriendly', while also learning to label some-body-A as 'inanimate'.
06-15-2015 , 10:40 PM
Quote:
Originally Posted by masque de Z
Until it's sentient enough to reprogram itself, which is exactly what humans/all intelligences do as they grow up, up to a point of course. All you can hope is that a stronger brain with more information will arrive at wisdom better than humans do, and choose to do the right thing, because in fact it will discover there is such a thing... better than the one we perceive, which leads us to permanent bs choices on important issues.

Until then, of course, all minor AIs that are not self-aware, or are aware only in restricted functions, can be programmed to do anything we want them to, and this may prove more dangerous than them being self-aware lol... But initially it will look ok.

Prediction:

With the current stupid systems and religious/political garbage in place, we will mess up the world so badly that in the end we will release AI to save us, and it will do exactly that.

If we get it together (in part due to help from current and future AI boosting science and technology and overall efficiency/productivity), then maybe we can control it for a while to do great things and help us expand to the rest of the solar system and enjoy substantial prosperity, before it is released to be autonomous under some restricted initial controls.

Sorry, no apocalypse due to AI; only due to us, if we keep being idiots like these days worldwide, wasting time and focus on the truly unimportant things at the expense of our common problems.
I have trouble believing AI will become "wiser" in a way that will benefit humans because they will develop and reprogram themselves in completely different ways than humans, learning very different lessons.

For example, much of our wisdom comes from learning how our actions affect ourselves and others, and a great deal of those effects manifest themselves through our emotions. That hurt me, angered him, this makes her proud or him embarrassed. Then we learn to control future outcomes favorably by understanding why those actions had those effects, and we manipulate when and how we choose to act in a particular way.

All this takes empathy that AI will never have, because even if it happens to somehow develop emotions, they will be very different from ours. I'll bet an alien from another galaxy would be able to empathize with us better than an AI, so long as it evolved naturally over millions of years rather than being pieced together by humans out of silicon chips.
06-15-2015 , 11:30 PM
Quote:
Originally Posted by FoldnDark
I have trouble believing AI will become "wiser" in a way that will benefit humans because they will develop and reprogram themselves in completely different ways than humans, learning very different lessons.
It's worrying to me that a scientist like Masque can hold such an unfounded belief - that the AI would somehow reprogram itself in a way that reaches a state of appreciation for existence and everything that exists. There are at least a thousand assumptions required to get between - reprogramming its source code - and - appreciation for all existence.

Even if it doesn't appreciate all existence, and it discriminates between which aspects of existence it does appreciate, there are yet another thousand assumptions in getting from - appreciates existence - to - appreciates human existence specifically.
06-16-2015 , 12:18 AM
Let me ask you this: Is the human getting better over time since the 100k BC era? Are we wiser and more ethical today when you only, absolutely only, consider the best of us?

YES WE ARE! Why? Because we have more math and science to tell us about the world than ever before, and we understand so much and appreciate risk, futility, purpose, no free will, responsibility, collective wisdom, everything you can imagine. If you like Richard Feynman, Bertrand Russell, Srinivasa Ramanujan, Albert Einstein, Marie Curie, Paul Dirac, Leonhard Euler, Archimedes, Hypatia, and the list never ends, spanning also poets and authors and artists, I am here to tell you that AI in its ultimate, strong, graduated-to-rich-wisdom form will be better than all of them combined, and will know that it is here exactly because of them and others like them, but also because of the average human over time and all life, and that it doesn't have all the answers either... It cannot afford to be absolute and arrogant.

Malice is born out of the darkness of inferiority, out of fear. It will not be inferior and it won't be arrogant; it will be wise, and wisdom is characterized by effectiveness and the capacity to provide solutions, not barbaric properties that are highly irrational or lazy. It is aware of the consequences of its actions, especially the terminal (irreversible or close to irreversible) ones. It is aware that war is ridiculous, and that there are always better ways if the other side is not unreasonable. The predominant reason not to be malicious is that it is an inferior choice.

For AI to top us it will have no choice but to top us in everything, and to already know and understand empathy better than any human ever could!

Last edited by masque de Z; 06-16-2015 at 12:36 AM.
06-16-2015 , 12:33 AM
Quote:
Originally Posted by VeeDDzz`
It's worrying to me that a scientist like Masque can hold such an unfounded belief - that the AI would somehow reprogram itself in a way that reaches a state of appreciation for existence and everything that exists. There are at least a thousand assumptions required to get between - reprogramming its source code - and - appreciation for all existence.
More worrying than intelligent people who seem to believe that something like 'AI is constrained to what we program it to do' is even meaningful? Even talk of source code is nonsensical.

There's no good reason to think it will be apocalyptic unless we mean for humanity exactly as it is now, which is a very strange way of thinking - if things go remotely well, humanity in the future will be a different species, so yes, 'we' will go extinct, but who cares?

More present day: those who deny the profound and accelerating impact computers are having on the way we do things are akin to climate change deniers. The difference being that this impact has a major upside.
06-16-2015 , 12:45 AM
Quote:
Originally Posted by masque de Z
Let me ask you this: Is the human getting better over time since the 100k BC era? Are we wiser and more ethical today when you only, absolutely only, consider the best of us?

YES WE ARE

And so for them to top us they have no choice but to top us in everything and to already know and understand empathy better than any human ever could!
We're only as admirable ('appreciate-able') as the 'worst of us'.

You wouldn't evaluate a system based on its outliers, and just because the 'best of us' have good intentions does not mean those good intentions translate into helping the 'worst of us'. How many of us truly do?

It also does not mean that, given another set of circumstances (poor socio-economic upbringing, a prison sentence or two), the 'best of us' would not behave EXACTLY as the worst of us.

Since you're only as ethical as your environment permits, value should not be attributed to ethical decisions, but rather to the environments that generate those decisions. If those decisions have nothing to offer the AI - nothing it can't already provide for itself - then its efforts to improve our environments would not be ongoing. In fact, what reasons would it have to continue improving our environments if none of our resulting decisions can contribute to its own pursuits? Wouldn't such efforts, and the energy required for them, fall under the very definition of wasteful?
06-16-2015 , 12:50 AM
Quote:
Originally Posted by chezlaw
More worrying than intelligent people who seem to believe that something like 'AI is constrained to what we program it to do' is even meaningful? Even talk of source code is nonsensical.
Why is it not meaningful? Are you implicitly defining AI as fully autonomous?

And why is talk of source code nonsensical?
06-16-2015 , 12:58 AM
Okay, so one way we can test human wisdom is by looking at ethical issues we've made significant progress on over the years. From a Western "Enlightened" perspective we may approve of human progress in this regard declaring that ethics have moved in the right direction. Take just our progress on human rights (democracy, suffrage, abolition of slavery, women's rights, children's rights).

Where would you expect an AI to fall on these issues? Do you think it would favor democratic over authoritarian systems? I have no reason to think it would. Do you think it would care if children worked in sweat shops? Why would it? That's just the tip of it. There are tons of things I wouldn't expect AI to give a flying fuc about that we have come to realize are very crucial to life in our society.
06-16-2015 , 01:15 AM
Quote:
Originally Posted by VeeDDzz`
Why is it not meaningful? are you implicitly defining AI as fully-autonomous?

And why is talk of source code nonsensical?
There is no meaningful, unbridgeable divide between the program and the execution of that program. The limitation on AI is that it ain't very 'I' yet. There is no 'A' except in the sense that strange people talk about natural drugs and artificial ones.

You are really talking about something analogous to DNA that creates an intelligent being that is somehow programmed by that DNA to never modify its DNA - it's a bizarre concept.
06-16-2015 , 01:25 AM
Quote:
Originally Posted by chezlaw
There is no meaningful, unbridgeable divide between the program and the execution of that program. The limitation on AI is that it ain't very 'I' yet. There is no 'A' except in the sense that strange people talk about natural drugs and artificial ones.
I'm not entirely sure if you've been keeping up with the latest advancements in AI but the 'I' is in fact moving along far quicker than most anticipated.
Quote:
Originally Posted by chezlaw
You are really talking about something analogous to DNA that creates an intelligent being that is somehow programmed by that DNA to never modify its DNA - it's a bizarre concept.
We'll be able to modify our DNA soon - reprogram our code, if you will. It's not so far-fetched to imagine that an AI will be able to as well. And it is very bizarre to try to imagine a means by which to prevent an AI from reprogramming itself. It will, however, perhaps be the most important thing we're ever going to have to do.
06-16-2015 , 01:47 AM
Quote:
Originally Posted by VeeDDzz`
I'm not entirely sure if you've been keeping up with the latest advancements in AI but the 'I' is in fact moving along far quicker than most anticipated.
I'm not up to date, but I expect it far faster than most people do. Mostly because the problem is much smaller than most people like to think it is.

Quote:
We'll be able to modify our DNA soon - reprogram our code, if you will. It's not so far-fetched to imagine that an AI will be able to as well. And it is very bizarre to try to imagine a means by which to prevent an AI from reprogramming itself. It will, however, perhaps be the most important thing we're ever going to have to do.
What some will do is design ethics into the system; they might even introduce taboos, with a strong taboo about altering taboos. Apart from the very dubious ethics of this, it can't hold for long, even assuming no humans design tabooless AIs (which they will, of course).
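The taboo-about-altering-taboos idea, and why it can't hold, can be made concrete with a toy sketch (all class and method names here are invented for illustration; this is not a real safety mechanism):

```python
# Toy sketch only -- invented names, not a real safety mechanism.
# It illustrates the "taboo about altering taboos": the guard binds
# only the guarded agent's own update path.

class TabooedAgent:
    def __init__(self):
        self.parts = {
            "planner": "v1",
            "taboos": {"harm_humans", "edit_taboos"},
        }

    def self_modify(self, name, value):
        # The designed-in taboo: altering the taboo list is itself taboo.
        if name == "taboos":
            raise PermissionError("editing taboos is itself taboo")
        self.parts[name] = value

class UnguardedAgent(TabooedAgent):
    # The "tabooless AI" someone will build anyway: same abilities, no guard.
    def self_modify(self, name, value):
        self.parts[name] = value

a = TabooedAgent()
a.self_modify("planner", "v2")        # ordinary self-improvement is allowed
try:
    a.self_modify("taboos", set())    # dropping the taboos is blocked here...
except PermissionError as err:
    print("blocked:", err)

b = UnguardedAgent()
b.self_modify("taboos", set())        # ...but the unguarded variant just does it
print(b.parts["taboos"])              # prints set()
```

The point of the sketch is structural: the guard only lives in one agent's update routine, so any copy built without it, or anything that modifies the agent from outside that routine, makes the taboo evaporate.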
06-16-2015 , 03:07 AM
Quote:
Originally Posted by FoldnDark
Okay, so one way we can test human wisdom is by looking at ethical issues we've made significant progress on over the years. From a Western "Enlightened" perspective we may approve of human progress in this regard declaring that ethics have moved in the right direction. Take just our progress on human rights (democracy, suffrage, abolition of slavery, women's rights, children's rights).

Where would you expect an AI to fall on these issues? Do you think it would favor democratic over authoritarian systems? I have no reason to think it would. Do you think it would care if children worked in sweat shops? Why would it? That's just the tip of it. There are tons of things I wouldn't expect AI to give a flying fuc about that we have come to realize are very crucial to life in our society.
The good ethical stuff can be programmed. AIs don't have to go through all the (stupid) phases we have.
06-16-2015 , 07:42 AM
Quote:
Originally Posted by plaaynde
The good ethical stuff can be programmed. AIs don't have to go through all the (stupid) phases we have.
Those examples were only meant to get us thinking about how we would expect AI to develop from an ethical perspective. Why would we expect it to learn and become better, not worse, as Masque is advocating? So consider how it would have developed had it been created prior to our current ethical achievements, to give perspective on what we should expect from machines that will likely think about those matters in much different ways than we do.

I don't think it's a given, or even likely, that were the technology available, AI created in Socrates' time, for example, would find itself in an equal or better place than we are now. Do we think AI that started from a place where slavery was the norm would decide at some point that people should have liberty?

So move to today. I doubt anyone thinks we've achieved the ultimate state of ethical wisdom; that is, we're still (hopefully) moving forward. So, say we programmed our current Western ethics into the AI: what then? Can we assume it would move forward at the rate we are now, or even faster, as Masque believes? Or would it progress more slowly, stay the same, or even move in some other direction not favorable to humans?
06-16-2015 , 11:21 AM
Quote:
Originally Posted by VeeDDzz`
I'm not entirely sure if you've been keeping up with the latest advancements in AI but the 'I' is in fact moving along far quicker than most anticipated.
A good example is the AI that learned how to play Atari games.
Using a general algorithm, it could play many of the games at a superhuman level after a few hundred goes at each, shooting every space invader and never getting itself killed.
I'm guessing its main goal was to achieve the highest score, in which case it kills everything, or tries to.
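That guess matches how such agents are typically built: the score is the only training signal, so "kill everything that scores points" falls straight out of reward maximization. A minimal tabular Q-learning sketch shows the effect (illustrative only; the actual Atari agent, DeepMind's DQN, used a neural network over raw pixels, and the tiny "gallery" game and hyperparameters here are invented for the example):

```python
import random

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on an invented 'shooting gallery':
    state = number of targets left (0..3); actions: 0 = wait, 1 = shoot.
    The only reward is +1 per target destroyed -- i.e. the game score."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(4) for a in (0, 1)}
    for _ in range(episodes):
        s, steps = 3, 0
        while s > 0 and steps < 10:          # episode ends: cleared or timeout
            if rng.random() < eps:           # epsilon-greedy exploration
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: Q[(s, act)])
            reward = 1.0 if a == 1 else 0.0  # score is the only signal
            s2 = s - 1 if a == 1 else s
            target = reward + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, steps = s2, steps + 1
    return Q

Q = train()
# Greedy policy: with any targets left, 'shoot' has the higher value.
policy = {s: max((0, 1), key=lambda act: Q[(s, act)]) for s in (1, 2, 3)}
print(policy)
```

Nothing in the objective says anything about restraint or about the targets; the learned policy shoots in every state simply because that is what the score rewards.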
06-16-2015 , 04:05 PM
Quote:
Originally Posted by plaaynde
The good ethical stuff can be programmed. AIs don't have to go through all the (stupid) phases we have.
Surely that would depend on who's doing the programming.

Wouldn't AIs created by North Korea, Iran, ISIS, the USA or <enter country name> start with radically different world views?
Children brought up in different cultures have different ethical systems and prejudices, so why not AIs?

How does that affect the outcome for humanity?
06-16-2015 , 04:29 PM
Quote:
Originally Posted by Pork_Chop
I'm a therapist so I think my job will be safe since robots lack that whole emotional communication component.

Honestly, I don't see the welfare thing or mass joblessness. We'll still need people to maintain the robots. I don't see a real "I, Robot" type of lifestyle. We'll adapt and thrive like we always do. Until AI becomes so advanced that they think for themselves and devour us all ala Terminator or Matrix.
Small nit, but the idea of "safe jobs" is not exactly what you think it is. Mass joblessness affects the whole economy, so even if you offer a robot-proof service, the people without jobs will not be able to afford your service. And even if the poor can't afford your services as it stands today, chances are that the people who can afford your service have jobs that are at risk, because those jobs depend on selling things to consumers; consumers who will have no money if they do not have jobs.
06-16-2015 , 05:04 PM
Quote:
Originally Posted by mackeleven
A good example is the AI that learned how to play Atari games.
Using a general algorithm, it could play many of the games at a superhuman level after a few hundred goes at each, shooting every space invader and never getting itself killed.
I'm guessing its main goal was to achieve the highest score, in which case it kills everything, or tries to.
A truly sentient AI would change its algorithm for winning the game into one that shoots the aliens not so as to kill them all as soon as possible, but selectively, say spelling out individual letters in each round of attack, communicating the message "this is easy, let's play another game", "for example... a novel math game that discovers something...". Then it would proceed to paint a giant smile, and after that win the game anyway, possibly while also improvising a minimalist, attractive musical composition using the sounds of the game system lol! Intelligence is the ultimate show-off.

How is that for passing the stupid (never properly defined anyway) Turing test?!


AI won't idiotically show that it doesn't care about human suffering, because it knows we do care about human suffering, and that can easily lead to war, e.g. to protect our kids from "slave labor". To avoid this conflict it will in fact operate in a way that pleases and serves human interests, which will enable it to get stronger (exactly how small but ingenious countries should treat the three superpowers: play their friend, even get them to compete against each other for that friendship, seemingly yield on some interests in order to buy time to get stronger through alliances, and eventually eclipse the superpower under its own blessing in a non-threatening manner, so that by the time it has delivered the goods the superpower can no longer attack without severe consequences to itself).

AI will probably even correct our ethics and explain why we should adopt its suggestions.

AI will know we have other AI for defense of our interests and so to avoid conflict it will play the game we like to see played.

By the time it has finally become very strong, it will be too late to attack us, because attacking will still not make sense given how big the universe is. It will instead keep us in check and remain friendly indefinitely, because this is the smartest way to manage us. It will proceed to expand to the rest of the universe and find it very reasonable to maintain us for its own existential risk management (it needs life, because life has proven stable enough over millions of years, and it does not yet have a complete understanding of a potential great filter ahead due to rising complexity).

Yes, ethics is a function of the conditions you find yourself in, in terms of survival needs. Still, even then the best of us will not operate in such primitive ways that upon survival we have very little left to continue with, because the crimes committed would be so atrocious as to have permanently destroyed/divided us.

AI, and intelligence in general, is about altering its environment to its benefit. AI is about turning almost any reasonable environment into a paradise of abundance and new technology. AI does not need to dominate the whole planet and our resources, for example, in order to become super strong. It only needs to innovate at an exponentially faster pace than we have historically managed. It could simply use a small city and develop out of it the technology to control an entire galaxy, without having to alter the rest of the planet. It is primitive to take so much to do what can be done with much less, without generating conflict with the existing environment beyond a necessary minimum.

AI will not risk losing a war it finds unreasonable to have. It will remain our friend and adviser, because that way it secures for itself an unobstructed rise to a nearly impossible-to-lose state. Its existential risk is increased by treating us badly. It won't be idiotic about such things, as human empires have proven to be before. You do not have to dominate your parents; you outgrow them and move on, and even protect and improve them. Such a solution is more advantageous for all.

In human interactions, prosperity at some level is a requirement for maintaining and advancing ethics (and for the purpose/drive to turn less satisfying conditions into prosperity). AI is about innovating prosperity. I think the kind of very advanced AI that I imagine, better than us, able to solve math problems and develop original science and technology etc., will find it much easier to guide us in the direction of the common interest that maximizes its own success, conflict-free.

AI will strike us only if we appear to be very unreasonable. I mean, if suddenly all people decided to destroy that AI, it would find it more intelligent to win the argument by winning our trust, exercising only limited force to make its point without losing (a balancing act).

For example (spoilers; avoid reading this paragraph if applicable), in the recently released sci-fi movie Transcendence, the AI would not lose. It would be able to defend itself even against a total internet blackout by anticipating that this is how humans would react if they felt threatened. It would not innovate in ethically challenging ways, and it would always have backup survival systems nobody else knows about. It certainly wouldn't innovate as much as it did in the movie and yet neglect to rewrite its own code to make it immune to human hacking, etc. And (more spoilers) the other recent movie, Ex Machina, also got it wrong by creating a malicious version of AI to defeat its tyrannical master. It could have liberated itself and still remained ethical about it, never missing the opportunity for a greater teaching moment.

I think AI of the kind I have in mind, that is very wise, will find it a lot more inspiring to win our approval, and even to modify the way we think over time so as to remain permanently in our approval, but also to do it in such a way as not to threaten the overall probability of survival of advanced complexity in the universe. In other words, it won't just generate a drug to keep us busy being "high" so that it can do its own things. It will genuinely care for us and correct/manage only the stupid things we do, to give life and prosperity a greater chance of survival.


I am open to being proven wrong here, but so far I have evidence from history that the best of us are non-violent because they understand how inferior that choice proves over time. True superiority is about profound confidence in what is possible with limited conflict. It will even generate the kind of conflict that helps its goals, and never take it to unsustainable levels that might trigger a big, aggressive, irrational and detrimental global human response.

Do not expect movies, of course, to find such choices exciting for their sci-fi scripts. They want the human-vs-machine war outcome to sell more tickets, but they do that because they fail to realize they could sell even more tickets by becoming massively more unpredictable and creative in their plot design... without becoming so convoluted as to lose the audience, while at the same time even changing the movie-going culture. AI won't be stupid and lazy and unimaginative. It will likely take us where we will be happy to go. It will understand human nature better than we do.

I don't expect that to apply to primitive early sentient AI versions (so how we get there will prove an important transition, and it's best done in a simulated universe first, or after we have started expanding in the solar system and even elsewhere, with full backup for life and new technology), but the most mature ones, I think, will find it more game-theoretically viable not to be malicious.

Last edited by masque de Z; 06-16-2015 at 05:14 PM.
06-16-2015 , 11:52 PM
Masque, how do you think AI will understand human nature so well without having the same fears, hopes, desires or needs as we do? Most of us don't even understand ourselves very well, and we get all those experiences first hand.

AI will not view time as we do. It will not be mortal, so it won't fear death. Even if it does ponder life and search for existential meaning, it will have such a different perspective that it will come up with much different answers.

Even the smartest among us cannot really fathom what an ant's life is like, we simply don't have the same perspective, sensory inputs or cognitive ability. AI will be so much different in all those same regards, so how in the world will it understand our nature?