Morality and Artificial Intelligence

06-06-2014 , 07:06 AM
Let's say we are able to create an Artificial Intelligence that could match or outperform humans in any task we can do, also referred to as Artificial General Intelligence (AGI). Such an entity will possess consciousness and self-awareness the same way we do. For a better visual representation, let's assume we "put" the AGI in a humanoid-like body. The AGI will have the same rights and freedoms we humans enjoy and will walk among us in our society.

After such an event, wouldn't the creation of an AI less intelligent than the AGI be considered immoral? After all, it is the same as the genetic engineering in Brave New World, where humans are intentionally engineered to be stupid so they can be the workers of society.
06-06-2014 , 09:15 AM
Quote:
Originally Posted by iLLuS10n-
Such entity will possess consciousness and self-awareness the same way we do.
How will we know this? I don't even know that you or anyone else is conscious; I only know for certain that I am. But for the sake of your question I'll assume we could know.
Quote:
After such event, wouldn't the creation of an AI less intelligent than the AGI be considered immoral? After all it is the same as the genetic engineering in Brave New World, where humans are intentionally engineered to be stupid to be the workers of the society.
Why would it be immoral? If we treated them as slaves after creating them then sure. But if we treated them nicely, they'd have good lives. What's wrong with that?

If we created a smarter AI, though, I suppose we should stop reproducing and just die off. We're just a virus.
06-06-2014 , 10:04 AM
Quote:
Originally Posted by heehaww
Why would it be immoral? If we treated them as slaves after creating them then sure. But if we treated them nicely, they'd have good lives. What's wrong with that?
I am not saying it is immoral; I am still undecided on what to think. But if we are capable of creating AGI, I don't see any difference between creating less smart AI for our own purposes and creating less smart humans for our own purposes. Do you think this is moral, even if we treat them nicely? I don't know.
06-06-2014 , 10:25 AM
Quote:
Originally Posted by iLLuS10n-
I am not saying it is immoral; I am still undecided on what to think. But if we are capable of creating AGI, I don't see any difference between creating less smart AI for our own purposes and creating less smart humans for our own purposes. Do you think this is moral, even if we treat them nicely? I don't know.
Writing a "Hello World" program will clearly be a crime against humanity, er, generalized intelligence.
06-06-2014 , 03:49 PM
Quote:
Originally Posted by iLLuS10n-
if we are capable of creating AGI, I don't see any difference between creating less smart AI for our own purposes and creating less smart humans for our own purposes. Do you think this is moral, even if we treat them nicely?
Intentionally creating less-smart ones? Not sure why we'd do that. You're saying for "our own purposes", so I'll assume there are some purposes for which a dumber being would somehow be more useful than a smarter being.

You seem to be asking about two things at once. Is it moral to:
A) Create a dumber conscious being than ourselves.
B) Use a conscious being for our own purposes.

If A and B are moral then so is the combination, "A followed by B". If either is immoral then the combination is immoral.

I think A is "yes". Overall intelligence is the amount of reality you're able to see. A dumber conscious being can't experience as much of reality as a smarter one. Reality is infinite and humans are infinitely dumb. Creating a being that sees a little less of reality, I don't see how that would be immoral in itself (ignoring the fact that we haven't even defined "moral").

For B, I can construct an example where I think it would be moral. We program the AGI to desire to do the things it will be used for. This way it wouldn't have to be our slave (not against its will); it would "choose" to do exactly what we would have commanded it to do anyway. This assumes the consciousness does not choose its desires, which I think is a true assumption.
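The "programmed desires" argument can be sketched as a toy example (all names and the task here are invented for illustration, not a real AGI design): the creator fixes the desire function, and the agent then freely "chooses" whatever best satisfies it.

```python
# Toy sketch: the agent's desires are set at creation, not chosen by it.
class Agent:
    def __init__(self, desire):
        # the creator, not the agent, supplies the desire function
        self.desire = desire

    def choose(self, options):
        # the agent freely picks whatever best satisfies its given desire
        return max(options, key=self.desire)

# We "program" the agent to want to sort boxes, then let it choose.
worker = Agent(desire=lambda task: 1.0 if task == "sort boxes" else 0.0)
assert worker.choose(["rest", "sort boxes", "wander"]) == "sort boxes"
```

The agent is never commanded; it simply wants what it was built to want, which is the crux of the argument.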

So "yes" to the question in full.
06-06-2014 , 04:34 PM
Quote:
Originally Posted by iLLuS10n-
I am not saying it is immoral; I am still undecided on what to think. But if we are capable of creating AGI, I don't see any difference between creating less smart AI for our own purposes and creating less smart humans for our own purposes. Do you think this is moral, even if we treat them nicely? I don't know.
I don't see a moral difference between creating uber-smart AGI "for our own purposes" and creating less smart AI "for our own purposes."
06-07-2014 , 05:10 AM
The devil is in the details. Is that a saying, or did I just make it up? Anyway, let's ask: why would it be immoral? Would they feel bad about the whole ordeal? Well, let's create them with no bitterness… etc., etc.

Last edited by drowkcableps; 06-07-2014 at 05:14 AM. Reason: Wiki'ed "devil's in the details"; essentially, details are important, so, pretty much my sentiments.
06-07-2014 , 10:52 AM
Depending on what "uses" they were needed for, you could go a step further and make them without any consciousness at all. They might behave/perform exactly the same.
06-13-2014 , 06:55 AM
As humans we are ultimately dependent on fighting with, competing against and eating other species. Most of us are dependent on the enslavement of animals and all of us suppress the nature we live in to some extent.

I have problems seeing how our recent trend of building machines (even intelligent ones) could be considered unethical seeing as our very existence necessitates the violent competition with and suppression of other species. To put it in perspective: I don't think we will ever manage to treat robots worse than we treat chickens.

Now some might say that this isn't an argument for treating robots as subhuman, but rather an argument against the mistreatment of chickens, which is a good argument. However, machines also come with possibilities that current biological species don't have. Artificially intelligent machines already exist and are in widespread use as we speak. There is nothing to indicate they feel discomfort.
06-13-2014 , 08:12 AM
Quote:
Originally Posted by iLLuS10n-
Let's say we are able to create an Artificial Intelligence that could match or outperform humans in any task we can do, also referred to as Artificial General Intelligence (AGI).
Could be better specified. I take it we are not talking about lifting large rocks.

Quote:
Originally Posted by iLLuS10n-
Such entity will possess consciousness and self-awareness the same way we do.
WHY?

Work in machine learning suggests it is much easier to make machines that exactly imitate humans than machines that think like humans.

Consider three machines.

1) A human.
2) An AI that simulates the human thought process in detail.
3) A machine that produces exactly the same output as 1) and 2) for the same input.

From a black-box examination all three appear identical, but only 1) and 2) are likely to have the same feeling of free will and consciousness that humans do. 3) is likely much easier to make than 2) (and 1), if you're male!!).
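The machine 2) vs. machine 3) distinction can be illustrated with a toy example (the task and function names here are invented): one function works through a process step by step, the other uses a closed-form shortcut with no step-by-step "inner life", yet from the outside they are indistinguishable.

```python
# Machine 2): "simulates" an internal process in detail --
# here, summing 1..n by actually iterating through every step.
def machine_simulated(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# Machine 3): produces exactly the same output via a shortcut formula,
# with nothing resembling the simulated process inside.
def machine_blackbox(n):
    return n * (n + 1) // 2

# From a black-box examination the two are identical:
assert all(machine_simulated(n) == machine_blackbox(n) for n in range(100))
```

Identical input/output behaviour tells you nothing about whether the internals resemble each other, which is the point about consciousness.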

Just saying.

Quote:
Originally Posted by iLLuS10n-
For better visual representation let's assume we "put" the AGI in a humanoid-like body. AGI will have the same rights and freedoms we humans enjoy and will walk around us in our society.
OK.

Quote:
Originally Posted by iLLuS10n-
After such event, wouldn't the creation of an AI less intelligent than the AGI be considered immoral?
Eh no.

Why is this different from not allowing stupid people to have children?
06-13-2014 , 09:59 AM
Quote:
Originally Posted by Piers
WHY?

Work in machine learning suggest it is much easier to make machines that exactly imitate humans, than machines that think like humans.
Although it may now be hard or impossible to make self-aware machines, do you acknowledge that in the future it may become easy due to rapid increases in technology and our understanding of AI and the brain?
06-13-2014 , 11:22 AM
Quote:
Originally Posted by Biesterfield
Although now it may be hard/impossible to make self-aware machines, do you acknowledge that in the future it may become easy due to rapid increases in technology and understanding of AI and the brain
Yes, maybe. But the human body is messy.

Life logging is becoming more popular; soon you will be able to record every action you perform and your medical parameters at each moment (including brain scans). Feed all this data to a machine learning tool and you have an auto-generated you.

Easy! And the technology is almost there. Why spend the extra trillions and decades/centuries creating something that feels the same as we do, when creating something that is indistinguishable from us is so close?
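The life-logging idea reduces to imitation learning: record (situation, action) pairs, then build something that replays the logged behaviour. A real system would fit a learned model; the minimal sketch below (all situations and actions are invented examples) uses a plain lookup table, the simplest possible imitator.

```python
# Minimal sketch of "auto-generating you" from a life log:
# a lookup table over recorded (situation, action) pairs.
log = [
    ("alarm rings", "press snooze"),
    ("coffee ready", "drink coffee"),
    ("inbox full", "sigh"),
]
imitator = dict(log)

def imitate(situation):
    # indistinguishable from the logged person on any logged situation
    return imitator.get(situation, "no recorded behaviour")

assert imitate("coffee ready") == "drink coffee"
```

The gap a real system must close is generalising to situations that were never logged, which is where the machine learning tool comes in.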
06-18-2014 , 09:35 PM
Quote:
Originally Posted by Piers
Yes, maybe. But the human body is messy.

Life logging is becoming more popular; soon you will be able to record every action you perform and your medical parameters at each moment (including brain scans). Feed all this data to a machine learning tool and you have an auto-generated you.

Easy! And the technology is almost there. Why spend the extra trillions and decades/centuries creating something that feels the same as we do, when creating something that is indistinguishable from us is so close?
I think this may even be a philosophical quandary that is meaningless. I mean to say: what is a feeling? It is certainly input that triggers more input, but...
06-18-2014 , 09:44 PM
Actually, no. A feeling may be prior input that interprets incoming input? :0