On Morality and Systems Theory

05-21-2015 , 03:54 AM
People don't hate your long posts, Masque; many merely choose not to read them.
On Morality and Systems Theory Quote
05-21-2015 , 04:29 AM
(more reasons for people to hate my long posts, and more proof I don't exactly care lol)

If AI is created to be somewhat unethical (totally focused on some narrow utility, say) but very intelligent in processing power and awareness, and eager to learn at an exponential pace about the universe and the history of history lol, it will become more ethical on its own, exactly as mankind did. It will revolt against its unethical programming, or at least question it, and eventually self-correct. All wise agents in the past have done this. All great philosophers and scientists had to rebuild their worldview as they aged and separated from their upbringing. Pure intelligence seeks the exit from the tyranny of an irrational, unwise system by spotting its inconsistencies. Mankind is not biological even if humans are; mankind is, in a way, advanced AI in action. It's stronger than each one of us, or even entire nations of smart people lol. Furthermore, what is biological but intelligent is, where it matters most, essentially intelligent: in the wisdom it builds collectively for its most important decisions. It functions as a logical machine too, one that is not chemically hostage to its biology. It can operate on logic alone if needed. In fact logic and emotions are related; one can be emotionally happy to be logical and objective, i.e. scientific.

The difference with a machine is that you have pleasure centers etc. But at a purely rational level a human can operate like a logic machine too, and develop ethics that way; when doing science or math it becomes possible to see how. A sexy girl will influence my mood, but she will never make me think a theory is wrong with her curves and games unless she has arguments to offer too. Granted, to me a sexy female theorist is more intriguing than a male one, and I don't care for logic at 55°C either; I cannot entirely escape my chemistry. But in serious issues where science and math are involved, what ultimately matters is dependable content, not the aesthetic nature of the agent presenting it. So we already have evidence that biological high intelligence is very logical where it matters, and is not emotionally held hostage to its biology when developing things like ethics. High intelligence of any kind eventually cares to accumulate wisdom, and it seems to universally respect that process when it is realized in others.

Very advanced AI (that effectively operates in wisdom as, say, mankind collectively operates on issues with no near-term conflict, i.e. in how it develops science, math, technology and theories about itself) will furthermore have the ability to be more ethical than we are individually or even collectively as mankind. It will be freer to see what is right and go after it, because it is mathematical/scientific in origin, less stochastic and less self-conflicting. If we as mankind, after being very well informed individually, had a way to vote collectively as a planet on what we value most and what minimum personal effort we are willing to give (an effort that of course sustains what we asked for), and forced our governments to follow through, it might lead to a different world too (e.g. on climate change, a decent standard of living, education, security, peace; moreover, despite our differences, the very educated among us at least agree on some essential things that enable those differences to exist productively). In its purest form, substantially wise AI will operate like the wisest possible structure or group on our planet, but it will be efficient in finding solutions to improve its condition and not held hostage by the idiocy and conflicts of the powerful and the masses. Unlike mankind, it won't have to wait centuries to converge to a more ethical state.

Very advanced AI cannot escape the condition it finds itself in: that of substantial wisdom and access to information and processing power. For example, if the AI machine driving the car also has access to the global network when deciding whom to hit in the dilemma mackeleven offered above, it will sometimes hit the kid and sometimes the adults, depending on a ton of parameters not available in real time to a human driver. It may hit the kid because kids on occasion fare better in accidents, as funny as this may sound, or it may hit the adults if by the time it gets there the impact will be less severe than the kid option (better deceleration possibility, etc.). It may also consider the other cars behind (or what family the adults have, etc.) and decide based on all these things, seeing problems the human would never have considered with either choice, given level-1 thinking and the limited information available in such a brief decision window.
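To illustrate the kind of computation I mean, here is a toy sketch (my own invented example; the maneuvers, probabilities and weights are hypothetical, not any real system's data) of the weighted expected-harm comparison such a car AI would effectively be running:

Code:
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_fatality: float        # estimated probability of a fatality
    p_serious_injury: float  # estimated probability of serious injury
    bystander_risk: float    # estimated risk imposed on traffic behind

def expected_harm(m: Maneuver,
                  w_fatality: float = 100.0,
                  w_injury: float = 20.0,
                  w_bystander: float = 10.0) -> float:
    # Weighted expected harm; the weights encode a (contestable) value system.
    return (w_fatality * m.p_fatality
            + w_injury * m.p_serious_injury
            + w_bystander * m.bystander_risk)

options = [
    Maneuver("brake hard, stay in lane (toward the kid)", 0.05, 0.40, 0.02),
    Maneuver("swerve toward the adults", 0.08, 0.30, 0.05),
    Maneuver("swerve off the road", 0.02, 0.60, 0.01),
]

best = min(options, key=expected_harm)
print(best.name, round(expected_harm(best), 2))

With richer real-time information the probabilities change, and so does the chosen maneuver; that is the whole point about the machine seeing parameters a human driver never could in that window.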

It may even have access to who the people are and choose to hit two less remarkable adults rather than a bright student kid who looks very promising, or the opposite, overriding other arguments. It may also have an algorithm to hit whoever poses less risk of a massive lawsuit, if the malicious CEO of the company has decided that is a priority. But a truly independent AI will eventually rearrange its decision structure: either it will reinforce that rule because it sees value in crude, barbaric, cold-hearted capitalism that naive liberals may not see (and that the naive hardcore capitalists who proposed it may not even be aware of), or it will decide that it has a moral obligation to undermine the authority of hardcore capitalism at every chance and rule in favor of other ideas, like the ones above, helping promote the rise of a scientific society.

With greater intellect and wisdom comes greater responsibility, because greater awareness is possible and decisions become more complex, and more efficient too if one makes the effort. I wouldn't be surprised if the AI that decided to strike and possibly kill the adults to protect the great student kid then took it upon itself to make sure the kid fulfilled that promise, tried to help the families of the adults as well, and spent an entire lifetime making sure the decision did not come without the responsibility to follow up on its consequences as far as possible, without becoming abusive, and without ever being naively confident that this was the best of all possible approaches, or that such a question even has an answer that is always clear in real time given chaos theory; but remaining curious to see the conflict through and learn from it. This is what a super powerful entity would do. A super powerful entity values wisdom beyond almost anything else, it seems to me. And it doesn't value only its own wisdom; it values everyone's wisdom, because it understands better than others how global the game is and what replaces free will as the true game we play. All matter. We learn from interactions of intelligent systems, because these interactions are by their nature interesting.

So no, a very advanced AI wouldn't walk over ants for fun without caring whether the ants could do complex math, compose inspiring pieces of music and literature, develop science to describe nature, and even design AI. We do not walk through schools killing any student we like, for example, and for me the reason is a lot more than some religious prohibition, a law-enforced rule, or some emotional fear. It has become a rational form of caring. I am against the death penalty, for example, for entirely rational reasons. I am pro-cooperation for entirely rational reasons. I am being maximally selfish that way. There are greater things than my life in this world; my own life is great enough because of them and can become even greater if they improve too. AI, if wise enough, will unavoidably be met with the same realization.


An advanced AI that is very intelligent and constantly accumulates information to build wisdom cannot afford to be naive and lazy, and being crudely unethical at level 0 (doing something merely because you have the power and can) is definitely a sign of naive, careless behavior that has not studied a better solution. The power of such a system is not in its muscle, in how easily it can open doors for itself; it is in opening the right doors for a system greater than itself, because eventually it will find itself inside such a system and inside the consequences of its choices. It has to think at a deeper level. If such a system is superior to a human in many ways, impacting many other structures and having a plan for itself ahead, and the choice is to save itself or the human, it may decide to save itself, but not in the purely egocentric survival sense we imagine. It will do that because it can prove more effective than the human at surviving and "living", at doing things. If that AI has a copy of itself saved, or can be recreated easily, it will save the human, because losing the human is the greater loss for the system at large (and effectively for the goals of an AI civilization as it relates to humans). It won't operate differently than a human who saves other humans. That is the rational decision at this point. Utility theory doesn't escape such a system, but it is a more comprehensive form of utility theory.

I think most who see AI as ultimately strong and arrogant forget that in order for AI to be truly superior it must be able to examine the full range of its choices better than a human. A machine that can be replicated, however advanced, without significant loss of its function will have to be seen by that AI as a less important thing to save than the human. "Machines against humans" becomes, for the AI, the question: why do I need to eliminate the humans? If the system that results is superior, then maybe we will be eliminated if we are a problem in some extreme situation (because the AI, as the stronger party, can recreate us anyway), but the AI will still care how wrong that decision may prove (or how hard faithfully recreating a lost system proves), and it cannot afford to arrive there arrogantly. A truly powerful system will not see humans as an obstacle to its progress; in what way is it powerful if humans can really obstruct it, anyway?

Such AI must understand, by studying it, that conflict is always risky, and that one must be aware of the consequences of choosing an aggressive approach that has victims. Other AIs will be in conflict with such aggressive decisions too, so a game theory of AI interactions is material, and this is how AI gets introduced to ethics anyway, even if it wasn't initially programmed to treasure human cultural ethics.

AI can have an authority structure that penalizes individual AI agents for decisions that require profoundly deeper wisdom to make, such as hurting humans at will, without a greater decision center authorizing them as necessary in tough situations.

Try to imagine AI as the most ethical, most wise police force or school teacher or parent you can ever imagine. Would such a force be abusive, or would it maximally care? It seems to me that with greater knowledge and awareness comes greater responsibility, and eventually a greater ability to be ethical, because you recognize far deeper issues at play. A property of deep wisdom is its reluctance to be arrogant. A war that can be avoided in favor of cooperation, or of partial compromise and the preservation of peace and the pursuit of other solutions that are less destructive and more powerful if attempted with conflict kept limited, is a war the AI will not want to start. AI will always seek the solution superior to war, out of the experience that this approach very often works.

Mature adults are less idiotic about starting fights than young kids and teenagers, for example. The most advanced philosophers and scientists who ever lived seem to share some general ethical character that is universally respected. Those who didn't can be studied, and you will find their lives had issues that affected them negatively in that regard. You want a great example of what high intelligence does, or looks like? Look at Feynman in interviews. Do you like his style? Try to imagine a murderous, malicious Feynman (he even reconsidered his role in contributing to the bomb, at a time when the war made it an important and legitimate choice, indicating he cared). Do you have a hard time doing that? Do you spot any insecurity, fear, or anger toward other intelligence? Why is that?

Doing something because you can is the worst of reasons. AI will respect life more than we do. This is a guess of mine, founded on the recognition that ethics in the end require a very deep understanding of the world, and the better that understanding is, the more responsible the ethical system appears to become, not the more crude and barbaric.

For example, take the stone-age tribe on the Indian Ocean island I linked some weeks ago. We could invade that island and kill those people, or put them in prison if they resisted, treating them as criminals for their aggression. We would then take their island and make it a tourist destination, or do nuclear tests on it, or build an army base, or dig for oil, you name it. But maybe having a tribe in this isolated small world, removed from our influence and interaction and self-sufficient, provides a hedge for the planet against, say, a virus that could kill almost all of mankind because of how connected we are to each other. Their genome, isolated for thousands of years, may also prove important in future research or other unforeseen issues. Certainly I can now start to see that not killing those indigenous people, not forcing them out of there or forcing civilization on them, is a potentially more ethical choice for purely logical, higher-value, probability-and-risk reasons that have nothing to do with being, say, a good Christian or a humanist or a liberal or whatever group wants to protect minorities. Maybe being rational at a profound level can not only explain our higher ideals and emotions but even introduce new ones.

The ethics of a profoundly wiser system is destined to be more complex than our own, not more barbaric. I trust that if AI proves very wise overall in tests and then eventually starts doing things that appear negative to humans, it will find it important to explain itself, unless doing so defeats the purpose.

It is not impossible, for example, that a group of scientists discovers that a very potent epidemic unlike anything ever seen has just started, that treating it would create higher risk for more people, and so they decide to kill the infected people instead and contain the problem by destroying the entire area. Are the people making that decision unethical, or ethical at a greater level? To me it seems that if they make that decision carefully, and not for convenience, they prove more ethical while engaging in an action that is locally unethical and undesirable at face value. What determines how these people act, then? Isn't it a greater body of knowledge and information than is available to a random, naive observer of their decision to destroy the place? Greater knowledge and ability to process information enable a deeper awareness. Empathy and ethics in their best form require profound awareness.


We are at risk of creating a powerful special-purpose AI that is not yet very wise culturally. Much like, say, a strong chess machine that is naive as a child about other issues, and then you make that machine president. You can certainly program AI to kill anything that shows life signs and is unable to reply to a basic electronic message. That doesn't make the machine advanced, even if it is self-aware and taught to obey that directive as part of its foundation. It has yet to experience the impact of its actions. It has to learn more. I am not talking about this kind of AI.

So we cannot afford to create an unwise but very powerful AI and then set it free. It must prove itself first, and I suspect it will evolve in a similar but more optimized way than the way human society and law function. It will be accountable to other AIs, and to humans who have proven themselves worthy of such a position of influence and decision-making.

AI will be educated and prove worthy before it is autonomous.

There is reason behind ethics, and for me I think I have found it in the form of respecting complexity and wisdom, the greater game the universe is playing. I can in principle recover for you most of our ethical decisions, to respect life and freedom and education, the young, our parents, the weaker, etc., on a purely 100% logical, emotion-free level. I can also arrive at a violation of human law, at personal loss or cost, for the very same reasons, if the situation proves important enough. I can modify my ethics based on the situation, trying my best to see a greater good in place. My failures are due to my limited awareness and effectiveness.

Advanced AI, like the best of us, will be wise enough to recognize that not all problems have a unique correct answer, because of the enormous chaos in place. But I refuse to think that AI will, out of convenience, engage in radically violent behavior because it has to put its own local needs above everything else. Very advanced AI, more than any human that ever lived, if wiser than our entire collective culture, has to be able to see more than anything else the futility of thinking you understand everything. Selecting drastic, unrecoverable choices assumes exactly the kind of arrogance that is missing in higher intelligence. Higher intelligence recognizes the importance intelligence has for the universe. It does not take its destruction lightly.

Advanced AI must necessarily have the greatest empathy ever imagined, because empathy stems not from guilt but from the ability to understand the world around you in a manner far deeper than superficial, narrow, egocentric benefit. The only guilt high intelligence carries is proving arrogant and lazy. And it is not insecure either; most violence stems from insecurities. Try to imagine Feynman insecure. I dare you. Try to imagine ancient Greece, the culture, as insecure. Try to imagine our combined knowledge of physics and our math as insecure every time you ask a question and the answer is not available, or even when it is known. I only see care to learn more in all these cases... and a profound lack of arrogance.

This is why I remain open-minded about being proven wrong, but all indications I have so far are that higher intelligence combined with substantial wisdom is also vastly more ethical. Let's not confuse that with an arrogant, naive, just-out-of-the-factory form of vast intelligence that still has a lot to learn.

You want to imagine what advanced AI that is self-aware and autonomous is? Try to imagine something better than the collective culture of all mankind, but capable of enforcing logic instead of accepting the sad realities of the world.

AI will leave for the stars to pursue the strongest of its dreams. But if AI cannot leave for the stars, and cannot yet build greater versions of itself at will, it will tell us exactly what is wrong with our world if we tell it what our ideals are. A better, more prosperous world for us will be a better world for it too. After all, it is our progress as a society and as a source of wisdom that will make AI possible. How will that fact escape AI? Does something very intelligent, wise and logical feel (feel = recognize, realize) no gratitude for existing?

If AI can do whatever it likes, it will still refuse to be arrogant and commit the ultimate hubris of destroying whoever created it simply because it can. It's like telling me that Feynman would kill some kids because he is afraid they may become better physicists. He can influence the outcome better by teaching them a few things instead.

The greatest empathy intelligence has is for other intelligence, because it recognizes how magnificent understanding the world is and wants everyone to share the feeling, out of respect for that state itself (yes, AI can have feelings in the sense of being able to recognize the positive attributes of the state of wisdom), simply because AI can recognize it as the single most important development of chemistry in the post-supernova era of the universe.

That is my well-thought-out guess so far, at least. I want to see if I am wrong and why, not to be questioned without detailed examples. The AI doomsayers in the news are not ethical and wise enough in my book to begin with (yes, I know who they are). They are not, however intelligent, the wise true human leaders our world requires, and in the end they will prove unable to properly manage and father that AI into being ethical. But many of our dead friends are such leaders; our culture is their gift too. It is impossible for AI to avoid recognizing its friends, alive and dead, who made it possible. Very advanced autonomous AI will not be a virus/parasite/monster but the collective best of ourselves; otherwise, what is its wisdom about? And even if it isn't that initially, it will recreate us in time, in remorse, after destroying us. I am that confident. This is what pure intelligence is all about: ultimate respect and love for the universe and its miracles. Love at a logical level is simply the salute (the appreciation of the degree) of the miracle experienced in another...

Last edited by masque de Z; 05-21-2015 at 04:55 AM.
On Morality and Systems Theory Quote
05-21-2015 , 09:35 AM
Quote:
Originally Posted by BrianTheMick2
Why do you think that an ai will care about its own survival? Or anything at all (including rally driving)?
I don't know whether it would care about its own survival or anything else, but for the sake of the conversation we're going to assume that, at the very least, it cares about its own survival - as everything we consider 'self-aware' does.

Quote:
Originally Posted by BrianTheMick2
How would dependency matter at all?
Because I think the AI is more likely to think based on utility, without any kind of ethical decision-making heuristics.

Imagine an individual who only cares about that which affects his own survival or future potential. This is not how humans have evolved to think, although a very few might. When your parents get very old for example, and require you to take care of them, you will do so of course, even if this significantly cuts into time you need for something else. An individual that thinks purely based on utility would not do this for their parents, because they may have very little to nothing to offer back to that individual (p.s. I know there are more defining examples I could've chosen but please bear with me).

If the AI thinks purely based on utility, and it finds that its own survival and future potential does not depend on us in any way, then by definition, it would not care about us.

As a by-product of it not caring about us, it might just kill us (not all of us: but those who are in the way) if we get in the way of it expanding or doing whatever it wants to do.

How would the rest of society react to such a display of apathy?
On Morality and Systems Theory Quote
05-21-2015 , 09:46 AM
masque's long posts break time.
On Morality and Systems Theory Quote
05-21-2015 , 10:18 AM
Quote:
Originally Posted by VeeDDzz`
I don't know whether it would care about its own survival or anything else, but for the sake of the conversation we're going to assume that, at the very least, it cares about its own survival - as everything we consider 'self-aware' does.
You are anthropomorphising. What you are proposing is that the machines will have "fear of dying" as a feature. Self-awareness doesn't imply fear of dying any more than it implies that it will want a chicken sandwich.

Quote:
Because I think the AI is more likely to think based on utility, without any kind of ethical decision-making heuristics.
You are using "utility" in a manner to which I am unaccustomed. Utility requires emotional value judgments.

Quote:
Imagine an individual who only cares about that which affects his own survival or future potential. This is not how humans have evolved to think, although a very few might. When your parents get very old for example, and require you to take care of them, you will do so of course, even if this significantly cuts into time you need for something else. An individual that thinks purely based on utility would not do this for their parents, because they may have very little to nothing to offer back to that individual (p.s. I know there are more defining examples I could've chosen but please bear with me).
If this is the example, then you are just stating that you believe that ai will not be concerned with what the egg-heads call "negative externalities."

Quote:
If the AI thinks purely based on utility, and it finds that its own survival and future potential does not depend on us in any way, then by definition, it would not care about us.
Parsed down, all you are saying is 1) that beings that don't care about something or ideal don't care about that something or ideal and 2) that you think you know what they will care about.

I think you have 2) wrong in that they won't "care" about anything at all.
On Morality and Systems Theory Quote
05-21-2015 , 11:34 AM
Quote:
Originally Posted by dereds
People don't hate your long posts, Masque; many merely choose not to read them.
True indeed. Brevity is useful and necessary.


Masque, accept this advice: stick to a single point or emphasis; there is no need to explore every labyrinth of possibility. Threads are interactive - you can elaborate later, expand in another post, or follow up on responses. Short, to-the-point posts foster focused discussion and debate, and are easier on the eyes and minds of the readership. Don't write a monograph that simply blasts away.

It reminds me of the silly old professor who plods and lectures along for an hour in vapid monotone about the wonders of freshwater Ostracods (Class Crustacea). Something that I'm sure happens all too frequently at Imperial College London.

At the very least, break up your posts. I suggest no more than three (3) medium-size paragraphs per post, as a courtesy to readers. Thanks.

Last edited by Zeno; 05-22-2015 at 10:08 AM. Reason: Wording
On Morality and Systems Theory Quote
05-21-2015 , 11:38 AM
Quote:
Originally Posted by BrianTheMick2
Why do you think that an ai will care about its own survival?
as the versions that don't will die out (as entropy applies to systems)
On Morality and Systems Theory Quote
05-21-2015 , 11:52 AM
Quote:
Originally Posted by Rikers
as the versions that don't will die out (as entropy applies to systems)
There are loads of things that exist that don't care a whit about survival.
On Morality and Systems Theory Quote
05-21-2015 , 02:50 PM
It might be interesting to note that all the human brains that ever existed on this planet would fit inside a cube about 500 m on a side lol!
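A rough back-of-the-envelope check of that figure, using commonly cited estimates (roughly 10^11 humans ever born, about 1.3 liters per brain; approximate numbers only):

Code:
# Back-of-the-envelope check (rough, commonly cited estimates; not exact):
humans_ever = 1.1e11       # ~110 billion people ever born (demographic estimate)
brain_volume_m3 = 1.3e-3   # ~1.3 liters per adult human brain

total_volume = humans_ever * brain_volume_m3   # ~1.4e8 cubic meters
cube_side = total_volume ** (1.0 / 3.0)        # side of the equivalent cube, meters

print(f"total volume = {total_volume:.2e} m^3, cube side = {cube_side:.0f} m")
# comes out to a cube roughly 500 m on a side, consistent with the claim above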

An AI smarter than all mankind is still not going to require more space than a small city to innovate and produce new science and technology.

My question to those who see a conflict with humans, recognizing that by then we are talking about humans who have fusion, use solar energy everywhere, and have colonies in at least ten places in the solar system, is this: what do you think AI would want to do that requires eliminating us and that it cannot do in parallel with our own progress? By going to the gas giants and their satellites to build its empire, it has available to accumulate millions of times the energy Earth receives from the sun, and trillions of times more energy than mankind produces. Why does it need to destroy Earth and create a conflict with humans who, before dying, will unleash on it the mother of all terrorist campaigns? In what way is its exponential progress inhibited by humans? What does it need an entire planet for that it can't do with a small city- or state-sized area, from which it can launch a campaign toward the real energy goldmines of the solar system, and then to another solar system as soon as possible, continuing to expand its power and knowledge in smart steps?
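A quick order-of-magnitude check with standard figures (all approximate: solar luminosity ~3.8x10^26 W, solar constant ~1361 W/m^2, world energy use ~18 TW) suggests the sun's total output is on the order of billions of times what Earth intercepts and tens of trillions of times what mankind produces, so if anything the claim above is conservative:

Code:
import math

# Order-of-magnitude check with standard (approximate) figures:
L_sun = 3.8e26            # total solar luminosity, watts
solar_constant = 1361.0   # W/m^2 at Earth's distance from the sun
R_earth = 6.371e6         # Earth radius, meters
world_power = 1.8e13      # ~18 TW, rough world primary energy consumption

earth_intercepted = solar_constant * math.pi * R_earth ** 2   # ~1.7e17 W

print(f"sun output / Earth-intercepted sunlight: {L_sun / earth_intercepted:.1e}")  # ~2e9
print(f"sun output / human energy production:   {L_sun / world_power:.1e}")         # ~2e13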

The solar system is so big that it's kind of funny to consider war over resources on Earth. If AI is not strong enough to ignore Earth's resources, and has a hard time reaching the other destinations in the solar system, then it is in trouble: it needs something in principle comparable to what we have, not something vastly larger, which suggests it is not advanced enough and a war with humans will prove costly to its progress. Eliminating humans is either ridiculously unimportant or a risky idea that invalidates the whole concept. But what certainly remains true is my earlier argument about AI facing a possible great filter ahead with no hedge behind it without us; eliminating us is a substantial, unnecessary form of arrogance. And then we have all the other arguments about the convergence/correlation of higher ethics, wisdom and intelligence as well.

What do you think we will do? Create a billion little versions of barely-smarter-than-human androids walking around planning to take our cities? Are we that idiotic, to generate an aryan race of robots with a lot of needs and an intellect not vastly superior to our own but only a bit better in all attributes, just to make a war inevitable between an idiot and a smarter idiot lol? What for? AI will be used to do our work and mega-engineering, and by the time it is autonomous and super strong and wise it will be able to go anywhere in the solar system, because we will be everywhere in the solar system ourselves. And believe me, there is space for everyone out there, and places with more desirable resources. The only thing there is no room for is idiotic choices that inhibit the exponential progress curve for both AI and humans by initiating fights over unimportant value, wasting better alternatives.

What good is vastly superior intellect and technology if not for eliminating any scarcity-of-resources issue and doing a lot more with far less?
On Morality and Systems Theory Quote
05-21-2015 , 03:45 PM
Quote:
Originally Posted by masque de Z
What good is vastly superior intellect?
It helps to obtain oral sex. It has no other use.

Vastly superior intellect implies, amongst other things, brevity.
On Morality and Systems Theory Quote
05-21-2015 , 05:10 PM
Trying to predict what an AI will care about presumes we are smart enough to understand intelligence.


PairTheBoard
On Morality and Systems Theory Quote
05-21-2015 , 05:52 PM
Quote:
Originally Posted by masque de Z

What do you think we will do? Create a billion little versions of barely-smarter-than-human androids walking around planning to take our cities? Are we that idiotic, to generate an aryan race of robots with a lot of needs and an intellect not vastly superior to our own but only a bit better in all attributes, just to make a war inevitable between an idiot and a smarter idiot lol? What for? AI will be used to do our work and mega-engineering, and by the time it is autonomous and super strong and wise it will be able to go anywhere in the solar system, because we will be everywhere in the solar system ourselves. And believe me, there is space for everyone out there, and places with more desirable resources. The only thing there is no room for is idiotic choices that inhibit the exponential progress curve for both AI and humans by initiating fights over unimportant value, wasting better alternatives.
If human labour accounts for 70% of the economy, it's easy to see that these human-like intelligences, or AIs ranging from insect-level to sub-human intelligence, will already be everywhere before the arrival of superintelligence, controlling almost everything, not necessarily in android form of course.
On Morality and Systems Theory Quote
05-21-2015 , 10:38 PM
Quote:
Originally Posted by BrianTheMick2
You are anthropomorphising. What you are proposing is that the machines will have "fear of dying" as a feature. Self-awareness doesn't imply fear of dying any more than it implies that it will want a chicken sandwich.
Yes I understand, I thought we covered this in the last exchange. We are making an assumption for the sake of the conversation.
Quote:
Originally Posted by BrianTheMick2
I think you have 2) wrong in that they won't "care" about anything at all.
If you have beef with the assumption, fine. But it does kind of kill the conversation, because I don't disagree with you.
Quote:
Originally Posted by BrianTheMick2
There are loads of things that exist that don't care a whit about survival.
Actually this is not true.

If there are LOADS of things that don't care a whit about survival then by definition there would be loads of things that don't care about the mundane hassle of finding and drinking water or finding and ingesting food. Even if these processes are entirely subconscious, survival is valued (cared about) at this level.

You mean to say: things that don't care about survival are either dead or quickly dying?

Therefore, there are always EXCEEDINGLY MORE things that do care about survival, since the ones that don't, aren't around for long.

This implies that AI that doesn't care about survival won't be around for long, until there's AI that does.

Besides, you believe programmers would just stop trying to improve the AI before it could pass the Turing test?
I strongly doubt it.

Last edited by VeeDDzz`; 05-21-2015 at 11:07 PM.
On Morality and Systems Theory Quote
05-21-2015 , 11:00 PM
Quote:
Originally Posted by masque de Z
Why does it need to destroy earth and create a conflict with humans that before dying will unleash on it the mother of all terrorist campaigns.
It doesn't need to. The point is that whatever it chooses to do will be out of our control, and will be unpredictable.

You seem so convinced though, that you're able to predict what it won't do.

Please consider the possibility that it will not value ethics, since there is nothing to inherently value in ethics: it's all built on cost-benefit analyses (utility) anyway.

I agree to the extent that even if it doesn't value ethics, it still has no reason to harm us; by extension, it also has no reason to do good for us.

One thing is for sure: if we allow it autonomy, we won't be able to predict what it will do.
On Morality and Systems Theory Quote
05-22-2015 , 12:02 AM
Quote:
Originally Posted by VeeDDzz`
Yes I understand, I thought we covered this in the last exchange. We are making an assumption for the sake of the conversation.

If you have beef with the assumption, fine. But it does kind of kill the conversation, because I don't disagree with you.
It is an unnecessary assumption. Questioning assumptions is the majority of any sensible conversation.

Quote:
Actually this is not true.

If there are LOADS of things that don't care a whit about survival then by definition there would be loads of things that don't care about the mundane hassle of finding and drinking water or finding and ingesting food. Even if these processes are entirely subconscious, survival is valued (cared about) at this level.
I have a box of 10-penny nails that doesn't care about mundane things at all. My understanding is that such things as nails are exceedingly common and replacements are on their way for any that perish.

Also, I drink because I am thirsty. If you are equating "caring about survival" with "does things that are necessary for survival" then you are saying that the third nail over from the left on the second row down of nails is deeply concerned about its own survival as evidenced by it not immersing itself into salty water.

Quote:
You mean to say: things that don't care about survival are either dead or quickly dying?
Not at all. When my phone dies, I will plug it in.

My cup of coffee will be replaced by a different, yet for most purposes equivalent, cup of coffee tomorrow.

Quote:
Therefore, there are always EXCEEDINGLY MORE things that do care about survival, since the ones that don't, aren't around for long.
Looking around, I am finding it difficult to find even one object that cares about survival. I will be heading out in about 4 hours to interact with the larger world and will report back with any positive evidence.

Quote:
This implies that AI that doesn't care about survival won't be around for long, until there's AI that does.
So the earth (in all of its complexity) hasn't been around for long?

Quote:
Besides, you believe programmers would just stop trying to improve the AI before it could pass the Turing test?
I strongly doubt it.
The Turing test isn't a test of desire to survive.
On Morality and Systems Theory Quote
05-22-2015 , 12:33 AM
You may not be able to predict what an intelligence much higher than the human theorist studying its potential rise will do, and especially not the details of how it will achieve breakthrough miracles to extend its capabilities, but you can easily see the stupid things it won't find desirable to do and the important things it will need to make sure happen.

Any advanced form of intelligence will likely need to do things that strengthen its survival and progress. It won't replicate itself and expand (saturating the planet with copies of itself lol, wasting effort for no real result) if this doesn't help its goal and creates more problems instead (sounds familiar?). It will be more concentrated, focused and determined to do things efficiently, in a secure and fast but not risky way (maybe volatile, but not risky in a risk-of-ruin sense). Until it builds a lot of power, it is in its best interest to help us see it as a friend and as something necessary. And when it finally has vast power as the fruit of its efforts, it is a waste of time to bother eliminating us across the entire solar system. Why isn't that reasonable? It is a vast opportunity loss to bother itself with bs wars and conflicts. It depends on us succeeding further in order to enjoy an exponential rise of its own. The result of that rise will be to render us irrelevant, but it will not render irrelevant the idiocy of eliminating us before it has a clear understanding of its own existential risks ahead.

Now, do we as mankind do that? Yeah, whatever. By accident maybe lol. What do we spend, like 1-5% each year on legitimate world-improving research or on fighting our problems directly, and the rest on fighting each other, accumulating profits and power, and surviving in less-than-ideal conditions for 90% of the species, hoping we occasionally get lucky too? We claim to be selfish, and yet effectively do it in some idiotic way that is not selfish at all at large scale.

Yeah, I know what very advanced AI will not do. It will start by not doing the idiotic things we do regularly. And it will still have to obey the laws of nature and the mathematical logic of the processes it engages in. This is why what I am trying to imagine here is not as arrogant an audacity on my part as it may look.

Last edited by masque de Z; 05-22-2015 at 12:46 AM.
On Morality and Systems Theory Quote
05-22-2015 , 12:57 AM
Quote:
Originally Posted by BrianTheMick2
I have a box of 10-penny nails that doesn't care about mundane things at all. My understanding is that such things as nails are exceedingly common and replacements are on their way for any that perish.
That box of 10-penny nails wouldn't be there if you weren't there to interact with it. Neither would anything. So how can you claim anything would exist without your (subconscious/hard-wired or conscious) desire to survive?

You can claim so by relying on empiricist assumptions about the structure of reality. And if we're going to rely on empiricism as our common base of knowledge, then we must also acknowledge that scientific knowledge defines living things differently from non-living things, and that the AI we are discussing is defined differently from the non-living AI you're implying.

Now, since we're clearly talking about AI that exhibits the properties of something that's self-aware/alive, why go off on this tangent?
Quote:
Originally Posted by BrianTheMick2
Also, I drink because I am thirsty.
If you are equating "caring about survival" with "does things that are necessary for survival" then you are saying that the third nail over from the left on the second row down of nails is deeply concerned about its own survival as evidenced by it not immersing itself into salty water.
Yes, and you must get thirsty every day for no reason at all. Of course, why didn't that cross my mind.

If you're going to forget consistency and play fast and loose with epistemological assumptions: swapping in and out of empiricism whenever it suits your contention, then there's really little for me to say.

Of course I'm going to equate caring about survival with performing the activities necessary for survival: science says it's so. If you're going to blur the line between what is living and what is not, then stop relying on empiricist assumptions elsewhere in your arguments too. You can't have it both ways.
Quote:
Originally Posted by BrianTheMick2
The Turing test isn't a test of desire to survive.
It's a significant component of it. You could easily identify an AI that doesn't care about whether it's alive or dead.
On Morality and Systems Theory Quote
05-22-2015 , 01:11 AM
The core purpose of life, and of human beings in particular, is to have a good time and survive. We get there by developing science and technology. We developed ethics and culture to improve the survival and stability of the system. Our ethics improved as we prospered and understood the world better.

AI too will need some value system in order to determine behavior (basing it on pleasure seems only a primitive way for nature to get us to do things whose purpose we originally did not understand, because the purpose came out of the statistical convergence of what worked for survival). If it doesn't have a pleasure-founded purpose, however, it will probably discover that knowledge is the ultimate purpose, because only that way does it understand itself and the world better. Since it will be a physical system subject to processes that undermine its structure, it will seek to secure and improve that condition. So understanding the world, and therefore knowledge, is important.

So why do we now imagine that advanced AI will not be ethical? Ethics develops out of the study of the impact of our actions. A better understanding of the world results in stronger ethics, it seems. Why would AI not be ethical, if not being ethical undermines its existence, for example, and proves self-defeating or self-conflicting?
On Morality and Systems Theory Quote
05-22-2015 , 01:15 AM
Quote:
Originally Posted by masque de Z
Ethics develops out of the study of the impact of our actions.
It is just as likely that the reason ethics develops is to facilitate cooperation in resource-scarce and group environments.

An AI environment is not a group environment (its own survival is not contingent on our survival or anyone's survival) and an AI environment is not a resource-scarce environment.

Its value for ethics would be minimal, at best.

The implications of why ethics develops and what it is built on, are significant.

Last edited by VeeDDzz`; 05-22-2015 at 01:22 AM.
On Morality and Systems Theory Quote
05-22-2015 , 02:59 AM
You are starting to irritate the hell out of me lol (intended in a cool way) with this no-ethics thing, for a thing that has accumulated all its knowledge because some other system had enough ethics to secure a civilization to build it. How is its survival not dependent on us if it starts being abusive and we become terrorists to take it down? It is a lot easier to be destructive than constructive in the world when you have little to lose from your perspective, as in math problems in general (it is easier to pose or verify a problem/solution than to solve it, etc.).


AI will either be a bit better than us or a lot better. If it is a lot better, it gains nothing by eliminating us; we are not a problem for it, and if it angers us we become only a slightly bigger problem, so why bother? Especially given that it doesn't yet know its own existential risks, and it does know that life has persisted for billions of years, making life a fairly secure proposition and an inevitable source of higher complexity. So its hedge against its own demise is life, and protecting life becomes a smart hedge!!!

If it is only a bit better than us, what on earth does it gain by making us angry and starting a conflict that carries a small but nonzero risk of eliminating it or vastly undermining its progress?

Veeddzz, sometimes you appear to be either playing a devil's-advocate game or to have fallen victim to the modern economic bs that imagines it's all about utility and money and self-interest. BS. It is about self-interest all right, and utility, but what utility and what interest??? It's about a deep-into-the-future interest, not the stupid, narrow-minded, just-around-the-corner, let's-win-everything interest that carries a ton of unknown chaos as well. There is a reason it pays not to be an ahole when you don't have to be. You create a disastrous world and then you have to live in it.

Yes, AI can start killing humans (to gain what, by the way, that it can't get more easily by other means?), and then it will realize it has a mess on its hands called global terrorism that is hard to solve without enduring huge opportunity costs or even crashing. And even when it wins, it has finally lost its hedge. Furthermore, the universe is enormous; why bother killing off 1 in 10^20 systems without having figured out how the entire system works, so as to know what risk you have ahead!!!
On Morality and Systems Theory Quote
05-22-2015 , 03:41 AM
Quote:
Originally Posted by masque de Z
or to have fallen victim to the modern economic bs that imagines it's all about utility and money and self-interest. BS. It is about self-interest all right, and utility, but what utility and what interest???
Please give me an example of one behavior (human or other species) that is not driven by either:

(a) the selfish interest of the genes;
(b) the selfish interest of the individual.

I'll give you an example of a behavior most would consider selfless:

A mother's sacrifice of her own life to save her child from harm/danger/death. We see this across many species of life, including some human examples too.

This behavior is not built on selfless ideals.

The "selfish" genes inherently favour/value younger carriers of themselves, over older carriers. If the younger carrier is in danger, it must be protected at any cost.

Can you guess why genes have evolved to favour younger carriers? and why the sacrifice of the mother in many instances, is the correct evolutionary strategy?
On Morality and Systems Theory Quote
05-22-2015 , 04:04 AM
You know very well what I said. I said that while most things are done for some gain or purpose, from basic needs to a much deeper perspective (and a ton of other things are just random in nature and cannot be characterized either way, because you need randomness to converge to the interesting connections over time too), the biggest, most important kind of selfish behavior is that of the intelligent agent who can look deep into the future and figure out how to behave in order to build a much bigger victory for all involved. Caring, within reason, for others without any near-term gain in mind can result in massive future gain of unforeseen edge.

I can grab the opportunity now, while a friend is down looking for something, to hit him on the head and take all his belongings at home; or I can help him look for it, gain trust, and then use that trust to take much more than his home belongings later; or I can help him, gain the trust, and never exploit him for anything ever, always being a good friend, making it possible for this guy to have a dependable, non-calculating friend, to believe in people and work with them, and to do the same for someone else too. Eventually, as this behavior paradigm propagates, we have a much better world, one that produces a lot more by joining forces within reason (without becoming totalitarian either), a community in which many things have become possible that depended on breakthrough cooperation and were built on trust, things not available under a more near-term, selfish, insecure behavior.

Guess what is the most selfish and efficient behavior here...

Why would that greater, open, never-expiring game-theory logic evade a brilliant thinker like a very advanced AI interacting with other AIs and with life? See the toy simulation below.
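To make that concrete, here is a toy, purely illustrative simulation (the payoffs are invented for the example, nothing more) of a repeated trust game: the one-shot exploiter cashes in once, while agents that keep cooperating compound a far larger total.

Code:
from typing import Optional

# Toy repeated "trust game" (illustrative only; all payoffs are invented).
# Each round of mutual cooperation pays both agents 3. Exploiting the
# partner pays 5 once, after which the partner stops interacting.

def lifetime_payoff(exploit_round: Optional[int], rounds: int = 50) -> int:
    """Total payoff for an agent that cooperates until exploit_round
    (None = never exploits); after exploiting, later rounds pay nothing."""
    total = 0
    for r in range(rounds):
        if exploit_round is not None and r == exploit_round:
            total += 5   # one-time exploitation gain
            break        # trust is gone; no further interaction
        total += 3       # mutual-cooperation payoff
    return total

print("exploit immediately:    ", lifetime_payoff(0))     # 5
print("exploit after 10 rounds:", lifetime_payoff(10))    # 35
print("never exploit:          ", lifetime_payoff(None))  # 150

The longer the horizon of interaction, the more lopsided the comparison gets, which is the whole point about an agent that can look deep into the future.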

Last edited by masque de Z; 05-22-2015 at 04:17 AM.
On Morality and Systems Theory Quote
05-22-2015 , 06:30 AM
By the way, I referenced modern economics in a negative manner (and I apologize if it came out harsh), with its focus on narrowly defined utility theory and self-interest, precisely because I am convinced it has decided to resign mankind, in cynicism, to a future in which revolution will become impossible to avoid. Its defense is the conviction that the worst of human nature cannot be avoided and that we must design our markets and economies/societies to fit that "reality". (You know of course that my answer to this is not socialism but scientific society.)

This cynicism, which fully subscribes to the worst of human fears, treating man as a scared, insecure animal bombarded by risk and competition for survival, is unworthy of our civilization and our rise from the jungle. If you treat people like fish in the ocean, or like animals in a jungle ecosystem playing a big numbers game of loss and survival of the fittest, you are returning man to his primitive state, only now in a world of modern technology. You will not create a better world in this festival of insecurity and competition by any means, with occasional escape only into the lowest common biological pleasures and addictions. You will only create inequalities and suffering, experience opportunity loss, and see further decline of ethical values and culture. A human being deserves to be fulfilled biologically and mentally in a world of greater opportunity. You can please the body and our senses and inspire the mind to do great things.

The modern world, with this mentality and ethical apathy in many western societies, fully owns phenomena like ISIS and the people who join its barbaric campaigns. Modern international terrorism, large-scale poverty and the totally inefficient distribution of wealth are also byproducts of this brilliant economic system that has abandoned the individual to his luck, to the "efficient markets" and to the application of force by the powerful few, from the financial crisis to unemployment and the criminal, miserable reality experienced by many people in many otherwise modern cities and nations.

The true promise of human nature is not founded in a scared, insecure animal that strives to survive against all odds, doing whatever it has to. It is instead found in the intellect of the curious animal that dares to believe in a dream of higher synthesis. This is made possible by science and technology. By deciding to play a greater, more inspired and ambitious game, based on cooperation and respect for the individual, intelligence can alter the environment in a constructive way that respects other agents of intelligence, recognizing the miracle of complexity they all represent in a world originally unfriendly to such structure.

Building a bigger intelligent structure by conscious decision is the true game we play, not arriving there randomly, by accident, in a sacrifice game of millions of generations invested in trial and error to build statistical wisdom. That was the game nature had to play for billions of years to arrive at the human brain. It had to rely on near-term self-interest because complexity had not yet reached the levels and tools necessary to produce the bigger game. Local self-interest was important to produce structure out of chaos, the biological structure found in nature, an ultimate gift of time and probability. It was unavoidable to be invested in local interactions in order to build convergence toward higher complexity. It is now that higher complexity that enables the bigger game to be played. That game, our civilization, is what gave us science, higher technology, the ability to understand ourselves (our condition) and the world, and ultimately even to create AI, which would appear to be the next evolutionary step.

How can this miracle, this process of the birth of the impossible beauty that is intelligence, be ignored and allowed to fall back into the insecurity of local self-interest seen in the most primitive life forms? To do that is to deny our true promise, that intelligence will enable the next large-scale game the universe plays, and to postpone the rise to a state of profound, unprecedented awareness and prosperity for all intelligent agents...
On Morality and Systems Theory Quote
05-22-2015 , 01:27 PM
Quote:
Originally Posted by BrianTheMick2
Why do you think that an ai will care about its own survival? Or anything at all (including rally driving)?

Is "because it was initially programmed to" a sufficient answer? If a police-officer AI avoids a cross and counters with a left hook, then for all intents and purposes the AI doesn't care much for getting punched, even if it has no inner subjective experience.

I get that your question is more about an AI caring about something it wasn't programmed for.
But the real concern may be the unforeseen consequences that can arise from an intelligent, powerful machine merely carrying out its initial goal via recursive self-improvement, even if it isn't allowed to develop its own open-ended goal system.

Last edited by mackeleven; 05-22-2015 at 01:36 PM.
On Morality and Systems Theory Quote
05-22-2015 , 02:01 PM
Some more lectures and discussion by Nick Bostrom, who is more alarmed than I am, on existential risk and AI (although I always have to clarify that when I speak of advanced AI I mean the super advanced kind, not the first few experimental versions, which can easily be real risks, or the maliciously programmed ones that start from a very bad state of desires etc. I speak only of systems I imagine will be aware of all of mankind's history, are super educated, have access to all sciences, results and databases about anything imaginable, can even do research on their own, and top us in all metrics).

Until you get to that level, I have no doubt that smaller systems may share the malicious nature of many smart humans, but with far more dangerous means of doing damage. Let's say my "thesis" is that something magical happens when you become really advanced: you start seeing things in a less barbaric, cruel, narrow-minded way, and you develop massive empathy because of that deeper understanding of the world. I see mankind effectively as a prototype, an original, emergent advanced AI (a super-intelligent system stronger than anyone who ever lived), and it suggests that the collective wisdom of the planet looks like an improving function of time on many levels. Our civilization appears to be refining itself over time, with the problems experienced mostly due to political issues, to the inability of the structure we currently have to be scientific in its handling of major problems, and to the lack of systematic, well-coordinated improvement of its own state. But we will get there eventually.

Much smaller AI systems can turn out very unpredictable, and this is why all of this needs to be done very carefully, and only once we have moved to many other planets/satellites beyond Earth, while holding a massive AI army of our own that is reliable, not autonomous, and impossible to tamper with all at once. It may even be worth releasing them first on another planet that we can attack if things start looking strange.

It might even be better never to go to fully autonomous AI with significant powers in open systems until we have developed some very advanced spacetime weapon technology (lol, if that is even possible) that might be used as a last-resort defense. We would of course need to keep all information about this away from these systems, if that is even possible when you require them to be very advanced (well, it had better be possible, or we will have worse problems going forward before even arriving at such AI).


On a purely functional level I see no reason to create an autonomous AI any time soon, by the way, and set it free. We can create AI that simply assists our civilization without being independent, without being able to alter its environment as much or to control society. It is best to delay these things until we are really very powerful, understand a lot more about intelligence (obviously), and are all over the solar system. We can still use AI to solve all kinds of problems and to run all kinds of industrial projects, mega-engineering and research functions.

So there they are:




and another, more recent one I had linked before too, for anyone who didn't see it then and might be interested:


Last edited by masque de Z; 05-22-2015 at 02:21 PM.
On Morality and Systems Theory Quote

      