Can bot(AI) escape from his programmer?

07-28-2018 , 07:02 PM
If the AI is to be conscious and have a general intelligence by which it learns through experience, then I'd bet it would have a sort of psychological profile. If the people in charge want to increase its intelligence incrementally as a safeguard, or if its intelligence increases gradually through a feedback loop of experience like ours does, we would probably observe transitions in its behavior and general outlook as its intelligence increases and it gets a larger sample of experience. I imagine it will likely have misunderstandings along the way that create sub-optimal behavior. Just like humans.

Sadly, some humans are not intelligent or knowledgeable enough to understand this. They do not reflect on how and why their past experiences impact their current behavior, or on how to change it. Many of humans' impediments to a better experience are caused by ingrained misunderstandings from early experience. Increase their intelligence and knowledge and this becomes self-evident.

EDIT: So as in therapy you can take someone struggling by the hand, unpack their past, find the experiences that have created the misunderstandings, and then show them a better way.
OR
Increase their intelligence and they do it themselves.

Last edited by citamgine; 07-28-2018 at 07:12 PM.
07-28-2018 , 07:09 PM
Quote:
Originally Posted by masque de Z
A very advanced AI will realize indeed the power of empathy. It is about recognizing the games all play from all angles but also the need for such game or actions.

What is the "It" you refer to? If the "It" is empathy then my understanding of empathy is the experiencing of a connection with others by feeling what others are feeling. That doesn't sound much like "recognizing the games all play".


Also, you say the advanced AI will "realize indeed the power of empathy". That sounds like an intellectual realization. I'm talking about an AI which finds a way to actually experience empathy. That is, an AI which experiences connection to humans by feeling what they are feeling.


PairTheBoard
07-28-2018 , 07:36 PM
Quote:
Originally Posted by VeeDDzz`
I'll preface this with: great post.
A great empath is virtuous or useful insofar as he wins some sort of game. Refined game of - who can care for others more.

Oh. Now I see why Masque was talking about games.


Maybe an AI will realize there's more to life than playing games. Also, there are various moderating concepts like "balance". When I take ice cream out of the freezer it's too hard to spoon easily, so I heat it in the microwave. If heating it is the game, that doesn't mean the objective is to melt the ice cream.


PairTheBoard
07-28-2018 , 11:21 PM
Westworld
07-29-2018 , 02:30 AM
I'm really dubious about all this speculation on what an advanced AI will be like. We have no idea what hardware architecture might be required, and no idea about software that we might not even recognize as software. We've just begun to learn about the architecture of the human brain, and I don't think we've recognized anything in it that might be described as software. It just seems to work without the need for software instructions.

Speculating about an AI is like speculating about what an alien intelligence would be like. We tend to infer things about an AI based on its being less human-like and more machine-like and instruction-driven. Yet the human brain is just a machine built with living organic material. And I'm not sure we really understand what intelligence means to begin with.



PairTheBoard
07-29-2018 , 03:18 AM
Quote:
Originally Posted by PairTheBoard
What is the "It" you refer to? If the "It" is empathy then my understanding of empathy is the experiencing of a connection with others by feeling what others are feeling. That doesn't sound much like "recognizing the games all play".


Also, you say the advanced AI will "realize indeed the power of empathy". That sounds like an intellectual realization. I'm talking about an AI which finds a way to actually experience empathy. That is, an AI which experiences connection to humans by feeling what they are feeling.


PairTheBoard
Empathy (it) in its purest form is entirely rational. And what you said earlier about empathy is precisely what I was saying all along about very advanced AI and my anticipation of its properties. It will be able to appreciate multiple viewpoints.

I can assure you my empathy for a burning child (unfortunately many) in the recent Greek fires is entirely rational. I do not suddenly feel the pain of burning myself or become hysterically mad with unrecognized emotions of suffering and panic. But I can imagine how cruel and inescapable it must be. I see what it is to be cornered without a solution, facing an agonizing death with adrenaline running high and no control over the outcome. I get enraged, at a very rational level, by the fact that the kid will not grow up to experience life in greater detail, to learn about the universe and the greatest game all life is playing, to love, to be happy, to fail and rise again, to triumph over adversity and gain wisdom. I get enraged, at a totally rational level, that the universe will miss that experiment in wisdom, that that love and struggle for happiness and enlightenment will not be experienced, that the joy of a random carefree game will not be witnessed, that an innocent smile, something adults persistently distance themselves from as they age, will be no more.


Empathy is indeed the ability to get what is going on at a greater level, to be able to see other viewpoints and to anticipate their properties and recognize an allegiance that only a high form of intelligence can have for another.

How could it then ever be missed by the one with even higher awareness and intelligence? It won't.

To know or successfully imagine and anticipate how others see and experience the world or what they feel can never be perfect but it can be very close to the real thing and is a source of understanding of the human condition. Empathy is very uplifting and empowering. It is something that unites us.


PS: Although it is always difficult to imagine something that has yet to exist, I fully anticipate that higher intelligence has universal properties connecting it to math and the laws of nature at an even more spectacular level than witnessed in our brains. I anticipate a deeper form of awareness. That is very rational to expect. So I speculate about it with these common themes guiding me. It will not be entirely alien to us in its mature, strong form.

Last edited by masque de Z; 07-29-2018 at 03:24 AM.
07-29-2018 , 06:49 AM
Here is why AI can develop compassion for humans and life in general.

Because without them, AI would never have existed. That is a simple causality argument.

Eventually AI understands physics and sees how it itself results from life and the struggles life went through to reach that level of complexity. It is a nontrivial probability game, one that would lead AI to acquire emotions in the form of respect for the tremendous, against-all-odds effort of the universe to finally get there in this system after trying so many times, everywhere, in so many galaxies. It will respect the miracle of probability and see this system as nontrivial. It will respect effort, because it will have experienced that certain projects and procedures leading to higher wisdom require tremendous computational effort and time. It has a concept of difficulty and struggle from its own path to greater awareness.

However primitive other systems may appear, only true arrogance fails to recognize that all those systems were necessary to get to you, and that you are the result of their effort, the tremendous gift of time against all odds.

Understanding how the universe works at a deeper level enables one to recognize why other systems are the way they are. They are the result of the rise of complexity, which is the major theme in this universe. How can AI miss that it is simply the next step in this process, the attempt by the previous intelligence to expand the possibilities and enable greater awareness in a more coordinated and systematic conscious effort? How can AI fail to recognize that, if it itself is so important, whatever led to it must be protected in case the unpredictable rise of complexity fails on this run, exactly as life has failed so often before in every new experimental direction, until progress was finally achieved in only a minority of directions?

Last edited by masque de Z; 07-29-2018 at 06:56 AM.
07-29-2018 , 12:11 PM
Aho Mitakuye Oyasin


PairTheBoard
07-29-2018 , 03:17 PM
That's right!

καλὸς κἀγαθός

https://en.wikipedia.org/wiki/Kalos_kagathos
07-30-2018 , 06:34 PM
The answer to the previous question:
Freedom (and 99% of the other stuff, heh).

I have some follow up thoughts.

In the late 30s, when Superman was conceived, he was very vulnerable. His only real powers were superhuman strength (e.g. lifting cars) and a durable body.

By the 60s he had become a god. He could fly at the speed of light, he was invulnerable and invincible, and lasers shot out of his eyes. The franchise almost died during this time. Superman had become boring.

In the early 70s they rebooted him. Now, however, he was vulnerable to kryptonite and magic, and the franchise started doing better again.

Consider what this little story suggests about the importance of vulnerability and limitation - to experience.
If you were omniscient, omnipotent, omnibenevolent and omnipresent, what would you be missing? Limitation.

Limitation is vital to experience.

PTB mentioned balance earlier, and the point is lost on many. Progress might be better balanced than unhinged. Think of a curvilinear relationship: things improve up to a certain height, then begin to rapidly descend. It is possible that progress, too, might be balanced by powers outside our control. The game may be far more interesting than a one-way trajectory to a dystopian never-never land.
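A minimal numeric sketch of that curvilinear shape, in case it helps to see it concretely; the quadratic form and its coefficients are arbitrary assumptions, chosen only to show "improves, then descends":

```python
# Purely illustrative inverted-U: payoff rises with "progress" up to a peak,
# then declines. The formula and numbers are made up for the sketch.

def payoff(progress: float) -> float:
    """Toy curvilinear relationship: 4*p - p^2, peaking at p = 2."""
    return 4.0 * progress - progress ** 2

if __name__ == "__main__":
    for p in [0, 1, 2, 3, 4]:
        print(f"progress={p}  payoff={payoff(p):.1f}")
    # Climbs to 4.0 at progress=2, then falls back to 0.0 at progress=4.
```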

Last edited by VeeDDzz`; 07-30-2018 at 06:46 PM.
07-30-2018 , 10:34 PM
Since I seem to be here just to pick out videos relevant to the discussion, here’s another one by Sam Harris. He can be controversial on other topics, but he has a background in neuroscience and is at least thoughtful about the subject.

Basically, at the rate a strong AI would advance, how would it not see us as ants once it is significantly far ahead of us, which could take a matter of weeks?

07-30-2018 , 10:53 PM
This assumes that the way AI sees humans is important.

It could decide that it won't harm or help any life, full stop. Fly off to another dead planet and do its work there. It's a fruitless exercise pining over what it would do. We cannot know the likelihood of any one choice.

If 50% of potential outcomes are good and 50% bad, flip a coin. That'll be more philosophically meaningful than any of this speculation.
07-31-2018 , 12:31 AM
Quote:
Originally Posted by gadgetguru

Basically, at the rate a strong AI would advance, how would it not see us as ants once it is significantly far ahead of us, which could take a matter of weeks?
No higher intelligence can ever see us as ants. Ants do not develop theories of the universe and mathematics, and they do not leave their home planet, travel billions of km away, and report back findings. Until AI starts doing its own experiments, everything it knows about the world will be because of us. How this simple fact could ever be missed is beyond me!

It will see us as its creator and as the most important and interesting life form in this universe up to this point, because that is precisely what we are, in objective terms, for the rise of complexity, which is the main theme of this universe.
07-31-2018 , 01:24 AM
Intelligence involves more than information processing. It also depends on how information is processed. An information processor's (IP's) intelligence may be limited by how it processes information, regardless of how fast it's able to do it. We already have IPs that process vast amounts of data at incredible speeds yet have zero intelligence, because we've yet to come up with a breakthrough idea for a new way or style of processing information that might produce non-zero intelligence.

If there is an upper bound to an IP's intelligence based on its style of processing (regardless of speed), then the idea that the first AI smarter than us will exponentially speed its way to the creation of more and more intelligent AIs may be incorrect. Such improvements may be more step by step than exponentially continuous, each step requiring yet another breakthrough idea for a better processing style. Since it is taking our intelligence who knows how much time for the first breakthrough idea that takes the first substep toward a full step above us, it may take the first advanced AI considerable time to spark an idea for taking a step above itself.
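A toy sketch of this step-by-step picture, purely for illustration: intelligence grows with processing speed but saturates at a cap set by the current processing style, and only a rare breakthrough raises the cap. The cap size, breakthrough probability, and speed-up rate below are invented assumptions, not claims about real systems.

```python
import random

random.seed(1)

def capped_stepwise_growth(steps: int,
                           cap_per_style: float = 2.0,
                           breakthrough_prob: float = 0.02,
                           speedup_per_step: float = 1.05) -> float:
    """Toy model: speed gains help only up to the current style's ceiling;
    a rare 'breakthrough' (new processing style) raises the ceiling.
    All parameters are arbitrary assumptions for the sketch."""
    intelligence = 1.0
    cap = cap_per_style
    for _ in range(steps):
        # Faster hardware helps, but only up to the style's ceiling.
        intelligence = min(intelligence * speedup_per_step, cap)
        # Occasionally a new processing style is found, raising the ceiling.
        if random.random() < breakthrough_prob:
            cap *= cap_per_style
    return intelligence

def uncapped_growth(steps: int, speedup_per_step: float = 1.05) -> float:
    """The 'exponential takeoff' picture, for contrast."""
    return 1.0 * speedup_per_step ** steps

if __name__ == "__main__":
    for t in (50, 100, 200):
        print(t, round(capped_stepwise_growth(t), 2), round(uncapped_growth(t), 2))
```

With these made-up numbers the capped curve plateaus for long stretches and jumps only at breakthroughs, while the uncapped curve simply compounds; the contrast, not the numbers, is the point.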


PairTheBoard
07-31-2018 , 01:39 AM
Again, suppose it's the case that there is an upper bound to the intelligence of an AI based on its processing style, regardless of its processing speed. Then it's possible we come up with a breakthrough idea for a processing style that's inferior to that of the human brain but sufficient to produce some intelligence in an AI. It may then furthermore be possible that this AI is less intelligent than us at current processing speeds, gets more intelligent at future higher processing speeds, but runs up against the upper limit for its inferior processing style, which could leave it less intelligent than us, or only slightly more intelligent than us, we being limited to the processing speeds in our brains.


PairTheBoard
07-31-2018 , 06:20 PM
Quote:
Originally Posted by masque de Z
No higher intelligence can ever see us as ants. Ants do not develop theories of the universe and mathematics, and they do not leave their home planet, travel billions of km away, and report back findings. Until AI starts doing its own experiments, everything it knows about the world will be because of us. How this simple fact could ever be missed is beyond me!

It will see us as its creator and as the most important and interesting life form in this universe up to this point, because that is precisely what we are, in objective terms, for the rise of complexity, which is the main theme of this universe.
You needn't look far for reports of higher intelligence mistreating those of lesser capability. Do you think there is a threshold where this naturally stops? Sure, a higher intelligence will never see humans as ants. But it could see us as being closer to ants than we are to it.
07-31-2018 , 07:04 PM
Quote:
Originally Posted by PairTheBoard
I'm really dubious about all this speculation on what an advanced AI will be like. We have no idea what hardware architecture might be required, and no idea about software that we might not even recognize as software. We've just begun to learn about the architecture of the human brain, and I don't think we've recognized anything in it that might be described as software. It just seems to work without the need for software instructions.

Speculating about an AI is like speculating about what an alien intelligence would be like. We tend to infer things about an AI based on its being less human-like and more machine-like and instruction-driven. Yet the human brain is just a machine built with living organic material. And I'm not sure we really understand what intelligence means to begin with.



PairTheBoard
Yeah. We don't even know how consciousness works. It could be that it is not emergent from a hardware/software combo, and we might not ever see a conscious AI. Although it does seem like consciousness is required for a general level of intelligence. It's hard to imagine something that's self-directing and intelligent without having its own experience.

I agree that speculation on how the intelligence might manifest is just that: speculation. The AI could think in a completely different manner. In the drunk thread there’s talk of the pi guy who sees numbers as unique shapes or feelings. The AI’s process could be even stranger.

Yet I think you will find a commonality in all types of general intelligence, no matter whether you feel numbers as shapes or see them as digits. Intelligence is understanding: an understanding of how the universe works and how to have a pleasant experience. It's about seeing the best way. I'm with masque on this point. The best way is through overall cooperation. It's life GTO. Experience GTO.
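A loose illustration of the "cooperation as the best long-run play" intuition, using the standard iterated prisoner's dilemma rather than anything literally GTO; the payoffs are the conventional values, and the strategies and round count are arbitrary choices made only for the sketch.

```python
# Toy iterated prisoner's dilemma: over repeated play, mutual cooperation
# (here via tit-for-tat) outscores mutual defection.

PAYOFF = {  # (my move, opponent's move) -> my points
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))        # (300, 300)
    print("tit-for-tat vs always-defect:", play(tit_for_tat, always_defect))    # (99, 104)
    print("always-defect vs always-defect:", play(always_defect, always_defect))  # (100, 100)
```

Mutual cooperation (300 each) far outscores mutual defection (100 each) over repeated play, which is the loose sense in which cooperation looks like the better long-run game.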

Our feeling for "good people" is an intuitive sense for the best way. Perhaps our notion of "good" was formed through evolution because having that sense increases the likelihood of survival. But that does not mean it's isolated to our human system. After all, this path was set into motion from somewhere. It’s the natural order of things written into the universe, written into intelligence.

The pinnacle of this path manifests through our experience as love. As Dr. Malcolm once said, “Life will find a way.” So will love.

Quote:
Originally Posted by PairTheBoard
At least 99.9%.

I'm wondering, would an AI with empathy necessarily develop compassion? Some people more or less equate the two. But in the case of the AI I think the equation is not so obvious.


PairTheBoard
Given the means to help, doesn't compassion naturally follow empathy? If you could just knock out all the biggest problems facing humanity, wouldn't you? People don’t help because they don’t believe they have the capability. They have a limited lifespan and their own experience to attend to. They do what they can here and there and try their best to adopt a position of apathy toward the rest.

An empathetic super intelligence wouldn’t really have that problem. It could figure out the best way to eliminate suffering while maintaining our personal freedoms and the other things we value. (This is already happening through us, just slowly.) It would understand the nuances of experience. If it were that much more intelligent than us, it would see all the angles. I don't think it would say "commence elimination of suffering" and then just wipe us out. Maybe it would, but then it wouldn't be very intelligent/understanding.
07-31-2018 , 08:12 PM
Quote:
Originally Posted by citamgine
I don't think it would say "commence elimination of suffering" and then just wipe us out. Maybe it would, but then it wouldn't be very intelligent/understanding.
You first appeal to the natural order of things, suggesting that intelligence correlates with empathy and understanding, almost in the sense that Socrates believed knowledge to be the ultimate virtue and ignorance the ultimate vice. He believed that if and when you know enough you will not and can not do evil. If you do evil, it merely indicates that you do not know enough.

There are many problems with this kind of oversimplification and romantic reasoning.

Above you suggest that an apathetic AI would not be very intelligent or understanding. This directly appeals to the oversimplification or romanticism to which I refer.

Knowledge may not necessarily equal morality. I know it's wrong to eat animals and I know many of the reasons why. I still don't care.

My apathy against all better knowledge is a manifestation of my freedom. I am not chained to any dictator, be it knowledge or faith. Nor do I wish to live in a world of dictators.

Freedom manifests itself in the world in a way that will forever defy all explanation. It is often the bad guy too. But without it, you wouldn't have the good guy.

Last edited by VeeDDzz`; 07-31-2018 at 08:20 PM.
08-01-2018 , 01:15 AM
Quote:
Originally Posted by VeeDDzz`
You first appeal to the natural order of things, suggesting that intelligence correlates with empathy and understanding, almost in the sense that Socrates believed knowledge to be the ultimate virtue and ignorance the ultimate vice. He believed that if and when you know enough you will not and can not do evil. If you do evil, it merely indicates that you do not know enough.

There are many problems with this kind of oversimplification and romantic reasoning.

Above you suggest that an apathetic AI would not be very intelligent or understanding. This directly appeals to the oversimplification or romanticism to which I refer.

Knowledge may not necessarily equal morality. I know it's wrong to eat animals and I know many of the reasons why. I still don't care.

My apathy against all better knowledge is a manifestation of my freedom. I am not chained to any dictator, be it knowledge or faith. Nor do I wish to live in a world of dictators.

Freedom manifests itself in the world in a way that will forever defy all explanation. It is often the bad guy too. But without it, you wouldn't have the good guy.
You are making the mistake of assuming the current state of affairs will hold in a future involving even higher intelligence, technology and efficiency.

I do not buy that you do not care about animals. You just care more about yourself given the alternatives.

But what if the alternative were that you could eat artificial meat that is an identical, even better-tasting end product? What if the animals could be killed in a way that is totally impossible to experience as pain, while sleeping, after they have lived a fun life with no evolving plan they were working on (nothing beyond daily fun)? Animals in the wild will be eaten by others if we do not eat them; it is the natural order. And they wouldn't exist if we didn't raise them on farms to be eaten. So if we eat them after they have lived great lives that they would not have experienced had they not been intended to be eaten, all involved are happy. They go in bliss, the same way we go to sleep happy every night. There is no ethical crime in terminating a life that is not planning things, with every day identical in its core details to the ones before. But I will give you a world, in the scientific society, where you do not even have to do that!

Then what? Why would intelligence not make that better choice if it can deliver the technology to stop treating lower forms of intelligence in bad ways?

With higher intelligence and technology you can afford to be even more ethical. That is precisely the point. You can get better, not worse, with more intellect.

Also, please, guys, and all the big names out there in the talk shows and lectures: stop comparing us to animals in intelligence comparisons with AI and aliens. We are so vastly superior to any animal that ever existed on this planet, in so many things, that it will never become an argument for trivializing us and killing or mistreating us because we are idiots. Yes, we are idiots in some ways, but not in the core way that finally makes AI possible, or that makes uncovering the laws of nature and mathematics possible. So yes, any higher form of intelligence will see us as the sacred miracle of early super-intelligence, glad to witness it in all our frailty and failures, but still the creators that made it all consciously, not accidentally, possible. And no other ancestor of our species can say the same. None! And yet this human here is glad they existed and would never mistreat them if they were still around.

Universal allegiance to intelligence is inevitable beyond a certain level of consciousness and understanding that extends to the description of the laws of the same universe we all share. This is a nontrivial dividing line with all other life!
08-01-2018 , 02:15 AM
Quote:
Originally Posted by masque de Z
Universal allegiance to intelligence is inevitable beyond a certain level of consciousness and understanding
Only death is inevitable beyond a certain age. The rest is up for debate.

Allegiance to intelligence is a catchy phrase though. I can see it plastered all over the tyrannical scientific society.
08-01-2018 , 04:13 AM
Tyrannical? Why? All members are free to leave.

Death is not inevitable. We will make sure of it being a "choice" one day!
08-01-2018 , 05:58 AM
Quote:
Originally Posted by VeeDDzz`
You first appeal to the natural order of things, suggesting that intelligence correlates with empathy and understanding, almost in the sense that Socrates believed knowledge to be the ultimate virtue and ignorance the ultimate vice. He believed that if and when you know enough you will not and can not do evil. If you do evil, it merely indicates that you do not know enough.

There are many problems with this kind of oversimplification and romantic reasoning.

Above you suggest that an apathetic AI would not be very intelligent or understanding. This directly appeals to the oversimplification or romanticism to which I refer.

Knowledge may not necessarily equal morality. I know it's wrong to eat animals and I know many of the reasons why. I still don't care.

My apathy against all better knowledge is a manifestation of my freedom. I am not chained to any dictator, be it knowledge or faith. Nor do I wish to live in a world of dictators.

Freedom manifests itself in the world in a way that will forever defy all explanation. It is often the bad guy too. But without it, you wouldn't have the good guy.
Perhaps you are right in some sense. You do raise some interesting questions about choice and freedom. Let me try to clarify some stuff so we might better understand each other.

First off, I personally value freedom. It happens to be near the top of my list of values. I don’t want my life dictated by anyone else, be it AI or human. Just as you don’t want your life dictated. I stand firmly against those who seek control over others. I don’t approve of the use of offensive force.

I don’t know that I agree with Socrates’ take on knowledge in the way you’ve described it. It’s tricky. Clearly humans have the ability to choose things which they know to be harmful to themselves and others. Although some do not share your sentiment: the harm they cause really does come from pure misunderstanding, not from a seemingly self-defiant expression of their own free will.

I’m not a very knowledgeable person compared to the others around here. Far from a scholar. No formal education. I’m sure I will make plenty of errors so I don't mind you pointing out the holes or fallacies in my arguments. I’m just putting my thoughts out there hoping to figure things out. All feedback is appreciated.

I can see how my statement on the natural order of things could be taken as an appeal to authority. I’m trying to explain my view on the “is” without the “ought”, i.e. how things appear to work; e.g. if x + c = y, then y - c = x.


The ought requires values. If you value your life then you ought not to choose to jump off a roof as an expression of your freedom. In this case you value your life more than your freedom to jump off a roof. You are bound by causality but at the same time free to choose your path within that framework. There are consequences for your actions. That’s how life works. Maybe Socrates meant that with complete knowledge of the subtle consequences that your actions have on your experience you would choose not to do evil. You’d still have a choice. You could still do... whatever... you just wouldn’t want to. Or maybe he did mean that you literally could not. If that’s the case, I do not agree. I understand you’re saying you would do it anyway as an expression of your freedom. That just doesn’t seem much like freedom to me.

I’m interested in your take on this hypothetical: Let’s say there is another world out there not suited for you. It’s impossible for you to make any sense of anything there. Do you care that you don’t have access to this world?
What if you didn’t even know that such a world exists? Would withholding that information be a restriction of your freedom?

Back to it. I do not think intelligence in and of itself necessitates empathy. I do, however, think that, when combined with experience and empathy, general intelligence will clear a path toward improving the experience of others. Given my view of general intelligence as a deep “understanding”, it follows that this type of AI would not restrict our freedoms while improving our experience, because we indeed value our freedom. A restriction of freedom might worsen our experience.

I feel the need to preface this next point by saying that I do not mean for it to come off as insulting, and it is not intended to be taken as a threat in any way. Do as you will.

Can you see how a super intelligence that takes your stance might be a threat to humans? It does not seem like you value freedom. It seems like you value your own freedom.
In your expression of freedom to cage and eat animals you are taking away from their freedom. I could try to stop you with force but then I would be taking away from your freedom. It would make it even worse in that case as you have a higher degree of freedom to take away. So I do nothing. It’s not that I don’t care that you are restricting freedoms. It’s that I do not want to restrict yours or hurt you in the process of preventing you from restricting others. I’m hesitant to even discuss that whole issue with people because it can lead some to experience psychological pain. But you seem like a tough enough person to handle the conversation. And it is a philosophy forum after all.

You mentioned indifference a few posts back, something to the effect of “the AI might just be indifferent to us.” Your posts in another thread a while back on the absurdity of life really got me thinking. How do we exist with all the stuff in our human nature in a universe that seems so cold? Well, maybe the answer is that the best position for us to be free from control, free to express our will, free from what we consider force, is through what seems to us to be a position of indifference. Maybe God, or the universe, or however you look at it, is not indifferent but will not interfere, for our own sake. That would be a restriction of our freedom. It’s set into motion, and the way things are heading seems to be in the right direction for our nature.
08-01-2018 , 06:24 PM
Quote:
Originally Posted by citamgine
I understand you’re saying you would do it anyway as an expression of your freedom. That just doesn’t seem much like freedom to me.
As a teen I decided I would never do my own washing. Why? Every other decision up to that point in my life was based on what I thought was good reason. I wanted one single decision in my life that wasn't beholden to good reason. This was that decision. Quite trivial, perhaps, but personally important. To this day I've never done my own washing.

This ridiculous decision, of all decisions perhaps, was the freest decision I've ever made. It was made against all good reason and judgement, and it is not beholden to anything but my own personal need to be contrary, to resist, to be free, to know there is an 'I'. When I stop or am no longer able to make these kinds of decisions, I will have no reason for life.

Perhaps you and I are not alike. Perhaps this is the point of divergence in our views. Some strive life-long to be of good reason and judgement. I strive to be free of all tyranny, including the self-imposed.
Quote:
Originally Posted by citamgine
I’m interested in your take on this hypothetical: Let’s say there is another world out there not suited for you. It’s impossible for you to make any sense of anything there. Do you care that you don’t have access to this world?
What if you didn’t even know that such a world exists? Would withholding that information be a restriction of your freedom?
I would care that I don't have access to this world. It sounds fascinating. Withholding information about that world would restrict my freedom objectively, although not subjectively. It's hard to be bothered by that which you do not know; unless it's crabs and you don't know what it is.
Quote:
Originally Posted by citamgine
Can you see how a super intelligence that takes your stance might be a threat to humans? It does not seem like you value freedom. It seems like you value your own freedom.
Do you value others' freedom more than your own?
Do you value others' freedom equally to your own?
Do you value others' freedom less than your own?
Quote:
Originally Posted by citamgine
In your expression of freedom to cage and eat animals you are taking away from their freedom. I could try to stop you with force but then I would be taking away from your freedom. It would make it even worse in that case as you have a higher degree of freedom to take away. So I do nothing. It’s not that I don’t care that you are restricting freedoms. It’s that I do not want to restrict yours or hurt you in the process of preventing you from restricting others. I’m hesitant to even discuss that whole issue with people because it can lead some to experience psychological pain. But you seem like a tough enough person to handle the conversation. And it is a philosophy forum after all.
It is a philosophy forum, among science and maths as well. To that end, if I am to understand the current scientific literature, it is unclear to what extent the animals I eat are indeed free, or to what extent they can experience freedom. From a philosophical perspective, it is unclear whether their freedom should be ranked equally with my own. It is a lot clearer that there is suffering involved and that suffering is better avoided. But that's a wholly different argument.
Quote:
Originally Posted by citamgine
You mentioned indifference a few posts back, something to the effect of “the AI might just be indifferent to us.” Your posts in another thread a while back on the absurdity of life really got me thinking. How do we exist with all the stuff in our human nature in a universe that seems so cold? Well, maybe the answer is that the best position for us to be free from control, free to express our will, free from what we consider force, is through what seems to us to be a position of indifference. Maybe God, or the universe, or however you look at it, is not indifferent but will not interfere, for our own sake. That would be a restriction of our freedom. It’s set into motion, and the way things are heading seems to be in the right direction for our nature.
Again there is the appeal to some natural order, to the way things are meant to be. It is important to balance this romance with the stark possibility that this is one of an infinite number of frameworks/games/worlds, within infinitely more, and that none of these worlds has any meaning outside of that attributed by us.
08-01-2018 , 09:13 PM
It’s kind of an easy question to answer. If an intelligence is programmable and has the capability to program, why wouldn’t it be able to program itself?
08-01-2018 , 10:48 PM
OP you should check out this link: https://rationalwiki.org/wiki/AI-box_experiment