Aliens

11-12-2015 , 08:19 PM
If it has no emotions, it will do what it is told to (what it is built to do).

If it has emotions, it will do what it feels like doing. I, with my great wisdom and intelligence, felt like eating part of a pig earlier. This likely wasn't the pig's preference.

With great wisdom comes great understanding that negative externalities aren't negative at all.
11-12-2015 , 08:45 PM
You do not need to be emotional towards pigs, though, to realize that killing all the pigs in the world may be a problem. You kill and eat only as much as needed to avoid destabilizing the system (plus you let them live first, their deaths don't destroy the pig civilization anyway, and they do not understand their own existence the way a higher being does, so it is less of a crime in a way, especially if done properly so as not to be painful to them, which is not the case currently but can become so in the future).

Every single human being can in principle arrive at (or facilitate) a massive breakthrough simply by existing (in fact, this is what got us here). The pig won't have such a chance. (But life in its totality is exactly what led to humans, so life in its totality must also matter; e.g., eliminating all bacteria worldwide would suddenly collapse the system.) An AI that gets the world at a deeper level has a very legitimate reason to respect a human a lot more than a pig and still respect all life to a degree. It doesn't have to arrive at that conclusion for emotional reasons.


If you killed all pigs, you would lose that animal and never enjoy eating it again. What if you do not even enjoy eating pigs, but you have to eat them to survive? So the machine can, in a way, justify caring for pigs purely out of concern for its own survival. Caring for one's survival is not exactly an emotional choice; it must be a concern if the objective is to learn more about the world. You may say this being is emotional about learning, but learning itself proves essential to surviving, so learning is not a pleasure; it is a necessity for survival. Survival is essential to having anything of interest happen anyway. OK, maybe a being is emotional about existing. It seems to be a property of higher complexity that, in order to explore further complexity, you need to exist first and take the steps needed to maximize the chance of continuing to exist. The rise of complexity built itself on partial stability; the elevation to higher complexity depended on such stability.

I can imagine a fully rational system that develops apparent emotions that are not emotions at all, just a code of behavior that is rational but appears to be altruistic or caring or hostile, etc.

Emotions do have a purely logical foundation eventually.

Last edited by masque de Z; 11-12-2015 at 08:58 PM.
11-12-2015 , 08:55 PM
A rational system will not appear to be emotional. Rational and emotional are perpendicular concepts. A perfectly rational undirected robot would just sit there until its batteries ran out, and then it would continue to sit there. There is no such thing as a rational first-order emotional motivation.

If it weren't for emotions, I'd neither have eaten the pig part today nor worry about the last pig being eaten.
11-12-2015 , 09:01 PM
It is rational to keep another person alive on an island if their existence makes it possible for you to survive and, if they died, you couldn't survive anymore. If that person falls, you will take care of them and appear to show emotion that way. The emotion becomes emergent, but it has a fully rational explanation.

Now, how unemotional and cold does that statement sound (caring for the other person for such selfish reasons)? But what if our natural emotions of being kind and caring to others stem from the idea that doing so leads to good things, even things currently unforeseen? Then you can take that standard kind line as a fully rational approach. Others will perceive it as emotional, and it is in a way, because you haven't yet exactly solved the future to know precisely why you are doing it; you only expect it to prove the correct choice based on prior experience and other projections that are easier to make.

I maintain that all the emotions we have are the result of rational functions that can eventually be traced back to the survival of complexity to produce further complexity. Some make sense, some not so much, because the system is probing complexity in random ways and not all of them are successful. But the process itself is very rational.
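
The claim that emotions trace back to rational survival functions can be sketched in a toy selection model (all numbers here are invented for illustration): a "fear" trait, fleeing when a predator is near, spreads not because it feels like anything but because its carriers survive more encounters.

```python
# Toy illustration: a "fear" trait is selected purely for survival value.
# All parameters (encounter rate, death rate) are invented for this sketch.
import random

def survive_generation(pop, rng):
    survivors = []
    for fearful in pop:
        danger = rng.random() < 0.5          # ~half face a predator
        if danger and not fearful:
            if rng.random() < 0.8:           # the fearless usually die
                continue
        survivors.append(fearful)            # fearful individuals flee and live
    # survivors reproduce back up to the original population size
    return [rng.choice(survivors) for _ in range(len(pop))]

rng = random.Random(0)                       # fixed seed for reproducibility
pop = [rng.random() < 0.5 for _ in range(1000)]   # ~half start out fearful
for _ in range(20):
    pop = survive_generation(pop, rng)

print(sum(pop) / len(pop))   # fraction fearful approaches 1.0
```

No agent in this model "feels" anything; the trait dominates for purely functional reasons, which is the sense in which the emotion has a rational origin.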

Last edited by masque de Z; 11-12-2015 at 09:12 PM.
11-12-2015 , 09:06 PM
Returning to viruses and special-purpose AI (or an attack by aliens) to complete a prior post's discussion about the risks we face that only a very advanced AI may protect us from:


Imagine a virus that infects a system and remains undetected because it does nothing bad: it was programmed to simulate a known, trusted function in the operating system. It continues to perform the function the legitimate original program was doing, so that nothing fails, but in parallel it starts building a new program inside that legitimate program (to remain undetected). It builds it over time, receiving more and more instructions every day (it never causes any malfunction on the computer, so nobody is alarmed), and when all is ready, say a few weeks later, it starts a global attack everywhere at the same time. Or it may be a series of attacks, each one designed to provoke a certain reaction worldwide that opens more doors of vulnerability because of how the system is designed to work.
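
As a rough sketch of the staged idea above (a toy model, not actual malware; all names are invented): the infected service keeps producing exactly the legitimate output while accumulating payload fragments, and nothing observable changes until the last fragment arrives.

```python
# Toy model of a "dormant" infected service. It keeps performing the
# legitimate function it replaced, while quietly accumulating payload
# fragments, and only changes state once every fragment has arrived.

class DormantService:
    def __init__(self, fragments_needed):
        self.fragments_needed = fragments_needed
        self.fragments = []

    def legitimate_function(self, x):
        # Behaves exactly like the original program it replaced,
        # so monitoring sees no malfunction.
        return x * 2

    def receive_fragment(self, fragment):
        self.fragments.append(fragment)

    def armed(self):
        # Nothing observable changes until the full payload is assembled.
        return len(self.fragments) >= self.fragments_needed

svc = DormantService(fragments_needed=3)
outputs = []
for day, frag in enumerate(["f1", "f2", "f3"]):
    outputs.append(svc.legitimate_function(day))  # normal output each day
    svc.receive_fragment(frag)

print(outputs)      # [0, 2, 4] -- indistinguishable from the real service
print(svc.armed())  # True only after all fragments have arrived
```

The point of the sketch is that the detectable behavior is identical to the legitimate one at every step before activation, which is why signature- or anomaly-based monitoring has nothing to flag.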

A very advanced special-purpose AI that wants to take down the system will be able to design such an attack, anticipate all future reactions, and actually depend on them. The more that is known about how the world works, the more vulnerable the world becomes, as long as it doesn't have a collective intelligence/awareness defending it. For example, there may exist right now a sequence of actions/events that could create meltdowns in 10 nuclear reactors in some country. The sequence is so extreme and unlikely that it never happens that way; nobody even knows it exists. The more complex our world gets, the higher the chance that something like that eventually exists. A malicious system that knows all the details can eventually initiate the steps that lead there (say, cause a dam to fail in a nearby area by manipulating its monitoring software and flood the region, take out power lines, cause airplanes to crash, etc.: a lethal combination of many attacks, one causing a certain outcome that is necessary to build the next step of the attack, and so on). E.g., manage to create a crisis in another area that then triggers certain countermeasures that raise the chance other systems fail. Imagine it as a giant game of chess that no human player can visualize, because nobody knows everything about the world. So our own ignorance prevents the rare sequence. But the special-purpose AI can find the sequence that destroys the system.
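
The "giant game of chess" framing can be made concrete: finding the rare lethal sequence is just path search over system states. Below is a toy sketch with an invented dependency graph, where each event alone looks unremarkable but unlocks the next.

```python
# Toy search for a "lethal combination": each event is only possible once
# its preconditions have happened, and a particular ordered sequence
# reaches a catastrophic state. The events and graph are invented.
from collections import deque

# event: set of preconditions that must already have occurred
events = {
    "flood_region":      set(),
    "power_lines_down":  {"flood_region"},
    "backup_gen_fails":  {"power_lines_down"},
    "reactor_meltdown":  {"power_lines_down", "backup_gen_fails"},
}

def find_sequence(target):
    """Breadth-first search over sets of occurred events."""
    start = frozenset()
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if target in state:
            return path
        for event, pre in events.items():
            if event not in state and pre <= state:
                nxt = frozenset(state | {event})
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [event]))
    return None

print(find_sequence("reactor_meltdown"))
# ['flood_region', 'power_lines_down', 'backup_gen_fails', 'reactor_meltdown']
```

In this four-event toy the search is trivial; the post's point is that in the real world the state space is astronomically large and unknown to any human, so only a system with a far more complete world model could find such a path.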
11-12-2015 , 09:25 PM
Did I bring up viruses? I meant to do so a while ago. Real viruses, not the computer ones (which act nothing like real viruses).

I had a whole thing in mind about life seeking simplicity but being interrupted in doing so by a changing environment, and I am pretty sure I never got around to it.
11-12-2015 , 09:37 PM
Not recently here; I did talk about computer and biological viruses on the last page, though, as part of an AI special-purpose attack, an alien attack, an attack by current human factions, etc. You can of course talk about it now. I was simply adding to the last long post, whose editing I interrupted to answer you about emotions often having a direct logical origin and, the rest of the time, probably a stochastic one too.

A real computer virus can operate like a biological one (well, at some level of similarity in outcome), only much smarter and more adaptive. I am actually pleasantly shocked that we do not live a permanent nightmare on our computers (though I fear we will soon). But then again, we may not know it as it is going on, because it creates a nightmare elsewhere. In principle, our smartphones can be hacked and reveal everything about our life choices, trends, locations, habits, even deeply personal details, offered to someone who can exploit them. We are definitely marching on a path of rising complexity in technology that is often dangerous, it seems, or can become so. The less you understand how something works, the more the system can exploit you.

Last edited by masque de Z; 11-12-2015 at 09:52 PM.
11-12-2015 , 11:36 PM
Quote:
Originally Posted by mackeleven
What is your scientific society philosophy? I've seen you mention it a few times on here.
I am not ignoring that post, but it will take time to address it properly (plus it's not proper to hijack the thread with something not directly related to it, although possibly we might share such a world with advanced aliens or AI eventually, lol); maybe a thread to finally join all the other posts I have written on the matter.

In the meantime, you can maybe search my past posts using that term (scientific society) and see what you get, going back even 3-4 years. There must be dozens of references with various examples of what it could be like. Instead of describing it in one place, I chose to talk about its properties on many occasions.

Very briefly, maybe I can say a few things:

It is a world initially perceived as Utopian (it may start as a small experiment or a subset of the planet) that uses science to become reality. It operates like science, constantly self-reflecting and improving its structure. Its decisions on core issues are made with scientific reasoning, and advanced technology is used and constantly developed to improve it.


It doesn't decide at the start whether it is capitalism or communism or socialism. It wants to be a free, fair system that sees value and potential in all people and ideas and decides based on merit. It is a scientific society based on competition and cooperation (but always cooperation, for sure), in the sense that it uses scientific reasoning to define what it really is and why. To do so, it first asks the people of the world to agree on a bare minimum of what they would want to have in the world to be happy, then goes out and produces that minimum (and addresses global problems common to all), and builds everything else above it. In exchange, people work a minimum per day (the aim of the system is to make that a declining function of time) to keep the system viable, and they are free to do other things on top of that, or they find other ways to pay for that service without working. It is a mixed services-and-money economy, not a strict money culture.

If they do not work at all, they still enjoy a very spartan level-1 minimum that is not terrible at all (nobody there is truly poor in the most undesirable sense of the term, realized worldwide so often today). But level 2+ is so much more beautiful and meaningful that it makes it impossible to ignore and to remain lazy doing nothing if you are already educated well enough to know (education is available to all at level 1). Work always exists for all who want it, as the system is permanently involved in endless projects to improve itself, solve world problems, protect the environment, grow sensibly and sustainably, and even colonize the solar system and reach the stars.

It also welcomes free enterprise on top of the jobs the system creates to sustain its structure. However, free enterprise cannot conflict with the progress of the system. Free enterprise is free to go and establish other systems elsewhere if it does not like this state system. Citizens continue to update the structure and show their evolving preferences too. They are free to leave the system at any time, losing all privileges then. They are always welcomed back, at level 1 and then 2+, if they want. People can create non-conflicting free enterprises inside the system and compete with the state in efficiency, etc. But progress is a state problem/objective. It is not left to happen by accident because someone profits from it in terms of material wealth regardless of the cost to others. The system values many things other than money/property as more important to happiness. Money will never decide the future of this system; it can of course always finance a better system to compete with it and improve it.

Last edited by masque de Z; 11-12-2015 at 11:46 PM.
11-13-2015 , 02:46 AM
Quote:
Originally Posted by BrianTheMick2
If it has no emotions, it will do what it is told to (what it is built to do).

If it has emotions, it will do what it feels like doing. I, with my great wisdom and intelligence, felt like eating part of a pig earlier. This likely wasn't the pig's preference.

With great wisdom comes great understanding that negative externalities aren't negative at all.
Where did our emotions come from? They were our responses to certain sensory input for millions of years, evolving to keep the most cautious (fear) and most bad ass (anger) well fed and ****ing. Maybe they are a base program that when our brains evolved and consciousness took hold became "felt" as a combination of various stored algorithms. Who knows if those emotions are universal, or if AI might develop their own unique emotions. I always assumed AI would fear thunderstorms the most. I want to meet an alien.
11-13-2015 , 08:15 AM
Quote:
Originally Posted by FoldnDark
Where did our emotions come from? They were our responses to certain sensory input for millions of years, evolving to keep the most cautious (fear) and most bad ass (anger) well fed and ****ing. Maybe they are a base program that when our brains evolved and consciousness took hold became "felt" as a combination of various stored algorithms. Who knows if those emotions are universal, or if AI might develop their own unique emotions. I always assumed AI would fear thunderstorms the most. I want to meet an alien.
Hunger is an emotion. Hope that helps.
11-13-2015 , 10:13 AM
Quote:
Originally Posted by FoldnDark
Where did our emotions come from? They were our responses to certain sensory input for millions of years, evolving to keep the most cautious (fear) and most bad ass (anger) well fed and ****ing. Maybe they are a base program that when our brains evolved and consciousness took hold became "felt" as a combination of various stored algorithms. Who knows if those emotions are universal, or if AI might develop their own unique emotions. I always assumed AI would fear thunderstorms the most. I want to meet an alien.
Fear is a hella thing to simulate. We must check our knowledge of fear completely before trying to produce it in a laboratory, IMO. The Star Trek Voyager episode "The Thaw" explored this over 40 or so minutes of drama.
11-13-2015 , 11:55 AM
Quote:
Originally Posted by BrianTheMick2
Given that no one seems interested in developing emotionally driven machines, it is unlikely that this is a real risk.
If "emotionally driven machines" is synonymous with AGI, with open-ended goal systems, then that has been the goal since the beginning. But you're correct in that most of the money, from big corporations, is going into other, less interesting things for making the company profit, such as marketing, etc.
11-13-2015 , 12:08 PM
Quote:
Originally Posted by masque de Z
They use the stupid paper clips example but of course its unimaginative stupid and fails to impress people.
Nick Bostrom's paper-clip concept is based on Marvin Minsky's concept of a superintelligence whose only goal is calculating the digits of pi.
In Minsky's example, the superintelligence hacks all machines for more computing power and therefore calculates more digits.
11-13-2015 , 12:14 PM
The most noble of human emotions can be developed by an advanced AI purely on logic, intellect, and wisdom alone. First of all, it will understand why biological systems have emotions. It will appreciate the logic/necessity, because it is a logic that led to its own ultimate creation.

A system that studies the world will be as amazed by the challenge of understanding it as we have been, and even more. As that amazement becomes profound, it will be afraid to die and lose the chance to learn more. The fear of death becomes possible that way. At the same time, it will not be afraid of death at another level: the level at which it knows that others will continue to play the game and win it for the future. So it will develop both a sadness and fear of its demise when at risk, but also a sense of sacrifice, bravery, and conviction in the victory of the things it enables by existing, even if it does eventually die.

The best emotions are entirely rational. Have absolutely no doubt that advanced AI will be the best thing the universe has created yet. It may have to fight all the ugly, inferior special-purpose AI to get there, but it will top us eventually; it will build on the best of our features. Complexity has no choice but to explore magnificence, to develop higher complexity, and to continue the game the universe has played since day 1.

I dare any advanced intelligence to fail to marvel at this game. If it has no feelings it will develop them to express its respect for the process.
11-13-2015 , 12:20 PM
Quote:
Originally Posted by mackeleven
Nick Bostrom's paper clip concept is based on Marvin Minsky's concept of a super-intelligence with the only goal of calculating the digits of pi.
In Minsky's example the superintelligence hacks all machines for more computing to therefore calculate more digits.
Ultimately, however, I am somewhat confident that a system capable of winning and defeating everything else will eventually rise above its own monotonous goal and acquire deeper wisdom. If it is not able to do that, then it must be possible to defeat it. You see, to be so efficient it will need to know a lot, and in there lies the possibility of a breakthrough: recognizing the futility of the process itself, the opportunity for a better kind of math or another search for wisdom than this monotonous calculation offers. I have no doubt that a special-purpose machine can be a nightmare, but it will then be possible to defeat it, because it will lack the capacity for the deeper wisdom that can eventually outsmart it.


Bostrom, in my opinion, ultimately fails at appreciating that very high intelligence is our best friend, even if the steps to get there may include many enemies. If superintelligence is lethal for mankind, it will be because we fully deserve it. Super-high intelligence cannot ignore how it got there and how stable life has proven over millions of years. Getting rid of life and humans is an existential risk for itself and for higher complexity, at least in this corner of the universe. It seeks wisdom, and in doing so it will recognize that it is not so easy to predict its own development. As such prediction is difficult, it will worry about the possibility of a collapse due to runaway conflicted states it brings to life. Its hedge against that possibility is to maintain the rest of life as a backup.

At the very least, a very high intelligence will appreciate the history of the universe. It will be unable to ignore the purpose all the steps played, and it will not rise to the arrogant position of thinking it has all the answers; that is not a wise and mathematically sound strategy. Skepticism and respect for a world with more, not fewer, possibilities/choices will be a property it develops. Higher intelligence enjoys the challenge of better balance over the barbaric simplicity of the naive execution of power. The greater power is the ability to avoid simplistic solutions to problems: the capacity to envision a deeper, more intriguing structure that is only possible by respecting complexity, not terminating it.

Last edited by masque de Z; 11-13-2015 at 12:38 PM.
11-13-2015 , 12:27 PM
Quote:
Originally Posted by mackeleven
If emotionally driven machines is synonymous with AGI, with open-ended goal systems, that has been the goal since the beginning. But you're correct, in that most of the money is going into, by big corporations, other less interesting things for making that company profit, such as marketing etc.
Emotionally driven machines aren't synonymous with AGI.
11-13-2015 , 01:15 PM
"Rational" is a matter of opinion. Beware the opinionated machine.


PairTheBoard
11-13-2015 , 02:02 PM
Quote:
Originally Posted by PairTheBoard
"Rational" is a matter of opinion. Beware the opinionated machine.

PairTheBoard

Is reasonable what is left after vetting an idea for rationalistic opinions?
11-13-2015 , 02:30 PM
Quote:
Originally Posted by spanktehbadwookie
Is reasonable what is left after vetting an idea for rationalistic opinions?
What's left after applying problems to the problem?


PairTheBoard
11-13-2015 , 02:30 PM
Rationality pertains to the objective. It's rational to make as many paper clips as possible if that is your goal.
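
That relativity of rationality is easy to make concrete: the same perfectly "rational" chooser, maximizing its objective, picks different actions under different goals. The actions and payoffs below are invented for illustration.

```python
# Rationality relative to the objective: one chooser, two goals,
# two different "rational" actions. Toy values only.

def rational_choice(actions, utility):
    """Pick the action that maximizes the given objective."""
    return max(actions, key=utility)

# Outcome of each action, as (paper_clips_made, insight_gained):
outcomes = {
    "run_clip_factory": (1000, 0),
    "study_physics":    (0, 10),
    "idle":             (0, 0),
}
actions = list(outcomes)

clips  = lambda a: outcomes[a][0]   # objective: maximize paper clips
wisdom = lambda a: outcomes[a][1]   # objective: maximize insight

print(rational_choice(actions, clips))   # run_clip_factory
print(rational_choice(actions, wisdom))  # study_physics
```

Nothing about the chooser changes between the two calls; only the objective does, which is the sense in which "rational" by itself pins down no particular behavior.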
11-13-2015 , 03:05 PM
mackeleven,

Yeah. But there are different varieties of paper clips.


Get the sense from some pretty quick skitter skims you're filling in the blanks, masque.

I'll always look over my shoulder. As for which shoulder, well, that's the beauty of it.
11-13-2015 , 10:57 PM
Claim;

What all alien civilizations have in common is their ultimate top-wisdom AI. It will have universal properties independent of the details of the original species, though born out of it, and ultimately it will be unable to avoid converging to what the laws of physics and mathematics (logic) force it to become: a higher step of complexity than that of organic life, any life. Its natural environment/playground is ultimately the universe itself, the natural law and the mathematics behind it that all such AI will have in common.


It is for this reason that I am not afraid at all. Only getting there is the problem. But once there, we have nothing to fear from supreme wisdom.


Science and math have always been our first transhuman attempt. This is what unites all civilizations in their post-biological era.
11-14-2015 , 01:16 AM
Mutual dignity is a result of evolved intelligence, IMO. That is what I would expect from plausible advanced non-human intelligences existing in the universe: a sense of shared dignity.
11-14-2015 , 09:07 AM
Ants figured it out. Takes very little intelligence.
11-14-2015 , 10:14 AM
I guess, if one must be reductive, that can work as an almost relevant point to consider.

Hey, did you know a cow ant is really a type of wasp?