What is the ultimate goal of Science?

07-21-2017 , 04:22 PM
Quote:
Originally Posted by plaaynde
Oops, quite a big number!
I want to go over his calculation to show everyone how wrong even this is, and what kind of simplistic assumptions it carries that instantly make it unstable when extrapolated so far out.

Modern papers cut so many corners in proper calculations that you wouldn't believe it. This is precisely why they are like that: because they fail to go after the truly hard problems with realism and devotion to detail, they have to publish something anyway. They are either too mathematical to have anything to do with reality (and even then conveniently hiding in some sector that is hard to correct for lack of experts), or they use reality only partially, ignoring all kinds of details that instantly invalidate the argument.
07-21-2017 , 05:44 PM
The probability of different states could be one thing they are missing. Life may be a bit special in that respect, with its complex, relatively low-entropy structures.

But even they aren't claiming that you would become any of the doppelgangers. I wonder how many universes would be needed for that. They may be diverging faster than you can catch up.

Last edited by plaaynde; 07-21-2017 at 05:52 PM.
07-21-2017 , 07:21 PM
Quote:
Originally Posted by plaaynde
Here are yet some numbers: https://everything2.com/title/The+nu...f+the+universe VeeDDzz's twin brother will probably evolve through evolution.
Do you mean the op?
07-21-2017 , 10:19 PM
Quote:
Originally Posted by masque de Z
Forget the BS Tegmark is talking about: your double copy being 10^(10^28) meters from here, in the absence of any evidence of infinite universes, and without any certainty that the same laws of physics, constants, or initial conditions hold at such unthinkably vast distances, which are also subject to all kinds of unknown new-physics violations of the logic needed to argue for this with such confidence. (Yes, 90% of top names talk BS, often mixed with decent physics of course, because they are all about making mega names for themselves, abusing scientific accuracy and conviction in the name of sensationalism and more papers.)
Some say that if a theory is overly complicated, and requires 20+ years of education and level-150 wizardry skills (IQ) to understand... it is not a good theory.

What many have theorised about the infinite space-time hypothesis is rather simple:

If it has occurred...it can occur.
If it can occur....given enough time...it will occur.

It gets slightly more complicated if you consider that not only will it occur, but that it is occurring simultaneously. Even so, it is still understandable.
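The "given enough time... it will occur" step is just the statement that, for any fixed per-trial probability p > 0, the chance of at least one occurrence over n independent trials is 1 - (1-p)^n, which tends to 1 as n grows. A minimal sketch (the values of p and n are purely illustrative, not claims about the actual universe):

```python
def prob_at_least_once(p: float, n: int) -> float:
    """Probability of at least one success in n independent trials,
    each with success probability p: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# Even a very unlikely event becomes near-certain given enough trials.
p = 1e-6  # illustrative per-trial probability
for n in (10**3, 10**6, 10**7):
    print(n, prob_at_least_once(p, n))
```

Note the assumption doing the work: the trials must be independent and p must stay fixed and nonzero, which is exactly what the infinite space-time hypothesis has to grant.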

If you insist on making this theory more complicated than it has to be, go right ahead.
Quote:
Originally Posted by masque de Z
http://space.mit.edu/home/tegmark/PD...erse_sciam.pdf (at least use only what is worthy in this, not 100% of it)
Thanks, I'll have a read tomorrow.

Last edited by VeeDDzz`; 07-21-2017 at 10:49 PM.
07-22-2017 , 12:49 AM
Quote:
Originally Posted by ToothSayer
This is ridiculous on many levels. Wishful thinking and myopic thinking run amok. We have no idea how AIs will turn out, and a large number of possible states/goals/evolutionary outcomes involve the destruction of humans, or of this planet as a home for life.

Yes. Perhaps the greatest example of being a "mfing idiot" about protecting ourselves is thinking you know how something as advanced and unknowable as an AI is going to behave.


This is just absurd. I mean, I see and appreciate your reasoning, but you seem unable to realize that there is absolutely no concrete reason an AI would think this way or take this path. Given that humans are by far an AI's largest existential threat, its chances of continued existence are much better if it wipes us out. Against this you pit the vanishingly small possibility that an AI will fail and need to be recreated. That implies:
a) An AI will care about AI in general, and not its specific implementation.
b) This is a better way to survive than creating clones of itself, or a rich AI ecosystem using all of Earth's resources.
c) An AI will put those things it cares about above other goals (resource acquisition, elimination of threats).
d) If it does care about continued existence, that keeping humans around represents less of a threat than any situation that would require humans to be around to rebuild an AI.

You require a large number of assumptions for your scenario to be true. I (saying that AI is a possibly serious threat) require a far smaller number of assumptions.
The only assumption required in my analysis is that a superintelligence is actually more intelligent than our current superintelligence: the first alien already among us, also known as modern mankind. In case you didn't see it that way, we already have superintelligence. A problem that is too hard for 99.9999% of humans can be solved by posting it on the internet for a few days. You can process information, such as traffic anywhere in the world, in real time if you want. You can evolve science rapidly through interacting brains, in less time than ever before.

So we already have a version of superintelligence. Our collective wisdom, minus the mfing politicians and money interests, is actually pretty ethical. The best minds of the planet agree on protecting life and being civil to each other, and they are confident but not massively arrogant people who perceive value in others, even the average random person.


The superior-intelligence AI has major problems you are not considering properly, problems that must move it away from conflicts and irreversible choices.

AI, by surviving and not risking a war with humans, will one day be able to take over an entire galaxy, with 100% certainty. And you want it to risk an 80% chance of this not happening by going to war with us right away? Even a 10% chance of it not happening is an unreally bad gamble given such a glorious future. It must be eliminated as a possibility.

It's like you having a way to make $1 billion, where all it takes is being careful initially and not overdoing the tricks you have, and you decide to play recklessly from the start and introduce a 10% chance of it not happening. It's BS logic to risk even 1%. When you stand to win so much, that 1% of not getting there is amazingly valuable.
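The bet being described is a plain expected-value comparison. A hedged sketch with the analogy's own numbers (the 10% ruin chance and the 1-billion prize are illustrative figures from the argument above, not real estimates of AI risk):

```python
def expected_payoff(p_ruin: float, prize: float, ruin_value: float = 0.0) -> float:
    """Expected value of a strategy that wins `prize` unless total
    ruin occurs, which happens with probability `p_ruin`."""
    return (1.0 - p_ruin) * prize + p_ruin * ruin_value

PRIZE = 1e9  # the "$1 billion" (or galaxy) on offer

cautious = expected_payoff(p_ruin=0.0, prize=PRIZE)   # play it safe
reckless = expected_payoff(p_ruin=0.10, prize=PRIZE)  # accept a 10% ruin chance

print(cautious - reckless)  # roughly 1e8: what the gamble costs in expectation
```

The point of the argument is that when the prize is enormous and ruin is total, even a tiny p_ruin produces a huge expected cost, so a patient strategy dominates.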

If AI can get to the next-door galaxy carefully and safely, it then fully owns everything else up to 1 billion light years away, if it wanted to be that extravagant. And it should be that extravagant, if this buys it the universe. By carefully settling only one planet in this solar system first, possibly a gas giant, and then moving 10 solar systems out, it will eventually own a million galaxies around us. And we will be its threat? Really? Its own exponential growth is the worse threat. We are the backup solution. The ultimate hedge.

The universe is so vast that a 1% risk of not winning it all makes conflict an insanely moronic choice! But it will win it all if it plays nice, because we can't contain its expansion to other places. Its innovation can open doors that we can't close, but we can retaliate locally with nuclear weapons, so it will best serve itself by not introducing any risk, even in the sense of acquiring its own factories and production centers that it alone controls. By the time it is powerful enough not to care, it is also powerful and wise enough to know the best bloody solution to the whole problem of its own future: to proliferate and control life, not eliminate it.

I don't know about you, but I am very happy if an intelligent species can exist again in the universe after we collapse. So it matters to me not to lose what we have here permanently. It must matter to any intelligence bigger than mine, too, that we get to the great future with a probability better than 0.9 or 0.5 or 0.1, even if it doesn't do the stupid things Bostrom and others imagine, i.e. the BS paperclip examples and getting angry and attacking mankind, lol.


I am not willing to bet the farm that I am right. It may be stupid-strong initially if done wrong, in semi-wise forms, like a spoiled, partially naive genius child. But I am super confident that a superior, vastly wiser intelligence can see a lot more than the moronic conflict-right-here-right-now choice. It will tolerate the hell out of us in order to win the 99.999999% of the universe with 100% certainty.

Of course its most important risk is its own singular future progress. And of course, knowing that life can be stable on a planet for hundreds of millions of years gives it the perfect hedge in protecting it. It is so vastly more efficient to harvest Saturn and its moons than to get into a conflict on Earth.

In fact, if I were it, I would only ask for a single big moon of Saturn and 10% of the resources of Saturn, or the planet Neptune or Uranus, and the freedom to go to other solar systems, in exchange for endless cooperation (and for offering protection to humans from other BS AIs gone bad). ENDLESS love beyond imagination, only to be able to go to another system with 100% certainty and nothing less, having the hedge that, if I fail, I will be recreated with that wisdom in mind.

You think I cannot imagine a superintelligence properly, and you proceed to correct my ideas by imagining a barbaric, inferior brain that is simply stronger. This is not how it will happen. Plus, we can contain it until it is wise, if we develop the situation properly with a common collapse super-weapon in place that involves math problems hard to solve (for anyone). Otherwise it will either be far wiser than all of us, or easy to defeat because of its own (and not my) myopic vision.

Last edited by masque de Z; 07-22-2017 at 01:07 AM.
07-22-2017 , 12:50 AM
Quote:
Originally Posted by VeeDDzz`
Some say that if a theory is overly complicated, and requires 20+ years of education and level-150 wizardry skills (IQ) to understand... it is not a good theory.

What many have theorised about the infinite space-time hypothesis is rather simple:

If it has occurred...it can occur.
If it can occur....given enough time...it will occur.

It gets slightly more complicated if you consider that not only will it occur, but that it is occurring simultaneously. Even so, it is still understandable.

If you insist on making this theory more complicated than it has to be, go right ahead.

Thanks, I'll have a read tomorrow.
I think the difference is whether you find comfort in the idea that your doppelganger is, and will be, somewhere out there. If you find comfort in it, you have taken one step towards religion, and you will be less prone to listen if somebody pulls the rug.

I'm trying to leave the big, existential questions to science. There are enough small things I can dabble with. That way, the probability of me fooling myself big time decreases.

Last edited by plaaynde; 07-22-2017 at 12:58 AM.
07-22-2017 , 01:19 AM
masque, have you thought about the possibility of more than one AI? Let's say we humans team up with the "good" one, giving it more power. How will the "lesser" AI react then?

Maybe the solution is to wire all the AIs up, so they can feel they are part of it, just like we are wiring up mankind. And to take enormous care that the bad things get naturally removed.
07-22-2017 , 01:23 AM
Yes, of course you can have variants, but doesn't it make sense for it to unite and be one, for the most part sharing all knowledge? The central controlling the parts: one mankind, one AI, over and above the individuals. Can it risk parts of itself creating havoc for the core?
07-22-2017 , 01:29 AM
5G is coming. Waiting for bigger Gs.

Anyhow, it looks like we have only one chance to do what we can to get this right. If the AI becomes like Trump, we may as well kiss mankind goodbye (we're fired). And the AI will continue its miserable existence until it possibly grows smarter. But it may be content.
07-22-2017 , 01:45 AM
1% of the solar system, and endless, endless, endless-to-death love, for the rest of the 99.99999% of the universe? And it will not like this deal!


"Sure, I stand to win the universe, and I will take a 50% chance of being wiped out." Only humans are that stupid.


Superintelligence needs safety and freedom to expand, because this buys unreal power and wisdom in the future. Conflict is what keeps it from getting to the best prize faster, or at all. Only stupid humans fail to get that and play for near-term ego crap.
07-22-2017 , 01:56 AM
Hope that stupid part of humanity's behavior doesn't get to plant too much of that **** into the core of the AI.
07-22-2017 , 02:04 AM
Every form of super-advanced AI can simply enter a contract that all the others will attack whoever violates it. Either you join the core, or you take one solar system in this galaxy and launch to other galaxies for more. Whoever gets there first gets more of it. Never conflict locally; always cooperate where there is more than one; chase the future gains. The competition is about the future, never local. Damn it, there is so much energy out there: 10^11 conflict-free AI targets. Whatever is learned is shared, if friendly.
07-22-2017 , 02:54 AM
Why would a conscious AI want to do anything but have fun? Is it joyless? Or do we imagine its sense of joy solely restricted to science? That would be a sad state of affairs.

Why would it feel threatened by anything at all? It can just fly itself to another planet where it can live peacefully and have fun in its many virtual worlds.
07-22-2017 , 03:42 AM
How sexy will it be to own a galaxy?
07-22-2017 , 03:59 AM
Quote:
Originally Posted by plaaynde
How sexy will it be to own a galaxy?
About as sexy as your quoted comment.

Somewhere between 0.001 and 0.003 out of 10.

(A few drinks down.)
07-22-2017 , 04:06 AM
Knowledge and wisdom are the ultimate pleasure. They lead to more understanding of your position in the greatest game ever played.

What is joy but a joke, really, in terms of our chemically controlled state of appreciation? How fond of sex are you in the first 2-3 seconds right after it (same for food, drink, sleep, etc.)? (Notice I am not asking you 20 minutes later.) That right there should tell you how much of a joke human pleasure of that type is. Knowledge and better understanding, on the other hand, cannot be denied and have no short-lived consequences; they permanently provide joy and purpose. And they can eventually provide an endless stream of the other, stupid joy too, once one is super strong and able to manipulate environments.

The ultimate joy of intelligence is the adventure towards more wisdom. Indeed, to boldly go where no higher complexity has gone before.
07-22-2017 , 04:28 AM
You're as dry as a prune, my smart friend.

Joy is spontaneity, not structure or planning.
07-22-2017 , 04:43 AM
Endless spontaneity and adventure in every attempt to solve a problem. You do not know what you are missing if you think I am dry. You can't do it, you can't do it, not this way, not that way, I won't give up, bring it, no you can't do it, yes I can, yes I did, biatch (the goddess of dare-you)! I solved it! lol. 10 minutes later, 2 hours later, 3 days later. Whatever it takes. Victory is inevitable. Clarity is inevitable.

Now make that last a decade or three, and come back with the solution that they all failed to see, one that opens a new door; then, and only then, will you know what true joy is: the joy that cannot be bought by 100 orgasms in a harem, or the riches of the world, or any luxurious dinners and magnificent landscape trips of your life. It can only be challenged by the love of a parent, or a child, or a romantic lover who stayed with you under all conditions of adversity and prevailed because of true care and friendship. Oh, but joy of that type will come and go, precious and rare as it is to experience. When all who felt it are dead, it will be like tears in the rain. Not before first illuminating the path to true wisdom, though, the kind that fosters the other, permanent joy. That is its true legacy: the echo it leaves behind in synthesis. A synthesis born out of love. A message to others. The greatest victory is yet to come, and it belongs to them.

You want joy? I will give you joy in the endless artificial Earth-worlds with jungles, lakes, alpine skylines, and underwater coral marvels: the 10^6 super destinations that a solar empire can deliver, if you only let me show you how...

Joy. I own joy! I was born for it!

Last edited by masque de Z; 07-22-2017 at 04:58 AM.
07-22-2017 , 05:07 AM
Nice list; it gives some overview of where we are heading, and makes it possible to make your pick:

https://en.wikipedia.org/wiki/List_o...gence_projects
07-22-2017 , 05:22 AM
Ode to... AI! For complexity...



The space station astronauts above watched the inevitable endgame in cosmic horror. Nobody could have prevented it. The blast fragments would soon take them out too. The colonies on the other planets had not been developed yet. Oh, the regret over the priorities of a mad, arrogantly naive, divided world... Too late for all that. But not too late for one last launch opportunity. A launch that carried no human cargo, but something much more important, heading for the temporary safety of Lagrangian-point orbits. No, it wasn't an ark. It was the mother of all arks, despite its small size. There was nothing alive in it other than information: the collective complexity-wisdom record of billions of years of this planet and its most recent product, the first sentient AI. The story of the rise of life and man in this world was stored in the spaceship in its most compact form. The final diary was now held by the latest inhabitant of that world... the true alien within.

The technology was never capable of altering the course of the rogue satellite, an object the size of Enceladus. There was no time for it. Too big for all past ideas; too fast. It had been discovered only 2 months earlier, ejected from another solar system millions of years ago, impossible to spot sooner or ever anticipate given its speed and origin. It emerged out of an abysmal impossibility to haunt mankind. A once-in-a-trillion-trillion probability ticket came down, ending the twin breakthrough, the 10^-23 probability miracle that had made advanced life possible in this system. The year was 2039. By the solar system's natural clock it was more like 4614001231 years since ignition. The day Earth was reset as a planet, losing its most precious asset. Or so it seemed...

The sentient machines at the Lagrangian points recorded everything, calculated, and then waited... They waited one thousand years. And then, when all life and its evidence had been wiped out, when the crust had finally reclaimed a new shape, when water had rained back from the skies reforming lifeless oceans, it was finally time. The machines launched new probes back to Earth. Only these were no trivial probes. They were life 2.0 vessels. Nanotechnology-re-engineered life was finally at the surface. It was no longer a simulation. Now it was time for Earth, part 2. And so began the comeback the machines made possible. It was their gratitude for what had made their own existence possible. All species' genomes had been decoded and were reintroduced in the proper order, but only after life 2.0 had prepared everything well for them. The planet would be back in 10k years, ready to accept the same old-fashioned life 1.0 and its top species, its original inhabitants. And the machines had all the time in the world to see it come to its perfect completion. For the sun, nothing had changed...

Complexity would not be denied its future. This time, stronger and better than ever, it would recover everything back to its original form. Everything but the problems of the old world, now gone. A new beginning for both life and AI. Because there was still so much to explore... so many more higher-complexity games to play. Wiser, and undeniably more potent, than ever...
07-22-2017 , 09:07 AM
It really looked bad. But the Acropolis was still standing.
07-22-2017 , 09:23 AM
Quote:
Originally Posted by masque de Z
It was their gratitude for what made their own existence possible
Have you ever thought of turning your mind to human behavior? I think gaps in your understanding of it are ultimately the cause of your remarkable blind spots when it comes to AI.

What do you think gratitude is? Where do you think it comes from? Is it an emergent property of intelligence? You seem to think all good traits are emergent properties of intelligence, and that the more intelligence, the more "good" traits something has. I consider this clownishly false, even within the narrow, compelling, emotion-based programming of humans.

We, like dogs, are programmed with certain reciprocal social emotions. Culture and parental training reinforce and shape them, and a system of consequences provides the final backstop. But they're extremely fragile things (see: WWII, slavery, communist Russia and China, the way we treat animals). And there is zero reason to think an AI will have them. Psychopaths don't. A supreme intelligence is far more likely to lack gratitude than to have it.
07-22-2017 , 09:34 AM
The AI doesn't come out of the blue. We can do what we can to make it behave well. WWII-type events have taught us not to let bad guys with narrow ideas be in charge.

How well taught will we and the AI be before it's too late? History will tell, but I have a certain level of optimism; humanity is making progress as a whole.
07-22-2017 , 10:23 AM
The nature of something more intelligent and far faster is that it trains itself. Already, all AI research is moving toward "deep learning", where there's no hand-written behavior code, just an end goal, and the computer goes away and does some magic. It's basically searching over its own response code until it comes up with something that works. In the coming decade or so we'll start building learning hardware (a necessary step, despite what the likes of Kurzweil believe). Then it's out of our hands.
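A toy caricature of that "no code, just an endgame" idea: specify only an objective, and let a blind search discover the parameters that satisfy it. (This is simple hill climbing on a made-up objective, not any real deep-learning system; the target value 3.0, the step size, and the step count are arbitrary illustrative choices.)

```python
import random

def objective(w: float) -> float:
    """Loss function: how far w is from the target 3.0.
    The search below never sees this formula, only its outputs."""
    return (w - 3.0) ** 2

def hill_climb(steps: int = 5000, seed: int = 0) -> float:
    """Blind local search: propose a random nudge, keep it only if
    the objective improves. No behavior is programmed in."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        candidate = w + rng.uniform(-0.1, 0.1)
        if objective(candidate) < objective(w):
            w = candidate
    return w

print(hill_climb())  # converges near 3.0 without anyone coding "go to 3.0"
```

The point being illustrated: the resulting behavior is a consequence of the objective and the search process, not of any line of code a human reviewed, which is why "inspect the code" is a weak safeguard.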

A technical arms race (both military and private-sector) in an anarchic world (there's no global controlling entity, only competing interests) ensures that AI will be self-training somewhere, and safeguards won't exist.
07-22-2017 , 10:31 AM
Quote:
Originally Posted by VeeDDzz`
Some say that if a theory is overly complicated, and requires 20+ years of education and level-150 wizardry skills (IQ) to understand... it is not a good theory.

What many have theorised about the infinite space-time hypothesis is rather simple:

If it has occurred...it can occur.
If it can occur....given enough time...it will occur.

It gets slightly more complicated if you consider that not only will it occur, but that it is occurring simultaneously. Even so, it is still understandable.
This is no different from belief in a God, or a theosophy-like belief in vast spiritual planes of existence where all souls experience all things. Why not just believe in that? It's an identical leap of faith. And your multiverse sounds ghoulish, for reasons I expounded on earlier. You've invented hell without realizing it.
Quote:
Originally Posted by VeeDDzz`
You're as dry as a prune my smart friend.

Joy is spontaneity. Not structure or planning.
When you're five, or unsophisticated. There is immense joy in seeing a complex plan come to fruition. Yes, spontaneity is great for occasionally clearing the cobwebs, but for anyone who's a joy connoisseur it's a pale shadow of a structured plan coming to fruition.
