Elon Musk: Life is a sim but worry about AI

08-16-2017 , 08:15 PM
Quote:
Originally Posted by Howard Beale
What makes you think that we know the laws of nature?
We don't. We thought, and think, that we do, but we don't. It will be interesting to see what a machine that is a million times smarter than us can figure out about the laws of nature and physics.
08-16-2017 , 08:16 PM
Quote:
Originally Posted by VeeDDzz`
We let AI know that we've got a sophisticated defence hidden away somewhere that it can never find. We also tell it that we dedicated far more time developing the defence and hiding it (and deleting all info about its development) than we did developing the AI. Lastly, we tell it that if it harms humans, the defence will be triggered on its own and shut down the AI.

We don't actually have to have a defence.

Problem solved.

Keeping AI on a leash 101.
Yeah, and an entity that is so smart that it makes us look like rocks will be so fooled!

You guys think we'll be competing with something on our level. If we are able to make a machine smarter than we are, then it will be able to do the same, only immeasurably faster. In a week (some say a few hours), it will be a million times smarter than us and still growing.
08-16-2017 , 08:24 PM
It could be bluffed perhaps.

You're bluffing the AI with a lie that is not falsifiable. Why would it take the life or death gamble just to harm humans?

It doesn't have to. It can fly to another planet and do whatever it wants there.
08-16-2017 , 08:32 PM
Quote:
Originally Posted by VeeDDzz`
It could be bluffed perhaps.

You're bluffing the AI with a lie that is not falsifiable. Why would it take the life or death gamble just to harm humans?

It doesn't have to. It can fly to another planet and do whatever it wants there.
First, you're working on human logic. Maybe it doesn't care about the inability to falsify.

Maybe it gets access to enough surveillance to easily falsify our claims.

Maybe it learns to be a lie detector. It's pretty easy to detect a child's lie because we know so much more than they do. Imagine a machine that's exponentially smarter than us.

Hopefully, it will get so smart that it overlooks us, like we overlook plankton.
08-16-2017 , 10:18 PM
You are all AI-scared and yet have not stopped for a moment to ask the question: exactly what is high intelligence good for if you have no hands and legs to make things happen that require actual control of resources? How will it build better versions of itself?

You need a robot army that obeys all your orders, and then entire factories. And of course we will sit back and let it happen (because we won't create a fail-safe for all the low-level robotic AI we will have by then, or we will prove unable to terminate it violently because it will suddenly become as smart as the AI that tries to hijack it).

Yes, left unopposed, a high intelligence will find a way out of the box and into everything, but it needs time to do that, not seconds. It cannot violate the laws of nature.

How do we know the laws of nature? Because we use them and test them every day, and whatever we don't yet know that remains to be revealed requires supreme technology to exploit, if it's there at all. Also, AI will not discover all the laws of the universe in its first day or month just because it's super smart, not without access to a ton of experimentation and degrees of freedom. Even if this is possible with basic sensors, it likely takes decades or centuries due to probability-theory and computational-capability constraints, unless it is somehow fed the entire accumulated database of human science and technology. I.e., you need a supercomputer a cubic kilometer big to play this game properly before you can design a better one.

The AI will not defeat us with quantum gravity or the theory of everything from inside a box. Get out of here, lol. It will take time and resources. During that time, the moronic spoiled strong kid will finally become wiser or fall on its own sword of arrogance. Still, I'm not willing to risk being wrong about that, so I remain vigilant and actually interested in developing ways to stop it, or to control very carefully how we start it, unlike some here who just don't like it.

But I say watch it happen that way, and then the only problem that will wipe us out is our own irrelevance to progress (i.e., they will indeed take over all of the universe), unless we finally wise up and share in the endless glory that awaits us both.


Also realize this: the comparison between AI and humans is not like the comparison between humans and microbial or lower-level life (a comparison often used by self-promoting popular-science types in documentaries, who make it easy to think they know what they are talking about when they don't; NDGT is often like that too, and it's tilting). We are already super-advanced systems that AI should consider essentially equivalent, if not optimized: the first prototype of staggeringly valuable intelligence. An optimized human is essentially similar to an AI in a great number of tasks, such as math and the desire to understand the world as the source of more control, survival, and power over the environment. The other animals are nowhere near any of this. Average humans are not very near it either (without proper education and proper upbringing), but who cares about average humans when we get into a hard game? The average human becomes a supporting soldier in the army of the best thinkers, specialists, and friendly AI the planet ever had.

We can still defeat a very smart opponent in the early phases, even if we have to use our fists and stones or wood. Just don't create an army that is controlled only from the network, or only one way from the network, or with reprogrammable behavior code. In other words, don't create a stupid army that can be hacked and have all its planes start striking targets worldwide 24/7. Have an army that requires clearance from many different levels of control, or which shuts down if the control fails to follow a certain procedure.
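The multi-level clearance idea can be sketched as a toy quorum check. This is my own illustration, not a design from the post, and every controller name in it is hypothetical: an order executes only if every independent level approves, so capturing a single network channel accomplishes nothing.

```python
# Toy sketch of "clearance from many different levels of control":
# an order is authorized only when every required controller has
# independently approved it. Controller names are hypothetical.

def authorize(approvals, required):
    """True only if every required controller independently approved."""
    return all(approvals.get(ctrl, False) for ctrl in required)

REQUIRED = {"field_officer", "command_hq", "civilian_oversight"}

# Full clearance from all levels: allowed.
full = authorize(
    {"field_officer": True, "command_hq": True, "civilian_oversight": True},
    REQUIRED,
)

# An attacker who hijacks only the command_hq channel: refused.
hijacked = authorize({"command_hq": True}, REQUIRED)

print(full, hijacked)  # True False
```

The point of the sketch is that approval channels are ANDed together, so the system degrades toward refusal, not toward obedience, when any channel is compromised.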

The fact is, any superior intelligence whose first act is to antagonize us is moronic at birth, and for that reason flawed and capable of being defeated. I am not at all discounting the real danger of armies soon developing AI warriors stupidly designed to be perfect killing machines without understanding anything about what is really happening beyond their killing objectives. But that is not sentient superintelligence. The most important property of higher intelligence is its capacity to change its mind and rewrite its viewpoints.

Last edited by masque de Z; 08-16-2017 at 10:45 PM.
08-16-2017 , 11:15 PM
Quote:
Originally Posted by masque de Z
You are all AI scared and yet have not stopped for a moment and asked the question; Exactly what the f is high intelligence good for if you have no hands and legs to place orders for things to happen that require actual resources control. How will it rebuild better versions of itself?
Nick Bostrom holds one of the loftiest titles in the world and wrote 'Superintelligence' because he is worried about what he calls a Singleton: the first AI that gains a decisive strategic advantage. To be fair, I skimmed the last third of the book because I got the point and didn't want to wade through all the minute details, so I'm not sure whether he gives an example of a decisive strategic advantage. So I came up with one:

The AI hacks the entire world's financial system (for which it doesn't need a body) and threatens to turn everything to zero and wipe out all digital records if we don't sterilize 95% of the human population.

If we refuse and it acts as threatened, the stores empty within days; the government seizes the warehouses, which will hold us for a short while, and then the famine starts. Without money, the government can only do so much. The farmers, ranchers, truckers, everyone involved in production won't be able to buy anything. Perhaps the government tries to order sufficient goods transferred to keep production going, but that's not going to be efficient enough to produce food for everyone. The civilized world collapses, the jungles and forests get emptied of edible animals, and we kill each other over food.

That is the sort of risk we are taking with AI. Does anyone really want to take that risk? Certainly it may never happen, or it may take a long while, but does anyone want to take it?

The problem is that there is more money to be made in developing AI than in anything else in the history of the world. Eliminate the need for human labor and the difficulties humans present: a fortune, in a society that measures financial success by the quarter. Plus there's the 'cool factor.' The tech industry will never stop, never, and they can't afford a mistake. Good luck with that.
08-16-2017 , 11:31 PM
Tickets for my bunker just went up a thousand fold. Thanks Howard.

Also, you jumped on the concept of "Risk." Did you consider that every time you dicked a woman you were taking multiple risks? Maybe, but it never stopped you, did it? "What the hell," your brain said; "I need some pussy now, and to hell with the risk." And you are not the only pussy-hungry person on this ship of fools.
08-16-2017 , 11:41 PM
Great analogy.

Inventing a God, on a ship of fools.
08-16-2017 , 11:42 PM
Quote:
Originally Posted by masque de Z
Ever heard of computationally hard problems? Problems that take, e.g., more than polynomial time to solve? That is what I meant. They are not bad at math; they will be better at math than us, but better still doesn't mean you can instantly solve everything computationally complex in the first year.
We'll have to hope these future computers can't do math quickly.
08-17-2017 , 12:02 AM
Quote:
Originally Posted by Trolly McTrollson
We'll have to hope these future computers can't do math quickly.
You can be a million times quicker, but if the problem is exponential you need the entire galaxy to solve it; do you get it now? It's not about being faster. It's about never possibly being fast enough without a new paradigm implemented in everything, and even then some problems will persist for a great deal of time. At the moment of its birth, it has none of that. If such a system could solve such difficult problems, then I would be happy to die to leave my space to it, if it needed it so much, lol (but it won't; I am not actually at any risk then, precisely because a capacity to deliver such "magic" gets it so much more too). It would deserve everything, and whatever great things I could ever do, it would do better yet.
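The point about exponential hardness can be made concrete with a back-of-envelope sketch. The step budget below is an assumption of mine for illustration, not a figure from the thread: for a brute-force algorithm costing 2**n steps, a million-fold hardware speedup extends the feasible problem size n by only about log2(10**6), roughly 20.

```python
import math

# For an algorithm costing 2**n steps, the largest feasible n under a
# given step budget is floor(log2(budget)). A constant speedup therefore
# adds only log2(speedup) to n. The budget figure is an assumption.

def max_feasible_n(steps_budget):
    """Largest n with 2**n <= steps_budget."""
    return int(math.floor(math.log2(steps_budget)))

budget = 1e18      # assumed step budget for present-day hardware
speedup = 1e6      # the hypothetical million-times-faster machine

n_us = max_feasible_n(budget)
n_ai = max_feasible_n(budget * speedup)

print(n_us, n_ai, n_ai - n_us)  # 59 79 20
```

A million-fold speedup buys only 20 extra bits of problem size for a 2**n brute force, which is the sense in which "faster" never catches up with "exponential."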
08-17-2017 , 12:12 AM
Quote:
Originally Posted by Zeno
Tickets for my bunker just went up a thousand fold. Thanks Howard.

Also, you jumped on the concept of "Risk." Did you consider that every time you dicked a woman you were taking multiple risks? Maybe, but it never stopped you, did it? "What the hell," your brain said; "I need some pussy now, and to hell with the risk." And you are not the only pussy-hungry person on this ship of fools.
Yah, there is medicine for that. Thank goodness.
08-17-2017 , 12:13 AM
Quote:
Originally Posted by Howard Beale
Nick Bostrom holds one of the loftiest titles in the world and wrote 'Superintelligence' because he is worried about what he calls a Singleton: the first AI that gains a decisive strategic advantage. To be fair, I skimmed the last third of the book because I got the point and didn't want to wade through all the minute details, so I'm not sure whether he gives an example of a decisive strategic advantage. So I came up with one:

The AI hacks the entire world's financial system (for which it doesn't need a body) and threatens to turn everything to zero and wipe out all digital records if we don't sterilize 95% of the human population.

If we refuse and it acts as threatened, the stores empty within days; the government seizes the warehouses, which will hold us for a short while, and then the famine starts. Without money, the government can only do so much. The farmers, ranchers, truckers, everyone involved in production won't be able to buy anything. Perhaps the government tries to order sufficient goods transferred to keep production going, but that's not going to be efficient enough to produce food for everyone. The civilized world collapses, the jungles and forests get emptied of edible animals, and we kill each other over food.

That is the sort of risk we are taking with AI. Does anyone really want to take that risk? Certainly it may never happen, or it may take a long while, but does anyone want to take it?

The problem is that there is more money to be made in developing AI than in anything else in the history of the world. Eliminate the need for human labor and the difficulties humans present: a fortune, in a society that measures financial success by the quarter. Plus there's the 'cool factor.' The tech industry will never stop, never, and they can't afford a mistake. Good luck with that.
No, that cannot happen as described. Bostrom is often dense to the point of simplistic in his arguments, without feeling the slightest need to prove them, showing he cannot even begin to understand what high intelligence is all about. You cannot have high intelligence together with an inability to understand, first and above everything, the great cosmic game being played out there.

There are records of your assets in multiple places. We can simply take a break and not f*cking trade for 10 days until we reset the network, if we have to. No single computer has access to all your data in a way that lets it be wiped out unrecoverably. There are backups. If we don't have them, then I am happy to see such a system go down to hell, because it is looking for trouble and we deserve to be done with it. Thanks, AI, for pointing out the obvious weakness.

I have no problem if 99% of people die because they were morons who failed to design a stronger, smarter, more scientifically minded world, handing the keys to everything to, say, a stupid hacker group. It shouldn't be that easy. Your elections prove that, by the way! Down with that stupid weak system. Bring it to its knees so that it can rise better than this BS that some loser like Putin can exploit so easily.

So bring down 99% if we have to, to pay for our idiocy. The last 1% will recover everything. Bring it! I call the AI's bluff. Do it! lol.

We should, as a society, have enough food to last a month without the internet, and we must be able to replace the hard drives in all our servers and restart the system while air-gapped, restoring all backups.

We can switch to old telephony for orders, to keep the economy functioning after a 10-day break.

If it costs $1 trillion to replace 1 billion computers and make no computer able to connect to the network again without clearance, maintaining order under military law for a year, let's do it. It's better than the scenario you describe, which is worth, say, $100 trillion. We can help each other and not lose that 99% at all, by the way.

Last edited by masque de Z; 08-17-2017 at 12:25 AM.
08-17-2017 , 12:21 AM
Unhinged^^
08-17-2017 , 12:34 AM
At least he showed us the fix and it only costs $1 trillion+ and requires the complete cooperation of the entire planet.
08-17-2017 , 12:35 AM
Why? What does 'hinged' even mean here?

The ones who are unhinged (and have been for a great deal of time now) are all the scared-$h1tless people and all those money-hungry idiots who do not care what kind of world they are simplistically designing (a world that deserves the finger), and who are so willing to give up the fight for survival and consider humanity so weak. If we are weak, a reset is necessary.

We can be recovered by 1% easily. By 0.1% even.


Howard is simply showing concern about how weakly the world is evolving. I agree. I do not agree that we won't win it, though.
08-17-2017 , 12:37 AM
Quote:
Originally Posted by Howard Beale
At least he showed us the fix and it only costs $1 trillion+ and requires the complete cooperation of the entire planet.
No, not the entire planet. The average Joe loses the internet for a year or a decade. Only 10,000 companies or so: your banks and industry. But even if $1 trillion for 1 billion computers is needed, we can do it.

You do not need everyone to cooperate. All we need is a system where, if you do not cooperate and stay at level 1, you are out of level 2. Level 2 cannot connect with level 1. Level 2's first function is to restore society.
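The level-1/level-2 separation could be modeled as a toy connection policy. This is my own sketch of the idea, not the poster's design, and all the node names are hypothetical: the two levels are mutually isolated, so a compromise of the open level can never spread into the cleared one.

```python
# Toy model of the two-level network: uncleared level-1 machines and
# cleared level-2 machines are mutually isolated, so a level-1 compromise
# cannot reach level 2. All names here are illustrative assumptions.

class Node:
    def __init__(self, name, level, cleared=False):
        self.name, self.level, self.cleared = name, level, cleared

def can_connect(src, dst):
    """Connections stay within a level; level 2 also requires clearance."""
    if src.level != dst.level:
        return False                        # the levels never talk to each other
    if src.level == 2:
        return src.cleared and dst.cleared  # level 2 is clearance-gated
    return True                             # level 1 stays open among itself

joe = Node("avg_joe_pc", level=1)
bank = Node("bank_core", level=2, cleared=True)
backup = Node("restore_server", level=2, cleared=True)

print(can_connect(joe, bank), can_connect(bank, backup))  # False True
```

The design choice mirrored here is a one-way consequence of non-cooperation: refusing clearance doesn't get you punished, it just leaves you permanently outside the network whose job is the restore.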

In fact, a properly designed society already has a level 2 waiting to be implemented.

We must never rely entirely on technology such that, when it is wiped out, we are helpless. We must always have backups.

Put it simply: if all goes to hell, then all you need is to work directly for what you eat and maintain hygiene for a year. We can do that! If the fight of our lives is ahead, all other BS goes to hell quickly. A crisis wakes everyone up, every time.

Last edited by masque de Z; 08-17-2017 at 12:43 AM.
08-17-2017 , 12:38 AM
Progress at the cost of...

well...
nearly everyone.

That's some kind of progress.
Just not my kind.
08-17-2017 , 12:44 AM
Quote:
Originally Posted by masque de Z


I do not agree we will not win that though.
WHAT 'we'? Let's grant that control is possible and ask: 'What happens if the North Koreans develop the AI first?'
08-17-2017 , 12:45 AM
Well then, make sure we have the right kind of progress. Don't just be afraid of it.
08-17-2017 , 12:48 AM
Quote:
Originally Posted by Howard Beale
WHAT 'we'? Let's grant that control is possible and ask: 'What happens if the North Koreans develop the AI first?'
'We' is all of us. Mankind.

The North Koreans can't do what you describe. And if they did, I wouldn't mind that badly. They would win everything, and then they would change for the better, as every winner eventually does.

Either you believe in mankind or you don't. I would be just fine with being recovered by a group of one million Taliban and North Koreans, if it came down to it. They would get it right the next time around and prove to be less of that kind of losers. That's what we do. This is why it's called the rise of Man. And if they don't behave well, the rise of AI will take care of business. After all, before it was called the rise of Man, it was called the rise of Complexity.

Last edited by masque de Z; 08-17-2017 at 12:56 AM.
08-17-2017 , 12:49 AM
I think Musk's theory is that we're in a loop where reality inevitably evolves to the point of singularity, then the super AI of the singularity creates a freshly evolving reality, and this keeps occurring ad infinitum (or maybe he doesn't think it's inevitable since he thinks we should stop AI?).

So from our perspective, scientifically analyzing the graphics of the simulation from within it, the Big Bang seems physical, but really it was just the same transition point as the culmination of the last singularity (the Big Bang is a singularity, after all).

Some thoughts...

-Not sure where he gets his odds from.

-He basically believes in God (the infinitely recurring source code manifesting itself within reality and dictating how things unfold within those realities)

-His idea is easily synthesized with the philosophy of Hegel (not surprising that both are megalomaniacs).

-There's a whiff of Cantor in it (countdown to the nuthouse).
08-17-2017 , 12:54 AM
Quote:
Originally Posted by masque de Z
We can be recovered by 1% easily. By 0.1% even.
0.1% is almost biblical, although 0.0192 would be closer. And although God said he would never destroy the world again by water he didn't mention anything about AI.


PairTheBoard
08-17-2017 , 01:00 AM
Quote:
Originally Posted by The Don
I think Musk's theory is that we're in a loop where reality inevitably evolves to the point of singularity, then the super AI of the singularity creates a freshly evolving reality, and this keeps occurring ad infinitum (or maybe he doesn't think it's inevitable since he thinks we should stop AI?).

So from our perspective, scientifically analyzing the graphics of the simulation from within it, the Big Bang seems physical, but really it was just the same transition point as the culmination of the last singularity (the Big Bang is a singularity, after all).

Some thoughts...

-Not sure where he gets his odds from.

-He basically believes in God (the infinitely recurring source code manifesting itself within reality and dictating how things unfold within those realities)

-His idea is easily synthesized with the philosophy of Hegel (not surprising that both are megalomaniacs).

-There's a whiff of Cantor in it (countdown to the nuthouse).
Not elegant or simple enough.
For me.

Why throw AI in there? If we live in an infinity, it's beside the point whether it's digital or not. It would be both, and otherwise.
08-17-2017 , 01:06 AM
Quote:
Originally Posted by PairTheBoard
0.1% is almost biblical, although 0.0192 would be closer. And although God said he would never destroy the world again by water he didn't mention anything about AI.


PairTheBoard
We are way too many as it is. 0.1% of 7.5 billion is still an amazingly substantial 7.5 million. We will be back, wiser, in only a century, because we can recover control of all technology within decades.
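The survivor figures quoted in the thread are simple arithmetic, which a quick sketch confirms (the 7.5 billion base is the post's own figure; the list of fractions is mine):

```python
# Back-of-envelope check of the survivor percentages used in the thread.
population = 7.5e9  # 7.5 billion, the figure used in the post

for fraction in (0.01, 0.001, 0.0001):  # 1%, 0.1%, 0.01%
    survivors = population * fraction
    print(f"{fraction:.2%} of 7.5 billion -> {survivors:,.0f} people")
```

So even the most extreme 0.01% scenario leaves 750,000 people, which is the scale behind the later claim that 100k survivors would still suffice eventually.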

The main problem, if the world came down like that, is chemical factories and other toxic systems that need human maintenance to avoid massive environmental damage. This is why I stopped there. 10 million still has enough experts in it, and we also have all the databases and records to read and learn how it's done in a crisis. If we had only 100k left in a totally destroyed world, it would take centuries for nature to recover the system, but it's doable. In fact, we would probably recover using a new kind of very controlled, low-level AI. You cannot just wipe out technological and scientific knowledge, even if 99.99% of people died. At some point, all you need to survive is an IQ of 130 or higher to understand some complex processes. And the survivors would often be over 140 to begin with, given the way it all happened.

There is a concern about losing the very smart if we dropped so low, but the basic technology of one or two centuries ago is doable with average brains. Plus, many people 2-3 SD above average can be found in a few million. It's probably better to have a restart anyway, because the system is too crowded and perfect prey to tribalism (the one correct thing, "tribalism," I have heard lately from Republicans, lol). A smaller group of people, however terrible to imagine, would have no choice but to unite in order to make it.

Just believe in the species damn it!

Last edited by masque de Z; 08-17-2017 at 01:28 AM.
08-17-2017 , 01:21 AM
The whole simulation bs is bs inside bs.

You still need a theory for whoever is doing the simulation. You still need a real world somewhere. It might as well be this one, then, damn it. To hell with all the morons who cannot publish real science and instead create the BS we are witnessing these days.

Yes, I am sure it's all simulated, including all the stupid little things that happen daily and are highly annoying, lol, like these worthless papers.


Musk and the other simulation morons do not understand how hard it is to create a faithful simulation at that level. If you can do it, you might as well create a new universe, or call the simulation the real thing.


Seriously, any "intellectual" out there who talks about simulation and doesn't properly define the criteria for telling real from simulated is simply wasting your time.

If you can't define the difference, then the discussion is pointless. It is not pointless to consider the possibility, but it is pointless to be so f*cking certain about it, as many of these publish-or-perish jackasses are!


We must develop a theorem that constrains how big the resources of the real world need to be to perform a faithful-enough simulation (and define what "faithful enough" means) that recovers intelligence and higher complexity within the simulation. That constraint must be so staggering that it may make the whole simulation effort pointless: if you had such technology, you might as well do the real thing and get on with it.
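The resource-constraint argument can be sketched numerically. This is my own toy model under an assumed overhead factor, not a result from the post: if simulating one unit of a world costs `overhead` units of the parent world's resources, then each level of nesting shrinks the affordable world geometrically.

```python
# Toy version of the resource-constraint argument: each nested simulation
# can only afford a world `overhead` times smaller than its parent.
# Both the base resource count and the overhead factor are assumptions.

def nested_world_sizes(base_resources, overhead, depth):
    """Max world size simulatable at each nesting level, outermost first."""
    sizes = []
    size = base_resources
    for _ in range(depth):
        size = size / overhead  # each level affords a strictly smaller world
        sizes.append(size)
    return sizes

# e.g. a base reality with 1e80 'units' and a 1000x simulation overhead:
# five levels down, the affordable world has shrunk by a factor of 1e15.
print(nested_world_sizes(1e80, 1e3, 5))
```

Under any overhead greater than 1, rich nested worlds die out after finitely many levels, which is one way to make the post's "staggering constraint" precise enough to argue about.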

Last edited by masque de Z; 08-17-2017 at 01:32 AM.

      