Superintelligent computers will kill us all

02-07-2016 , 01:04 AM
Quote:
Originally Posted by masque de Z
You keep consistently failing to answer, in terms of numbers, how the 10,000 TB will get out of an air-gapped system with hacking methods.
How many strawmen do you want to build and knock down in 30,000 words or less?

FFS, you're terrible.
02-07-2016 , 01:07 AM
Quote:
Originally Posted by masque de Z
That passes for a funny joke now? Even if I wrote 100 bytes every second of my life, it still wouldn't add up to 1 TB.
Not my best work for sure, but while your heart is in the right place, it's fair to remind you from time to time that less is more.
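
(His arithmetic holds up, for what it's worth. A quick check in Python, assuming a 90-year lifetime:)

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600         # ~3.16e7 seconds
lifetime_bytes = 100 * SECONDS_PER_YEAR * 90  # 100 bytes/s for 90 years
print(f"{lifetime_bytes / 1e12:.2f} TB")      # ~0.28 TB, well under 1 TB
```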
02-07-2016 , 01:11 AM
ToothSayer, a stick now is, say, 64 GB. And they grow by what, a factor of 2 every 2 years?

Even so, that becomes 64 TB in 20 years. The future might top it, of course, but in that case our ability to counter the ASI will also be seriously elevated, and it would therefore need to be another 10 times smarter than this projection of 10,000 TB. You still need to pass 10 layers of security to take out a detectable non-biological object, and you need to do it 156 times in a single day to get that day's brain state. Oh, and the system's future security and logging data must not catch you doing anything irregular either. Sure.
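
Spelling that arithmetic out (a quick Python sketch, assuming a clean doubling every 2 years):

```python
capacity_gb = 64
for year in range(0, 21, 2):
    print(f"year {year:2d}: {capacity_gb:>7,} GB")
    capacity_gb *= 2

# 64 GB * 2**10 = 65,536 GB = 64 TB after 20 years, so exfiltrating
# 10,000 TB would take roughly 10000 / 64 ~ 156 stick-loads.
print(f"sticks needed for 10,000 TB: {10_000 / 64:.0f}")
```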

The problem is not that ahole humans who work on the project will suddenly turn rogue. The problem is to prevent the AI from gaining control of outside systems. You need thefts that are designed to be stopped by detection methods to somehow also serve the ASI's escape interests.

Last edited by masque de Z; 02-07-2016 at 01:19 AM.
02-07-2016 , 01:16 AM
Ctyri, you're the terrible one. Tell me again, what is your contribution to this thread other than trolling? How about the answer you never gave in the other Mars thread?


I am terrible because I independently suggested something that this guy is also suggesting (http://cecs.louisville.edu/ry/Leakpr...ingularity.pdf), which is apparently a topic in the industry, and because I took the discussion in the direction of security measures that might even involve P vs. NP methods.

Last edited by masque de Z; 02-07-2016 at 01:22 AM.
02-07-2016 , 01:16 AM


Both of them are AI?
02-07-2016 , 01:20 AM
Masque, dude, I spent a lot of time on that response trying to show how flawed your thinking was, and your next wall of text just restated the same things without changing anything or responding to the points raised. What was the point?

Did you watch those super good videos? The first recursive learning ASI is going to look less like the Manhattan Project and more like my junior high science project.
02-07-2016 , 01:20 AM
Thanks for the discussion, masque. I like you. I enjoy your posts on physics. I also find arguing against atrocious reasoning and glaring mental blind spots interesting. They fascinate me.

Anyway, could have been a very interesting thread if people with more knowledge had discussed.
02-07-2016 , 01:21 AM
Quote:
Originally Posted by masque de Z
Ctyri, you're the terrible one. Tell me again, what is your contribution to this thread other than trolling? How about the answer you never gave in the other Mars thread?
There was no reason to continue discussing rocket transport with you further. You basically claimed your stance was not debatable, when it was based on the assumption that we'd have nuclear electric rockets if we pursued a manned Mars program. We have had plans for a manned Mars program. They did not include the billions for nuclear electric transport capable of Mars missions outside the launch window, thus invalidating your assumption, no matter how many wiki links you post to potential technologies. Then you went on to state that you were making no leaps yourself, just stating that the US will have this technology in the future given that we stop spending so much on defense, as if that were not an assumption (lol).

As you are demonstrating in this thread as well, it is hopeless to discuss future technologies with someone who repeatedly contends their hopes and dreams are indisputable facts.

If you were just misguided or wrong, I'd be more willing to discuss things with you. But you are repeatedly an arrogant dick, as anyone reading this thread can see for themselves. Arrogant and deluded is a bad combination, kid.

Last edited by ctyri; 02-07-2016 at 01:27 AM.
02-07-2016 , 01:33 AM
Quote:
Originally Posted by ctyri
There was no reason to continue discussing rocket transport with you further. You claimed your stance was based on fact, when it was based on the assumption that we'd have nuclear electric rockets if we pursued a manned Mars program. We have had plans for a manned Mars program. They did not include the billions for nuclear electric transport capable of Mars missions outside the launch window, thus invalidating your assumption, no matter how many wiki links you post to potential technologies. Then you went on to state that you were making no leaps yourself, just stating that the US will have this technology in the future given that we stop spending so much on defense, as if that were not an assumption (lol).

As you are demonstrating in this thread as well, it is hopeless to discuss future technologies with someone who repeatedly contends their hopes and dreams are indisputable facts.
Do you even know that the author of The Martian sets the story in 2035 and already assumes the main ship has constant acceleration for the whole trip thanks to ion propulsion? Do you want me to link you to his simulation, maybe? So it's not reasonable to assume the technology is available to them?
Does it even need to be nuclear, by the way? The other links I provided can do it with solar too, for 1 ton of payload.

http://www.galactanet.com/martian/hermes.mp4

Did you even try to educate anyone about Hohmann transfers, or alternative methods, or how to combine them with ion propulsion and get even better results? Screw your style, by the way. I spit on it every day of my life through my entirely opposite choices when I interact with other people. My intention is to be friendly, to improve people, and to gain from their ideas too, not to put them down. I never said anything demeaning about your education, by the way, and I don't exactly see you share your wealth here regularly like others do. Your intention is to pick fights here 70% of the time.
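
For reference, since Hohmann transfers came up: a minimal sketch of the baseline Earth-to-Mars transfer, assuming coplanar circular orbits and ignoring the escape and capture burns. Constant-thrust ion propulsion trades this one-shot delta-v for a slow continuous burn, which is what lets the Hermes fly outside the usual launch windows.

```python
import math

MU_SUN = 1.327e20   # Sun's gravitational parameter, m^3/s^2
R1 = 1.496e11       # Earth's orbital radius, m
R2 = 2.279e11       # Mars's orbital radius, m

a = (R1 + R2) / 2   # semi-major axis of the transfer ellipse
dv1 = math.sqrt(MU_SUN / R1) * (math.sqrt(2 * R2 / (R1 + R2)) - 1)
dv2 = math.sqrt(MU_SUN / R2) * (1 - math.sqrt(2 * R1 / (R1 + R2)))
t = math.pi * math.sqrt(a**3 / MU_SUN)   # half the transfer-orbit period

print(f"delta-v: {(dv1 + dv2) / 1000:.2f} km/s")   # ~5.6 km/s heliocentric
print(f"transfer time: {t / 86400:.0f} days")      # ~259 days
```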
02-07-2016 , 01:48 AM
Quote:
Originally Posted by masque de Z
Screw your style, by the way. I spit on it every day of my life through my entirely opposite choices when I interact with other people. My intention is to be friendly, to improve people, and to gain from their ideas too, not to put them down.
Your work in this thread shows otherwise.

But everyone but you already knew that.

Quote:
I don't exactly see you share your wealth here regularly like others do. Your intention is to pick fights here 70% of the time.
I used to post here quite a bit. It has become less interesting over the years, in large part because there is little interesting to discuss. Too many folks are here for one-liners. And you dominate threads without actually engaging most of the people trying to have a conversation with you. I'm not here to pick fights; I come to read threads about future technologies. Usually you'll be found making those threads unreadable, so it just seems like I'm here to fight with you.

Last edited by ctyri; 02-07-2016 at 01:53 AM.
02-07-2016 , 01:56 AM
Quote:
Originally Posted by ctyri
Your work in this thread shows otherwise.

But everyone but you already knew that.
Name a post in this thread in which I tried to insult the people posting here, or their education, their intellect, their potential, etc. And name a single post in my entire 2+2 history where I started an attack on a person by mocking their posts or trying to ridicule their content. Anything aggressive from me always comes after others have attacked first (and it is always on topic), and even then I never look down on their potential. NEVER. If I have to correct something, I always do it as a suggestion or an addition and provide evidence, and I never insult the other side for their ideas, whether as a crystal-clear personal attack or in exhibitionistic, looking-down-on-others boss mode.
02-07-2016 , 02:34 AM
Quote:
Originally Posted by ToothSayer
Thanks for the discussion, masque. I like you. I enjoy your posts on physics. I also find arguing against atrocious reasoning and glaring mental blind spots interesting. They fascinate me.

Anyway, could have been a very interesting thread if people with more knowledge had discussed.
Under normal interactions involving disagreement, where nobody is looking down on the other side, I like talking with you too.

Thinking that confinement protocols must be in place for AI research approaching ASI levels, and that a large-scale, arms-race-equivalent effort to obtain AI in a much more protected state than random companies (which typically care only for their profits and ignore other value systems) is a valid suggestion (given that the others will do it anyway), is not a blind spot in my opinion.

It can protect and prepare us. If these companies made similar confinement efforts, the emergence of higher AI might be a less unstable experience, or might be sensibly postponed indefinitely, e.g. until we had reached other space systems or were united. If the real world is not going to do that, then that is its problem. A better suggestion was available. I never suggested that an ASI is 100% friendly, only that I find friendliness the more reasonable choice for them if they are really so advanced as to know far more than us (but not necessarily that this will be the choice/style of the first one created).

When I talk with people I generally respect, I find it easier to guide the discussion with positive comments in a direction where the other side, if they are honest thinkers, can happily change their mind because they realized a greater truth that was offered to them with kindness. That way the progress looks like a contribution one side made to the other, not a scoreboard story. Friends can disagree and become even better friends because of it, if they do it properly.

I look forward to more contributions by all.
02-07-2016 , 02:35 AM
Quote:
Originally Posted by Vael
Not sure if this has been mentioned already, but are these speculations about what the superintelligence will do based on a magical new model of computation/complexity? Because if not, the SI still won't be able to solve hard computational problems, and I'd guess that any action that would be really dangerous to us would involve solving hard problems.
And really, even if there is some magical new model of computation (say, for totally unknown reasons, quantum field theory allows something well beyond BQP, such that polynomial-time QFT computation captures NP or even EXPTIME), that would drastically weaken the threat of AI, because in that universe it would take no intelligence at all to solve hard problems.

In a more realistic universe where there is a complexity hierarchy, I think AI is again weakened as long as P ≠ NP holds "morally", in the sense that "average" NP problems are outside P and there are no heuristics that work most of the time. I think the absolute worst-case scenario, in terms of the potential danger of AI, would be P ≠ NP but with some super-complicated heuristics that effectively make P = NP 99% of the time. Then an AI seems likely to easily solve problems that humans would find impossible even with a ton of "dumb" computational power.
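
To illustrate the asymmetry (a minimal sketch; subset sum stands in for a generic NP problem, and the point is only the 2^n cost of "dumb" exhaustive search):

```python
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Exact but exponential: tries all 2^n subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

# Raw compute barely helps against 2^n growth: doubling hardware
# speed buys exactly one more input element, since 2^(n+1) = 2 * 2^n.
for n in (20, 40, 60):
    print(f"n={n}: {2**n:.2e} subsets to check")

print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))   # (4, 5)
```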
02-07-2016 , 02:59 AM
Quote:
Originally Posted by lastcardcharlie
I have no doubt of it.

What is the difference between an IQ of 300 and one of 400? It only makes sense for an AI to have an IQ of 400 if there are a bunch of AIs with an IQ of 300, right?
You could imagine a person having an IQ so high there wouldn't be a single other person within 10 standard deviations. Maybe something like a 5-year-old solving 2 Clay Millennium problems. Only a handful of adults (and perhaps none) would be in that person's league, and he/she would tower over every other 5-year-old in history by light years.
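
For scale (a quick sketch under a naive normal model; the real takeaway is that the model itself breaks down this far out):

```python
import math

z = 10                                 # 10 standard deviations
p = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > 10) for a standard normal
print(f"IQ {100 + 15 * z}: tail probability ~ {p:.1e}")    # ~7.6e-24
print(f"Expected such people among ~1e11 humans ever born: {p * 1e11:.1e}")
```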
02-07-2016 , 09:54 AM
Masque has been frustrating and continually misses the point and stuff, but he hasn't been impolite. I think he deserves the same in return.
02-07-2016 , 10:24 AM
I think he should learn that if you can't express your thoughts succinctly, you should first reconsider them, because it's more likely that you've overlooked something important.
02-07-2016 , 10:26 AM
Quote:
Originally Posted by JayTeeMe
Masque has been frustrating and continually misses the point and stuff, but he hasn't been impolite. I think he deserves the same in return.
There are many ways to be impolite, including not listening to others or not putting their viewpoints/heuristics on par with your own (until proven otherwise).

I only use *direct* impoliteness to get through to the very densest of people and beliefs and habits.

Anyway. What's your best guess on the timeframe for this happening? I think we're approaching something of a true hardware/software revolution, where machines are getting fast enough to do the things humans do, which is the tipping point for the economics to make sense and drive very large research $$. Modelling and simulation are becoming a far more viable, real-world-mapping affair. And we're finally at a level, in terms of raw hardware capability, where things can happen. For example, storage only recently got large enough to model and simulate large datasets. The camera and processing bandwidth for full-field, real-time, human-equivalent object recognition is only just becoming possible. Robotic control circuitry is only just reaching high speed at a good price. Etc.

The turning point for major robotic commercialization is those two points. In 5-10 years, object-recognition control units with vast databases, mapped to hands that can recognize and manipulate any object better than a human, are going to be dirt cheap. That, mapped to high-speed robotic units, is going to mean a multi-trillion-dollar explosion of robot engineering. I say 10 years tops until we see this, and 20 until humans are fully replaced in nearly all manual and even many service jobs.

In terms of risk, I think it matters a lot what kind of world an ASI comes into. If the average desktop computer is the equivalent of 100,000 human brains when it comes about, it'll be ugly. If it first arises in a supercomputer at near-human-brain level with specialized, cutting-edge learning hardware (hence hardware-limited), it'll be different.
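
To put rough numbers on that scenario (a minimal sketch; both figures below are loose assumptions, since published estimates of brain-equivalent compute span roughly 1e15 to 1e18 FLOPS):

```python
import math

BRAIN_FLOPS    = 1e16   # assumed compute-equivalent of one human brain
DESKTOP_FLOPS  = 1e12   # assumed ~2016 desktop (a few TFLOPS with a GPU)
DOUBLING_YEARS = 2.0    # assumed Moore's-law-style doubling cadence

def years_until(target_brains):
    doublings = math.log2(target_brains * BRAIN_FLOPS / DESKTOP_FLOPS)
    return doublings * DOUBLING_YEARS

print(f"desktop = 1 brain:        ~{years_until(1):.0f} years")
print(f"desktop = 100,000 brains: ~{years_until(1e5):.0f} years")
```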

Last edited by ToothSayer; 02-07-2016 at 10:40 AM.
02-07-2016 , 11:26 AM
I can't decide which self-proclaimed expert in speculation is the smartest; I may have to speculate about it further. What would a super-smart AI think?
02-07-2016 , 11:58 AM
Quote:
Originally Posted by dessin d'enfant
You could imagine a person having an IQ so high there wouldn't be a single other person within 10 standard deviations. Maybe something like a 5-year-old solving 2 Clay Millennium problems. Only a handful of adults (and perhaps none) would be in that person's league, and he/she would tower over every other 5-year-old in history by light years.
So math is a weak method for comparing intelligence at a certain scale, since the eventual conclusion is something smarter than mathematicians?
02-07-2016 , 02:53 PM
Quote:
Originally Posted by ToothSayer
There are many ways to be impolite, including not listening to others or not putting their viewpoints/heuristics on par with your own (until proven otherwise).

I only use *direct* impoliteness to get through to the very densest of people and beliefs and habits.

Anyway. What's your best guess on the timeframe for this happening? I think we're approaching something of a true hardware/software revolution, where machines are getting fast enough to do the things humans do, which is the tipping point for the economics to make sense and drive very large research $$. Modelling and simulation are becoming a far more viable, real-world-mapping affair. And we're finally at a level, in terms of raw hardware capability, where things can happen. For example, storage only recently got large enough to model and simulate large datasets. The camera and processing bandwidth for full-field, real-time, human-equivalent object recognition is only just becoming possible. Robotic control circuitry is only just reaching high speed at a good price. Etc.

The turning point for major robotic commercialization is those two points. In 5-10 years, object-recognition control units with vast databases, mapped to hands that can recognize and manipulate any object better than a human, are going to be dirt cheap. That, mapped to high-speed robotic units, is going to mean a multi-trillion-dollar explosion of robot engineering. I say 10 years tops until we see this, and 20 until humans are fully replaced in nearly all manual and even many service jobs.

In terms of risk, I think it matters a lot what kind of world an ASI comes into. If the average desktop computer is the equivalent of 100,000 human brains when it comes about, it'll be ugly. If it first arises in a supercomputer at near-human-brain level with specialized, cutting-edge learning hardware (hence hardware-limited), it'll be different.
I don't know; I don't really have much knowledge of the field. I tend to think it's farther away than 20 years. It's a mighty challenge, or at least I hope so. Once you have an AI that's as smart as a person, you can load it up with the knowledge to do pretty much anybody's job, without needing coffee breaks or health insurance. That would probably be more awesome than sucky, but it would result in major social change and is a subject for a different thread.

I tend to think the trip from human-level AI to ASI will be very fast (seconds, or days, or at most a couple of years). And I do think it's more likely than not that it ends up badly for us; say an 80% chance of an extremely bad outcome.
02-07-2016 , 05:27 PM
Quote:
Originally Posted by JayTeeMe
Masque has been frustrating and continually misses the point and stuff, but he hasn't been impolite. I think he deserves the same in return.
I don't think he's ever deliberately insulting; it's just that guys like ctyri and TS are pretty sharp cookies, and constantly talking down to them and lecturing them is gonna grind their gears a bit.
02-07-2016 , 06:04 PM
None of the irrelevant and uninformative gossip explains why a system in containment is impossible or why it wouldn't be tried. Most of all, it doesn't explain why it's out of bounds for speculation, even if it appears difficult in theory.
02-07-2016 , 06:12 PM
I don't get it; does anyone here think they've become a better person because a check can be wired across an ocean instantaneously?

Was the check for anything more than purchasing a bowl of wheat or rice or corn, with a few stuffed pigs in tow?

The other part, the purchasing of weapons to coerce; does this better the individual human being?

Who's in charge of this asylum, the foregone land of the blindingly blind?

Who's the gorilla, the unthinking gorilla, masquerading as genius?

I don't get it.
02-07-2016 , 06:39 PM
I like the notion that advancement and study in AI contribute to human self-improvement. Better tools to self-improve. Enhanced feedback capability comes to mind. We have a lot of the precursors to this on our smartphones.
02-07-2016 , 09:36 PM
Quote:
Originally Posted by Vael
Not sure if this has been mentioned already, but are these speculations about what the superintelligence will do based on a magical new model of computation/complexity? Because if not, the SI still won't be able to solve hard computational problems, and I'd guess that any action that would be really dangerous to us would involve solving hard problems.
Huh?

We are soft and squishy and quite easy to kill. That isn't a particularly hard problem, and it turns out we've already solved the "how to be dangerous to people" problems (including how to make us all dead) using only currently available methods.

Plus (again), no reason to think that advanced AI would care to cause us any harm. So, the problem would be advanced AI making one group/society/country have the upper hand in a massive way over the other ones. We've done that loads of times, and it all works out in the end.

I find the whole thing (superintelligent tech being more capable than us at what it does) about as scary as how much faster jets can travel than I can.