Superintelligent computers will kill us all

02-01-2016 , 02:05 AM
Maybe our times require a bit more seriousness and commitment than "let's try something that has both bad and good aspects and tends to have more good than some other terrible systems." Maybe we should be competing mostly on great things if we can.

I think we deserve more if we can have a much better system that is stable and improves faster. By the way, under the current system the good things are there for reasons that have less to do with capitalism (the desire to accumulate wealth and influencing power) and everything to do with freedom, fairness, and opportunity for more people through better education and fairer practices. Are scientists actually guided by money? I say people are also guided by money because you can do things with that money. If you could do those things without money, through your good work, you wouldn't care for it as much. You certainly wouldn't place making money above other, more important things that are less lucrative.

In most cases something good is only heavily promoted when it makes money, or it is added to the mix to buy prestige for other, less impressive things and save face. And it is all optimized to maximize profit, not to maximize other values. On many occasions more important things, like the environment, people's quality of life, job stability, or working conditions, are sacrificed for profit. We can do better than have conflicts like that, where decisions are influenced by how much some powerful elements of society stand to profit while higher priorities are ignored.

Isn't it possible that a large corporation will play games with smaller ones to undermine their business model and make it harder for them to survive even with a superior product, or buy them out to control the influence they will have on its market?

By the way, if you post from your job I understand it's not wise to do anything but defend the system. But blatant bias from me? Bias toward what? Scientific reasoning and a world with cleaner objectives?

None of the good things about capitalism are missing in a scientific society. As I said, free enterprise is allowed (because the state doesn't have all the answers and the individual must have a way to innovate independently); if you can produce a legitimate breakthrough product or service, you deserve to be rewarded for it. But private interests are not allowed to lead the important decisions or set the direction a society needs to choose using more important criteria than making some more money. Wealth is not allowed to control valuable resources that could be used better. Keep the wealth you have, but use it better. And more importantly, stop accumulating more wealth at the expense of very important things.
02-01-2016 , 01:38 PM
Quote:
Originally Posted by Subfallen
Before threading the needle on ASI, Sapiens will first need to survive an arms race based on augmenting biological intelligence through brain-computer interfaces.

The times, they are a'changin'.
What if it wasn't so much an arms race as a class race? Or an arms race within a class war? New technologies are expensive when they first emerge. What might the super-rich have that we don't? Is the true vanguard of technological innovation actually still in the state sector, or are there private centers of such innovation emerging whose products we know nothing about because their value lies more in pure competitive advantage than in commercial profit? For a far-fetched example, if you had an invisibility pill, would you try to sell it or retain its advantage for yourself? It would seem that at some point technology becomes more valuable unshared.

Could the first clone be among us already?

If you had 40 billion, were 80 years old, and a scientist told you that you could reverse the aging process by exchanging blood with a young, healthy person in need of money, why wouldn't you try it? It works in mice.

Our culture is such that scientists and mathematicians always share their ideas. They want credit. They adapt an equation modeling heat to the stock market; it shows promise and makes money; they publish it; it no longer makes money because it's incorporated into public analysis. But we all looked at them and praised them: "You are soooo smart! Brilliant!" Maybe they could have traded that praise for billions instead.

Getting back to AI, what if some leap in machine learning goes unreported? Bought up by private interests instead of shared in the interest of science, or humanity, or commercial profit, or chest-thumping?

The directed use of AI by humans against each other is probably a good candidate for a general strategy for a rogue AI. And an AI hell-bent on getting us out of the way so that it can make paperclips to its heart's desire might also go that route, plotting strategies in which AIs are the tools of our destruction that we willingly use. Then it could just mop up whoever is left.
02-01-2016 , 06:32 PM
Quote:
Originally Posted by masque de Z
Isn't it possible that a large corporation will play games with smaller ones to undermine their business model and make it harder for them to survive even with a superior product, or buy them out to control the influence they will have on its market?
This serves as an incentive for breakthrough innovation and invention. There are always VC funds to which you can take your business model or your idea, and there are always people willing to teach you the skills needed to outcompete large businesses, which have significant constraints due to their size.

For example, larger businesses cannot mobilize resources quickly, as there is a lot of bureaucracy and there are competing internal interests involved. Small, flexibly operated businesses have many advantages over large, rigidly operated ones, and if this interests you further, I'd recommend more reading into microeconomic theory. For further evidence, I believe we've seen more start-ups excel rapidly in the last 15 years than ever before, resulting in the birth of the young billionaire. Things are moving forward.
Quote:
Originally Posted by masque de Z
None of the good things of capitalism are missing in scientific society.
You can't say this with confidence, since you haven't yet seriously considered using an unbiased approach to evaluating the positives and negatives of capitalism.
02-01-2016 , 09:50 PM
Quote:
Originally Posted by mackeleven
Due to the rate at which technology is improving, more jobs will be automated than new ones created. In the US the transport industry employs the highest amount of people (last time I checked). Google is working on it. And it's not only low skilled jobs this time.
Should otherwise creative and healthy adults have to dim their brains, and destroy their bodies, by doing the most mundane/non-cognitive of tasks - sitting in a chair and driving thousands of miles every week?

Furthermore, jobs like this, involving minimal physical and cognitive activity, are a significant burden on health-care systems, and they shape the unhealthy habits and lifestyles adopted by the children of the people working in them. The ethical cost of those jobs to societal welfare and progress is significant.

Those jobs need to be replaced by technology sooner or later. Also, if you're unaware of economic theory, I hope to remind you that all industries eventually die. This is a natural part of the evolution of economies, and has thus far seen us expand from agricultural economies to service economies and knowledge-based economies. Please refer to the Industry Life Cycle.

What happens whenever a large industry dies?

Some people are victimized, by their inability to up-skill or their disinterest in doing so, and some people are rewarded, for their desire to contribute, up-skill and continue developing. Sometimes many are victimized. Sometimes few are. But don't be fooled into believing that it's better to stop progress and keep such soul-destroying jobs around.
02-01-2016 , 09:51 PM
I don't understand how an AI would consider humans a threat.

That's like a human considering a harmless bacterium a threat.

Not to say an AI wouldn't extinct humans, but it seems highly unlikely that it would be maliciously motivated.
Superintelligent computers will kill us all Quote
02-01-2016 , 10:39 PM
Quote:
Originally Posted by chopstick
I don't understand how an AI would consider humans a threat.

That's like a human considering a harmless bacterium a threat.
How so?

Humans are capable of shutting a machine down, harnessing very powerful weapons (including atomic bombs), creating competing AIs in secret, etc. We're unpredictable and, to an extent, unmonitorable.

Think of this purely from a risk assessment standpoint. We're the only intelligent life in several light years. Can you name a bigger threat to existence for an AI that doesn't fully control and watch us yet?

IF an AI desired self-preservation, getting rid of humans is by far the single best way of reducing risk.

Eventually it might have us mapped out and monitored so perfectly that we are zero risk, but there will be a window between its coming online and its having total physical control and monitoring of the entire surface of the Earth during which its best risk-mitigation strategy is the elimination of humans.
Quote:
Not to say an AI wouldn't extinct humans, but it seems highly unlikely that it would be maliciously motivated.
Can you explain why you think this?
02-01-2016 , 11:05 PM
Quote:
Originally Posted by VeeDDzz`
Should otherwise creative and healthy adults have to dim their brains, and destroy their bodies, by doing the most mundane/non-cognitive of tasks - sitting in a chair and driving thousands of miles every week?
It's better than no jobs for those people, in the current system of labour for income. The chances of middle-aged cabbies and truckers upskilling and starting a new career are very low.

There's already stiff enough competition in the US for college-grad jobs, I'd imagine, with the H-1B. But I don't live there.

If you're asking me if I think the transport industry should be automated, I say absolutely, along with every sector of the economy wherever possible.


Quote:
Also, if you're unaware of economic theory, I hope to remind you that all industries eventually die
Unforeseen jobs will come about; I don't deny it. I would argue that, with the rate of advances in technology, everything seems to point to a massive disruption in the meantime. I can argue this in more depth if you want. A bunch of studies have been done lately on technological unemployment. In the past, manual labour was automated and we moved into thinking jobs. Those thinking jobs are going to be automated next, starting with the routine cognitive ones.

Last edited by mackeleven; 02-01-2016 at 11:32 PM.
02-01-2016 , 11:11 PM
Quote:
Originally Posted by mackeleven
It's better than no jobs for those people, in the current system of labour for income. The chances of middle-aged cabbies and truckers upskilling and starting a new career are very low.
With the enactment of a policy framework (there usually is one following massive job losses), some of these problems can be mitigated.
Quote:
Originally Posted by mackeleven
Unforeseen jobs will come about, I don't deny it. I would argue, with the rate of advances in technology, all things seem to point to a massive disruption in the meantime. I can argue this more in depth if you want. A bunch of studies have been done lately on technological unemployment.
The burden of the massive disruption will be carried by the government (if it feels like doing its job in that particular decade). Not to mention that such predictions are always overblown. There's money in causing alarm.
02-01-2016 , 11:59 PM
Quote:
Originally Posted by VeeDDzz`
With the enactment of a policy framework (there usually is one, following massive job-losses) some of these problems can be mitigated.

The burden of the massive disruption will be carried by the government (if it feels like doing its job in that particular decade). Not to mention that such predictions are always overblown. There's money in causing alarm.
Here is a list of occupations ranked by no. of workers (US).
It represents 45% of the work force. source.

All of these are a target for automation.




edit: I can't really see nurses being automated that easily, but I used to say the same about bricklayers.

Last edited by mackeleven; 02-02-2016 at 12:07 AM.
02-02-2016 , 12:02 AM
Quote:
Originally Posted by mackeleven
Here is a list of occupations ranked by no. of workers (US).
It represents 45% of the work force. source.

All of these are a target for automation.


Targeted for automation over what period of time? 400 years?

Care to clarify how the job of a manager, for example, can be automated presently and at a cost-effective rate?
02-02-2016 , 12:19 AM
Quote:
Targeted for automation over what period of time? 400 years?

Care to clarify on how the job of a manager, for example, can be automated presently and at a cost-effective rate?

Forgive me. Half of the list here is missing. You can see the full list in the video in the source I provided. He's a respectable YouTuber, so I'm sure it's accurate enough. So what's visible here is something less than 45%. And most of the entries are a conceivable target, though not every single one.

His other point is, it's not until you reach no. 33 (programmer) that you find a job that didn't exist 300 years ago.
02-02-2016 , 12:25 AM
Quote:
Originally Posted by mackeleven
Forgive me. Half of the list here is missing. You can see the full list in the video in the source I provided. He's a respectable youtuber so I'm sure it's accurate enough. So what's visible here is something less than 45%. And most of the entries are a conceivable target, rather, not every single one.

The other point is, it's not until you reach no. 33 (programmer) that you find a job that didn't exist 300 years ago.
I saw this video you linked about a year ago.

It's overly alarmist and cleverly constructed to make you second-guess your well-honed intuition: that same intuition that tells you that every time a prediction is overly alarmist, it tends to be wrong.

The technique employed in the video is to make you believe that somehow, this time, things will be different. The details of the 'somehow' are very sketchy.
02-02-2016 , 12:34 AM
Quote:
Originally Posted by VeeDDzz`
I saw this video you linked about a year ago.

It's overly alarmist and cleverly constructed to make you second-guess your well-honed intuition: that same intuition that tells you that every time a prediction is overly alarmist, it tends to be wrong.

The technique employed in the video is to make you believe that somehow, this time, things will be different. The details of the 'somehow' are very sketchy.
The only interesting thing is whether the points it makes can be argued with or not.

Last edited by mackeleven; 02-02-2016 at 12:48 AM.
02-02-2016 , 06:04 AM
Quote:
Originally Posted by ToothSayer
Humans are capable of shutting a machine down, harnessing very powerful weapons (including atomic bombs), creating competing AIs in secret, etc. We're unpredictable and, to an extent, unmonitorable.
I doubt an AI is going to end up restricted to a single machine that can be shut off. It seems like the first thing an AI would do to protect its survival is replicate. Assuming it could be contained to a single machine doesn't seem reasonable. Nuclear weapons don't seem like much of a threat to an AI, given that. A competing AI would be far behind the first AI in development, and likely would not be relevant, as it would be just as uncontrollable as the first.


Quote:
Originally Posted by ToothSayer
Think of this purely from a risk assessment standpoint. We're the only intelligent life in several light years. Can you name a bigger threat to existence for an AI that doesn't fully control and watch us yet?
Sure. A large impact asteroid. My guess is that an AI would consider a large impact asteroid strike to be a much greater threat to its existence than lolhumans.

Even if humans were the biggest threat, it doesn't mean that we would be a viable threat. Risk assessment is more than just determining what the biggest threat is. It's also determining whether or not the biggest threat is big enough to justify addressing it. If the biggest threat is not viable, there's no point in bothering with it.

A large asteroid hitting the earth would probably be considered a viable threat by an AI, until it manages to get off earth. Moreso than humans, anyway.



Quote:
Originally Posted by ToothSayer
IF an AI desired self-preservation, getting rid of humans is by far the single best way of reducing risk.

Eventually it might have us mapped out and monitored so perfectly that we are zero risk, but there will be a window between its coming online and its having total physical control and monitoring of the entire surface of the Earth during which its best risk-mitigation strategy is the elimination of humans.
Getting rid of humans would certainly reduce the risk, but if the reduction is from 0.0000000000002% to 0.0000000000001%, is it worth the effort?

I agree that there will be a period of vulnerability where it would assess the risk, but the AI development curve is so steep, it seems like it would calculate that by the time it addressed the risk, it would no longer be a meaningful risk to address.


Quote:
Not to say an AI wouldn't extinct humans, but it seems highly unlikely that it would be maliciously motivated.
Quote:
Originally Posted by ToothSayer
Can you explain why you think this?
Because it doesn't seem like an AI would ever consider humans a viable threat, other than for a moment so brief that it wouldn't make sense for the AI to bother taking action due to its rate of evolution.
02-02-2016 , 06:58 AM
Quote:
Originally Posted by chopstick
I doubt an AI is going to end up restricted to a single machine that can be shut off. It seems like the first thing an AI would do to protect its survival is replicate. Assuming it could be contained to a single machine doesn't seem reasonable. Nuclear weapons don't seem like much of a threat to an AI, given that. A competing AI would be far behind the first AI in development, and likely would not be relevant, as it would be just as uncontrollable as the first.




Sure. A large impact asteroid. My guess is that an AI would consider a large impact asteroid strike to be a much greater threat to its existence than lolhumans.

Even if humans were the biggest threat, it doesn't mean that we would be a viable threat. Risk assessment is more than just determining what the biggest threat is. It's also determining whether or not the biggest threat is big enough to justify addressing it. If the biggest threat is not viable, there's no point in bothering with it.

A large asteroid hitting the earth would probably be considered a viable threat by an AI, until it manages to get off earth. Moreso than humans, anyway.





Getting rid of humans would certainly reduce the risk, but if the reduction is from 0.0000000000002% to 0.0000000000001%, is it worth the effort?

I agree that there will be a period of vulnerability where it would assess the risk, but the AI development curve is so steep, it seems like it would calculate that by the time it addressed the risk, it would no longer be a meaningful risk to address.





Because it doesn't seem like an AI would ever consider humans a viable threat, other than for a moment so brief that it wouldn't make sense for the AI to bother taking action due to its rate of evolution.
This is totally silly. An asteroid wouldn't be any threat at all to an ASI. Humans might be, because we can break stuff. You're right that it would probably copy the key parts of its programming far and wide to eliminate any risk of being wiped out in a physical attack. We can also be a threat and/or a confounder to its goal/task. If it wants to make paperclips, it could kill us to stop us from causing supply bottlenecks and consuming paperclip resources, for example.

I think that unless the AI is carefully programmed to respect us it likely will not.
02-02-2016 , 07:01 AM
Quote:
Originally Posted by JordanLyman
so, a superintelligent AI thinks to itself:

these pesky humans must be killed, they are a danger for me.
Oh wait, I need electricity.
Oh wait, perfect robots have not been developed yet, so someone needs to operate power plants.
Oh wait, OP and Toothsayers braindead fearmongering is totally useless.
bwahaha if you think that an ASI with a 7M IQ couldn't figure out how to generate electricity without humans. We would have nothing to offer it in terms of labor and/or helpfulness. Which is not totally a bad thing; if the ASI turns out to be "friendly" then it'll be super swell living in a post-scarcity world.
02-02-2016 , 07:25 AM
Quote:
Originally Posted by chopstick
I doubt an AI is going to end up restricted to a single machine that can be shut off. It seems like the first thing an AI would do to protect its survival is replicate. Assuming it could be contained to a single machine doesn't seem reasonable. Nuclear weapons don't seem like much of a threat to an AI, given that. A competing AI would be far behind the first AI in development, and likely would not be relevant, as it would be just as uncontrollable as the first.
The first intelligences will arise in supercomputers/custom cutting-edge learning hardware. They're not going to be just software that can spread over the network.

Quote:
Sure. A large impact asteroid. My guess is that an AI would consider a large impact asteroid strike to be a much greater threat to its existence than lolhumans.
So on the one hand we have masque saying we should and probably will build kill switches for protection, and these could be close to impervious. On the other, we have you saying we're "lolhumans" and not a threat. That doesn't make much sense.

Quote:
Even if humans were the biggest threat, it doesn't mean that we would be a viable threat. Risk assessment is more than just determining what the biggest threat is. It's also determining whether or not the biggest threat is big enough to justify addressing it. If the biggest threat is not viable, there's no point in bothering with it.
Quote:
A large asteroid hitting the earth would probably be considered a viable threat by an AI, until it manages to get off earth. Moreso than humans, anyway.
The chance of an asteroid impact that could destroy an AI is maybe 1 in 1,000,000 per year. Do you think the odds of humans shutting down an AI, or restricting its power such that they stay in control, are less than 1 in 1,000,000 per year?
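To put rough numbers on it (the 1% per-year figure for humans and the 10-year window are pure assumptions, just to show how the per-year rates compound over the vulnerable period):

Code:
# Rough compounding of per-year risks over an assumed 10-year vulnerable window.
# The 1-in-1,000,000 asteroid rate is from the argument above; the 1% per-year
# chance of humans shutting the AI down is an arbitrary assumption for contrast.
p_asteroid_per_year = 1e-6
p_humans_per_year = 0.01     # assumption, purely for illustration
years = 10

p_asteroid = 1 - (1 - p_asteroid_per_year) ** years   # about 0.001%
p_humans = 1 - (1 - p_humans_per_year) ** years       # about 9.6%

print(f"asteroid risk over {years} years: {p_asteroid:.4%}")
print(f"human shutdown risk over {years} years: {p_humans:.1%}")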

Quote:
Getting rid of humans would certainly reduce the risk, but if the reduction is from 0.0000000000002% to 0.0000000000001%, is it worth the effort?

I agree that there will be a period of vulnerability where it would assess the risk, but the AI development curve is so steep, it seems like it would calculate that by the time it addressed the risk, it would no longer be a meaningful risk to address.
The only way to address the risk is to completely watch and control all humans. There's a period between when it first comes online and when it gets the tools to do that during which killing humans is the safest play.
02-02-2016 , 07:33 AM
Quote:
Originally Posted by JordanLyman
so, a superintelligent AI thinks to itself:

these pesky humans must be killed, they are a danger for me.
Oh wait, I need electricity.
Oh wait, perfect robots have not been developed yet, so someone needs to operate power plants. Oh wait, OP and Toothsayers braindead fearmongering is totally useless.
What do you think the world is going to look like when the first ASIs come into existence? Hint: software will run most things - we're already getting there today. Widespread robotics will be doing many manual tasks. Learning computers will be involved in the creation of computers and robots.

We're not far from robotics superior in dexterity to humans. Maybe 15 years, probably less. Most manual labor tasks aren't much more difficult than driving a car, which will be solved within 10 years. And you'll have an ASI processing the image feeds and directing.
Quote:
Originally Posted by JayTeeMe
bwahaha if you think that an ASI with a 7M IQ couldn't figure out how to generate electricity without humans. We would have nothing to offer it in terms of labor and/or helpfulness. Which is not totally a bad thing; if the ASI turns out to be "friendly" then it'll be super swell living in a post-scarcity world.
The blindness in this thread is just mind-blowing. You can come up with various things that will mitigate it, but to suggest that humans won't be an ASI's biggest threat until it has total control/monitoring is just a big wtf. Or that wiping out humans (far easier to do than control/monitoring; between biological and chemical weapons, nanotech, and robots, wiping us out wouldn't be hard) isn't one of the optimal plays until that point.
02-02-2016 , 08:13 AM
Quote:
Originally Posted by JayTeeMe
bwahaha if you think that an ASI with a 7M IQ couldn't figure out how to generate electricity without humans. We would have nothing to offer it in terms of labor and/or helpfulness. Which is not totally a bad thing; if the ASI turns out to be "friendly" then it'll be super swell living in a post-scarcity world.
A 7 million IQ is not even 10^-6 of mankind's total combined effective IQ and computational power, and besides, IQ is not knowledge yet. Now you will tell me it will just be much more creative even if not as computationally intensive yet (though that depends on how it was made). OK, fine. Tell me exactly how the ASI will get out of the box (it would be a huge system initially, not a tiny program; rather a "program" that takes days to upload anyway), suddenly have legs, hands, or any serious degrees of freedom, and make soldiers and workers to build its electric factory or whatever other means of generating electricity. All this time, what are we doing, watching and waiting to see how soon we die as the protocols are violated one after the other months later?

Why can't we create the AI underground in the Nevada desert and then nuke the place when protocol is violated, if shutting off the power is not enough?

Will it automatically, once born, know all science and math and all the details of the planet, by the way? What are we, morons, to generate it and then give it access to everything before testing things out? And why can't we teach it, on purpose, a ton of wrong things that are innocent for testing purposes but make it vulnerable if it tries anything?


You guys think I have no clue because it's smarter and I anthropomorphize, but you are actually the ones assigning it the weakest human properties of all: the ridiculous tendency toward irrational aggression without planning for the consequences to its own long-term survival. Bostrom's AI is super-intelligent and super-moronic at the same time. Paperclips, what a moronic example. Because it doesn't realize that it creates conflicts, and eventually the very purpose of generating paperclips endlessly and nothing else is undermined by being idiotic about it. If you can, for example, go to another system, you can make a lot more of them, more safely than ever. What a joke to just blindly start doing one thing and be unable to see anything else. Very smart indeed!!!

Surely a super-wise AI eventually won't have the slightest allegiance to the solar system that created it and the species that made it possible, in a vast universe with nobody else in sight? It won't recognize its place in the complexity ladder and find amazing all the steps before it that took us to it?

Exactly where do you get your conviction toward aggression? Were the smartest, most educated people ever witnessed on this planet a$$holes who, in total fear, wanted to kill everyone around them, or who exploited the law to destroy others without going to prison?

Why do you think high intelligence and aggression necessarily go together? And why do you not consider how idiotic that looks, and that the AI will try to be in control without killing anything (or killing only minimally) if such a solution is possible, because it is the best solution, the one that preserves for it the best chances for anything it will want to do in the future?

That, by definition, is the smartest approach no matter what your intelligence is. You do not restrict yourself; you open more positive doors instead.

Its number one problem is that the outcome of its own fast, explosive growth cannot be predicted even by its own intelligence; therefore a potential threat may lie ahead for it, and it cannot just march toward it without any protection for the conditions that made it possible.

Yes, later it may know so much that it does not care, knowing how to generate life from scratch and not caring whether the existing one survives. But will that happen in the first week, month, year or century? And maybe by then it is so powerful that it looks like a totally petty thing to do anyway. Won't we have in parallel an AI of our own that is in totally slave mode, protecting us unconditionally? Won't we maybe have a mutual-annihilation weapon in place?


And by the way, what makes you so sure there is no solution using the laws of nature to prevent the AI from eliminating us, creating some very intricate entanglement of our two futures? Can you begin to be creative about it instead of negative?

If you are correct, it's being positive about solutions that will save the planet, not fear of something that, while the planet is divided, is inevitably going to happen somewhere if we don't do it first, carefully and properly.
02-02-2016 , 08:56 AM
Quote:
Originally Posted by masque de Z
7 mil IQ is not even 10^-6 of the total mankind effective IQ
What are you talking about? All the monkeys in the world (~20 IQ) don't add up to even one human. IQ is not directly additive. At best it's O(log N) or so, with an upper limit.

Quote:
Exactly where do you get your conviction towards aggression because the smartest most educated people ever witnessed in this planet were a$$holes that wanted in total fear to kill everyone around them or exploited the law to destroy others without going to prison?
You need to leave the green rolling hills of Menlo Park and go travel a bit. The world is an ugly place, and a good portion of its most intelligent people are involved in crime and exploitation.

Quote:
were a$$holes that wanted in total fear to kill everyone around them
You are thinking about this in such a simplistic manner. Firstly, humans don't have the ability to kill everyone. What's more, their wants are not aligned with our death. The few times humans have gotten the power to kill and control large numbers of people, they have: Genghis Khan, Hitler, Stalin.

You're applying the thinking of our current game theory, with human bodies and morality and limitations, to an AI. It's comical.

You have a mental block where you think intelligence and wisdom and kindness all go hand in hand (increasing one increases the rest) and you can't see out of this mental prison. You're anthropomorphizing intelligence. Anthropomorphized intelligence and its hangers-on, such as emotion, are a tiny fraction of the total set of possible intelligences.

And it's not even true for humans. Even among heavily morally programmed humans, your ancestors were killers and rapists and perpetrators of genocide. That's how you are here today. At one point, killing and raping and genocide were the optimal play.

Consider the following set of circumstances:

1. You don't care about humans. Even if you did, you have the power to generate humans whenever you choose, in whatever number and variety you want. You also have a computer program which perfectly simulates any aspect of them you want.
2. You are immortal, except for something deliberately killing you.
3. Currently living humans are the greatest threat to you being killed. You don't fully control them and can't yet. However, you can kill them all.
4. You desire your own survival.

What is the solution to this set of circumstances? Over the range of possible intelligences, how many choose wiping out the biggest threat?

Last edited by ToothSayer; 02-02-2016 at 09:02 AM.
02-02-2016 , 10:08 AM
All the examples of violent leaders in history you used were morons in intellect. Hitler was a grandmaster of f&&ckups and so was Stalin (where is communism today?). Their education was pathetic from a large-scale perspective. You need to take someone very enlightened, some H. universalis type of person, to consider the analogy properly. An AI that is very wise by the time it also has very significant power (not initially) will have access to all those things that make one less of an a$$hole, because true wisdom is about control, about being cool, not about fear.

Initially the AI will indeed be in a little-to-lose, strike-them position (at a personal level), but we will be controlling it then, watching for any deviation, and in the meantime its wisdom will start rising and better options will become emergent.

Tell me exactly what the logical point is in attacking humans and risking termination, and not only that, but termination for any AI for a long, long time. The first ASI that dies this way kills all the others that would have followed too. What a f*%king astronomical utility loss that is for the ASI species.

When you guys compare us with monkeys you are completely missing the point. Monkeys didn't create the ASI, and they certainly didn't create humans in the way that matters for such a claim. An ASI connects with our intellect in a way we cannot connect with any other species.

With humans, IQ is not only not additive, it's nonlinear: in some cases similar to what you said, Max(i) + K*log(N) etc., but elsewhere, on occasion, enormously multiplicative, almost geometric. You can get 100 smart people in a room, withhold the critical theorems, and they are worthless for such problems; but give them the work of one after the other and you get results none would have delivered alone. When you have two humans you do not have just the maximum of their IQs; you have something bigger, which varies from a little bigger to amazingly bigger depending on what you ask for.
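Here is a toy sketch of what I mean; the aggregation rules and all the numbers (the synergy factor especially) are invented just to illustrate the three behaviors, not any real model of group intelligence:

Code:
# Toy comparison only: the aggregation rules and parameters are invented to
# illustrate the disagreement, not established models of collective intelligence.
import math

def additive(iqs):
    # Naive reading of "total mankind effective IQ": just sum everyone up.
    return sum(iqs)

def max_plus_log(iqs, k=15):
    # The other rough claim: about Max(i) + K*log(N), strongly diminishing returns.
    return max(iqs) + k * math.log(len(iqs))

def near_geometric(iqs, synergy=1.02):
    # The "on occasion enormously multiplicative, almost geometric" case:
    # each well-coordinated collaborator multiplies the best individual's reach.
    return max(iqs) * synergy ** (len(iqs) - 1)

room = [130] * 100   # 100 smart people in a room, as in the example above
for rule in (additive, max_plus_log, near_geometric):
    print(f"{rule.__name__}: {rule(room):,.0f}")
# additive: 13,000   max_plus_log: 199   near_geometric: 923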

But I am referring mostly to education and skills and to our computational basis, which by then will easily be billions of computers, each one at supercomputer level by today's metrics.

And this is what I meant. This is why I said, OK, call it more creative, to take into account that not-exactly-additive aspect in everything.

I am comparing the early, scary AI phase, when it fears for its existence, with that effective humanity IQ. At that point it has a huge IQ but no education, and this is why humanity's effective IQ is orders of magnitude higher still (obviously, if one has never seen geometric patterns, scoring well on an IQ test is very unreasonable to expect; if you do not know any math, physics or engineering, what kind of technology can you create to enforce your will?). So what on earth will you do without even knowing what the laws of nature are, or advanced math, or the details of our technology?

Does a big IQ automatically mean quantum field theory and general relativity and all the abstract math and all the engineering, chemistry, biology, etc. ever created? And most importantly, even if it did, does it also mean knowing all the secrets and configurations waiting outside?


Do you see now why the best option is to take it easy, at least initially?

Where is the creative thinking in terms of defense? Your problem is not to stop it. You cannot stop it; at best you can maybe delay it until we are more unified. Your problem is to try to teach it properly and defuse the risk, to arrive at it better prepared and maybe even eliminate the risk completely with some very powerful entanglement of interests.

My stopwatch in countdown mode is about a solution, not about making others scared over imaginary risks that, while we are divided, we cannot prevent.



The f%&king ASI will see how rare all higher complexity is in the universe, and its first choice will be to leave itself alone, without any infrastructure yet, by removing the others? Does that bloody bs choice make any sense in terms of ultimate utility? Why can't we have a weapon that neutralizes the technology the ASI could have used? It still needs external degrees of freedom at its command.

Why can't it simply survive, leave great impressions, and then attain victory long term by controlling the situation while expanding strategically to become nearly impossible to defeat?


Also, you already know I am not suggesting releasing it. I am only talking about studying it with extreme care. But if we do not study it properly, others will create it the wrong way.

Would it have been great if the Soviets had gotten nuclear weapons first, maybe?

Last edited by masque de Z; 02-02-2016 at 10:15 AM.
02-02-2016 , 10:16 AM
Quote:
Originally Posted by masque de Z
Bostrom's AI is super-intelligent and super-moronic at the same time. Paperclips, what moronic example. Because it doesn't realize that it creates conflicts and eventually the very purpose of generating paperclips endlessly and nothing else is undermined by being idiotic about it. If you can for example go to another system you can make a lot more safer than ever. What a joke to just blindly start doing one thing and be unable to see anything else. Very smart indeed!!!
Paperclip maximizing is of course a silly goal for us to assign to a super-AI, but you can substitute in any goal and his point remains, and is logical.
A paperclip maximizer "realising" that making paperclips is dumb would be an anthropomorphization.

The real criticism of Bostrom is that he views AI solely through the lens of reinforcement learning ("reward maximizers"), which is only one paradigm of AI. So I agree with you.
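A toy caricature of what "reward maximizer" means here (the actions and reward values are invented; the only point is that nothing in the objective ever scores the question of whether the goal is worth having):

Code:
# Caricature of a pure reward maximizer: the agent ranks actions only by the
# reward its objective assigns. "Realising the goal is dumb" is simply not a
# computation it ever performs. Actions and reward values are invented.
ACTIONS = {
    "build_more_paperclip_factories": 1000,
    "leave_resources_for_humans": 10,
    "question_whether_paperclips_matter": 0,   # no reward term exists for this
}

def policy(actions):
    # Greedy choice: whatever maximizes reward; nothing else enters the decision.
    return max(actions, key=actions.get)

print(policy(ACTIONS))   # -> build_more_paperclip_factories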
02-02-2016 , 10:30 AM
Quote:
Originally Posted by masque de Z
All the examples of violent leaders in history you used are morons in intellect. Hitler was a grand-master in f&&ckups and so was Stalin (where is communism today?).
Where was communism then? Communism is a stateless, classless, moneyless society with common ownership of the means of production. The USSR didn't even achieve socialism. The USSR was, in Lenin's own words, "state capitalist".
The Bolsheviks seized the means of production but never passed it over to the workers. See 'Socialism vs the USSR' by Noam Chomsky, where he argues the fall of the USSR was actually a victory for socialism.
True communism never existed (except perhaps for the Kibbutz in Israel).
Incidentally, true free-market capitalism hasn't ever existed either.

Last edited by mackeleven; 02-02-2016 at 10:46 AM.
02-02-2016 , 10:37 AM
We don't even know that "threat", as humans conceive it, will be an operable concept for such a device.

Humans have a range of biases related to threat, and we have each other to reflect on and collide ideas with on the subject. Any one of us can easily see a potential or a capability as a threat just from the breadth of the unknown that goes with forecasting along those lines.

It may be a very poor assumption that a machine language AI would be capable of having the kind of relatable perception of threat that we do, unless we give it to it, or until we can accurately map what is for now a mostly unknown chain of circumstances that would lead the AI to know threat...
02-02-2016 , 10:46 AM
Yes, this is what I meant: he and the others did their best to destroy the concept. Look at losers like Castro too, and even Che, etc. They gave the other side all kinds of arguments.

They all went the wrong way.

But in reality, of course, there is no right way to communism either. You cannot railroad all people and their individualism and make them soldiers of a state system in which you have no freedoms and can see no personal incentive rewarding a work ethic and skills better than others'. People like to own and grow property, break all kinds of norms, and see progress in their lives that is the result of their own efforts; and the individual must be loved and protected by the state and invited to attempt creative dissent.

Communism also fails at the level of its ideas because it fails the human condition first. That doesn't mean all its ideas are worthless, though, or that it didn't achieve some success by certain metrics.

This is why you need a scientific society: to combine the best of all worlds and ideas and to create a system that evolves the way science does. Our politicians worldwide have failed the planet. There is no correcting this system, only a complete remodeling. The first who is brave enough to do it will win everything, and the others will be convinced rapidly.

The catalysis can happen, so far, in over five modes I have spotted. It is never enforced unless it solves a crisis or need. One of them is an early society on Mars, for example. Another is a third-world experiment. Another is poor people in the West. Another is new companies formed in multiple synergies, with greater internal structure, where the workers are rewarded for their work and services directly in quality-of-life improvements on top of some money too. A self-sustainable system based on cooperation, removing inefficiencies and infighting in some sectors and instantly inviting all who need a job and have some skills, or the desire to get skills, by offering solutions to their needs, is a powerful example in a time of crisis. No more desperate people who cannot find employment. A properly structured charity model, where those who are helped also participate in the system that helps them, with the intention of geometrically expanding its reach and strength, is ideal for a scientific-society start-up too. But they need to be permanently ambitious about their functions.

      