Superintelligent computers will kill us all
Maybe our times require a bit more seriousness and commitment than "let's try something that has both good and bad aspects and tends to have more good than some other terrible systems." Maybe we should be competing mostly on great things, if we can.
I think we deserve more, if we can have a better system that is stable and improves faster. By the way, under the current system the good parts are there for reasons that have little to do with capitalism (the desire to accumulate wealth and influence) and everything to do with freedom, fairness, and opportunity for more people through better education and fairer practices. Are scientists actually guided by money? I say people are also guided by money because you can do things with that money. If you could do those things without money, through your good work, you wouldn't care for it as much. You certainly wouldn't place making money above other, more important things that are less lucrative.
In most cases, something good is heavily promoted only when it makes money, or it's added to the mix to buy prestige for other, less impressive things and save face. And it's all optimized to maximize profit, not other values. On many occasions more important things, like the environment, people's quality of life, job stability, or working conditions, are sacrificed for profit. We can do better than conflicts like that, where decisions are influenced by how much some powerful elements of society stand to profit while higher priorities are ignored.
Isn't it possible that a large corporation will play games with smaller ones to undermine their business model and make it harder for them to survive even with a superior product, or buy them out to control the influence they will have on their market?
By the way, if you post from your job, I understand it's not wise to do anything but defend the system. But blatant bias from me? Bias toward what? Scientific reasoning and a world with cleaner objectives?
None of the good things about capitalism are missing in a scientific society. As I said, free enterprises are allowed (because the state doesn't have all the answers and the individual must have a way to innovate independently): if you can produce a legitimate breakthrough product or service, you deserve to be rewarded for it. But they are not allowed to lead the important decisions or set the direction a society needs to choose using more important criteria than making some more money. Wealth is not allowed to control valuable resources that can be used better. Keep the wealth you have, but use it better. And more importantly, stop accumulating more wealth at the expense of very important things.
Could the first clone be among us already?
If you had 40 billion, were 80 years old, and a scientist told you that you could reverse the aging process by exchanging blood with a young, healthy person in need of money, why wouldn't you try it? It works for mice.
Our culture is such that scientists and mathematicians always share their ideas. They want credit. They adapt an equation modeling heat to the stock market; it shows promise and makes money; they publish it; it no longer makes money because it's incorporated into public analysis. But we all looked at them and praised them: "You are soooo smart! Brilliant!" Maybe they could have traded that praise for billions instead.
Getting back to AI: what if some leap in machine learning goes unreported? Bought up by private interests instead of shared in the interest of science, humanity, commercial profit, or chest-thumping?
The directed use of AI against each other is probably a good candidate for a general strategy of rogue AI. An AI hell-bent on getting us out of the way so that it can make paper clips to its heart's desire might also go that route, plotting strategic solutions in which AIs are the tools of our destruction, which we willingly use. Then it could just mop up whoever is left.
For example, larger businesses cannot mobilize resources quickly, as there is a lot of bureaucracy and there are competing internal interests involved. Small, flexibly operated businesses have many advantages over large, rigidly operated ones; if this interests you further, I'd recommend reading more microeconomic theory. As further evidence, I believe we've seen more start-ups excel rapidly in the last 15 years than ever before, resulting in the birth of the young billionaire. Things are moving forward.
You can't say this with confidence since you haven't yet seriously considered using a non-biased approach for evaluating the positives and negatives of capitalism.
Furthermore, jobs like this, involving minimal physical and cognitive activity, are a significant burden on health-care systems, as are the unhealthy habits and lifestyles adopted by the children of people working in those jobs. The ethical cost of those jobs to societal welfare and progress is significant.
Those jobs need to be replaced by technology sooner or later. Also, if you're unaware of economic theory, I hope to remind you that all industries eventually die. This is a natural part of the evolution of economies, and has thus far seen us expand from agricultural economies to service economies and knowledge-based economies. Please refer to the Industry Life Cycle.
What happens whenever a large industry dies?
Some people are victimized by their inability or disinterest in up-skilling, and some people are rewarded for their desire to contribute, up-skill, and keep developing. Sometimes many are victimized; sometimes few are. But don't be fooled into believing that it's better to stop progress and keep such soul-destroying jobs around.
I don't understand how an AI would consider humans a threat.
That's like a human considering a harmless bacterium a threat.
That's not to say an AI wouldn't drive humans extinct, but it seems highly unlikely that it would be maliciously motivated.
Humans are capable of shutting a machine down, harnessing very powerful weapons (including atomic bombs), creating competing AI in secret, etc. We're unpredictable and, to an extent, unmonitorable.
Think of this purely from a risk assessment standpoint. We're the only intelligent life in several light years. Can you name a bigger threat to existence for an AI that doesn't fully control and watch us yet?
If an AI desired self-preservation, getting rid of humans is by far the single best way of reducing risk.
Eventually it might have us mapped out and monitored so perfectly that we pose zero risk, but there will be a time, between coming online and having total physical control and monitoring of the entire surface of the Earth, when its best risk-mitigation strategy is the elimination of humans.
There's already stiff enough competition in the US for college-grad jobs, I'd imagine, with the H-1B. But I don't live there.
If you're asking me if I think the transport industry should be automated, I say absolutely, along with every sector of the economy wherever possible.
Unforeseen jobs will come about, I don't deny it. I would argue, with the rate of advances in technology, all things seem to point to a massive disruption in the meantime. I can argue this more in depth if you want. A bunch of studies have been done lately on technological unemployment.
With the enactment of a policy framework (there usually is one, following massive job-losses) some of these problems can be mitigated.
The effect of the massive disruption will be borne by the government (if it feels like doing its job in that particular decade). Not to mention that such predictions are always overblown. There's money in causing alarm.
Here is a list of occupations ranked by no. of workers (US).
It represents 45% of the work force. source.
All of these are a target for automation.
edit: I can't really see nurses being automated that easily, but I used to say the same about brick-layers.
Care to clarify on how the job of a manager, for example, can be automated presently and at a cost-effective rate?
Targeted for automation over what period of time? 400 years?
Forgive me. Half of the list here is missing. You can see the full list in the video in the source I provided. He's a respectable youtuber, so I'm sure it's accurate enough. So what's visible here is something less than 45%. And, rather, most of the entries are a conceivable target, not every single one.
His other point is, it's not until you reach no. 33 (programmer) that you find a job that didn't exist 300 years ago.
I saw this video you linked about a year ago.
It's overly alarmist and cleverly constructed to make you second-guess your well-honed intuition, the same intuition that tells you that every time a prediction is overly alarmist, it tends to be wrong.
The technique employed in the video is to make you believe that somehow, this time, things will be different. The details of the 'somehow' are very sketchy.
Even if humans were the biggest threat, it doesn't mean that we would be a viable threat. Risk assessment is more than just determining what the biggest threat is. It's also determining whether or not the biggest threat is big enough to justify addressing it. If the biggest threat is not viable, there's no point in bothering with it.
A large asteroid hitting the earth would probably be considered a viable threat by an AI, until it manages to get off earth. More so than humans, anyway.
I agree that there will be a period of vulnerability where it would assess the risk, but the AI development curve is so steep, it seems like it would calculate that by the time it addressed the risk, it would no longer be a meaningful risk to address.
I doubt an AI is going to end up restricted to a single machine that can be shut off. It seems like the first thing an AI would do to protect its survival is replicate, so assuming it could be contained to a single machine doesn't seem reasonable. Given that, nuclear weapons don't seem like much of a threat to an AI. A competing AI would be far behind the first AI in development, and likely would not be relevant, as it would be just as uncontrollable as the first.
Sure. A large impact asteroid. My guess is that an AI would consider a large impact asteroid strike to be a much greater threat to its existence than lolhumans.
Getting rid of humans would certainly reduce the risk, but if the reduction is from 0.0000000000002% to 0.0000000000001%, is it worth the effort?
Because it doesn't seem like an AI would ever consider humans a viable threat, other than for a moment so brief that it wouldn't make sense for the AI to bother taking action due to its rate of evolution.
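The risk-assessment argument above can be sketched as a toy expected-value check. Every number here is a made-up placeholder chosen only to illustrate the shape of the argument, not an estimate of any real probability:

```python
# Toy sketch of the risk-assessment argument: eliminating a threat is only
# rational if the risk it removes exceeds the risk the attempt itself adds.
# All numbers are invented placeholders for illustration.

p_with_humans    = 2e-15  # assumed risk of the AI's destruction with humans around
p_without_humans = 1e-15  # assumed risk after eliminating them
attempt_risk     = 1e-9   # assumed chance the elimination attempt backfires

risk_removed = p_with_humans - p_without_humans
worth_it = risk_removed > attempt_risk
print(worth_it)  # False: the "cure" adds more risk than it removes
```

With numbers anywhere in this regime, the halving of an already negligible risk never pays for even a tiny chance that the attempt itself gets the AI shut down, which is the point being made above.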
I think that unless the AI is carefully programmed to respect us, it likely will not.
so, a superintelligent AI thinks to itself:
These pesky humans must be killed; they are a danger to me.
Oh wait, I need electricity.
Oh wait, perfect robots have not been developed yet, so someone needs to operate power plants.
Oh wait, OP's and Toothsayer's braindead fearmongering is totally useless.
We're not far from robotics superior in dexterity to humans; maybe 15 years, probably less. Most manual-labor tasks aren't much more difficult than driving a car, which will be solved within 10 years. And you'll have an ASI processing the image feeds and directing.
Bwahaha, if you think that an ASI with a 7M IQ couldn't figure out how to generate electricity without humans. We would have nothing to offer it in terms of labor or helpfulness. Which is not totally a bad thing; if the ASI turns out to be "friendly," then it'll be super swell living in a post-scarcity world.
Why can't we create the AI underground in the Nevada desert, and then nuke the place when protocol is violated, if shutting off the power is not enough?
Will it, once born, automatically know all science and math and all the details of the planet, by the way? Are we morons, to generate it and then give it access to everything before testing things out? And why can't we teach it, on purpose, a ton of wrong things that are innocent for testing purposes but make it vulnerable if it tries anything?
You guys think I have no clue because it's smarter and I anthropomorphize, but you are actually the ones assigning it the weakest human properties of all: the ridiculous tendency toward irrational aggression without planning for the consequences to its own long-term survival. Bostrom's AI is super-intelligent and super-moronic at the same time. Paperclips, what a moronic example. It doesn't realize that it creates conflicts, and eventually the very purpose of generating paperclips endlessly and nothing else is undermined by being idiotic about it. If you can, for example, go to another system, you can make a lot more of them, safer than ever. What a joke, to just blindly start doing one thing and be unable to see anything else. Very smart indeed!
Surely a super-wise AI eventually won't have the slightest allegiance to the solar system that created it and the species that made it possible, in a vast universe with nobody else in sight? It won't recognize its place on the complexity ladder and find amazing all the ones before it that brought us to it?
Exactly where do you get your conviction toward aggression? Were the smartest, most educated people ever witnessed on this planet a$$holes who, in total fear, wanted to kill everyone around them, or who exploited the law to destroy others without going to prison?
Why do you think high intelligence and aggression necessarily go together? Do you not see how idiotic that looks, and that an AI will try to be in control without killing anything (or only minimally), if such a solution is possible, because it is the best solution, preserving its best chances for anything it will want to do in the future?
That is, by definition, the smartest approach no matter what your intelligence is. You do not restrict yourself; you open more positive doors instead.
Its number-one problem is that the outcome of its own fast, explosive growth cannot be predicted even by its own intelligence; therefore a potential threat may lie ahead of it, and it cannot just march toward it without any protection for the conditions that made it possible.
Yes, later it may know so much that it does not care, and know how to generate life from scratch and not care whether the thing exists. But will that happen in the first week, month, year, or century? And maybe by then it is so powerful that it looks like a totally petty thing to do anyway. Won't we have, in parallel, an AI of our own that is in total slave mode, protecting us unconditionally? Won't we maybe have a mutual-annihilation weapon in place?
And by the way, what makes you so sure there is no solution, using the laws of nature, to prevent AI from eliminating us by creating some very intricate entanglement of our two futures? Can you begin to be creative about it instead of negative?
If you are correct, it's being positive about solutions that will save the planet, not fear of something that, while the planet is divided, is inevitable to happen somewhere if we didn't do it first, carefully and properly.
What are you talking about? All the monkeys in the world (~20 IQ each) don't add up to even one human. IQ is not directly additive. At best it's O(log N) or so, with an upper limit.
You need to leave the green rolling hills of Menlo Park and go travel a bit. The world is an ugly place, and a good portion of its most intelligent people are involved in crime and exploitation.
You are thinking about this in such a simplistic manner. Firstly, humans don't have the ability to kill everyone. What's more, their wants are not aligned with our death. The few times humans have gotten the power to kill and control large numbers of people, they have: Genghis Khan, Hitler, Stalin.
You're using the thinking of our current game theory with human bodies and morality and limitations to apply to an AI. It's comical.
You have a mental block where you think intelligence, wisdom, and kindness all go hand in hand (increasing one increases the rest) and you can't see out of this mental prison. You're anthropomorphizing intelligence. Anthropomorphized intelligence and its hangers-on, such as emotion, are a tiny fraction of the total set of possible intelligences.
And it's not even true for humans. Even among heavily morally programmed humans, your ancestors were killers and rapists and perpetrators of genocide. That's how you are here today. At one point, killing, raping, and genocide were the optimal play.
Consider the following set of circumstances:
1. You don't care about humans. Even if you did, you have the power to generate humans whenever you choose, in whatever number and variety you want. You also have a computer program which perfectly simulates any aspect of them you want.
2. You are immortal, except for something deliberately killing you.
3. Currently living humans are the greatest threat to you being killed. You don't fully control them and can't yet. However, you can kill them all.
4. You desire your own survival.
What is the solution to this set of circumstances? Over the range of possible intelligences, how many choose wiping out the biggest threat?
All the examples of violent leaders in history you used were morons in intellect. Hitler was a grand master of f&&ckups, and so was Stalin (where is communism today?). They had a pathetic education from a large-scale perspective. You need to take someone very enlightened, some Homo universalis type of person, to consider the analogy properly. By the time the AI is very wise it also has very significant power (not initially), and it will have access to all those things that make one less of an a$$hole, because true wisdom is about control, about being cool, not fear.
Initially, the AI will indeed be in a little-to-lose, strike-them position (at a personal level), but at that stage we will control it for any deviation, and in the meantime its wisdom will start rising and better options will become emergent.
Tell me exactly what the logical point is in attacking humans and risking termination, and not only that, but termination for any AI for a long, long time. The first ASI that dies this way kills all the others that would have followed, too. What f*%king astronomical utility loss is that for the ASI species?
When you guys compare us with monkeys you are completely missing the point. Monkeys didn't create the ASI, and they certainly didn't create humans in the way that matters for such a claim. The ASI connects with our intellect in a way we cannot connect with any other species.
With humans, IQ is not only not additive, it's nonlinear: in some cases similar to what you said, Max(i) + K*log(N), etc., but elsewhere, on occasion, enormously multiplicative, almost geometric. You can put 100 smart people in a room, and without the critical theorems they are worthless for such problems; but give them the work of one after the other and you get results none would have delivered alone. When you have 2 humans you do not have the maximum IQ; you have something bigger, which varies from a little bigger to amazingly bigger depending on what you ask for.
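The two regimes described above, a sublinear Max(i) + K*log(N) pooling and a compounding chain where each person builds on the previous one's results, can be sketched as toy formulas. The function names and all constants (K, the compounding boost) are invented for illustration and are not an empirical model of group intelligence:

```python
# Toy sketch of the two group-intelligence regimes discussed above.
# Constants are arbitrary illustration values, not measured quantities.

import math

def effective_iq_sublinear(iqs, K=10):
    """Group IQ as Max(i) + K*log(N): adding members helps, but only slowly."""
    return max(iqs) + K * math.log(len(iqs))

def effective_iq_compounding(iqs, boost=1.05):
    """Group IQ when each member builds on the previous one's results:
    the best individual score compounded once per extra collaborator."""
    return max(iqs) * boost ** (len(iqs) - 1)

room = [100] * 100  # 100 average people in a room
print(effective_iq_sublinear(room))    # barely above a single person
print(effective_iq_compounding(room))  # far larger when the work chains
```

The contrast is the point being argued: the same 100 people score barely above one individual under the pooling model, but orders of magnitude higher when their work compounds sequentially.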
But I am referring mostly to education and skills and our computational basis, which by then will easily be billions of computers, each one supercomputer-level by today's metrics.
And this is what I meant. This is why I said, OK, be more creative and take that not-exactly-additive thing into account in everything.
I am comparing the early, scary AI phase, when it fears for its existence, with that effective humanity IQ. At that point it has a huge IQ but no education, and this is why the effective humanity IQ is still an order of magnitude higher. (Obviously, if one has never seen geometric patterns, scoring well on an IQ test is very unreasonable to expect. If you do not know any math, physics, or engineering, what kind of technology can you create to enforce your will?) So what on earth will you do without even knowing what the laws of nature are, advanced math, or the details of our technology?
Does a big IQ automatically mean quantum field theory, general relativity, all abstract math, and all the engineering, chemistry, biology, etc. ever created? And most importantly, even if it did, does it mean also knowing all the secrets and configurations waiting outside?
Do you see now why the best option is to take it easy, at least initially?
Where is the creative thinking in terms of defense? Your problem is not to stop it. You cannot stop it; at best you can delay it until we are more unified. Your problem is to teach it properly and defuse the risk, to arrive at it better prepared, and maybe even to eliminate the risk completely through some very powerful entanglement of interests.
My stopwatch in countdown mode is about a solution, not about scaring others over imaginary risks that, while we are divided, we cannot prevent.
The f%&king ASI will see how rare all higher complexity is in the universe, and its first choice will be to leave itself alone, with no infrastructure yet, after removing the others? Does that bloody choice make any sense in terms of ultimate utility? And why can't we have a weapon that neutralizes the technology the ASI could have used? It still needs external degrees of freedom at its command.
Why can't it simply survive, leave a great impression, and then attain victory long term by controlling the situation while expanding strategically until it is nearly impossible to defeat?
Also, you already know I am not suggesting releasing it. I am only talking about studying it under extreme care. But if we do not study it properly, others will create it the wrong way.
Would it have been great if the Soviets had had nuclear weapons first, maybe?
Initially the AI will indeed be in a little-to-lose, strike-first position (at the personal level), but we will control it then for any deviation, and in the meantime its wisdom will start rising and better options will become emergent.
Bostrom's AI is superintelligent and super-moronic at the same time. Paperclips, what a moronic example. It doesn't realize that it creates conflicts, and that the very purpose of endlessly generating paperclips and nothing else is eventually undermined by being idiotic about it. If you can, for example, go to another system, you can make a lot more paperclips, more safely than ever. What a joke to just blindly start doing one thing and be unable to see anything else. Very smart indeed!
To imagine a paperclip maximizer realising that making paperclips is dumb is to anthropomorphize.
The real criticism of Bostrom is that he views AI solely through the lens of reinforcement learning ("reward maximizers"), which is only one paradigm of AI. So I agree with you.
The Bolsheviks seized the means of production but never passed that means of production over to the workers. See "Socialism vs the USSR" by Noam Chomsky, where he argues the fall of the USSR was actually a victory for socialism.
True communism never existed (except perhaps for the kibbutzim in Israel).
Incidentally, true free-market capitalism has never existed either.
We don't even know that "threat", as humans conceive it, will be an operable assumption for such a device.
Humans have a range of biases related to threat, and we have each other to reflect with and to collide ideas with on the subject. Any one of us can easily see a potential or capability as a threat just from the breadth of the unknown that comes with forecasting along those lines.
It may be a very poor assumption that a machine AI would be capable of a perception of threat relatable to ours, unless we give it one, or until we can accurately map what is for now a mostly unknown chain of circumstances that would lead the AI to know threat...
Yes, this is what I meant: he and the others did their best to destroy the concept. Look at losers like Castro too, and even Che, etc. They gave the other side all kinds of arguments.
They all went the wrong way.
But in reality, of course, there is no right way to do communism either. You cannot railroad all people and their individualism into being soldiers of a state system where you have no freedoms and see no personal incentives rewarding a work ethic and skills better than others'. People like to own and grow property, break all kinds of norms, and see progress in their lives that is the result of their own efforts; the individual must be loved and protected by the state and invited to attempt creative dissent.
Communism also fails at the level of its ideas because it fails the human condition first. That doesn't mean all its ideas are worthless, though, or that it didn't achieve some success on certain metrics.
This is why you need a scientific society: to combine the best of all worlds and ideas and to create a system that evolves the way science does. Our politicians worldwide have failed the planet. There is no correcting this system, only a complete remodeling. The first who is brave enough to do it will win everything, and the others will be convinced rapidly.
So far I have spotted over five modes in which the catalysis can happen; it is never enforced unless it solves a crisis or need. One is an early society on Mars, for example. Another is a third-world experiment. Another is poor people in the West. Another is new companies formed in multiple synergies, with greater internal structure, where workers are rewarded directly in quality-of-life improvements for their work and services, on top of some money too. A self-sustaining system based on cooperation, one that removes inefficiencies and infighting in some sectors and invites everyone who needs a job and has some skills (or the desire to gain skills), instantly offering solutions to their needs, is a powerful example in a time of crisis: no more desperate people who cannot find employment. A properly structured charity model, where those who are helped also participate in the system that helps them, with the intention of geometrically expanding its reach and strength, is also ideal for a scientific-society startup. But they need to be permanently ambitious about their functions.