When will the Robot Apocalypse arrive?

05-24-2016 , 03:17 PM
Quote:
Originally Posted by mackeleven
However, there may be another side to it. We may have jobs that we simply do not allow AIs into, or that AIs have no incentive to partake in at all. Playing video games is a job for some now. Selling real estate in virtual worlds such as Second Life is a job, etc. It's one possible direction: jobs that are essentially recreational in nature.
lol, 'video game bots, though', just like in the 'poker is dying' threads.
05-24-2016 , 03:22 PM
Hook me up with some computronium. My neocortex digs the sound of that.
05-24-2016 , 03:30 PM
Quote:
Originally Posted by Howard Beale
The fact is that the number of people who consider themselves 'garbage', and think it's no big deal if they and their children simply fade away (goodbye human race, time to hand it over to machines of all things), is very, very small. The rest of them are human, and I'll bang the drum again: humans can, and do, act very badly over much less than what's coming.
I think they will just be lured into it. Some robotoy will pave the way.

It's not that AI is starting an apocalypse tomorrow at noon. The AI will make it possible to pay people for nothing, because it will be doing all the jobs, so they can go and watch more football. And play some live poker too, even in the daytime.

Last edited by plaaynde; 05-24-2016 at 03:39 PM.
05-25-2016 , 12:57 PM
Quote:
Originally Posted by Howard Beale
lol, 'video game bots, though', just like in the 'poker is dying' threads.


Really, though: take yourself out of the equation and imagine looking at Earth and the human race as objectively as possible, say from the ISS or the Moon. Then imagine machines doing all the work for the human race. It's a golden opportunity, and it has been the apparent goal since the invention of simple tools: to make our lives easier. Are we really going to screw that up? (If we do, we absolutely deserve it.)
05-25-2016 , 02:16 PM
Yes, we are going to screw it up.
05-25-2016 , 03:18 PM
No, we are not just going to screw it up. We are going to screw it up, try all kinds of BS "solutions" that continue to screw it up, and then finally, f*cking finally, we will get it and begin the scientific society, which is by definition the way to not screw it up and have machines do all the work, all designed by science and logic, finally, god damn it!

As someone (Churchill) once said: you can always count on Americans (read: all humans, really) to do the right thing after they have tried everything else.
05-25-2016 , 05:43 PM
Sigh... an optimist. I have an uncle who once explained his approach to investing to me ('I'm aggressive and I'm an optimist') all while laughing at me for taking a more conservative approach. He and his wife are now both in their mid-80s and dead broke. Borrow-money-from-me-and-not-be-able-to-pay-it-back broke, at that.
05-25-2016 , 06:07 PM
Quote:
Originally Posted by Howard Beale
Sigh... an optimist. I have an uncle who once explained his approach to investing to me ('I'm aggressive and I'm an optimist') all while laughing at me for taking a more conservative approach. He and his wife are now both in their mid-80s and dead broke. Borrow-money-from-me-and-not-be-able-to-pay-it-back broke, at that.


Risk taking isn't exactly optimism and risk aversion isn't exactly pessimism.
05-25-2016 , 06:31 PM
Quote:
Originally Posted by spanktehbadwookie
Risk taking isn't exactly optimism and risk aversion isn't exactly pessimism.
Well, something is something when you're in your 60s and everything is leveraged to the hilt. Maxed out on the mortgage, aggressive and leveraged in the market, etc., etc. How do I know this? Because I mentioned that my mother had much of her money in fixed income, and he barked his laughing scorn in my face. Then, 2 years ago, he asked for 100K, explaining that he was selling something for 500K in 4 months. I sent him 25K. 4 months go by. 5, 6 months. I finally send an email. I can't believe, to this day, that he actually said, 'They say you shouldn't lend money to friends and relatives.' lolololol. So I ask what happened to the 500K, and THEN he tells me that he owed more on that thing than he'd thought. So I say something like, 'You said you're getting 500K,' and he said yes, he was selling the thing for that, but it turned out he owed even more. So, jfc! wtf! He misled me. Glad I didn't send the hundred.

Also, sorry for the derail. I really like masque and I wish him the best but his 'scientific society' is just too much for me. IMR, what's the first thing we do w/ science? We weaponize it if we can.

It's going to be bad, I'm not joking, and there is absolutely NOTHING that can be done, because 'MONEY!' Money, money, money, money: nobody can stand in the way of the money, and the end result is inevitable.

Btw, if you're wondering what the thing he was selling was, I can only relate this: he'd borrowed some money and, as collateral or something, took out a 4.5-milly life insurance policy on his 80+-year-old wife, and he was going to sell it and get 500K. So I said, 'If you only get 500K for a 4.5-milly policy on an 84-or-so-year-old woman, you're getting robbed,' blah, blah. It was something like that. Obviously he owed more on his loan than the 500K. But that's how he rolled. When the market did nothing but go up he lived very large: a 9-milly house in Greenwich, CT; didn't sell at the top of the market because his wife, who started out dirt poor, wouldn't agree; blah, blah.

Sorry for the rant.
05-25-2016 , 09:28 PM
No worries, Howard, my eye-ears are here even for rants. I can relate to having not-so-responsible relatives as well.

Scientists are, in theory, natural optimists in the face of errors, as that comes with the method.

I, for one, enjoy optimism for no particular reason as a strategy.
05-25-2016 , 10:08 PM
It's a thin line between optimism and wishful thinking.

Even thinner between pessimism and fatalism.
05-25-2016 , 10:35 PM
Quote:
Originally Posted by masque de Z
Certainly, we had very few animals that were very smart anyway (as smart as AI will see us, I mean), and towards them we tend to be more careful these days. The smarter other animals of this planet never came close to producing mathematics or understanding the universe. This is what makes us very different and forces AI to respect us as a system more than we respect, say, insects. This is why the stupid analogies people make about how AI will see us the way we see lower life are not appropriate. AI is essentially a better version of us, not something completely different that we are unable to grasp. It is still forced to share mathematics and the laws of nature. What do we really share that is so dramatically powerful with, say, ants?

A couple of hours ago I shared Ortho Home Defense with the ants outside my house; I assume that's the type of thing an AI would share with us.

I could have moved to the other side of the city, sure. There's a pool over there and plenty of other, albeit less attractive, matter, and the ants couldn't really **** with me too much from that far away...

Then I realized, "they're just ants, **** 'em," and spent an absurdly low amount of energy to prevent them from ever being a problem again, whether here or across the city or across the universe.
05-25-2016 , 10:48 PM
Quote:
Originally Posted by VeeDDzz`
It's a thin line between optimism and wishful thinking.

Even thinner between pessimism and fatalism.


Huge line between optimism and wishful thinking as I know the two.

One can do pessimistic wishful thinking.


Pessimism and skepticism blur together more than optimism and wishful thinking do. An optimistic skeptic is in a good position to avoid both, eventually.
05-25-2016 , 11:58 PM
Quote:
Originally Posted by wiper
A couple of hours ago I shared Ortho Home Defense with the ants outside my house; I assume that's the type of thing an AI would share with us.

I could have moved to the other side of the city, sure. There's a pool over there and plenty of other, albeit less attractive, matter, and the ants couldn't really **** with me too much from that far away...

Then I realized, "they're just ants, **** 'em," and spent an absurdly low amount of energy to prevent them from ever being a problem again, whether here or across the city or across the universe.
But the ants asked for it by being a nuisance when they didn't have to, since there is a ton of space out there that humans do not live in. If the ants were intelligent enough to be conscious of what was about to happen, they would have behaved differently. That is the big difference here. If you saw an ant colony that was calculating Pi with experiments in the sand, you'd better believe you would find a way to move them elsewhere, amazed at the connection.

The ants didn't create us. But we would have created AI.

An ant doesn't have dreams and plans of a future with improving technology and science. An ant's behavior is static, as it was 10 million years ago.
(That may not be entirely true; they interact with us, for example, and adapt their behavior, and I would love to know more. But it sure looks that way: as if they are guided robotically by the same basic functions, which simply adapt to the environment but do not develop better technology and behavior on their own by experimenting with new approaches that the next generation builds on. Most lower animals look settled.)

That is an enormous difference.

AI can reason with us and not with the ants.

Plus, I personally don't have a big problem with it if AI took out some 50% of the losers and scumbags out there who make the world a horrible place to begin with, with their insane priorities and unethical conduct towards others.

All humans needed to do was have only 2 kids per family, and the population would have stabilized until we were super advanced and efficient in everything. I don't mind if the few that don't get it get some attitude adjustment. But a high intelligence finds a bigger challenge in containing and controlling things without closing doors than in simplistically solving things with violence.

Deep down you know that if it were easy to move the ants intelligently and rapidly out of the problem areas, with some chemical approach or other incentives, and into the desert, for example, you would find joy in manipulating the situation that way. It is just harder for now without the technology. It's a lot cooler, and probably more efficient too in the end, if you have the technology to play this way. I can kill some insects that get to be a nuisance, but I also find pleasure in trapping a spider and taking her out of the home. I like giving second chances to those that are a bit more interesting than the other insects, such as flies and mosquitoes, that are never going to get it.

Last edited by masque de Z; 05-26-2016 at 12:10 AM.
05-26-2016 , 12:06 AM
Quote:
Originally Posted by masque de Z
I don't have a big problem with it if AI took out some 50% of the losers and scumbags out there who make the world a horrible place to begin with, with their insane priorities and unethical conduct towards others.
Because the AI would somehow believe that those 50% had the free will to decide and act on what is 'ethical' and that by exercising their free will, they chose the unethical option, and thus need to be held morally responsible?

Or

Because the AI would be a determinist and would not believe in moral responsibility, but rather only in deterrence? In which case, its calculations may reveal that the best way forward in terms of deterrence... is to kill all.

My contention is that hypothetical AIs that do not believe in free-will are likely to be far more dangerous.

Last edited by VeeDDzz`; 05-26-2016 at 12:15 AM.
05-26-2016 , 01:14 AM
AIs adjudicating justice is plot material: Robo-cop, Robo-judge, Robo-jury, Robo-jailer, Robo-executioner.
05-26-2016 , 02:38 AM
Quote:
Originally Posted by masque de Z
Plus, I personally don't have a big problem with it if AI took out some 50% of the losers and scumbags out there who make the world a horrible place to begin with, with their insane priorities and unethical conduct towards others.
And... back to the Politics Forum, where the worst you will read is a debate about the morality of murdering people who attempt to cross the U.S. border.*

* I hope you realize that you are a bigger loser and scumbag than almost anybody you'd hope to kill (except people as apparently insane as you are?) if you're genuinely okay with the liquidation of 4 billion people. **

** I'm grunching and I hope I'm misunderstanding what you said.
05-26-2016 , 02:44 AM
Quote:
Originally Posted by VeeDDzz`
Because the AI would somehow believe that those 50% had the free will to decide and act on what is 'ethical' and that by exercising their free will, they chose the unethical option, and thus need to be held morally responsible?

Or

Because the AI would be a determinist and would not believe in moral responsibility, but rather only in deterrence? In which case, its calculations may reveal that the best way forward in terms of deterrence... is to kill all.

My contention is that hypothetical AIs that do not believe in free-will are likely to be far more dangerous.
Hi, it's been a while. Not to be rude, but it looks like we've yet to learn anything about what determinism and moral responsibility mean! I'll check back next year.
05-26-2016 , 02:51 AM
Quote:
Originally Posted by smrk2
Hi, it's been a while. Not to be rude, but it looks like we've yet to learn anything about what determinism and moral responsibility mean! I'll check back next year.
That really depends on how loosely you choose to define determinism.

There's no consensus in the modern philosophical literature. If you like to pretend there is, go right ahead.
05-26-2016 , 03:10 AM
Quote:
Originally Posted by VeeDDzz`
That really depends on how loosely you choose to define determinism.
It is a question of definition, but not of looseness. You can't say or imply that determinists don't believe in moral responsibility; that's entirely wrong. Most determinists believe in moral responsibility; some, but fewer, do not. It is not that a determinist who defines determinism more or less loosely then believes or doesn't believe in moral responsibility; rather, determinists generally agree about the definition of determinism and argue about whether it is consistent with beliefs about moral responsibility.

Quote:
There's no consensus in the modern philosophical literature. If you like to pretend there is, go right ahead.
Sure there is: there is a consensus that determinism is the thesis that events are necessarily caused by prior events in conjunction with the laws of nature. Don't confuse the finer discussions about the concept or its implications with the absence of an operative definition.
05-26-2016 , 03:15 AM
Quote:
Originally Posted by smrk2
And... back to the Politics Forum, where the worst you will read is a debate about the morality of murdering people who attempt to cross the U.S. border.*

* I hope you realize that you are a bigger loser and scumbag than almost anybody you'd hope to kill (except people as apparently insane as you are?) if you're genuinely okay with the liquidation of 4 billion people. **

** I'm grunching and I hope I'm misunderstanding what you said.
It's funny that you would think this is what I meant. You should know better: I couldn't be philosophically consistent with the rest of what I say in SMP if I held such a position.

I didn't call 50% of the people alive now losers, etc. Of course, others have said such things before, hopefully joking.

I said 50% of the losers and scumbags that exist out there. That is hopefully 50% of some 1-5% or less of people, not 4 billion (I didn't even say 100%, lol, as 50% would be an instant boost). But you often know exactly who these are, and they are frequent. You cannot believe that people who make decisions to make money that ultimately kill innocent consumers (and all who support them in this game), or who order a military strike that kills many civilians when they could avoid it and do it later with better selection and patience, or the ISIS people who committed the modern monstrosities we all witnessed, born out of the dark ages, are not worthy of an instant wish that they had never been there, if a better alternative were impossible. And how about those who profit by enslaving others eternally to a life of despair and then use their gained resources to kill the better outcomes that emerge here and there and to maintain their power to continue their crimes? How about countless politicians and their choices and corruption? Can't you easily find 10% of leaders out there who are losers?

Even all of these could be properly reformed, but losing them suddenly, just like that, wouldn't be a great problem either, if it removes or seriously undermines the conditions that harm the others. It wouldn't be humans deciding something like that anyway. It was a hypothetical, because neutralizing them is probably a good idea from a utility perspective if one had such superpowers as the suggested future AI (supernanny, lol). The crimes people commit during a lifetime of selfish, destructive behavior are known mostly to them alone. I didn't elevate myself to judge and executioner. But we know they are out there in great numbers. A child doesn't start that way. They are made gradually.

Last edited by masque de Z; 05-26-2016 at 03:22 AM.
05-26-2016 , 03:20 AM
Quote:
Originally Posted by smrk2
You can't say/imply that determinists don't believe in moral responsibility,
Yes I can: if I view responsibility as originating in a person's freedom to choose and their intentions, as opposed to outcomes or consequences of their choices.

Tell me. What fault exactly is there in such a view of responsibility?
Quote:
Originally Posted by smrk2
most determinists believe in moral responsibility, some but fewer do not.
Got evidence of this?
Quote:
Originally Posted by smrk2
It is not that the determinist who defines determinism more loosely or less loosely then believes or doesn't believe in moral responsibility, rather determinists generally agree about the definition of determinism and argue whether it is consistent with beliefs about moral responsibility.
It is always a matter of looseness when it comes to non-falsifiable concepts like determinism and free-will.
Quote:
Originally Posted by smrk2
Sure there is, there is the consensus that determinism is if it's the case that events are necessarily caused by prior events in conjunction with the laws of nature. Don't confuse the finer discussions about the concept or its implications with the absence of an operative definition.
What operative definition?
05-26-2016 , 03:27 AM
Quote:
Originally Posted by masque de Z
It's funny that you would think this is what I meant. You should know better: I couldn't be philosophically consistent with the rest of what I say in SMP if I held such a position.

I didn't call 50% of the people alive now losers, etc. Of course, others have said such things before, hopefully joking.

I said 50% of the losers and scumbags that exist out there. That is hopefully 50% of some 1-5% or less of people, not 4 billion (I didn't even say 100%, lol, as 50% would be an instant boost). But you often know exactly who these are, and they are frequent. You cannot believe that people who make decisions to make money that ultimately kill innocent consumers (and all who support them in this game), or who order a military strike that kills many civilians when they could avoid it and do it later with better selection and patience, or the ISIS people who committed the modern monstrosities we all witnessed, born out of the dark ages, are not worthy of an instant wish that they had never been there, if a better alternative is impossible. And how about those who profit by enslaving others eternally to a life of despair and then use their gained resources to kill the better outcomes that emerge here and there and to maintain power.

Even all of these could be properly reformed, but losing them suddenly, just like that, wouldn't be a great problem either, if it removes or seriously undermines the conditions that harm the others. It wouldn't be humans deciding something like that anyway. It was a hypothetical, because neutralizing them is probably a good idea from a utility perspective. The crimes people commit during a lifetime of selfish, destructive behavior are known mostly to them alone. I didn't elevate myself to judge and executioner.
Right, I was going to put in a third footnote accounting for the narrowing of "losers and scumbags" to a far smaller number, and then emphasize that it's still insane to be okay with an AI getting rid of 50% of the losers and scumbags (still tens of millions of losers and scumbags, right?).

Being a loser or a scumbag is not a capital offense. I'm actually not even interested in an insipid utility argument either; I'd rather challenge you on the cowardice of deferring the act of killing people to an AI and then hiding behind not elevating yourself to be a judge or executioner. If you want somebody dead, you should have the courage of your convictions to do it yourself (you can still use a robot army, but it's on you to give the order).
05-26-2016 , 03:53 AM
Quote:
Originally Posted by VeeDDzz`
Yes I can: if I view responsibility as originating in a person's freedom to choose and their intentions, as opposed to outcomes or consequences of their choices.

Tell me. What fault exactly is there in such a view of responsibility?

Got evidence of this?

It is always a matter of looseness when it comes to non-falsifiable concepts like determinism and free-will.

What operative definition?
Holy God.

- No, you can't possibly say that, because it's demonstrably false and it's not even up for debate. There are any number of philosophers, like Dennett or the Churchlands, who are determinists and believe in moral responsibility.

- It's hard to believe you missed the hundreds of times this was linked in the hundreds of free will threads I recall your tedious participation in, but here it is one more time:

http://philpapers.org/surveys/results.pl

Quote:
Free will: compatibilism, libertarianism, or no free will?

Accept or lean toward: compatibilism 550 / 931 (59.1%)
Other 139 / 931 (14.9%)
Accept or lean toward: libertarianism 128 / 931 (13.7%)
Accept or lean toward: no free will 114 / 931 (12.2%)
60% of philosophers accept or lean toward the view that determinism is compatible with free will.

- I'm just going to use my judgment that you neither know nor are capable of defending the theory of meaning you purport to believe in (because you like the sound of the words, or what they might vaguely represent?) and suggest that your statement, "It is always a matter of looseness when it comes to non-falsifiable concepts like determinism and free-will," is itself non-falsifiable and therefore some loose horse **** indeed.

- The operative definition that is in the ****ing comment you just quoted? Determinism is the thesis that events are necessarily caused by prior events and conditions in conjunction with the laws of nature.