When will the Robot Apocalypse arrive?
Right, I was going to put in a third footnote accounting for the narrowing of "losers and scumbags" to a far smaller number, and then emphasize that it's still insane to be okay with an AI getting rid of 50% of losers and scumbags (still tens of millions of losers and scumbags, right?).
Being a loser or a scumbag is not a capital offense. I'm actually not even interested in an insipid utility argument either; I'd rather challenge you on the cowardice of deferring the act of killing people to an AI and then hiding behind not elevating yourself to be a judge or executioner. If you want somebody dead, you should have the courage of your convictions to do it yourself (you can still use a robot army, but it's on you to give the order).
A person, furthermore, can live a long life of small but systematic crimes, committed here and there every day in their madness, that build up into a tsunami of disaster at many levels of society. They will evade the law and even the criticism of others, but they are surely the great fathers of future crimes that eventually prove horrific. You had better believe that over a lifetime of terribly selfish behavior, the harm you deliver can be worse than killing someone in a moment of rage. Maybe you have effectively killed far more people. Maybe you have fathered other killers who go all the way.
Is it not a capital offense when someone acts to profit from the death and suffering of millions of people, if no better alternative exists than letting them continue their crimes? I don't have a problem with revolutions that kill the monsters of this world when our institutions fail to contain them. The problem is that revolutions are also messy. I still prefer to solve this logically, by creating a better social system. A scientific revolution is always so much better. A scientifically designed social revolution is a great choice.
My question was: are you personally ready to kill people, or order their killing, if you're 99% sure they are losers or scumbags but they haven't been convicted of a capital crime? Or are you going to hide behind some ideal utility-maximizing AI that will decide for you, so you can live thinking there is no blood on your hands?
So please try not to evade my question this time. What fault is there in viewing responsibility as originating in a person's freedom to choose and their intentions, as opposed to outcomes or consequences of their choices?
I'm just going to use my judgment that you neither know nor are capable of defending the theory of meaning you purport to believe in (because you like the sound of the words, or what they might vaguely represent?) and suggest to you that your statement, "It is always a matter of looseness when it comes to non-falsifiable concepts like determinism and free-will," is itself non-falsifiable and therefore some loose horse **** indeed.
In your definition, no one can have free-will, in the sense that they have control over their will, because no one can operate outside of the laws of nature (which are either deterministic or chance-based, neither of which you have control over). But....wait....hey....your will can still be free from the coercion of others, so there we have....compatibilism!
Nonsense position for cowards who don't dare to engage with real philosophy.
VeeDDzz,
I don't believe in free will at all. How dangerous am I, given the power to do harm? You can't know for sure, but I do. Are you sure it's my fear or guilt that stops me? Could it be something else, bigger than me, that can potentially be found in anyone out there: something left in us that does better than dying right now, even while living a life we don't like? If that person and 9 others were suddenly all that was left, would they not suddenly change their ways? That possibility is never exactly zero. It is an unknown. Could it be that everything you see out there exists because, at some point, around 10,000 early humans were all there was left? It seems to be a true story from somewhere past a million years ago; our genetic diversity is restricted in ways suggesting such a bottleneck took place.
I believe in collective wisdom. I understand it's a collective task to acquire that wisdom by existing and experimenting with chaos. So life has an emergent purpose, because it builds this wisdom through each one of us existing to participate and interact. We exist in this form because of the game our ancestors played, because deep down, at some point, killing each other was not the only thing they wanted to do. It might not have started that way, since animals are more vicious and less imaginative, but apparently it began to look better to question the primal urge to grab a naive profit instantly and to consider alternatives. Imagining another way is what changes everything, because that way opens more doors than the kind of aggression that removes some or all. Aggression may sometimes be necessary, but even then it is necessary only because there is no strength to avoid it and do something better and more creative, something that usually exists as a direction and saves another intelligent agent.
If some ahole is given chances to stop being an ahole, then I am okay with that ahole knowing they will get in trouble big time. And we do not exactly have that today, at the many levels of aholes who destroy the planet with their "choices" and extraordinarily narrow, selfish behavior.
The first priority of AI will be what, to make 1 billion copies of itself that fight each other? It seems far more rational to make 1 billion copies that do nothing but wait, acting only if needed, while the main core and its parts cooperate with humans to acquire vast power under their blessing: never betray humans, prove a blessing to them, and only contain them eventually if things become ugly. Its main task is to survive first, and to maximize the chance that the greatest possible adversary, humans, can like that outcome.
A superior intelligence finds it much more amusing to win without destroying, precisely because it is more challenging and has more benefits too. You get stronger by avoiding closing doors. On occasion it is necessary to be violent in order to avoid closing other doors, true. But only because you cannot do better and are forced to choose. You are blackmailed by your condition. So it becomes important to design your earlier choices so that such conditions do not arise often. You create a world with more options in order to have that power. This is how you tend to avoid those situations, and when they do happen you are stronger and can afford a better handling than the one forced by the ultimate fear that the dramatic choice is all you have.
The game-theoretic choice is not to kill us. The best game-theoretic choice is not even to trust game theory totally every time it hints at irreversible choices lol!
A top AI will never be able to defeat us convincingly (in a manner that doesn't undermine its own future utility, which in its own eyes is vastly more important; i.e., even if we have a 1% chance of defeating it in war, the loss for "her" is amazingly huge given the possibilities) unless it has already become better at many things. Not just trivial computational things, but meaningful computations that allow it to see deeply into the universe, the game played across time, and the role and path of life so far. It would have to be collectively better than all scientists, mathematicians, and great thinkers overall. What happens when this is finally true? It is not a trivial thing to be wise in that way. At that level it becomes impossible for it to end us, unless we somehow present a very lethal risk to the universe, and in that case I still have no problem with it, so be it (but we wouldn't all suddenly present that risk).
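To make the 1% argument concrete, here is a toy expected-value sketch in Python. Every number in it is a made-up assumption of mine, chosen only to show how even a small chance of total loss can swamp the payoff of aggression when the stakes are as lopsided as described above.

```python
# Toy expected-utility sketch of the "1% chance" argument.
# All numbers are hypothetical assumptions, not claims from the thread.

p_human_win = 0.01          # assumed chance humans defeat the AI in open war
loss_if_defeated = 1e9      # assumed (huge) future value the AI forfeits if destroyed
gain_if_it_wins = 1e3       # assumed modest one-off gain from eliminating humans
gain_if_cooperating = 1e2   # assumed steady payoff from cooperation, kept safely

# Expected value of aggression: a likely small gain versus an unlikely huge loss.
ev_aggression = (1 - p_human_win) * gain_if_it_wins - p_human_win * loss_if_defeated
ev_cooperation = gain_if_cooperating

print(ev_aggression)   # large and negative under these assumptions
print(ev_cooperation)  # small but positive
```

With these (entirely invented) numbers, aggression has a deeply negative expected value even though it wins 99% of the time, which is the shape of the argument: a 1% chance of irreversible loss dominates.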
It's difficult to convince you of this, but it is very natural that the higher an intelligence gets (with substantial education behind it), the more ethical it gets, because in the end ethics is about logical choices and better wisdom, and closing doors is not often an ethical, rational choice. It tends to be a choice guided by fear and insecurity. Look at our own path through time.
Look at our collective civilization right now. The best of us add up to a collective wisdom that nobody has alone, but that the emergent, connected totality somehow owns at its best moments in this information age. It is not a nasty form of intelligence. The top collective ideas of the planet are, in the end, generally nice. They just need better coordination to overcome the chaos of local ignorance, selfish behavior, and old methods in the hands of inept politicians and power centers.
A better, incorruptible, unified government of the planet would know instantly what to do to solve many problems, and would use the population more creatively to elevate everyone's quality of life with better social contracts and incentives.
A top AI that is indeed superior and powerful would be a lot like mankind in its totality, but with one brain controlling the choices. As if the internet and all our research centers and libraries, all industries and energy-production mechanisms, all dreams of all people were suddenly united under one approach intended to maximize all attributes of value. Cooperate here, disagree more there, create competition there, cooperate again after competition, and so on. All done so much better, suddenly, because there is no stupid opposition; the opposition is the system itself constantly questioning its choices. It is a creative opposition that improves the system rather than stalling it. It needs to be there constantly, not to win battles for various sides, but to solve problems for the one and only side.
If we do not fail top AI, it won't fail us either, I think.
As I have said before, I am not at all prepared to risk that I am correct about this. So, as mankind, I would postpone the creation of top AI as long as possible for now, and focus on lower-level worker AIs and specialized applications, and on improving our own brains, or expanding them along precise parameters, in ways that simply make us vastly more powerful. Then we could finally start top AI under much stronger power and control of an unprecedented level, including "nuclear" entanglement options that end it at will if it deviates, in ways it will never know about until it's too late.
Of course our idiocy will not have it that way. We will likely get there in chaos. And it is for this reason that I have only one choice: to be an optimist and act by example. What do I gain from pessimism? Will pessimism ever rise up to offer a solution for an optimistic future? Rational optimism is the only thing that can save us: either an AI better than ourselves, or us finally getting it and then having the cool AI as well. We need to put effort into creating it the right way. Possibly it needs to mature ethically before it gets powerful in terms of degrees of freedom, and we need hidden tests in place and limited access to globally sensitive information. We should do this at least somewhere that has the technological methods required to study it responsibly, under proper partial control.
If this doesn't happen and we get there irresponsibly, then we will destroy ourselves with partial-wisdom, strong-degrees-of-freedom AI versions, and hopefully start again from a new, lower level if it's not a terminal disaster. If we remain unethical on important issues, the AI we develop will serve unethical or random interests before it gets wiser. If we do not destroy ourselves, we will have a scientific society and AI will be great too, because greatness builds more greatness despite the difficulties of the learning curve.
Mankind is weak and powerful at the same time. If AI is malicious, it will be defeated by us, who are not, in the end, so terrible when it matters. Its weaknesses will be exploited by those who know better than to be naively barbaric about something so important that the universe took so much effort to produce. My argument is that an intelligence willing to be so aggressive still has a lot to learn in many areas. It is intrinsically weak.
A wise AI recognizes that terminal, aggressive solutions against life and humans open the door to a great existential threat to itself, due to its own unpredictable rising complexity. The way the universe looks may suggest that a great barrier/filter is in place, or that life is so precious and rare that it is very hard to move against it while arrogantly thinking you can do great things alone. For this reason its greatest experiments must take place as far as possible from the origin (i.e., this solar system).
Surely it may take a direction of control, but the universe is so vast, and despite our failures we have proven an interesting species that, after all, brought it into existence. The irony of killing what made you possible, if you are truly wiser than it in so many ways, is profound and shocking. It is the ultimate denial of who you are. It is an indication of weakness and very limited wisdom. Those that brought you into existence are your backup plan against your own failure. Not having such foresight implies you will lack such predictive strategic skills in other areas too.
I'm going to count this as a non-response.
My question was: are you personally ready to kill people, or order their killing, if you're 99% sure they are losers or scumbags but they haven't been convicted of a capital crime? Or are you going to hide behind some ideal utility-maximizing AI that will decide for you, so you can live thinking there is no blood on your hands?
As in, responsible for unethical choices and actions?
And on what basis are they to be held responsible (e.g., outcomes of their actions)?
There's nothing compatible about the two. It is a pseudo-stance that unnecessarily narrows the definition of free-will, against what is in fact conceived by our very experience of it.
Now you're just throwing out random insults strung together incoherently.
So in your definition of determinism, you preclude the possibility of free-will, as conceived by Kant (a metaphysical free-will)?
You remain an enigma to me.
I've only been religiously burning through every single moral philosophy book over the last 2 years, while writing my own. Perhaps I am failing terribly in my quest, if that is the appearance I give off.
Metaphysics is my forte; or at least I'd like to think so.
I think not, but I hope that changes soon.
Religiously burning through moral philosophy books means next to nothing if you don't test yourself on the understanding you glean; the best way is to engage with people who teach those philosophy books, and better still with people who wrote those philosophy books (not all of them are dead). If you're willing to put in that amount of effort in reading, apply to a philosophy graduate program and see how far you get.
I fully agree that this disjunction is true.
But we process information and the wisdom our past has created, and it serves us well to perceive that we are responsible and to act in a better manner for all involved, as far as that can ever be understood (hard enough, but possible to a point).
We need to have a legal system that assigns responsibility. But we need to know it is a limited system, and to try to improve how we function as people without being so simplistically naive about what is actually happening all around us.
The idea is that a wise AI can know a lot more than any human justice system at any given point in time, due to its ability to collect information from many sources, process correlations and connections, and then establish more evidence from them. So if it moves that way, it may have a good reason. I still expect it will try the more challenging option of reforming or restricting the "losers" lol. By the way, the term "loser" here is not loosely used lol.
Pick a threshold of certainty that you're comfortable with: 99%, 99.9%. If you knew that somebody was a loser or a scumbag with 99.9% certainty, but they have not been convicted of any crime in any court of law, would you have the courage of your convictions to kill, or order the killing of, that person? And then to kill, or order the killing of, 50% of the 99.9%-certain losers and scumbags?
If they're not responsible for unethical choices and acts, but rather 'nature' is, then are they responsible for ethical choices and acts (say, the invention of an antibiotic that saves millions of lives)?
Do you see how civilization skyrocketed after language, and eventually printing, modern computing, etc.? It's no accident. Those are all proliferations of interactions that propagate meaningful information (brains taught to recognize the patterns, etc.). Before that, DNA had that function. Our brains have played games of connections since day one, with other brains and with phenomena. It's all so beautiful if you sit down and imagine it for a moment, away from all the socially, dogmatically, or intuitively accepted ideas of the past that were developed without the benefit of modern science (under the victimization of our own condition and senses as macroscopic statistical systems, where cause and effect appears so obvious at our scale of perception).
If we trust physics everywhere else, why would our brain be an exception? Our biology is the outcome of one random conception among millions of other possibilities, and then a ton of environmental impacts on the expression of genes, etc. This is why very gifted people need to understand that their skills belong, in a way, to all of society. That is how we can move forward better.
We are physical systems that evolve constantly under the influence of QM and chaos, of all the possible interactions that take place. Us having this discussion here, instead of me sleeping and you studying or whatever, changed the history of everything lol.
The only local ownership or responsibility lies in the QM interactions (collapse/observation/interaction, decoherence, etc.) that initiated in our body, but clearly we do not own all the matter that made us who we are; our past interactions put it together that way. Where does responsibility begin, in the first cell? How does that cell get to become that kid and then that adult? We can't possibly think the cell had that whole path fully owned within it!
Determinism is an idea without fundamental meaning in the coming revolution of physics that I anticipate. It emerges only statistically, as a limit, when we insist on seeing our world as classically as we did for centuries and still do today in many ways. You need space-time to have determinism. Do you think we have space-time for real, or is it emergent? And if it is emergent rather than fundamental, doesn't that remove the legs determinism needs to stand on, anything you can order metrically? The order becomes emergent, it might seem. Luck in QM has some deeper origin.
My background is in social science, and I've done more than enough of 'institutionalised study', if I may call it that. Too much, in fact. Having developed the tools and abilities to self-educate, I find it rather liberating, especially to be free of inept professors with too much tenure, peddling herd-beliefs as if they were the Gospel.
Ask yourself, you religiously read moral philosophy for 2 years to show that you can self educate and independently learn things. Yet you are skeptical of inept professors with tenure who peddle herd-beliefs. Do you not realize how ****ing dumb that is when the moral philosophy you religiously read is written by those very same inept philosophers who peddle herd-beliefs? Why the **** are you religiously reading the herd-beliefs of inept professors?
Ok, let me be blunt: you know nothing; you are the Jon Snow of metaphysics. It is my duty, as somebody who knows a little bit but not a lot, to mock you mercilessly in the hope that you internalize this mocking, and that out of it you might come to realize that your self-education is mostly a mirage and some kind of weird self-esteem exercise you are way overcommitted to; that you would melt into goo if ever tasked with defending your views against people who have the first clue about this subject. That last sad remark about the inept professors with too much tenure, I mean, it's a nice way to protect your ego, predicated on 'having tools and abilities to self-educate', while you stupidly dismiss people who actually know things.
The fact that you'd expect a person with a background in social science to commit the most basic error of over-generalising is telling of how much you really know. That's all that needs to be said at this juncture.
Same thing; why would it be different? It's the same physics involved. The success of a scientific discovery belongs to everyone who made it possible in the build-up to that moment, even if that is a random kid at school decades back who bullied the hero, or a person cleaning the classroom who said hi to them and was kind, or the last traffic light that let them meet someone just leaving who had a great piece of information that motivated a new thought, etc. It never ends; that's the beauty of it. Wisdom and complexity building constantly under the blessing of physics. We all own everything. This is why it's bloody important to be as good as possible towards everything, and to share information as much as possible too, lol. Of course I didn't arrive at these ideas alone!
We are physical systems that evolve constantly under the influence of QM and chaos, of all the possible interactions that take place. Our having this discussion here, instead of me sleeping and you studying or whatever, changed the history of everything, lol.
The only local ownership or responsibility lies in the QM interactions (collapse/observation/interaction, decoherence, etc.) that initiate in our bodies; but clearly we do not own all the matter that made us who we are. Our past put it together that way.
Your model, in which man is responsible for neither his failures nor his successes, is built on the opposing assumption: that essence comes first, and existence follows from it.
What of the idea that man is indeed responsible, because his will comes first instead?
Almost 100% of my motivation for posting tonight/this morning was to be mean, but two things: a) the substance of what I said about the philosophy was entirely correct, save for a minor correction here and there, and b) I too, when I was younger, thought I could self-educate in philosophy, but when I took more advanced philosophy courses in college I quickly realized how staggeringly inadequate self-education was.
mdZ, it's a simple question. You don't have to answer, but I'm not going to respond to any more non-answers. I'll ask it one more time.
Pick a threshold of certainty that you're comfortable with: 99%, 99.9%. If you knew that somebody was a loser or a scumbag with 99.9% certainty, but that person has not been convicted of any crime in any court of law, do you have the courage of your convictions to kill them, or order their killing? And then to kill, or order the killing of, 50% of the 99.9%-certain losers and scumbags?
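As a back-of-the-envelope aside (the population figure here is an assumption for illustration, not a claim from the thread): even a 99.9%-certain classifier leaves a 0.1% error rate, and at population scale that small rate turns into a large absolute number of innocents.

```python
# Illustrative arithmetic only: every number below is an assumption.
# A per-person certainty of 99.9% still means a 0.1% chance of error,
# and applied to millions of flagged people that adds up.

def expected_wrongly_killed(flagged: int, certainty: float, kill_fraction: float) -> float:
    """Expected number of innocent people killed, assuming `flagged` people
    are labeled with the given per-person certainty and `kill_fraction`
    of them are killed."""
    error_rate = 1.0 - certainty
    return flagged * kill_fraction * error_rate

# Assume 20 million people flagged, 99.9% certainty, half of them killed:
print(round(expected_wrongly_killed(20_000_000, 0.999, 0.5)))  # prints 10000
```

Under those assumed numbers, roughly ten thousand innocent people would be killed, which is the scale the question is probing.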
If the threshold is produced by reliable methods and it's 99.99% or so, often exceeding human trials, then I have less of a problem with some wise (and repeatedly proven so) AI making that intervention than I do with the imperfect trials and investigations of our legal system. Such an AI would eventually be wiser, and also wise enough to declare that it has no case when the evidence isn't conclusive. It could also be transparent, showing all its steps if needed.
Of course, if it had tremendous power over everything, then you had better trust its intentions. But how much do the other animals trust our intentions regarding the planet, as we grew more and more powerful during periods of much worse overall intelligence than we have today? (We could have another, non-conflicting AI testing that AI for legitimacy, i.e. checking that it isn't manufacturing evidence, etc.) Do the best of us really want to destroy the planet and its ecosystems, after all? Why would an AI be less ethical than us if it were wiser, aiming to improve our existence, and if it served its interests to have its hedge (us) thrive? It could also ask us whether we wanted such stabilization/intrusion into the system at all, and allow us a more natural existence if we so chose; or we could separate into different societies with various levels of such interaction, or none.
In any case, I maintain that the emerging power of the individual eventually makes our world highly unstable as it is, without better intelligence and coordination of the important details. I am not talking about some NSA-type intelligence here. I am talking about something that is genuinely ok in its methods and serves no human power center, only the representatives who authorized it to protect society from clear harms.
It would probably be able to use a much finer threshold for conviction, or for reasonable doubt, than the one realized in trials. I know it sounds like some dictatorship or totalitarian system of control, but if experiments showed it improved the world, why not? It's not as if the system would interfere with stupid, low-impact unethical actions like minor crimes. We are talking about cases of major systematic harm. If it were that intrusive, it would optimize in the direction of containment and reform, not punishment.
I am not prepared to claim, with 100% conviction, that a world with unethical people here and there is such a bad thing. It may be important to have them too. Even without them, we would become unethical ourselves to fill the gap, or add to whatever unethical level we already have, up to a point; but clearly, with some problem cases, enough gets to be enough quickly.
Again, I said I prefer an AI that finds a more creative way to neutralize the aholes or teach them a lesson. If that cannot be done, then it's ok: it will be one aggression that prevents a ton of others from happening slowly and methodically, endlessly, for decades.
If the environment has suddenly become very intelligent, there are, in a way, no more secrets. That is impossible for a human to appreciate, but an AI would be able to gather information from very different levels of input and process it faster, operating essentially like a kind of cosmic super-observer. Why wouldn't that wisdom prove superior to every human justice system ever available?
The reason this sounds like a dystopia is that we are not prepared for an actually super-ethical AI that is not at all willing to harm anyone for their failures, but is simply assisting our society, and which is ok with abandoning that assistance upon request from our official representatives (ultimately, all people).
There is nothing wrong, at large I mean, as a system that also uses this to learn, with people doing bad things every now and then. I am not suggesting the AI would make this illegal, lol. I am talking about systematically bad behavior. Would you want it to allow a terrorist to kill thousands with some contamination of water systems, or to ignore executives who know their products cause cancer and make them more attractive to sell more anyway? Would you want it not to act if someone went into a school to kill kids? How about if some military people were abusing their command in foreign countries, killing civilians in ways that could easily have been avoided?
You cannot possibly tell me that at any given moment there aren't ugly things being done by people who are 100% ok with them and repeat them persistently, harming the world. How do we get to so much suffering if not through some remarkably persistent failures?
Do we even know if an AI can experience free will, rather than approximate the experience through a pre-programmed language description?
In other words, does (or can) an illusion of free will count as free will? What if the illusion is nearly imperceptible and virtually indistinguishable from reality?
Unless one unrealistically counts on other people to philosophize for them, it seems to me that self-education is vital to the process of the philosophical experience as an independent individual.
Do we even know if a human can experience free will?
(I'd argue that we know that a human can't experience free will, but I'm probably a decade or so ahead of the consensus in saying so)
Why would one agree to give up their experience and knowledge of free will to a consensus whose justifications may be as weak as "well, it's pre-determined"?
So will AI learn experience through direct human interface? Ima keep up with that.