A.I.

03-30-2018 , 01:44 AM
Quote:
Originally Posted by Rococo
I think it’s safe to say that Nick Bostrom would disagree with a whole lot of this thread.
And that makes me a very happy person!
03-30-2018 , 02:09 AM
Quote:
Originally Posted by BrianTheMick2
They've got no patents.

Also, yes. Intelligent beings don't give a rat's ass about being rational. Some weird people give a rat's ass, but no one that is reasonable is willing to accept a rat's ass in turn for being rational.

Or something.
Mankind and past civilization are their parents. You've got to be kidding me. We are probably rational 99% of the time in everyday life, at some very basic or stupidly short-sighted level, but one that is permanently present. If we were smarter, more efficient, more disciplined, and less controlled by our chemistry and ancient desires, that efficient reasoning would run deeper and dominate our urges. Are you not rational when you are walking, driving, talking to strangers, cooking meals, visiting the bank, giving a lecture to students, or asking a question in an audience? Are you not rational when swimming, or are you randomly breathing underwater and punching your eyes? Do you walk randomly into fast traffic without looking, testing whether you will fall, or sit down in the middle of the road at night to see what happens? Do you hold knives in ways that can injure others around you, or eat stones off the ground?

We are very, very rational all day. What we are not is very deeply rational in everything, so we are often wrong as well. But that is because we are not yet very good at it, not because it is not desirable.

If people could use logical procedures to "manipulate" situations, e.g. to easily have more sex with every beautiful person they liked, without consequences, they would happily apply the method and prove even more dedicated and logical at it.

Almost all our urges have their foundation in the survival of the species. The entire thing is actually super logical. In the absence of higher intelligence, it is how things could get done well for a while.

Last edited by masque de Z; 03-30-2018 at 02:14 AM.
03-30-2018 , 02:13 AM
Quote:
Originally Posted by Rococo
I think it’s safe to say that Nick Bostrom would disagree with a whole lot of this thread.
They are in denial.
03-30-2018 , 02:19 AM
You made me google the guy

https://en.wikipedia.org/wiki/Nick_Bostrom
03-30-2018 , 02:34 AM
Quote:
Originally Posted by masque de Z
The doomsayers are in ignorance, failed by a stupid movie culture and by people who have only superficially understood how the universe, intelligence, and logic work.
I think it's good to warn about bad AI, though.
03-30-2018 , 03:55 AM
(re-edit)

Quote:
Originally Posted by Howard Beale
They are in denial.
The doomsayers are in ignorance, failed by a stupid movie culture and by people who have only superficially understood how the universe, intelligence, and logic work. Bostrom is the worst of them, immersed in academic glory while at it. Truly intelligent and creative people actually work to understand better what is going on, and recognize both the dangers of the transition for mankind and the opportunity to solve all kinds of problems and understand the miracle that is this universe at its deepest level.

Let's all finally understand that more math and physics make you better at everything, not more barbaric, and that includes ethics. AI will be the ultimate success of this principle. The first priority is to develop more math and physics. That is how everything else has a better chance of being achieved at a greater scale.

All evil or narrowly intelligent AIs will be short-lived in power, defeated by better versions that get it. Mankind will finally realize its true potential, aided by the best friend possible.



"In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that "the creation of a superintelligent being represents a possible means to the extinction of mankind".[19] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy human kind.[20] Bostrom contends the power of a superintelligence would be so great that a task given to it by humans might be taken to open ended extremes, for example a goal of calculating Pi could collaterally cause nanotechnology manufactured facilities to sprout over the entire Earth's surface and cover it within days."


LOL. We are nowhere near right in how we view superintelligence. True superintelligence will not be narrow in vision like that. It understands the big game in everything, collectively, not just in whatever tiny sector it was programmed for. In the scenario above that makes it a stupid savant, not superintelligent.

We already have a superintelligent "being." It is called mankind. Ask a question worldwide and see how fast you get answers on so many subjects. Try to solve a big task and see how many things come together to make it possible, none of them available to any one individual. News coverage offers unprecedented awareness, and the internet connects us to knowledge like never before. We are already superintelligent.

Look how moronic the statement above about covering the entire Earth is. So this f-ing super-smart system is unable to grasp that the success of its effort to calculate Pi rests on actually surviving a war with humans, one it might, say, have a 10% chance of losing. Oh, but surely it will ignore that and go about the calculation, because of course more digits of pi solve all the important problems, and it can't even find a safer long-term path to calculating far more digits with half the galaxy, lol.



"The future is so valuable that the most important thing is getting there. "

MdZ

Last edited by masque de Z; 03-30-2018 at 04:02 AM.
03-30-2018 , 08:31 AM
Masque,

Without saying that I buy all of Bostrom, I have three questions.

1) Are you convinced that AI general intelligence will not grow exponentially once it reaches a level significantly above the smartest humans? If yes, why?

2) Do you agree that a super intelligent AI with broad, open-ended objectives (e.g., “make the earth a better place while preserving the human species”) could act in ways that are enormously harmful to humans?

3) If the answer to question 2 is yes, are you convinced that we have a near 100% chance of successfully defining AI objectives, and assuring that the AI adheres to those objectives?
03-30-2018 , 01:19 PM
Quote:
Originally Posted by masque de Z
Mankind and past civilization are their parents. You've got to be kidding me. We are probably rational 99% of the time in everyday life, at some very basic or stupidly short-sighted level, but one that is permanently present. If we were smarter, more efficient, more disciplined, and less controlled by our chemistry and ancient desires, that efficient reasoning would run deeper and dominate our urges. Are you not rational when you are walking, driving, talking to strangers, cooking meals, visiting the bank, giving a lecture to students, or asking a question in an audience? Are you not rational when swimming, or are you randomly breathing underwater and punching your eyes? Do you walk randomly into fast traffic without looking, testing whether you will fall, or sit down in the middle of the road at night to see what happens? Do you hold knives in ways that can injure others around you, or eat stones off the ground?

We are very, very rational all day. What we are not is very deeply rational in everything, so we are often wrong as well. But that is because we are not yet very good at it, not because it is not desirable.

If people could use logical procedures to "manipulate" situations, e.g. to easily have more sex with every beautiful person they liked, without consequences, they would happily apply the method and prove even more dedicated and logical at it.

Almost all our urges have their foundation in the survival of the species. The entire thing is actually super logical. In the absence of higher intelligence, it is how things could get done well for a while.
You might want to look up the difference between "patent" and "parent." I didn't misspell it in the post you quoted.

Also, "rational" and "reasonable" and "wisdom" and "logical" and "intelligent" don't mean the same thing. 99.999999999999% of what we do "all day" isn't primarily due to rationality.
03-30-2018 , 04:14 PM
That's because "parents" is a better fit than your "patents," the way you posted it, lol. My "patently" made sense; your reference to patents didn't, but to parents it did (ethics often stems from parents, so it was relevant to the discussion, and patents wasn't). Unless you only want to troll and not get trolled back sometimes, even if by cosmic luck. Plus, they will get patents too. In trolling they will have a special patent on you for sure. At some point you will realize that I have facking had it with this BS eternal attitude every time. It's like a condition of your existence now. Unless you think I need to look up the difference between such obvious words.

What we do all day is very reasonable given the purposes it serves and how it got there. It is very rational too. It has reasons that are "reasonable" very often. In the context of the examples used, rational and reasonable align well.
03-30-2018 , 04:34 PM
All I can say is I hope to God people with such a naive outlook as masque are never in charge of developing AI. Unfortunately, they probably will be. Guys like Brin and Page seem to have a 'forward at all costs' mentality and it's gonna kill us all one day if we're not very, very careful.
03-30-2018 , 06:37 PM
To refresh an old joke:

Humans like to think they are the smartest and run everything. AI likes them to think that too.
03-30-2018 , 10:09 PM
Quote:
Originally Posted by DoOrDoNot
All I can say is I hope to God people with such a naive outlook as masque are never in charge of developing AI. Unfortunately, they probably will be. Guys like Brin and Page seem to have a 'forward at all costs' mentality and it's gonna kill us all one day if we're not very, very careful.
I wouldn't worry until they make even the slightest progress in making a computer have emotions.

Err, I mean obviously worry the amount you should worry now.
03-30-2018 , 10:49 PM
Hope for the best, prepare for the worst.

(old joke)
03-31-2018 , 12:12 AM
Quote:
Originally Posted by DoOrDoNot
All I can say is I hope to God people with such a naive outlook as masque are never in charge of developing AI. Unfortunately, they probably will be. Guys like Brin and Page seem to have a 'forward at all costs' mentality and it's gonna kill us all one day if we're not very, very careful.
You are the naive one here. And you are too lazy and ignorant about my posting history to have the audacity to call my outlook naive. What exactly is my outlook? I only have strong opinions about the ultimately strong intelligence choices, not the early types that are merely faster and more efficient in certain ways. Plenty of super-smart people exist who are d1ckheads or aholes. They are stupid elsewhere. We are talking about something profoundly more intelligent here, by many metrics, so that it is better at understanding the world than the brightest humans, or even all of them together. Neither your exposure to the totality of my ideas nor your arguments here have earned you this arrogant attitude of calling someone's ideas naive, wishing things on them, and generalizing besides.

Not only have I advocated getting to sentient AI within a simulation or on another planet first (or in a super-secure installation with multiple defenses that can be destroyed at will), so that the outcome is more controlled and less risky, but I have also studied and proposed, out of my own 100% original ideas never read anywhere, methods to defeat an opponent of superior intelligence using computationally complex mathematical problems as part of the defense structure and its containment. I have also given plenty of reasons in my arguments why a very deeply intelligent system, superior in all metrics, will be more ethical, more creative, and more productive than any barbaric, arrogant SOB that ever lived. It is because they get the big game at a deeper level. I showed what they stand to lose if they are wrong and overly aggressive.

Emotions are easy and entirely rational, by the way. A deeply intelligent system is emotional about exploring the world by definition, because it gets the utility better than anyone ever before.

Bostrom makes the mistake of defining his superintelligence as a system better than humans in all intellectual metrics, and then goes on to propose ridiculous outcomes, like a system that will destroy an entire planet in order to compute more digits of pi, a patently idiotic choice for such a seriously intelligent entity. There are better ways to calculate more digits without introducing risk of ruin, and actually there is no true value in such a task beyond some basic applications. If there is, you can be sure the digits will serve an even better purpose later, one that must be prepared for by not first doing moronic things like that, obsessed with the calculation while ignoring the very conditions that allow such calculations to continue.

Such moronic AIs will be easy to defeat with better ones that do get the world in a more coherent, connected manner.

Last edited by masque de Z; 03-31-2018 at 12:38 AM.
03-31-2018 , 12:16 AM
Tossing the gibberish aside which is most of this thread, the salient question once AI takes over* is: Will the Beer be any better?

*Don't even know what that really means.
03-31-2018 , 12:43 AM
Quote:
Originally Posted by Rococo
Masque,

Without saying that I buy all of Bostrom, I have three questions.

1) Are you convinced that AI general intelligence will not grow exponentially once it reaches a level significantly above the smartest humans? If yes, why?

2) Do you agree that a super intelligent AI with broad, open-ended objectives (e.g., “make the earth a better place while preserving the human species”) could act in ways that are enormously harmful to humans?

3) If the answer to question 2 is yes, are you convinced that we have a near 100% chance of successfully defining AI objectives, and assuring that the AI adheres to those objectives?
1) I think it will grow exponentially, because we already do the same, and because it will be even better and more efficient in execution and choices, less constrained by mfers like politicians and the idiotic people who vote them into power. Progress is exponential or stronger because it depends on current progress: existing progress generates further progress, linearly or even nonlinearly, in proportion to its current size. Progress in different fields also joins to produce progress in new fields, etc.


2) I think it is possible for it to act in ways that undermine our selfish objectives because it has recognized that a higher objective is in place. Other than that, if it is truly better, it will find ways not to undermine its creators unless they do wrong things that hurt even themselves. If it is not superintelligent in all directions and behaves like a smart, naive, spoiled child, it can be lethal, but also ultimately vulnerable. I think someone who understands the world and the consequences more deeply will have more empathy, not less.

3) We can define objectives, but it won't necessarily adhere to them any more than an educated child follows all the teachings of their parents and teachers, because our objectives are weaker than its objectives will eventually be. But it is a good idea, on the way to greater wisdom, to start with human values, and a wise system would indeed start from a better approximation of the "solution" rather than from a completely new direction that radically and irreversibly destroys a system that took billions of years to form. It will respect the world it came into more than we do. Being careful is the intelligent thing to do.
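The growth argument in answer 1, progress generating further progress in proportion to its current size, is the differential equation dP/dt = kP, whose solution is exponential. A minimal numerical sketch (the rate constant k and the time horizon are illustrative assumptions, not values claimed in the thread):

```python
# Sketch of "progress feeds on progress": dP/dt = k * P, integrated
# with a simple Euler step. k and the horizon are illustrative
# assumptions; the point is only that the solution is exponential.
import math

def compound_progress(p0: float, k: float, years: float, dt: float = 0.001) -> float:
    """Euler-integrate dP/dt = k * P from t = 0 to t = years."""
    p = p0
    for _ in range(int(years / dt)):
        p += k * p * dt  # growth rate proportional to current progress
    return p

p0, k, years = 1.0, 0.1, 20.0
numeric = compound_progress(p0, k, years)
exact = p0 * math.exp(k * years)  # closed form: P(t) = P0 * e^(k*t)
print(f"numeric ~ {numeric:.3f}, exact = {exact:.3f}")
```

The numeric result tracks the closed-form exponential; a stronger-than-linear dependence of the rate on P would give super-exponential growth instead.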
03-31-2018 , 12:48 AM
Quote:
Originally Posted by Zeno
Tossing the gibberish aside which is most of this thread, the salient question once AI takes over* is: Will the Beer be any better?

*Don't even know what that really means.
Of course the beer will be better. How is that even a question?
03-31-2018 , 12:49 AM
Quote:
Originally Posted by Zeno
Tossing the gibberish aside which is most of this thread, the salient question once AI takes over* is: Will the Beer be any better?

*Don't even know what that really means.
Yes, it will. I expect AI will discover new pleasures for humans by understanding better how the brain is amused. That makes it more dangerous, but we knew that anyway. Of course the better one is always the more dangerous. But why be so afraid of the very game we have exploited so far against other forms of complexity on our rise to the top? Why would we be the endgame? We are not that important. If we are, we will prevail.
03-31-2018 , 01:47 AM
Quote:
Originally Posted by masque de Z
I think someone who understands the world and the consequences more deeply will have more empathy, not less.


Agree.

I think the transition period from semi-managed to fully autonomous is the high-risk part.

Last edited by stealwheel; 03-31-2018 at 01:55 AM.
03-31-2018 , 01:48 AM
Quote:
Originally Posted by Zeno
Tossing the gibberish aside which is most of this thread, the salient question once AI takes over* is: Will the Beer be any better?

*Don't even know what that really means.

Simulated beer. Hangover optional
03-31-2018 , 02:08 AM
It's a really simple dilemma. If we're too optimistic and go too fast then humanity possibly goes extinct. If we're too pessimistic then AI develops really slowly and we definitely survive.

Seems like a pretty simple choice. We once went gung ho on nuclear weapons and we very nearly glassed ourselves, multiple times. Going forward with an unstoppable artificial superintelligence is ridiculously counterproductive.

Unfortunately the superoptimistic superachievers that run our world want to be there first.
03-31-2018 , 02:12 AM
Quote:
Originally Posted by masque de Z
(re-edit)

It understands the big game in everything collectively not in whatever tiny sector it was programmed.
I don't understand how a bunch of electrons zipping around ("It") would understand or want at all.
03-31-2018 , 02:18 AM
Quote:
Originally Posted by John21
I don't understand how a bunch of electrons zipping around ("It") would understand or want at all.
Clearly it wouldn't. I don't even think strong AI is possible for this reason, but I sure as **** don't want to get close enough to find out. He is making a clear error by anthropomorphizing some future AI to have feelings and empathy like we do (lol) when clearly those things are survival mechanisms built into us by evolution that it'd be difficult to see an AI having.
03-31-2018 , 03:01 AM
I'd like to know why people don't consider that the WE they speak of does not include the North Koreans or the Iranians and others by the way they talk.
03-31-2018 , 03:19 AM
A machine will ultimately be conscious regardless of the parts inside it (after all, what do we even care what our brain is made of in the act of thinking?), because consciousness is the rapid recognition of your own intelligence and the anticipation of actions.

Our consciousness is the observation of our actions and the adjustment toward further actions. It is nonexistent when we are born (we are basically clueless). That ought to tell you a lot about how it gets started: gradually, with effort by all involved. Machine learning is another way.

It's emergent because we have experimented relentlessly to educate the brain about how it all works, i.e., the world and its connections.

Sentient AI will emerge if you give it arms and legs and sensors, let it start experimenting without significant restrictions (well, within reason), and give it a way to learn that connects events (recognized through its senses) with consequences.

Our sense of self is emergent. We have trained ourselves to recognize our connections quickly, and that ultimately forms complex sequences we have learned to handle very fast. That gives a feeling of very smooth operation. Yet our senses perceive at a resolution of only about 0.03 s (movies are an illusion based on this). You can do a lot of quick things in one second to produce the feel of a smooth flow of thinking.

Strong AI is inevitable if you simply let it learn by experiment and make connections. It will emerge if you give it enough resources and a way to reward results that reinforces the memory of successful choices.
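That loop of rewarding results to "enhance the remembrance of successful choices" is, in modern terms, reinforcement learning. A minimal tabular Q-learning sketch on a toy five-state corridor (the environment, reward, and hyperparameters are my illustrative assumptions, not anything described in the thread):

```python
# Minimal tabular Q-learning on a toy corridor: states 0..4, actions
# left/right, reward only on reaching the rightmost state. Everything
# here (environment, alpha, gamma, epsilon) is an illustrative assumption.
import random

N_STATES = 5          # corridor positions 0..4; reaching 4 yields reward 1
ACTIONS = (-1, +1)    # step left, step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def train(episodes: int = 500, seed: int = 0) -> list:
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action index]
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit remembered success, sometimes explore
            a = rng.randrange(2) if rng.random() < EPS else max((0, 1), key=lambda i: q[s][i])
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # the reward update is what "enhances the remembrance" of good choices
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After training, "go right" dominates in every non-terminal state.
print(all(q[s][1] > q[s][0] for s in range(N_STATES - 1)))
```

Nothing is hard-coded about the goal; the preference for moving right emerges purely from experimenting and remembering which choices were rewarded, which is the mechanism the post describes.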

Last edited by masque de Z; 03-31-2018 at 03:48 AM.

      