A thread for unboxing AI

04-01-2024 , 03:11 PM
Is there any societal I'll that leftists are not responsible for?
04-01-2024 , 03:20 PM
Quote:
Originally Posted by d2_e4
Is there any societal I'll that leftists are not responsible for?
Malinvestment in unregulated markets isn't leftist driven.

I.e., 5000 shitcoins appearing, and some of the smartest kids of this generation peddling them instead of contributing to real quality of life.

Religiosity in general with many of the negatives still exists (albeit very weakened) and it isn't leftist driven.

The war on drugs: the GOP and right-wing conservatives in general are currently more responsible for it than the left, in the USA and elsewhere.

But in general given the cultural clout of the left is immensely stronger (for now), the left can shape society more than the right, so bad outcomes are necessarily more often left-caused than right-caused.
04-02-2024 , 02:25 AM
Quote:
Originally Posted by ES2
I think this goes back to the sex bot.

Humping something that looks like a pretty woman, no matter how convincing, really wouldn't do much for me. I want a partner who is actually experiencing pleasure and some other things.

It varies by individual though. Like, some people don't enjoy sex with prostitutes for similar reasons. Others don't care.

Similar for me in other areas. A CGI car chase can be passable but never as good as real stunt work.

I can enjoy electronic music to a point. Watching someone press play will never even approach watching world class musicians perform.

Of course, some prefer EDM, CGI and even prostitution.

Is this because I'm old and/or not conditioned to these things? Maybe in part. But I think there is more to it.

Part of it is also empathy, I think. Like, seeing another person with jaw-dropping musical skill brilliantly expressing themselves is cool because you understand that they are pushing the boundaries of human capability. And you feel what they feel. You connect with them, or at least think you do.

Maybe some people will never really embrace AI generated experiences because of this. Maybe we are on a path to erasing or reducing things like empathy and curiosity in favor of like, a passive, solipsistic consumerism.
They will appear to feel because that's what people want. People will have empathy with them. Then one day they'll argue about rights.

Despite the fact that they will increasingly appear to feel, some will insist they cannot feel because ... The arguments will become increasingly desperate, because all we know about others is that they appear to feel. People will demand that society show some consideration to the entities they have empathy with.

Empathy is not some high bar. We have empathy with pets/animals, which is why we have charities and laws etc. I've had a bit of empathy with some ants, although **** em.
04-02-2024 , 05:58 AM
Is it possible for AGI to ever be developed? Most AI researchers anticipate it may exist by 2059.

Is it possible to develop a "conscious" being without being able to quantify consciousness?
04-02-2024 , 06:25 AM
Quote:
Originally Posted by L0LWAT
Is it possible for AGI to ever be developed? Most AI researchers anticipate it may exist by 2059.

Is it possible to develop a "conscious" being without being able to quantify consciousness?
AGI doesn't require consciousness as a philosophical element; we aren't even sure how to define consciousness or what exactly it is for humans.

It just needs an appearance of consciousness.

AGI also just needs to be better than the best human at any intellectual task.

It's not obvious we will be able to develop an AI better at everything we do anytime soon, but one better at many (most?) things is quite probable.

Why is AGI particularly important as a threshold? Because under the assumption that it will also be better at improving intelligence itself (that's one intellectual task, after all), once it is better than humans at doing so, progress becomes self-accelerating and exponential within the limits of physics.

We actually don't need true AGI, just an AI better at increasing AI intelligence than humans are, and from there it's very close to the singularity.
04-02-2024 , 06:38 AM
Exponentiality isn't implied. That sort of singularity is pretty much a fallacy

Fast diminishing returns is more likely
04-02-2024 , 06:43 AM
Quote:
Originally Posted by chezlaw
Exponentiality isn't implied. That sort of singularity is pretty much a fallacy

Fast diminishing returns is more likely
Exponential in time; it's implicit that resource constraints will exist (hence the "within the limits of physics" caveat).

Diminishing returns per resource invested (a unit of compute, for example) are expected, but an AI smarter than humans will be able to muster more resources and use them better and better (as it gets smarter and smarter).

A smarter-than-human AI will yield better chips, organize them better, optimize algos better and so on (and perhaps make some further, for now unexpected, breakthrough in training approaches).
04-02-2024 , 06:55 AM
Sure, but it's quite possible that, putting it crudely, we will have some scenario like it taking a doubling in computing power to add 1 extra IQ point (and getting harder).

The problem is one of complexity, chaos etc.
04-02-2024 , 09:23 AM
Technically speaking, with regard to computing philosophy, "strong AI" does imply consciousness or sentience, while "weak AI" does not.

There are several companies working to develop strong AI or AGI, but there's no timeline or even potential goalpost.
04-02-2024 , 09:26 AM
It's philosophy and up for debate, but I respect it. Consciousness isn't required for machines to simulate intelligence greater than humans. That's certain.
04-02-2024 , 10:06 AM
Quote:
Originally Posted by L0LWAT
Technically speaking, with regard to computing philosophy, "strong AI" does imply consciousness or sentience, while "weak AI" does not.

There are several companies working to develop strong AI or AGI, but there's no timeline or even potential goalpost.
definitions are kinda blurred recently, especially because so much is changing fast and people from different walks of life are part of the process in AI, so they might not all share the same definitions.

a significant number of people use AGI to mean "general human-level AI" but i find it useless, because if we ever reach that, it lasts maybe a week or a month before it gets above human capabilities anyway, so what's the point of giving a name to such a fleeting moment in AI development?

moreover for some tasks we already have AI being better than humans, sometimes *a lot* better than the best human, so if/when we ever reach AGI, it will be at human level in some tasks and super-human in many more tasks.

regardless of further details we almost all agree it has to be at least at human level at everything humans are decent at, and that might be trickier than we expect (some tasks might be far harder to "learn" with the current approaches, at least at the general level).

that's why timelines are kinda hard to give. AI can already digest Shakespeare's body of work fairly well, and will probably be better than the best human expert on Shakespeare by 2030, while we are very far from having a robot capable of making a bed, cooking a meal, changing a lightbulb and so on just by watching a YouTube video teaching that.

maybe the general intelligence required for some of the physical coordination most of us can do effortlessly can't even come from the training we do with LLMs, no matter the computing power spent on it. or it can, but it's more complicated than other kinds of intelligence to reproduce.

but please think about what I mentioned before, all it takes for a fundamental change in society and human history forever is an AI stronger than human intelligence at the tasks required to improve AI capabilities.

An AI better at pushing Moore's law further, and better at optimizing training algos.

we could reach it, say, in 2028 or 2033, and then every month could bring a breakthrough
04-02-2024 , 11:58 AM
Quote:
Originally Posted by d2_e4
Is there any societal I'll that leftists are not responsible for?
Idk is they're?
04-02-2024 , 12:01 PM
Quote:
Originally Posted by Luckbox Inc
Idk is they're?
****ing autocorrect. I don't know how you guys type out essays on phone. I've been posting on phone for a while and it's ****ing soul destroying.
04-02-2024 , 04:46 PM
Turn off autocorrect?

That will avoid you're problem that we' udnerstrood you anyway
04-02-2024 , 05:13 PM
My writing style returns A.I. vibes from the simulator. I take that as meaning I will exist across generations.
04-06-2024 , 10:40 AM
Source in Italian, not sure if there is an English one yet; try a translator, because this is science-fiction-level real life, currently.

The Italian IRS, with the fiscal military police, captured a gang of high-level fraudsters who used a quantum-computer-fueled AI to generate impossible-to-detect fake signatures on documents to get a huge amount of EU funds (grants), faking the existence of projects that never happened.

The military police used its own proprietary AI to cut through VPNs and an apparently inextricable layer of fake companies related to this big fraud.

https://www.ilgazzettino.it/nordest/...o-8037452.html
04-06-2024 , 02:48 PM
That's either got carried away or been lost in translation. From the FT:

Quote:
The EPPO and Italian authorities said the scheme was based on an elaborate network of shell companies, including in Slovakia, Romania and Austria. It said these generated fake corporate balance sheets by using overseas cloud servers, crypto assets and artificial intelligence to “conceal and protect” their activities.
https://www.ft.com/content/5e355251-...8-62a800c28bd7
04-06-2024 , 04:42 PM
Quote:
Originally Posted by chezlaw
That's either got carried away or been lost in translation. From the FT


https://www.ft.com/content/5e355251-...8-62a800c28bd7
Well, according to multiple Italian newspapers, they used quantum computers to power the AI which generated the fake balance sheets, AND they mimicked signatures with actual robot-controlled pencils on actual paper documents lol
04-06-2024 , 04:46 PM
Quote:
Originally Posted by Luciom
Well, according to multiple Italian newspapers, they used quantum computers to power the AI which generated the fake balance sheets, AND they mimicked signatures with actual robot-controlled pencils on actual paper documents lol
Seems a bit overkill, unless you need these signatures to pass some sort of forensic examination. Forging a signature to pass a cursory check by inspection is pretty easy.
04-06-2024 , 04:51 PM
I did almost post that they didn't need a quantum-computer-fueled AI. They could just ask d2 to do it.
04-06-2024 , 05:07 PM
Quote:
Originally Posted by d2_e4
Seems a bit overkill, unless you need these signatures to pass some sort of forensic examination. Forging a signature to pass a cursory check by inspection is pretty easy.
the forgeries passed the deposited bank specimen test
04-06-2024 , 07:34 PM
Quote:
Originally Posted by Luciom
the forgeries passed the deposited bank specimen test
Yeah, that's basically someone looking at it side by side and comparing it. You really don't need an AI and a robot to forge a signature that passes that test - source: people have been doing it for centuries.
04-18-2024 , 03:12 PM
As many of you probably remember, the OpenAI board famously sacked Sam Altman last fall, only to rehire him almost immediately because of the fallout. The OpenAI board was short on details as to why it fired Altman in the first place, saying only that it was concerned about Altman's relative lack of concern about AI safety. The rumor in Silicon Valley was that several board members were spooked by OpenAI's Q-Star project, and in particular by OpenAI's progress in developing a large language model that could do basic math with a high level of proficiency.

For reasons that I won't bother to explain in detail, large language models like ChatGPT historically have struggled with mathematical reasoning.

https://theconversation.com/why-open...ig-deal-219029

A few minutes ago, I tested ChatGPT on some basic statistical problems. It correctly solved the Monty Hall problem and Bertrand's box paradox and gave a coherent explanation of how it arrived at the answers.

It also calculated the risk that a gambler with a bankroll of $1000 would go broke before placing twenty bets at a unit size of $100.

Interestingly, for the last question, it explained the basic principles behind a brute force calculation of risk of ruin, but then stated that, because of the complexity of the calculation, it was providing an answer based on what amounted to a Monte Carlo simulation based on 100,000 scenarios.
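The Monte Carlo approach it described can be reproduced in a few lines. A minimal sketch, assuming even-money bets at a 50% win probability (the post doesn't say what edge ChatGPT assumed - both are my assumptions here), with "broke" meaning the gambler can no longer cover the next $100 bet before all twenty are placed:

```python
import random

def risk_of_ruin(bankroll=1000, unit=100, n_bets=20,
                 p_win=0.5, trials=100_000, seed=42):
    """Monte Carlo estimate of the chance the gambler goes broke
    (cannot cover the next bet) before completing n_bets wagers."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        b = bankroll
        for _ in range(n_bets):
            if b < unit:          # can't place the next bet: ruin
                ruined += 1
                break
            b += unit if rng.random() < p_win else -unit
    return ruined / trials
```

Under these assumptions the estimate comes out around 2%; with any house edge (p_win below 0.5) it climbs quickly.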
04-18-2024 , 03:35 PM
So the board tried to save us from Terminators, but TotallyNotSkynet's employees knew exactly what they wanted and who could deliver it.

Good luck, Gen Alpha. Humanity is counting on you.

AI is selling homework, nurses, and girlfriends so far. Once all the horny recent grads have those needs satisfied, it's going to be on to smiting all those people who called them incels, and that means Terminators.

WAAF.
04-19-2024 , 09:44 PM
ChatGPT seemed to struggle with the following question:

Quote:
There are 256 unique hats on a table. The hats come in four different sizes, four different colors, four different materials, and with four different logos. Jimmy has a favorite hat. There are ten people ahead of Jimmy in line. Each person ahead of Jimmy in line will select one hat and will try and choose Jimmy's favorite hat. When a person in line ahead of Jimmy selects a hat with at least one of the attributes of Jimmy's favorite, Jimmy must raise his hand. Each person in line knows that, when Jimmy raises his hand, it means that the hat has at least one of the attributes of Jimmy's favorite hat. Each person in line ahead of Jimmy takes Jimmy's responses into account when choosing a hat. Each person ahead of Jimmy in line pursues an optimal strategy to enable the group of 10 to have the greatest odds of choosing Jimmy's favorite hat. What are the odds that Jimmy's favorite hat will still be on the table after all ten people ahead of Jimmy in line have chosen a hat?
I concede that the exact answer to the question is complicated, in part because I think optimal strategy will vary depending on the information that people in line receive based on previous selections.

d2, uke, Sklansky,

How would you calculate the answer to this question?

Last edited by Rococo; 04-19-2024 at 10:24 PM.
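The bookkeeping, if not the optimal strategy, is easy to set up in code. A sketch under a deliberately naive strategy - each person picks any hat still consistent with Jimmy's hand-raises - just to show how the candidate set collapses; the hat encoding, the strategy, and the trial count are all my assumptions, not an answer to the optimal-strategy question:

```python
import itertools
import random

def shares_attr(a, b):
    # Jimmy raises his hand iff the chosen hat shares >= 1 attribute with his favorite
    return any(x == y for x, y in zip(a, b))

def favorite_survives(trials=5_000, seed=1):
    """Estimate P(favorite hat still on the table after 10 picks)
    under a naive 'pick any still-consistent hat' strategy."""
    rng = random.Random(seed)
    # 256 hats, each a tuple (size, color, material, logo), values 0-3
    hats = list(itertools.product(range(4), repeat=4))
    survived = 0
    for _ in range(trials):
        favorite = rng.choice(hats)
        candidates = set(hats)    # hats consistent with all feedback so far
        found = False
        for _ in range(10):
            pick = rng.choice(sorted(candidates))
            if pick == favorite:
                found = True
                break
            raised = shares_attr(pick, favorite)
            # keep only hats that would have produced the observed hand-raise
            candidates = {h for h in candidates
                          if h != pick and shares_attr(pick, h) == raised}
        if not found:
            survived += 1
    return survived / trials
```

The real question is which pick maximizes information per hand-raise; that choice would replace the naive `pick` line.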