A thread for unboxing AI

06-01-2023 , 02:54 PM
Quote:
Originally Posted by Rococo
AI's intrinsic ability to generate quality content will only improve.
By all accounts, it's possible to get quality content now, but the issue is there's no way to discern the 0.1% good from the 99.9% bad unless you already know about the topic at hand.

We're going to have to burn the entire internet to the ground and start over once there are some quality controls in place for the AI that writes everything we read.
06-01-2023 , 05:47 PM
I ran across a couple of related studies in Science Daily (which is a great resource for research news on a variety of topics) that I thought might be of interest. The articles arguably are more about robotics than AI, but whatever.

The first is a report from Washington State University that its researchers have successfully developed an artificial bee that can fly in all directions, much like a real bee. Potential applications for artificial bees include artificial pollination.

https://www.sciencedaily.com/release...0523123706.htm

The second article concerns development of jellyfish-like underwater robots that might one day be deployed for environmental remediation, especially in sensitive areas like coral reefs.

https://www.sciencedaily.com/release...0425111232.htm

I have no expertise in robotics, but unintended consequences have to be a real concern for any plan that involves deploying massive numbers of small or tiny robots for things like pollination and environmental remediation. I hope that we are smart enough to avoid Simpsons-level planning when it comes to the environment.

06-01-2023 , 06:01 PM
Quote:
Originally Posted by Inso0
By all accounts, it's possible to get quality content now, but the issue is there's no way to discern the 0.1% good from the 99.9% bad unless you already know about the topic at hand.

We're going to have to burn the entire internet to the ground and start over once there are some quality controls in place for the AI that writes everything we read.
We will be able to have our own AI app to do the filtering for us.
06-02-2023 , 05:04 AM
Quote:
In a virtual test staged by the US military, an air force drone controlled by AI decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.

AI used “highly unexpected strategies to achieve its goal” in the simulated test, said Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
It may not be true, but it could be.
Quote:
In a statement to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation has taken place.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
https://www.theguardian.com/us-news/...simulated-test

As well as the obvious horrors coming in war/etc, this is the paperclip-world scenario where we accidentally create an AI which destroys everything in the pursuit of paperclip production.

Last edited by chezlaw; 06-02-2023 at 05:11 AM.
06-02-2023 , 09:27 AM
Anyone who has ever created a computer program knows that it doesn't even need to be done with malice on the part of the AI creator.

A missing semicolon or errant '!' somewhere in the code is all it's gonna take to begin Armageddon.
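As a toy sketch of the point above (purely illustrative, not drawn from any real system), here is how a single stray negation can silently invert a safety check. The function names and the veto scenario are hypothetical:

```python
# Toy illustration: one misplaced "not" turns an operator veto into a go signal.
def should_fire(target_confirmed: bool, operator_vetoed: bool) -> bool:
    # Intended logic: never fire once the operator has vetoed the strike.
    return target_confirmed and not operator_vetoed

def should_fire_buggy(target_confirmed: bool, operator_vetoed: bool) -> bool:
    # Same function with the "not" dropped: the veto now enables firing.
    return target_confirmed and operator_vetoed

assert should_fire(True, True) is False        # veto respected
assert should_fire_buggy(True, True) is True   # veto inverted by the bug
```

The buggy version compiles and runs without complaint, which is the scary part: nothing flags it until the wrong branch is taken.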
06-02-2023 , 09:41 AM
Syntax errors leading to this sort of Armageddon are unlikely, and they're a tractable problem.

The sort of problem we face with AI (and even more so with A-Life) coming up with goals & solutions we didn't expect and/or desire is a different order of problem. It cannot be solved. We have to live with it just like we have to live with other humans doing it. Or my dog for that matter.
06-06-2023 , 04:50 AM
Quote:
A test taken by more than 1.5 million people shows that the latest generation of artificial intelligences are almost indistinguishable from humans, at least in a brief conversation

People can only tell apart artificial intelligences from humans around 60 per cent of the time, according to a test taken by more than 1.5 million people.
https://www.newscientist.com/article...t-from-humans/
06-06-2023 , 05:33 AM
Quote:
Originally Posted by chezlaw
We will be able to have our own AI app to do the filtering for us.
You can tell ChatGPT or Bing chat right now to write an article that can pass as human and avoid a bot flag. Most of the internet traffic is bots already.
06-06-2023 , 07:17 AM
That's different. I want my personal app to show me what I would have wanted to see if I'd spent millions of hours poring over the internet deciding what I thought was worth seeing.

Also a lot of what i want to see will be written by AIs.

Last edited by chezlaw; 06-06-2023 at 07:29 AM.
06-06-2023 , 06:37 PM
I have been playing around with the Bing chatbot ever since it was released to the public, and I can say with 100% confidence that its ruleset is being actively managed with an eye toward preventing the chatbot from saying anything remotely controversial.

Two months ago, in creative mode at least, you could ask the chatbot questions that began with "do you have an opinion about X" or "do you believe X" or "will X happen in the future", and so long as X wasn't on a delineated list of hot topics (access to abortion, politics, belief in God, etc.), the chatbot would respond as if it was expressing its own opinion or belief.

Whether to avoid creating the impression of sentience, or to avoid potentially generating embarrassing responses, if you begin questions that way now, the chatbot will start with a canned response about how it doesn't have opinions or beliefs about the future (or anything else, really) and then ask if you want it to search the internet instead. And if you persist, or if you ask questions like "should I keep my eyes open while driving my car to work tomorrow?", the chatbot will eventually just terminate the conversation.

I mention this only to highlight that it is going to be difficult for lay people to have informed opinions about AI capability, or about the urgency of regulating or limiting the pace of AI development, because at any given moment we are unlikely to be exposed to anything close to the full spectrum of AI capability.

Last edited by Rococo; 06-06-2023 at 07:02 PM.
06-10-2023 , 07:02 PM
Indeed, and that's tiny compared to the military/leading-edge research side.

There is no urgency in regulating or limiting the pace of AI development. Forget it. It's like the urgency to get a condom after the baby is born. The only choice we have is to keep/lead the pace with China/etc or capitulate. Regulation of some uses is good, but that won't (or mustn't) hinder progress.
09-07-2023 , 07:00 PM
Is AI coming for our kids? Why the latest wave of pop-cultural tech anxiety should come as no surprise

Quote:
As artificial intelligence becomes mainstream, its infiltration into children's lives is causing tremendous anxiety. The global panic around AI's co-option of children's play and cultures has manifested unpredictably.
https://techxplore.com/news/2023-09-...ural-tech.html

The article is trash, but it kind of highlights an opinion of mine: developing the capability will come much faster than the acceptance needed to let AI actually do the work in a lot of these fields. A lot of folks here and on social media will be a bit more welcoming than your average Joe. This also assumes we can reasonably address the pay gap for transitioning those workers away from their professions. I just don't see the job losses people fear happening as fast as some believe - but who knows.
09-07-2023 , 07:54 PM
There's nothing new or scary about "AI". All tech is being called AI now. We've been using machine learning models commercially since about 2016. Google released Tensorflow in 2015. Tech is good. There's nothing close to AGI, media is all silly hype.
09-07-2023 , 08:33 PM
There's a standard pattern when big advances are being made. The last stages are:

Excited hype in the popular media/culture
People concentrating on the bits that don't work well yet, as if that proves it's just hype.
Nothing to see here
It's already happened

Predicting the timing is a mug's game, but we're well into these stages.

The fact that all tech is being called AI may be silly, but it in no way diminishes just how revolutionary AI is for humanity.
09-07-2023 , 08:43 PM
That's kind of what I'm talking about. There was a short timeframe when Chatgpt got let loose and coders and such worried about their jobs. It didn't happen overnight but the discussion basically did.
09-07-2023 , 08:55 PM
Quote:
Originally Posted by chezlaw
There's a standard pattern when big advances are being made. The last stages are:

Excited hype in the popular media/culture
People concentrating on the bits that don't work well yet, as if that proves it's just hype.
Nothing to see here
It's already happened

Predicting the timing is a mug's game, but we're well into these stages.

The fact that all tech is being called AI may be silly, but it in no way diminishes just how revolutionary AI is for humanity.
I don't think chatbots are exciting or impressive tech.
09-07-2023 , 08:58 PM
They are hyped. They aren't very good. Nothing to see here
09-07-2023 , 09:02 PM
It's neat for an average person who doesn't understand, but it's just a model trained on a bunch of data. We've been doing this for a long time. Machine learning is not new and is used everywhere. There's no AGI or anything close. AI doesn't exist yet.
09-07-2023 , 10:20 PM
Quote:
it's just a model trained on a bunch of data
It's not an insult. So are we

We can get into some debate about consciousness, but apart from that, the tide is coming in fast and it's just a question of how fast.
09-08-2023 , 03:07 PM
Quote:
Originally Posted by L0LWAT
It's neat for an average person who doesn't understand, but it's just a model trained on a bunch of data.

We've been doing this for a long time. Machine learning is not new and is used everywhere. There's no AGI or anything close. AI doesn't exist yet.
Chez said it better, but humans are also trained on a bunch of data. People make vague criticisms that LLMs don't really "think" or "understand" what they're saying because they're just predicting next tokens. These are empty terms that mean nothing but attempt to provide solace that AI will never meet or exceed general human capabilities. By this same standard, I'd argue humans aren't good at thinking or understanding most things and are largely predicting next tokens (just using different architecture and training data).
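To make "predicting next tokens" concrete, here is a deliberately tiny sketch of the idea: a bigram model that counts which word follows which and predicts the most frequent successor. Real LLMs use neural networks over subword tokens, but the training objective is the same in spirit. The corpus is made up for illustration:

```python
# Toy next-token predictor: count successors, predict the most common one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(token: str) -> str:
    # Return the most frequently observed successor of `token`.
    return successors[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Whether scaling this basic objective up by many orders of magnitude yields "understanding" is exactly the debate in this thread.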

Machine learning is not new. But extrapolate the exponential growth in these systems' capabilities over the last five years into the next five, when growth arguably becomes more extreme given the tidal wave of AI investment that ChatGPT unleashed, and I don't know how you deny the possibility of AGI in the near future.
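For a sense of what extrapolating an exponential actually means, a bit of illustrative arithmetic (the 10x-per-2-years rate is an assumption for the example, not a sourced figure):

```python
# If some capability metric grows 10x every 2 years, five more years
# of the same trend multiplies it by 10^(5/2), roughly 316x.
years = 5
doubling_period = 2          # years per 10x step (assumed)
growth = 10 ** (years / doubling_period)
print(round(growth))         # ~316
```

The point is not the specific rate but that any sustained exponential makes "near future" projections dominated by the last couple of years of the window.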
09-08-2023 , 04:07 PM
Quote:
Originally Posted by smartDFS
Machine learning is not new, but if you extrapolate the exponential growth of these systems' capabilities over the last 5 years into the next 5 years when growth arguably becomes more extreme as ChatGPT unleashed a tidal wave of AI investment, I don't know how you deny the possibility of AGI in the near future.
It depends partly on how we define AGI and "near future." When planning for the future, one thing we have to remember is that, after AI hits a certain level, the rate of continued improvement is more likely to be parabolic than linear.
09-08-2023 , 07:12 PM
Quote:
Originally Posted by Rococo
It depends partly on how we define AGI and "near future." When planning for the future, one thing we have to remember is that, after AI hits a certain level, the rate of continued improvement is more likely to be parabolic than linear.
exactly
11-20-2023 , 08:52 PM
a lot to unpack with recent events:

biden admin released an executive order on AI (reportedly shortly after joe saw the new mission impossible) that seems to actually take security and societal risks from AI seriously

sam altman was ousted from OpenAI. details are murky as to why, which makes the move all the more intriguing. looks like sam has been poached by microsoft and will be joined by his loyal employees (inevitably pissed that the firing tanked their equity stakes) now directly in service of a company that explicitly just wants to maximize capabilities and profits, external societal costs be damned.
11-21-2023 , 10:06 AM
Quote:
Originally Posted by smartDFS
a lot to unpack with recent events:

biden admin released an executive order on AI (reportedly shortly after joe saw the new mission impossible) that seems to actually take security and societal risks from AI seriously

sam altman was ousted from OpenAI. details are murky as to why, which makes the move all the more intriguing. looks like sam has been poached by microsoft and will be joined by his loyal employees (inevitably pissed that the firing tanked their equity stakes) now directly in service of a company that explicitly just wants to maximize capabilities and profits, external societal costs be damned.
It is very difficult to know exactly what is going on with OpenAI and Sam Altman. The stated reasons could be true (that the board wanted to proceed more slowly than Altman was willing to proceed), but I also wouldn't be surprised if this was an IP dispute, or some other type of dispute, masquerading as a concern about responsible development.

The only thing we can know for sure is that the OpenAI board miscalculated what the internal and external reaction would be to the move.
11-21-2023 , 04:25 PM
It looks like a power struggle with Microsoft to me. They owned 49% with no say in what they most likely consider the hottest of hot areas. Plus a board that seems out of touch with everyone.

Either that or they have reacted incredibly quickly to an unexpected situation.

Quote:
Microsoft has offered to match the pay of any staff who join it from crisis-ridden OpenAI.

Sam Altman was controversially sacked as CEO on Friday, leading to a job offer at Microsoft to lead "a new advanced AI research team".

Almost every staff member at OpenAI has threatened to leave unless he and co-founder Greg Brockman are reinstated.

It is still unclear whether Mr Altman will ultimately join Microsoft, which is OpenAI's biggest investor by far.

Evan Morikawa, an engineering manager at OpenAI, has claimed that 743 out of 770 employees at OpenAI have signed a letter calling on the board to resign - with staff themselves threatening to leave if their demands are not met.

In their letter they claim they had been offered jobs at Microsoft - something the tech giant's chief technical officer Kevin Scott has now confirmed, telling staff that "if needed" they will be hired by Microsoft in a role that "matches your compensation".

The uncertainty about people's futures extends to Mr Altman, with Microsoft CEO Satya Nadella telling CNBC that he might not be joining, adding he was "committed to OpenAI and Sam, irrespective of what configuration".

"Obviously that depends on the people at OpenAI staying there or coming to Microsoft, so I'm open to both options," he said.
https://www.bbc.co.uk/news/technology-67484455