A thread for unboxing AI

03-05-2024 , 11:43 PM
I feel like that last one is fake fake.

The AI doesn't just spit out all black people unless you ask it to. Google admitted that they had to force additional criteria into the prompts to avoid too many white people, but either the AI saw watermelon and immediately decided to reinforce stereotypes, or someone is yanking our chain here.


Edit: I just tried it, and received this response:

"We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."


It will still create images, though. I asked it for a happy pancake and it complied.

03-05-2024 , 11:50 PM
i did consider the possibility (especially because that one lacks the prompt in the screenshot), then considered how i know whether any of those were real, but decided it was fine because they took gemini down (for people at least) and issued a mea culpa, so something must've been wrong. but it would be pretty rich if i warned against the dangers of fakes in one post and posted fakes in the next
03-06-2024 , 06:44 PM
Quote:
Originally Posted by smartDFS
+1 to inso0, one of the scariest consequences is video footage, personal testimonies, etc. become worthless and we can't trust anything. people no longer have to believe any footage on the nightly news if they don't want to.
I agree with this. I just think that the consequences will be more profound in the realm of public opinion than they will be in evidentiary rulings in court proceedings.

Evidentiary rulings aside, it could affect the weight that juries give to photographic and video evidence.
03-07-2024 , 04:07 PM
That last one can't be real. AI has current racist stereotypes trumping old-school racism.
03-07-2024 , 06:48 PM
Quote:
Originally Posted by AquaSwing
That last one can't be real. AI has current racist stereotypes trumping old-school racism.
Not Mistral, which is open source in its basic version (you can actually download it and run it locally).
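
For anyone curious, here's a minimal sketch of what "run it locally" can look like in practice. It assumes the Ollama runtime (one common way to serve open models like Mistral) is installed, has already pulled a Mistral model, and is listening on its default port; the endpoint and JSON shape follow Ollama's documented REST API and may change.

Code:
import json
import urllib.request

# Query a locally running Mistral model through Ollama's REST API.
# Assumes `ollama pull mistral` has been run and the server is on
# its default port, 11434.
payload = json.dumps({
    "model": "mistral",
    "prompt": "In one sentence, what is an open-weights model?",
    "stream": False,  # ask for a single JSON object, not a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])

No GPU cluster, no API key; the smaller Mistral variants run acceptably on a mid-range consumer machine.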
03-12-2024 , 07:35 PM
Gladstone apparently was commissioned by the State Department to assess catastrophic risk and safety/security risks associated with AI.

https://time.com/6898967/ai-extincti...-risks-report/

https://time.com/6898961/ai-labs-saf...ncerns-report/

https://www.gladstone.ai/action-plan

I don't know much about Gladstone, and I'll leave it to others to decide whether the report is alarmist. My main takeaway is that their proposed prophylactic measures are likely to be ineffective. When I think about controlling potential catastrophic risks associated with AI, I think about two other things that could totally **** up the world -- nuclear weapons and anthropogenic climate change.

Nuclear weapons proliferation isn't easy to control, but it's a lot easier to control than anthropogenic climate change for obvious reasons. There is a finite supply of weapons grade nuclear material on the planet. It is relatively difficult to come by and relatively easy to track. Developing effective delivery systems is fraught and expensive. These factors allow a relatively small number of stable nuclear powers to make it difficult (although not impossible) for other countries or private actors to join the nuclear club. Also, spending untold amounts of money on weapons that you likely will never use isn't worth it for a lot of countries.

Climate is tougher. Global economic competition tends to be zero sum. Every country contributes to anthropogenic climate change. No country can solve climate problems solely through independent action. And in many cases, the more destructive path for the climate in the long term may be the more profitable path in the near term. These factors provide a strong incentive to either minimize the impact of problems or deem the problems unsolvable for any single state actor (or company). But we at least can hold out hope that the path that is more impactful to the climate won't always be the more profitable path. In other words, we can at least hope that our climate goals and our economic goals will eventually converge more than they do now.

Ensuring the responsible proliferation of AI seems more like addressing anthropogenic climate change than it does like addressing nuclear weapons proliferation. The economic incentives for frontier AI labs to push the envelope are enormous. And everyone is certain (as I am) that some governments and private groups will develop AI as quickly as possible, so there is little incentive for any group to proceed cautiously and incrementally. And the chances that profit incentives will naturally align with reasonable prudential concerns seem slight.

It's a worry.

Last edited by Rococo; 03-12-2024 at 09:07 PM.
03-12-2024 , 08:07 PM
No matter how high or low you assess AI risk(s) to be, I agree there is absolutely nothing we can do to reduce them meaningfully.

Why?

Because there are open downloadable models and it's very cheap to run them. And they get better every day and will get better still.

I disagree, though, that global warming is an existential risk (even in the worst-case scenario, humanity is more than fine) or that it has anything to do with AI risk at all.

Global warming requires the actions of billions of people over decades (whether to happen, to increase, or to be reduced/eliminated).

AI risk is the purest rogue risk you can think of: 50 people with a decent amount of money (nothing compared to any mid-size project of a national defense department), some breakthroughs, and some "luck", and it's GG.

Whether it's an AI that hacks vital infrastructure in Western countries, or one that helps you develop an airborne Ebola or whatever, we can list several realistic scenarios far more damaging than RCP 8.5.

What if the AI cracks RSA encryption?

Btw, it's not even necessarily only about profit: some people believe they are creating god, or at least trying to, and they might not even be too far off. They can get their name in history and already have full **** you money.

Plus of course there are governments.

Maybe you can disagree about the extent of the risk, put actual existential risk at extremely low levels, whatever, but the fact is that high or low, it's a fully ineliminable risk.

So let's just go all-in on AI and disregard the risks; mitigation is a waste of time, and we only handicap ourselves in case some bad actor gets there first.
03-12-2024 , 08:48 PM
Just on the point of cracking asymmetric encryption - this would rely on a new efficient algorithm for integer factorisation, finding discrete logarithms, or finding rational points on elliptic curves, depending on the specific cryptographic implementation of the asymmetric key protocol (RSA is one, and it uses integer factorisation). I believe that the consensus within the scientific community is that no such algorithm exists other than for quantum computers, but its non-existence has not been proven.
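
To make the factoring dependence concrete, here's a minimal Python sketch of textbook RSA with deliberately tiny primes (61 and 53, chosen so the numbers fit on screen; real moduli are 2048+ bits). Whoever can factor the public modulus n reconstructs the private key on the spot, which is why an efficient factoring algorithm would break RSA outright.

Code:
def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    # Modular inverse of a mod m (exists because gcd(a, m) == 1)
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Toy key generation
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # Euler's totient, kept secret
e = 17                       # public exponent
d = modinv(e, phi)           # private exponent

msg = 42
cipher = pow(msg, e, n)      # encrypt: c = m^e mod n

# An attacker who can factor n rebuilds the private key directly:
f = next(k for k in range(2, n) if n % k == 0)
d_stolen = modinv(e, (f - 1) * (n // f - 1))
assert pow(cipher, d_stolen, n) == msg   # decrypts with the stolen key

Trial division works here only because n is tiny; for real key sizes there is no known efficient classical algorithm, which is the whole bet.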
03-12-2024 , 09:04 PM
Quote:
Originally Posted by Luciom
Global warming requires the actions of billions of people for decades (both to happen, increase or get reduced/eliminated).
I never said that climate change was a literal existential risk to humans on the order of the most severe meteor strikes in Earth's history. For that matter, it is highly unlikely that a true nuclear war would result in the complete extinction of humans.
03-12-2024 , 09:37 PM
Quote:
Originally Posted by d2_e4
Just on the point of cracking asymmetric encryption - this would rely on a new efficient algorithm for integer factorisation, finding discrete logarithms, or finding rational points on elliptic curves, depending on the specific cryptographic implementation of the asymetric key protocol (RSA is one, and it uses integer factorisation). I believe that the consensus within the scientific community is that no such algorithm exists other than for quantum computers, but its non-existence has not been proven.
The algo exists for quantum computing (Shor's algo), and the idea would be exactly to use AI to scale quantum computer development to reach thousands of qubits at a cheap cost.

Ofc maybe that's never going to happen, but it's not a "meteor swarm hits the planet" kind of per-year probability; it's quite a bit higher imo (and growing over time).

Yes, we can encrypt in quantum-resistant ways (or so we think), but are we doing it?
03-12-2024 , 09:39 PM
Quote:
Originally Posted by Luciom
The algo exists for quantum computing (Shor's algo), and the idea would be exactly to use AI to scale quantum computer development to reach thousands of qubits at a cheap cost.

Ofc maybe that's never going to happen, but it's not a "meteor swarm hits the planet" kind of per-year probability; it's quite a bit higher imo (and growing over time).

Yes, we can encrypt in quantum-resistant ways (or so we think), but are we doing it?
I think the best that's been done with Shor's algorithm in practice so far is to factor 15, but I get your point.
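
For anyone wondering what that factor-15 demo actually involves, here's a minimal Python sketch of the classical reduction at the core of Shor's algorithm: factoring n reduces to finding the multiplicative order r of a base a modulo n. The quantum Fourier transform is what finds r efficiently; the brute-force loop below stands in for it, and that loop is exactly the part that's exponential on classical hardware. The base a = 7 is an assumption that happens to work for n = 15.

Code:
from math import gcd

def factor_via_period(n, a):
    # Shor's reduction: find the order r of a mod n, i.e. the smallest
    # r with a^r = 1 (mod n). A quantum computer gets r from the QFT;
    # this classical loop is exponential in the bit length of n.
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)   # lucky guess: a shares a factor
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    if r % 2 != 0:
        return None                        # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                        # trivial root: retry with another a
    return gcd(y - 1, n), gcd(y + 1, n)

print(factor_via_period(15, 7))   # -> (3, 5), the published hardware demo

With n = 15 and a = 7 the order is 4, so gcd(7^2 - 1, 15) = 3 and gcd(7^2 + 1, 15) = 5 fall out immediately; scaling that same routine to a 2048-bit modulus is where the thousands of error-corrected qubits come in.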
03-12-2024 , 09:41 PM
Quote:
Originally Posted by d2_e4
I think the best that's been done with Shor's algorithm in practice so far is to factor 15, but I get your point.
In general I do think airborne Ebola (or equivalent) should be the biggest AI risk, but maybe I am biased by the recent pandemic.
03-12-2024 , 10:07 PM
Quote:
Originally Posted by Luciom
In general I do think airborne Ebola (or equivalent) should be the biggest AI risk, but maybe I am biased by the recent pandemic.
I suspect that there are other illnesses that would be much more dangerous than airborne Ebola. Ebola is incredibly incapacitating. If you have symptoms, you are unlikely to be traveling without assistance, which reduces the chances of widespread transmission. And a significant number of people who contract Ebola die relatively quickly, although death rates obviously would be lower in countries that have better healthcare than Sudan, DRC, and other areas where there have been Ebola outbreaks.

Last edited by Rococo; 03-12-2024 at 10:20 PM.
03-12-2024 , 10:14 PM
Quote:
Originally Posted by Rococo
There are many illnesses that would be much more dangerous than airborne Ebola. Ebola is incredibly incapacitating. If you have symptoms, you are unlikely to be traveling without assistance, which reduces the chances of widespread transmission. And a significant number of people who contract Ebola die relatively quickly, although death rates obviously would be lower in countries that have better healthcare than Sudan, DRC, and other areas where there have been Ebola outbreaks.
Ye but it's the perfect weapon for permanent panic anyway.

How do you prevent it from being spread in airports, train stations, metro stations and so on?

It's like being able to do terrorist attacks at any time, anywhere, with no defense available, and very low cost.

I am not thinking airborne ebola -> human extinction.

Rather, pretty soon very nasty terrorist attack tools will be available to anyone, or could be.
03-12-2024 , 10:21 PM
Quote:
Originally Posted by Luciom
Ye but it's the perfect weapon for permanent panic anyway.

How do you prevent it from being spread in airports, train stations, metro stations and so on?

It's like being able to do terrorist attacks at any time, anywhere, with no defense available, and very low cost.

I am not thinking airborne ebola -> human extinction.

Rather, pretty soon very nasty terrorist attack tools will be available to anyone, or could be.
I certainly would panic if I contracted Ebola.
03-12-2024 , 11:34 PM
Quote:
Originally Posted by Luciom
What if the AI cracks RSA encryption?
It's when, not if. The day is called Q-Day. There are new encryption methods to cope.
03-12-2024 , 11:38 PM
Quote:
Originally Posted by Rococo
Gladstone apparently was commissioned by the State Department to assess catastrophic risk and safety/security risks associated with AI.

https://time.com/6898967/ai-extincti...-risks-report/

https://time.com/6898961/ai-labs-saf...ncerns-report/

https://www.gladstone.ai/action-plan

I don't know much about Gladstone, and I'll leave it to others to decide whether the report is alarmist. My main takeaway is that their proposed prophylactic measures are likely to be ineffective. When I think about controlling potential catastrophic risks associated with AI, I think about two other things that could totally **** up the world -- nuclear weapons and anthropogenic climate change.

Nuclear weapons proliferation isn't easy to control, but it's a lot easier to control than anthropogenic climate change for obvious reasons. There is a finite supply of weapons grade nuclear material on the planet. It is relatively difficult to come by and relatively easy to track. Developing effective delivery systems is fraught and expensive. These factors allow a relatively small number of stable nuclear powers to make it difficult (although not impossible) for other countries or private actors to join the nuclear club. Also, spending untold amounts of money on weapons that you likely will never use isn't worth it for a lot of countries.

Climate is tougher. Global economic competition tends to be zero sum. Every country contributes to anthropogenic climate change. No country can solve climate problems solely through independent action. And in many cases, the more destructive path for the climate in the long term may be the more profitable path in the near term. These factors provide a strong incentive to either minimize the impact of problems or deem the problems unsolvable for any single state actor (or company). But we at least can hold out hope that the path that is more impactful to the climate won't always be the more profitable path. In other words, we can at least hope that our climate goals and our economic goals will eventually converge more than they do now.

Ensuring the responsible proliferation of AI seems more like addressing anthropogenic climate change than it does like addressing nuclear weapons proliferation. The economic incentives for frontier AI labs to push the envelope are enormous. And everyone is certain (as I am) that some governments and private groups will develop AI as quickly as possible, so there is little incentive for any group to proceed cautiously and incrementally. And the chances that profit incentives will naturally align with reasonable prudential concerns seem slight.

It's a worry.
It can't be stopped - defense alone dictates that it can't be stopped. Its use can be regulated; for example, governments could require face recognition/etc tech not to be racist before it can be deployed. This sort of good regulation will speed up development more than slow it.

AI is dangerous. Pretending we can prevent it would be far more so.

I agree with Musk that we mustn't let AI be owned.

Last edited by chezlaw; 03-12-2024 at 11:46 PM.
03-12-2024 , 11:40 PM
Quote:
Originally Posted by chezlaw
It's when, not if. The day is called Q-Day. There are new encryption methods to cope.
I think you are confusing quantum computing with AI. There are developments being made in leaps and bounds in both fields, but they are (fairly) independent of one another.
03-12-2024 , 11:45 PM
Quote:
Originally Posted by d2_e4
I think you are confusing quantum computing with AI. There are developments being made in leaps and bounds in both fields, but they are (fairly) independent of one another.
Yeah, I was referring to quantum.

I include robotics in the topic as well, if that helps. And 3D printing.
03-21-2024 , 06:02 PM
Who is Neuralink's first ever human trial patient? Former athlete Noland Arbaugh, 29, was left paralyzed after a diving accident at a children's camp eight years ago

Quote:
A paraplegic man has been able to play a game of chess by only using his mind thanks to a brain chip created by Elon Musk's company Neuralink.

The device, which has been stitched into Noland Arbaugh's brain, has allowed him to control a computer cursor and play video games just by thinking.

'See that cursor on the screen? That's all me... it's all brainpower,' said the beaming 29-year-old as he controlled the computer from his wheelchair.

The chip's recent success is an incredible development and has strengthened beliefs among experts that the technology could revolutionise care for the disabled.
https://www.msn.com/en-us/health/oth...bbe57452&ei=27
03-31-2024 , 12:49 AM
Quote:
Originally Posted by formula72
I wonder how much of the "human spirit" will play a role in what people want in movies and music - or with a lot of other things, really. AI could most certainly replicate Elton John's voice and create a never-ending stream of new albums that sound like him, but all that's going to do is make folks appreciate Elton John more.

People also become fans of those they can relate to and sympathize with, and no one is really going to start idolizing the AI robot that won best supporting actor or that can play the drums the best. I can see some potential pushback in a lot of areas like that.
I think this goes back to the sex bot.

Humping something that looks like a pretty woman, no matter how convincing, really wouldn't do much for me. I want a partner who is actually experiencing pleasure and some other things.

It varies by individual though. Like, some people don't enjoy sex with prostitutes for similar reasons. Others don't care.

Some similar stuff. For me a cgi car chase can be passable but never as good as real stunt work.

I can enjoy electronic music to a point. Watching someone press play will never even approach watching world class musicians perform.

Of course, some prefer edm, cgi and even prostitution.

Is this because I'm old and/or not conditioned to these things? Maybe in part. But I think there is more to it.

Part of it is also empathy, I think. Like, seeing another person with jaw-dropping musical skill brilliantly expressing themselves is cool because you understand that they are pushing the boundaries of human capability. And you feel what they feel. You connect with them, or at least think you do.

Maybe some people will never really embrace AI generated experiences because of this. Maybe we are on a path to erasing or reducing things like empathy and curiosity in favor of like, a passive, solipsistic consumerism.
04-01-2024 , 02:48 PM
Quote:
Originally Posted by ES2
Is this because I'm old and/or not conditioned to these things? Maybe in part. But I think there is more to it.
"Hey you kids, get off my lawn!" - random Babylonian dude named Belshazzar, 482 BC

This has been going on since the dawn of time.

Also, you're equating the sex robot to the blow-up dolls you and your buddies found in the alley when you were kids. The sex robot of the future will talk to you about all your favorite anime in a manner indistinguishable from a real human being, and it'll drain itself into the toilet when you're finished like a real woman does now.

You want someone to please because actual human interaction is your personal baseline for experiencing the world. That's not a guarantee for future humans. All it's going to take is COVID 2.0 and some moderate advancements in AI-assisted learning for people to freak out and forbid entire generations of children from ever leaving the house. I'm pretty sure the 90s was the last decade where kids actually played outside, unsupervised, with their friends from the neighborhood. By the 2000s we were on to cell phones and AOL Instant Messenger. All downhill from there as far as human interaction goes.

I was trying to hold out on giving my kids their own cell phones until after high school. My youngest son made it to his senior year, but I gave in and bought my daughter one right away as a freshman. It was abundantly clear by 8th grade that she would've just been a social pariah, and I'm a pragmatist.

I've heard that the dating scene is a complete and utter shitshow nowadays, which only further increases the likelihood that sex robot tech gets the investment it needs to prosper.

I'm guessing I'll have grandchildren by 2030. I'm sure I'll be back here lamenting the fact that it's impossible to find any infant toys that don't have a screen and AI-powered content that requires a subscription.

Whatever path we're on as a society is not going to end well. But then again, that's probably what Belshazzar thought as well.
04-01-2024 , 03:02 PM
I really don't think the sex robots are going to be physical robots like SICO with a skirt on in Rocky IV.

Rather, it would be more like Total Recall, where we can live the life of Hugh Hefner while in our recliners wearing a lucid-dream headset from Facebook.
04-01-2024 , 03:07 PM
Yeah, you're probably right.

See, this is my own imagination not being able to think beyond that human interaction.

"Where we're going, we don't need people." - Doc Brown, 2035.
04-01-2024 , 03:09 PM
Quote:
Originally Posted by Inso0
"Hey you kids, get off my lawn!" - random Babylonian dude named Belshazzar, 482 BC

This has been going on since the dawn of time.

Also, you're equating the sex robot to the blow-up dolls you and your buddies found in the alley when you were kids. The sex robot of the future will talk to you about all your favorite anime in a manner indistinguishable from a real human being, and it'll drain itself into the toilet when you're finished like a real woman does now.

You want someone to please because actual human interaction is your personal baseline for experiencing the world. That's not a guarantee for future humans. All it's going to take is COVID 2.0 and some moderate advancements in AI-assisted learning for people to freak out and forbid entire generations of children from ever leaving the house. I'm pretty sure the 90s was the last decade where kids actually played outside, unsupervised, with their friends from the neighborhood. By the 2000s we were on to cell phones and AOL Instant Messenger. All downhill from there as far as human interaction goes.

I was trying to hold out on giving my kids their own cell phones until after high school. My youngest son made it to his senior year, but I gave in and bought my daughter one right away as a freshman. It was abundantly clear by 8th grade that she would've just been a social pariah, and I'm a pragmatist.

I've heard that the dating scene is a complete and utter shitshow nowadays, which only further increases the likelihood that sex robot tech gets the investment it needs to prosper.

I'm guessing I'll have grandchildren by 2030. I'm sure I'll be back here lamenting the fact that it's impossible to find any infant toys that don't have a screen and AI-powered content that requires a subscription.

Whatever path we're on as a society is not going to end well. But then again, that's probably what Belshazzar thought as well.
I gave smartphones to my kids when they were 5, which is why they never gave too much importance to them and live life normally, close to how we lived it when we were kids.

The only difference is that, thanks to leftists, criminality is much higher (in most of Italy outside the big cities we literally knew everyone and could leave the house door open; not so much now, thanks to the glorious immigration), so they have less independence going around.