A thread for unboxing AI

04-19-2024 , 10:18 PM
How does it do with this oldie?

There are 100 perfectly logical green-eyed dragons on an island. A dragon cannot see the colour of its own eyes but can see the eye colour of the other dragons. The dragons have a rule that they cannot talk with each other about the colour of their eyes. Should any dragon get to know for sure that its eye colour is green, that dragon gets transformed at midnight into a sparrow. A visitor who can only speak the truth tells them that at least one of them has green eyes. What happens next?
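The classic answer: nothing visible happens for 99 nights, and on the 100th midnight all 100 dragons transform together. A minimal sketch of the induction, assuming perfect logicians and that the visitor's announcement is common knowledge:

```python
# Each green-eyed dragon sees n-1 green-eyed dragons. If there were only
# n-1, those dragons would all transform on night n-1; when that night
# passes quietly, every dragon learns its own eyes are green.
def transformation_night(n):
    seen = n - 1                       # green-eyed dragons each one sees
    possible = {seen, seen + 1} - {0}  # announcement rules out zero greens
    night = 0
    while len(possible) > 1:
        night += 1                     # another midnight passes quietly...
        possible.discard(night)        # ...ruling out "exactly night greens"
    return night + 1                   # count now certain: all transform

assert transformation_night(1) == 1    # lone dragon transforms night 1
assert transformation_night(100) == 100
```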
A thread for unboxing AI Quote
05-31-2024 , 10:13 PM
I don't believe OpenAI controls its own backend.

Also Chomsky says this doesn't lead to AI nor anything impressive:

Spoiler:


Spoiler:


Nick Szabo, the top contender for creating the most secure system on the planet, seems to agree:

Spoiler:
07-28-2024 , 08:19 PM
Bump:



Can anyone who understands programming/AI better than me walk through exactly how one "teaches" AI to be an ideological bad faith actor?

And a question for everyone: does anyone see how this can be problematic? Elon Musk is very adamant that teaching/prompting AI to lie for any reason is a very bad idea; but maybe he is just crazy and overreacting and it is no biggie. I dunno.
07-28-2024 , 08:44 PM
Don't know what in particular, but they learn the same way humans do.

It's a model of reality built on data inputs.
07-30-2024 , 12:03 PM
Quote:
Originally Posted by Dunyain
Bump:



Can anyone who understands programming/AI better than me walk through exactly how one "teaches" AI to be an ideological bad faith actor?

And a question for everyone: does anyone see how this can be problematic? Elon Musk is very adamant that teaching/prompting AI to lie for any reason is a very bad idea; but maybe he is just crazy and overreacting and it is no biggie. I dunno.
Musk recently shared an AI-crafted deepfake for political purposes, so he is not really someone to listen to on the subject.

But basically an AI is built by scouring through sets of data that are run through internal tests; depending on the result of those tests, certain connections are made stronger or weaker. You can also purposefully introduce "bias" in order to block your AI from going in certain directions. So you can make a garbage AI by having garbage data, garbage tests or garbage bias.
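The "stronger or weaker" loop described above is essentially gradient descent. A toy single-neuron sketch of the idea, not how any particular production system is built:

```python
# Minimal sketch of "connections made stronger or weaker": one artificial
# neuron trained by gradient descent on toy data. Real systems have
# billions of weights, but the update rule is the same basic idea.
def train(data, steps=1000, lr=0.1):
    w, b = 0.0, 0.0  # the "connections", initially neutral
    for _ in range(steps):
        for x, target in data:
            pred = w * x + b       # run the internal "test"
            error = pred - target  # how wrong the test came out
            w -= lr * error * x    # strengthen or weaken the connection
            b -= lr * error
    return w, b

# Toy dataset following the rule y = 2x + 1; garbage data in, garbage model out
w, b = train([(0, 1), (1, 3), (2, 5)])
```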

Ideally, this combines to make an AI that learns from experts, tests what it learns at a speed a human never could, and is ultimately controlled via oversight to stop it from making basic errors. To some extent that works; AI can deliver good stuff.

However, it doesn't always work. Bad data-sets, poorly written tests and sloppily introduced bias can, on their own or in combination, make the wrong connections strong or the right connections weak. So now you suddenly have an AI that praises mass-murdering terrorists, paints the Waffen SS as racially diverse or refuses to tell you who won the 2020 US presidential election.

Fwiw, the three above are real-world examples of AI shenanigans.
07-30-2024 , 12:52 PM
Quote:
Originally Posted by Dunyain
Bump:

Can anyone who understands programming/AI better than me walk through exactly how one "teaches" AI to be an ideological bad faith actor.
It's just a slightly more sophisticated version of the forum's ban on naughty words.

If I say **** **** **** ******* sonofabitch all you'll see is a bunch of asterisks because the site code has intercepted my input and changed the output.

If you ask AI for things "The Man" doesn't want it weighing in on, you get an, "It doesn't look like anything to me" response from the AI. They intercept your request and change the output.
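A minimal sketch of that intercept-and-replace idea. The block list and function names here are made up for illustration, not any provider's real API:

```python
# Hypothetical guardrail wrapper: the request is checked against a block
# list before the model ever sees it (names here are illustrative only).
BLOCKED_TOPICS = ["forbidden_topic"]

def guarded_reply(prompt, model):
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "It doesn't look like anything to me."  # canned refusal
    return model(prompt)  # otherwise pass the request straight through

echo_model = lambda p: f"model answer to: {p}"
print(guarded_reply("tell me about forbidden_topic", echo_model))
```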
10-03-2024 , 02:13 AM
Quote:
Originally Posted by Rococo
ChatGPT seemed to struggle with the following question:



I concede that the exact answer to the question is complicated, in part because I think optimal strategy will vary depending on the information that people in line receive based on previous selections.

d2, uke, Sklansky,

How would you calculate the answer to this question?
I have something resembling a solution to this, but would have no idea where to start even trying to prove that it's the optimal solution; those sorts of proofs are well above my pay grade.

First, assign each attribute value a digit 0-3. E.g. small = 0, medium = 1, large = 2 etc.
Then, assign each attribute a position 1-4. E.g. size = 1, material = 2 etc.

Now we can construct a 4 digit base 4 number which encodes each hat's attributes in positional notation. E.g. 2213 might mean "large, felt, Pepsi logo, green."

Now that we have our encoding, we have 256 numbers in base 4 (0000-3333). Jimmy has 1 number in mind. We are going to pick numbers and Jimmy is going to raise his hand when the number we have picked has at least one digit correct, per the conditions of the original problem.

The best I could come up with going from here is below.

Use the first 4 guesses to guess 0000,1111,2222,3333. Depending on when Jimmy raises his hand, we will know which distinct digits our number contains. We proceed on a case by case basis.

Case A: 1 distinct digit

There are 4 such numbers so this will happen 4/256 of the time.

WLOG, assume the digit is 1.

Not much to do here, the hat is coded 1111 and we are done.

Case B: 2 distinct digits

There are 84 such numbers so this will happen 84/256 of the time.

WLOG, assume the digits are 1 & 2. There are 14 such numbers.

[2^4 ways to place the two digits across the four positions, then subtract the 2 degenerate cases of 1111 and 2222]

We now introduce a "masking" method which we can use to isolate digits. For example, say we want to know whether our number starts with a 1 or a 2. We can find out in 1 guess by guessing a number whose first digit is the one we care about and whose remaining digits we know do not appear. For example, we guess 1333. If Jimmy raises his hand, we know the first digit is 1. If he does not, we know it is 2.

WLOG, assume the first digit is 1.

We have now used 5 guesses and we have 7 candidates remaining:

1112, 1121, 1122, 1211, 1212, 1221, 1222

We can use the masking method to narrow down the second digit on guess 6, the third digit on guess 7 and the fourth digit on guess 8. By guess 9 we are down to one candidate and we are done.
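This Case B line can be brute-force checked. A sketch that plays the masking guesses against all 14 candidates (assuming the repdigit guesses revealed the digits are 1 and 2) and confirms each hat is named by guess 9:

```python
from itertools import product

def hand_up(guess, secret):
    # Jimmy raises his hand iff at least one digit is in the right place
    return any(g == s for g, s in zip(guess, secret))

def solve_case_b(secret):
    """Masking strategy for Case B, assuming guesses 1-4 (the repdigits)
    showed the secret uses exactly the digits 1 and 2; the absent digit
    3 serves as the mask filler."""
    guesses = 4                      # repdigit guesses already spent
    deduced = ""
    for pos in range(4):
        probe = ["3"] * 4
        probe[pos] = "1"             # 1333, 3133, 3313, 3331
        guesses += 1
        deduced += "1" if hand_up(probe, secret) else "2"
    return deduced, guesses + 1      # final guess names the hat

# The 14 four-digit strings over {1,2} containing both digits
candidates = ["".join(p) for p in product("12", repeat=4)
              if "1" in p and "2" in p]
assert len(candidates) == 14
assert all(solve_case_b(s) == (s, 9) for s in candidates)
```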

Case C: 3 distinct digits

There are 144 such numbers so this will happen 144/256 of the time.

WLOG, assume the digits are 1,2 & 3. There are 36 such numbers.

[3^4 strings over the three digits, then subtract 3 * 14 for the case B combos and 3 for the case A combos]

We use the masking method to identify the 1st digit (this will now take 2 guesses). WLOG, assume the first digit is 1. This has left us with 12 options after 6 guesses:

1123,1132,1213,1223,1231,1232,1233,1312,1321,1322,1323,1332

If we use the masking method again for the 2nd digit, we will be left with (worst case) 5 options after 8 guesses.

We can use guess 9 to narrow down the 3rd digit to 1 of 2 possibilities using the masking method.

Guess 10 then leaves us picking at random from (worst case) 3 candidates for a 1 in 3 shot.

Case D: 4 distinct digits.

There are 24 such numbers so this will happen 24/256 of the time. [4! - each place has to have a different digit]

This is really our worst case scenario. We can't use the masking method because we have no spare digits to work with, so we can't isolate anything. I can't really see a better strategy here than picking at random. So, we have 6 guesses from 24 candidates for a 1 in 4 shot.

So, the total probability of success:

(1)4/256 + (1)84/256 + (1/3)(144/256) + (1/4)(24/256) = 55.5%. We (as the group of 10) can take an even money bet on this game and win.

I might well have missed some optimisations in case C and there may well be some strategy I haven't thought of in case D, which will give us better odds. Cases A and B seem pretty straightforward.
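The case counts and the 55.5% figure are straightforward to verify by brute force:

```python
from itertools import product

# Count the base-4 strings 0000-3333 by number of distinct digits
counts = {1: 0, 2: 0, 3: 0, 4: 0}
for s in product("0123", repeat=4):
    counts[len(set(s))] += 1

assert counts == {1: 4, 2: 84, 3: 144, 4: 24}  # Cases A-D
assert sum(counts.values()) == 256

# Worst-case win probability with the coefficients above
p = (1*4 + 1*84 + (1/3)*144 + (1/4)*24) / 256
print(f"{p:.1%}")  # 55.5%
```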

Last edited by d2_e4; 10-03-2024 at 02:43 AM.
10-03-2024 , 02:49 AM
I should add that Jimmy cannot scupper us by purposely picking a number with 4 distinct digits (our worst case) because he does not know our encoding scheme in advance. For example, if we choose to allocate 1 to small and 2 to large the resultant number may end up with 3 distinct digits, but if we do it vice versa it might end up with 2 distinct digits. In repeated trials we would randomise our encoding scheme each time so that Jimmy can't infer it from our prior guesses. No shenanigans for Jimmy.
10-03-2024 , 08:53 AM
There is an additional complexity that I'm pretty sure you are not accounting for. Jimmy raises his hand regardless of whether a selection hits on one, two, or three of his preferred attributes. (If you hit all four, he of course just says that you win.)
10-03-2024 , 09:46 AM
Quote:
Originally Posted by Rococo
There is an additional complexity that I'm pretty sure you are not accounting for. Jimmy raises his hand regardless of whether a selection hits on one, two, or three of his preferred attributes. (If you hit all four, he of course just says that you win.)
Not sure why you think I'm not accounting for it?

Quote:
Originally Posted by d2_e4
Jimmy is going to raise his hand when the number we have picked has at least one digit correct, per the conditions of the original problem.
10-03-2024 , 09:47 AM
Quote:
Originally Posted by d2_e4
Not sure why you think I'm not accounting for it, the whole solution is predicated on this being the case!
Never mind. I misread what you wrote. My gut tells me that Case D isn't optimized.

For example, if Jimmy doesn't raise his hand on your first random selection, doesn't that give you additional information that you can use to exclude some of the remaining 23 combinations from which you are choosing at random?

Last edited by Rococo; 10-03-2024 at 10:00 AM.
10-03-2024 , 10:23 AM
Quote:
Originally Posted by Rococo
Never mind. I misread what you wrote. My gut tells me that Case D isn't optimized.

For example, if Jimmy doesn't raise his hand on your first random selection, doesn't that give you additional information that you can use to exclude some of the remaining 23 combinations from which you are choosing at random?
Optimising case D doesn't add a huge amount to our overall win % as every % we gain here is a % of 24/256. If we could optimise case C down to a 1 in 2 pick from a 1 in 3 pick that would be massive, as we would go from 1/3 of 144/256 to 1/2 of 144/256, a gain of 24/256 ≈ 9.4 points overall. So if I were looking to optimise, I'd definitely take a look at case C a lot more closely.

There is probably some algorithm we can come up with to optimise the worst case for case D. I think finding it is going to involve a fair amount of trial and error.

Last edited by d2_e4; 10-03-2024 at 10:34 AM.
10-03-2024 , 10:36 AM
Quote:
Originally Posted by d2_e4
I'm not sure that you can guarantee there will be at least one hat for which he doesn't raise his hand in the first 5 picks. When he raises his hand, we gain no additional information. So if that's the case, we can't optimise the worst case scenario, which is what I've calculated. We might be able to optimise the best case & average case, but those require additional calculations anyway.

Say he is thinking 0123

We pick 0213 he raises his hand
We pick 1203 he raises his hand
we pick 2103 he raises his hand
We pick 3102 he raises his hand
We pick 3120 he raises his hand

And we've shot our shot.

Additionally, optimising case D doesn't add a huge amount to our overall win % as every % we gain here is a % of 24/256. If we could optimise case C down to a 1 in 2 pick from a 1 in 3 pick that would be massive, as we would go from 1/3 of 144/256 to 1/2 of 144/256, a gain of 24/256 ≈ 9.4 points overall. So if I were looking to optimise, I'd definitely take a look at case C a lot more closely.

If there is an optimisation available for case D, I think it would probably have to involve somehow picking hats that are not part of the case D candidate list to see what he does (this is no different to the other cases to be fair, with the "masking" approach).
If the solution is 0123, and your first guess is 3210, he won't raise his hand, at which point you can eliminate all remaining combinations that have 3 as the first digit, 2 as the second digit, 1 as the third digit, or 0 as the fourth digit. Right?
10-03-2024 , 10:38 AM
Quote:
Originally Posted by Rococo
If the solution is 0123, and your first guess is 3210, he won't raise his hand, at which point you can eliminate all remaining combinations that have 3 as the first digit, 2 as the second digit, 1 as the third digit, or 0 as the fourth digit. Right?
Yeah I ****ed that up. I edited my post, was hoping you hadn't started replying to it, too late!

But also keep in mind that all our calculations here are worst case. So we have to assume that we go down the unhappy path; assuming that we hit the jackpot and he doesn't raise his hand on the first number we pick is invalid for this calculation.

We do gain additional information when he raises his hand though, which is where I ****ed up. If we pick 0213 and he raises his hand, we know that the first digit is a 0 and/or the second digit is a 2 etc. which is more than we knew beforehand. Designing an algorithm around this is possible I'm sure, but would take a fair amount of tinkering.
10-03-2024 , 11:47 AM
Quote:
Originally Posted by d2_e4
(1)4/256 + (1)84/256 + (1/3)(144/256) + (1/4)(24/256) = 55.5%. We (as the group of 10) can take an even money bet on this game and win.
Yeah, after doing some scratch math, I am convinced that the bolded coefficient for Case D is very wrong, and almost certainly much closer to 1 than 1/4. You alluded to the reasons.

Once you start using your remaining six guesses, you are able to rapidly eliminate combinations, regardless of whether Jimmy raises his hand. If, on your first guess, Jimmy doesn't raise his hand, then you can eliminate 14 of the remaining 23 combinations (i.e., all remaining combinations that have at least one digit in the same place as your guess). If Jimmy does raise his hand, then you can eliminate 9 of the remaining 23 combinations (i.e., all combinations that do not have any digits in the same place as your guess). And this ability to eliminate combinations continues with each successive guess, regardless of whether Jimmy raises his hand. You will never eliminate as many combinations on successive guesses as you do on the first guess, but as you eliminate combinations over six iterations, you dramatically increase your chances of binking the correct answer.
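Those elimination counts check out over the 24 Case D permutations; a quick sketch fixing one arbitrary first guess:

```python
from itertools import permutations

# Case D: 24 candidates, each a permutation of four distinct digits
candidates = ["".join(p) for p in permutations("0123")]

def matches(a, b):
    # Jimmy raises his hand iff at least one digit is in the right place
    return any(x == y for x, y in zip(a, b))

guess = "0123"                                  # arbitrary first guess
others = [c for c in candidates if c != guess]  # 23 remain either way
kept_if_down = [c for c in others if not matches(c, guess)]
kept_if_up = [c for c in others if matches(c, guess)]

assert len(others) == 23
assert len(kept_if_down) == 9   # hand down eliminates 14 of the 23
assert len(kept_if_up) == 14    # hand up eliminates the 9 "derangements"
```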

I haven't thought about Case C yet.
10-03-2024 , 11:57 AM
Fair enough. I approached it as I would a complex problem at work, essentially: design an overarching framework that allows us to split the problem into sub-tasks, implement an initial solution for each task that's "good enough", then optimise each sub-task individually as necessary. After all, the initial question wasn't "what's the answer?", it was "how would you calculate the answer?" I would consider the base 4 encoding + 4 initial guesses the "framework", which then allows us to optimise each case individually.

Note that D in total contributes less than 10 points to our overall win %, so even if we manage to find a perfect algorithm for D we won't be increasing our overall "performance" by more than 7.5%. If I were solving this problem for practical reasons I'd definitely be focusing on optimisations for case C as that's where the big wins are, but I understand that from a theoretical standpoint finding the algorithm for case D may be more interesting.
10-03-2024 , 12:02 PM
Quote:
Originally Posted by d2_e4
I understand that from a theoretical standpoint finding the algorithm for case D may be more interesting.
Not more interesting. Just easier for me to get my head around.
10-03-2024 , 12:27 PM
For Case C, the coefficient is also far too low because, as you mention, you are taking the worst case scenario when using the masking method to uncover each of the first three digits. I also want to give some more thought to whether the masking method is the most efficient strategy, even as you move to the second and third digit. It may not be, in part because of our ability to eliminate combinations even without using a masking method (as in Scenario D), and in part because each time you use the masking method, you deprive yourself of an opportunity to simply bink the answer by guessing a combination that conceivably could be correct.

I am relatively certain that, over the entire problem, optimal strategy will yield a better than 75% chance of picking Jimmy's hat.
10-03-2024 , 12:39 PM
Quote:
Originally Posted by Rococo
For Case C, the coefficient is also far too low because, as you mention, you are taking the worst case scenario when using the masking method to uncover each of the first three digits. I also want to give some more thought to whether the masking method is the most efficient strategy, even as you move to the second and third digit. It may not be, in part because of our ability to eliminate combinations even without using a masking method (as in Scenario D), and in part because each time you use the masking method, you deprive yourself of an opportunity to simply bink the answer by guessing a combination that conceivably could be correct.

I am relatively certain that, over the entire problem, optimal strategy will yield a better than 75% chance of picking Jimmy's hat.
Agreed with the above. More generally, I don't know how to a) find a "good" algorithm or b) prove that a given algorithm is or isn't the optimal algorithm. The algorithms I provided were my best guesses. It's likely that even case B is not optimised, it just gets us to 100% within the required number of guesses, so it doesn't need to be. Once the framework is in place, it essentially becomes a pure algorithmic optimisation problem, like writing an efficient sort for example.
10-03-2024 , 12:42 PM
Quote:
Originally Posted by d2_e4
More generally, I don't know how to a) find a "good" algorithm or b) prove that a given algorithm is or isn't the optimal algorithm.
I have the same issue.

Quote:
It's likely that even case B is not optimised, it just gets us to 100% within the required number of guesses, so it doesn't need to be.
I didn't even consider optimization for Case B because it didn't matter for the purposes of my question.
10-03-2024 , 01:00 PM
If you manage to get e d'a to have a look at this, he might well be able to come up with something better than I did. He seems to know a lot about comp sci and algorithmic complexity. I haven't studied anything like that in depth, so there are probably both theoretical and practical approaches to optimisation problems like this that I just have never learnt about.
10-03-2024 , 01:30 PM
Quote:
Originally Posted by Rococo
For Case C, the coefficient is also far too low because, as you mention, you are taking the worst case scenario when using the masking method to uncover each of the first three digits.
Just on this point - when evaluating algorithmic efficiency, you usually have a best case, worst case and average case statistic for a given algorithm. Our best case is trivially 100%, that seems easy enough - we can get lucky with the first pick, or we can get lucky and hit Case A, or a bunch of other things can happen. The worst case is what I've been trying to calculate, and it gives us a lower bound for our expected success rate on repeated trials. The average case seems like it would be more difficult to calculate, as you need to take the worst case paths and all the other paths and somehow average them out. That seems very daunting. But I believe that would give us our actual expected success rate, not just the lower bound for it.

Usually with puzzles like this though, like "what's the minimum number of weighings needed to find the fake coin" etc, you are looking for the worst case scenario.

And finally - we don't really even know that the strategy of initially guessing the four repdigits is optimal. There could well be other strategies for some number of first guesses that allow us to come up with a method of subsequently categorising cases other than by which distinct digits they contain. Maybe there is a method of initially guessing such that the subsequent cases are symmetrical, e.g. case 1 is 0-63, case 2 is 64-127 etc., and then we can have one algorithm that works for all the cases. Guessing repdigits to start was quite honestly just the first thing that came to mind.

Last edited by d2_e4; 10-03-2024 at 01:48 PM.
10-03-2024 , 02:46 PM
Actually, in case C, by guess 9 we have 2 guesses left for 5 numbers, so rather than ****ing about isolating digits we can take the 2 in 5 shot by picking guesses 9 and 10 at random, which is already better than 1/3. This brings our lower bound up to 59.22%.
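Quick check of the revised arithmetic:

```python
# Case C coefficient moves from 1/3 to 2/5 (2 random guesses from 5 numbers)
p = (1*4 + 1*84 + (2/5)*144 + (1/4)*24) / 256
print(f"{p:.2%}")  # 59.22%
```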

Last edited by d2_e4; 10-03-2024 at 02:52 PM.
10-04-2024 , 06:48 AM
I have an improvement on the algorithm.

Use the first 2 guesses to guess 0000, 1111. Proceed case by case. (U = Hand up, D = Hand down)

Case A: D,D (16 combos). This case is trivial, there are 16 combos which contain only 2,3. This can be solved easily with 8 guesses using the masking method or probably a bunch of other methods.

Case B: U,D or D,U (130 combos). I have an algorithm using a variation of the masking method which solves these 100% of the time (I think!). I'll post it if this line proves fruitful; it's a little finicky.

Case C: U,U (110 combos). Haven't thought about this yet.

This gives us (1)16/256 + (1)130/256 + (?)(110/256) = 57% + X. X has to be at least (1/8)(110/256), so this already gets us to 62.4% before we start optimising case C.

Following this line, it becomes a question of solving case C (numbers containing both 0 and 1, no other information available) in 8 guesses.
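The three-way split from the first two guesses checks out by brute force:

```python
from itertools import product

# First two guesses 0000 and 1111 split the 256 combos three ways
combos = ["".join(p) for p in product("0123", repeat=4)]
case_a = [c for c in combos if "0" not in c and "1" not in c]  # D,D
case_b = [c for c in combos if ("0" in c) != ("1" in c)]       # U,D or D,U
case_c = [c for c in combos if "0" in c and "1" in c]          # U,U

assert (len(case_a), len(case_b), len(case_c)) == (16, 130, 110)

# Cases A and B won outright, plus at least 1/8 of Case C per the estimate
print(f"{(16 + 130 + 110/8) / 256:.1%}")  # 62.4%
```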
10-09-2024 , 01:04 AM
Quote:
The Nobel Prize in Physics has been awarded to two scientists, Geoffrey Hinton and John Hopfield, for their work on machine learning.

British-Canadian Professor Hinton is sometimes referred to as the "Godfather of AI" and said he was flabbergasted.

He resigned from Google in 2023, and has warned about the dangers of machines that could outsmart humans.

The announcement was made by the Royal Swedish Academy of Sciences at a press conference in Stockholm, Sweden.

American Professor John Hopfield, 91, is a professor at Princeton University in the US, and Prof Hinton, 76, is a professor at University of Toronto in Canada.
https://www.bbc.co.uk/news/articles/c62r02z75jyo
Not sure it's really physics but ...