Google's AlphaZero AI Learns Chess in Four Hours; Then Beats Stockfish 28-0-72

12-16-2017 , 10:28 PM
It was interesting to see how some top chess players know what they're talking about and some don't. I thought Vachier-Lagrave's comments were the most on point:

Quote:
Vachier-Lagrave: "If people had access to AlphaZero instead of Stockfish and Houdini, theory would change. Probably the biggest edge, considering the games I have seen, is that the horizon effect is way down the road for AlphaZero, if it exists at all. It means that in most of the games AlphaZero was winning and Stockfish was like: everything's fine, everything's fine, everything's fine for like 20 moves, which is quite far ahead actually in modules' terms, but then it was not fine anymore and things were drifting away very quickly after that. I have seen very impressive stuff."
12-17-2017 , 03:53 PM
Quote:
Originally Posted by ChrisV
Additional brute-force processing may not help in that case. Like, maybe it will, but it's not clear to me that that is the case.
Of course you don't mean that literally. Alpha after learning checkers, I assume, could theoretically sometimes lose to Chinook. But never the opposite.
12-18-2017 , 06:59 AM
Quote:
Originally Posted by ChrisV
This is just utter nonsense from start to finish. For starters, when he says "massive hardware upgrades almost doubled the playing-strength of AlphaGo", his link says no such thing and he has no way of knowing that. AlphaGo Master played on better hardware than AlphaGo Lee, but it was also a new version of the neural net. From the publicly available facts, there's no way of knowing which of these things caused the improvement.
The link shows AlphaGo getting stronger in a linear way, correlated to the hardware upgrades. Note: It can only be the hardware, because the algorithm doesn't change. It's a neural net that plays against itself until it reaches a stable ranking.

Quote:
Secondly, we know what effect better hardware has on Stockfish's rating. Doubling processor speed adds about 60 ELO, doubling cores 40 ELO. There's huge diminishing returns on brute-force search because the search space expands exponentially. It's certainly true that AlphaZero requires much more processing power than Stockfish does to play at a high level. It would get annihilated if both ran on a desktop computer. It does not follow that AlphaZero's advantage over Stockfish was simply processing power, nor that Stockfish would "beat AlphaZero easily" on similar top-end hardware. It can simply be the case that AlphaZero improves much more as hardware gets better than Stockfish does.
At least you agree that hardware matters. It's all about search-horizon. Stockfish finds these moves too, just an hour later.

Quote:
It may also be the case that the ELO added by hardware improvement doesn't translate to much better performance against AlphaZero specifically. The kind of positions engines don't understand, such as closed positions where one side can't actually use their material advantage, aren't solved by adding additional processing power. It may be that the positional weakness in Stockfish's game which AlphaZero exploits is precisely the aspect of its game that improves least with additional processing power.
It may or it may not. You can't dodge the fact that AlphaZero was running on million dollar hardware with a couple of thousand cores, while Stockfish was running on just 64 cores with 1 GB hash. You can actually buy the hardware that Stockfish was running on. Try to buy 4 Google TPUs. If you can afford that you don't need to worry about this topic anymore.

Quote:
The guy writing this blog has absolutely no idea how AlphaZero works btw, from his previous post on the subject
You may have noticed that the difference between the two articles is that the second one is based on the Giraffe paper, which goes into detail on the process, while the first one was based only on the AlphaGo paper, where lots of questions were still open.

Quote:
Why take seriously the writings of someone on AlphaZero when he doesn't know the most basic things about how it functions? His implication that AlphaZero is just a more powerful version of Giraffe is also completely wrong, by the way. The only similarity between the two is that they both use machine learning. Giraffe was taught from human games and its neural nets were structured in a domain-specific way
So instead we should take your opinion seriously, because you know all the details. You know that there are self-learning neural nets and there are better self-learning nets. Both know nothing but the rules and do nothing but play against each other, but one does it clearly better. You should read Animal Farm.

Quote:
AlphaZero learnt from scratch, playing itself, and its neural network was not specifically structured to handle chess. This is a good place to point out that the breakthrough here is not that Google made a chess engine which is stronger than Stockfish. The breakthrough is how it works and how it was done.
Exactly, except for the part about the breakthrough. The guy who developed Giraffe was working on AlphaZero. The concept is virtually identical, except for the NASA-type hardware used by Google. I'll give you one point though: Matthew Lai - working for the 400 million dollar company DeepMind - probably got very good support from excellent programmers. Some solutions are probably more elegant and more efficient too. I mean, there should be something worth the money, right?

Quote:
Your second article is in English without needing to be translated here. I'm not sure what in it I need to refute though.
Here is the original article: https://de.chessbase.com/post/alpha-...ns-mit-aepfeln

Btw, I have no problems with arguments ad hominem, because this is the internet after all. You got very angry though, that has to be admitted. If you can channel this ability you should seek a job as a paid astroturfer. Companies are looking for such people. Jobs like that are the future. Sorry, couldn't resist.

Last edited by Shandrax; 12-18-2017 at 07:09 AM.
12-18-2017 , 02:52 PM
I saw nothing in ChrisV's post that was ad hominem, unlike your pathetic "u mad bro?"-attempts.
12-18-2017 , 07:36 PM
Quote:
Originally Posted by Shandrax
The link shows AlphaGo getting stronger in a linear way, correlated to the hardware upgrades. Note: It can only be the hardware, because the algorithm doesn't change. It's a neural net that plays against itself until it reaches a stable ranking.
It's not even clear to me which of AlphaGo Lee and AlphaGo Master had the more powerful hardware. Lee used 48 first-gen TPUs and Master used 4 second-gen TPUs. I have no idea how to compare them for power and I don't think you do either.

It's just flat out false that "the algorithm doesn't change" though. No version of AlphaGo was fully self learnt before AlphaGo Zero (hence the "Zero" part of the name, meaning zero bootstrapping from humans):

Quote:
The main difference between the old AlphaGo AIs and the new one is that one learns how to play Go from human data and one doesn’t.

All previous versions of AlphaGo started by training on human data (amateur and professional Go matches) that was downloaded from online sites. They looked at thousands of games and were told what moves human experts would make in certain positions. But AlphaGo Zero doesn’t use any human data whatsoever. Instead, AlphaGo Zero has learned how to play Go for itself, completely from self play.
AlphaGo Lee was actually designed completely differently, with a "value" neural network and a "policy" neural network which worked in tandem to evaluate moves. I think, but am not sure, that this is what changed between AlphaGo Lee and AlphaGo Master. Trying to google for details on Master now is hard because of the avalanche of articles about Zero. Generally though, nobody would change the name of the program merely because it is running on better hardware.

Quote:
So instead we should take your opinion seriously, because you know all the details. You know that there are self-learning neural nets and there are better self-learning nets. Both know nothing but the rules and do nothing but playing against each other, but one does it clearly better. You should read Animal Farm.
AlphaGo Zero is the only engine that learnt in the way you describe. Previous versions of AlphaGo didn't. Giraffe certainly didn't. I can only conclude you didn't read my last post, because I pointed this out there.

Quote:
Exactly, except for the part with the breakthrough. The guy who developed Giraffe was working on AlphaZero. The concept is virtually identical, except for the NASA-type of hardware used by Google.
By "the concept is virtually identical" you mean simply that both used neural nets as a foundation. This is roughly as silly as saying that an F-117 Nighthawk is "virtually identical" to a Cessna light plane, because both are roughly the same size and use wings and an engine.
12-18-2017 , 07:43 PM
OK, so I found this paper from DeepMind. This describes the differences between the original AlphaGo and AlphaGo Zero:

Quote:
Our program, AlphaGo Zero, differs from AlphaGo Fan and AlphaGo Lee in several important aspects. First and foremost, it is trained solely by self-play reinforcement learning, starting from random play, without any supervision or use of human data. Second, it only uses the black and white stones from the board as input features. Third, it uses a single neural network, rather than separate policy and value networks. Finally, it uses a simpler tree search that relies upon this single neural network to evaluate positions and sample moves, without performing any Monte Carlo rollouts. To achieve these results, we introduce a new reinforcement learning algorithm that incorporates lookahead search inside the training loop, resulting in rapid improvement and precise and stable learning. Further technical differences in the search algorithm, training procedure and network architecture are described in Methods.
And regarding AlphaGo Master:

Quote:
We also played games against the strongest existing program, AlphaGo Master – a program based on the algorithm and architecture presented in this paper but utilising human data and features (see Methods) – which defeated the strongest human professional players 60–0 in online games in January 2017.
So, like I said, Master featured the altered neural net architecture, but not the full self-learning capacity of Zero.
12-18-2017 , 07:46 PM
Ah, in Methods, they have a complete description of the versions:

Quote:
AlphaGo versions

We compare three distinct versions of AlphaGo:

1. AlphaGo Fan is the previously published program that played against Fan Hui in October 2015. This program was distributed over many machines using 176 GPUs.

2. AlphaGo Lee is the program that defeated Lee Sedol 4–1 in March 2016. It was previously unpublished but is similar in most regards to AlphaGo Fan. However, we highlight several key differences to facilitate a fair comparison. First, the value network was trained from the outcomes of fast games of self-play by AlphaGo, rather than games of self-play by the policy network; this procedure was iterated several times – an initial step towards the tabula rasa algorithm presented in this paper. Second, the policy and value networks were larger than those described in the original paper – using 12 convolutional layers of 256 planes respectively – and were trained for more iterations. This player was also distributed over many machines using 48 TPUs, rather than GPUs, enabling it to evaluate neural networks faster during search.

3. AlphaGo Master is the program that defeated top human players by 60–0 in January 2017. It was previously unpublished but uses the same neural network architecture, reinforcement learning algorithm, and MCTS algorithm as described in this paper. However, it uses the same handcrafted features and rollouts as AlphaGo Lee, and training was initialised by supervised learning from human data.

4. AlphaGo Zero is the program described in this paper. It learns from self-play reinforcement learning, starting from random initial weights, without using rollouts, with no human supervision, and using only the raw board history as input features. It uses just a single machine in the Google Cloud with 4 TPUs (AlphaGo Zero could also be distributed but we chose to use the simplest possible search algorithm).
12-18-2017 , 07:51 PM
Note that there are only subtle differences between Master and Zero - they run on the same hardware and use the same neural net architecture. Nevertheless, these subtle differences were good for 327 ELO and an 89-11 record. Think you're going to struggle to fit this into your ITS ALL ABOUT THE HARDWARE mantra.
12-19-2017 , 03:30 AM
Dude, you are putting a lot of effort into this...

Anyways, I hope you do realize that AlphaGo went from 176 GPUs to 48 first-generation TPUs to 4 second-generation TPUs. At the same time the rating went up from 31xx to 37xx to ~5000. Right? It is a legitimate assumption that hardware most likely has something to do with it. We could find out by running the latest AlphaGo software on 176 GPUs and the latest AlphaZero version on the good old workstation that Giraffe was running on. How about that? I am pretty sure though that DeepMind will not like this idea that much.

I hope you also realize that Giraffe peaked at Elo 2400 in 2016, which is - sorry to say - total trash. Actually it is so bad that I would have canceled the project and missed out on a huge opportunity. That's where Mr. Lai's decision was clearly superior. Maybe that's because he wrote in his paper - surprise surprise - that hardware is the bottleneck.

You can't do any fine-tuning of the algorithm itself, because it is fully automated. That's basically what it is all about: the removal of the human element. The only thing you can do is shave off processor time by making everything more efficient.

I don't know how many more times I have to point out that Stockfish finds those magical moves too, just much later, because the hardware it was running on was simply too weak. It's not that Stockfish was hit with sophisticated concepts from outer space that were never seen before; it was simply a problem of doing the work within the given time frame of one minute per move.

The questions that remain are:
1. Why did they play a match with a 1-minute format and not with classical time-controls?
2. Why wasn't the match announced, so that the Stockfish guys had a say on the conditions?
3. Why did they use an earlier version of Stockfish and not the latest build?
4. Why did Stockfish only have 1 GB hash?
5. How many secret test runs did they need, before they knew that the match was a lock?
6. Why didn't they publish the other 90 games? It's not difficult to post a pgn-file with 100 games.

We can conclude:
The AlphaZero team chose the time and the conditions for the match. AlphaZero had a massive hardware advantage. The team is holding back information. At the same time there is a full-force hype campaign running on social media. Finally, the stock market reacted to the news by adding a couple of million dollars to the Google net value. What we can observe here is a fabricated sensation, a match where Goliath beats David, because they made sure that David had no stone for his sling.

Last edited by Shandrax; 12-19-2017 at 03:58 AM.
12-19-2017 , 04:55 AM
By the way, the picture of David vs. Goliath works both ways. You can either claim that AlphaZero with the massive hardware advantage was Goliath and that they set up the match in a way that Goliath could never lose.

Or you can claim that Stockfish with the Elo 3500+ rating was Goliath and AlphaZero aka Giraffe 2.0 with "unbuffed" Elo 2400+ was David, but in this case they stripped Goliath naked by taking away his opening book and the endgame tablebases, and they blindfolded him by screwing around with his horizon, limiting calculation time and hash tables.

Either way it was a fixed match and it's actually pretty sad that nobody could bet on it. It's massive +EV to bet on a match when you already know the outcome
12-19-2017 , 11:59 AM
It's my understanding that AZ has the equivalent of an opening book, generated from its self-play. Is that correct?

If so, then I agree it was unfair not to give Stockfish a book, since one minute is not enough time for an engine to evaluate an opening position. The question becomes, what would be a fair book to give Stockfish? Imo it would have to be one completely generated by Stockfish, otherwise it would be cheating. But how many moves deep, and what search depth for each move? If that could be agreed upon, it wouldn't be necessary to generate the book beforehand. Stockfish could simply be given extra time in the opening until a certain number of moves into the game. If you let it think to depth 50 and it still comes up with the French Defense, then only Stockfish is to blame.

But they also mentioned 12-game series starting from a variety of popular opening positions, where each engine got to be White 6 times, and AZ still crushed. So I doubt an opening book would help Stockfish that much.

Anyway, I'd be more interested to see a match between (top humans) + (latest SF/asmFish/CorChess) VS alphazero, because a human+SF is able to defeat SF alone, right? Now imagine if there were a few humans and an engine teaming up against AZ. That would make for some great games!
12-19-2017 , 04:24 PM
Quote:
Originally Posted by heehaww

because a human+SF is able to defeat SF alone, right?
How would that work?
12-19-2017 , 05:33 PM
Quote:
Originally Posted by heehaww
It's my understanding that AZ has the equivalent of an opening book, generated from its self-play. Is that correct?
No. I don't know why people keep saying this. I guess it's because AlphaZero learnt to prefer certain openings over time. The basis for this preference is simply that it evaluates those positions as better. It's no more "in book" on the first move of the game than it is on the 50th. Of course, because it sees the early positions over and over, it probably has more practice evaluating them than it does later positions. But all it's doing is tree-searching and evaluating positions, same as Stockfish. It just has a vastly better evaluation function.

Quote:
Originally Posted by heehaww
But they also mentioned 12-game series starting from a variety of popular opening positions, where each engine got to be White 6 times, and AZ still crushed. So I doubt an opening book would help Stockfish that much.
I'm not sure, but I think those were pretty shallow starting positions, like just "Queen's Gambit". Having its book would help Stockfish a lot from there.

Quote:
Originally Posted by heehaww
because a human+SF is able to defeat SF alone, right?
Does anyone know the answer to this? I know this used to be true, but I don't know if it is anymore. The advantage would be getting quite marginal. I know retrograde analysis with human + SF is still stronger than SF alone, not sure if that's true in real time.

Quote:
Originally Posted by David Sklansky
How would that work?
Kasparov used to champion the idea, called it Advanced Chess, there were tournaments for a while. Traditionally human + computer is stronger than either separately. The computer prevents the human from making a tactical mistake, while the human provides strategic direction for the engine in positions it doesn't really understand. Usually this would take the form of the computer evaluating two moves as near-identical and the human selecting the one that makes positional progress.
12-19-2017 , 06:03 PM
i believe he's still holding to that belief that it's better - he mentioned it on sam harris's podcast earlier this year. sam then made the comment of 'huh? surely that's just a matter of the computers not being strong enough yet, and that will eventually not be true' (seems totally reasonable!), but i believe kasparov then doubled down. i don't really remember the details though. it's no doubt talked about extensively in his book but i haven't got that far yet.

it does seem incredibly hard to believe a human can do anything to help alphago get better. and even if it's true, how can that be true in 1-5 years?

Last edited by Yeti; 12-19-2017 at 06:13 PM.
12-19-2017 , 06:28 PM
Kasparov is a computer skeptic from way back. I think if they're not there yet, it would only be a matter of time before humans can't help traditional engines. And I think humans have no shot at improving AlphaZero, unless armed with Stockfish as well. Actually an AlphaZero-Stockfish combination, with AlphaZero playing the human role of nudging Stockfish in the right direction sometimes, would probably be super strong.

Edit: Although AlphaZero lost no games in the 100 game match, it did lose some games in the 1200 game match in set openings (I did see the game score for that, but annoyingly I now can't find it). Stockfish under ideal conditions would still be stronger than it in certain positions. That's why I think a combination would be superstrong.

Last edited by ChrisV; 12-19-2017 at 06:37 PM.
12-19-2017 , 09:26 PM
Quote:
Originally Posted by TimM
From what I'm hearing, the match was unfair. Stockfish was crippled by the removal of its opening book, reduced RAM for hash tables, probably no endgame tablebases, and unequal hardware. I'm also hearing there was a fixed time per move? Chess programs are not optimized for that, since that's not how the rest of the world plays chess.

Think of the opening book this way: Brute-force style chess programmers supply all of the chess knowledge (with some limited tuning based on trial and error playing). The programmers could just as easily put the opening knowledge into the code, rather than a separate database. But since it's much more efficient to use a database for openings, that is what happens, and the programmers don't put a whole lot of effort into tuning the non-book opening play of their engine. Disabling the opening book is tantamount to deleting a portion of the engine's code IMO.

The only fair way to run a match like this is in a competitive situation. Stockfish would have its own team that controls its hardware, settings, books and tablebases, etc. Equal hardware would be preferred, but since that may not be possible, the next best thing is that each side simply provide the best hardware available to them.
I read the paper (link available off of the first post of this thread), and I didn't see a mention of Stockfish not having its opening book. Maybe I missed it, but I double checked. I did find the quote "Stockfish and Elmo played at their strongest skill level using 64 threads and a hash size of 1GB." I would think "strongest skill level" means not disabling the opening book or endgame tablebases.

In terms of unequal hardware - they are just not comparable. AZ runs on Google's special hardware that is similar to a GPU (Graphics Processing Unit), not a regular computer. Bitcoin mining is typically done on a GPU, but if you try to mine bitcoins on a general purpose computer, it will cost you more in electricity than you'll earn from your bitcoins. Stockfish is the best chess program for general purpose computers, AZ the best for GPUs - but there can never be a fair competition.

From the Youtube videos I've watched about the 10 published games, it does appear that AZ played well against tough competition. No one criticized Stockfish's play. Several excellent chess players have been surprised at some of the great moves by AZ. One said that due to AZ's impressive performance as white against the Queen's Indian, it is likely that top pros will stop using that defense in a major competition. Even if you can't exactly compare the two competing systems, that is still an impressive result.

Google said they wouldn't comment until the paper is officially published. I'm looking forward to that - perhaps they will answer questions similar to ones posed in this thread.
12-19-2017 , 09:27 PM
So how much of a favorite would Alpha Zero be if Stockfish gets whatever opening book, table bases, and anything else that it may have been missing. Also standard tournament time controls (as if they're playing in a regular tournament). Both can run on whatever hardware makes them work the best.

Are there any plans to do above type of match? I would love to see it.
12-19-2017 , 09:52 PM
Quote:
Originally Posted by Melkerson
So how much of a favorite would Alpha Zero be if Stockfish gets whatever opening book, table bases, and anything else that it may have been missing. Also standard tournament time controls (as if they're playing in a regular tournament). Both can run on whatever hardware makes them work the best.

Are there any plans to do above type of match? I would love to see it.
Impossible to say. It might not have an advantage. Again (and this is not aimed at you personally) the point of AlphaZero being a big deal is not that it's necessarily stronger than Stockfish. It's that it's capable of learning three different games in like 4 hours each and being a candidate for best player in the world at all of them, all without any human input or domain-specific knowledge. Also that its style of play is a departure from typical engines. There's a video on Youtube, btw, where an IM claims that under TCEC conditions, Stockfish would have drawn 7 of the 10 example games. Of course, A0 wasn't playing under TCEC conditions either and hasn't been set up to do it (for instance, I don't think it has any heuristics about how long to spend on different moves).

I'm sure Google will set up a match with a superstrength traditional engine at some point. At that point there will probably be a newer, stronger version of A0.
12-20-2017 , 02:29 AM
Quote:
Originally Posted by David Sklansky
How would that work?
This is how correspondence chess has worked for over a decade.
12-20-2017 , 02:56 AM
Quote:
Originally Posted by ChrisV
It's not just how humans learn, by the way, it's also more or less how the brain does it. Artificial neural networks were modelled from how the brain works.
I think this is a little... hyperbolic? We know very little about how the brain works and learns. Neural nets are a non-linear statistical model. There's way too much hype surrounding the whole "mimicking the brain" thing.
12-20-2017 , 03:02 AM
By the way, there is another little twist in the "opening book" part of the match. If Stockfish has to calculate its opening moves while AlphaZero answers instantly, then Stockfish cannot think on the opponent's time ("permanent brain"). Therefore Stockfish plays the opening considerably weaker than the rest of the game, where both sides spend time on every move. It basically comes down to a double penalty.
12-20-2017 , 03:40 AM
Why would AlphaZero be answering instantly? You continue to bemuse me. I have to say that I don't know enough about the details of A0 to know why it would ever select varying opening moves (or why Stockfish would, for that matter) but there's no reason it would be answering instantly.

Quote:
Originally Posted by Priptonite
I think this is a little... hyperbolic? We know very little about how the brain works and learns. Neural nets are a non-linear statistical model. There's way too much hype surrounding the whole "mimicking the brain" thing.
Sure, I didn't claim they work exactly how the brain does, simply that they are modelled after it. This is sentence #1 in the Wiki article for Artificial Neural Networks:

Quote:
Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains.
12-20-2017 , 03:44 AM
Since this is a Poker website after all, we need to get people on the same page.

In chess there are no random events like cards or dice. The game is fully determined, and when it is your move you also have full information. The GTO strategy is to simply play the best move. Finding the best move is the problem. There is no room for probabilities in chess, so there is no room for Monte Carlo simulations. If move 1 wins and moves 2-100 lose, then it is a winning position and not a 1% shot.

Uncertainty exists only when humans are playing. Humans can semi-bluff each other by signaling that they are willing to go for very complicated lines where one has to remember every single detail up to move 35+ and even then could still run into a brilliant novelty by the opponent that turns the previously known evaluation upside down. Humans can also make stupid mistakes because of laziness, fatigue or hallucinations. Humans only have a certain probability of finding the best move. That's what beginners are speculating on when they go for the fool's mate or basic opening traps. A friend of mine even won a game because his opponent, a very old man, died from a heart attack...

None of this matters in computer chess. Computers always play the move with the highest evaluation, they never blunder and they never go for "unclear" complications. They don't make sacrifices, they only play combinations. Computers convert every aspect of the game (time, material, space) into static numerical values and simply do the math.

Engines based on neural nets don't use static values. In theory - and with AlphaZero also in practice - they use an approximation of the optimal values for every position. Since the static values found by humans over centuries are already very close to optimal, the neural net cannot do that much better. That's why AlphaZero, with its massive NASA-type hardware power and full control of the match, only performed 150 Elo points better than a massively handicapped version of Stockfish.

An unleashed version of Stockfish playing under fair conditions should eat AlphaZero for breakfast, because quantity beats quality after all. Don't forget that good old Giraffe couldn't even compete with engines written by chess-programming amateurs a year ago.

http://www.computerchess.org.uk/ccrl...0150908_64-bit

That's a total joke. You gotta give the guy massive credit for turning this into a career. If he had shown these results to me, I would have kicked him out of the office. Stupid me!

Last edited by Shandrax; 12-20-2017 at 03:58 AM.
12-20-2017 , 03:45 AM
Quote:
Originally Posted by ChrisV
Why would AlphaZero be answering instantly?
Sorry dude, but you are just a clueless fanboy. Let's leave it at that.
12-20-2017 , 04:19 AM
Quote:
Originally Posted by Shandrax
Sorry dude, but you are just a clueless fanboy. Let's leave it at that.
This is a forum for smart people, not YouTube. I can't imagine you think these rhetorical flourishes are fooling anyone.