They get the result by multiplying the AIVAT column by the hands column, summing across players, and dividing by total hands played:
21798823 / 44852 = ~486 mbb/g
Our friend Mike Phan needs to play more hands to make a difference.
If you remove just 4 of the weaker players in this experiment:
Muskan Sethi,
Pol Dmit,
Youwei Qin,
Geidrius Talacka
it brings down the number to:
11832392 / 35579 = ~332 mbb/g
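The arithmetic above is just a hands-weighted average: sum each player's (AIVAT win rate * hands played), then divide by total hands. A minimal sketch checking both quoted figures (the two numerators and hand counts are the ones from this post; the helper name is mine):

```python
def weighted_win_rate(weighted_sum_mbb, total_hands):
    """Hands-weighted average win rate in mbb/g.

    weighted_sum_mbb: sum over players of (AIVAT mbb/g * hands played)
    total_hands: total hands played across those players
    """
    return weighted_sum_mbb / total_hands

# All players vs. the four weakest removed, using the totals quoted above:
full = weighted_win_rate(21_798_823, 44_852)
trimmed = weighted_win_rate(11_832_392, 35_579)
print(f"full field: {full:.0f} mbb/g, trimmed: {trimmed:.0f} mbb/g")
```

The point of the weighting is that a player who logged few hands barely moves the headline number, which is why dropping a handful of high-volume weak players shifts it so much.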
Although there may be professional poker players on the list, I don't think they are all NLHU specialists.
It's possible the people on the list are very good (or 'professionals' if you like) at their chosen format of poker while at the same time not being good at NLHU. (So I'm saying nothing against them; I don't know them. I've heard of Phil Laak though, he's cool.)
What I'm saying is that the people running this A.I. research project can easily skew their results any way they want by recruiting the wrong type of players (and then claiming the bot beats professionals).
Also, I don't believe their equations can possibly account for the fact that good players will adapt.
Anyway, it's probably a good thing if they get their experiment wrong; we really don't need any A.I. bots destroying the game.
(Can't they do something more useful with A.I. anyway, like solve unlimited renewable energy or find medical cures?)
(TL;DR - researchers can skew their own results & should really spend their time on more important projects)