Thanks again. While strong chess players rely on bots to ASSIST them in their study, they don't just trust them out of hand. One of the main uses strong players like Caruana make of bots is having a team of seconds search for positions that the bots get wrong. The other use is more like what I think you were thinking of: looking for positions that the bots say are even, but where the inferior side needs to play 'like a god' to hold the draw. The bots are obviously not infallible; if they were, neither side would chalk up wins in bot vs. bot games, and in BG people wouldn't bother running rollouts on complex timing positions.
I noticed I didn't give my reasoning earlier. It seems reasonable to assume that a game where one player misses good double/take opportunities several times in a row is probably, though not certainly, a low-volatility series of positions.
If a human player were 200 Elo better than XG, demonstrated by winning around 75% of their matches over a long series (75% is the figure from chess; I'm not certain of the exact number for BG), XG's analysis would still insist that the player played like an idiot and got very lucky. XG's standard is itself. If you disagree, even when you're right, it considers you wrong, unless on deeper analysis it sees that your play is in fact better.
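As an aside, the ~75% figure comes from the standard Elo expected-score formula used in chess rating theory (this is a general rating-model calculation, not anything XG-specific, and expected score is only a rough proxy for match win rate):

```python
def elo_expected_score(rating_diff):
    """Expected score for the higher-rated player under the standard
    Elo model: 1 / (1 + 10^(-diff/400))."""
    return 1 / (1 + 10 ** (-rating_diff / 400))

# A 200-point rating edge corresponds to roughly a 76% expected score.
print(round(elo_expected_score(200), 2))  # 0.76
```

So a player scoring around 75% over a long series is performing about 200 points above their opponent under this model.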
I don't think I'm better than XG. When it tells me I made a checker error, I usually agree instantly; I usually hadn't even considered its move. But acknowledging that it is better is different from accepting that it is infallible. I get outplays sometimes. Not as often as I get blunders, but sometimes.
Regarding cube decisions, you could be correct, as I can't always understand what XG is telling me even when its analysis is right there to study.