It's a typical problem in no-limit Texas Hold'em: abstracting all possible bet/raise sizes into just a few in order to make the game tree smaller.
Why I'm looking into this: I'm using MCTS, an algorithm that builds a search tree through simulation to approximate the actual game tree, yielding an EV estimate for each possible action.
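To make concrete what I mean by "an EV estimate for each possible action", here's a minimal sketch of the MCTS bookkeeping I have in mind. The Node class, select_child and the exploration constant are purely illustrative names, not from any particular library:

```python
import math

class Node:
    def __init__(self, parent=None, action=None):
        self.parent = parent
        self.action = action        # the (abstracted) fold/call/bet that led here
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

    def ev(self):
        # Monte Carlo estimate of this action's expected value
        return self.total_reward / self.visits if self.visits else 0.0

    def select_child(self, c=1.4):
        # UCT: balance the current EV estimate against exploration
        return max(self.children,
                   key=lambda ch: ch.ev()
                   + c * math.sqrt(math.log(self.visits + 1) / (ch.visits + 1e-9)))
```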
So, with the goal of getting the best possible EV estimate in mind, this is what I'm thinking for decision nodes (try to stay with me):
The chosen branches should lead to the highest divergence in poker situations. E.g. as a player, you will probably react the same way to a minimum raise and to a minimum raise + $0.01. So I think a good abstraction would group together those game states where the opponent will react the same (or similarly) and think the same about my cards.
These ranges of bet sizes are of course different for every player, so we should aim to estimate them for the average player. I think a good estimate could be based on pot odds (or implied odds), but how exactly that would work, I've yet to figure out.
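To illustrate how pot odds could drive the grouping (this is only my guess at how it might work; the threshold values are made up, not tuned):

```python
# Bucket bet sizes by the pot odds they offer the opponent: two bets that give
# (almost) the same pot odds should trigger (almost) the same reaction.
def pot_odds(bet, pot):
    # fraction of the final pot the opponent has to invest to call
    return bet / (pot + 2 * bet)

def bucket_of(bet, pot, thresholds=(0.15, 0.25, 0.33, 0.40)):
    # thresholds are example boundaries between the bet-size ranges
    odds = pot_odds(bet, pot)
    for i, t in enumerate(thresholds):
        if odds <= t:
            return i
    return len(thresholds)
```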
Once we've decided on these ranges, I think the representative bet/raise size for each range should be a weighted average, with the weights equal to the probability of each bet size (for example given by a learned distribution, player-specific or general).
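Something like this is what I mean by the weighted average (the function name and the {bet size: probability} dict are hypothetical, just to show the weighting):

```python
def representative_size(bucket_sizes, size_distribution):
    """bucket_sizes: the bet sizes falling into one range/bucket.
    size_distribution: learned P(bet size), player-specific or general."""
    weights = [size_distribution.get(s, 0.0) for s in bucket_sizes]
    total = sum(weights)
    if total == 0:
        # no data for this bucket: fall back to a plain average
        return sum(bucket_sizes) / len(bucket_sizes)
    return sum(s * w for s, w in zip(bucket_sizes, weights)) / total
```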
Now that you know what I'm thinking, I have some questions:
1. What do you think about this thought process?
2. Do you think the same could be said about opponent nodes?
3. I'm also wondering whether the root node should use a different abstraction than the other decision nodes. I think not, but I'm not sure. Also, can I reduce the number of branches deeper in the tree?
4. And a last thought: is all the trouble of finding a good abstraction worth it, compared to a uniform abstraction (X branches spread uniformly over the legal bet-size range) or an expert-knowledge abstraction (e.g. {0.5, 0.8, 1, 2} × pot size)? Both are sketched below.
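For reference, the two baseline abstractions from question 4 are trivial to set up, which is partly why I wonder whether the extra effort pays off (the function names below are my own, nothing standard):

```python
def expert_abstraction(pot):
    # fixed fractions of the pot: {0.5, 0.8, 1, 2} x pot size
    return [f * pot for f in (0.5, 0.8, 1.0, 2.0)]

def uniform_abstraction(min_bet, max_bet, branches):
    # X branches spread uniformly over the legal bet-size range
    step = (max_bet - min_bet) / (branches - 1)
    return [min_bet + i * step for i in range(branches)]

def nearest_abstract_size(bet, abstract_sizes):
    # map an observed real bet onto the abstraction (needed when putting
    # opponent actions back into the abstracted tree)
    return min(abstract_sizes, key=lambda s: abs(s - bet))
```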
PS. Some relevant approaches I've found:
- viewtopic.php?f=25&t=6.
- sampling from a learned distribution
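As I understand that second approach, it replaces fixed branches with draws from a distribution fitted to observed play; a rough sketch (the log-normal and its parameters are purely illustrative assumptions):

```python
import random

def sample_bet_size(pot, mu=-0.3, sigma=0.5):
    # sample a pot fraction from a log-normal fitted to observed bet sizes,
    # then scale it by the current pot
    fraction = random.lognormvariate(mu, sigma)
    return fraction * pot
```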