slocketping wrote:
Suppose we are looking at the equity for a specific hand against another specific hand on a hold'em flop. It might be 70/30 or 91/9 or 50/50. But this doesn't really tell the whole story. If we're going to plan out the hand, we should know the equities for every turn card. So a better way to describe the equity situation would be as an (unordered) set of 45 numbers between 0 and 1, representing the equity at each turn card. (However, there are a very limited number of these sets that could actually occur.)
So my first question is: What are some good ways to reduce the dimensionality of these "equity sets"? I'm familiar with deep learning techniques such as restricted Boltzmann machines for reducing dimensionality, but I'm thinking maybe that's overkill here?
UofA use Earth Mover's Distance and k-means to cluster hands:
http://poker-ai.org/phpbb/viewtopic.php?f=18&t=2381

Personally I think variance (and maybe skew and kurtosis too) in equity is a lot simpler and might be worth trying. I've found ways of reducing the dimensionality, but I'm sceptical that deep learning would find them, because the dimensionality of the original problem is so high.
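As a rough illustration of the variance/skew/kurtosis idea, here is a minimal Python sketch. It assumes a hypothetical hand-vs-hand helper equity(hero, villain, board) backed by whatever evaluator you already have; everything else is just NumPy/SciPy bookkeeping.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def turn_equity_vector(hero, villain, flop, deck, equity):
    """Equity of hero vs villain after each of the 45 possible turn cards.
    equity(hero, villain, board) is a hypothetical helper backed by any
    hand evaluator; deck is the full 52-card deck as a list."""
    dead = set(hero) | set(villain) | set(flop)
    turns = [c for c in deck if c not in dead]          # 45 live turn cards
    return np.array([equity(hero, villain, flop + [t]) for t in turns])

def equity_features(eq):
    """Collapse the 45-number "equity set" into a few summary statistics."""
    return {
        "mean": eq.mean(),
        "variance": eq.var(),
        "skew": skew(eq),
        "kurtosis": kurtosis(eq),
    }
```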
slocketping wrote:
Now we move on from flop hand vs hand to flop hand vs range analysis. This could be modeled as a probability distribution over all the possible equity sets for the hand vs hand problem. Are there techniques to reduce the dimensionality here?
Finally we move from flop hand vs range to flop range vs range analysis. This would be a probability distribution over the flop hand vs range distributions.
Why flop range vs range when you know flop hand? I don't necessarily disagree, but want to check we are on the same wavelength.
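For the hand vs range case, the usual object is the equity distribution: a histogram of hero-vs-combo equities, weighted by how often each combo appears in the range. A minimal sketch, again assuming the hypothetical equity() helper and a range given as a {combo: weight} dict:

```python
import numpy as np

def equity_histogram(hero, villain_range, board, equity, bins=10):
    """Hero's equity distribution vs a weighted range: a histogram over
    [0, 1] of hero-vs-combo equities, weighted by combo frequency.
    villain_range is assumed to be a {combo: weight} dict and equity()
    is the same hypothetical hand-vs-hand helper as above."""
    dead = set(hero) | set(board)
    hist = np.zeros(bins)
    for combo, weight in villain_range.items():
        if dead & set(combo):            # combo blocked by known cards
            continue
        e = equity(hero, list(combo), board)
        hist[min(int(e * bins), bins - 1)] += weight
    total = hist.sum()
    return hist / total if total else hist
```

In one dimension, Earth Mover's Distance between two such histograms is essentially the L1 distance between their cumulative sums, which is what makes the EMD + k-means clustering mentioned above cheap to run.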
slocketping wrote:
In hold'em, intuitively we can think of our range vs range at any given point as comprising a relatively small number of measurements. E.g., there would be the overall equity, how polarized each range is, how "nut-heavy" each range is, how draw-heavy they are, how many of the draws are very strong (more than 9 outs) vs weak (less than 8 outs), etc. Basically, what I'm wondering is: what kind of machine learning techniques, if any, have been applied to extract these kinds of characteristics for range vs range situations?
Apart from the work quoted above, I don't think anything has been published. I have my own approach, which reduces the problem size to something quite manageable, but it doesn't use machine learning. Rather, I looked at what calculations had to be done to compute everything completely accurately and then searched for good approximations. It doesn't use poker domain knowledge, because I don't have any. It has taken at least five years. I'd much prefer to have used a machine learning approach but couldn't find one that did the job.
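To make the kind of measurements in the quote concrete, here is a purely illustrative sketch (not the approach I described above) that pulls a few range-vs-range numbers out of the pairwise equities. The 0.8 "nuttish" cutoff and the feature names are arbitrary assumptions, and equity() is still the hypothetical hand-vs-hand helper:

```python
import numpy as np

def range_vs_range_features(range_a, range_b, board, equity):
    """A few hand-picked range-vs-range measurements of the kind listed in
    the quote. Ranges are {combo: weight} dicts."""
    per_combo_eq, per_combo_w = [], []
    for combo_a, w_a in range_a.items():
        dead = set(combo_a) | set(board)
        eqs, ws = [], []
        for combo_b, w_b in range_b.items():
            if dead & set(combo_b):      # card-removal effects
                continue
            eqs.append(equity(list(combo_a), list(combo_b), board))
            ws.append(w_b)
        if ws:
            per_combo_eq.append(np.average(eqs, weights=ws))
            per_combo_w.append(w_a)
    eq = np.array(per_combo_eq)
    w = np.array(per_combo_w)
    overall = np.average(eq, weights=w)
    return {
        "overall_equity": overall,
        # high spread in per-combo equity ~ a polarised range
        "polarisation": np.average((eq - overall) ** 2, weights=w),
        # share of the range sitting on very high equity ~ nut-heaviness
        "nut_heaviness": w[eq > 0.8].sum() / w.sum(),
    }
```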
slocketping wrote:
From an AI perspective, I would think any spot in poker (no limit hold'em) can be modeled as:
1. Amount in pot and effective stack sizes
2. My position, size of bet I'm facing
3. The hand vs range characteristics of this spot (think of this as just a list of quantitative features/measurements)
4. The range vs range characteristics of this spot (again just a list of quantitative features/measurements)
Given actual hands, ranges and boards, how might we reduce this to get the features/measurements for numbers 3 and 4 above? I would think that each street would have a different way of modeling these range vs range characteristics. Preflop would get even more complex.
Once postflop is solved, isn't preflop a piece of cake? Just run and cache all possible postflops?
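On the caching point, suit isomorphism keeps "all possible postflops" manageable: the 22,100 raw flops collapse to 1,755 strategically distinct ones. A quick sketch of the grouping (solve_postflop is just a placeholder for whatever postflop solver gets cached):

```python
from itertools import combinations, permutations

RANKS = range(2, 15)   # 2..14, ace high
SUITS = range(4)       # suit labels carry no information preflop
DECK = [(r, s) for r in RANKS for s in SUITS]

def canonical(flop):
    """Lexicographically smallest suit-relabelling of a flop, so that
    suit-isomorphic flops map to the same key."""
    best = None
    for perm in permutations(SUITS):
        relabelled = tuple(sorted((r, perm[s]) for r, s in flop))
        if best is None or relabelled < best:
            best = relabelled
    return best

# Group all C(52,3) = 22,100 flops into suit-isomorphic classes.
classes = {canonical(f) for f in combinations(DECK, 3)}
print(len(classes))    # 1755 strategically distinct flops

# Cache one postflop solution per class, e.g.
# cache = {c: solve_postflop(c) for c in classes}   # solve_postflop: placeholder
```

Each class still contains the full turn and river tree inside it, so whether preflop then becomes a piece of cake depends entirely on how fast the postflop solve is.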