Public Chance Sampling -- am I missing something?
Since I used a public tree to implement vanilla CFR, I decided to do public chance sampling next. It seems like it should be simple: instead of a global iteration count, you keep a visits count on each node, use that in the counterfactual regret update, and replace the board-node enumeration with a uniformly random choice of child.
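Roughly, the change at the board-card node looks like this (a simplified sketch against my own tree, assuming each cfr_helper call returns a dict of EVs keyed by hole card, as in my regret update below):

Code:
# Sketch only: how the board-card node changes between vanilla CFR
# and PCS on a public tree. Assumes cfr_helper returns a dict of
# EVs keyed by hole card, matching my regret update below.
def vanilla_boardcard_node(self, root, reachprobs):
    # Enumerate every board card and average the children's EVs
    # (assuming uniform chance probability over the remaining cards).
    prob = 1.0 / len(root.children)
    payoffs = [self.cfr_helper(child, reachprobs) for child in root.children]
    return {hc: prob * sum(p[hc] for p in payoffs) for hc in payoffs[0]}

def pcs_boardcard_node(self, root, reachprobs):
    # Sample a single board card and recurse into just that child.
    return self.cfr_helper(random.choice(root.children), reachprobs)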

The code still works for HS Kuhn, which is a good sanity check since there are no board cards. However, when I run it on Leduc, it gets stuck around exploitability values of (0.35, 0.10) for players 1 and 2.

The MC papers talk about sampling the chance event in proportion to its likelihood, but since we're enumerating all hole cards, it seems to me that uniform random should be fine. I've also tried a full Bayesian update of the card probabilities in the deck before sampling, and I get the same results.
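For what it's worth, the weighted sampling I tried looks something like this (a sketch; board_card_weights is a hypothetical helper standing in for the Bayesian update, returning the probability mass of each child's board card given the cards already dealt):

Code:
# Sketch of likelihood-weighted sampling in place of random.choice.
# board_card_weights is a hypothetical helper returning, per child,
# the probability mass of its board card given the dealt cards.
def sample_boardcard(self, root):
    weights = self.board_card_weights(root)  # hypothetical helper
    r = random.uniform(0.0, sum(weights))
    for child, w in zip(root.children, weights):
        r -= w
        if r <= 0.0:
            return child
    return root.children[-1]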

Am I missing something here? Is there some other adjustment I need to make to the algorithm?

Code for my PCS implementation is below:

Code:
import random

class PublicChanceSamplingCFR(CounterfactualRegretMinimizer):
    def __init__(self, rules):
        CounterfactualRegretMinimizer.__init__(self, rules)
        self.init_helper(self.tree.root)

    def init_helper(self, node):
        # Give every node its own visit counter, replacing the
        # global iteration count used by vanilla CFR.
        node.visits = 0
        try:
            for child in node.children:
                self.init_helper(child)
        except AttributeError:
            # Leaf nodes have no children attribute.
            return

    def cfr_helper(self, root, reachprobs):
        root.visits += 1
        return CounterfactualRegretMinimizer.cfr_helper(self, root, reachprobs)

    def cfr_boardcard_node(self, root, reachprobs):
        # PCS: sample one board card uniformly instead of enumerating.
        bc = random.choice(root.children)
        return self.cfr_helper(bc, reachprobs)

    def cfr_regret_update(self, root, action_payoffs, ev):
        for action, subpayoff in enumerate(action_payoffs):
            if subpayoff is None:
                continue
            for hc, winnings in subpayoff[root.player].iteritems():
                # Immediate regret for this action, clamped at zero.
                immediate_regret = max(winnings - ev[hc], 0)
                infoset = self.rules.infoset_format(root.player, hc, root.board, root.bet_history)
                prev_regret = self.regret[root.player][infoset][action]
                # Running average of regret over this node's visits.
                self.regret[root.player][infoset][action] = 1.0 / (root.visits + 1) * (root.visits * prev_regret + immediate_regret)
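
For reference, my reading of the CFR papers is that cumulative regret is a plain sum, with no clamping and no running average; regret matching takes the positive part only when forming the strategy. Against my data structures, that update would look like this (sketch):

Code:
# Sketch: the straight-sum regret update from the CFR papers, for
# comparison with my averaged-and-clamped version above. Negative
# regrets accumulate too; regret matching takes the positive part
# of the running total only when it computes the strategy.
def cfr_regret_update_summed(self, root, action_payoffs, ev):
    for action, subpayoff in enumerate(action_payoffs):
        if subpayoff is None:
            continue
        for hc, winnings in subpayoff[root.player].iteritems():
            infoset = self.rules.infoset_format(root.player, hc, root.board, root.bet_history)
            self.regret[root.player][infoset][action] += winnings - ev[hc]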

Posted by longshot — Fri Apr 12, 2013 12:21 am