Poker-AI.org

Poker AI and Botting Discussion Forum
PostPosted: Sun Jun 03, 2018 11:58 pm 
New Member

Joined: Sun Jun 03, 2018 11:57 pm
Posts: 8
Hi all,

I've released an implementation of DeepStack for NL Texas Hold'em:
https://github.com/happypepper/DeepHoldem


PostPosted: Mon Jun 04, 2018 11:16 pm 
Veteran Member

Joined: Wed Mar 20, 2013 1:43 am
Posts: 267
Cool, will have a look.
How did you play against Slumbot? I only see a website; do you just send HTTP requests?


PostPosted: Tue Jun 05, 2018 2:19 am 
New Member

Joined: Sun Jun 03, 2018 11:57 pm
Posts: 8
I wrote a Selenium script to play on the site. If you look at the JavaScript, all the dealing code is in there, so I can extract the game situation by modifying the JS slightly.
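Roughly, the idea looks like this (a minimal Selenium sketch, not my actual script; the element id and the window.gameState global are made-up placeholders, the real names have to be read out of the site's own JS):

Code:
# Sketch of driving the Slumbot web page with Selenium.
# "call-button" and window.gameState are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://www.slumbot.com/")

# Pull the current deal/action state by evaluating JS in the page context.
state = driver.execute_script("return window.gameState;")

# ...feed `state` to the solver, then click the chosen action button...
driver.find_element(By.ID, "call-button").click()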


PostPosted: Wed Jun 06, 2018 12:03 am 
Veteran Member

Joined: Wed Mar 20, 2013 1:43 am
Posts: 267
I think this is a really good contribution; I will try to fire up my Linux partition soon and reproduce your bot.
On GitHub there was a discussion about how abstraction influences the Huber loss of the networks. I have run a few tests myself, and when you map the buckets back to cards and compute the loss in an unabstracted way, it definitely increases.
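The check I ran was along these lines (a rough sketch rather than my actual test code; the array names and shapes are just illustrative):

Code:
import numpy as np

def unabstracted_huber(bucket_cfvs, card_cfvs, bucket_of_hand, delta=1.0):
    # bucket_cfvs    : (num_buckets,) network output, one CFV per bucket
    # card_cfvs      : (num_hands,)   unabstracted target CFVs from the solves
    # bucket_of_hand : (num_hands,)   bucket index of every hand
    pred = bucket_cfvs[bucket_of_hand]      # every hand inherits its bucket's CFV
    err = np.abs(pred - card_cfvs)
    quad = np.minimum(err, delta)           # quadratic region of the Huber loss
    lin = err - quad                        # linear tail beyond delta
    return np.mean(0.5 * quad ** 2 + delta * lin)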
For actual online botting the thinking times might be a bit long; any ideas on how to decrease them?


PostPosted: Wed Jun 06, 2018 4:46 am 
New Member

Joined: Sun Jun 03, 2018 11:57 pm
Posts: 8
Yes, it is mainly the flop that is too slow.

There are a few ways it could be made faster:

- Precalculate the flop call matrix (saves around 1.5 seconds)
- Support a LuaJIT or C++ implementation
- Use the flop network preflop for 20 iterations, as in the paper, so the opponent CFVs can be looked up on the flop instead of recalculated. The downside is that the CFVs won't be very accurate
- Precalculate flop CFVs for all boards for common preflop situations
- Use fewer CFR iterations on the flop


PostPosted: Wed Jun 06, 2018 9:43 am 
New Member

Joined: Tue Mar 20, 2018 1:54 pm
Posts: 4
Thanks for the contribution. I have a couple of questions on implementation details.

Firstly, what bucketing methods did you use? Did you compare different methods?

And secondly, did you check other loss functions for neural-net training? I have also been experimenting with DeepStack for quite some time now, and in my experience simple MSE works better (though I only tested it on small games like Rhode Island Hold'em). I know the folks from UoA used Huber loss, but there were no explanations or comparisons in the paper.


PostPosted: Wed Jun 06, 2018 11:56 pm 
New Member

Joined: Sun Jun 03, 2018 11:57 pm
Posts: 8
Hey there,

The flop and turn were bucketed using k-means clustering with an earth mover's distance metric. I didn't include the bucketing code used to generate the bucketing data files since it was ugly and unpolished. I can clean it up and release it if enough people want it, though.

The river was bucketed using the pair (win%, tie%), assuming a uniform opponent range (similar to EHS).
I didn't experiment with different bucketing strategies.
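To give a rough picture of the river bucketing in code (a sketch only, assuming plain Euclidean k-means on the two river features, which may not match the repo exactly; the flop/turn clustering additionally needs the earth mover's distance, which this doesn't show):

Code:
import numpy as np
from sklearn.cluster import KMeans

def river_buckets(win_pct, tie_pct, num_buckets, seed=0):
    # win_pct, tie_pct: (num_hands,) equities vs a uniform opponent range
    features = np.stack([win_pct, tie_pct], axis=1)
    km = KMeans(n_clusters=num_buckets, n_init=10, random_state=seed).fit(features)
    return km.labels_            # bucket index for each hand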

That is an interesting suggestion, to use MSE instead of Huber loss. It's probably a good idea, since outliers in poker are actually quite important: nutted hands can sometimes have CFVs of 30x the pot size for certain range pairs, and it's these cases that carry the greatest loss when trained with Huber loss. IMO it's definitely worth running the experiment, but I don't know if I'll have the time to do it in the near future.
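If someone wants to run that experiment, swapping the criterion is the easy part (a PyTorch sketch for illustration; the repo's actual training loop is Lua/Torch, and these tensors are stand-ins):

Code:
import torch
import torch.nn as nn

# Stand-in predicted and target CFV tensors (batch x buckets).
pred = torch.randn(32, 1000, requires_grad=True)
target = torch.randn(32, 1000)

huber = nn.HuberLoss(delta=1.0)   # caps the gradient on large-CFV outliers past delta
mse = nn.MSELoss()                # keeps the full quadratic penalty on them

print(huber(pred, target).item(), mse(pred, target).item())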


PostPosted: Thu Jun 07, 2018 8:30 pm 
Site Admin

Joined: Sun Feb 24, 2013 9:39 pm
Posts: 642
Sadly I don't have time to take a really good look at this, but from what I see it seems like really good work. Thanks for the contribution. What computing resources did it take to create?


PostPosted: Thu Jun 07, 2018 10:57 pm 
New Member

Joined: Sun Jun 03, 2018 11:57 pm
Posts: 8
Thanks!

I used many of Google Cloud Platform's NVIDIA Tesla K80s in parallel. Since the code is preemptible-instance friendly, it is pretty cheap to achieve the results that I did. It cost about $3,000 worth of compute for me, though my code now contains many optimizations that it didn't have when I was generating the data. I was able to take advantage of a lot of free GCP credits, so it actually cost me $0 out of pocket.

If you wanted to generate 10x the amount of data (like the paper did), it would probably cost ~$30,000.


PostPosted: Tue Jul 24, 2018 9:20 am 
Veteran Member

Joined: Thu Feb 28, 2013 2:39 am
Posts: 437
happypepper wrote:
Hey there,

The flop and turn were bucketed using k-means clustering with an earth mover's distance metric. I didn't include the bucketing code used to generate the bucketing data files since it was ugly and unpolished. I can clean it up and release it if enough people want it, though.

The river was bucketed using the pair (win%, tie%), assuming a uniform opponent range (similar to EHS).
I didn't experiment with different bucketing strategies.

That is an interesting suggestion, to use MSE instead of Huber loss. It's probably a good idea, since outliers in poker are actually quite important: nutted hands can sometimes have CFVs of 30x the pot size for certain range pairs, and it's these cases that carry the greatest loss when trained with Huber loss. IMO it's definitely worth running the experiment, but I don't know if I'll have the time to do it in the near future.


Consider, just for the river, 'hand strength histogram homogeneity.' In my experimenting with HUNL CFRM strategies, that worked best in comparison. I'm not sure I have the code for it anymore, but I believe it involved calculating an inverse center-weighted skewness for the HS histogram. So, for a nine-slot HS histogram, you would normalize it, multiply it by [5,4,3,2,1,2,3,4,5], then calculate the skewness of the weighted histogram and divide the skewness into x buckets. Having a better 'understanding' of the histogram distribution (as opposed to EHS) seemed to allow better strategic performance.
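Something along these lines, from memory (a rough sketch of one reading of the procedure; the exact skewness variant and the bin edges are approximations):

Code:
import numpy as np
from scipy.stats import skew

CENTER_WEIGHTS = np.array([5, 4, 3, 2, 1, 2, 3, 4, 5], dtype=float)

def skewness_feature(hs_histogram):
    # hs_histogram: 9-slot hand-strength histogram for one river hand
    h = np.asarray(hs_histogram, dtype=float)
    h = h / h.sum()                  # normalize to a distribution
    weighted = h * CENTER_WEIGHTS    # weight the tails more than the center
    return skew(weighted)            # one scalar per hand

def skewness_buckets(histograms, num_buckets):
    # divide the skewness values into num_buckets equal-width bins
    s = np.array([skewness_feature(h) for h in histograms])
    edges = np.linspace(s.min(), s.max(), num_buckets + 1)
    return np.digitize(s, edges[1:-1])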

For flop and turn just do what you're doing.


PostPosted: Wed Jul 25, 2018 2:17 pm 
New Member

Joined: Sun Jun 03, 2018 11:57 pm
Posts: 8
Would the skewness just be 1 number then? Would it be used in combination with win%?


PostPosted: Sat Mar 13, 2021 9:00 pm 
Junior Member

Joined: Sat Mar 13, 2021 3:43 pm
Posts: 23
Glad I found this! Thanks for making it available; I'll take a deeper look as I move further into this subject.


PostPosted: Tue Dec 07, 2021 4:31 pm 
New Member

Joined: Tue Dec 07, 2021 8:33 am
Posts: 2
I liked this project but was struggling with the software stack used. That's why I converted it to a Python/PyTorch implementation:
https://github.com/lucky72s/dyypholdem

