Poker-AI.org

Poker AI and Botting Discussion Forum
PostPosted: Tue Jan 31, 2017 3:58 pm 
Junior Member

Joined: Tue Dec 13, 2016 4:11 am
Posts: 13
AI Decisively Defeats Human Poker Players
http://spectrum.ieee.org/automaton/robo ... er-players

Noam Brown, the PhD student who worked on Libratus, also mentioned: "The basis for the bot is reinforcement learning using a special variant of Counterfactual Regret Minimization. We use a form of Monte Carlo CFR distributed over about 200 nodes. We also incorporate a sampled form of Regret-Based Pruning, which speeds up the computation quite a bit."

https://www.reddit.com/r/IAmA/comments/ ... y/dczfvej/
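
For anyone unfamiliar with CFR, below is a minimal Python sketch of the regret-matching update at the core of CFR/MCCFR. It is only meant to illustrate the idea: Libratus's distributed, sampled variant with Regret-Based Pruning is far more involved, and the class and variable names here are made up.

Code:
import numpy as np

class InfoSetNode:
    """Regrets and average strategy for one information set (illustrative only)."""
    def __init__(self, num_actions):
        self.regret_sum = np.zeros(num_actions)    # cumulative counterfactual regret per action
        self.strategy_sum = np.zeros(num_actions)  # cumulative strategy weights (for the average)

    def current_strategy(self):
        # Regret matching: play actions in proportion to their positive cumulative regret.
        positive = np.maximum(self.regret_sum, 0.0)
        total = positive.sum()
        if total > 0.0:
            return positive / total
        return np.full(len(self.regret_sum), 1.0 / len(self.regret_sum))

    def update(self, action_utilities, reach_prob):
        # In MCCFR the action utilities come from sampled traversals, not a full tree walk.
        strategy = self.current_strategy()
        node_utility = float(np.dot(strategy, action_utilities))
        self.regret_sum += action_utilities - node_utility
        self.strategy_sum += reach_prob * strategy

    def average_strategy(self):
        # The average strategy over all iterations is what converges toward equilibrium.
        total = self.strategy_sum.sum()
        if total > 0.0:
            return self.strategy_sum / total
        return np.full(len(self.strategy_sum), 1.0 / len(self.strategy_sum))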


PostPosted: Wed Feb 01, 2017 9:15 am 
Senior Member

Joined: Fri Nov 25, 2016 10:42 pm
Posts: 122
Would that method be applicable to online poker, given the short time to react and only an average computer?


PostPosted: Sat Feb 04, 2017 9:21 am 
New Member

Joined: Sat Jun 01, 2013 10:01 am
Posts: 7
It'll work fine if you use the $10 million supercomputer they have. Think I need to upgrade my little laptop :roll:


PostPosted: Fri Feb 10, 2017 7:23 pm 
Junior Member

Joined: Sat Apr 26, 2014 7:29 am
Posts: 34
[edit:] Sorry, the following post was written under the assumption that they use neural nets (I mixed it up with DeepStack, which does use neural nets). So you may not want to look at the GPU instance prices, but at the prices for normal instances.

You can train on Amazon and then just run fewer steps than they do for real play on local servers. You don't have to be as good as them to beat online players. I currently train my deep nets on 3 GPUs at home, but plan on using Amazon for the training. The spot price for a GPU is 10-15 cents an hour, so you can easily train on many GPUs for some days for reasonable money...
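
Just to put a number on that (the 10-15 cents/hour spot price is from above; the GPU count and training days are made-up inputs purely to show the arithmetic):

Code:
spot_price_per_gpu_hour = 0.15   # USD, upper end of the range quoted above
num_gpus = 8                     # hypothetical
days = 3                         # hypothetical
cost = spot_price_per_gpu_hour * num_gpus * 24 * days
print(f"Estimated spot cost: ${cost:.2f}")   # -> Estimated spot cost: $86.40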

Note: I don't follow that paper; I follow the main ideas of another reinforcement learning poker paper, but with some significant changes that reduce the effort greatly.

Training is what needs the most resources (at least for my bot); evaluation is cheap by comparison, especially if you batch smartly: the cost is not linear, and thousands of evals in one batch are much, much cheaper than thousands of single evals.
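
To illustrate the batching point, a toy sketch (the "network" is just one matrix multiply and all shapes are hypothetical): one forward pass over a whole batch of states amortizes the fixed per-call overhead that thousands of single evals would pay repeatedly.

Code:
import numpy as np

def evaluate(weights, states):
    # One matrix multiply for the whole batch instead of one call per state.
    return np.tanh(states @ weights)

weights = np.random.randn(128, 1)
states = np.random.randn(4096, 128)                  # 4096 game states to evaluate

values_batched = evaluate(weights, states)           # one batched call
values_single = [evaluate(weights, s[None, :]) for s in states]  # 4096 tiny calls, far slower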

[edit:]
The problem with Amazon is that only single GPUs are at that price; a machine with 16 GPUs is very expensive. And you have to move a lot of data around (at least for what I am doing). I am currently optimizing my data transfers to make sure I stay below what a cheap GPU instance offers (p2.xlarge, assuming a worst case of 800 Mbps; at the moment I am well above that with the scaling I plan to run).
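
A rough way to sanity-check that against the ~800 Mbps worst case mentioned above (the network size and sync frequency here are made-up numbers, just to show the calculation):

Code:
link_mbps = 800               # worst-case bandwidth assumed for the p2.xlarge above
params = 5_000_000            # hypothetical network size
bytes_per_param = 4           # float32
syncs_per_second = 2          # hypothetical full-parameter syncs per second

required_mbps = params * bytes_per_param * 8 * syncs_per_second / 1e6
print(f"Needed: {required_mbps:.0f} Mbps of {link_mbps} Mbps available")
# -> Needed: 320 Mbps of 800 Mbps available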


PostPosted: Tue Feb 14, 2017 1:19 pm 
Veteran Member

Joined: Wed Mar 20, 2013 1:43 am
Posts: 267
SkyBot wrote:
[edit:]
Note: I don't follow that paper; I follow the main ideas of another reinforcement learning poker paper, but with some significant changes that reduce the effort greatly.


Which paper do you follow in your training? It would be great if you could provide the name or a link.


PostPosted: Tue Feb 14, 2017 10:41 pm 
Junior Member

Joined: Sat Apr 26, 2014 7:29 am
Posts: 34
HontoNiBaka wrote:
SkyBot wrote:
[edit:]
Note: I don't follow that paper; I follow the main ideas of another reinforcement learning poker paper, but with some significant changes that reduce the effort greatly.


Which paper do you follow in your training? It would be great if you could provide the name or a link.

Deep Reinforcement Learning from Self-Play in Imperfect-Information Games
https://arxiv.org/abs/1603.01121

I use the main idea of having an average-policy and a best-response neural net and mixing them for training. However, while their approach should converge to a Nash equilibrium, I use some brutal optimizations where I may lose those properties.
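
For reference, here is a minimal sketch of the action-selection idea from that paper (NFSP by Heinrich and Silver): with probability eta act from the best-response network, otherwise from the average-policy network. The stub networks and numbers are made up; this is the paper's mixing scheme, not SkyBot's actual setup.

Code:
import random
import numpy as np

ETA = 0.1  # anticipatory parameter; the paper uses small values like 0.1

def select_action(state, best_response_net, average_policy_net):
    if random.random() < ETA:
        # Best response: act greedily on the action-value estimates.
        q_values = best_response_net(state)
        return int(np.argmax(q_values))
    # Average policy: sample from the policy network's action distribution.
    probs = average_policy_net(state)
    return int(np.random.choice(len(probs), p=probs))

# Stub networks for illustration only (3 actions: fold, call, raise).
best_response_net = lambda state: np.random.randn(3)
average_policy_net = lambda state: np.full(3, 1.0 / 3.0)

action = select_action(state=None, best_response_net=best_response_net,
                       average_policy_net=average_policy_net)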


PostPosted: Tue Mar 21, 2017 1:43 pm 
New Member

Joined: Tue Mar 21, 2017 1:37 pm
Posts: 1
SkyBot, we could cooperate. I am also investigating this topic and have experience in RL and deep learning. Right now I am trying another approach, but cooperation could be helpful. It looks like a lot of work here.

