Poker-AI.org

Poker AI and Botting Discussion Forum
PostPosted: Wed Jul 31, 2013 3:30 pm 
Regular Member

Joined: Sun Mar 03, 2013 11:55 am
Posts: 64
More theoretical stuff here I'm afraid.

If we've got some optimal convergence algorithm that uses chance-based sampling and has multiple bet amounts as possible actions, is that likely to skew the bot towards being too aggressive due to variance? In my head it is; if not, why not?

Here's my thought process:
Let's say our possible actions are Fold, Check/Call and then 100,000 different bet sizes (extreme example here). Due to variance, on marginal hands the chances are that one of those bet actions will appear more +EV than the passive actions, simply because there are so many of them.

If this is the case, can we measure it? What can we do about it?
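A rough simulation sketch of the effect (illustration only, not from any real bot): give every action the same true EV of zero, estimate each action's EV from a finite number of noisy samples, and look at how good the best-looking action appears. The apparent best EV drifts further above zero as the number of actions grows, which is exactly the worry above.

Code:
import random

# Sketch (illustration only): every action has a true EV of 0, but each EV is
# estimated from a finite number of noisy samples. The best-looking action's
# estimated EV is biased upward, and the bias grows with the number of actions.

def apparent_best_ev(num_actions, samples_per_action, noise_sd=1.0):
    best = float("-inf")
    for _ in range(num_actions):
        total = sum(random.gauss(0.0, noise_sd) for _ in range(samples_per_action))
        best = max(best, total / samples_per_action)  # sample-mean EV estimate
    return best

random.seed(1)
for n in (2, 100, 10000):
    runs = [apparent_best_ev(n, 25) for _ in range(5)]
    print(f"{n:>5} actions -> apparent best EV ~ {sum(runs) / len(runs):.2f} (true EV is 0)")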


PostPosted: Wed Jul 31, 2013 6:29 pm 
Junior Member

Joined: Mon Jun 03, 2013 9:06 pm
Posts: 20
Yes, maybe. With 100,000 different bet sizes, if the best action is checking, the bot will bet 1.01 or 1.05 blinds instead of checking, but it is not a big problem because the difference is not great if the pot is big.


PostPosted: Thu Aug 01, 2013 10:27 am 
Veteran Member

Joined: Thu Feb 28, 2013 2:39 am
Posts: 437
Wouldn't each action's EV average out over time, assuming you keep a history?

Maybe you could do something like Public Chance Sampling?


PostPosted: Thu Aug 01, 2013 10:42 am 
Site Admin

Joined: Sun Feb 24, 2013 9:39 pm
Posts: 642
OneDayItllWork wrote:
More theoretical stuff here I'm afraid.
This answer is theoretical too, since I don't really know.

OneDayItllWork wrote:
If we've got some optimal convergence algorithm that uses chance-based sampling and has multiple bet amounts as possible actions, is that likely to skew the bot towards being too aggressive due to variance? In my head it is; if not, why not? Let's say our possible actions are Fold, Check/Call and then 100,000 different bet sizes (extreme example here). Due to variance, on marginal hands the chances are that one of those bet actions will appear more +EV than the passive actions, simply because there are so many of them.

Are you saying that for a particular bet amount there are too few iterations to achieve convergence?

OneDayItllWork wrote:
If this is the case, can we measure it?

There is surely a relationship between sample size, variance and error. Maybe you could get an estimate of that from examining the history of convergence of more frequently visited nodes.
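
As a minimal sketch of that relationship (illustration only): the standard error of a sample-mean EV estimate shrinks with the square root of the number of visits, so a per-node visit history gives a rough error bar on each EV.

Code:
import math

# Sketch (illustration only): rough error bar on a sample-mean EV estimate.
# More visits -> smaller standard error, shrinking with sqrt(visit count).

def ev_with_error(ev_samples):
    n = len(ev_samples)
    mean = sum(ev_samples) / n
    variance = sum((x - mean) ** 2 for x in ev_samples) / (n - 1)
    return mean, math.sqrt(variance / n)  # (estimate, standard error)

mean, se = ev_with_error([0.4, -1.2, 0.9, 0.1, -0.3, 0.7])
print(f"EV ~ {mean:.2f} +/- {se:.2f}")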

OneDayItllWork wrote:
What can we do about it?

- increase the number of iterations
- reduce the number of bet actions
- concentrate the iterations on the area of interest
- smooth the results over "nearby" points using some sort of machine learning. Depending on your abstraction, "nearby" might be hard to define. Since performance is likely to be an issue maybe something like this http://webdocs.cs.ualberta.ca/~sutton/b ... ode88.html
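
As a rough sketch of the last idea (illustration only, and much cruder than the function-approximation approach behind the Sutton link): smooth the noisy per-bet-size EV estimates with a simple kernel average, where "nearby" just means close in bet size.

Code:
# Sketch (illustration only): smooth noisy per-bet-size EV estimates by
# averaging each point with its neighbours, weighting nearer bet sizes more.

def smooth_evs(bet_sizes, raw_evs, bandwidth):
    smoothed = []
    for b in bet_sizes:
        num = den = 0.0
        for other, ev in zip(bet_sizes, raw_evs):
            weight = max(0.0, 1.0 - abs(b - other) / bandwidth)  # triangular kernel
            num += weight * ev
            den += weight
        smoothed.append(num / den)
    return smoothed

bets = [1, 2, 3, 4, 5, 6, 7, 8]                          # bet sizes, e.g. pot fractions
raw  = [0.10, 0.40, 0.15, 0.35, 0.20, 0.30, 0.05, 0.25]  # noisy EV estimates
print(smooth_evs(bets, raw, bandwidth=3.0))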


PostPosted: Fri Aug 02, 2013 10:30 am 
Regular Member

Joined: Sun Mar 03, 2013 11:55 am
Posts: 64
spears wrote:
Are you saying that for a particular bet amount there are too few iterations to achieve convergence?
For a particular bet amount there are too few iterations to eliminate variance - obviously we'd need infinite iterations for that.

spears wrote:
There is surely a relationship between sample size, variance and error. Maybe you could get an estimate of that from examining the history of convergence of more frequently visited nodes.
This is a possibility, although you still get variance when measuring the variance, so it still couldn't be eliminated.

Quote:
- increase the number of iterations
This would work, but it's a large hit to the speed of convergence, especially if we increased it to the required infinity.

Quote:
- reduce the number of bet actions
This would help, although it makes us more exploitable.

Quote:
- concentrate the iterations on the area of interest
Already doing this.

Quote:
- smooth the results over "nearby" points using some sort of machine learning. Depending on your abstraction, "nearby" might be hard to define. Since performance is likely to be an issue maybe something like this http://webdocs.cs.ualberta.ca/~sutton/b ... ode88.html
Again, we've got performance issues.

It sounds like it's the usual speed vs. quality-of-result problem... It also sounds like there's not a huge amount that can be done about it.


PostPosted: Fri Aug 02, 2013 11:19 am 
Site Admin

Joined: Sun Feb 24, 2013 9:39 pm
Posts: 642
Does your abstraction allow you to define "nearby"? If not, maybe a solution is to find an abstraction that does.


PostPosted: Fri Aug 02, 2013 2:36 pm 
Regular Member

Joined: Sun Mar 03, 2013 11:55 am
Posts: 64
I can define "nearby" without too much difficulty, but doing so would increase memory consumption, as I'd need to store parent pointers back through the tree, which obviously isn't ideal when trying to conserve memory.

To be honest, I'm not massively concerned, as neither I nor anyone else is going to generate a perfect 6-max solution (probably not in my lifetime); it's just something I wanted to be aware of.

I'm storing floats using 8 bits, for god's sake... I really shouldn't be worrying about precision! ;-)


PostPosted: Sat Aug 03, 2013 12:36 pm 
Veteran Member

Joined: Thu Feb 28, 2013 2:39 am
Posts: 437
OneDayItllWork wrote:
I'm storing floats using 8 bits, for god's sake... I really shouldn't be worrying about precision! ;-)

How are you managing that?


PostPosted: Sun Aug 04, 2013 8:31 pm 
Regular Member

Joined: Sun Mar 03, 2013 11:55 am
Posts: 64
Nasher wrote:
OneDayItllWork wrote:
I'm storing floats using 8 bits, for god's sake... I really shouldn't be worrying about precision! ;-)

How are you managing that?

All EVs on a single node are usually within a fairly small range, so I store a multiplier and an offset on the node, and then store each float as a byte, applying the multiplier and offset when I need to convert it back into a workable number.

A bit like having a shared exponent between multiple float values, I suppose.
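
A minimal sketch of that scheme as described (illustration only; names and details are guesses, not the actual implementation): all EVs on a node share one offset and scale, and each EV is then stored as a single byte.

Code:
# Sketch (illustration only, not the poster's actual code): store all of a
# node's EVs as bytes that share a single offset and scale.

def pack_node(evs):
    lo, hi = min(evs), max(evs)
    scale = (hi - lo) / 255.0 or 1.0           # avoid a zero scale if all EVs are equal
    packed = bytes(round((ev - lo) / scale) for ev in evs)
    return lo, scale, packed                   # two floats per node, one byte per EV

def unpack_node(lo, scale, packed):
    return [lo + b * scale for b in packed]

lo, scale, packed = pack_node([-0.42, 0.07, 0.30, 0.31])
print(unpack_node(lo, scale, packed))          # recovered to within ~scale/2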


PostPosted: Sun Aug 04, 2013 10:21 pm 
Veteran Member

Joined: Thu Feb 28, 2013 2:39 am
Posts: 437
Don't you lose a great deal of decimal accuracy in that situation?


PostPosted: Mon Aug 05, 2013 9:37 am 
Regular Member

Joined: Sun Mar 03, 2013 11:55 am
Posts: 64
Nasher wrote:
Don't you lose a great deal of decimal accuracy in that situation?

Not as much as I'd lose if I had to quarter the size of my game tree.


PostPosted: Mon Aug 05, 2013 10:37 am 
Veteran Member

Joined: Thu Feb 28, 2013 2:39 am
Posts: 437
How do you know that for sure?


PostPosted: Mon Aug 05, 2013 11:07 am 
Regular Member

Joined: Sun Mar 03, 2013 11:55 am
Posts: 64
Nasher wrote:
How do you know that for sure?

I don't :-)


PostPosted: Mon Aug 05, 2013 11:17 am 
Veteran Member

Joined: Thu Feb 28, 2013 2:39 am
Posts: 437
In my brief experiments, increasing the size of the game tree at the cost of reduced decimal precision didn't improve performance.

Of course, that was with CFRM, where the gradient topology was described via those decimals.

