Poker-AI.org

Poker AI and Botting Discussion Forum

All times are UTC




PostPosted: Wed May 14, 2014 10:42 am — spears (Site Admin)
Joined: Sun Feb 24, 2013 9:39 pm
Posts: 642
A simulation using a known strategy S will give you observations O (action frequencies and showdown strengths) and hence probability p(O|S). I think you should be able to turn that round to give p(S|O) using Bayes rule and some other reasonable assumptions. Sorry this is so vague: I don't have a lot of time and my maths is very poor.

I think you should be able to use Bayes rule for the update weight too. The default strategy is a "prior".
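The Bayes step described here can be sketched in a few lines. Everything below is illustrative (the candidate strategies, their raise frequencies, and the class name are my own assumptions, not anything from the thread):

```java
import java.util.Arrays;

public class StrategyPosterior {
    // posterior[i] ∝ prior[i] * P(observation | strategy i), then renormalise
    static double[] update(double[] prior, double[] likelihood) {
        double[] post = new double[prior.length];
        double norm = 0.0;
        for (int i = 0; i < prior.length; i++) {
            post[i] = prior[i] * likelihood[i];
            norm += post[i];
        }
        for (int i = 0; i < post.length; i++) post[i] /= norm;
        return post;
    }

    public static void main(String[] args) {
        // Two candidate strategies: a "tight" one raising 10% of the time, a "loose" one raising 40%.
        double[] prior = { 0.5, 0.5 };       // the default strategy acts as the prior
        double[] likRaise = { 0.10, 0.40 };  // P(observed raise | strategy)
        double[] post = update(prior, likRaise);
        System.out.println(Arrays.toString(post)); // posterior shifts toward the loose strategy
    }
}
```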


PostPosted: Wed May 14, 2014 12:56 pm — Sacré d'Jeu (Junior Member)
Joined: Sun Mar 17, 2013 10:03 pm
Posts: 25
I'll give it some thought, and I'll come back on it later.

In the meantime, I have some questions about the bucketing and transition functions:
  • For each possible set of hole cards, I would calculate the current HS and the hand potential separately (in contrast to EHS). I do this because a hand with great potential might be played differently than an already-strong hand with less potential.

    For hand potential, I thought about using HP = Ppot - Pneg. Is there a better way to do this?

  • Every possible transition consists of 81 weight values (9 × 9 buckets). I've calculated that there are 1755 possible isomorphic flops, so this makes 81 × 1755 = 142155 values, just for the transitions at the flop. For the turn and especially the river, the numbers are even higher.

    Could you suggest how I should store all this information? I'm not looking for the fastest approach; a workable one will be fine.
    Have other botters used a similar approach? Has this been done before?


PostPosted: Wed May 14, 2014 1:41 pm — spears (Site Admin)
You are right not to use EHS. I would suggest using two parameters: average and variance of strength at showdown. A drawing hand will have a large variance, a made hand will have a low variance. I don't like current hand strength because it means nothing as it is never seen at showdown.
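A minimal sketch of the two suggested features. The sample strengths are made up purely to show the made-hand/drawing-hand contrast described above; the class and method names are my own assumptions:

```java
public class ShowdownStats {
    // mean and (population) variance of sampled showdown strengths
    static double[] meanVar(double[] strengths) {
        double mean = 0;
        for (double s : strengths) mean += s;
        mean /= strengths.length;
        double var = 0;
        for (double s : strengths) var += (s - mean) * (s - mean);
        var /= strengths.length;
        return new double[] { mean, var };
    }

    public static void main(String[] args) {
        // a made hand is consistently strong at showdown -> low variance
        double[] made = { 0.80, 0.82, 0.78, 0.81 };
        // a drawing hand either hits (strong) or misses (weak) -> large variance
        double[] draw = { 0.95, 0.10, 0.90, 0.15 };
        System.out.println(meanVar(made)[1] < meanVar(draw)[1]); // the draw has the larger variance
    }
}
```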

You definitely shouldn't be dealing with all those post flop isomorphisms. I think it should be possible to formulate the problem to make these large numbers go away. I'm thinking something like this:
- From pre-flop betting you know a bucket distribution for villain.
- You can translate that into a hole card distribution
- Then the flop comes
- You can update villain's bucket distribution
- Villain acts
- Update villain's bucket distribution
- .....
- Then the turn comes
- Update villain's bucket distribution
- ....
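The "villain acts → update villain's bucket distribution" steps in the list above are a standard Bayesian update. A sketch, assuming we have (or have learned) per-bucket action frequencies; all names and numbers are illustrative:

```java
import java.util.Arrays;

public class BucketUpdate {
    // P(bucket | action) ∝ P(bucket) * P(action | bucket), then renormalise
    static double[] onAction(double[] bucketDist, double[] pActGivenBucket) {
        double[] post = new double[bucketDist.length];
        double norm = 0;
        for (int b = 0; b < post.length; b++) {
            post[b] = bucketDist[b] * pActGivenBucket[b];
            norm += post[b];
        }
        for (int b = 0; b < post.length; b++) post[b] /= norm;
        return post;
    }

    public static void main(String[] args) {
        double[] dist = { 1 / 3.0, 1 / 3.0, 1 / 3.0 }; // weak, medium, strong buckets
        double[] pRaise = { 0.05, 0.20, 0.60 };        // assumed raise frequency per bucket
        System.out.println(Arrays.toString(onAction(dist, pRaise)));
        // after a raise, mass shifts toward the strong bucket
    }
}
```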


PostPosted: Wed May 14, 2014 2:14 pm — Sacré d'Jeu (Junior Member)
spears wrote:
You are right not to use EHS. I would suggest using two parameters: average and variance of strength at showdown. A drawing hand will have a large variance, a made hand will have a low variance. I don't like current hand strength because it means nothing as it is never seen at showdown.
That is a great idea :!: Thank you, sir!

spears wrote:
You definitely shouldn't be dealing with all those post flop isomorphisms. I think it should be possible to formulate the problem to make these large numbers go away. I'm thinking something like this:
- From pre-flop betting you know a bucket distribution for villain.
- You can translate that into a hole card distribution
- Then the flop comes
- You can update villain's bucket distribution
- Villain acts
- Update villain's bucket distribution
- .....
- Then the turn comes
- Update villain's bucket distribution
- ....

Yes, that's what I had in mind, but how would you suggest I do the highlighted updates then?

When new community cards are shown, the average and variance of strength change for all hole cards. E.g. on a dry board, a drawing hand will fall in both strength and variance. Ideally, I should be able to update the bucket distribution according to the fraction of hole cards that switch from one bucket to another; that is, I should know from which old buckets the hole cards in the new bucket come: b_new = 2% b1 + 6% b2 + 30% b3 + ...
And this transition is different for every type of flop.
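The update b_new = 2% b1 + 6% b2 + ... is just a matrix-vector product with a row-stochastic transition table. A sketch with a toy 3-bucket transition (the numbers are invented):

```java
public class BucketTransition {
    // t[i][j] = fraction of old bucket i's hole cards that land in new bucket j
    // (each row sums to 1), so bNew[j] = sum_i t[i][j] * bOld[i]
    static double[] apply(double[][] t, double[] bOld) {
        double[] bNew = new double[t[0].length];
        for (int i = 0; i < bOld.length; i++)
            for (int j = 0; j < bNew.length; j++)
                bNew[j] += t[i][j] * bOld[i];
        return bNew;
    }

    public static void main(String[] args) {
        // toy 3-bucket example: a dry flop pushes most of the drawing hands down
        double[][] t = {
            { 0.9, 0.1, 0.0 },
            { 0.3, 0.6, 0.1 },
            { 0.5, 0.4, 0.1 },
        };
        double[] bOld = { 0.2, 0.5, 0.3 };
        double[] bNew = apply(t, bOld);
        // probability mass is conserved: bNew sums to 1 because every row of t sums to 1
    }
}
```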


PostPosted: Wed May 14, 2014 2:38 pm — spears (Site Admin)
Sacré d'Jeu wrote:
spears wrote:
Sacré d'Jeu wrote:
I learn the opponent-specific distribution P (a | hc Є b, I ) based on the action in hands where the hole cards are shown.

I think that is biased. That's why I suggested simulation.

You are absolutely right. I understand that some people assume that, if he folds, he has a bad hand, and learn from these actions too. Maybe I'll do that.


We might be writing at cross purposes here. I think that calculating a strategy from showdown hands will not be accurate, because showdown hands are an unrepresentative sample.

When villain mucks, you can conclude that his hand is weaker than your shown hand. So you can figure out the cards he cannot have, and then update your estimates of the cards he had earlier in the hand.

It's reasonable to assume that villain only folds weak hands, but how do you make practical use of the assumption? For example, does he fold 100% of the weakest 20%, or 50% of the weakest 40%?
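One way to make the assumption operational is to treat the fold rule as a likelihood and condition the range on villain continuing. A sketch, where the threshold and fold probability are exactly the unknowns being asked about (the values below are placeholders):

```java
public class FoldModel {
    // Conditioning a range on "villain continued", under an assumed fold rule:
    // hands below `thresh` continue with probability (1 - pFold); stronger hands always continue.
    static double[] onContinue(double[] holeProb, double[] strength,
                               double thresh, double pFold) {
        double[] post = new double[holeProb.length];
        double norm = 0;
        for (int h = 0; h < post.length; h++) {
            double pCont = strength[h] < thresh ? 1 - pFold : 1.0;
            post[h] = holeProb[h] * pCont;
            norm += post[h];
        }
        for (int h = 0; h < post.length; h++) post[h] /= norm;
        return post;
    }

    public static void main(String[] args) {
        double[] prob = { 0.25, 0.25, 0.25, 0.25 };
        double[] str  = { 0.10, 0.30, 0.60, 0.90 };
        // "he folds 100% of the weakest 20%" corresponds to thresh = 0.2, pFold = 1.0
        double[] post = onContinue(prob, str, 0.2, 1.0);
        // the weakest hand is eliminated; the remaining three renormalise
    }
}
```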


PostPosted: Wed May 14, 2014 2:54 pm — spears (Site Admin)
Sacré d'Jeu wrote:
spears wrote:
You definitely shouldn't be dealing with all those post flop isomorphisms. I think it should be possible to formulate the problem to make these large numbers go away. I'm thinking something like this:
- From pre-flop betting you know a bucket distribution for villain.
- You can translate that into a hole card distribution
- Then the flop comes
- You can update villain's bucket distribution
- Villain acts
- Update villain's bucket distribution
- .....

Yes, that's what I had in mind, but how would you suggest I do the highlighted updates then?


- You have a preflop bucket distribution.
- You can get a preflop hole card distribution from that.
- The flop comes
- You can calculate the strength of every hole ***
- Since you know the strength of every hole and you know the distribution, you can calculate the bucket distribution

*** This is computationally challenging. But you could keep lookup tables for each flop isomorph, on disk if necessary.
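The last step, turning a hole-card distribution plus per-hole strengths into a bucket distribution, is a simple projection. A sketch, assuming the per-board bucket of each hole comes from the lookup tables mentioned above (all names and numbers are illustrative):

```java
public class BucketFromHoles {
    // Project a hole-card distribution onto buckets: each hole's new-board bucket
    // collects that hole's probability mass.
    static double[] bucketDist(double[] holeProb, int[] bucketOf, int nBuckets) {
        double[] dist = new double[nBuckets];
        for (int h = 0; h < holeProb.length; h++)
            dist[bucketOf[h]] += holeProb[h];
        return dist;
    }

    public static void main(String[] args) {
        double[] holeProb = { 0.5, 0.3, 0.2 }; // villain's pre-flop hole distribution
        int[] bucketOf    = { 0, 2, 2 };       // bucket of each hole on this flop (from a LUT)
        double[] dist = bucketDist(holeProb, bucketOf, 3);
        // mass collects in buckets 0 and 2; bucket 1 stays empty
    }
}
```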


PostPosted: Wed May 14, 2014 3:41 pm — Sacré d'Jeu (Junior Member)
spears wrote:
- You have a preflop bucket distribution.
- You can get a preflop hole card distribution from that.
- The flop comes
- You can calculate the strength of every hole ***
- Since you know the strength of every hole and you know the distribution, you can calculate the bucket distribution

*** This is computationally challenging. But you could keep lookup tables for each flop isomorph, on disk if necessary.

That's what I meant in the first place, I think. :)

So I need a LUT for every isomorph flop, turn and river that gives me the 81 values so I can calculate the new bucket distribution:
b0_new = x00 * b0_old + x10 * b1_old + x20 * b2_old + x30 * b3_old + ...
b1_new = x01 * b0_old + x11 * b1_old + x21 * b2_old + x31 * b3_old + ...

I'm capable of calculating the values, but how should I store them in a LUT? I'm afraid I will use all my heap space...
The three tables I need are of size 142,155 (exact), ~5M and ~200M (!) (the last two are rough estimates)...

I should probably use a similar index system to the one in http://www.poker-ai.org/phpbb/viewtopic.php?f=25&t=2660.


PostPosted: Wed May 14, 2014 4:06 pm — spears (Site Admin)
Sacré d'Jeu wrote:
I'm capable of calculating the values, but how should I store them in a LUT? I'm afraid I will use all my heap space...
The three tables I need are of size 142,155 (exact), ~5M and ~200M (!) (the last two are rough estimates)...
I should probably use a similar index system to the one in viewtopic.php?f=25&t=2660.


- I didn't understand that paper.
- Devise a scheme to represent the board isomorph, e.g. (Aa,Ab,3a) means (Ac,Ah,3c) and also (Ah,As,3h), plus a way of translating from a real board to the isomorph
- If you are going to keep this all in memory, create a hash table with the isomorph as the key and the vector of hole card strengths as the value. 200MB isn't big these days
- If you are going to put it on disk create a hash table with the isomorph as the key and a number as the value. Use the number as an index in a random access file
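The on-disk variant could look roughly like this in Java (the key string, record layout, and helper names are my assumptions; fixed-size records of 81 doubles make the file directly addressable):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

public class IsoTable {
    static final int N = 81; // 9x9 transition weights per board isomorph

    static long offset(long index) { return index * N * 8; } // 8 bytes per double

    static void write(RandomAccessFile f, long index, double[] vals) throws IOException {
        f.seek(offset(index));
        for (double v : vals) f.writeDouble(v);
    }

    static double[] read(RandomAccessFile f, long index) throws IOException {
        f.seek(offset(index));
        double[] vals = new double[N];
        for (int i = 0; i < N; i++) vals[i] = f.readDouble();
        return vals;
    }

    public static void main(String[] args) throws IOException {
        // hash table: isomorph key -> record index (keys follow the (Aa,Ab,3a) scheme above)
        Map<String, Long> indexOf = new HashMap<>();
        indexOf.put("Aa,Ab,3a", 0L);

        File tmp = File.createTempFile("iso", ".bin");
        tmp.deleteOnExit();
        try (RandomAccessFile f = new RandomAccessFile(tmp, "rw")) {
            double[] w = new double[N];
            w[0] = 0.5;
            write(f, indexOf.get("Aa,Ab,3a"), w);
            double[] back = read(f, indexOf.get("Aa,Ab,3a"));
            // back[0] is 0.5 again: the record round-trips through the file
        }
    }
}
```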


PostPosted: Wed May 14, 2014 4:58 pm — Junior Member
Joined: Wed Dec 04, 2013 12:40 am
Posts: 49
Just a note about the hand isomorphism paper: it can easily provide an exact isomorphism.
As you don't have much time for your work and it is already implemented, I recommend you use it directly.

Exact index counts for flop / turn / river with imperfect recall (on the turn we don't know which card is the turn card):
[1 755, 16 432, 134 459]

The memory sizes at 81 bytes each:
[142 155, 1 330 992, 10 891 179]

Given your numbers, I assume you use perfect recall; this can easily be done as well.

I don't know what programming language you are using, but in C/C++ you can use the original GitHub code: https://github.com/kdub0/hand-isomorphism
And if you need a Java wrapper, I could modify the one I published to make perfect-recall board indexing available.


PostPosted: Wed May 14, 2014 5:06 pm — spears (Site Admin)
You can reduce the river to 42769 by removing nut boards


PostPosted: Wed May 14, 2014 5:23 pm — Junior Member
So many nut boards? Good to know; one could add an intermediate boolean table to mark them :)


PostPosted: Fri May 16, 2014 11:09 am — Sacré d'Jeu (Junior Member)
I've calculated the hand-strength mean and variance for every starting hand; a graph of the results is shown below.

I should strategically divide these hands into buckets. I've been thinking about using a clustering algorithm, but maybe it's better to use 'expert knowledge'. What do you think?


I understand the paper now (it wasn't easy :) ), but it's actually quite simple. I can reproduce the algorithm to create the tables.
I'm coding in Java. I'm going to try to implement the indexing myself, since I need a variant where only the board cards are indexed (XXX | X | X).

Thanks for all help!


Attachments:
startingHands.png
startingHands.png [ 6.1 KiB | Viewed 14703 times ]
PostPosted: Fri May 16, 2014 12:24 pm — spears (Site Admin)
If you use a clustering algorithm you need to ensure that it gives the type of clustering you want. You need to choose bins that make a good distinction between different strengths. Equal frequency binning is exactly the wrong solution because it doesn't distinguish between strengths well. There is no expert knowledge to process hands postflop so you can't use that.

I think some ad hoc approach that is a 2 dimensional equivalent of equal width binning would probably work quite well. Divide the space into equal size rectangles in such a way that the number of rectangles with any content is the number of bins.
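A sketch of that 2D equal-width binning: map a (strength, variance) pair to a rectangle id. The grid bounds and sizes below are placeholders; empty rectangles would still need to be discarded afterwards to get the final bin count:

```java
public class GridBucket {
    // Equal-width 2D binning: map (x, y) in [xMin,xMax] x [yMin,yMax]
    // to one of nx*ny rectangles (values on the upper edge clamp into the last cell).
    static int bucket(double x, double xMin, double xMax, int nx,
                      double y, double yMin, double yMax, int ny) {
        int ix = Math.min(nx - 1, (int) ((x - xMin) / (xMax - xMin) * nx));
        int iy = Math.min(ny - 1, (int) ((y - yMin) / (yMax - yMin) * ny));
        return iy * nx + ix;
    }

    public static void main(String[] args) {
        // AA from the thread: strength 0.852, variance 0.0112, on an assumed 3x3 grid
        int b = bucket(0.852, 0.3, 0.9, 3, 0.0112, 0.0, 0.12, 3);
        // lands in the high-strength / low-variance corner of the grid
    }
}
```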

I don't understand the graph though. AA has a strength of 85.2% (against a uniform hand). Where is that on your graph?


PostPosted: Fri May 16, 2014 1:22 pm — Sacré d'Jeu (Junior Member)
spears wrote:
If you use a clustering algorithm you need to ensure that it gives the type of clustering you want. You need to choose bins that make a good distinction between different strengths. Equal frequency binning is exactly the wrong solution because it doesn't distinguish between strengths well. There is no expert knowledge to process hands postflop so you can't use that.

I think some ad hoc approach that is a 2 dimensional equivalent of equal width binning would probably work quite well. Divide the space into equal size rectangles in such a way that the number of rectangles with any content is the number of bins.

I don't understand the graph though. AA has a strength of 85.2% (against a uniform hand). Where is that on your graph?

Oh, crap. I've calculated the numbers on the rank (using your 7-card evaluator spears2p2). I'll be back. :)
(Thanks for the reaction about bucketing; I'll see what the actual graph looks like and then decide how to do the division.)


PostPosted: Sat May 17, 2014 6:47 am — spears (Site Admin)
Having thought about this some more, maybe k-means clustering will work OK. You might have to stretch one axis or the other to get the right division between strength and variance. Post your data and I'll do some experiments.


PostPosted: Mon May 19, 2014 10:11 am — Sacré d'Jeu (Junior Member)
I've struggled more with the indexing algorithm than expected, but it's working now. Eventually, I've basically translated the C-code into Java. I'll upload the code somewhere when this project is over.

So now I'm doing this:
- I iterate over every possible 5-card board (= 2,598,960 boards).
  • calculating the rank of every possible hole (using spears2p2)
  • then I calculate the hand strength of every possible hole for that particular board
  • and save it with the corresponding index (0 - 168) in a text file
- Then I go over the saved hand strengths and update the corresponding mean and variance, which results in the means and variances of every possible hole (169 in total)


I'm doing the first iteration now. At the current speed, it will take around a day to complete. The second iteration shouldn't take that long. I'll post the results when I have them.
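For the second iteration, an online (Welford) update avoids holding all saved hand strengths in memory at once. A sketch; the class name is mine, and this computes the population variance:

```java
public class RunningStats {
    long n;
    double mean, m2;

    // Welford's online update: numerically stable, one pass over the data,
    // so the second iteration never has to buffer all saved hand strengths.
    void add(double x) {
        n++;
        double d = x - mean;
        mean += d / n;
        m2 += d * (x - mean);
    }

    double variance() { return n > 0 ? m2 / n : 0; } // population variance

    // convenience: mean and variance of a small sample in one call
    static double[] of(double... xs) {
        RunningStats rs = new RunningStats();
        for (double x : xs) rs.add(x);
        return new double[] { rs.mean, rs.variance() };
    }

    public static void main(String[] args) {
        double[] mv = of(1, 2, 3);
        System.out.println(mv[0] + " " + mv[1]); // mean 2.0, variance 2/3
    }
}
```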


PostPosted: Tue May 20, 2014 12:52 pm — Sacré d'Jeu (Junior Member)
The algorithm is finished:

Mean HS (in %)
A K Q J T 9 8 7 6 5 4 3 2
A 85,20 67,04 66,21 65,39 64,60 62,78 61,94 60,98 59,91 59,92 59,03 58,22 57,38
K 65,32 82,40 63,40 62,57 61,79 59,99 58,31 57,54 56,64 55,79 54,88 54,05 53,21
Q 64,43 61,46 79,93 60,26 59,47 57,66 56,02 54,30 53,61 52,77 51,86 51,02 50,17
J 63,56 60,57 58,13 77,47 57,53 55,66 54,02 52,32 50,61 49,99 49,07 48,23 47,38
T 62,72 59,74 57,29 55,25 75,01 54,03 52,33 50,64 48,94 47,22 46,53 45,69 44,84
9 60,77 57,81 55,36 53,25 51,53 72,06 50,80 49,12 47,43 45,72 43,86 43,26 42,42
8 59,87 56,02 53,60 51,49 49,72 48,10 69,16 47,94 46,24 44,54 42,70 40,87 40,27
7 58,84 55,19 51,77 49,68 47,91 46,30 45,05 66,24 45,37 43,68 41,85 40,04 38,16
6 57,68 54,22 51,02 47,84 46,09 44,49 43,24 42,32 63,28 43,13 41,33 39,53 37,67
5 57,70 53,31 50,12 47,18 44,25 42,67 41,43 40,51 39,94 60,32 41,45 39,69 37,85
4 56,73 52,33 49,13 46,19 43,50 40,67 39,45 38,55 38,01 38,16 57,02 38,64 36,83
3 55,84 51,43 48,22 45,28 42,59 40,02 37,48 36,60 36,08 36,26 35,15 53,69 35,98
2 54,93 50,51 47,30 44,35 41,67 39,10 36,83 34,58 34,08 34,28 33,20 32,30 50,33

Variance HS (*10^(-2))
A K Q J T 9 8 7 6 5 4 3 2
A 1,12 6,16 5,98 5,84 5,75 5,57 5,59 5,65 5,73 5,98 6,12 6,22 6,29
K 5,95 1,33 7,27 7,11 7,00 6,72 6,57 6,59 6,63 6,70 6,82 6,91 7,00
Q 5,72 7,02 1,46 8,23 8,11 7,77 7,54 7,38 7,38 7,41 7,52 7,58 7,64
J 5,53 6,81 7,93 1,62 9,19 8,83 8,57 8,33 8,13 8,12 8,20 8,23 8,26
T 5,39 6,64 7,76 8,86 1,81 9,87 9,61 9,34 9,06 8,82 8,87 8,87 8,87
9 5,11 6,25 7,30 8,37 9,42 2,22 10,46 10,24 9,95 9,63 9,42 9,40 9,37
8 5,08 6,00 6,96 7,99 9,03 9,88 2,68 11,08 10,83 10,50 10,22 9,90 9,86
7 5,09 5,97 6,68 7,63 8,64 9,54 10,38 3,17 11,57 11,28 10,98 10,61 10,24
6 5,12 5,96 6,63 7,30 8,23 9,11 10,00 10,75 3,71 11,88 11,63 11,25 10,83
5 5,38 5,98 6,61 7,24 7,85 8,65 9,52 10,32 10,94 4,23 12,04 11,72 11,31
4 5,49 6,06 6,66 7,26 7,85 8,29 9,09 9,86 10,53 10,99 4,78 11,69 11,31
3 5,54 6,09 6,67 7,23 7,79 8,21 8,61 9,32 9,98 10,51 10,40 5,29 11,07
2 5,56 6,13 6,67 7,20 7,71 8,10 8,50 8,77 9,38 9,92 9,83 9,52 5,78


A graph is shown below. So what do you think?
How can I share the raw data with you?


Attachments:
File comment: graphic Mean and Variance HS Starting hands
startingHands.png
startingHands.png [ 9.27 KiB | Viewed 14630 times ]
PostPosted: Tue May 20, 2014 4:41 pm — spears (Site Admin)
I've transformed your data so it will go into weka as a csv
Code:
AAo, 0.852000, 0.011200
AKo, 0.670400, 0.061600
AQo, 0.662100, 0.059800
AJo, 0.653900, 0.058400
ATo, 0.646000, 0.057500
A9o, 0.627800, 0.055700
A8o, 0.619400, 0.055900
A7o, 0.609800, 0.056500
A6o, 0.599100, 0.057300
A5o, 0.599200, 0.059800
A4o, 0.590300, 0.061200
A3o, 0.582200, 0.062200
A2o, 0.573800, 0.062900
KAs, 0.653200, 0.059500
KKo, 0.824000, 0.013300
KQo, 0.634000, 0.072700
KJo, 0.625700, 0.071100
KTo, 0.617900, 0.070000
K9o, 0.599900, 0.067200
K8o, 0.583100, 0.065700
K7o, 0.575400, 0.065900
K6o, 0.566400, 0.066300
K5o, 0.557900, 0.067000
K4o, 0.548800, 0.068200
K3o, 0.540500, 0.069100
K2o, 0.532100, 0.070000
QAs, 0.644300, 0.057200
QKs, 0.614600, 0.070200
QQo, 0.799300, 0.014600
QJo, 0.602600, 0.082300
QTo, 0.594700, 0.081100
Q9o, 0.576600, 0.077700
Q8o, 0.560200, 0.075400
Q7o, 0.543000, 0.073800
Q6o, 0.536100, 0.073800
Q5o, 0.527700, 0.074100
Q4o, 0.518600, 0.075200
Q3o, 0.510200, 0.075800
Q2o, 0.501700, 0.076400
JAs, 0.635600, 0.055300
JKs, 0.605700, 0.068100
JQs, 0.581300, 0.079300
JJo, 0.774700, 0.016200
JTo, 0.575300, 0.091900
J9o, 0.556600, 0.088300
J8o, 0.540200, 0.085700
J7o, 0.523200, 0.083300
J6o, 0.506100, 0.081300
J5o, 0.499900, 0.081200
J4o, 0.490700, 0.082000
J3o, 0.482300, 0.082300
J2o, 0.473800, 0.082600
TAs, 0.627200, 0.053900
TKs, 0.597400, 0.066400
TQs, 0.572900, 0.077600
TJs, 0.552500, 0.088600
TTo, 0.750100, 0.018100
T9o, 0.540300, 0.098700
T8o, 0.523300, 0.096100
T7o, 0.506400, 0.093400
T6o, 0.489400, 0.090600
T5o, 0.472200, 0.088200
T4o, 0.465300, 0.088700
T3o, 0.456900, 0.088700
T2o, 0.448400, 0.088700
9As, 0.607700, 0.051100
9Ks, 0.578100, 0.062500
9Qs, 0.553600, 0.073000
9Js, 0.532500, 0.083700
9Ts, 0.515300, 0.094200
99o, 0.720600, 0.022200
98o, 0.508000, 0.104600
97o, 0.491200, 0.102400
96o, 0.474300, 0.099500
95o, 0.457200, 0.096300
94o, 0.438600, 0.094200
93o, 0.432600, 0.094000
92o, 0.424200, 0.093700
8As, 0.598700, 0.050800
8Ks, 0.560200, 0.060000
8Qs, 0.536000, 0.069600
8Js, 0.514900, 0.079900
8Ts, 0.497200, 0.090300
89s, 0.481000, 0.098800
88o, 0.691600, 0.026800
87o, 0.479400, 0.110800
86o, 0.462400, 0.108300
85o, 0.445400, 0.105000
84o, 0.427000, 0.102200
83o, 0.408700, 0.099000
82o, 0.402700, 0.098600
7As, 0.588400, 0.050900
7Ks, 0.551900, 0.059700
7Qs, 0.517700, 0.066800
7Js, 0.496800, 0.076300
7Ts, 0.479100, 0.086400
79s, 0.463000, 0.095400
78s, 0.450500, 0.103800
77o, 0.662400, 0.031700
76o, 0.453700, 0.115700
75o, 0.436800, 0.112800
74o, 0.418500, 0.109800
73o, 0.400400, 0.106100
72o, 0.381600, 0.102400
6As, 0.576800, 0.051200
6Ks, 0.542200, 0.059600
6Qs, 0.510200, 0.066300
6Js, 0.478400, 0.073000
6Ts, 0.460900, 0.082300
69s, 0.444900, 0.091100
68s, 0.432400, 0.100000
67s, 0.423200, 0.107500
66o, 0.632800, 0.037100
65o, 0.431300, 0.118800
64o, 0.413300, 0.116300
63o, 0.395300, 0.112500
62o, 0.376700, 0.108300
5As, 0.577000, 0.053800
5Ks, 0.533100, 0.059800
5Qs, 0.501200, 0.066100
5Js, 0.471800, 0.072400
5Ts, 0.442500, 0.078500
59s, 0.426700, 0.086500
58s, 0.414300, 0.095200
57s, 0.405100, 0.103200
56s, 0.399400, 0.109400
55o, 0.603200, 0.042300
54o, 0.414500, 0.120400
53o, 0.396900, 0.117200
52o, 0.378500, 0.113100
4As, 0.567300, 0.054900
4Ks, 0.523300, 0.060600
4Qs, 0.491300, 0.066600
4Js, 0.461900, 0.072600
4Ts, 0.435000, 0.078500
49s, 0.406700, 0.082900
48s, 0.394500, 0.090900
47s, 0.385500, 0.098600
46s, 0.380100, 0.105300
45s, 0.381600, 0.109900
44o, 0.570200, 0.047800
43o, 0.386400, 0.116900
42o, 0.368300, 0.113100
3As, 0.558400, 0.055400
3Ks, 0.514300, 0.060900
3Qs, 0.482200, 0.066700
3Js, 0.452800, 0.072300
3Ts, 0.425900, 0.077900
39s, 0.400200, 0.082100
38s, 0.374800, 0.086100
37s, 0.366000, 0.093200
36s, 0.360800, 0.099800
35s, 0.362600, 0.105100
34s, 0.351500, 0.104000
33o, 0.536900, 0.052900
32o, 0.359800, 0.110700
2As, 0.549300, 0.055600
2Ks, 0.505100, 0.061300
2Qs, 0.473000, 0.066700
2Js, 0.443500, 0.072000
2Ts, 0.416700, 0.077100
29s, 0.391000, 0.081000
28s, 0.368300, 0.085000
27s, 0.345800, 0.087700
26s, 0.340800, 0.093800
25s, 0.342800, 0.099200
24s, 0.332000, 0.098300
23s, 0.323000, 0.095200
22o, 0.503300, 0.057800


I've tried clustering in weka; example attached. I've messed about with stretching and compressing the strength axis, but haven't been very successful. I need to think about this some more. I'm also wondering whether points should be weighted by the number of instances. Will return to this tonight/tomorrow.


Attachments:
clusters.jpg
clusters.jpg [ 61.43 KiB | Viewed 14624 times ]
PostPosted: Tue May 20, 2014 5:38 pm — spears (Site Admin)
I've added a dummy point at strength = 0.3, variance = 0.5, then raised the strength to the power of 4 and used weka k-means to get this:

The dummy point gets a cluster of its own, and there are four points in the strongest cluster.
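For reference, the stretch-then-cluster idea can be sketched outside weka too. This is a bare-bones k-means with fixed, hand-picked starting centroids (all data and names below are invented for illustration):

```java
public class StretchedKMeans {
    // plain k-means: assign points to the nearest centroid, recompute centroids, repeat
    static int[] cluster(double[][] pts, double[][] cent, int iters) {
        int[] label = new int[pts.length];
        for (int it = 0; it < iters; it++) {
            for (int p = 0; p < pts.length; p++) {
                int best = 0;
                double bd = Double.MAX_VALUE;
                for (int c = 0; c < cent.length; c++) {
                    double dx = pts[p][0] - cent[c][0], dy = pts[p][1] - cent[c][1];
                    double d = dx * dx + dy * dy;
                    if (d < bd) { bd = d; best = c; }
                }
                label[p] = best;
            }
            double[][] sum = new double[cent.length][2];
            int[] cnt = new int[cent.length];
            for (int p = 0; p < pts.length; p++) {
                sum[label[p]][0] += pts[p][0];
                sum[label[p]][1] += pts[p][1];
                cnt[label[p]]++;
            }
            for (int c = 0; c < cent.length; c++)
                if (cnt[c] > 0) {
                    cent[c][0] = sum[c][0] / cnt[c];
                    cent[c][1] = sum[c][1] / cnt[c];
                }
        }
        return label;
    }

    public static void main(String[] args) {
        // (strength, variance) points; stretch the strength axis with s -> s^4 first
        double[][] raw = { { 0.50, 0.10 }, { 0.52, 0.11 }, { 0.85, 0.01 }, { 0.82, 0.02 } };
        double[][] pts = new double[raw.length][2];
        for (int i = 0; i < raw.length; i++) {
            pts[i][0] = Math.pow(raw[i][0], 4); // stretching separates the strong hands more
            pts[i][1] = raw[i][1];
        }
        double[][] cent = { { 0.0, 0.1 }, { 0.5, 0.0 } }; // fixed seeds keep the sketch deterministic
        int[] label = cluster(pts, cent, 5);
        // the two weak hands share one cluster, the two strong hands the other
    }
}
```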


Attachments:
clusters.jpg
clusters.jpg [ 73.96 KiB | Viewed 14622 times ]
PostPosted: Tue May 20, 2014 7:41 pm — Sacré d'Jeu (Junior Member)
Yeah, I've been doing somewhat the same. (I didn't want to post the raw data like you did; I thought it would take too much space :p)

The number of clusters could also be altered (keeping in mind the trade-off between simplicity and accuracy).
The goal should be to cluster the starting hands that call for the same strategy, right? So do we have an idea of what a good clustering should look like? I'm guessing big buckets for bad holes, and smaller buckets for strong hands?

Raising the hand strength to a power is a way to cluster the low starting hands closer together while using a less dense clustering for strong hands, right?
And what's the reasoning behind the dummy point?

Tomorrow I'm speaking with my supervisor, who has more experience with clustering. I'll come back here then.

THANKS!!!!


Powered by phpBB® Forum Software © phpBB Group