In conclusion, I think I have to rewrite my poker hand evaluator directly in TensorFlow. I'm working on Google Cloud, so the RAM and CPU limits are simply whatever I can reasonably afford. What do you think is the most appropriate algorithm?
I'm thinking this:
- Batch: build the Kevin's-rule rank hash table for all 7-card hands: 133,784,560 hand ranks (C(52,7)), each saved as a 4-byte integer, so only about 0.5 GB (maybe I've made a mistake, it seems too small, but 133,784,560 x 4 bytes is indeed roughly 535 MB). 64 vCPUs may cost $0.10-0.20/hour; any tips for parallelizing this step? See the multiprocessing sketch after this list.
- Online: for every hand, simulate a random opponent hand and a random board N times, and update that hand's win total through a table lookup. Here I can parallelize over the simulations (or over the hands); a TensorFlow sketch of this step follows at the end of the post.
- Return the win totals array divided by N.
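For the batch step, something like the sketch below is what I have in mind: each worker handles all hands whose highest card is a given value, which is a contiguous block of the table when every hand is stored at its combinatorial (colex) index. The `evaluate_7card` name and the `my_cython_evaluator` module are placeholders for my Cython code, not existing functions.

```python
# Hedged sketch: build the 7-card rank table on 64 workers with multiprocessing.
# evaluate_7card(hand) -> int is a placeholder for the Cython evaluator.
import itertools
import numpy as np
from multiprocessing import Pool

from my_cython_evaluator import evaluate_7card  # hypothetical module name

# Pascal's triangle, binom[n, k] = C(n, k), used to compute hand indices.
binom = np.zeros((53, 8), dtype=np.int64)
binom[:, 0] = 1
for n in range(1, 53):
    for k in range(1, 8):
        binom[n, k] = binom[n - 1, k - 1] + binom[n - 1, k]

def colex_index(hand):
    """Combinatorial (colex) index of an ascending 7-card tuple: sum_i C(c_i, i+1)."""
    return sum(binom[c, i + 1] for i, c in enumerate(hand))

def build_block(top_card):
    """Rank every hand whose highest card is `top_card`: one contiguous table block."""
    offset = binom[top_card, 7]
    block = np.empty(binom[top_card + 1, 7] - offset, dtype=np.uint32)
    for rest in itertools.combinations(range(top_card), 6):
        hand = rest + (top_card,)
        block[colex_index(hand) - offset] = evaluate_7card(hand)
    return top_card, block

if __name__ == "__main__":
    with Pool(processes=64) as pool:                  # one worker per vCPU
        blocks = dict(pool.imap_unordered(build_block, range(6, 52)))
    table = np.concatenate([blocks[m] for m in range(6, 52)])
    assert table.shape[0] == 133784560                # C(52, 7)
    np.save("rank_table_7card.npy", table)            # ~0.5 GB of uint32 ranks
```

Partitioning by the top card gives 46 very uneven chunks (the block for top card 51 alone is about 18 million hands), so splitting by the top two cards would probably balance 64 workers better.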
I plan to write the batch script that builds the hash table in Cython (reusing the code from the project in my first post) and then use the table from a pure TensorFlow Python API app.
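To make the online step concrete, here is a rough TensorFlow 1.x graph-mode sketch: the table goes into a variable initialized from a placeholder, each sampled 7-card hand is mapped to its colex index with the same binomial formula as in the batch script, and equity is a gather plus a comparison. It assumes larger rank values mean stronger hands; the `deal` helper and the 0..51 card encoding are illustrative only.

```python
# Hedged sketch: Monte Carlo equity vs one random opponent via table lookup.
import numpy as np
import tensorflow as tf

N_7CARD = 133784560                                    # C(52, 7)

# Same Pascal triangle as in the batch script; binom_np[n, k] = C(n, k).
binom_np = np.zeros((53, 8), dtype=np.int64)
binom_np[:, 0] = 1
for n in range(1, 53):
    for k in range(1, 8):
        binom_np[n, k] = binom_np[n - 1, k - 1] + binom_np[n - 1, k]
binom = tf.constant(binom_np[:52, 1:8])                # binom[c, j] = C(c, j + 1)

# Load the rank table through a placeholder-initialized variable so the
# ~0.5 GB array is not serialized into the graph as a constant.
table_init = tf.placeholder(tf.int32, shape=[N_7CARD])
rank_table = tf.Variable(table_init, trainable=False)

hero_cards = tf.placeholder(tf.int32, shape=[None, 7])     # sorted ascending
villain_cards = tf.placeholder(tf.int32, shape=[None, 7])  # sorted ascending

def colex_index(cards):
    """Colex index of each sorted 7-card row: sum_j C(cards[:, j], j + 1)."""
    return tf.add_n([tf.gather(binom[:, j], cards[:, j]) for j in range(7)])

hero_rank = tf.gather(rank_table, colex_index(hero_cards))
villain_rank = tf.gather(rank_table, colex_index(villain_cards))
# Win = 1, tie = 0.5, loss = 0; assumes a larger rank means a stronger hand.
equity = tf.reduce_mean(
    tf.cast(hero_rank > villain_rank, tf.float32)
    + 0.5 * tf.cast(tf.equal(hero_rank, villain_rank), tf.float32))

def deal(hero_hole, n_sims):
    """NumPy helper: sample n_sims random boards + villain holes from the stub."""
    stub = np.array([c for c in range(52) if c not in hero_hole], dtype=np.int32)
    picks = stub[np.argsort(np.random.rand(n_sims, 50), axis=1)[:, :7]]
    board, villain_hole = picks[:, :5], picks[:, 5:]
    hero = np.sort(np.concatenate([np.tile(hero_hole, (n_sims, 1)), board], axis=1), axis=1)
    villain = np.sort(np.concatenate([villain_hole, board], axis=1), axis=1)
    return hero.astype(np.int32), villain.astype(np.int32)

with tf.Session() as sess:
    sess.run(rank_table.initializer,
             feed_dict={table_init: np.load("rank_table_7card.npy").astype(np.int32)})
    hero, villain = deal(np.array([12, 25]), n_sims=100000)  # two hole cards, 0..51
    print(sess.run(equity, feed_dict={hero_cards: hero, villain_cards: villain}))
```

Parallelizing over the simulations is then just the batch dimension of the placeholders, and the same graph handles many different hero hands at once since the lookup is per row.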