Poker-AI.org Poker AI and Botting Discussion Forum 2013-12-02T18:56:58+00:00 http://poker-ai.org/phpbb/feed.php?f=22&t=2575 2013-12-02T18:56:58+00:00 2013-12-02T18:56:58+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5384#p5384 <![CDATA[Re: Efficiency of large memory problems]]>
I think it is very likely that your CPU has 8 hardware threads (hyper-threading) but only 4 physical cores. :)

Statistics: Posted by flopnflush — Mon Dec 02, 2013 6:56 pm


]]>
2013-12-02T17:50:27+00:00 2013-12-02T17:50:27+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5381#p5381 <![CDATA[Re: Efficiency of large memory problems]]> flopnflush wrote:

fraction wrote:
Each thread churns through X number of CS game trees (which means threads can collide).
What do you mean here? Does each thread operate on a different part of your game tree, or do they all perform parallel full chance-sampled tree traversals? And if so, do you synchronize anything?

Parallel full chance-sampled traversals. It's synchronised on regret updates. I'm fairly certain it's not purely a synchronisation problem though - to check, I've made (unsafe) unsynchronised code with non-volatile member variables and there's no improvement in speed.

Quote:

I'm also using Java and I'm also trying to figure out the best options for multithreading for my CS CFRM implementation. I would like to know whether it could screw up the algorithm if the regrets and average strategies are not updated atomically.
If the algorithm still converges when some values are occasionally not updated correctly, the best option would probably be to let each thread do independent chance-sampled tree-walks without synchronization. Then we might still need the "volatile" keyword, but I'm not sure about that. Can anyone explain to me what exactly the "volatile" keyword does? My understanding is that it prevents caching, so that every thread always reads and writes the actual values directly from and to RAM.


Yup, volatile guarantees visibility: if one thread writes to a variable, the new value is seen by all other threads (which might be on other cores) straight away. It doesn't make a compound update like regret += x atomic, though - that's still a separate read and write.

I'm trying to work out how to make the threads independent of each other, but the only way I can see would mean each having its own copy of the information sets. I need some algo that will allow me to split them up between threads.
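One way to square that circle - threads sharing one set of information sets, without locks and without losing writes - is an atomic compare-and-swap loop. This is only a sketch in Java; the class and method names are mine, not from anyone's actual bot:

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Hedged sketch of lock-free shared regrets: doubles are stored
// bit-mapped inside an AtomicLongArray, and each update is a
// compare-and-swap retry loop, so parallel chance-sampled walkers
// never block each other and never lose an update.
public class AtomicRegrets {
    private final AtomicLongArray bits; // element i holds the Double bits of regret i

    public AtomicRegrets(int size) {
        bits = new AtomicLongArray(size); // all-zero bits == 0.0
    }

    public void add(int i, double delta) {
        long prev, next;
        do {
            prev = bits.get(i);
            next = Double.doubleToRawLongBits(Double.longBitsToDouble(prev) + delta);
        } while (!bits.compareAndSet(i, prev, next)); // retry if another thread got there first
    }

    public double get(int i) {
        return Double.longBitsToDouble(bits.get(i));
    }
}
```

Atomics give you the volatile-style visibility discussed above plus atomicity of each single update; the trade-off is a retry loop under heavy contention.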

Statistics: Posted by fraction — Mon Dec 02, 2013 5:50 pm


]]>
2013-12-02T17:16:10+00:00 2013-12-02T17:16:10+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5377#p5377 <![CDATA[Re: Efficiency of large memory problems]]> fraction wrote:

Each thread churns through X number of CS game trees (which means threads can collide).
What do you mean here? Does each thread operate on a different part of your game tree, or do they all perform parallel full chance-sampled tree traversals? And if so, do you synchronize anything?

I'm also using Java and I'm also trying to figure out the best options for multithreading for my CS CFRM implementation. I would like to know whether it could screw up the algorithm if the regrets and average strategies are not updated atomically.
If the algorithm still converges when some values are occasionally not updated correctly, the best option would probably be to let each thread do independent chance-sampled tree-walks without synchronization. Then we might still need the "volatile" keyword, but I'm not sure about that. Can anyone explain to me what exactly the "volatile" keyword does? My understanding is that it prevents caching, so that every thread always reads and writes the actual values directly from and to RAM.

Statistics: Posted by flopnflush — Mon Dec 02, 2013 5:16 pm


]]>
2013-12-02T15:08:25+00:00 2013-12-02T15:08:25+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5372#p5372 <![CDATA[Re: Efficiency of large memory problems]]>
Not that I'm going to win any speed prizes with my implementation, but it'd be nice if it was a /bit/ faster :D

Statistics: Posted by fraction — Mon Dec 02, 2013 3:08 pm


]]>
2013-11-15T04:02:32+00:00 2013-11-15T04:02:32+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5249#p5249 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by Nose — Fri Nov 15, 2013 4:02 am


]]>
2013-11-14T16:26:34+00:00 2013-11-14T16:26:34+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5244#p5244 <![CDATA[Re: Efficiency of large memory problems]]> Nose wrote:

Sick shit. After reading this thread I wondered whether the cache or the computational power is the bottleneck in my setup. So I reduced the number of threads in my process from 32, to 16, to 8, and finally down to 1 (having 16 cores), with processor utilization decaying from 97% to 5%. However, the iterations per second went up from initially 16'000 to almost 40'000.

Not sure what language you are using, but are any of your threads calling any standard library functions while they are executing? e.g. allocating RAM, sqrt(), pow(), etc.

I had similar problems in the past whereby 4 threads were doing less work than 1 and it turned out to be very poor thread control in the M$ standard libraries... Switching to use the Intel compiler (which obviously has much better lock-free implementations of the standard library functions) solved the problem.

Juk :)

Statistics: Posted by jukofyork — Thu Nov 14, 2013 4:26 pm


]]>
2013-11-11T16:34:51+00:00 2013-11-11T16:34:51+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5229#p5229 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by cantina — Mon Nov 11, 2013 4:34 pm


]]>
2013-11-10T00:07:21+00:00 2013-11-10T00:07:21+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5222#p5222 <![CDATA[Re: Efficiency of large memory problems]]> Nasher wrote:

That's a very odd response to thread reduction. I'd say there's a problem with how you're doing multithreading.


True :( Every synchronization mechanism I've applied just stabs me in the back. I will try to disable them all. Let's see how it turns out :)

[Edit] Wow, that's odd. Even worse .... well, that's gonna be a boring day :(

Statistics: Posted by Nose — Sun Nov 10, 2013 12:07 am


]]>
2013-11-09T07:52:57+00:00 2013-11-09T07:52:57+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5210#p5210 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by cantina — Sat Nov 09, 2013 7:52 am


]]>
2013-11-08T21:56:52+00:00 2013-11-08T21:56:52+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=5206#p5206 <![CDATA[Re: Efficiency of large memory problems]]>
impressed! very helpful. thanks a lot!

Statistics: Posted by Nose — Fri Nov 08, 2013 9:56 pm


]]>
2013-09-07T08:16:54+00:00 2013-09-07T08:16:54+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4865#p4865 <![CDATA[Re: Efficiency of large memory problems]]> all the time so you design it so that all the big jumps in memory location are made at the same time, because a large jump in memory location costs the same as a medium size one and small jumps don't cost anything. So for example if you are navigating a tree and doing calculations only within a node and with adjacent nodes you should keep all data for a particular node together, and data for adjacent nodes together. Multicore multithreading makes this even more difficult to get right because different threads will be working on very different parts of memory.

Statistics: Posted by spears — Sat Sep 07, 2013 8:16 am


]]>
2013-09-07T06:24:42+00:00 2013-09-07T06:24:42+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4864#p4864 <![CDATA[Re: Efficiency of large memory problems]]> Nasher wrote:

Moving all the data into a single, external array actually ended up slowing things down. Each thread had the same efficiency, but they just took longer to traverse the tree. :)

I don't doubt what you're telling me about the cache hits, I'm just not sure what to do about it.

Run a profiler and see what's taking the time.

Statistics: Posted by OneDayItllWork — Sat Sep 07, 2013 6:24 am


]]>
2013-09-07T01:24:13+00:00 2013-09-07T01:24:13+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4863#p4863 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by iKNOWpoker — Sat Sep 07, 2013 1:24 am


]]>
2013-09-07T01:27:24+00:00 2013-09-07T01:16:31+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4862#p4862 <![CDATA[Re: Efficiency of large memory problems]]>

I don't doubt what you're telling me about the cache hits, I'm just not sure what to do about it.

Statistics: Posted by cantina — Sat Sep 07, 2013 1:16 am


]]>
2013-09-06T11:14:26+00:00 2013-09-06T11:14:26+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4848#p4848 <![CDATA[Re: Efficiency of large memory problems]]> Nasher wrote:

Hmm... Would .NET be doing the tossing? I'm convinced something is, because, like I said, the smaller the data arrays, the faster things get.


Without going into the specifics of your algorithm, the above sounds like normal behaviour - you just get more cache hits and fewer cache misses (each miss requiring a fetch from main memory) when your data structure is smaller. Also, I wouldn't worry about the "tossing" - as far as I understand it, the cost of bringing a whole block of memory into the CPU (and thus into on-CPU cache) is the same as the cost of bringing in a single value. So it's not like you're wasting time because of the CPU's caching efforts.

Statistics: Posted by PolarBear — Fri Sep 06, 2013 11:14 am


]]>
2013-09-06T11:02:23+00:00 2013-09-06T11:02:23+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4846#p4846 <![CDATA[Re: Efficiency of large memory problems]]> Nasher wrote:

Maybe I should keep my data outside of the game tree nodes? If it is the references of the node class that's causing the tossing, keeping the data marshaled and just storing a single reference pointer within the node class would reduce that.

Keep as little data as you possibly can in a struct that will enable you to iterate the tree. Store all the nodes of the tree in an array to enforce some kind of locality. And in your array of structs store an object reference to data you actually need to work with when doing something with a node.

Statistics: Posted by OneDayItllWork — Fri Sep 06, 2013 11:02 am


]]>
2013-09-06T11:14:40+00:00 2013-09-06T10:58:43+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4845#p4845 <![CDATA[Re: Efficiency of large memory problems]]> Nasher wrote:

PolarBear wrote:
I concur with OneDay on this - it's likely that the memory bandwidth of your machine is just not enough given the number of threads running and that each thread constantly accesses different areas of memory.


Like I mentioned above, it gets faster when the data arrays have smaller bounds. Given what OneDay said about the caching, this makes me think that it's tossing around the entirety of those arrays when there is work being done on just a few elements therein. I'm hoping marshaling the specific elements I need to work with will bypass that tossing.


edit: I thought my previous post didn't post, so I've written some of the same things again.

An array is always stored in one contiguous section of memory, so accessing an element does not read the whole array into the cache. .NET has a pointer to the start of the array, it has an index, and it knows the size of the elements, so it can jump straight to the section of memory it needs. It will not read the whole array - and as most CPU caches are < 10MB, it couldn't hold anything near the whole array anyway if the array is large.

How the hardware decides what to pull into the CPU cache, other than what was requested, I'm not totally sure. At minimum it grabs a whole cache line (typically 64 bytes), and prefetchers will often pull in adjacent lines on top of that. It doesn't look at what that memory contains, or whether it's in any way related; it's just a contiguous block. The logic being that if you're looking at something in that area of memory, then you'll probably want something else in that area as well. This is especially true when working with objects, as all of an object's value-type fields are stored in a single block of memory. It also means multiple values live under a single entry in the CPU cache, which keeps the cache's lookup structures small.

This brings me onto my next point. An array of objects is an array of object references, which means the objects themselves have no locality to the array - you'll just be jumping all over the place. You need to use an array of structs; then all the structs are stored adjacently in a single block of memory.

As said, assuming you're working with some kind of tree that you iterate through, you want adjacent nodes as close to each other as possible in the array. Due to the exponentially increasing size of a tree with each level, trees are horrible structures for cache locality, but there's not much you can do about that.

So to sum it up, don't bother with marshalling, just use an array of structs.
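For the Java readers in this thread, the closest analogue to an array of structs is parallel primitive arrays indexed by node id. A sketch under that assumption (all names are illustrative, not anyone's actual bot):

```java
// Struct-like locality in Java: instead of one object per tree node,
// keep node fields in parallel primitive arrays indexed by node id,
// laid out so a node's children occupy consecutive indices.
public class FlatTree {
    public final int[] firstChild;   // index of first child, -1 for a leaf
    public final int[] childCount;   // number of children
    public final double[] regret;    // per-node data, contiguous in memory

    public FlatTree(int nodes) {
        firstChild = new int[nodes];
        childCount = new int[nodes];
        regret = new double[nodes];
    }

    // Walks a subtree by index arithmetic only - no object references
    // to chase, so adjacent nodes tend to share cache lines.
    public double subtreeRegret(int node) {
        double sum = regret[node];
        for (int c = 0; c < childCount[node]; c++) {
            sum += subtreeRegret(firstChild[node] + c);
        }
        return sum;
    }
}
```

Layout matters here: numbering children breadth-first keeps siblings adjacent, which is exactly the "adjacent nodes close together" advice above.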

Statistics: Posted by OneDayItllWork — Fri Sep 06, 2013 10:58 am


]]>
2013-09-06T10:14:56+00:00 2013-09-06T10:14:56+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4844#p4844 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by cantina — Fri Sep 06, 2013 10:14 am


]]>
2013-09-06T10:09:49+00:00 2013-09-06T10:09:49+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4843#p4843 <![CDATA[Re: Efficiency of large memory problems]]> OneDayItllWork wrote:

Requesting something from an array does not move the entire array into the cache, it will still just grab a section of memory. I'm not totally sure how that section is decided upon, but it doesn't look at what it's getting hold of, it literally just grabs a section. So that memory may be totally unrelated.

Hmm... Would .NET be doing the tossing? I'm convinced something is, because, like I said, the smaller the data arrays, the faster things get. Maybe when the tree is traversed and the node class is referenced it's bringing the entire thing into cache?

OneDayItllWork wrote:

You should be using arrays of structs in .NET, an array of objects is just a array of object references, so the objects themselves wont be local to the array. An array of structs stores the structs in a continuous section of memory, therefore it makes much better use of the CPU caches.

I'm not sure what you're suggesting here? Things look like this:

Code:
Class GameTreeNode

  Dim data(,,,,) As Double
  Dim children() As GameTreeNode

  Sub Train(ByVal some_stuff As Object)
    For i As Integer = 0 To children.Length - 1
      children(i).Train(some_stuff)
    Next
  End Sub

End Class

Statistics: Posted by cantina — Fri Sep 06, 2013 10:09 am


]]>
2013-09-06T10:05:21+00:00 2013-09-06T10:05:21+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4842#p4842 <![CDATA[Re: Efficiency of large memory problems]]> Quote:

Doesn't the marshalling itself require the array to be moved into the cache?

I meant to ask
Quote:

Doesn't the marshalling itself require the portion of the array containing the target value to be moved into the cache?

Statistics: Posted by spears — Fri Sep 06, 2013 10:05 am


]]>
2013-09-06T09:58:12+00:00 2013-09-06T09:58:12+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4840#p4840 <![CDATA[Re: Efficiency of large memory problems]]> does not move the entire array into the cache, it will still just grab a section of memory. I'm not totally sure how that section is decided upon, but it doesn't look at what it's getting hold of, it literally just grabs a section. So that memory may be totally unrelated.

You should be using arrays of structs in .NET: an array of objects is just an array of object references, so the objects themselves won't be local to the array. An array of structs stores the structs in a contiguous section of memory, and therefore makes much better use of the CPU caches.

You just somehow need to work out which objects are frequently accessed together, and try to keep them together. So assuming you're storing nodes of a tree, try to keep adjacent nodes as close together as possible. A side note - trees are a killer for cache locality, but there's not much you can do about that.

Statistics: Posted by OneDayItllWork — Fri Sep 06, 2013 9:58 am


]]>
2013-09-06T09:51:41+00:00 2013-09-06T09:51:41+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4839#p4839 <![CDATA[Re: Efficiency of large memory problems]]>
Maybe I should try splitting up the arrays without marshaling first. I could define an array for each texture bucket post-flop? Then, at loading/saving just merge it back together.

Statistics: Posted by cantina — Fri Sep 06, 2013 9:51 am


]]>
2013-09-06T09:30:20+00:00 2013-09-06T09:30:20+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4838#p4838 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by spears — Fri Sep 06, 2013 9:30 am


]]>
2013-09-06T09:15:48+00:00 2013-09-06T09:15:48+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4837#p4837 <![CDATA[Re: Efficiency of large memory problems]]>
http://en.wiktionary.org/wiki/tosser

Statistics: Posted by cantina — Fri Sep 06, 2013 9:15 am


]]>
2013-09-06T09:13:04+00:00 2013-09-06T09:13:04+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4836#p4836 <![CDATA[Re: Efficiency of large memory problems]]> PolarBear wrote:

I concur with OneDay on this - it's likely that the memory bandwidth of your machine is just not enough given the number of threads running and that each thread constantly accesses different areas of memory.

Like I mentioned above, it gets faster when the data arrays have smaller bounds. Given what OneDay said about the caching, this makes me think that it's tossing around the entirety of those arrays when there is work being done on just a few elements therein. I'm hoping marshaling the specific elements I need to work with will bypass that tossing.

Statistics: Posted by cantina — Fri Sep 06, 2013 9:13 am


]]>
2013-09-06T09:07:41+00:00 2013-09-06T09:07:41+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4835#p4835 <![CDATA[Re: Efficiency of large memory problems]]> OneDayItllWork wrote:

When the CPU requests something from RAM, it doesn't just grab what the CPU asks for, but will grab a page of memory and put all of that in the CPU cache. Therefore if you're using an array it'll sweep a whole bunch of objects into the CPU cache, and ideally you want those objects to be useful rather than a load of stuff you couldn't care less about. So yes, arrays of data with objects accessed together adjacent to each other is the way forwards.


From a single iteration's perspective, I don't care about anything in the array except what I need to crunch. My thought is that marshaling will prevent the program from "grabbing" the entire array into cache when I'm working on just a few elements. I don't know, though, as I've never had this problem to deal with. ;)

I'm not sure how to force useful objects to be accessed together?

Statistics: Posted by cantina — Fri Sep 06, 2013 9:07 am


]]>
2013-09-06T09:02:26+00:00 2013-09-06T09:02:26+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4834#p4834 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by cantina — Fri Sep 06, 2013 9:02 am


]]>
2013-09-06T09:00:02+00:00 2013-09-06T09:00:02+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4833#p4833 <![CDATA[Re: Efficiency of large memory problems]]>
You want to be using the 4.5 framework and turn on gcAllowVeryLargeObjects to allow arrays > 2GB in size.

Statistics: Posted by OneDayItllWork — Fri Sep 06, 2013 9:00 am


]]>
2013-09-06T08:52:59+00:00 2013-09-06T08:52:59+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4832#p4832 <![CDATA[Re: Efficiency of large memory problems]]>

Statistics: Posted by cantina — Fri Sep 06, 2013 8:52 am


]]>
2013-09-05T23:26:00+00:00 2013-09-05T23:26:00+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4826#p4826 <![CDATA[Re: Efficiency of large memory problems]]> Nasher wrote:

I should have stipulated, I'm not rounding. .NET marshaling only has integer read/writes. So, I need some way of "converting" a double into a long, and vice versa, while maintaining its actual decimal value. Basically, just a direct bit mapping.

The quickest way is possibly to do something 'unsafe'. Or you can take a look at this:
http://msdn.microsoft.com/en-us/library ... erter.aspx
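For the Java side of this thread, the same bit-level round trip exists in the standard library. A minimal sketch (the class and method names are just for illustration; in .NET the BitConverter.DoubleToInt64Bits / Int64BitsToDouble pair does the equivalent job):

```java
// Direct bit mapping between double and long: the 64-bit pattern is
// reinterpreted rather than the numeric value converted, so the round
// trip is exact, with no rounding at all.
public class BitMap {
    public static long toBits(double d) {
        return Double.doubleToRawLongBits(d); // "raw" preserves NaN payloads too
    }

    public static double fromBits(long bits) {
        return Double.longBitsToDouble(bits);
    }

    public static void main(String[] args) {
        double x = 3.14159;
        long stored = toBits(x);                   // what you'd write via integer I/O
        System.out.println(fromBits(stored) == x); // true - exact round trip
    }
}
```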

Statistics: Posted by OneDayItllWork — Thu Sep 05, 2013 11:26 pm


]]>
2013-09-05T21:17:22+00:00 2013-09-05T21:17:22+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4825#p4825 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by PolarBear — Thu Sep 05, 2013 9:17 pm


]]>
2013-09-05T20:58:52+00:00 2013-09-05T20:58:52+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4824#p4824 <![CDATA[Re: Efficiency of large memory problems]]> http://msdn.microsoft.com/en-us/library ... erter.aspx :?:
http://msdn.microsoft.com/en-us/library/ms146627.aspx :?:
http://msdn.microsoft.com/en-us/library/ms146633.aspx :?:

Statistics: Posted by spears — Thu Sep 05, 2013 8:58 pm


]]>
2013-09-05T20:03:45+00:00 2013-09-05T20:03:45+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4823#p4823 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by cantina — Thu Sep 05, 2013 8:03 pm


]]>
2013-09-05T18:59:01+00:00 2013-09-05T18:59:01+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4822#p4822 <![CDATA[Re: Efficiency of large memory problems]]> Nasher wrote:

Now, fastest way to convert a double into an int64, and vice versa? :)


(double)x

Will certainly work to convert the Int64 to a double. I'm guessing that:

(long)x

May well work, though it truncates towards zero rather than rounding - I'm not certain. It's a hell of a lot faster than calling Convert.ToXxx(), although it doesn't work in the same way and doesn't perform the same checks.

Statistics: Posted by OneDayItllWork — Thu Sep 05, 2013 6:59 pm


]]>
2013-09-05T16:01:43+00:00 2013-09-05T16:01:43+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4821#p4821 <![CDATA[Re: Efficiency of large memory problems]]>

Statistics: Posted by cantina — Thu Sep 05, 2013 4:01 pm


]]>
2013-09-05T08:28:03+00:00 2013-09-05T08:28:03+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4819#p4819 <![CDATA[Re: Efficiency of large memory problems]]> Nasher wrote:

array[a, b, c, d, e] = a * UpperBound(0) + b * UpperBound(1) + c * UpperBound(2) + d * UpperBound(3) + e?


Add some parentheses, and use the lengths of the later dimensions (not the first ones) as the multipliers (I think):

( ( ( ( a * Len(1) + b ) * Len(2) + c ) * Len(3) + d ) * Len(4) + e

where Len(K) = UpperBound(K) + 1, the length of dimension K.
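Worked through in Java (hypothetical names; lenK below means the length of dimension K, i.e. UpperBound(K) + 1, and the first dimension's length never appears), the Horner-style flattening looks like:

```java
// Sketch of flattening a 5-D index [a, b, c, d, e] into a single
// row-major offset. The multipliers len1..len4 are the *lengths* of
// dimensions 1-4 - not the upper bounds themselves.
public class FlatIndex {
    public static int offset(int a, int b, int c, int d, int e,
                             int len1, int len2, int len3, int len4) {
        // Horner-style accumulation: multiply by the next dimension's
        // length before adding that dimension's index.
        return (((a * len1 + b) * len2 + c) * len3 + d) * len4 + e;
    }

    public static void main(String[] args) {
        // Dimensions 2 x 3 x 4 x 5 x 6 give 720 elements, so the last
        // index [1, 2, 3, 4, 5] must map to offset 719.
        System.out.println(offset(1, 2, 3, 4, 5, 3, 4, 5, 6)); // 719
    }
}
```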

Statistics: Posted by nonpareil — Thu Sep 05, 2013 8:28 am


]]>
2013-09-05T08:34:01+00:00 2013-09-05T08:22:27+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4818#p4818 <![CDATA[Re: Efficiency of large memory problems]]> Quote:

Note: I think it has something to do with Windows memory swapping/caching, but I have no idea how to do anything about that.

Less to do with Windows, more to do with hardware.

Having become a performance expert over the last year or so, I strongly suspect your problem is that reading from memory takes time.

The more memory you have in use, the smaller the fraction of your reads that will hit the CPU caches, and therefore the more frequently you'll be hitting RAM.

I had a similar problem. Having done lots of profiling and tuning, I was eventually in the situation where 25% of my execution time was being spent on one simple line of code... I eventually discovered that this was the point where the referenced object was loaded from memory, and more often than not, that object wasn't in the CPU cache.

So the good news is, there's your answer - the bad news is that doing anything about it is very very tricky.

Edit:
As an extension to that - Arrays are good as they enforce some kind of locality of memory between objects. If you keep objects that are frequently referenced in quick succession next to each other in an array this will speed things up.

When the CPU requests something from RAM, it doesn't just grab what the CPU asks for, but will grab a page of memory and put all of that in the CPU cache. Therefore if you're using an array it'll sweep a whole bunch of objects into the CPU cache, and ideally you want those objects to be useful rather than a load of stuff you couldn't care less about. So yes, arrays of data with objects accessed together adjacent to each other is the way forwards.

Statistics: Posted by OneDayItllWork — Thu Sep 05, 2013 8:22 am


]]>
2013-09-05T07:01:30+00:00 2013-09-05T07:01:30+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4817#p4817 <![CDATA[Re: Efficiency of large memory problems]]> - # Threads = # Processors.

I was thinking I could use System.Runtime.InteropServices.Marshal to load the arrays in memory. Any suggestion on the fastest way to convert multidimensional array indices into a memory offset? :)

array[a, b, c, d, e] = a * UpperBound(0) + b * UpperBound(1) + c * UpperBound(2) + d * UpperBound(3) + e?

Statistics: Posted by cantina — Thu Sep 05, 2013 7:01 am


]]>
2013-09-05T06:29:31+00:00 2013-09-05T06:29:31+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4816#p4816 <![CDATA[Re: Efficiency of large memory problems]]> - How many threads do you set up? If it's significantly more than the number of processors that might be a problem.
- I'd make sure of the diagnosis first by running some toy problems. Then post on stack overflow and give us a link.

Statistics: Posted by spears — Thu Sep 05, 2013 6:29 am


]]>
2013-09-04T23:53:02+00:00 2013-09-04T23:53:02+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4814#p4814 <![CDATA[Re: Efficiency of large memory problems]]>
I don't use any kind of sync locking. Threads are allowed to collide, etc. This isn't to say that my version of .NET isn't doing all kinds of syncing in the background. How do I make this work? Maybe the shared objects/functions in my GameTreeNode class are causing the blocking? Or, maybe the node class itself?

Statistics: Posted by cantina — Wed Sep 04, 2013 11:53 pm


]]>
2013-09-04T22:38:52+00:00 2013-09-04T22:38:52+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4813#p4813 <![CDATA[Re: Efficiency of large memory problems]]> Statistics: Posted by spears — Wed Sep 04, 2013 10:38 pm


]]>
2013-09-04T20:52:53+00:00 2013-09-04T20:52:53+00:00 http://poker-ai.org/phpbb/viewtopic.php?t=2575&p=4811#p4811 <![CDATA[Efficiency of large memory problems]]>
I'm looking for constructive suggestions, please save the "switch to linux" or "switch to C++" for somebody else.

Note: I think it has something to do with Windows memory swapping/caching, but I have no idea how to do anything about that.

Statistics: Posted by cantina — Wed Sep 04, 2013 8:52 pm


]]>