Following is a summary of some timing tests that I ran last night. There are some interesting results that I am not sure I fully understand, so any review or comments are appreciated.
The calculation for which I measured execution time is: given a number of decks, compute the set of probabilities of outcomes of the dealer's hand (bust, 17-21, or blackjack) for each up card, assuming S17. I evaluated my code and assume_R's code using a slightly modified version of the test wrapper posted earlier in this thread, and iCountNTrack's code using a Mathematica wrapper around Run[] system calls to his latest executable.
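For reference, here is a minimal sketch (in Mathematica, the same environment as the wrapper and plots) of the kind of calculation being timed. The names dealerProbs/dealerRec and the shoe representation are mine for illustration only, not taken from any of the three implementations; it just recurses over the dealer's draws without replacement and returns exact probabilities.

```
(* Sketch only: exact S17 dealer outcome probabilities for one up card.
   shoe is a list of counts for ranks 1 (ace) through 10 (any ten-value card).
   The result is {P[17], P[18], P[19], P[20], P[21], P[blackjack], P[bust]}. *)

dealerRec[shoe_, hardTotal_, aces_, cards_] := Module[
  {total = If[aces > 0 && hardTotal + 10 <= 21, hardTotal + 10, hardTotal],
   n = Total[shoe]},
  Which[
   total > 21, UnitVector[7, 7],                 (* bust *)
   total == 21 && cards == 2, UnitVector[7, 6],  (* natural blackjack *)
   total >= 17, UnitVector[7, total - 16],       (* S17: stand on any 17-21, soft or hard *)
   True, Sum[                                    (* otherwise hit: draw each remaining rank *)
     If[shoe[[r]] > 0,
      (shoe[[r]]/n) dealerRec[ReplacePart[shoe, r -> shoe[[r]] - 1],
        hardTotal + r, aces + If[r == 1, 1, 0], cards + 1],
      0],
     {r, 1, 10}]]];

(* Condition on the up card by removing it from the shoe first. *)
dealerProbs[upCard_, shoe_] :=
 dealerRec[ReplacePart[shoe, upCard -> shoe[[upCard]] - 1],
  upCard, If[upCard == 1, 1, 0], 1];

(* e.g., single deck, ace up: dealerProbs[1, {4, 4, 4, 4, 4, 4, 4, 4, 4, 16}] *)
```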
To accurately estimate the execution time of a single calculation for a given shoe, I ran 100,000 iterations of my code (for each number of decks), 10,000 iterations of assume_R's, and 100 iterations of iCountNTrack's. (This was in the interest of time, since there were order-of-magnitude differences in speed.)
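The per-iteration estimate is just the total wall-clock time for the loop divided by the iteration count. A sketch of that wrapper, with a hypothetical helper name (timePerCall) and a hypothetical executable name in the example:

```
(* Sketch only: estimate the time of a single calculation by timing many
   iterations and dividing.  calc is any zero-argument function. *)
timePerCall[calc_, iterations_] :=
 First[AbsoluteTiming[Do[calc[], {iterations}]]]/iterations;

(* e.g., 100 iterations of an external solver via a system call
   (executable name and arguments are hypothetical): *)
(* timePerCall[Run["dealerprobs.exe 6"] &, 100] *)
```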
The following plot shows the resulting single-iteration execution times vs. number of decks. My code, assume_R's, and iCountNTrack's are shown in blue, magenta, and, well, goldenrod or whatever (Mathematica's defaults).
The bottom two curves make sense to me: there is consistently about a 15x difference in speed between my implementation and assume_R's, with single deck being slightly faster, but the times level off quickly as the shoe grows, so that even 1000 decks would yield essentially these same times, about 0.000174 second and 0.002678 second, respectively.
But the log-scale plot hides the strikingly linear increase in execution time of iCountNTrack's code with the number of decks, which I don't understand. Following are just those times, on a linear scale with a least-squares fit:
Not sure what is happening here.
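For what it's worth, the least-squares fit in that last plot is nothing fancy. A sketch, where iCountTimes is a hypothetical name for the list of {decks, seconds} pairs measured above, and the 1-8 deck plotting range is just an assumption:

```
(* Fit and overlay the best-fit line a + b*d on the measured times. *)
line = Fit[iCountTimes, {1, d}, d];
Show[ListPlot[iCountTimes], Plot[line, {d, 1, 8}]]
```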