Discussion in 'Skilled Play - Card Counting, Advanced Strategies' started by iCountNTrack, Nov 9, 2011.
It's about time you contributed something.
If one were simming rare indices, 2 billion may not be enough?
My Excel spreadsheet simulator has been running for 2 months, 14 days, and a little over 3 hours now, but I'm sure it will soon give me a statistically relevant answer to my last query, that is, if I entered the data right. It's a little tricky the way I have it set up. Someone suggested I give in and get Qfit's blackjack simulator. What!? And miss all the fun?! :joker:
If I understand correctly, you are enumerating the number of card combinations that could lead to each possible outcome of a round of blackjack, for any player strategy, one player versus the dealer.
For what it's worth there are 3072 possible unbusted player hands of 2 or more cards. If dealer stands on soft 17 there are 1677 possible dealer hands. If dealer hits soft 17 there are 1740 possible dealer hands. As number of decks becomes more limited there are fewer possibilities than above. Therefore for a single unsplit round there are up to 3072*1677 = 5,151,744 possibilities for s17 and up to 3072*1740 = 5,345,280 possibilities for h17.
If I am not wrong there are 15,450 player hands of 2 or more cards if busts are included.
If 2 hands are dealt back to back to account for pair splits there are (15,450)^2 = 238,702,500 hand combinations.
If the combinations where both hands total > 21 are eliminated, that leaves 85,827,635 hand combinations to account for a single split.
Therefore in order to account for 1 possible split there are up to 85,827,635*1677 = 143,932,943,895 possibilities for s17 and up to 85,827,635*1740 = 149,340,084,900 possibilities for h17.
Adding more splits would increase the number of possibilities exponentially.
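For anyone who wants to double-check the multiplication, here is a throwaway Python snippet; the hand counts themselves are just the figures quoted in this post:

```python
# Sanity check of the hand-combination arithmetic above.
player_unbusted = 3072       # unbusted player hands of 2+ cards
dealer_s17 = 1677            # dealer hands, dealer stands on soft 17
dealer_h17 = 1740            # dealer hands, dealer hits soft 17
player_all = 15450           # player hands of 2+ cards, busts included
split_pairs = 85_827_635     # two-hand combos after removing both-bust cases

print(player_unbusted * dealer_s17)   # 5151744
print(player_unbusted * dealer_h17)   # 5345280
print(player_all ** 2)                # 238702500
print(split_pairs * dealer_s17)       # 143932943895
print(split_pairs * dealer_h17)       # 149340084900
```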
fwiw that's how I would answer the question and I don't have any opinion or knowledge of what the statistical significance may be. I am sure of the numbers for a single hand and less sure of the numbers for a single split. Also I am sure of the number of possible dealer hands.
You really have an Excel simulator? I gave up trying to figure out how the heck to handle splits with the one I tried to make.
But yeah, the other thing was, if it ever was created it would have to run forever, and I just didn't think it could contain all the data.
This is where algebraically approximated indices come in handy! :laugh:
Creating indexes is a very different problem. And no, 2 billion would not be enough.
Just re-build the Excel for algebraic approximation using the Griffin formula.
Excels over-heat running sims, could start a fire. zg
Jensen Algebraic Index Calc (ZIP D/L EXCEL)
http://www.bjmath.com/bjmath/tcindex/Generator.zip (Archive copy)
How many sim hands would it take to show a difference between extensively simmed indices and those approximated by algebra? zg
As many as it would take to calculate correct indexes instead of algebraic estimates, which varies enormously depending on the index, tags, methodology and other variables. Unlike a BJ simulation, index generators do not run a preset number of hands, which is why you see some indexes pop up quickly and others grind away for long periods.
Very interesting write-up. This is an important question-- rephrasing slightly, "How do we know that a 2 billion round simulation is good enough to calculate the expected value of a blackjack round?"
I think the initial response must be another question: "How good do you want it to be?" Because if "good enough" means, for example, needing an estimate of expected value that we can be confident is accurate to 4 decimal places (in percent of initial wager), then 2 billion rounds is nowhere near enough. On the other hand, if "good enough" simply means accurate to 1 decimal place, then 2 billion rounds is overkill.
The Central Limit Theorem is relevant here. Suppose that we know in advance the standard deviation (sigma) of the outcome of a single round. (We don't, but we could estimate it as well. Let's use the 1.1418 value from the Wizard of Odds appendix here. I know this is for 6 decks, not 1, but this is back of the envelope.)
Then for a large number n of samples, our estimated EV is approximately normally distributed with standard deviation sigma/sqrt(n). So if we were to run our n-round simulation repeatedly, we should expect our estimated EV to be within 2*sigma/sqrt(n) of the *true* expected value about 95% of the time.
Plugging in 1.1418 for sigma and 2 billion for n yields a "one-sided" 2-sigma difference of about 0.005%, or about 2 decimal places.
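That plug-in step can be reproduced in a couple of lines of Python (using the same 1.1418 per-round standard deviation borrowed from the Wizard of Odds appendix):

```python
import math

sigma = 1.1418        # per-round standard deviation (Wizard of Odds, 6 decks)
n = 2_000_000_000     # simulated rounds

std_err = sigma / math.sqrt(n)   # standard error of the estimated EV
ci_95 = 2 * std_err              # ~95% confidence half-width (2-sigma)

print(f"standard error:     {std_err:.6%}")
print(f"2-sigma half-width: {ci_95:.6%}")   # about 0.005%
```

So a 2-billion-round sim pins the EV down to roughly two decimal places (in percent of initial wager), exactly as stated above.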
As discussed elsewhere, this is easy to demonstrate, simply by running your 2 billion-round simulation multiple times. For example, suppose you run your simulation, sample 2 billion rounds, and get an estimated EV of -0.071826%. Is it appropriate to include this many digits in the result? No, because if you run it again, you may see -0.069712%, or -0.078315%. If you kept running the simulation many more times, about 95% of the results would be within 0.005% of the *true* expected value. If you want more "quotable" digits, you need more sample rounds.
Note that this leaves all of the combinatorics at the door, so to speak. The number of decks, whether suits matter, etc., are not the important factor. All that matters is the *variance* of the underlying distribution. For example, suppose that we simulate a round, not by shuffling a deck and actually playing out a hand-- which has all of the concerns associated with the large number of permutations-- but instead just make a single random draw from the probability distribution of outcomes such as the table in the Wizard of Odds appendix above. Then all of the above analysis still applies: more accuracy requires more samples, based *solely* on the *variance* of the underlying distribution. (Of course, if we knew the distribution in the appendix ahead of time, then we wouldn't need to run the simulation in the first place, but you get the idea.)
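As a toy illustration of that last point: the outcome distribution below is invented for the example (it is *not* the actual Wizard of Odds table), but the principle is the same, sampling from the outcome distribution directly still converges at the sigma/sqrt(n) rate:

```python
import random

# Hypothetical per-round outcome distribution, in units of initial bet.
# These probabilities are made up purely for illustration.
outcomes = [-1.0, 0.0, 1.0, 1.5]   # loss, push, win, blackjack
probs    = [0.48, 0.09, 0.40, 0.03]

true_ev = sum(o * p for o, p in zip(outcomes, probs))

random.seed(1)
n = 1_000_000
draws = random.choices(outcomes, weights=probs, k=n)
est_ev = sum(draws) / n

print(f"true EV: {true_ev:+.4f}   estimated EV: {est_ev:+.4f}")
```

No decks, no shuffling, no combinatorics, yet the estimate behaves exactly as the CLT argument predicts, because only the variance of the draw matters.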
my brain hurts
Yet another reason to not bet full Kelly?