DMMx3 said:
Assume a theoretical blackjack game where we can calculate our exact advantage before each hand, and can play perfect composition-dependent strategy. We can also play or not play at any given time. We cannot play more than one hand at a time.
Given this scenario, what would be an appropriate betting strategy?
Table min/max is 1 to 50
Our Bankroll is 500 units
We can bet size in any whole number amount from 1 to 50.
What would our plan be for an ROR of 5%?
How about for an ROR of 20%?
I was experimenting with various Kelly betting schemes, but didn't like the results because the advantage is not directly related to our percent chance of winning.
I think it is a safe assumption that we will never play if our edge is <=0, but do we start playing if we have any positive edge, no matter how tiny?
Any help in tackling this problem would be appreciated. Thanks in advance.
I have fairly recently been working with iCountNTrack on a project to make the data output from my cdca (composition dependent combinatorial analyzer) program available to a scientific analysis program he uses.
There turned out to be a few problems to overcome first, though.
First thing I did was compile a .dll. Although the data output from the .dll could be read and used by programs written in any language that can access a Microsoft C++ .dll, the scientific program required an extra interface in order to access it. The .dll could be used out of the box in Excel via VBA, though.
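For anyone curious what that looks like, here is a minimal sketch of the kind of export involved; the function name and parameters are made up for illustration, not the actual cdca interface. `extern "C"` prevents C++ name mangling and `__stdcall` is the calling convention VBA expects, which is what makes this sort of .dll callable from Excel without extra glue:

```cpp
// Hypothetical export; the name and parameters are illustrative only.
// extern "C" prevents C++ name mangling and __stdcall is the calling
// convention VBA expects when declaring the function in Excel.
extern "C" __declspec(dllexport)
double __stdcall CdcaHandEV(const int* shoeComposition, int numRanks,
                            int upCard, int playerTotal)
{
    // ... call into the composition-dependent analysis code here ...
    return 0.0;   // placeholder EV
}
```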
Finally, with iCountNTrack's help, I learned enough of his scientific program to create an external C++ procedure for it, and we were able to get the necessary interface working.
Next I added the ability to sim either optimal composition-dependent perfect-play strategy or composition-dependent full-shoe basic strategy to the .dll. However, we found that my original programming was faulty: after about 125,000 rounds of perfect-play simulation the program crashed due to a memory leak.
I found the source of the memory problem and rewrote the offending code in all of my programs that contained it.
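The actual bug isn't shown here, so treat this as just a generic illustration of the kind of per-round leak that will kill a long simulation, and the usual fix of letting a container own the memory:

```cpp
#include <vector>

// Leaky pattern: a fresh buffer is allocated every round and never released,
// so memory use grows until the process dies tens of thousands of rounds in.
void runSimLeaky(long rounds)
{
    for (long r = 0; r < rounds; ++r) {
        double* evTable = new double[52];
        // ... compute the round using evTable ...
        // (missing delete[] evTable; -> leaks 52 doubles every round)
    }
}

// One fix: let a std::vector own the buffer so it is released automatically
// (here it is also reused across rounds instead of reallocated).
void runSimFixed(long rounds)
{
    std::vector<double> evTable(52);
    for (long r = 0; r < rounds; ++r) {
        // ... compute the round using evTable ...
    }
}
```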
The simulation interface to the .dll requires the user to supply his/her own shuffle. iCountNTrack's program has what seemed to be a very fast shuffle when the .dll was employed from within his program, compared to my shuffle algorithm using the C++ random number generator when the .dll was employed from within a C++ program. In any case, nobody can accuse the .dll simulation of being biased due to a faulty shuffle, because the shuffle is external to the .dll.
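As an illustration of what "supply your own shuffle" might look like on the caller's side (the exact callback shape the .dll expects isn't shown here, so this is only an assumed form), a Fisher-Yates shuffle built on the standard <random> facilities:

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// std::shuffle with a mt19937 engine performs an unbiased Fisher-Yates
// shuffle and is usually faster and better behaved than hand-rolled code
// built on rand().
std::vector<int> makeShuffledSingleDeck(std::mt19937& rng)
{
    std::vector<int> deck(52);
    std::iota(deck.begin(), deck.end(), 0);   // cards 0..51, rank = card % 13
    std::shuffle(deck.begin(), deck.end(), rng);
    return deck;
}
```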
Simulation of basic strategy is pretty fast. iCountNTrack's program is faster than my C++ test program, I think mainly because its shuffle is faster.
A basic CD strategy sim to a penetration of 4 cards should show an EV virtually equal to the computed full-shoe CD EV (with a reshuffle essentially every round, each hand is dealt from a full deck), and that proved to be the case, so the shuffles employed seemed to be working adequately.
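That kind of check can be stated a little more precisely as "the simulated mean should land within a few standard errors of the computed EV." A small sketch, with the running totals assumed to be collected by the caller:

```cpp
#include <cmath>

// Is the simulated mean within a few standard errors of the computed
// full-shoe CD EV?  sumResults and sumSquares are assumed running totals of
// per-round results in units bet.
bool evMatchesComputed(double sumResults, double sumSquares, long rounds,
                       double computedEV, double maxSigmas = 3.0)
{
    double mean = sumResults / rounds;
    double var  = sumSquares / rounds - mean * mean;  // per-round variance
    double se   = std::sqrt(var / rounds);            // std. error of the mean
    return std::fabs(mean - computedEV) <= maxSigmas * se;
}
```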
Optimal-play simulation takes more time because each play needs to be computed. It could probably be made faster, because my program computes results for all up cards in every calculation even though there is only one up card in each round of a sim.
I think we were able to show that the EV for flat betting (single deck, 3:2 bj, s17, DOA, NDAS, split 2-10 once, split aces once, one card to split aces, full peek, no surrender, penetration of 35 cards) is something like +0.7%. In this scenario every play made is as good as it can be using everything known when the play is made. This includes all split hands.
I was mainly working to get the process working, but I think I completed about 100,000 rounds of the above a couple of times. iCountNTrack may have done more once the bugs were worked out; I don't know.
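For a sense of scale, and assuming a per-round standard deviation of roughly 1.15 units (the usual single-deck ballpark), the standard error after 100,000 rounds is about 1.15 / sqrt(100,000) ≈ 0.36%, so a simulated edge of about +0.7% is only around two standard errors away from zero; quite a few more rounds would be needed to pin the number down tightly.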
I have never been one to worry too much about optimal betting, choosing instead to be satisfied with ensuring positive EV and using common sense. However, one could add the step of computing pre-deal EV and applying a system of bet spreading to the above. This would require even more computer time (to get the pre-deal EV), but since everything is computed exactly rather than estimated, an overwhelming amount of simulation data wouldn't be required.
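For what it's worth, here is a very rough sketch of what that bet-spreading step could look like for the quoted question (bankroll of 500 units, bets of 1 to 50). It is not the method from this project; it just uses the common fixed-bet risk-of-ruin approximation ROR ≈ exp(-2 · edge · bankroll / (variance · bet)), solved for the bet, with an assumed per-round variance of about 1.3 squared units:

```cpp
#include <algorithm>
#include <cmath>

// Heuristic only: solve the fixed-bet ROR approximation
//     ROR ~= exp(-2 * edge * bankroll / (variance * bet))
// for the bet size.  The per-round variance of ~1.3 squared units is an
// assumed ballpark; bankroll and table limits follow the quoted question.
int betForEdge(double preDealEdge,         // e.g. 0.007 for a +0.7% edge
               double bankroll  = 500.0,
               double targetROR = 0.05,    // 5%; use 0.20 for the 20% case
               double variance  = 1.3)
{
    if (preDealEdge <= 0.0)
        return 0;                          // sit out when there is no edge

    double bet = -2.0 * preDealEdge * bankroll
                 / (variance * std::log(targetROR));

    int whole = static_cast<int>(std::lround(bet));   // whole-unit bets only
    return std::max(1, std::min(50, whole));          // clamp to table limits
}
```

Under those assumptions a +0.7% pre-deal edge maps to roughly a 2-unit bet at 5% ROR and about 3 units at 20% ROR, with larger edges getting proportionally larger bets.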
However, remember at the end of the day this is only theoretical. It's easy to forget that.
A future thing to do might be to record all of the intermediate results of an optimal sim. That type of data might prove to be a bit more practical to use. I'm not doing much programming at this time, though.
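If someone does take that on, a per-round record could be as simple as this sketch (the field names are guesses, not the cdca program's actual outputs):

```cpp
#include <fstream>

// One CSV line per round with whatever the optimal-play engine already knows.
struct RoundRecord {
    long   round;
    int    upCard;
    double preDealEV;   // pre-deal edge for the round
    double actionEV;    // EV of the play actually made
    double result;      // realised result in units bet
};

void appendRecord(std::ofstream& out, const RoundRecord& r)
{
    out << r.round << ',' << r.upCard << ',' << r.preDealEV << ','
        << r.actionEV << ',' << r.result << '\n';
}
```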