Hi, MGP.

So I've already implemented the hash tables for the different states as you suggested, for both the dealer probability calculations (keyed on the number of each card value from 0 to 10, with all faces lumped together) and the EV calculations (keyed on the hand state plus the full deck state, with suits and ranks). I made the EV hash use the full deck because I've included the suit-dependent Sp21 bonuses.
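In case it helps to compare notes, here's a minimal sketch of how I'm building the two keys; the names are mine, and the exact tuple layouts are just illustrative:

```python
# Hypothetical sketch of the two hash-table keys described above.
# All function and variable names are my own, for illustration only.

def dealer_key(value_counts):
    """Key for the dealer-probability table: the counts of each card
    value, suits ignored and all face cards lumped with the tens."""
    return tuple(value_counts)          # tuples are hashable; lists are not

def ev_key(hand, deck_counts):
    """Key for the EV table: the hand state plus the full suit-and-rank
    deck composition (needed for the suit-dependent Sp21 bonuses)."""
    return (tuple(sorted(hand)), tuple(deck_counts))

dealer_probs = {}   # dealer_key -> dealer outcome probabilities
ev_cache = {}       # ev_key -> expected value
```

The point of the tuples is just that Python dicts need hashable keys, and sorting the hand makes order-equivalent hands collide on purpose.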

My splitting EVs agree exactly with k_c's on most values, but a few are off, presumably because I use a "fixed strategy" after you split and resplit. Given how people actually play, I figure that's a reasonable simplification, but I'll consider changing it. Resplitting, however, takes a long time, which I'm working on trimming down; it's tricky because the hash table can't be used directly while there are remaining split hands to be played out.
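One workaround I've been considering (just a sketch, with my own hypothetical names): fold the number of still-unplayed split hands into the memo key, so that states which look identical card-wise but have pending hands don't collide in the table:

```python
# Hedged sketch: memoizing split-hand EVs by including the count of
# pending (not-yet-played) split hands in the key. compute_ev stands in
# for the real recursive EV routine; all names here are hypothetical.
ev_cache = {}

def split_hand_ev(hand, deck_counts, pending_splits, compute_ev):
    key = (tuple(sorted(hand)), tuple(deck_counts), pending_splits)
    if key not in ev_cache:
        ev_cache[key] = compute_ev(hand, deck_counts, pending_splits)
    return ev_cache[key]
```

The cost is a bigger key space, but cached entries stay valid no matter how many hands remain to be played out.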

My bonuses implementation agrees with the Sp21 basic strategy developed by Wizard of Odds and Kat Walker on everything I've checked, so I know I did those correctly.

What I will work on next:

- Multi-card strategies for Sp21.
- Batch-generate TD strategy as per your suggested route.
- Hopefully index generation.

Now, regarding index generation: you said "you need to use burn card probabilities," but you also said that approach won't take into account how the count changes as you play. That's indeed how it should be done in a true brute-force way, but what about a half-simmed sort of way? What I was planning on doing is to randomly generate count distributions and determine the average EV of each decision at each count. This will be biased toward the middle of the shoe, but our index plays are estimates anyway, yes? And as each card is iterated through when you calculate the EVs, you can update the count so the correct decision is used for every other play. But then again it becomes an iterated result, since the index plays would change after a new index play is added...
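To make the half-simmed idea concrete, here's a rough sketch of the sampling loop I have in mind. Everything here is hypothetical: `ev_of` stands in for the real EV routine, `sample_deck` for whatever generates a random shoe composition consistent with a given count, and I've used hit-vs-stand as the example decision pair:

```python
# Hedged sketch of the "half-simmed" index estimate: at each count,
# sample random deck compositions consistent with that count and average
# the EV difference between the two candidate decisions. The index is
# then read off as the count where the sign of the average flips.
def estimate_index_curve(ev_of, sample_deck, counts, trials=1000):
    avg_diff = {}
    for c in counts:
        total = 0.0
        for _ in range(trials):
            deck = sample_deck(c)               # same sampled deck for both
            total += ev_of('hit', deck) - ev_of('stand', deck)
        avg_diff[c] = total / trials
    return avg_diff
```

Using the same sampled deck for both decisions keeps the comparison paired, which should cut the variance of the difference considerably compared to sampling each side independently.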