Alternative Index Calculation Method

assume_R

Well-Known Member
#1
The traditional method for computing an index was to try different indices, and determine which value resulted in the highest EV (expected value).

Then came RA indices, in which one would try different indices, and determine which value resulted in the highest CE (certainty equivalent), which is based on one's bankroll, kelly factor, etc.

I propose another method for index generation, based on minimizing N0. N0 = Var/EV^2. So essentially we are maximizing (EV^2/Var). The reason I propose this, is because this is what we aim to maximize every time we calculate the optimal bets. The entire optimal bet theory is based on minimizing N0, and I see no reason why we shouldn't minimize N0 when we generate indices.

So to summarize, possible methods for indices:
1. (Traditional) Maximize EV
2. (Risk Averse) Maximize EV - Var/(2 * KellyFraction * Bankroll)
3. (Proposed) Maximize EV^2/Var
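For concreteness, here's a rough sketch of the three scoring functions in Python (toy code I just typed up, not from any sim package; ev and var would be the per-hand EV and variance a sim reports for a candidate index, in the same units as your bankroll):

Code:
# Toy scoring functions for a candidate index, given the EV and variance
# that a sim reports when you play that index.
def score_traditional(ev, var):
    return ev                                           # 1. maximize EV

def score_risk_averse(ev, var, kelly_fraction, bankroll):
    return ev - var / (2 * kelly_fraction * bankroll)   # 2. maximize CE

def score_n0(ev, var):
    return ev ** 2 / var                                 # 3. maximize EV^2/Var = minimize N0

Whichever candidate index scores highest under the chosen function is the index you'd publish.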

Thoughts? Criticisms? Comments?
 

k_c

Well-Known Member
#4
assume_R said:
The traditional method for computing an index was to try different indices, and determine which value resulted in the highest EV (expected value).

Then came RA indices, in which one would try different indices, and determine which value resulted in the highest CE (certainty equivalent), which is based on one's bankroll, kelly factor, etc.

I propose another method for index generation, based on minimizing N0. N0 = Var/EV^2. So essentially we are maximizing (EV^2/Var). The reason I propose this, is because this is what we aim to maximize every time we calculate the optimal bets. The entire optimal bet theory is based on minimizing N0, and I see no reason why we shouldn't minimize N0 when we generate indices.

So to summarize, possible methods for indices:
1. (Traditional) Maximize EV
2. (Risk Averse) Maximize EV - Var/(2 * KellyFraction * Bankroll)
3. (Proposed) Maximize EV^2/Var

Thoughts? Criticisms? Comments?
I am no statistical guru but from what I understand standard deviation = sqrt(variance)

Standard deviation is a measure of how spread out a series of data points are from the mean.
If data points are 1,3,5,7,9 then mean = (1+3+5+7+9)/5 = 5
Variance is defined as ((1-5)^2+(3-5)^2+(5-5)^2+(7-5)^2+(9-5)^2)/5 = 8
Standard deviation = sqrt(variance) = sqrt(8) = 2.83
So data points from (5-2.83) to (5+2.83) = (2.17 to 7.83) are within 1 standard deviation of the mean in this example.
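Same arithmetic in a couple of lines of Python, for anyone who wants to check it:

Code:
data = [1, 3, 5, 7, 9]
mean = sum(data) / len(data)                                  # 5.0
variance = sum((x - mean) ** 2 for x in data) / len(data)     # 8.0
std_dev = variance ** 0.5                                     # ~2.83
print(mean, variance, std_dev)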

The point is that variance and standard deviation are just constant characteristics. So it would seem maximizing EV^2/var is the same as maximizing EV^2/SD^2 which is the same as maximizing EV^2/(constant value).

So I would say maximizing EV^2/var boils down to the same thing as simply maximizing EV.
 
blackjack avenger

#5
RA as Optimal?

Optimal bets provide us with the lowest N0.
RA indices provide us with the lowest N0 also.

With EV-maximizing indices you are in a sense overbetting, and it has the same effect as overbetting your bets. The use of RA indices optimizes index use.

:joker::whip:
 

assume_R

Well-Known Member
#6
k_c said:
I am no statistical guru but from what I understand standard deviation = sqrt(variance)

So it would seem maximizing EV^2/var is the same as maximizing EV^2/SD^2
Correct.

k_c said:
The point is that variance and standard deviation are just constant characteristics ... which is the same as maximizing EV^2/(constant value).
Not when comparing different index values. An index of +3 for doubling something versus an index of +5 for doubling something will result in different standard deviations (and variances).

If it were constant, and not dependent on the index values, then RA indices would always equal EV-maximizing indices.
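To make that concrete, here's a toy calculation (the outcome probabilities are made up, not from a sim) showing that the play you choose changes the variance, so the variance can't be treated as a constant when comparing candidate indices:

Code:
# Made-up outcome distributions for one hand near an index: net result
# (in units of the original bet) -> probability.
hit    = {+1: 0.55, 0: 0.05, -1: 0.40}
double = {+2: 0.52, 0: 0.04, -2: 0.44}

def ev_var(dist):
    ev = sum(x * p for x, p in dist.items())
    var = sum(p * (x - ev) ** 2 for x, p in dist.items())
    return ev, var

print(ev_var(hit))     # (0.15, ~0.93)
print(ev_var(double))  # (0.16, ~3.81)  <- nearly the same EV, 4x the variance

An EV-maximizing index says double here, a risk-averse index might not, and EV^2/Var would prefer the hit in this made-up case.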

k_c said:
So I would say maximizing EV^2/var boils down to the same thing as simply maximizing EV.
I respectfully disagree based on my above statements.
 

assume_R

Well-Known Member
#7
blackjack avenger said:
RA indices provide us with the lowest N0 also.
Always respectful of your blackjack knowledge, avenger, but why do you say that? RA indices (method 2 in my OP) are based on EV - Var * (something), where the something depends on your personal bankroll and kelly factor, which is not the same as minimizing N0 (which equals Var/EV^2).
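A quick made-up example of what I mean (numbers invented purely to show the two criteria can disagree):

Code:
# Two candidate plays with invented per-hand EV and variance (in betting units).
plays = {"A": (0.04, 1.0), "B": (0.08, 3.6)}

def ce(ev, var, kelly_times_bank):       # method 2: EV - Var/(2 * kelly * bankroll)
    return ev - var / (2 * kelly_times_bank)

def ev2_over_var(ev, var):               # method 3: maximize EV^2/Var, i.e. minimize N0
    return ev ** 2 / var

for name, (ev, var) in plays.items():
    print(name, ce(ev, var, 25), ev2_over_var(ev, var))

With 2 * kelly * bankroll = 50 units, the CE criterion prefers A (0.020 vs 0.008) while EV^2/Var prefers B (0.00178 vs 0.00160); make the bankroll big enough and CE flips to B. The N0 ranking never depends on the bankroll at all, which is exactly why the two methods aren't the same thing.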
 

k_c

Well-Known Member
#8
assume_R said:
Not when comparing different index values. An index of +3 for doubling something versus an index of +5 for doubling something will result in different standard deviations (and variances).
By the same token increasing or decreasing bet size will cause a change in variance and standard deviation.

I generally stay out of statistical discussions and let the statisticians fight it out amongst themselves. My point of view is that the most basic element for success is a consistent and reliable positive EV.
 

assume_R

Well-Known Member
#9
k_c said:
By the same token increasing or decreasing bet size will cause a change in variance and standard deviation.
Hmm yeah that's why I've wondered if indices, for which you calculate both EV and Variance, such as RA indices, depend on your bets ( http://www.blackjackinfo.com/bb/showthread.php?t=18742 )

k_c said:
I generally stay out of statistical discussions and let the statisticians fight it out amongst themselves. My point of view is that the most basic element for success is a consistent and reliable positive EV.
Understandable. I always try to make sure I am making as accurate a decision as possible in these situations :)

Thanks for your insights, k_c.
 
blackjack avenger

#10
Optimal is as Optimal Does?

Indices affect our bets.
When you use RA indices, it changes your betting ramp. The risk/reward ratio of the bet ramp and the RA indices are in sync.

So if the bet ramp is optimized (min N0, highest SCORE), then so are the RA indices (min N0, highest SCORE).
They are linked together.

If you sim another index value, the SCORE will drop and N0 will increase.

:joker::whip:
 

Nynefingers

Well-Known Member
#11
assume_R said:
Hmm yeah that's why I've wondered if indices, for which you calculate both EV and Variance, such as RA indices, depend on your bets ( http://www.blackjackinfo.com/bb/showthread.php?t=18742 )
Generally speaking, if you are near an index, your bet size will fall within a small range. CVIndex asks for your Kelly fraction for betting as well as for the TC at which you make your max bet, if I remember right. At least for shoe games, it will be rare to make a min bet and then end up with a +4 TC by the time you make your playing decision, and likewise it will be rare to have a max bet out and yet be at a TC of 0 or +1 by the time you make your decision. I suppose that would happen more often in a pitch game, in which case you may be right that the current bet size may impact the optimal index value.

Interesting thread. I'll be curious to see where it goes. However, personally I fail to see why there would be a reason to minimize N0 at the expense of not maximizing CE. Maximizing CE means that we are directly choosing to make the play that has the most utility to us and to our bankroll. There are several games that I have played where minimizing N0 and maximizing CE were goals at odds with one another, and in those cases I chose maximum CE.
 

assume_R

Well-Known Member
#12
Nynefingers said:
There are several games that I have played where minimizing N0 and maximizing CE were goals at odds with one another, and in those cases I chose maximum CE.
It's interesting that you'd choose to maximize CE, because people have different "CE" preferences, in that some people are willing to accept more or less risk (i.e. different kelly fractions). Yet if you'd minimize N0, then you'd be growing your bankroll optimally (from a mathematical perspective), and minimizing the number of hands it takes to grow your bankroll, regardless of your personal playing style. Sort of like short term vs. long term goals (where CE might be short term, and N0 might be long term), or subjective vs. objective definitions of "optimal".

Also, I just thought about it, and it's a bit of a catch-22. The reason is that your index values change your optimal bets at each count. Then your optimal bets could change your index values (because they will change the absolute standard deviation and EV at each count in $$ / hour). This will in turn change the index value at each count. They probably converge, but that just means that 1 pass of "generating index values" may not be best.
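Something like this back-and-forth loop is what I'm picturing (just a sketch; compute_indices and optimize_bets are placeholders for whatever sim or CA back end you'd actually plug in, not real functions from any package):

Code:
# Hypothetical fixed-point loop: alternate between generating indices for the
# current bet ramp and re-optimizing the ramp for the current indices, until
# neither changes between passes.
def converge_indices_and_bets(compute_indices, optimize_bets, max_passes=20):
    bets, indices = None, None           # start from some default (e.g. flat bets)
    for _ in range(max_passes):
        new_indices = compute_indices(bets)
        new_bets = optimize_bets(new_indices)
        if new_indices == indices and new_bets == bets:
            break                        # fixed point reached
        indices, bets = new_indices, new_bets
    return indices, bets

If the pair really does converge, one pass of index generation is only the first iterate of that loop.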
 

Gramazeka

Well-Known Member
#13
An interesting theory -

We have 2 sets of rules - with surrender against an Ace and without. We have TT against a 6, and the TC is +5 (Hi-Lo). In which case is splitting the better option? Consider it only in terms of max EV, disregarding risk aversion (which, I believe, does not bother you) and other factors.
Answer -
Without surrender against an Ace, splitting is more worth considering. By splitting you pull at least 2 extra cards out of the deck, and every card pulled out at a positive TC is a loss of EV. By refraining from splitting you increase the number of hands dealt from the current shuffle. Splitting pulls out 2 extra cards, and about 6 cards are used per box, so you'll play about 1/3 of a hand less. The more EV there is at the top of the shuffle, the greater your loss from splitting.
With surrender against an Ace, the EV is 3.2% at a TC of +5.
Without surrender against an Ace, the EV is 2.8% at a TC of +5.
So in the first case the loss is 3.2/3 = 1.07%, and in the second 2.8/3 = 0.93%.
This factor is not considered when indices are calculated in the classical way. If we consider it, the splitting index for TT against a 6 is about 4.7, and against a 5 it is about 5.3 (for a standard BJ game with surrender against an Ace). Under the classical calculation we get 4.3 and 4.8 respectively.
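In code, the rough arithmetic above is just this (my own simplification of the idea; the 2 extra cards per split and ~6 cards per box are the approximations stated above):

Code:
# Rough "card eating" penalty for splitting at a positive count: splitting
# burns ~2 extra cards, a box uses ~6 cards, so you give up ~1/3 of a hand
# played at the current off-the-top edge.
def split_card_eating_penalty(ev_per_hand, extra_cards=2, cards_per_box=6):
    return ev_per_hand * extra_cards / cards_per_box

print(split_card_eating_penalty(0.032))   # ~0.0107 (1.07%), with surrender vs Ace
print(split_card_eating_penalty(0.028))   # ~0.0093 (0.93%), without surrender vs Ace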
 

k_c

Well-Known Member
#14
spout like a whale

I'm going to spout out a bunch of stuff. I am willing to bet about a jillion leap dollars that something I say will be unclear or not be quite right, but here goes. The stuff about fractional betting comes from this article.

I'm eventually going to try to address the topic of this thread. I have my own way of analyzing problems. My approach is to use actual calculations as much as possible before resorting to simulation or statistical methods. That seems to be the approach in the article I cited above as well.

Fractional betting - In theory, if there were no such thing as a minimum bet, then someone who always wagered a fraction of their current bankroll could never go broke, making risk of ruin equal to 0%. Even if EV equals -100%, which is a sure loss, a fractional bettor could theoretically never go broke. Obviously a 0% risk of ruin is no guarantee of winning in the long run.

Optimal fractional betting - In order to have an expectation of winning, the prerequisite of a wager is that it has positive EV. Someone could bet their entire bankroll on a relatively small +EV. If he is lucky he wins, and otherwise he is out of business. Conversely, one could bet a very small fraction of bankroll on a relatively large +EV. If he is unlucky he's not hurt too badly, but if he is lucky and wins then he doesn't profit by much. Each betting strategy yields a differing rate of return. f* is defined as the fraction of bankroll that yields the maximum rate of return for average luck.

Consequences of wagering varying fractions of bankroll on +EV - f* is defined as the fraction of bankroll that yields the maximum rate of return for average luck. What happens when a fraction more or less than f* is wagered? If the bet < 2f*, then the chance of being behind approaches 0 as the number of trials approaches infinity. (The further the fraction is below 2f*, the faster the chance of being behind approaches 0.) If the bet = 2f*, then the chance of being behind is about 50% as the number of trials approaches infinity. As the fraction increases above 2f*, the chance of being behind becomes greater and greater as the number of trials approaches infinity.

Now come the somewhat paradoxical consequences of varying fractional bets. If a small fraction of f* is wagered, then the amount of expected winnings is relatively small and the chance of being ahead is relatively large. As the wagered fraction grows to f*, 2f*, 3f*, ..., 100% of bankroll, expected winnings become very large even though the chance of being ahead approaches 0! In other words, the few that succeed by wagering extremely large amounts bring up the average expected winnings for the many that fail when betting big, such that the average expected winnings for this betting strategy is larger than for any other!

In essence there is no such thing as overbetting +EV (with the caveat that what's right for an individual is a subjective decision).
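Here's a rough Monte Carlo of that behaviour, if anyone wants to play with it (a generic even-money +EV bet, not blackjack; the win probability and fractions are only illustrative):

Code:
import random

# Bet a fixed fraction of the current bankroll on an even-money wager that
# wins with probability p; estimate the chance of being behind after many
# rounds, and compare with the exact expected (mean) final bankroll.
def prob_behind(frac, p, rounds, trials=1000, seed=1):
    rng = random.Random(seed)
    behind = 0
    for _ in range(trials):
        bank = 1.0
        for _ in range(rounds):
            bet = bank * frac            # always a fraction of the current bankroll
            bank += bet if rng.random() < p else -bet
        behind += bank < 1.0
    return behind / trials

p, rounds = 0.52, 2000
f_star = 2 * p - 1                       # Kelly fraction for an even-money bet
for mult in (0.5, 1.0, 2.0, 3.0):
    f = mult * f_star
    mean_bank = (1 + f * (2 * p - 1)) ** rounds   # exact expected final bankroll
    print(mult, "x f*:  P(behind) ~", prob_behind(f, p, rounds),
          "  E[final bankroll] =", round(mean_bank, 1))

The chance of being behind climbs past 50% once the fraction goes beyond 2f*, yet the expected (average) final bankroll keeps growing with the fraction, which is the paradox described above.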

Hopefully back on topic
OK, I've jumped through a few hoops to try to lay a basis to address the topic of the thread. Blackjack differs from the above in that additional wagers can be added due to splits and doubles. If someone wagers f* on a round of blackjack, then he will be wagering 2f* in order to double a hand. From above, a wager of f* has a greater expectation of being ahead after a large number of trials than a wager of 2f*, but has a lesser amount of expected winnings. What if basic strategy is altered by eliminating all doubles? It appears that a BS player would increase his chance of being ahead after many rounds, but would have less money than a BS player that doubled when called for. The doubling BS player would have a lesser chance of being ahead after many rounds, but would tend to have more money.

There's no getting around the fact that doubling is riskier than not doubling. A player is free to accept or decline the additional risk with consequences as previously stated. The tradeoff between risk averse and EV maximizing is that with risk averse there is a greater chance of being ahead after many trials but with less expected winnings than EV maximizing. I would say that any other method of lessening risk would be similar.

That's all the gibberish I have for now. :)
 
blackjack avenger

#15
Little Ripple, Little Waves

Several indices have an EV-to-RA range, so there's not much value here if slightly different indices are used.

Many indices don't change the EV much depending on the exact TC used as the index.

The TC theorem means that the TC you start the hand with will, most of the time, be the TC you face when you play your hand. Perhaps in SD or DD one could be wary of large double bets when the count makes a big drop, but there's probably not much value here because it won't happen often and we should be betting small in relation to our bank. To a very strict full kelly bettor this would have more meaning.

Many indices don't make a real difference if they are 1 or 2 off, because of the number of hands required before a significant difference shows up.

Any index play we make is just an approximation based on our count; the actual composition of the remaining cards may change the correct play without our knowledge.

If we adjust our bet ramp at all for camouflage, then we add another variable to the index saga.

All of the above reasons are why one should not get preoccupied with indices. This has been shown by the studies of grouped/light indices and the use of a limited number of indices, and also by the studies of the marginal value of composition-dependent indices and of keeping ace side counts for strategy variations.

So if precise indices have little value, then it does not matter how one comes up with them.

:joker::whip:
 

Gramazeka

Well-Known Member
#16
Index

It's probably useful to remind everyone that the matter is not just about the EV difference between split and stand, but about the effect of burning the cards - losing EV by pulling another two cards out of the positive slug when splitting TT vs 6. I believe the Casino Verite creators could help here (if they understand the subject, of course). It's very difficult to solve the task analytically.
Polevoy and others could understand the problem better if they said it aloud: "WE CONSIDER THE COUNT TAKING INTO ACCOUNT THE NUMBER OF REMAINING CARDS!!" The point is that the parameter "price of a taken card" immediately arises, and without it a few pages of this forum are quite useless.
Indeed, a count of +5 with one deck left (say, there's no penetration) and with ten decks left (an infinite deck as the bound) are different game situations. In the first case any pulled card may change the count significantly, unlike in the latter.
A limited number of remaining cards brings out the "index drift" effect. In my opinion there will be different values for different situations:

1. Penetration.

2. Number of cards left before and after the cut card, and the number of cards taken (which depends on the first two).

3. True count exact to tenths (it's probably better to use the running count then).

4. Cards in the play zone.

Accordingly, when one of these parameters changes we'll have a different value for the burned card. And because in reality the count frequencies differ from the theoretical ones, the solution gets even harder. How shall we compute the variance? That's why I believe this task is analytically solvable, though very difficult.
Once again, I repeat the question: what is the EV loss from burning cards in the TT vs 6 situation?
 

assume_R

Well-Known Member
#17
Gramazeka said:
It's probably useful to remind everyone that the matter is not just about the EV difference between split and stand, but about the effect of burning the cards - losing EV by pulling another two cards out of the positive slug when splitting TT vs 6. I believe the Casino Verite creators could help here (if they understand the subject, of course). It's very difficult to solve the task analytically.
Polevoy and others could understand the problem better if they said it aloud: "WE CONSIDER THE COUNT TAKING INTO ACCOUNT THE NUMBER OF REMAINING CARDS!!" The point is that the parameter "price of a taken card" immediately arises, and without it a few pages of this forum are quite useless.
Indeed, a count of +5 with one deck left (say, there's no penetration) and with ten decks left (an infinite deck as the bound) are different game situations. In the first case any pulled card may change the count significantly, unlike in the latter.
A limited number of remaining cards brings out the "index drift" effect. In my opinion there will be different values for different situations:

1. Penetration.

2. Number of cards left before and after the cut card, and the number of cards taken (which depends on the first two).

3. True count exact to tenths (it's probably better to use the running count then).

4. Cards in the play zone.

Accordingly, when one of these parameters changes we'll have a different value for the burned card. And because in reality the count frequencies differ from the theoretical ones, the solution gets even harder. How shall we compute the variance? That's why I believe this task is analytically solvable, though very difficult.
Once again, I repeat the question: what is the EV loss from burning cards in the TT vs 6 situation?
Indeed, all those factors you listed do make the problem difficult to solve analytically. However, I have no problem solving a problem empirically, and as I'm sure you know, it has been shown countless times that with enough simulations the empirical estimate approaches analytic accuracy.

Yet I want to bring up the point that even though we will come up with a given "decision" empirically, we still need a way of evaluating that decision.

So I suppose my original question isn't how to solve this (because we will solve this using simulations), but rather what is our evaluation criteria? The simulations would take into account the frequencies changing based on where in the shoe we are and how many cards are burned in the TTv6 situation you cited.

Given all that, though, I am curious as to how to combine EV, Variance, Bankroll, Spread, etc. from the simulation output to evaluate the decision.

So we would run a simulation by having a player split TTv6 at +4, and another splitting TTv6 at +5. Yet after that, how do we determine which one made the correct decision? Again, the different methods (1., 2., and 3. in my OP) are methods for evaluating the results from the sims, which will indeed take into account the factors you accurately listed (such as the "eating cards" effect). Thoughts?
 

assume_R

Well-Known Member
#18
k_c said:
The tradeoff between risk averse and EV maximizing is that with risk averse there is a greater chance of being ahead after many trials but with less expected winnings than EV maximizing. I would say that any other method of lessening risk would be similar.
blackjack avenger said:
Several indices have an EV-to-RA range, so there's not much value here if slightly different indices are used.

Many indices don't change the EV much depending on the exact TC used as the index.
...
So if precise indices have little value, then it does not matter how one comes up with them.
Thanks for your thoughts, guys. I suppose one could say this is a bit of an exercise in futility, and in some situations the "correct" index might matter more than in other situations.

An analogous situation is that you know that the "optimal" bet might be $70, and you bet $75, and even though it's "incorrect" it doesn't matter that much in the long run.

Nevertheless, I still have this nasty habit of wanting to know the best possible decision given all my information at hand :flame::laugh:
 

k_c

Well-Known Member
#19
Gramazeka said:
It's probably useful to remind everyone that the matter is not just about the EV difference between split and stand, but about the effect of burning the cards - losing EV by pulling another two cards out of the positive slug when splitting TT vs 6. I believe the Casino Verite creators could help here (if they understand the subject, of course). It's very difficult to solve the task analytically.
Polevoy and others could understand the problem better if they said it aloud: "WE CONSIDER THE COUNT TAKING INTO ACCOUNT THE NUMBER OF REMAINING CARDS!!" The point is that the parameter "price of a taken card" immediately arises, and without it a few pages of this forum are quite useless.
Indeed, a count of +5 with one deck left (say, there's no penetration) and with ten decks left (an infinite deck as the bound) are different game situations. In the first case any pulled card may change the count significantly, unlike in the latter.
A limited number of remaining cards brings out the "index drift" effect. In my opinion there will be different values for different situations:

1. Penetration.

2. Number of cards left before and after the cut card, and the number of cards taken (which depends on the first two).

3. True count exact to tenths (it's probably better to use the running count then).

4. Cards in the play zone.

Accordingly, when one of these parameters changes we'll have a different value for the burned card. And because in reality the count frequencies differ from the theoretical ones, the solution gets even harder. How shall we compute the variance? That's why I believe this task is analytically solvable, though very difficult.
Once again, I repeat the question: what is the EV loss from burning cards in the TT vs 6 situation?
I think what you may be saying is that as number of cards dwindle indices tend to break down and I agree.

As an extreme example, suppose you are simply counting tens and non-tens. The remaining cards to be dealt consist of 1 ten and 1 non-ten, and cards are dealt to the last card. Suppose the player has hard 12 versus a dealer 8. If many cards remain with half tens and half non-tens, then it is a no-brainer to take a hit, with hitting better than standing by ~7% for an infinite shoe.

However, when 1 ten and 1 non-ten remain, what happens?

If the player stands, he wins 50% of the time whenever the non-ten is 4, 5, 6, 7, or 8, and otherwise loses. His win rate for standing is 5/18, which gives an EV of -44.4%.

If player hits he must draw the non-ten for any chance to win. Non-ten is drawn half the time.
If the non-ten is 1,2,3,4,5 he loses to dealer's 18.
If the non-ten is a 6 his 18 pushes dealer's 18.
If the non-ten is 7,8,9 his 19,20,21 wins over dealer's 18.
His win rate for hitting is 3.5/18, which gives an EV of -61.11%.

So at shoe's end player is better off STANDING by ~16.7% compared to being ~7% better off HITTING when there are more cards remaining.
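For anyone checking the arithmetic, with pushes counted as half a win in those win rates, the EV per unit is just 2 * (win rate) - 1:

Code:
def ev_from_winrate(winrate):       # pushes counted as half a win
    return 2 * winrate - 1

print(ev_from_winrate(5 / 18))      # standing: -0.444  (-44.4%)
print(ev_from_winrate(3.5 / 18))    # hitting:  -0.611  (-61.1%)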

What seems to be happening is what could be called restricted choice. The player must draw a non-ten to have any chance of winning. However, if a non-ten is drawn that doesn't bust the player, then the dealer has no chance to bust (as could happen when many cards remain) and is restricted to a total of 18.

I don't know the exact dynamics of how and when indices break down but it is obvious that at some point any count's indices will break down.
 

assume_R

Well-Known Member
#20
k_c said:
I don't know the exact dynamics of how and when indices break down but it is obvious that at some point any count's indices will break down.
Or, more specifically, the index (the decision) is dependent not only on the TC but also on the current depth. This is probably non-linear, in that the deeper you are, the more important the depth becomes.
 