David Spence
Member
Kelly betting is what many of us use as at least one factor (OK, usually it’s not the most important factor, but bear with me) in determining how much to bet. One shortcoming it seems to have is that it considers only the present opportunity. I realize it protects the future by preserving current wealth, but it doesn’t account for the possibility that future opportunities may be much better than the present one.
For a trip I plan to take, I’ll first be playing a very marginal game with a ~3% edge for about 10 hours. After that, I’ll be playing a much better game with a ~20% edge for about 10 hours (travel logistics prevent me from playing the better game first, or for all 20 hours). Should I account for the opportunity cost of losing money at the marginal game, which would leave me less to bet at the much better game?
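To put rough numbers on that (assuming, purely for illustration, that both games are even-money bets, in which case the full-Kelly fraction is just the edge; the bankroll figure is hypothetical):

```python
# Very rough sketch of the opportunity-cost idea. Assumes, purely for
# illustration, that both games are even-money bets, so the full-Kelly
# fraction equals the edge. The bankroll is a made-up number.

bankroll = 10_000
edge_marginal, edge_good = 0.03, 0.20

# Full-Kelly bet sizes if each game were played in isolation:
bet_marginal = edge_marginal * bankroll     # $300 in the ~3% game
bet_good = edge_good * bankroll             # $2,000 in the ~20% game

# Every dollar lost in the marginal game shrinks later bets in the good
# game by its Kelly fraction, e.g. losing $300 early trims the first
# good-game bet by $60:
loss = bet_marginal
reduced_good_bet = edge_good * (bankroll - loss)
print(f"Good-game Kelly bet after a ${loss:.0f} loss: ${reduced_good_bet:,.0f}")
```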
Consider a simpler(?) case: you first play one round of Game A. In Game A, you have a 0.9 probability of losing your wager and a 0.1 probability of winning 1,000 times your wager. You then play one round of Game B. In Game B, you win 10,000 times your wager with certainty. How much of your $10,000 bankroll would you wager on Game A? Even though Game A has an astronomical EV, most APs probably wouldn’t wager anything close to full Kelly, half Kelly, or whatever fraction they might normally feel comfortable with. Instead, they’d likely save virtually all of their bankroll for Game B and the $100,000,000 guaranteed win. Here, you could treat each dollar as being worth $10,000, since that’s exactly what it will return in Game B. So when you risk $1 in Game A, you’re really risking $10,000. This could be an argument for wagering 1/10,000 Kelly in Game A, and similar reasoning could be applied to the more complicated situation in the previous paragraph.
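For concreteness, here’s a small sketch of the arithmetic in that example, using the standard Kelly formula f* = p − q/b for a bet paying b-to-1 with win probability p (the bankroll and payoffs are the ones stated above):

```python
# Game A: lose the wager with probability 0.9, win 1,000x with probability 0.1.
# Game B: win 10,000x the wager with certainty.

bankroll = 10_000

# Standard Kelly fraction for a single bet paying b-to-1 with win probability p:
#   f* = p - q / b
p, q, b = 0.1, 0.9, 1000
full_kelly_fraction = p - q / b                 # ~0.0991

full_kelly_bet = full_kelly_fraction * bankroll
print(f"Full-Kelly bet on Game A alone: ${full_kelly_bet:,.2f}")   # ~$991

# Guaranteed result if the whole bankroll is saved for Game B:
print(f"Guaranteed Game B win: ${bankroll * 10_000:,}")            # $100,000,000

# The 1/10,000-Kelly sizing suggested above, treating each dollar risked
# in Game A as if it were worth $10,000 of Game B winnings:
tiny_bet = full_kelly_bet / 10_000
print(f"1/10,000-Kelly bet on Game A: ${tiny_bet:,.2f}")           # ~$0.10
```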
I realize this discussion is a little muddled, and it’s not entirely clear whether I’m asking a question or just rambling. But I suspect I’m not the first to consider this common situation, and I was wondering what others’ thoughts might be.