Many people have heard about the Monty Hall problem. A similar (but less well known and more mathematically interesting) problem is the **two envelopes problem**, which Wikipedia describes as follows:

“You are given two indistinguishable envelopes, each containing money, one contains twice as much as the other. You may pick one envelope and keep the money it contains. Having chosen an envelope at will, but before inspecting it, you are given the chance to switch envelopes. Should you switch?”

The problem has been around in various forms since 1953 and has been extensively discussed (see, for example, Gerville-Réache for a comprehensive analysis and set of references), although I was not aware of this until recently.

We actually gave this problem (using boxes instead of envelopes) as an exercise in the supplementary material for our book, after Prof John Barrow of the University of Cambridge first alerted us to it. The ‘standard solution’ (as in the Monty Hall problem) says that you should always switch. This is based on the following argument:

If the envelope you choose contains $100, then there is an even chance the other envelope contains $50 and an even chance it contains $200. If you do not switch, you have won $100. If you do switch, you are just as likely to decrease the amount you win as to increase it. However, if you win, the amount increases by $100, and if you lose, it decreases by only $50. So your expected gain is positive (rather than neutral). Formally, if the envelope contains S, then the expected amount in the other envelope is (1/2)(S/2) + (1/2)(2S) = 5S/4 (i.e. 25% more).

In fact (as pointed out by a reader, Hugh Panton), the problem with the above argument is that it applies equally to the ‘other envelope’, thereby suggesting we have a genuine paradox. It turns out that the above argument only really works if you actually open the first envelope (which was explicitly not allowed in the problem statement) and discover it contains S. As Gerville-Réache shows, if the first envelope is not opened, the only probabilistic reasoning that does not use supplementary information leads to estimating the expected amount in each envelope as infinite. Bayesian reasoning can be used to show that there is no benefit in switching, but that is not what I want to describe here.
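A quick way to see that the 5S/4 argument cannot be right as stated is to simulate the game as actually posed: one fixed pair of amounts (S, 2S), shuffled, with you keeping either the first envelope (stick) or the second (switch). This is a minimal sketch; the choice of S = $100 is arbitrary:

```python
import random

def simulate(trials=100_000, s=100):
    """Simulate the two-envelope game with a fixed pair (s, 2*s),
    shuffled each trial. Returns the average winnings for stick and switch."""
    stick_total = switch_total = 0
    for _ in range(trials):
        pair = [s, 2 * s]
        random.shuffle(pair)
        stick_total += pair[0]   # keep the envelope you picked
        switch_total += pair[1]  # take the other one instead
    return stick_total / trials, switch_total / trials

stick, switch = simulate()
print(f"stick: {stick:.2f}, switch: {switch:.2f}")
# Both averages approach 3*s/2 = 150; switching gains nothing.
```

Note that in any single trial stick + switch = 3S exactly, so the two strategies must have identical expectations; the simulation never produces the 5S/4 advantage the flawed argument predicts.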

What I found interesting is that I could not find, in any of the discussions about the problem, a solution for the case where we assume there is a **finite maximum prize**, even if we allow that maximum to be as large as we like. With this assumption it turns out that we can prove (without dispute) that there is no benefit to be gained whether you stick or switch. See this short paper for the details:

Fenton N E, "Revisiting a Classic Probability Puzzle: the Two Envelopes Problem", 2018, DOI: 10.13140/RG.2.2.24641.04960

There are two common approaches to this problem. Both assume, correctly, that you have a 50% chance of picking the higher or the lower envelope. The first then assumes that your envelope contains S, so the other envelope has a 50% chance of containing S/2 and a 50% chance of containing 2S. This makes the expectation after switching (S/2)/2 + (2S)/2 = 5S/4, so you should switch.

My preferred version of the other solution, which combines two separate thoughts expressed here, assumes that the sum of the two envelopes is 3S; that is, one has S and one has 2S, but you don’t know which. So the expectation for one envelope is (S)/2 + (2S)/2 = 3S/2. The two terms are swapped for the expectation of the other: (2S)/2 + (S)/2 = 3S/2. Since they are the same, there is no reason to switch.

While the second argument gets the correct result, it doesn’t say what is wrong with the first. Getting an answer from what you believe to be a flawless solution won’t convince somebody else who believes the same thing about a solution that gets a different answer.

The flaw in the first argument is that it uses two random variables – X chosen from {High,Low} and Y chosen from {S/2,2S}. But it only states a distribution for X, and assumes it applies to Y. The second uses only X (unless you look in your envelope). To see how this affects the solution, consider three ways I could have set up the situation described in the problem above:

(A) I start with two envelopes. I seal $10 in one envelope, and $20 in the other, shuffle them and give them to you.

(B) I start with three envelopes. I seal $10 in one envelope. Then I seal $5 in second, and $20 in a third. I randomly pick one of the last two envelopes, shuffle it with the first, and give these two to you.

(C) I start with six envelopes. I seal $10 in one envelope. Then I seal $5 in four more, and $20 in the last one. I randomly pick one of the last five envelopes, shuffle it with the first, and give these two to you.

If you don’t look in an envelope, it can’t matter whether you switch or not. However, the expectation that applies to both is ($10+$20)/2 = $15 in (A), ($10)/2+[($5+$20)/2]/2 = $11.25 in (B), and ($10)/2+[($5*4+$20)/5]/2 = $9 in (C).
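The three unconditional expectations above can be checked with a short script. The `env_expectation` helper is my own naming, not from the original discussion; each scenario is described as a list of (probability, pair) entries, and your envelope is equally likely to be either member of the chosen pair:

```python
from fractions import Fraction as F

def env_expectation(pairs):
    """Expected value of a uniformly chosen envelope.
    pairs: list of (probability, (amount1, amount2))."""
    return sum(p * (a + b) / 2 for p, (a, b) in pairs)

# Scenario (A): the pair is always ($10, $20)
A = [(F(1), (10, 20))]
# Scenario (B): ($10, $5) or ($10, $20), each with probability 1/2
B = [(F(1, 2), (10, 5)), (F(1, 2), (10, 20))]
# Scenario (C): ($10, $5) with probability 4/5, ($10, $20) with probability 1/5
C = [(F(4, 5), (10, 5)), (F(1, 5), (10, 20))]

print(env_expectation(A))  # 15
print(env_expectation(B))  # 45/4, i.e. $11.25
print(env_expectation(C))  # 9
```

Using exact fractions avoids any floating-point doubt about whether the three answers really are $15, $11.25, and $9.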

But what if you open your envelope? If you see $5 or $20, or if you are in case (A), the decision is trivial. But what if you see $10? In (B), your expectation if you switch is ($5+$20)/2 = $12.50, so in this case the first solution turns out to be correct. But in (C), the expectation is ($5*4+$20)/5 = $8. And *neither* solution predicts this.
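The conditional calculation can be made mechanical. The sketch below (my own `other_given_seen` helper, using the same scenario encoding as above) weights each pair by the probability of that pair arising *and* of your uniformly chosen envelope showing the observed amount:

```python
from fractions import Fraction as F

def other_given_seen(pairs, seen):
    """Conditional expectation of the other envelope, given that your
    (uniformly chosen) envelope is opened and shows `seen`.
    pairs: list of (probability, (a, b))."""
    weight = F(0)  # total probability of observing `seen`
    total = F(0)   # probability-weighted value of the other envelope
    for p, (a, b) in pairs:
        for mine, other in ((a, b), (b, a)):
            if mine == seen:
                weight += p / 2          # this pair AND you hold `mine`
                total += (p / 2) * other
    return total / weight

B = [(F(1, 2), (10, 5)), (F(1, 2), (10, 20))]
C = [(F(4, 5), (10, 5)), (F(1, 5), (10, 20))]

print(other_given_seen(B, 10))  # 25/2, i.e. $12.50
print(other_given_seen(C, 10))  # 8
```

In (B) the two pairs are equally likely, so seeing $10 reproduces the 5S/4-style answer; in (C) the prior weight on the ($10, $5) pair drags the conditional expectation down to $8, which neither of the two standard solutions predicts.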

My point is that you need to know the distribution of possible values in the pair for most solutions. The Principle of Indifference is adequate to tell you whether you have chosen the higher or lower value, but not to tell you anything about the values themselves. (Note that when I used the sum, X and Y are essentially the same random variable.)

And there are distributions where, whatever value you see (other than the highest possible value), there is an expected gain from switching. This is why, if you don’t assume there is an upper limit, you can have a distribution where you should always switch.
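A well-known unbounded example of this kind, usually attributed to Broome, assigns probability 2^n / 3^(n+1) to the pair (2^n, 2^(n+1)) for n = 0, 1, 2, ... The sketch below checks that, for every observed amount 2^k with k ≥ 1, the conditional expectation of the other envelope is (11/10) × 2^k, so switching always looks profitable (for k = 0 you must hold the smaller amount, so switching gains there too):

```python
from fractions import Fraction as F

def p_pair(n):
    """Broome-style prior: P(pair = (2**n, 2**(n+1))) = 2**n / 3**(n+1)."""
    return F(2 ** n, 3 ** (n + 1))

def expected_other(k):
    """Conditional expectation of the other envelope, given your envelope
    (chosen uniformly from the pair) is opened and contains 2**k."""
    weight = p_pair(k) / 2                  # you hold the smaller of pair k
    total = (p_pair(k) / 2) * 2 ** (k + 1)
    if k >= 1:                              # ...or the larger of pair k-1
        weight += p_pair(k - 1) / 2
        total += (p_pair(k - 1) / 2) * 2 ** (k - 1)
    return total / weight

for k in range(1, 6):
    seen = 2 ** k
    print(seen, expected_other(k))  # always (11/10) * seen: a strict expected gain
```

Of course the unconditional expectation of each envelope under this prior is infinite, which is exactly the pathology that assuming a finite maximum prize rules out.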