I think you may find this interesting:

### Architecture is art

This is a breathtaking video on architecture through the viewpoint of a photographer. Somewhat surrealistic and fully computer generated.

Although I've embedded this video, you really shouldn't watch it here. You should watch it in fullscreen on Vimeo.

The Third & The Seventh from Alex Roman on Vimeo.


### Business Myths

Wise Bread is an interesting blog, which a few days ago featured an article named 10 Myths Non-Business People Believe About Business by Joshua Ritchie. In this article, Joshua refutes some persistent ideas about business that are often viewed as old truths. My general opinion of the article is that it's good, but it loses credibility on some points. Let me give an example.

Myth number 4: "Prices are whatever businessmen arbitrarily decide to charge."

In his article, Joshua turns against the myth that prices reflect greed. When prices rise, it's supposed to be because the selling company wants to squeeze an extra profit out of the product or service. While this certainly is a myth, the explanation he gives is only half the story and actually serves to reinforce another, related myth.

Joshua argues that, instead of reflecting businesses' greed, prices reflect the cost of producing their product or service. When demand for oil is high, Joshua argues, a gas company has to bid higher for the raw material for their product. Subsequently, they have to raise the gas prices in order to make a profit. This can be true sometimes, but an important piece of the puzzle is still missing.

We have to remember that companies charge whatever they can charge for a certain quantity of goods or services. When our gas company charged $2.50 for a gallon of gas, it was because they expected to sell a certain quantity of gas at that price, and because the expected quantity times that price would give them the best profit. If they lowered the price, they'd sell more but still wouldn't make the same amount of money (or at least their key figures wouldn't turn out as favorable), and if they raised the price, they wouldn't sell as much, even though each gallon would be more profitable.

This relationship is easy to see if we imagine the extremes: giving out free gas at one end and charging infinitely high prices at the other. Giving out free gas (charging $0) would make no profit at all, even though they'd "sell" fantastic quantities. On the other hand, charging infinitely high prices wouldn't make a profit either, because no one would buy. The optimum is somewhere in between.

*Profit as a function of price: prices on the X-axis, profit on the Y-axis.*
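This shape can be sketched in a few lines of Python. The linear demand function and every number below are invented purely for illustration:

```python
# A toy profit curve: made-up linear demand, made-up costs. Only the
# shape matters: a loss at both extremes, a maximum somewhere in between.

def units_sold(price):
    """Hypothetical demand: the higher the price, the fewer units sold."""
    return max(0.0, 1000 - 200 * price)

def profit(price, variable_cost=1.0, fixed_costs=500.0):
    q = units_sold(price)
    return price * q - variable_cost * q - fixed_costs

prices = [p / 10 for p in range(0, 101)]   # $0.00 .. $10.00
best = max(prices, key=profit)

print(profit(0.0))    # free gas: pure loss
print(profit(10.0))   # absurd price: nobody buys, fixed costs remain
print(best)           # the most profitable price lies in between
```

With these made-up numbers, both extremes lose money and the maximum falls at an intermediate price, exactly the hump described above.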

Now, businesses don't charge prices for the sole purpose of covering their costs. It's the other way around; they take on the costs necessary to acquire or maintain sales. No sound company has ever had a business meeting where they've said "OK, we have all of those costs, now what should we do in order to get our money back?". The whole reasoning is backwards.

The real questions to ask yourself when setting the price are:

- How many units can we sell at a specific price?
- How much money does that earn us?
- What are the variable costs of producing that many units?
- How much is left to cover our fixed costs after variable costs have been accounted for?
- What are our fixed costs?
- How much profit is left after fixed costs are accounted for?
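The questions above can be walked through numerically. The demand estimates, costs, and candidate prices below are all invented for the sake of the example:

```python
# Hypothetical figures: how many units we believe we can sell at each
# candidate price, plus variable and fixed costs.
expected_units = {2.00: 1200, 2.50: 1000, 3.00: 700, 3.50: 400}
variable_cost_per_unit = 1.50
fixed_costs = 600.0

def profit_at(price):
    units = expected_units[price]              # how many units at this price?
    revenue = price * units                    # how much money does that earn us?
    variable = variable_cost_per_unit * units  # variable costs at that volume
    contribution = revenue - variable          # what's left to cover fixed costs
    return contribution - fixed_costs          # what's left after fixed costs

best_price = max(expected_units, key=profit_at)
print(best_price, profit_at(best_price))
```

Note that in this sketch the best price is neither the cheapest (which merely breaks even) nor the most expensive; it's the one where volume times margin peaks.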

The costs that Joshua Ritchie talks about in his article are mainly the variable costs. Depending on the answers to the other questions, they may have a big or small impact. Sometimes they're almost irrelevant. In some markets, raising prices would cause such a drop in sales that this factor outweighs the cost factor. Rising costs may call not for raising prices, but rather for cancelling the product if there's no longer room for a good-enough profit.

The main point is this: whenever you feel like saying "This item is overpriced, it can hardly cost a tenth of this much to produce", remember that businesses don't set prices in order to cover costs. They set whatever prices they can in order to maximize profit, or rather to maximize utility (where profit is a main ingredient, but other key figures come into play as well). As long as people are buying and they make a good profit, prices are reasonable.


### Meta blog

“Meta blogging” translates to something like “blogging about blogging”. I don't like it. The reason is that, when I read a blog, I'm not interested in the blog itself; I'm interested in its topic. I don't care what the author has been up to recently, and I'm not interested in whatever adventures he puts himself through in order to splash his letters onto my screen. I suspect that my readers share that indifference to information about my person, and thus I'd like to keep any such uninteresting details to a minimum (I'm not doing a particularly good job here).

Objective meta-meta blog

Many meta blog entries are concerned with the responsibility of the author and the fact that the author hasn't managed to actually author anything for a while. The readers are supposed to have been subjected to great distress in the absence of quality content to digest off the pages of the blog in question, and therefore it lies within the author's responsibility to satisfy their hunger for information with a steady flow of well-thought-out entries.

Subjective meta blog

My readers are a sparse collection of nerds, friends, acquaintances, my brother and hopefully a few others (please say “hi” in the comments). However, I believe that my responsibility as a writer is proportional to the size of my reader base. Because that base is small, I believe my responsibility is limited to not lying or deliberately spreading misinformation on these pages. I'm sure I don't have to produce a certain quantity of text to meet my readers' expectations. And I haven't produced much of anything here lately. In the future, the updating of this blog will probably be just like my reader base – sparse. Maybe I'll post every two weeks or something along those lines.

After all, I write for my own pleasure. This blog fulfills a need to discuss some topics that I have no other media for, even if it means I'll discuss them with myself. And it helps me practice my English and general language skills.

So, hopefully you'll hear from me in the future and I'll be able to sense your presence from the slow ticking of the stat counter or from the precious few comments to my entries. Until then, have a nice day.

Oh, and I apologize for this entry.


### Conditional Probability

In my last entry on probability theory, I promised to have a more detailed look at conditional probability. We need this for solving the disease test problem and for solving the Monty Hall problem mathematically.

P(A | B) reads "the probability of A given B", and this is referred to as conditional probability. The formula for calculating this probability is

P(A | B) = P(A AND B) / P(B)

Why?

We're trying to figure out the probability that the event A is also true, given that B is true. Sometimes A may be true even though B is not, but we're not interested in those instances.
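The formula is easy to sanity-check by counting. Here's a small sketch with a fair die, where A is "the roll is even" and B is "the roll is greater than 3" (the events and numbers are my own illustration, not from the original entry):

```python
# Checking P(A | B) = P(A AND B) / P(B) by brute-force counting
# on a fair six-sided die.

outcomes = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # the roll is even
B = {4, 5, 6}   # the roll is greater than 3

p_b = len(B) / len(outcomes)              # P(B) = 3/6
p_a_and_b = len(A & B) / len(outcomes)    # P(A AND B) = 2/6 (rolls 4 and 6)
p_a_given_b = p_a_and_b / p_b             # (2/6) / (3/6) = 2/3

# Direct count among only the outcomes where B is true:
direct = len(A & B) / len(B)
print(p_a_given_b, direct)
```

Both routes give 2/3. Notice also that P(A) alone is 1/2, so here conditioning on B genuinely changes the probability, which anticipates the dependent case below.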

Independent events

Now, sometimes the probability of A is independent of B. For example, suppose we flip two coins and let each of the events A and B be true if the corresponding coin comes up heads. If B is true (that is, coin B comes up heads), the probability of A is still 50% (1). This means that

P(A | B) = P(A)

where P(A) is the a-priori probability and the conditional probability is unchanged due to the information that we gained from flipping the coin B.

Dependent events

But what if

P(A | B) != P(A)

("!=" reads "does not equal".) In this case, A is dependent on B. What this means is that, as we gain information about B, the probability of A changes from the a-priori probability. To compute it, we need to consider all the cases where B is true:

P(A AND B) + P(NOT A AND B) = P(B)

Those are all of the instances where B is true. So we know that B is true. Out of all the instances where B is true [P(B)], some of them are instances where A is also true [P(A AND B)]:

P(A | B) = P(A AND B) / P(B)

An example

Let's say we have drawn three cards from a deck: an ace, a king and a queen [A, K, Q]. We shuffle those three cards and draw two of them, trying to draw the ace. Let's say the first card is not an A. What's the probability that the second one is?

First, let's define two events:

Card1 is the event that the first card is an A

Card2 is the event that the second card is an A

P(Card1) = 1/3

P(NOT Card1) = 2/3

P(Card2 AND NOT Card1) = 1/3

The last probability is easy to see from an a-priori standpoint: the probability that any one of the drawn cards will be an A is 1/3, and since there is only one ace, the second card being an A automatically means the first one is not. So the probability that the second card is an A and the first one is not is also 1/3.

P(Card2 | NOT Card1) = P(Card2 AND NOT Card1) / P(NOT Card1)

P(Card2 | NOT Card1) = (1/3) / (2/3) = 1/2
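This result can also be verified by brute force. A short sketch enumerating all six equally likely ordered draws:

```python
from itertools import permutations

# Draw two of [A, K, Q] in order and count, among the draws where the
# first card is NOT the ace, how often the second card IS the ace.

draws = [p[:2] for p in permutations(["A", "K", "Q"])]   # 6 ordered draws

not_card1 = [d for d in draws if d[0] != "A"]    # first card is not an A
card2_too = [d for d in not_card1 if d[1] == "A"]  # ...and the second is

print(len(not_card1), len(card2_too))
print(len(card2_too) / len(not_card1))   # 2/4 = 1/2
```

Four of the six draws start with a non-ace, and in two of those the second card is the ace, confirming the 1/2 above.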

More intuitively, this can be illustrated as follows:

| Card 1 \ Card 2 | A | K | Q |
| --- | --- | --- | --- |
| A | 0 | 1/6 | 1/6 |
| K | 1/6 | 0 | 1/6 |
| Q | 1/6 | 1/6 | 0 |

As we can see in this matrix, there are six possible combinations of two cards. [AA, KK, QQ] are not possible, since there's only one card of each rank. Each possible combination has a 1/6 probability of occurring. [AK, AQ] are the possible combinations where the first card is an A. In the matrix, we can find the probabilities stated earlier:

P(Card1) = 2 * 1/6 = 1/3

P(NOT Card1) = 4 * 1/6 = 2/3

P(Card2 AND NOT Card1) = 2 * 1/6 = 1/3

Asking what P(Card2 | NOT Card1) is, is the same as asking "how big a fraction of the times that we don't pick an A as our first card do we pick an A as our second card?". In the matrix, there are 4 cases (the cells in the K and Q rows) where we don't pick an A as our first card. In two of those cases, our second card is an A. 2/4 = 1/2. But also, (2 * 1/6) / (4 * 1/6) = 1/2.

(2 * 1/6) / (4 * 1/6) = P(Card2 AND NOT Card1) / P(NOT Card1)

Stated in words, in 2 out of 4 cases when the first card is not an A (out of a total of 6 possible cases, which includes draws where the first card is an A), the second card is an A. So, knowing that the first card is not an A, we can narrow the situation down to those 4 cases, giving us a probability of 2/4 = 1/2.

I hope this entry has helped your understanding of how conditional probability works. It's not very formal, and it's not very extensive, but hopefully it's quite intuitive and at least free of any major gaps in its logic. However, it's late now, and I kind of just threw this one out there, because I haven't posted anything for a while.

__________

Notes:

(1) If you think otherwise, you're subject to the gambler's fallacy, which we'll have a closer look at in a future post.


### Soccer Penalty Kicks Article Flawed?

In my last post on this subject, I made a quick reference to an article on a mathematical examination of soccer penalty kicks. In that article, Tim Harford gives a brief survey of the findings of a paper by Ignacio Palacios-Huerta of Brown University. The paper (pdf) is quite an interesting read, and I really recommend anyone with some knowledge of statistics and game theory to read it. However, I believe I've detected a flaw in it. Before proceeding any further, though, I should include all the standard disclaimers, including, but not limited to, the fact that I'm in no way an authority or an expert in this area, and that there is a chance I've misunderstood things. I have all due respect for Mr Palacios-Huerta as a scientist and for Mr Harford as a writer; I'm merely a layman myself.

Anyways. After writing my first entry on the article by Tim Harford, I got to thinking. Quoting from Tim Harford's article:

> Professionals such as the French superstar Zinédine Zidane and Italy's goalkeeper Gianluigi Buffon are apparently superb economists: Their strategies are absolutely unpredictable, and, as the theory demands, they are equally successful no matter what they do, indicating that they have found the perfect balance among the different options. These geniuses do not just think with their feet.

At first, this seemed to be a good indication that Zidane and Buffon are indeed playing optimal strategies. But what hit me after writing my first entry is that playing optimal strategies doesn't make them indifferent between their own strategy choices. That is, playing optimally doesn't make them succeed equally often no matter what they do. It does, however, make their opponents indifferent between their strategy choices.

The optimal strategy is about making your opponent indifferent between his strategy choices. Recall, from my previous post on this subject, how the indifference equations for each player included the strategy choices for the other player, but not his own strategy choices. This relationship works two ways: Your playing optimally doesn't make you indifferent, and your indifference is not an indication that you're playing optimally.
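To make this concrete, here's a sketch of a 2×2 penalty game. The scoring probabilities are invented; the point is that the kicker's equilibrium mix is found by equalizing what the goalie concedes either way, not what the kicker himself scores:

```python
# A toy 2x2 penalty game with invented scoring probabilities.
# goal[(kick, dive)] = probability of a goal for that pair of choices.
goal = {
    ("L", "L"): 0.60, ("L", "R"): 0.95,
    ("R", "L"): 0.90, ("R", "R"): 0.50,
}

# The kicker's equilibrium mix p (probability of kicking left) solves
# the GOALIE's indifference condition:
#   p*goal[L,L] + (1-p)*goal[R,L] = p*goal[L,R] + (1-p)*goal[R,R]
p = (goal[("R", "L")] - goal[("R", "R")]) / (
    goal[("R", "L")] - goal[("R", "R")] + goal[("L", "R")] - goal[("L", "L")]
)

conceded_if_dive_left = p * goal[("L", "L")] + (1 - p) * goal[("R", "L")]
conceded_if_dive_right = p * goal[("L", "R")] + (1 - p) * goal[("R", "R")]
print(round(p, 3), round(conceded_if_dive_left, 3), round(conceded_if_dive_right, 3))
```

Note that p is derived entirely from how the goalie fares against each kick; the kicker's own success rates being equal is a statement about his *opponent's* mix, which is exactly the asymmetry described above.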

So the Harford article is wrong. The fact that Zinédine Zidane and Gianluigi Buffon seem to be indifferent does not indicate that they play optimal strategies. It does, however, indicate that their opponents, on an aggregate level, are playing optimally.

Now, is this Tim Harford's or Ignacio Palacios-Huerta's mistake? In order to find out, I read the original paper by Mr Palacios-Huerta.

In the paper, Palacios-Huerta starts by formulating the hypothesis that professional players are indeed playing a minimax strategy. To test it, he examines a sample of 1417 penalty kicks. I have no objections to his examination of all the players on an aggregate level. However, when testing the hypothesis for individual players, he seems to be looking at each individual player's strategy choices and their corresponding outcomes. Using Pearson statistics and p-values based on those figures, the hypothesis is rejected for five players.

On an aggregate level, we can look at the overall figures for both sides of the game. According to the hypothesis, both goalies and kickers should have equal success rates, no matter their choices. This can be tested, and the hypothesis rejected, with the tools used by Palacios-Huerta. But when testing the hypothesis for an individual player, we should look at that player's aggregated opponents' success rates for their strategy choices, which, it seems to me, is not what he's done.

So what hypothesis should we reject when the individual figures used by Palacios-Huerta don't give a good enough match with the hypothesis? Well, not the hypothesis that that particular player is playing minimax, but rather that his opponents, on an aggregate level, play minimax. This is not, in itself, an uninteresting hypothesis to examine, but, as far as I can see, it's not the one intended by Palacios-Huerta.

Unfortunately, the tables provided in the paper don't allow for the data to be rearranged so that we can perform this test on our own. There is no information on the strategy choices of the opponents of each individual player and their corresponding outcomes, so we can't examine the hypothesis that a specific individual player plays optimally, without accessing the underlying data.

So, what do we know about Zinédine Zidane and Gianluigi Buffon? Not much, but it seems they've been playing against superb economists.

__________

Notes:

Again, let me remind you of the disclaimers. I'm really a layman, and I may very well be wrong: either entirely, or just in my interpretation of the paper.

External links in this post:

World Cup Game Theory - What economics tells us about penalty kicks by Tim Harford. The quoted article in Slate Magazine.

Ignacio Palacios-Huerta at the Brown University website

Professionals Play Minimax by Ignacio Palacios-Huerta of Brown University (pdf format)

Other resources:

Tim Harford - The Undercover Economist


### The Monty Hall Problem, Part 4

Hopefully, we all agree that we should switch when faced with the problem given in the basic formulation of the Monty Hall problem. However, in the alternative formulation given in my first entry, there's one major difference. In the original game, the host was obliged to reveal a second door after you picked one. In my version of the game, I had made no such commitment. So why would I give you a chance to change your mind? Possibly out of generosity, sure, but most probably because I knew you had made the right choice and wanted you to switch to an empty cup.

Put in game-theory terms, switching is a dominated strategy. Your strategy choices are to switch when given the opportunity or to never switch (switch / stay). I have more strategy choices than you do. This is a full payoff matrix of the game for all possible strategy choices, with your choices represented as columns and my choices as rows. The outcome values are the probabilities of you winning the bill.

| My strategy | Switch | Stay |
| --- | --- | --- |
| No-No | 1/3 | 1/3 |
| No-Yes | 1 | 1/3 |
| Yes-No | 0 | 1/3 |
| Yes-Yes | 2/3 | 1/3 |

My strategy choice "Yes-No", for example, means that I offer you an opportunity to switch if you choose the right cup initially, but I don't offer you that opportunity if you choose the wrong one. So the first Yes or No refers to whether I offer you that opportunity when you choose the right cup, and the second one to the case when you choose the wrong one.

Notice that all of my strategy choices but "Yes-No" (offering the opportunity to switch only when you've picked the right cup) are dominated. This means that they can never lead to better results, only worse, depending on your strategy choice. So there is no reason for me to choose any of those strategies. Thus removing those strategy choices, we get a much simpler payoff matrix:

| My strategy | Switch | Stay |
| --- | --- | --- |
| Yes-No | 0 | 1/3 |

It should now be obvious that switching is a dominated strategy. So in a game-theory sense, switching is a bad strategy. However, game theory isn't everything. Maybe you have a "read" on me, making you believe that I want you to have the bill. Maybe you think that I intended to always give you the switching opportunity as in the original Monty Hall problem. So there may be reasons to deviate from game-theory optimal play. But lacking such guidance, you're probably better off resorting to game theory, in this case guaranteeing you a 1/3 chance to win the prize.
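The "Yes-No" row can also be checked by simulation. This sketch hard-codes the host to that strategy (my own illustration of the argument above):

```python
import random

# The cup game under the host strategy "Yes-No": a switch is offered
# only when the player's first pick is correct.

def play(switch, trials=100_000, rng=random.Random(0)):
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        offered = (pick == prize)   # host offers only on a correct pick
        if offered and switch:
            pick = -1               # any other cup is empty: a guaranteed loss
        if pick == prize:
            wins += 1
    return wins / trials

print(play(switch=True))    # always 0: you only get the offer when you're right
print(play(switch=False))   # about 1/3: the a-priori chance
```

Against this host, switching wins exactly never, while staying preserves the 1/3 from the initial pick, matching the simplified payoff matrix.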
