# Haifa mgsb: homework

This page is an experiment in student-generated answers to homework questions. In Markets, Games, and Strategic Behavior, we covered a diverse range of topics. The lecturer will pose a question and the students will provide the answer(s). Feel free to improve on other students' answers, add alternative answers, and pose new questions. Feel free to clarify the questions as well.

### Question 1

Take from class the Diamond-Dybvig model with R=2 and L=0.5, two impatient depositors and two patient depositors. Each depositor has $1000 to deposit in the bank. Let us say that deposits are insured up to fraction f. For what values of f is there only one equilibrium, and for what values are there two equilibria? (For each dollar put in the bank yesterday, early withdrawers are guaranteed to get f and late withdrawers get Rf.)

### Answer 1

The bank expects the two impatient depositors to withdraw today and the two patient depositors to withdraw tomorrow. Hence, yesterday, the bank set aside $2000 for today and invested $2000 for tomorrow. Today, the depositors must decide whether to withdraw today or tomorrow. We assume that the impatient depositors withdraw today, so we can examine this as a game between the two patient depositors. Each has to decide whether or not to withdraw today. The payoffs are as discussed in class.

|          | Today      | Tomorrow     |
|----------|------------|--------------|
| Today    | $750, $750 | $1000, $0    |
| Tomorrow | $0, $1000  | $2000, $2000 |

For a general f, we must recalculate each of the payoffs. If both patient depositors withdraw today, the bank can pay the first three depositors $1000 each. The last depositor receives only the insured $1000f. Since each of the four withdrawers is equally likely to be last in line, the expected payoff is $750 + 1000(f/4). If one withdraws today and the other withdraws tomorrow, the bank will be able to pay all three today, and the depositor withdrawing tomorrow receives the insured $2000f. Rewriting the game yields:

|          | Today                            | Tomorrow       |
|----------|----------------------------------|----------------|
| Today    | $750+1000(f/4), $750+1000(f/4)   | $1000, $2000f  |
| Tomorrow | $2000f, $1000                    | $2000, $2000   |

We see that if one patient depositor withdraws today, the second patient depositor only has an incentive to withdraw today as well if $750 + 1000(f/4) > $2000f, that is, if f < 3/7. Hence, if and only if f < 3/7, there is a possibility of two equilibria.
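The threshold can be checked numerically; a minimal Python sketch using the payoffs from the table above:

```python
# Payoffs to a patient depositor when the other patient depositor withdraws today.
def withdraw_today(f):
    # Three of the four early withdrawers get $1000; the last in line
    # gets only the insured $1000*f, so the expected payoff is 750 + 250*f.
    return 0.75 * 1000 + 0.25 * 1000 * f

def wait_until_tomorrow(f):
    # The long-term investment is fully liquidated paying today's withdrawers,
    # so a late withdrawer receives only the insured R*1000*f = 2000*f.
    return 2000 * f

# The bank-run equilibrium exists when withdrawing today is a best response.
threshold = 750 / 1750  # solves 750 + 250*f = 2000*f, i.e. f = 3/7
assert abs(threshold - 3 / 7) < 1e-12
assert withdraw_today(0.4) > wait_until_tomorrow(0.4)  # f < 3/7: a run is possible
assert withdraw_today(0.5) < wait_until_tomorrow(0.5)  # f > 3/7: waiting is a best response
```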

__Question about this solution-__

You claimed that if all four people want the money today, three will get $1000 and the fourth will get $1000f. This means that at the point in which all four people want the money today, the bank has 3000 + 1000f dollars to give. However, the bank has only $3000 to give. How does this fit into your solution?

**Todd's response**

Great question! I should probably have mentioned that the insurance comes from outside the bank. This is true for any case that I know of where deposits are insured. In the US most deposits are insured by the FDIC (Federal Deposit Insurance Corporation). Hence, the 1000f comes from this outside source.

**Another Question Please**
According to the solution the insurance is per group, but could it be per person? (So that the compensation ratios change but not the way of solving.)

**Todd's response**
Sorry, I don't understand the question. Do you mean that two different depositors may have two different insurance schemes? For instance, I would be insured up to 100%, but you up to 90%.

### Question 2

**Part A.**

Examine the second treatment of the Beer-Quiche game where there is a 2/3 chance of the proposer being strong.

Payoffs: Proposer, Responder

|                 | Flee         | Fight        |
|-----------------|--------------|--------------|
| Beer (Strong)   | $1.40, $1.25 | $0.60, $0.75 |
| Quiche (Strong) | $1.00, $1.25 | $0.20, $0.75 |
| Beer (Weak)     | $1.00, $0.75 | $0.20, $1.25 |
| Quiche (Weak)   | $1.40, $0.75 | $0.60, $1.25 |

Can there be a pooling equilibrium where both proposers choose Quiche and the responder flees? Does this seem reasonable to you?
**Question please**
We don't understand the question, can you please explain? (According to the homework question, the pooling equilibrium should be when both proposers choose beer and the responder flees.)

**I'm not Todd but I'll try to explain** - a pooling equilibrium is an equilibrium in which the weak type and the strong type both choose the same signal, and by doing so there is no signalling.

____________________________________________________________________________________________

**Part B.**

|                | fold          | call          |
|----------------|---------------|---------------|
| raise (Strong) | $1.00, -$1.00 | $2.00, -$2.00 |
| fold (Strong)  | -$1.00, $1.00 | -$1.00, $1.00 |
| raise (Weak)   | $1.00, -$1.00 | -$2.00, $2.00 |
| fold (Weak)    | -$1.00, $1.00 | -$1.00, $1.00 |

Assume the odds of a strong hand are 80%. Find any equilibrium. Is it signalling or pooling? Extra hard: what happens if it is 60%?

### Answer 2

**Part A.**

Beer-Quiche game

All in all, there can be such a pooling equilibrium. It can happen if the responder thinks that people who drink beer are weak.

However, this is not a reasonable equilibrium, because a strong proposer has no reason to choose quiche: he is better off drinking beer.

**Question to Todd**
Do you mean that we need to change the data in order to get a quiche pooling equilibrium, or is the question whether we can have a quiche pooling equilibrium under this data?
**Part B.**

Question: Is the solution for the homework question (simplified poker) that A chooses Raise and B chooses Call, so that the equilibria are (2, -2), (-2, 2)? If not, can you explain how to get to the right answer?

**Todd's response**

This can't be an equilibrium since if B chooses to call, then A would not raise if A has a weak hand.

What is an equilibrium (I hope) is that A (always) raises and B (always) folds. In such a case, B gets -$1.00. If B decided to call instead, the payoff would be .8 times the payoff from a strong type (-$2.00) plus .2 times the payoff from a weak type ($2.00). This equals .8*(-2)+.2*2=-1.2. This is less than -1, so B would never call. Note that A earns $1 from raising. This is better than the -$1 from folding.

Also, if the probability is 60% rather than 80%, then .6*(-2)+.4*2=-.4, and B has an incentive to call. But as I said above, A raising and B calling can't be an equilibrium. My new question to you: can separating be an equilibrium?
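Todd's expected-value comparison can be checked with a few lines of Python (a sketch, assuming A always raises as in the proposed equilibrium):

```python
def b_call_payoff(p_strong):
    # B loses $2 calling a strong hand and wins $2 calling a weak one.
    return p_strong * (-2) + (1 - p_strong) * 2

B_FOLD = -1.0  # B pays $1 whenever it folds to a raise

assert abs(b_call_payoff(0.8) + 1.2) < 1e-9  # worse than folding: B folds at 80%
assert abs(b_call_payoff(0.6) + 0.4) < 1e-9  # better than folding at 60%
assert b_call_payoff(0.8) < B_FOLD < b_call_payoff(0.6)
```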

**Another Question:**

hey Todd.
I don't understand why A raises and B folds can't be an equilibrium in part B.
We know that the payoff for player A in such a case will be 0.6*2-0.4*2=0.4.
A knows that if he folds with a weak hand, he will be punished by player B when he has a strong hand (B chooses to fold and the payoff for A is 1 instead of 2).

In the case of a separating equilibrium, A chooses to raise if he has a strong hand and B chooses to fold; if A has a weak hand, A chooses to fold and B chooses to call. The payoff for player A will then be 0.6*1-1*0.4=0.2. I think it's not a reasonable solution, because player A has an incentive to always raise (his payoff is 1 in that scenario, and 1>0.2), but then player B prefers to fold and pay 0.4<1.

Can you please tell me where my mistake is?

**Todd's response to another question**

Judging from your explanation, I think you meant to ask:

"I don't understand why A raising and B calling can't be an equilibrium in part B when the probability is 60%?"

If so, this is a great question! What you are doing is examining the game where player A chooses his strategy before knowing his information. In economics, we call this commitment power. You are assuming that A can commit to what he will do when he has a strong hand or a weak hand. Usually with signalling, we assume that the game is a one-shot game, and hence player A would not have any commitment power. This was the case in our experiment, where the matching was different each time. Accordingly, we must look at the best decision for Player A when he has a strong hand separately from the best decision for Player A when he has a weak hand. (You may debate whether or not this is the best way to model poker.) By doing so, player A raising and B calling can't be an equilibrium, since A would never raise when he is weak.

______________________________________________________________________________________________

**I think that the answer is:**
When A is strong he will raise and B will fold.
When A is weak he will fold, and B can call or fold.
So we can say that B always folds.

**Another Question:**

At 60%, is the answer: if A is strong, he raises and B calls; if A is weak, he folds and B calls?

**Todd's response**

The problem is that B won't call if a strong A is the only one raising.

**Todd's answer**

It may be easier for me to explain what happens. It is called **partial pooling**. All the strong A players raise and some of the weak A players do. B players only sometimes call. Since weak A's and B players are mixing their strategies, they must be indifferent between their two options.
For a weak A to be indifferent, note that by folding he would get -1. If B calls a fraction c of the time, a weak A who raises gets c*(-2)+(1-c)*1. Therefore, -1=c*(-2)+(1-c)*1, or c=2/3. If a weak A raises a fraction r of the time, the odds of a strong A given a raise are .6/(.6+.4r). Thus, for B to be indifferent,

```
-1 = (-2)*.6/(.6+.4r) + 2*(.4r)/(.6+.4r)
```

This yields r=.5.

So the equilibrium is a strong A raises, a weak A bluffs (raises) 50% of the time and folds the other 50%, B calls 2/3 of the time and folds the other 1/3.
*You would not be required to understand partial pooling for the exam, though it is a good sign if you do.*
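The two indifference conditions can be verified with exact arithmetic; a small Python sketch of this partial-pooling equilibrium:

```python
from fractions import Fraction

p_strong = Fraction(6, 10)  # 60% chance of a strong hand

# Weak A indifferent between folding (-1) and raising when B calls with
# probability c: -1 = c*(-2) + (1-c)*1  =>  c = 2/3.
c = Fraction(2, 3)
assert c * (-2) + (1 - c) * 1 == -1

# B indifferent between folding (-1) and calling when a weak A bluffs with
# probability r: the posterior chance of a strong hand given a raise is
# q = .6/(.6 + .4*r), and -1 = q*(-2) + (1-q)*2  =>  r = 1/2.
r = Fraction(1, 2)
q = p_strong / (p_strong + (1 - p_strong) * r)
assert q * (-2) + (1 - q) * 2 == -1
```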

### Question 3

In each of the treatments of our classroom experiment, were there two equilibria? In each, how many would have to withdraw for it to be a dominant strategy for you to withdraw as well (always want to withdraw)? Under what parameters do you think we will get a bank run? Why?

Todd, can you please solve this question?

### Answer 3

We really only made it through one treatment, so I will analyze that one. In that treatment there were 20 depositors: 10 patient and 10 impatient, with R=2 and L=0.5.

I made a mistake when I increased the number of subjects from 18 to 20. I only left the bank with $9 in the short-term investment and $9 in the long-term investment.

If only the 10 impatient withdrew, the bank would have to cash in $2 of the long-term investment to pay the 10th impatient depositor. This would leave $7 in the long-term investment. The 10 patient depositors would still get on average $1.40. Hence, it would make sense to wait, and there would be two equilibria.

If one patient depositor withdrew early, the bank would pay out on average $10/9=$1.11. If two patient depositors withdrew early, the bank would pay out on average $6/8=$.75.

Hence, if two or more patient depositors withdrew early (along with 10 impatient), then it would be a dominant strategy to withdraw early as well.
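These payouts can be reproduced with a short Python sketch (using the $9/$9 split, R=2, and L=0.5 from the answer above):

```python
# 20 depositors of $1 each; the bank (mistakenly) holds $9 short-term
# and $9 long-term. R = 2, L = 0.5.
def late_payout(early_patient):
    """Average tomorrow payout per patient depositor if `early_patient`
    patient depositors withdraw early along with the 10 impatient ones."""
    withdrawals_today = 10 + early_patient
    shortfall = max(0, withdrawals_today - 9)  # cash needed beyond the $9 reserve
    liquidated = shortfall / 0.5               # each $1 of cash costs $2 of investment (L = 0.5)
    long_term_left = 9 - liquidated
    remaining_patient = 10 - early_patient
    return 2 * long_term_left / remaining_patient  # R = 2 on what is left

assert late_payout(0) == 1.4                # $7 * 2 / 10
assert round(late_payout(1), 2) == 1.11     # $5 * 2 / 9
assert late_payout(2) == 0.75               # $3 * 2 / 8: running is now dominant
```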

### Question 4

There is a Beersheva to Haifa train line. Travellers go either between Haifa and Tel Aviv with demand 12-p, between Tel Aviv and Beersheva with demand 12-p, or between Haifa and Beersheva with demand 18-p. Say it is all owned by one profit-maximizing monopolist with marginal cost of zero. For simplicity, assume that the monopolist must set the price of the Haifa-Beersheva route equal to the sum of the other two. What would he charge for all three routes? Now say the government thinks it needs to add competition to the rail industry. It divides things into two companies. One takes care of the Haifa-Tel Aviv route and the other the Tel Aviv-Beersheva route. The price of the combined trip is the sum of the other two. What are the new prices? Who wins and who loses?

### Answer 4

Because the monopolist has to set exactly the same price for all three train lines, he will maximize 2*(12-p)p + (18-p)p --> p=7.

Now the situation is a bit different. Instead of having one monopolist for all the train lines, we have two separate monopolists, one for each leg. Each monopolist will try to maximize his own profits.

__Tel-Aviv -- Beer-Sheva__

maximizing- (12-p)p --> p=6

__Tel-Aviv -- Haifa__

maximizing- (12-p)p --> p=6

Therefore, the price of a train ride from Beer-Sheva to Haifa will now be 12 shekels.

The two companies win, because they earn more money than before. Furthermore, people who only want to go from Beer-Sheva to Tel-Aviv or only from Tel-Aviv to Haifa also gain, from a reduction of 1 shekel in the price. On the other hand, people who want to go from Beer-Sheva to Haifa suffer, in this new situation, from an increase of 5 shekels in the overall price.

**<Todd's comments>** An interesting interpretation of the problem. In the first part, you limited the monopolist to a single price for any route. While this may seem unusual, it is the limitation in many subway systems such as NYC. With this limitation, I think you solved the problem correctly. In the second part, it seems that the demand for the Beer-Sheva to Haifa route is missing. This demand may affect the choice of price by the two companies.

You should take the aggregate demand d=30-p --> p=7.5 for the Haifa-Beer-Sheva route and the other routes with a stop in Tel Aviv. For Tel-Aviv -- Beer-Sheva and Tel-Aviv -- Haifa the number of tickets sold is 4.5. Profit=33.75.

now government divides the company into two.

П1 = p1(12-p1-p2) ==> p1=4, d=4, П1=16
П2 = p2(12-p1-p2) ==> p2=4, d=4, П2=16
In total: П=32, D=4, p=8

Everybody loses. This is more reasonable after reading the class conclusion.

**<Todd's comments>**
Notice the aggregate demand here is different than in the previous response. I believe it is an attempt to determine the total demand for the Beersheva-Haifa route, given that by symmetry the optimal price of the Beersheva-Tel Aviv route should equal the price of the Tel Aviv-Haifa route, and half the price of the Beersheva-Haifa route. When adding demand curves one has to be careful. For example, adding two demands of 12-p together is not a demand of 24-p; it is a demand of 24-2p.
Anyway, I think if one wanted to do this as intended, with each half-trip priced at p/2, the profit as a function of the Beersheva-Haifa price p would be

2*(p/2)(12 - p/2) + p(18 - p).

Simplifying yields

30p - (3/2)p^2.

This is maximized at p=10 with a profit of 150. This is the same result as the analysis by a different method below.

I am not sure about the second part.

**<Todd's thoughts>**

I think it is great that people were able to come up with different ways to examine the problem. I am quite happy with this result since in the "real world" we (economists) don't always have well-defined problems and must still attempt to model them.

My way of looking at the problem is that we can call p1 the price of Beersheva to Tel Aviv, p2 the price of Tel Aviv to Haifa, and p3 the price of Beersheva to Haifa. My thought was that p3=p1+p2. While oversimplified, this avoids worrying about arbitrage: if p3>p1+p2, one can buy two separate tickets and stop for a cup of coffee in Tel Aviv. Using this, the monopolist has the following maximization problem:

max over p1, p2 of p1(12-p1) + p2(12-p2) + (p1+p2)(18-p1-p2).

The two first-order conditions (with respect to p1 and p2) are

30 - 4p1 - 2p2 = 0,
30 - 4p2 - 2p1 = 0.

Solving should yield p1=p2=5 and p3=10 for a total profit of 150.

Now if the company were divided into two separate halves, the firm on the Beersheva-Tel Aviv route would solve

max over p1 of p1(12-p1) + p1(18-p1-p2),

while the firm on the Tel Aviv-Haifa route would solve

max over p2 of p2(12-p2) + p2(18-p1-p2).

Notice that adding these two together yields the same expression for joint profit as above.

The first-order conditions of these maximization problems are

30 - 4p1 - p2 = 0,
30 - 4p2 - p1 = 0.

Solving yields p1=p2=6 with each firm earning a profit of 72 for a combined profit of 144. Notice that prices are higher and profits are lower.
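Both results can be confirmed numerically; a small Python sketch that grid-searches the monopoly prices and iterates best responses for the two separated firms:

```python
# Demands: 12 - p on each leg, 18 - p3 for the through route, with p3 = p1 + p2.
def joint_profit(p1, p2):
    return p1 * (12 - p1) + p2 * (12 - p2) + (p1 + p2) * (18 - p1 - p2)

# Monopolist: maximize joint profit over both prices on a 0.1 grid.
grid = [x / 10 for x in range(0, 121)]
best = max((joint_profit(p1, p2), p1, p2) for p1 in grid for p2 in grid)
assert best == (150.0, 5.0, 5.0)

# Duopoly: each firm keeps its own leg revenue plus its share of through traffic.
def firm1_profit(p1, p2):
    return p1 * (12 - p1) + p1 * (18 - p1 - p2)

p1 = p2 = 0.0
for _ in range(100):  # iterate best responses until they settle
    p1 = max(grid, key=lambda p: firm1_profit(p, p2))
    p2 = max(grid, key=lambda p: firm1_profit(p, p1))  # symmetric problem
assert (p1, p2) == (6.0, 6.0)
assert firm1_profit(p1, p2) + firm1_profit(p2, p1) == 144.0  # below 150
```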

### Question 5

Students like to go to the Haifa Ball depending upon how many other students go there. Tickets cost 32 NIS each. There are 1000 students indexed by i from 1 to 1000. Student i has value v_i=i. Student i has utility (in shekels) for going to the Ball of v_i*n/5000, where n is the total number of students going to the Ball. (i) If everyone believes n=500, which students will be willing to go to the ball? (ii) What is the threshold number of tickets sold above which it will be a success and below which it will be a failure? (iii) What is the equilibrium number of tickets sold if the ball is a success? (iv) What is the equilibrium number of tickets sold if the ball is a failure?

### Answer 5

(i) i*n/5000 > 32 --> 500i/5000 > 32 --> i > 320.

Because there are 1000 students, 1000-320=680 will go.

(ii) We know that P = nv/5000, where v is the value of the marginal student.

We also know that n = 1000-v, so v = 1000-n. Therefore P = n(1000-n)/5000 --> 32 = n(1000-n)/5000 --> n = 200 or n = 800. Thus the threshold will be n=200.

(iii) If the ball is a success, the number of tickets sold will be 800.

(iv) If the ball is a failure, the number of tickets bought will be 0.
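A short Python sketch checking parts (i)-(iv); the demand structure is taken from the question:

```python
import math

# Student i (i = 1..1000) goes to the Ball iff i*n/5000 > 32.
def attendees(n_believed):
    return sum(1 for i in range(1, 1001) if i * n_believed / 5000 > 32)

assert attendees(500) == 680  # part (i): students with i > 320 go

# Part (ii): 32 = n*(1000-n)/5000  =>  n^2 - 1000*n + 160000 = 0.
roots = sorted(500 + s * math.sqrt(500**2 - 160000) for s in (-1, 1))
assert roots == [200.0, 800.0]

assert attendees(800) == 800  # part (iii): success equilibrium is self-fulfilling
assert attendees(0) == 0      # part (iv): failure equilibrium
```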

### Question 6

In the takeover game, assume that the value to the seller is uniformly distributed between 50 and 100. Assume that the company is still worth 3/2 times the seller's value to the buyer. What should the buyer offer to the seller?

### Answer 6

The buyer will offer P = V_{min}*(3/2), thus P = 50*(3/2) = 75.
Therefore, the price that the buyer should offer is P=75. At that price the buyer can't lose any money but can win.

<Todd's comments> Very nice observation! It is true the buyer can't lose money and can only win, but perhaps he can do better. At a price of 75, the buyer makes a purchase 50% of the time. By increasing the price from 75 to 76, the buyer would increase his costs by 1 whenever the seller's value is below 75. However, there is a 2% chance (when the value is between 75 and 76) that he would make approximately 75*3/2-75=37.5. Since 2% of 38 is higher than 50% of 1, the buyer should raise his bid. See below for a solution.

Let us say that the buyer makes an offer of P. The odds that the seller has a value below P are (P-50)/50. In such a case, the seller would accept the offer. Now GIVEN that the seller accepts the offer, the expected value to the seller is (50+P)/2. The value to the buyer in this case is (3/2)(50+P)/2 = (3/4)(50+P). The price paid is simply P. Hence, the expected profit of the buyer given an offer of P would be ((P-50)/50)*((3/4)(50+P) - P).

This simplifies to (50P - P^2/4 - 1875)/50, which is increasing in P on [50,100] and hence maximized at P=100.

Thus, the buyer will always be purchasing. Note that for other parameters the solution (choice of P) can be interior as well.
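The corner solution can be checked numerically with a minimal Python sketch of the buyer's expected profit:

```python
def expected_profit(P):
    accept_prob = (P - 50) / 50              # seller value uniform on [50, 100]
    buyer_value_if_accepted = 1.5 * (50 + P) / 2
    return accept_prob * (buyer_value_if_accepted - P)

# Expected profit rises over the whole range, so the corner P = 100 is optimal.
offers = [50 + k / 10 for k in range(0, 501)]
best = max(offers, key=expected_profit)
assert best == 100.0
assert expected_profit(75) == 9.375
assert expected_profit(100) == 12.5
```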

### Question 7

A monopoly has a marginal cost of 5 and faces a demand of q=20-p. What price should he charge to maximize profits? Now say it is a vertical market of two firms: a supplier and a retailer. What price would the supplier charge the retailer? What would be the price charged to the end consumer? If the supplier charged a franchise fee in addition to the wholesale price, what would they be? Extra: solve the above problem for the general case of marginal cost c facing demand q=A-p (where A>c).

### Answer 7

First of all we need to find the MR. TR = 20Q - Q^2, thus MR = 20-2Q. We now set MR=MC, so Q=7.5, and the price is P = 20-7.5 = 12.5.

For the vertical market, the profit of the supplier is (P_s-5)(20-P_s)/2, which is maximized at P_s=12.5.
The retailer sets quantity Q = (20-P_s)/2,
therefore Q = (20-12.5)/2 = 3.75 and
P_c = 20-Q = 20-3.75 = 16.25.

__Franchise Fee__

If the supplier charges a franchise fee, then P_s = MC = 5.
For the retailer, Q = (20-P_s)/2 --> Q = 7.5.
The retailer's profits are 7.5*12.5 - 7.5*5 = 56.25.
Therefore, the franchise fee = 56.25.

**<Todd's Comments>**

General case.

A monopoly has total revenue TR=(A-q)q. Thus, the marginal revenue is MR=A-2q. Setting MR=MC yields A-2q=c, or q=(A-c)/2. The profit of the monopolist is (A-c)^2/4 and the price he charges is A-(A-c)/2=(A+c)/2.

With a vertical market there are two firms: a supplier and a retailer. The supplier faces the marginal production cost c.
The retailer faces the demand A-p. The retailer now acts like the single monopolist, except that instead of marginal cost c
it uses the price the supplier charges (call this p_s). From the monopoly solution above, the quantity chosen by the retailer is q_r=(A-p_s)/2.
By choosing the price, the supplier is equivalently choosing the quantity q_s. The price as a function of this quantity is
p_s=A-2q_s. The MR for the supplier is then MR=A-4q_s. Setting MR=MC yields A-4q_s=c, or q_s=(A-c)/4. Profit for the supplier is then
(p_s-c)q_s=(A-c)^2/8.
We can use the quantity q_s to determine the price to the retailer p_s.
This is p_s=A-2q_s=(A+c)/2. We then see that the price charged to the public is A-q_s=(3A+c)/4. Profit for the retailer is then
((3A+c)/4-(A+c)/2)(A-c)/4=(A-c)^2/16. Notice that the total profit goes down and the final price goes up.

For the franchise fee, the supplier does best by setting the price to his marginal cost c and setting F equal to the monopoly profit of (A-c)^2/4. Since the supplier would then make the full monopoly profit, he can do no better. Note that the retailer makes nothing. Interestingly, during the experiment some subjects as suppliers set the price to 0 and F to capture the whole profit of the retailer. This is not optimal, since a larger quantity than the monopoly quantity would be sold (the retailer gets the goods at too cheap a price).

We can get the solution to the question by plugging in 20 for A and 5 for c.
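A small Python sketch of the general formulas, evaluated at A=20 and c=5 with exact fractions:

```python
from fractions import Fraction

def monopoly(A, c):
    q = Fraction(A - c, 2)
    return A - q, q * (A - q - c)  # (price, profit) = ((A+c)/2, (A-c)^2/4)

def vertical(A, c):
    q = Fraction(A - c, 4)    # quantity under double marginalization
    p_s = A - 2 * q           # supplier's wholesale price, (A+c)/2
    p_final = A - q           # retail price, (3A+c)/4
    return p_s, p_final, (p_s - c) * q, (p_final - p_s) * q

price, profit = monopoly(20, 5)
assert (price, profit) == (Fraction(25, 2), Fraction(225, 4))  # 12.5 and 56.25

p_s, p_final, supplier_profit, retailer_profit = vertical(20, 5)
assert (p_s, p_final) == (Fraction(25, 2), Fraction(65, 4))    # 12.5 and 16.25
assert supplier_profit + retailer_profit < profit              # total profit falls
```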

### Question 8

El Al and British Air are competing for passengers on the Tel Aviv-Heathrow route. Assume marginal cost is 4 and demand is Q = 18 − P. If they choose prices simultaneously, what will be the Bertrand equilibrium? If they can collude and fix prices, what would they charge? In practice, with such competition, under what conditions would you expect collusion to be strong, and under what conditions would you expect it to be weak? Under what conditions should the introduction of BMI affect prices?

### Answer 8

In the first case the equilibrium price will be the one at which P=MC. Thus, P_{El Al}=P_{British Air}=4.
If the two companies collude, they will set the price which maximizes their total profit. Therefore, they will maximize (P-4)(18-P) --> Q=7 (q_{A}=q_{B}=3.5); P=11.
Collusion should be strong if there is more than one encounter between the two players, because each one will be afraid to betray the other due to the consequences afterwards.

The introduction of BMI will change prices if BMI has a different MC.

__________________________________________________________________________________________

I think we should use the criterion with the discount factor β, like we used in class: 24.5/(1-β) ≥ 49 --> β ≥ 0.5.
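The criterion can be sketched in Python, assuming grim-trigger punishment (a deviator captures the whole collusive profit of 49 once, then earns zero forever):

```python
# Collusive profit is (11-4)*(18-11) = 49 per period, split as 24.5 each.
def collusion_sustainable(beta):
    cooperate_forever = 24.5 / (1 - beta)  # discounted stream of half-profits
    deviate_once = 49.0                    # undercut and take the whole market once
    return cooperate_forever >= deviate_once

assert not collusion_sustainable(0.4)
assert collusion_sustainable(0.5)   # the threshold discount factor
assert collusion_sustainable(0.9)
```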

### Question 9

Solve a three stage ultimatum game where in the first stage player A offers player B an offer for a $10 pie. If this offer is rejected, then the pie shrinks to $8 and player B makes the offer. If this offer is rejected, then the pie shrinks to $6 and player A makes the offer. If this final offer is rejected, then the payoffs are 0 to both players. (Assume the possibility of continuous offers.)

### Answer 9

We will look at the solution in reverse (backward induction).

First, consider the third round: the most A can get there is (just under) the whole $6 pie, say $5.99. Knowing this, in the second round B can offer $5.99 to A and keep $2.01 of the $8 pie for himself, and A won't decline. Now let us see what A should offer in the first round so that B won't want to reject it (with full information, knowing all that we just wrote). The best offer for A in the first round such that B won't reject it (meaning B still gets $2.01) is $2.01 to B and $7.99 for himself.

Thus (A,B) = ($7.99, $2.01) in the first round is the equilibrium. (With fully continuous offers and acceptance when indifferent, this is the division ($8, $2).)
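The backward induction can be sketched in a few lines of Python (assuming continuous offers and acceptance when indifferent, so the shares are exact rather than penny-rounded):

```python
# Shrinking-pie ultimatum game with pies $10, $8, $6 and alternating proposers.
def first_proposer_share(pies):
    v = 0.0  # continuation value of the would-be proposer after the final rejection
    for pie in reversed(pies):
        v = pie - v  # each proposer keeps the pie minus the responder's continuation
    return v

a_share = first_proposer_share([10, 8, 6])
b_share = 10 - a_share
assert (a_share, b_share) == (8.0, 2.0)
# With discrete penny offers this becomes the ($7.99, $2.01) split above.
```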

### Question 10

You get in a taxi. Should you bargain over the price at the beginning or end of the trip? Why?

### Answer 10

Under the "ultimatum game" hypothesis it is better to nagotiate on a price after the ride in the taxi. Before the ride the taxi driver knows you need a service so he will over-price you. On the other hand when you get to the end of your ride the taxi driver needs you because he already gave you a service and now you are in the position of power so even you will offer him a low price he is suppose to accept it because it is better than nothing.

**Todd's response**

Yes, this is the correct game-theoretic solution. Unfortunately, there is a true story of two economists who were at a conference in Jerusalem. They tried this on a taxi driver. The driver got so mad, he locked the doors and drove them all the way back to where they started. (I think the story was on the back cover of the JPE.)

### Question 11

*By popular demand, I have had a request for another network externalities question. So here it is.*

Home Box Office is a pay-TV service that is based in the US. After showing only movies, they decided to increase subscribers by introducing shows such as Sex and the City and The Sopranos.
A person enjoys a show for its quality and for whether they can talk about it at the water cooler the next day at work. Given that someone has seen the show, the probability f that one can talk about it is just the number of people who have seen the show divided by the total number of people.
We index the possible viewers by i (from 1 to 1000). Viewer i has parameter v_{i}, where v_{i} = i. Viewer i values subscribing to HBO at 9 + (v_{i}/10)·f. Note that 9 is the value of HBO from sheer quality. The price charged for HBO is 30.
(i) If everyone believes f = .4, which people will subscribe to HBO?
(ii) What is the threshold number of subscribers above which HBO will be a
success and below which HBO will be a failure?
(iii) What is the equilibrium number of subscribers if HBO is a success?
(iv) What is the equilibrium number of subscribers if HBO is a failure?

### Answer 11

1. According to the information given, the value of subscribing for viewer i is 9 + (v_{i}/10)·f.

2. Moreover, we know that viewer i subscribes if 9 + (v_{i}/10)·f > 30.

3. The last equations we need are f = n/1000 and n = 1000 - v, so v = 1000 - n.

(i) If f = .4:
we will use equation (2) and insert the parameters into it.

9 + (i/10)(.4) > 30 → .04i > 21 → i > 525 → the 475 viewers with i > 525 will subscribe.

(ii) Here we will use all the equations mentioned above.

We will use equation (2) as a base and substitute the other two equations into it:

9 + ((1000-n)/10)(n/1000) = 30 → n(1000-n)/10000 = 21 → n^2 - 1000n + 210000 = 0.

We get 2 solutions:

n = 300 and n = 700, and 300 is the threshold.

(iii) n = 700 is the equilibrium in the state of success.

(iv) n = 0.
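A short Python sketch checking parts (i)-(iv), with the parameters taken from the question:

```python
import math

# Viewer i (i = 1..1000) subscribes iff 9 + (i/10)*f > 30.
def subscribers(f_believed):
    return sum(1 for i in range(1, 1001) if 9 + (i / 10) * f_believed > 30)

assert subscribers(0.4) == 475  # part (i): viewers with i > 525 subscribe

# Part (ii): 21 = n*(1000-n)/10000  =>  n^2 - 1000*n + 210000 = 0.
roots = sorted(500 + s * math.sqrt(500**2 - 210000) for s in (-1, 1))
assert roots == [300.0, 700.0]

assert subscribers(0.7) == 700  # part (iii): success equilibrium is self-fulfilling
assert subscribers(0.0) == 0    # part (iv): failure equilibrium
```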