Expectation#

Given the rational play principle, we know that repeatedly playing the same game will provide players with enough time and information to optimize their strategies. Repeated play allows the players to develop an understanding of expectation within the game, and different strategies will lead to different expected values.

Nash Equilibrium

Nobel Prize-winning mathematician John Nash considered game theory strategy in terms of optimization. In calculus or differential equations, we often find that a saddle point exists. When we find such a saddle point in a game, neither player can improve their payoff by unilaterally switching strategies. In such a case, we call the resulting equilibrium pair or mixed strategy solution a Nash Equilibrium.
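The saddle point test can be sketched numerically. A pure strategy saddle point exists when the best of the row minima (the row player's guaranteed floor) equals the best of the column maxima (the column player's guaranteed ceiling). The matrix below is a hypothetical example chosen for illustration, not a game from the text:

```python
import numpy as np

# Hypothetical 3x3 payoff matrix (payoffs to the row player);
# illustrative values only, not taken from the text.
M = np.array([[4, 2, 5],
              [1, 0, 3],
              [2, 1, 4]])

maximin = M.min(axis=1).max()   # worst case of each row, best of those
minimax = M.max(axis=0).min()   # best case of each column, worst of those

# A saddle point (pure strategy equilibrium) exists exactly when the
# two security levels coincide.
has_saddle = bool(maximin == minimax)
```

Here both values are 2, realized at row 1, column 2, so neither player gains by switching away from that pair.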

Equalized Expectations#

We will soon see why the idea of equalized expectations is so powerful in two-person, zero-sum matrix games. If Rose chooses

\[\begin{split}\vec r = \left[\begin{array}{c}0.25\\0.50\\0.25\end{array}\right]\end{split}\]

such that the expected value of strategies \(A,B\) and \(C\) are identical, then Colin has no incentive to switch his strategy choice. No matter what choice he makes, his expected value will remain fixed. For John Nash, this idea was connected to how he thought of an equilibrium for the game.
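We can check the equalizing property directly. The payoff matrix below is a hypothetical stand-in (the game's actual matrix is not reproduced here); what matters is that, against Rose's mix \(\vec r\), every one of Colin's pure strategies yields the same expected value:

```python
import numpy as np

# Hypothetical payoff matrix (payoffs to Rose); illustrative values
# only -- the text's actual game matrix is assumed, not shown here.
M = np.array([[2, 0, 4],
              [0, 2, 0],
              [2, 0, 0]])

# Rose's mixed strategy from the text
r = np.array([0.25, 0.50, 0.25])

# Expected payoff of each of Colin's pure strategies A, B, C
expectations = r @ M
```

Because all three entries of `expectations` are equal, Colin has no incentive to favor one column over another: every choice leaves his expected value fixed.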

The idea of equalized expectations will allow us to develop solution paths for finding the Nash Equilibria of many matrix games.

Expected Value Formula#

What would happen if perfectly rational and self-interested players engaged in repeated play of a game? They would quickly settle into an equilibrium pair if the game has a Pure Strategy Solution (PSS), or into an optimal set of mixed strategies played in precise proportions if it has a Mixed Strategy Solution (MSS).

Repeated play in game theory behaves much as Bernoulli trials behave in probability. Over the course of many thousands of trials, the average payoff to a player for playing a single strategy settles down to a fixed value. This is the expected value of that strategy.
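A quick simulation illustrates this convergence. The payoff values below are hypothetical (payoffs to Rose when Colin holds one column fixed while Rose draws rows from her mixed strategy); the point is that the running average approaches the theoretical expected value:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Hypothetical payoffs to Rose against one fixed column of Colin's
# (illustrative values, not taken from the text).
payoffs = [2, 0, 2]
r = [0.25, 0.50, 0.25]   # Rose's mixed strategy

# Theoretical expected value: sum of payoff * probability
expected = sum(p * x for p, x in zip(r, payoffs))

# Repeated play: draw Rose's row from r many thousands of times
trials = 100_000
total = sum(random.choices(payoffs, weights=r)[0] for _ in range(trials))
average = total / trials
```

With 100,000 trials, `average` lands within a small fraction of `expected`, just as the long-run frequency in Bernoulli trials approaches the underlying probability.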

Expected Value

Given a set \(X\) of \(n\in\mathbb{N}\) outcomes \(x_1,\dots,x_n\) with associated probabilities \(p(x_1),\dots,p(x_n)\), the expected value of \(X\) is given as follows:

\[E(X) =\sum_{i=1}^{n} x_i\,p(x_i)\]
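The formula translates directly into code. The distribution below is a small made-up example (values and probabilities are assumptions for illustration):

```python
# Direct computation of E(X) = sum of x_i * p(x_i) for a small
# hypothetical distribution (values not taken from the text).
outcomes = [1, 2, 3, 4]
probs = [0.1, 0.2, 0.3, 0.4]   # probabilities sum to 1

expected_value = sum(x * p for x, p in zip(outcomes, probs))
# 1*0.1 + 2*0.2 + 3*0.3 + 4*0.4 = 3.0
```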