# Congestion game

Congestion games are a class of games in game theory, first proposed by Rosenthal in 1973. In a congestion game we define players and resources, where the payoff of each player depends on the resources it chooses and on the number of players choosing the same resources. Congestion games are a special case of potential games: Rosenthal proved that every congestion game is a potential game, and Monderer and Shapley (1996) proved the converse, that for any potential game there is a congestion game with the same potential function.

## Motivation

Consider a traffic network where two players originate at point O and need to get to point T. Suppose that node O is connected to node T via connection points A and B, where A is a little closer than B (i.e. A is more likely to be chosen by each player). However, both connection points get easily congested: the more players pass through a point, the greater the delay of each player becomes, so having both players go through the same connection point causes extra delay. A good outcome in this game would be for the two players to "coordinate" and pass through different connection points. Can such an outcome be achieved? And if so, what will the cost be for each player?

## Definition

Discrete congestion games are games with the following components:

- a base set ${\displaystyle E}$ of congestible elements (resources);
- ${\displaystyle n}$ players;
- for each player ${\displaystyle i}$, a finite set of strategies ${\displaystyle S_{i}}$, where each strategy ${\displaystyle P\in S_{i}}$ is a subset of ${\displaystyle E}$;
- for each element ${\displaystyle e}$ and each number of users ${\displaystyle x_{e}}$, a delay ${\displaystyle d_{e}(x_{e})}$.

Given a strategy profile, let ${\displaystyle x_{e}}$ denote the number of players whose strategy contains element ${\displaystyle e}$. The delay experienced by player ${\displaystyle i}$ playing strategy ${\displaystyle P_{i}}$ is ${\displaystyle \textstyle \sum _{e\in P_{i}}d_{e}(x_{e})}$, and each player seeks to minimize their own delay.

## Example

Let's consider the following directed graph, where each player has two available strategies - going through A or going through B - leading to a total of four possibilities. The following matrix expresses the costs of the players in terms of delays, depending on their choices:

*The directed graph for a simple congestion game.*

Cost matrix:

| p1 \ p2 | A      | B      |
|---------|--------|--------|
| **A**   | (5, 5) | (2, 3) |
| **B**   | (3, 2) | (6, 6) |

Both (A,B) and (B,A) are pure Nash equilibria in this game.
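The pure Nash equilibria of this matrix can be checked mechanically by testing every unilateral deviation. A minimal sketch (the `costs` dictionary simply transcribes the matrix above):

```python
# Pure-Nash check for the 2x2 cost matrix above (entries are delays; lower is better).
costs = {
    ("A", "A"): (5, 5), ("A", "B"): (2, 3),
    ("B", "A"): (3, 2), ("B", "B"): (6, 6),
}
strategies = ["A", "B"]

def is_pure_nash(s1, s2):
    """Neither player can lower their own delay by a unilateral switch."""
    c1, c2 = costs[(s1, s2)]
    if any(costs[(d, s2)][0] < c1 for d in strategies):
        return False  # player 1 has a profitable deviation
    if any(costs[(s1, d)][1] < c2 for d in strategies):
        return False  # player 2 has a profitable deviation
    return True

equilibria = [(s1, s2) for s1 in strategies for s2 in strategies if is_pure_nash(s1, s2)]
print(equilibria)  # -> [('A', 'B'), ('B', 'A')]
```

This confirms that exactly the two "coordinated" outcomes survive, matching the motivation section: the players are best off splitting across the two connection points.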

## Existence of Nash equilibria

The existence of Nash equilibria can be shown by constructing a potential function that assigns a value to each outcome. Moreover, this construction will also show that iterated best response finds a Nash equilibrium. Define ${\displaystyle \textstyle \Phi =\sum _{e\in E}\sum _{k=1}^{x_{e}}d_{e}(k)}$. Note that this function is not the social welfare ${\displaystyle \textstyle \sum _{e\in E}x_{e}d_{e}(x_{e})}$, but rather a discrete integral of sorts. The critical property of a potential function for a congestion game is that if one player switches strategy, the change in his delay is equal to the change in the potential function.

Consider the case when player ${\displaystyle i}$ switches from ${\displaystyle P_{i}}$ to ${\displaystyle Q_{i}}$. Elements that are in both of the strategies remain unaffected, elements that the player leaves (i.e. ${\displaystyle e\in P_{i}-Q_{i}}$) decrease the potential by ${\displaystyle d_{e}(x_{e})}$, and the elements the player joins (i.e. ${\displaystyle e\in Q_{i}-P_{i}}$) increase the potential by ${\displaystyle d_{e}(x_{e}+1)}$. This change in potential is precisely the change in delay for player ${\displaystyle i}$, so ${\displaystyle \Phi }$ is in fact a potential function.

Now observe that any minimum of ${\displaystyle \Phi }$ is a pure Nash equilibrium. Fixing all but one player, any improvement in strategy by that player corresponds to decreasing ${\displaystyle \Phi }$, which cannot happen at a minimum. Since there are only finitely many configurations, ${\displaystyle \Phi }$ attains its minimum, so an equilibrium exists.
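The argument above can be made concrete on the two-player example: Rosenthal's potential strictly decreases with every improving move, so iterated best response must terminate at a pure Nash equilibrium. A minimal sketch, with the resource delays ${\displaystyle d_{e}(k)}$ read off from the cost matrix:

```python
# Rosenthal's potential and iterated best response on the example above.
# delays[e][k] = d_e(k): delay on resource e when k players use it.
delays = {"A": {1: 2, 2: 5}, "B": {1: 3, 2: 6}}

def potential(profile):
    """Phi = sum over elements e of sum_{k=1}^{x_e} d_e(k)."""
    phi = 0
    for e in delays:
        load = sum(1 for s in profile if s == e)
        phi += sum(delays[e][k] for k in range(1, load + 1))
    return phi

def player_delay(profile, i):
    """Delay of player i: d_e(x_e) for the resource e they chose."""
    e = profile[i]
    load = sum(1 for s in profile if s == e)
    return delays[e][load]

def best_response_dynamics(profile):
    """Let any improving player switch until no one can; Phi strictly
    decreases at each step, so the loop terminates at a pure Nash equilibrium."""
    improved = True
    while improved:
        improved = False
        for i in range(len(profile)):
            for alt in delays:
                trial = profile[:i] + [alt] + profile[i + 1:]
                if player_delay(trial, i) < player_delay(profile, i):
                    profile, improved = trial, True
    return profile

result = best_response_dynamics(["A", "A"])
print(result)  # -> ['B', 'A']
```

Starting from the bad outcome where both players take A (potential 7), a single improving move reaches the equilibrium (B, A) with potential 5, after which no player can improve.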

## Continuous congestion games

Continuous congestion games are the limiting case as ${\displaystyle n\rightarrow \infty }$. In this setup, we consider players as "infinitesimally small." We keep ${\displaystyle E}$ a finite set of congestible elements. Instead of recognizing ${\displaystyle n}$ players, as in the discrete case, we have ${\displaystyle n}$ types of players, where each type ${\displaystyle i}$ is associated with a number ${\displaystyle r_{i}}$, representing the rate of traffic for that type. Each type picks a strategy from a strategy set ${\displaystyle S_{i}}$, which we assume are disjoint. As before, assume that the ${\displaystyle d_{e}}$ are monotone and positive, but add the assumption that they are continuous as well. Finally, we allow players in a type to distribute fractionally over their strategy set. That is, for ${\displaystyle P\in S_{i}}$, let ${\displaystyle f_{P}}$ denote the fraction of players in type ${\displaystyle i}$ using strategy ${\displaystyle P}$. Assume that ${\displaystyle \textstyle \sum _{P\in S_{i}}f_{P}=r_{i}}$.

## Existence of equilibria in the continuous case

Note that strategies are now collections of strategy profiles ${\displaystyle f_{P}}$. For a strategy set ${\displaystyle S_{i}}$ of size ${\displaystyle n}$, the collection of all valid profiles is a compact subset of ${\displaystyle [0,r_{i}]^{n}}$. As before, define the potential function as ${\displaystyle \textstyle \Phi =\sum _{e\in E}\int _{0}^{x_{e}}d_{e}(z)\,dz}$, replacing the discrete integral with the standard one.

As a function of the strategy, ${\displaystyle \Phi }$ is continuous: ${\displaystyle d_{e}}$ is continuous, and ${\displaystyle x_{e}}$ is a continuous function of the strategy. Then by the extreme value theorem, ${\displaystyle \Phi }$ attains its global minimum.

The final step is to show that a minimum of ${\displaystyle \Phi }$ is indeed a Nash equilibrium. Assume for contradiction that there exists a collection of ${\displaystyle f_{P}}$ that minimize ${\displaystyle \Phi }$ but are not a Nash equilibrium. Then for some type ${\displaystyle i}$, there exists some improvement ${\displaystyle Q}$ over the current choice ${\displaystyle P}$. That is, ${\displaystyle \textstyle \sum _{e\in P}d_{e}(x_{e})>\sum _{e\in Q}d_{e}(x_{e})}$. The idea now is to take a small amount ${\displaystyle \delta }$ of players using strategy ${\displaystyle P}$ and move them to strategy ${\displaystyle Q}$. For any ${\displaystyle e\in Q}$, we have increased its load ${\displaystyle x_{e}}$ by ${\displaystyle \delta }$, so its term in ${\displaystyle \Phi }$ is now ${\displaystyle \textstyle \int _{0}^{x_{e}+\delta }d_{e}(z)\,dz}$. Differentiating the integral, this change is approximately ${\displaystyle \delta \cdot d_{e}(x_{e})}$, with error ${\displaystyle O(\delta ^{2})}$. The equivalent analysis holds for the decrease on the elements of ${\displaystyle P}$.

Therefore, the change in potential is approximately ${\displaystyle \textstyle \delta (\sum _{e\in Q}d_{e}(x_{e})-\sum _{e\in P}d_{e}(x_{e}))}$, which is less than zero. This is a contradiction, as then ${\displaystyle \Phi }$ was not minimized. Therefore, a minimum of ${\displaystyle \Phi }$ must be a Nash equilibrium.
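To illustrate the continuous potential, consider a hypothetical two-link instance (a Pigou-style network, not taken from the text above): one unit of traffic from O to T, where link 1 has delay ${\displaystyle d_{1}(x)=x}$ and link 2 has constant delay ${\displaystyle d_{2}(x)=1}$. Then ${\displaystyle \Phi (x)=x^{2}/2+(1-x)}$, where ${\displaystyle x}$ is the fraction of traffic on link 1. A minimal sketch that locates its minimum by grid search:

```python
# Continuous potential for a hypothetical Pigou-style instance (an assumption,
# not from the article): d1(x) = x, d2(x) = 1, total traffic rate 1.
# Phi(x) = integral_0^x z dz + integral_0^{1-x} 1 dz = x^2/2 + (1 - x).

def phi(x):
    return x * x / 2 + (1 - x)

# Simple grid search for the minimizer of Phi over [0, 1].
best_x = min((i / 10000 for i in range(10001)), key=phi)
print(round(best_x, 3))  # -> 1.0
```

At the minimum all traffic uses link 1, and this is indeed the Nash flow: with ${\displaystyle x=1}$ both links have delay 1, so no infinitesimal user can improve by switching.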

## Quality of solutions and Price of anarchy

Since there exist Nash equilibria in continuous congestion games, the next natural topic is to analyze their quality. We will derive bounds on the ratio between the delay at Nash and the optimal delay, otherwise known as the Price of Anarchy. We begin with a technical condition on the delay functions: say that a delay function ${\displaystyle d}$ is ${\displaystyle (\lambda ,\mu )}$-smooth if for all nonnegative ${\displaystyle x}$ and ${\displaystyle x^{*}}$ we have ${\displaystyle \textstyle x^{*}d(x)\leq \lambda x^{*}d(x^{*})+\mu xd(x)}$.

Now if the delays are ${\displaystyle (\lambda ,\mu )}$-smooth, ${\displaystyle f}$ is a Nash equilibrium, and ${\displaystyle f^{*}}$ is an optimal allocation, then ${\displaystyle \textstyle \sum _{e}x_{e}d_{e}(x_{e})\leq {\frac {\lambda }{1-\mu }}\sum _{e}x_{e}^{*}d_{e}(x_{e}^{*})}$. In other words, the price of anarchy is at most ${\displaystyle \textstyle {\frac {\lambda }{1-\mu }}}$.
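The bound can be checked on the same hypothetical Pigou-style instance as before (an assumption for illustration, not from the text): affine delays are known to be ${\displaystyle (1,1/4)}$-smooth, giving the bound ${\displaystyle \lambda /(1-\mu )=4/3}$, and the Pigou network attains it exactly. A minimal sketch:

```python
# Price of anarchy on a hypothetical Pigou instance: d1(x) = x, d2(x) = 1,
# unit traffic rate. Affine delays are (1, 1/4)-smooth, so the smoothness
# bound is lambda / (1 - mu) = 1 / (1 - 1/4) = 4/3.

def total_delay(x):
    """Social cost sum_e x_e d_e(x_e), with x = fraction of traffic on link 1."""
    return x * x + (1 - x) * 1

nash_cost = total_delay(1.0)  # Nash routes everything on link 1 (both links have delay 1)
opt_cost = min(total_delay(i / 10000) for i in range(10001))  # optimum splits the traffic
poa = nash_cost / opt_cost
print(round(poa, 4))  # -> 1.3333
```

The optimum puts half the traffic on each link for cost 3/4, while the Nash flow costs 1, so the ratio is exactly 4/3: the smoothness bound is tight for this instance.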

## References

• Rosenthal, Robert W. (1973). "A class of games possessing pure-strategy Nash equilibria". *International Journal of Game Theory*. 2: 65–67.
• Monderer, Dov; Shapley, Lloyd S. (1996). "Potential Games". *Games and Economic Behavior*. 14: 124–143.