# Erdős–Rényi model

In graph theory, the Erdős–Rényi model is either of two closely related models for generating random graphs, one of which places an edge between each pair of nodes with equal probability, independently of the other edges. They are named for Paul Erdős and Alfréd Rényi, who first introduced one of the two models in 1959; the other model was introduced independently and contemporaneously by Edgar Gilbert. These models can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs. A more recent application of these models is the discovery of network motifs in systems biology.

## Definition

There are two closely related variants of the Erdős–Rényi (ER) random graph model.

• In the G(n, M) model, a graph is chosen uniformly at random from the collection of all graphs which have n nodes and M edges. For example, in the G(3, 2) model, each of the three possible graphs on three vertices and two edges is included with probability 1/3.
• In the G(n, p) model, a graph is constructed by connecting nodes randomly. Each edge is included in the graph with probability p, independently of every other edge. Equivalently, each graph with n nodes and M edges arises with probability

$p^{M}(1-p)^{{n \choose 2}-M}.$

The parameter p in this model can be thought of as a weighting function; as p increases from 0 to 1, the model becomes more and more likely to include graphs with more edges and less and less likely to include graphs with fewer edges. In particular, the case p = 0.5 corresponds to the case where all $2^{\binom {n}{2}}$ graphs on n vertices are chosen with equal probability.
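Both variants are straightforward to sample directly. The sketch below uses only the Python standard library; the helper names `sample_gnm` and `sample_gnp` are ours, and a graph is represented simply as a set of vertex pairs.

```python
import itertools
import random

def sample_gnm(n, M, rng=random):
    """Sample from G(n, M): choose M edges uniformly among all C(n, 2) pairs."""
    all_pairs = list(itertools.combinations(range(n), 2))
    return set(rng.sample(all_pairs, M))

def sample_gnp(n, p, rng=random):
    """Sample from G(n, p): include each pair independently with probability p."""
    return {e for e in itertools.combinations(range(n), 2) if rng.random() < p}

# G(3, 2): each of the three 2-edge graphs on 3 labeled vertices is equally likely.
g = sample_gnm(3, 2)
assert len(g) == 2
```

Both helpers enumerate all ${\tbinom {n}{2}}$ pairs, so they are meant as an illustration of the definitions rather than as generators for very large n.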

The behavior of random graphs is often studied in the case where n, the number of vertices, tends to infinity. Although p and M can be fixed in this case, they can also be functions depending on n. For example, the statement

Almost every graph in G(n, 2ln(n)/n) is connected.

means

As n tends to infinity, the probability that a graph on n vertices with edge probability 2ln(n)/n is connected, tends to 1.
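This kind of statement can be checked empirically. The sketch below (standard-library Python; the helpers `is_connected` and `gnp_edges` are ours) samples G(n, p) at p = 2 ln(n)/n for a moderate n and counts how often the result is connected; the observed fraction should already be close to 1.

```python
import math
import random

def is_connected(n, edges):
    """Depth-first search from vertex 0 over an adjacency list."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def gnp_edges(n, p, rng):
    """Edge list of one G(n, p) sample."""
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]

rng = random.Random(0)
n = 200
p = 2 * math.log(n) / n   # twice the connectivity threshold ln(n)/n
trials = 100
hits = sum(is_connected(n, gnp_edges(n, p, rng)) for _ in range(trials))
frac = hits / trials      # already close to 1 at this modest n
```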

## Comparison between the two models

The expected number of edges in G(n, p) is ${\tbinom {n}{2}}p$ , and by the law of large numbers any graph in G(n, p) will almost surely have approximately this many edges (provided the expected number of edges tends to infinity). Therefore a rough heuristic is that if $pn^{2}\to \infty $, then G(n, p) should behave similarly to G(n, M) with $M={\tbinom {n}{2}}p$ as n increases.

For many graph properties, this is the case. If P is any graph property which is monotone with respect to the subgraph ordering (meaning that if A is a subgraph of B and A satisfies P, then B will satisfy P as well), then the statements "P holds for almost all graphs in G(n, p)" and "P holds for almost all graphs in $G(n,{\tbinom {n}{2}}p)$ " are equivalent (provided $pn^{2}\to \infty $). For example, this holds if P is the property of being connected, or if P is the property of containing a Hamiltonian cycle. However, this will not necessarily hold for non-monotone properties (e.g. the property of having an even number of edges).
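As an illustration of this equivalence for the monotone property of connectivity, the sketch below (standard-library Python; the `connected` helper is ours) estimates the probability of connectivity in both G(n, p) and G(n, M), with M matched to the expected edge count; the two frequencies come out close.

```python
import itertools
import math
import random

def connected(n, edges):
    """Union-find connectivity check over an edge list."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comps = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return comps == 1

rng = random.Random(2)
n = 100
p = 2 * math.log(n) / n              # above the ln(n)/n connectivity threshold
pairs = list(itertools.combinations(range(n), 2))
M = round(len(pairs) * p)            # match the expected number of edges

trials = 200
gnp_hits = sum(connected(n, [e for e in pairs if rng.random() < p]) for _ in range(trials))
gnm_hits = sum(connected(n, rng.sample(pairs, M)) for _ in range(trials))
# The two connectivity frequencies should be close, per the heuristic above.
```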

In practice, the G(n, p) model is the one more commonly used today, in part due to the ease of analysis allowed by the independence of the edges.

## Properties of G(n, p)

With the notation above, a graph in G(n, p) has on average ${\tbinom {n}{2}}p$ edges. The distribution of the degree of any particular vertex is binomial:

$P(\operatorname {deg} (v)=k)={n-1 \choose k}p^{k}(1-p)^{n-1-k},$

where n is the total number of vertices in the graph. Since

$P(\operatorname {deg} (v)=k)\to {\frac {(np)^{k}\mathrm {e} ^{-np}}{k!}}\quad {\mbox{ as }}n\to \infty {\mbox{ and }}np=\mathrm {const} ,$

this distribution approaches a Poisson distribution for large n with np held constant.
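The binomial-to-Poisson convergence is visible in a single sample. The sketch below (standard-library Python, with parameters of our choosing) draws one G(n, p) graph with np = 10 and compares the empirical degree frequencies against the Poisson mass $e^{-\lambda }\lambda ^{k}/k!$ with λ = np.

```python
import math
import random
from collections import Counter

rng = random.Random(3)
n, p = 2000, 0.005        # np = 10 held constant as n grows
degrees = [0] * n
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < p:
            degrees[u] += 1
            degrees[v] += 1

counts = Counter(degrees)
lam = n * p               # ≈ (n - 1) p for large n
# Empirical degree frequency of k vs. the Poisson mass e^{-lam} lam^k / k!
for k in range(5, 16):
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    empirical = counts[k] / n
```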

In a 1960 paper, Erdős and Rényi described the behavior of G(n, p) very precisely for various values of p. Their results included that:

• If np < 1, then a graph in G(n, p) will almost surely have no connected components of size larger than O(log(n)).
• If np = 1, then a graph in G(n, p) will almost surely have a largest component whose size is of order $n^{2/3}$.
• If np → c > 1, where c is a constant, then a graph in G(n, p) will almost surely have a unique giant component containing a positive fraction of the vertices. No other component will contain more than O(log(n)) vertices.
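This phase transition is easy to observe numerically. The sketch below (standard-library Python; the `largest_component` helper is ours, built on a union-find structure) samples one graph below and one above np = 1 and reports the largest component size in each.

```python
import random

def largest_component(n, p, rng):
    """Largest connected component size of one G(n, p) sample (union-find)."""
    parent = list(range(n))
    size = [1] * n
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    size[rv] += size[ru]
    return max(size)

rng = random.Random(4)
n = 2000
sub = largest_component(n, 0.5 / n, rng)   # np = 0.5: all components small
sup = largest_component(n, 1.5 / n, rng)   # np = 1.5: a unique giant component
```

For n = 2000 the subcritical sample typically has a largest component of a few dozen vertices, while the supercritical one contains a giant component spanning roughly half the vertices.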

Further properties of the graph can be described almost precisely as n tends to infinity. For example, there is a k(n) (approximately equal to $2\log _{2}n$) such that the largest clique in G(n, 0.5) almost surely has size either k(n) or k(n) + 1.

Thus, even though finding the size of the largest clique in a graph is NP-complete, the size of the largest clique in a "typical" graph (according to this model) is very well understood.
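For small n one can still compute the clique number exactly, for example with the Bron–Kerbosch algorithm; the implementation below is our own sketch in standard-library Python. Note that at n = 40 lower-order terms still dominate, so the clique number of a typical sample falls well short of 2 log₂(n) ≈ 10.6; the asymptotic formula sets in slowly.

```python
import itertools
import random

def max_clique_size(n, adj):
    """Clique number via Bron–Kerbosch with pivoting."""
    best = 0
    def expand(r, p, x):
        nonlocal best
        if not p and not x:
            best = max(best, len(r))   # r is a maximal clique
            return
        pivot = max(p | x, key=lambda u: len(adj[u] & p))
        for v in list(p - adj[pivot]):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    expand(set(), set(range(n)), set())
    return best

# One sample of G(40, 0.5) as an adjacency-set dictionary.
rng = random.Random(5)
n = 40
adj = {u: set() for u in range(n)}
for u, v in itertools.combinations(range(n), 2):
    if rng.random() < 0.5:
        adj[u].add(v)
        adj[v].add(u)

omega = max_clique_size(n, adj)
```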

## Relation to percolation

In percolation theory one examines a finite or infinite graph and removes edges (or links) randomly. Thus the Erdős–Rényi process is in fact unweighted link percolation on the complete graph. (One refers to percolation in which nodes and/or links are removed with heterogeneous weights as weighted percolation.) As percolation theory has much of its roots in physics, much of the research was done on lattices in Euclidean spaces. The transition at np = 1 from giant component to small components has analogs for these graphs, but for lattices the transition point is difficult to determine. Physicists often refer to the study of the complete graph as a mean-field theory. Thus the Erdős–Rényi process is the mean-field case of percolation.

Some significant work was also done on percolation on random graphs. From a physicist's point of view this would still be a mean-field model, so the justification of the research is often formulated in terms of the robustness of the graph, viewed as a communication network. Given a random graph of n ≫ 1 nodes with an average degree ⟨k⟩, remove a randomly chosen fraction 1 − p′ of the nodes, leaving only a fraction p′ of the network. There exists a critical percolation threshold $p'_{c}={\tfrac {1}{\langle k\rangle }}$ below which the network becomes fragmented, while above $p'_{c}$ a giant connected component of order n exists. The relative size of the giant component, $P_{\infty }$, is given by

$P_{\infty }=p'[1-\exp(-\langle k\rangle P_{\infty })].$

## Caveats

Both of the two major assumptions of the G(n, p) model (that edges are independent and that each edge is equally likely) may be inappropriate for modeling certain real-life phenomena. In particular, an Erdős–Rényi graph does not have the heavy-tailed degree distribution seen in many real networks. Moreover, it has low clustering, unlike many social networks. For popular modeling alternatives, see the Barabási–Albert model and the Watts–Strogatz model. These alternative models are not percolation processes, but instead represent a growth model and a rewiring model, respectively. A model for interacting ER networks was developed recently by Buldyrev et al.

## History

The G(n, p) model was first introduced by Edgar Gilbert in a 1959 paper which studied the connectivity threshold mentioned above. The G(n, M) model was introduced by Erdős and Rényi in their 1959 paper. As with Gilbert, their first investigations concerned the connectivity of G(n, M), with the more detailed analysis following in 1960.