# Association scheme

The theory of association schemes arose in statistics, in the theory of experimental design for the analysis of variance.[1][2][3] In mathematics, association schemes belong to both algebra and combinatorics. Indeed, in algebraic combinatorics, association schemes provide a unified approach to many topics, for example combinatorial designs and coding theory.[4][5] In algebra, association schemes generalize groups, and the theory of association schemes generalizes the character theory of linear representations of groups.[6][7][8]

## Definition

An n-class association scheme consists of a set X together with a partition S of X × X into n + 1 binary relations, R0, R1, ..., Rn, which satisfy:

• ${\displaystyle R_{0}=\{(x,x):x\in X\}}$; it is called the identity relation.
• Defining ${\displaystyle R^{*}:=\{(x,y):(y,x)\in R\}}$, if R is in S, then R* is in S.
• If ${\displaystyle (x,y)\in R_{k}}$, the number of ${\displaystyle z\in X}$ such that ${\displaystyle (x,z)\in R_{i}}$ and ${\displaystyle (z,y)\in R_{j}}$ is a constant ${\displaystyle p_{ij}^{k}}$ depending on ${\displaystyle i}$, ${\displaystyle j}$, ${\displaystyle k}$ but not on the particular choice of ${\displaystyle x}$ and ${\displaystyle y}$.

An association scheme is commutative if ${\displaystyle p_{ij}^{k}=p_{ji}^{k}}$ for all ${\displaystyle i}$, ${\displaystyle j}$ and ${\displaystyle k}$. Most authors assume this property.

A symmetric association scheme is one in which each relation ${\displaystyle R_{i}}$ is a symmetric relation. That is:

• if (x,y) ∈ Ri, then (y,x) ∈ Ri. (Equivalently, Ri* = Ri for every i.)

Every symmetric association scheme is commutative.

Note, however, that while the notion of an association scheme generalizes the notion of a group, the notion of a commutative association scheme only generalizes the notion of a commutative group.

Two points x and y are called i th associates if ${\displaystyle (x,y)\in R_{i}}$. The definition states that if x and y are i th associates so are y and x. Every pair of points are i th associates for exactly one ${\displaystyle i}$. Each point is its own zeroth associate while distinct points are never zeroth associates. If x and y are k th associates then the number of points ${\displaystyle z}$ which are both i th associates of ${\displaystyle x}$ and j th associates of ${\displaystyle y}$ is a constant ${\displaystyle p_{ij}^{k}}$.

### Graph interpretation and adjacency matrices

An association scheme can be visualized as a complete graph with labeled edges. The graph has ${\displaystyle v}$ vertices, one for each point of ${\displaystyle X}$, and the edge joining vertices ${\displaystyle x}$ and ${\displaystyle y}$ is labeled ${\displaystyle i}$ if ${\displaystyle x}$ and ${\displaystyle y}$ are ${\displaystyle i}$ th associates. Each edge has a unique label, and the number of triangles with a fixed base labeled ${\displaystyle k}$ having the other edges labeled ${\displaystyle i}$ and ${\displaystyle j}$ is a constant ${\displaystyle p_{ij}^{k}}$, depending on ${\displaystyle i,j,k}$ but not on the choice of the base. In particular, each vertex is incident with exactly ${\displaystyle p_{ii}^{0}=v_{i}}$ edges labeled ${\displaystyle i}$; ${\displaystyle v_{i}}$ is the valency of the relation ${\displaystyle R_{i}}$. There are also loops labeled ${\displaystyle 0}$ at each vertex ${\displaystyle x}$, corresponding to ${\displaystyle R_{0}}$.

The relations are described by their adjacency matrices. ${\displaystyle A_{i}}$ is the adjacency matrix of ${\displaystyle R_{i}}$ for ${\displaystyle i=0,\ldots ,n}$ and is a v × v matrix with rows and columns labeled by the points of ${\displaystyle X}$.

${\displaystyle \left(A_{i}\right)_{x,y}=\left\{{\begin{matrix}1,&{\mbox{if }}\left(x,y\right)\in R_{i},\\0,&{\mbox{otherwise.}}\end{matrix}}\right.\qquad (1)}$

The definition of a symmetric association scheme is equivalent to saying that the ${\displaystyle A_{i}}$ are v × v (0,1)-matrices which satisfy

I. ${\displaystyle A_{i}\,}$ is symmetric,
II. ${\displaystyle \sum _{i=0}^{n}A_{i}=J}$ (the all-ones matrix),
III. ${\displaystyle A_{0}=I\,}$,
IV. ${\displaystyle A_{i}A_{j}=\sum _{k=0}^{n}p_{ij}^{k}A_{k}=A_{j}A_{i},i,j=0,\ldots ,n}$.

The (x, y)-th entry of the left side of (IV) is the number of paths of length two between x and y with labels i and j in the graph. Note that each row and each column of ${\displaystyle A_{i}}$ contains ${\displaystyle v_{i}}$ ones.
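Conditions I–IV can be checked mechanically. The following sketch (the helper names are illustrative, not from the source) builds the adjacency matrices of the Hamming scheme H(3,2) described in the Examples section below, verifies each condition, and reads the intersection numbers p_ij^k off the products A_i A_j.

```python
import itertools
import numpy as np

def hamming_scheme_matrices(n, q):
    """Adjacency matrices A_0..A_n of the Hamming scheme H(n, q):
    points are n-tuples over {0..q-1}; (x, y) lies in R_i when x and y
    differ in exactly i coordinates."""
    points = list(itertools.product(range(q), repeat=n))
    v = len(points)
    A = [np.zeros((v, v), dtype=int) for _ in range(n + 1)]
    for a, x in enumerate(points):
        for b, y in enumerate(points):
            d = sum(xi != yi for xi, yi in zip(x, y))
            A[d][a, b] = 1
    return A

def check_scheme(A):
    """Verify conditions I-IV and return the intersection numbers p[i, j, k]."""
    n = len(A) - 1
    v = A[0].shape[0]
    assert all((Ai == Ai.T).all() for Ai in A)            # I: each A_i symmetric
    assert (sum(A) == np.ones((v, v), dtype=int)).all()   # II: partition of X x X
    assert (A[0] == np.eye(v, dtype=int)).all()           # III: A_0 = I
    p = np.zeros((n + 1, n + 1, n + 1), dtype=int)
    for i in range(n + 1):
        for j in range(n + 1):
            assert (A[i] @ A[j] == A[j] @ A[i]).all()     # IV: products commute
            P = A[i] @ A[j]
            for k in range(n + 1):
                vals = P[A[k] == 1]                       # entries over R_k
                assert (vals == vals[0]).all()            # constant = p_ij^k
                p[i, j, k] = vals[0]
    return p

p = check_scheme(hamming_scheme_matrices(3, 2))
```

For H(3,2), for instance, p[1, 1, 0] recovers the valency v_1 = 3 (each binary triple has three words at Hamming distance 1).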

## History

The term association scheme is due to Bose & Shimamoto (1952), but the concept is already inherent in Bose & Nair (1939).[9] These authors were studying what statisticians have called partially balanced incomplete block designs (PBIBDs). The subject became an object of algebraic interest with the publication of Bose & Mesner (1959) and the introduction of the Bose–Mesner algebra. The most important contribution to the theory was the thesis of P. Delsarte (1973), who recognized and fully used the connections with coding theory and design theory.[10] Generalizations have been studied by D. G. Higman (coherent configurations) and B. Weisfeiler (distance-regular graphs).

## The Bose–Mesner algebra

The adjacency matrices ${\displaystyle A_{i}}$ of the graphs ${\displaystyle \left(X,R_{i}\right)}$ generate a commutative and associative algebra ${\displaystyle {\mathcal {A}}}$ (over the real or complex numbers) both for the matrix product and the pointwise product. This associative, commutative algebra is called the Bose–Mesner algebra of the association scheme.

Since the matrices in ${\displaystyle {\mathcal {A}}}$ are symmetric and commute with each other, they can be diagonalized simultaneously. Therefore ${\displaystyle {\mathcal {A}}}$ is semi-simple and has a unique basis of primitive idempotents ${\displaystyle J_{0},\ldots ,J_{n}}$.
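A minimal numerical illustration of this simultaneous diagonalization (my own example, not from the source): the 5-cycle is distance-regular, hence a 2-class symmetric scheme, and grouping an orthonormal eigenbasis of A_1 by eigenvalue yields the three primitive idempotents.

```python
import numpy as np

# Distance relations of the 5-cycle: R_1 = distance 1, R_2 = distance 2.
v = 5
A1 = np.zeros((v, v))
for x in range(v):
    A1[x, (x + 1) % v] = A1[x, (x - 1) % v] = 1
A2 = np.ones((v, v)) - np.eye(v) - A1

# A_1 and A_2 are symmetric and commute, so an orthonormal eigenbasis of
# A_1 diagonalizes A_2 as well.  Grouping eigenvectors by eigenvalue gives
# the primitive idempotents J_0, ..., J_n (here n + 1 = 3 of them).
eigvals, U = np.linalg.eigh(A1)
idempotents = []
for lam in sorted(set(np.round(eigvals, 8))):
    cols = U[:, np.abs(eigvals - lam) < 1e-6]
    idempotents.append(cols @ cols.T)        # projection onto the eigenspace

# Sanity checks: the projections are idempotent and sum to the identity,
# and A_2 acts as a scalar on each eigenspace of A_1.
for J in idempotents:
    assert np.allclose(J @ J, J)
    assert np.allclose(A2 @ J, (np.trace(A2 @ J) / np.trace(J)) * J)
assert np.allclose(sum(idempotents), np.eye(v))
```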

There is another algebra of ${\displaystyle \left(n+1\right)\times \left(n+1\right)}$ matrices which is isomorphic to ${\displaystyle {\mathcal {A}}}$, and is often easier to work with.

## Examples

• The Hamming scheme, denoted H(n,q), is defined as follows. The points of H(n,q) are the qn ordered n-tuples over a set of size q. Two n-tuples x, y are said to be i th associates if they disagree in exactly i coordinates. E.g., if x = (1,0,1,1), y = (1,1,1,1), z = (0,0,1,1), then x and y are 1st associates, x and z are 1st associates and y and z are 2nd associates in H(4,2).
• A distance-regular graph, G, forms an association scheme by defining two vertices to be i th associates if their distance is i.
• A specific 3-class association scheme:[11]
Let A(3) be the following association scheme with three associate classes on the set X = {1,2,3,4,5,6}. The (i,j) entry is s if elements i and j are in relation Rs.
|       | 1 | 2 | 3 | 4 | 5 | 6 |
|-------|---|---|---|---|---|---|
| **1** | 0 | 1 | 1 | 2 | 3 | 3 |
| **2** | 1 | 0 | 1 | 3 | 2 | 3 |
| **3** | 1 | 1 | 0 | 3 | 3 | 2 |
| **4** | 2 | 3 | 3 | 0 | 1 | 1 |
| **5** | 3 | 2 | 3 | 1 | 0 | 1 |
| **6** | 3 | 3 | 2 | 1 | 1 | 0 |
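The table can be verified directly: transcribe it as a label matrix, split it into adjacency matrices, and check that every product A_i A_j is constant on each relation.

```python
import numpy as np

# Label matrix of A(3): entry (x, y) is s exactly when x and y are
# s-th associates, transcribed from the table above.
L = np.array([
    [0, 1, 1, 2, 3, 3],
    [1, 0, 1, 3, 2, 3],
    [1, 1, 0, 3, 3, 2],
    [2, 3, 3, 0, 1, 1],
    [3, 2, 3, 1, 0, 1],
    [3, 3, 2, 1, 1, 0],
])
A = [(L == s).astype(int) for s in range(4)]     # adjacency matrices A_0..A_3

# Valencies v_s: each point has one zeroth, two first, one second and
# two third associates.
valencies = [int(A[s].sum(axis=1)[0]) for s in range(4)]

# Every product A_i A_j must be constant on each relation R_k; those
# constants are the intersection numbers p_ij^k.
ok = True
for i in range(4):
    for j in range(4):
        P = A[i] @ A[j]
        for k in range(4):
            vals = P[A[k] == 1]
            ok = ok and bool((vals == vals[0]).all())
```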

## Coding theory

The Hamming scheme and the Johnson scheme are of major significance in classical coding theory.

In coding theory, association scheme theory is mainly concerned with the distance of a code. The linear programming method produces upper bounds for the size of a code with given minimum distance, and lower bounds for the size of a design with a given strength. The most specific results are obtained in the case where the underlying association scheme satisfies certain polynomial properties; this leads one into the realm of orthogonal polynomials. In particular, some universal bounds are derived for codes and designs in polynomial-type association schemes.
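As a sketch of the linear programming method for binary codes (my own formulation using SciPy's `linprog`; the function names are illustrative, not from the source), Delsarte's LP maximizes the total distance distribution of a hypothetical code subject to nonnegativity of its Krawtchouk transform, yielding an upper bound on the code size.

```python
from math import comb

import numpy as np
from scipy.optimize import linprog

def krawtchouk(k, i, n):
    """Binary Krawtchouk polynomial value K_k(i) for H(n, 2)."""
    return sum((-1) ** j * comb(i, j) * comb(n - i, k - j) for j in range(k + 1))

def delsarte_lp_bound(n, d):
    """Delsarte's LP upper bound on the size of a binary code of length n
    with minimum distance d: maximize sum(x_i) subject to x_0 = 1,
    x_i = 0 for 0 < i < d, x_i >= 0, and sum_i x_i K_k(i) >= 0 for all k."""
    c = -np.ones(n + 1)                        # maximize sum(x) = minimize -sum(x)
    A_ub = -np.array([[krawtchouk(k, i, n) for i in range(n + 1)]
                      for k in range(n + 1)])  # -sum_i K_k(i) x_i <= 0
    b_ub = np.zeros(n + 1)
    bounds = [(1, 1)] + [(0, 0)] * (d - 1) + [(0, None)] * (n - d + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun

bound = delsarte_lp_bound(5, 3)   # upper bound on A(5, 3)
```

For length 5 and minimum distance 3 the LP bound equals 4, matching the true value A(5,3) = 4.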

In classical coding theory, dealing with codes in a Hamming scheme, the MacWilliams transform involves a family of orthogonal polynomials known as the Krawtchouk polynomials. These polynomials give the eigenvalues of the distance relation matrices of the Hamming scheme.
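This eigenvalue statement can be checked numerically on a small case: for H(4,2), the spectrum of each distance-k matrix A_k should be the multiset of values K_k(i), where K_k(i) occurs with multiplicity C(n,i)(q-1)^i.

```python
import itertools
from math import comb

import numpy as np

def krawtchouk(k, i, n, q=2):
    """Krawtchouk polynomial value K_k(i) for the Hamming scheme H(n, q)."""
    return sum((-1) ** j * (q - 1) ** (k - j) * comb(i, j) * comb(n - i, k - j)
               for j in range(k + 1))

# Distance-k relation matrices of H(4, 2).
n, q = 4, 2
points = list(itertools.product(range(q), repeat=n))
v = len(points)
A = [np.zeros((v, v)) for _ in range(n + 1)]
for a, x in enumerate(points):
    for b, y in enumerate(points):
        A[sum(xi != yi for xi, yi in zip(x, y))][a, b] = 1

# Compare each computed spectrum with the Krawtchouk prediction:
# K_k(i) with multiplicity C(n, i)(q - 1)^i.
spectra_match = all(
    np.allclose(
        sorted(np.linalg.eigvalsh(A[k])),
        sorted(krawtchouk(k, i, n, q)
               for i in range(n + 1)
               for _ in range(comb(n, i) * (q - 1) ** i)),
    )
    for k in range(n + 1)
)
```

For k = 1 this reduces to the familiar fact that the n-cube graph has eigenvalues n - 2i with multiplicity C(n,i).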

## References

• P. Camion (1998), "Codes and Association Schemes: Basic Properties of Association Schemes Relevant to Coding", in Handbook of Coding Theory, V. S. Pless and W. C. Huffman, Eds., Elsevier, The Netherlands.
• F. J. MacWilliams and N. J. A. Sloane (1978), The Theory of Error-Correcting Codes, Elsevier, New York.
• J. H. van Lint and R. M. Wilson (1992), A Course in Combinatorics, Cambridge University Press, Cambridge. ISBN 0-521-00601-5.