§ Character theory

I jot down the rough proof sketches of character theoretic facts for quick reference. Fix a group $G$. A group representation of $G$ is a group homomorphism from the group to the automorphism group of a complex vector space $V$: formally, $f: G \rightarrow Aut(V)$. A direct sum of representations $f: G \rightarrow Aut(V)$, $f': G \rightarrow Aut(W)$ is the obvious extension of the maps, $f \oplus f': G \rightarrow Aut(V \oplus W)$, given by $(f \oplus f')(g) = \lambda (v \oplus w).\ f(g)(v) \oplus f'(g)(w)$. A representation is said to be irreducible if it cannot be written as the direct sum of two non-trivial representations. A character is the trace of a representation: $\chi_f(g) \equiv tr(f(g))$. An irreducible character is the trace of an irreducible representation.

§ All finite group representations are unitary representations

Given a representation $f: G \rightarrow Aut(V)$, we construct an invariant inner product, that is, one where $\langle f(g)(v) | f(g)(w) \rangle = \langle v | w \rangle$. This makes the representation unitary, since it preserves this special inner product. The idea is to begin with some arbitrary inner product $[v, w]$, which we can always induce on $V$ (pick a basis). Then, we build an "averaged" inner product given by $\langle v | w \rangle \equiv \sum_{h \in G} [f(h)(v), f(h)(w)]$. Intuitively, this inner product is invariant because on considering $\langle f(g)(v) | f(g)(w) \rangle$, the definition will contain $[f(h)(f(g)(v)), f(h)(f(g)(w))] = [f(hg)(v), f(hg)(w)]$, which is a re-indexing of the original sum: as $h$ runs over $G$, so does $hg$. Hence, the representation $f$ preserves this inner product, and we can thus study only unitary representations (which are much simpler). From now on, we assume all representations are unitary.
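The averaging trick is easy to watch numerically. A minimal sketch (the group $\mathbb{Z}/2$ and the particular matrix are my own example, not from the text): start with a non-unitary representation, build the averaged Gram matrix, and check invariance.

```python
import numpy as np

# A non-unitary representation of Z/2: M squares to the identity,
# but does not preserve the standard inner product.
M = np.array([[1.0, 1], [0, -1]])
group = [np.eye(2), M]

# Averaged Gram matrix: <v|w> = sum_h [f(h)v, f(h)w] = v^T B w,
# starting from the standard inner product [v, w] = v^T w.
B = sum(g.T @ g for g in group)

# Invariance: f(g)^T B f(g) == B for every group element.
for g in group:
    assert np.allclose(g.T @ B @ g, B)
print(B)  # [[2. 1.]
          #  [1. 3.]]
```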

§ Characters are constant on conjugacy classes

Since $f(ghg^{-1}) = f(g) f(h) f(g)^{-1}$, the matrix of $f(ghg^{-1})$ is a change-of-basis of the matrix of $f(h)$, and the trace is invariant under change of basis. Hence, characters take the same value on an entire conjugacy class. Such functions, constant on each conjugacy class, are called class functions.
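A quick sanity check (generic matrices standing in for $f(h)$ and $f(g)$, my own example):

```python
import numpy as np

# Conjugating a matrix changes its entries but not its trace, so the
# character chi(h) = tr f(h) takes one value per conjugacy class.
rng = np.random.default_rng(0)
h = rng.standard_normal((3, 3))   # stand-in for f(h)
g = rng.standard_normal((3, 3))   # stand-in for f(g); invertible with probability 1
conjugated = g @ h @ np.linalg.inv(g)
assert np.allclose(np.trace(conjugated), np.trace(h))
```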

§ Morphism between representations / intertwining

A map between two representations $f: G \rightarrow Aut(V)$, $f': G \rightarrow Aut(W)$ is a linear map $\eta: V \rightarrow W$ such that for every $g \in G$ the natural diagram commutes:
V --f(g)--→ V
|           |
η           η
↓           ↓
W --f'(g)→ W
Such a map $\eta$ is called an intertwining map or an equivariant map.

§ Schur's lemma

The only equivariant maps between irreducible representations are the zero map and scalar multiples of the identity map. This is stronger than saying that the equivariant map is a diagonal matrix; a scalar multiple of the identity scales all dimensions uniformly. The main idea of the proof is to show that the kernel and image of the intertwining map are invariant subspaces of $V$ and $W$ respectively. Since the representations are irreducible, the kernel and image must each be zero or the full space, so the intertwining map is either the zero map or invertible. In the invertible case, pick an eigenvalue $\lambda$ of $\eta$ (we are over $\mathbb C$, so one exists); then $\eta - \lambda \cdot id$ is also equivariant and has non-trivial kernel, hence is zero, forcing $\eta = \lambda \cdot id$. One way to look at this is that for irreps $f: G \rightarrow Aut(V)$ and $f': G \rightarrow Aut(W)$, the dimension of $Hom_G(V, W)$ is either 0 or 1 (scalings of the identity).
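Schur's lemma can be watched in action. A sketch under my own choice of example (the 2-dimensional irrep of $S_3$, realized as the symmetries of an equilateral triangle): average an arbitrary linear map over the group, which makes it equivariant by the averaging trick of the next section; the lemma then forces the result to be a scalar multiple of the identity, and trace preservation pins the scalar to $tr(L)/\dim V$.

```python
import numpy as np

# 2D irreducible representation of S3 as the dihedral group of order 6.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])    # rotation by 120 degrees
f = np.array([[1.0, 0], [0, -1]])  # a reflection
group = [np.linalg.matrix_power(r, k) @ m
         for k in range(3) for m in (np.eye(2), f)]

L = np.array([[1.0, 2], [3, 4]])   # arbitrary, non-equivariant linear map

# Averaging L over the group yields an equivariant map; by Schur's lemma
# it must be a scalar multiple of I, namely (tr L / 2) * I.
Lbar = sum(np.linalg.inv(g) @ L @ g for g in group) / len(group)
print(Lbar)  # [[2.5 0. ]
             #  [0.  2.5]]
```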

§ Schur orthogonality relations

We consider representations "one matrix index" at a time, and show that the matrix entries of inequivalent irreducible representations are orthogonal. The proof considers representations $\alpha: G \rightarrow GL(V)$, $\beta: G \rightarrow GL(W)$, and an intertwining map $T: V \rightarrow W$. How do we involve all of $\alpha, \beta, T$ at once? Recall that since $T$ is an intertwining map, we must have:
$$T(\alpha(g)(v)) = \beta(g)(T(v))$$
Now, since $\beta(g)$ is invertible (it must be, since it's a member of $GL(W)$), I can rewrite the above as:
$$\beta(g)^{-1}(T(\alpha(g)(v))) = T(v)$$
This used the fact that $T: V \rightarrow W$ is an intertwining map. Can we generalize this to any linear map? Suppose that $L: V \rightarrow W$ is a linear map, not necessarily intertwining. Let's induce an intertwining map from $L$:
$$\begin{aligned} &\overline{L}: V \rightarrow W \\ &\overline{L}(v) \equiv \frac{1}{|G|}\sum_{g \in G} \beta(g)^{-1} L \alpha(g) v \end{aligned}$$
We average the intertwining condition over the group to produce $\overline{L}(v)$ from $L(v)$. Is $\overline{L}$ an intertwining map? Yes, because when we compute $\beta(h)^{-1} \overline L \alpha(h)$, the averaging winds up shifting the index, exactly as it did for the invariant inner product:
$$\begin{aligned} \beta(h)^{-1} \overline L \alpha(h) &= \beta(h)^{-1} \left( \frac{1}{|G|}\sum_{g \in G} \beta(g)^{-1} L \alpha(g) \right) \alpha(h) \\ &= \frac{1}{|G|}\sum_{g \in G} \beta(gh)^{-1} L \alpha(gh) \\ &= \frac{1}{|G|}\sum_{k \in G} \beta(k)^{-1} L \alpha(k) \qquad (k \equiv gh) \\ &= \overline{L} \end{aligned}$$
Thus, for every linear map $L: V \rightarrow W$, the averaged map $\overline{L}$ is intertwining; so if the representation $\alpha$ is not isomorphic to the representation $\beta$, Schur's lemma forces $\overline{L} = 0$, or:
$$\begin{aligned} &\sum_{g \in G} \beta(g)^{-1} L \alpha(g) = 0 \\ &\sum_{g \in G} \sum_{p, q} \beta(g)^{-1}[i][p] \, L[p][q] \, \alpha(g)[q][j] = 0 \quad \text{for all } i, j \\ &\text{($\beta$ is unitary, so $\beta(g)^{-1}[i][p] = \beta(g)^*[p][i]$)} \\ &\sum_{g \in G} \sum_{p, q} \beta(g)^*[p][i] \, L[p][q] \, \alpha(g)[q][j] = 0 \end{aligned}$$
The above equality holds for all indices $i, j$ and for all choices of $L[p][q]$ (since $L$ can be any linear map). In particular, we can choose $L[p][q] = \delta[p][r] \delta[q][s]$ for arbitrary $r, s$. This gives us the equation:
$$\begin{aligned} &\sum_{g \in G} \sum_{p, q} \beta(g)^*[p][i] \, \delta[p][r] \delta[q][s] \, \alpha(g)[q][j] = 0 \\ &\sum_{g \in G} \beta(g)^*[r][i] \, \alpha(g)[s][j] = 0 \end{aligned}$$
This tells us that for any choice of indices $[r, i]$ and $[s, j]$, the functions $g \mapsto \beta(g)[r][i]$ and $g \mapsto \alpha(g)[s][j]$ are orthogonal, when viewed as vectors indexed by the group elements. If the representations are one-dimensional representations/characters, then we have no freedom in indexing, and the above becomes:
$$\sum_{g \in G} \beta(g)^* \alpha(g) = 0$$
Thus, distinct one-dimensional characters are orthogonal.
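A concrete check of the matrix-entry orthogonality (my own example: $\beta$ the trivial one-dimensional rep of $S_3$, $\alpha$ its 2-dimensional irrep as triangle symmetries): each matrix entry of $\alpha$, summed against $\beta(g)^* = 1$ over the group, vanishes.

```python
import numpy as np

# alpha: 2D irrep of S3 as the dihedral group of order 6.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])    # rotation by 120 degrees
f = np.array([[1.0, 0], [0, -1]])  # a reflection
group = [np.linalg.matrix_power(r, k) @ m
         for k in range(3) for m in (np.eye(2), f)]

# beta(g) = 1 for all g (trivial rep), so the orthogonality relation
# sum_g beta(g)* alpha(g)[s][j] = 0 says all four entry-sums vanish.
total = sum(1 * g for g in group)
assert np.allclose(total, np.zeros((2, 2)))
```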

§ Inner product of class functions

We impose an inner product on the space of class functions (complex-valued functions $G \rightarrow \mathbb C$ constant on conjugacy classes), given by $\langle f | f' \rangle \equiv \frac{1}{|G|} \sum_{g \in G} f(g) \overline{f'(g)}$, where $\overline{f'(g)}$ is the complex conjugate. Using the Schur orthogonality relations, the inner product of the characters of two inequivalent irreps vanishes: the character is the sum of the diagonal matrix entries, and each pair of entries is orthogonal. The companion relation for $\alpha = \beta$ (where $\overline{L}$ is a scalar multiple of the identity rather than zero) gives $\langle \chi_\alpha | \chi_\alpha \rangle = 1$. Thus, the irreducible characters form an orthonormal family.
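These orthonormality relations can be verified directly from a character table. A sketch using the (standard) character table of $S_3$; the array layout is my own:

```python
import numpy as np

# Character table of S3: rows are irreducible characters, columns are the
# conjugacy classes {e}, {transpositions}, {3-cycles}, with sizes 1, 3, 2.
sizes = np.array([1, 3, 2])
chars = np.array([
    [1,  1,  1],   # trivial
    [1, -1,  1],   # sign
    [2,  0, -1],   # standard 2D irrep
])

# <chi_i | chi_j> = 1/|G| sum_g chi_i(g) conj(chi_j(g)), summed per class.
gram = (chars * sizes) @ chars.conj().T / sizes.sum()
assert np.allclose(gram, np.eye(3))
```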

§ Regular representation

The "Cayley-style" representation one would naturally dream up. For a group $G$, build a vector space $V$ whose basis $\{v_h\}$ is indexed by the elements of $G$. Have $g \in G$ act on $V$ by sending $v_h$ to $v_{gh}$; i.e., $g$ acts as a permutation of the basis of $V$. This gives us a "large" representation. For example, the permutation group on $n$ letters has a regular representation with $n!$ basis vectors. This representation contains every irrep. The idea is to show that the inner product of the character of the regular representation with every irreducible character is nonzero. Furthermore, since the regular representation is finite-dimensional, this tells us that there are only finitely many irreps: each irrep occurs as a subrepresentation of the regular representation, and a finite-dimensional representation decomposes into finitely many irreducible summands. This makes the idea of classifying irreps a reasonable task.

§ Character of the regular representation

Theorem: The character $r_G$ of the regular representation is given by $r_G(1) = |G|$, and $r_G(s) = 0$ for $s \neq 1$.
  • The matrix for the identity element is the identity matrix, and the size of the matrix is the dimension of the vector space, which is $|G|$ since there's a basis vector for each element of $G$. Thus, $r_G(1) = |G|$.
  • For any other element $g \in G$, the regular representation is a permutation matrix with no fixed points (since $gh = h$ forces $g = 1$). Thus, the diagonal of the matrix is all zeros, and hence $r_G(g) = 0$.
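A direct check with permutation matrices (my own example, $G = \mathbb{Z}/4$):

```python
import numpy as np

# Regular representation of Z/4: g acts on the basis {v_h} by v_h -> v_{g+h},
# i.e. as a cyclic permutation matrix.
n = 4
def reg(g):
    P = np.zeros((n, n))
    for h in range(n):
        P[(g + h) % n, h] = 1   # column h has its 1 in row g+h
    return P

traces = [np.trace(reg(g)) for g in range(n)]
print(traces)   # [4.0, 0.0, 0.0, 0.0]: |G| at the identity, 0 elsewhere
```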

§ Regular representation contains all other irreps

The inner product of the character of the regular representation with the character $\chi_\alpha$ of any irrep $\alpha$ is going to be:
$$\begin{aligned} \langle r_G | \chi_\alpha \rangle &= \frac{1}{|G|} \sum_{g \in G} r_G(g) \overline{\chi_\alpha(g)} \\ &= \frac{1}{|G|} \left( r_G(1) \cdot \overline{\chi_\alpha(1)} \right) \\ &= \frac{1}{|G|} \left( |G| \cdot \dim \alpha \right) \\ &= \dim \alpha \geq 1 \end{aligned}$$
Thus, the regular rep contains every irrep (in fact with multiplicity $\dim \alpha$), since the character of the regular rep has non-zero inner product with each irreducible character, and the irreducible characters are orthonormal.
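Continuing the $S_3$ character table example (my own layout), the inner products of $r_G$ with the irreducible characters recover each irrep's dimension as its multiplicity:

```python
import numpy as np

# r_G = (|G|, 0, 0) on the classes {e}, {transpositions}, {3-cycles} of S3.
sizes = np.array([1, 3, 2])
r_G = np.array([6, 0, 0])
chars = np.array([[1, 1, 1], [1, -1, 1], [2, 0, -1]])  # trivial, sign, 2D

# <r_G | chi> = 1/|G| sum_g r_G(g) conj(chi(g)), summed per class.
mults = [(sizes * r_G * chi.conj()).sum() / sizes.sum() for chi in chars]
print(mults)   # [1.0, 1.0, 2.0]: each irrep appears dim(irrep) times
```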

§ Abelian groups are controlled by characters

Since an abelian group maps to automorphisms that all commute with each other, and each of these is unitary (hence diagonalizable), we can simultaneously diagonalize these matrices. Thus, we only need to consider the data along the diagonal, where each entry evolves independently. This reduces the representation to a direct sum of scalars / 1D representations / characters. In particular, every irrep of an abelian group is one-dimensional.
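A small illustration (my own example: $\mathbb{Z}/4$ acting by 90-degree rotations): diagonalizing the generator splits the 2D representation into the two 1D characters $k \mapsto i^k$ and $k \mapsto (-i)^k$.

```python
import numpy as np

# Generator of a 2D representation of Z/4: rotation through 90 degrees.
R = np.array([[0.0, -1], [1, 0]])
evals, evecs = np.linalg.eig(R)

# Its eigenvalues are the primitive 4th roots of unity, i and -i.
assert np.allclose(sorted(evals.imag), [-1, 1])
assert np.allclose(evals.real, 0)

# In the eigenbasis, every element R^k is diagonal: the representation
# is the direct sum of the characters k -> i^k and k -> (-i)^k.
P = np.linalg.inv(evecs)
for k in range(4):
    D = P @ np.linalg.matrix_power(R, k) @ evecs
    assert np.allclose(D, np.diag(np.diag(D)))
```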