## § Character theory

I jot down rough proof sketches of character-theoretic facts for quick reference. Fix a finite group $G$. A representation of $G$ is a group homomorphism from $G$ to the automorphism group of a complex vector space $V$: formally, $f: G \rightarrow Aut(V)$. The direct sum of representations $f: G \rightarrow Aut(V)$ and $f': G \rightarrow Aut(W)$ is the obvious extension $f \oplus f': G \rightarrow Aut(V \oplus W)$, given by $(f \oplus f')(g)(v \oplus w) = f(g)(v) \oplus f'(g)(w)$. A representation is said to be irreducible if it cannot be written as the direct sum of two non-trivial representations. A character is the trace of a representation; an irreducible character is the trace of an irreducible representation.
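A minimal numerical sketch of the direct-sum construction, using the two non-trivial 1-dimensional representations of the cyclic group $C_3$ (the group, the choice of representations, and the function names are illustrative assumptions, not from the text):

```python
import numpy as np

# Two 1-dimensional representations of C_3: k -> w^k and k -> w^{2k},
# where w = exp(2*pi*i/3) is a primitive cube root of unity.
w = np.exp(2j * np.pi / 3)

def f(k):        # first 1-d representation of C_3
    return np.array([[w ** k]])

def f_prime(k):  # second 1-d representation of C_3
    return np.array([[w ** (2 * k)]])

def direct_sum(A, B):
    """Block-diagonal matrix [[A, 0], [0, B]], acting on V (+) W."""
    Z1 = np.zeros((A.shape[0], B.shape[1]))
    Z2 = np.zeros((B.shape[0], A.shape[1]))
    return np.block([[A, Z1], [Z2, B]])

def g(k):        # the direct sum representation f (+) f'
    return direct_sum(f(k), f_prime(k))

# The direct sum is again a homomorphism: g(a + b) = g(a) @ g(b) in C_3.
for a in range(3):
    for b in range(3):
        assert np.allclose(g((a + b) % 3), g(a) @ g(b))
```

The character of the direct sum is the sum of the characters: here $\mathrm{tr}(g(1)) = \omega + \omega^2 = -1$.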

#### § All finite group representations are unitary representations

Given a representation $f: G \rightarrow Aut(V)$, we construct an invariant inner product, that is, one where $\langle f(g)(v) | f(g)(w) \rangle = \langle v | w \rangle$. This makes the representation unitary, since it preserves this special inner product. The idea is to begin with some arbitrary inner product $[v | w]$, which we can always induce on $V$ (pick a basis). Then we build an "averaged" inner product given by $\langle v | w \rangle \equiv \sum_{h \in G} [ f(h)(v) | f(h)(w) ]$. Intuitively, this inner product is invariant because on expanding $\langle f(g)(v) | f(g)(w) \rangle$, the definition contains the terms $[f(h)(f(g)(v)) | f(h)(f(g)(w))] = [f(hg)(v) | f(hg)(w)]$, and as $h$ ranges over $G$, so does $hg$: the sum is merely re-indexed. Hence the representation $f$ preserves this inner product, and we can thus study only unitary representations (which are much simpler). From now on, we assume all representations are unitary.
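A numerical sketch of the averaging trick, starting from a deliberately non-unitary representation of $C_2 = \{1, s\}$ (the matrix chosen for $s$ is an illustrative assumption):

```python
import numpy as np

# A non-unitary representation of C_2: M represents s, with M @ M == I,
# but M is not orthogonal, so the standard dot product is NOT invariant.
M = np.array([[1.0,  1.0],
              [0.0, -1.0]])
reps = [np.eye(2), M]
assert np.allclose(M @ M, np.eye(2))      # it is a representation of C_2

# Averaged Gram matrix: <v|w> = (1/|G|) sum_h (f(h)v).(f(h)w) = v^T G w
G = sum(R.T @ R for R in reps) / len(reps)

# Invariance: <f(s)v | f(s)w> = <v|w>, i.e. M^T G M == G ...
assert np.allclose(M.T @ G @ M, G)
# ... whereas the original dot product was not invariant:
assert not np.allclose(M.T @ M, np.eye(2))
```

With respect to the averaged inner product, $M$ is unitary even though it is not orthogonal in the standard basis.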

#### § Characters have the same value on an entire conjugacy class

Since $f(ghg^{-1}) = f(g) f(h) f(g)^{-1}$ is a change of basis of $f(h)$, and the trace is invariant under change of basis, the character satisfies $\chi(ghg^{-1}) = \mathrm{tr}(f(g) f(h) f(g)^{-1}) = \mathrm{tr}(f(h)) = \chi(h)$. Hence characters are constant on an entire conjugacy class. Functions that are constant on conjugacy classes are called class functions.
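The underlying linear-algebra fact is just conjugation-invariance of the trace; a quick numerical check (random matrices here are an illustrative assumption):

```python
import numpy as np

# tr(A B A^{-1}) = tr(B): the trace is invariant under change of basis.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = rng.standard_normal((4, 4))   # generic, hence invertible

lhs = np.trace(A @ B @ np.linalg.inv(A))
rhs = np.trace(B)
assert abs(lhs - rhs) < 1e-8
```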

#### § Morphism between representations / intertwining

A map between two representations $f: G \rightarrow Aut(V)$, $f': G \rightarrow Aut(W)$ is given by a linear map $\eta: V \rightarrow W$ such that for every $g \in G$ the natural diagram commutes:
V --f(g)--→ V
|           |
η           η
↓           ↓
W --f'(g)-→ W

Such a map $\eta$ is called an intertwining map or an equivariant map.

#### § Schur's lemma

The only equivariant maps between irreducible representations are the zero map and (when the representations are isomorphic) scalar multiples of the identity map. This is stronger than saying that the equivariant map is a diagonal matrix; a scalar multiple of the identity scales all dimensions uniformly. The main idea of the proof is to show that the kernel and image of the intertwining map are invariant subspaces of $V$ and $W$ respectively. Since the representations are irreducible, the kernel and image must each be either zero or the whole space, so the intertwiner is either the zero map or an isomorphism. In the latter case, over $\mathbb{C}$ the intertwiner $\eta$ has an eigenvalue $\lambda$; then $\eta - \lambda \cdot id$ is an intertwiner with non-trivial kernel, hence zero, so $\eta = \lambda \cdot id$. One way to look at this is that for irreps $f: G \rightarrow Aut(V)$ and $f': G \rightarrow Aut(W)$, the dimension of the space of intertwiners $Hom_G(V, W)$ is either 0 or 1 (scalings of the identity).

#### § Schur orthogonality relations

We consider representations "one matrix index" at a time, and show that the matrix entries of distinct irreducible representations are orthogonal. The proof is to consider representations $\alpha: G \rightarrow GL(V)$, $\beta: G \rightarrow GL(W)$, and an intertwining map $T: V \rightarrow W$. How do we involve all of $\alpha, \beta, T$ at once? Recall that since $T$ is an intertwiner, we must have:
$T(\alpha(g)(v)) = \beta(g)(T(v))$
Now, since $\beta$ is invertible (it must be since it's a member of $GL(W)$), I can rewrite the above as:
$\beta(g)^{-1}(T(\alpha(g)(v))) = T(v)$
This requires that $T: V \rightarrow W$ is an intertwining map. Can we generalize this to any linear map? Suppose that $L: V \rightarrow W$ is a linear map, not necessarily intertwining. Let's induce an intertwining map from $L$ by averaging:
\begin{aligned} &\overline{L}: V \rightarrow W \\ &\overline{L}(v) \equiv 1/|G| \sum_{g \in G} \beta(g)^{-1} L \alpha(g) v \end{aligned}
We average the intertwining condition over the group to produce $\overline{L}$. Is this an intertwiner? Yes, because when we compute $\beta(h)^{-1} \overline L \alpha(h)$, the averaging trick winds up shifting the index, exactly as it did for the inner product:
\begin{aligned} \beta(h)^{-1} \overline L \alpha(h) &= \beta(h)^{-1} \left( 1/|G| \sum_{g \in G} \beta(g)^{-1} L \alpha(g) \right) \alpha(h) \\ &= 1/|G| \sum_{g \in G} \beta(gh)^{-1} L \alpha(gh) \\ &= 1/|G| \sum_{k \in G} \beta(k)^{-1} L \alpha(k) \qquad (k \equiv gh \text{ re-indexes the sum}) \\ &= \overline{L} \end{aligned}
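A numerical sketch of this averaging, using the standard 2-dimensional irrep of $S_3$ (generated by a rotation by $120°$ and a reflection; these matrices and the random map are illustrative assumptions). By Schur's lemma the average of a random map from the irrep to itself must be a scalar multiple of the identity, and the average of a map to a non-isomorphic irrep (the trivial one) must be zero:

```python
import numpy as np

# The six elements of S_3 in its standard 2-dimensional irrep.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])           # rotation by 120 degrees
f = np.array([[1.0, 0.0], [0.0, -1.0]])   # a reflection
S3 = [np.eye(2), r, r @ r, f, r @ f, r @ r @ f]

rng = np.random.default_rng(1)
L = rng.standard_normal((2, 2))           # arbitrary linear map V -> V
Lbar = sum(np.linalg.inv(g) @ L @ g for g in S3) / len(S3)
# Same irrep on both sides: Lbar is (tr L / dim) * I by Schur's lemma.
assert np.allclose(Lbar, (np.trace(L) / 2) * np.eye(2))

# Standard irrep (alpha) vs trivial irrep (beta, so beta(g)^{-1} = 1):
L2 = rng.standard_normal((1, 2))          # arbitrary map V -> W
L2bar = sum(L2 @ g for g in S3) / len(S3)
assert np.allclose(L2bar, 0)              # non-isomorphic irreps: average is 0
```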
Thus, for every linear map $L: V \rightarrow W$, if the representation $\alpha$ is not isomorphic to the representation $\beta$, then Schur's lemma forces $\overline{L} = 0$, or:
\begin{aligned} &\sum_{g \in G} \beta(g)^{-1} L \alpha(g) = 0 \\ &\left( \sum_{g \in G} \beta(g)^{-1} L \alpha(g) \right)[i][j] = 0 \\ &\sum_{g \in G} \sum_{p, q} \beta(g)^{-1}[i][p] \, L[p][q] \, \alpha(g)[q][j] = 0 \\ &\text{($\beta$ is unitary, so $\beta(g)^{-1}[i][p] = \beta(g)^*[p][i]$):} \\ &\sum_{g \in G} \sum_{p, q} \beta(g)^*[p][i] \, L[p][q] \, \alpha(g)[q][j] = 0 \end{aligned}
The above equality holds for all indices $i, j$ and for all choices of $L[p][q]$ (since $L$ can be any linear map). In particular, we can choose $L[p][q] = \delta[p][r] \delta[q][s]$ for arbitrary $r, s$. This gives us the equation:
\begin{aligned} &\sum_{g \in G} \sum_{p, q} \beta(g)^*[p][i] \, \delta[p][r] \, \delta[q][s] \, \alpha(g)[q][j] = 0 \\ &\sum_{g \in G} \beta(g)^*[r][i] \, \alpha(g)[s][j] = 0 \end{aligned}
This tells us that for any choice of indices $[r, i]$ and $[s, j]$, the matrix-entry functions $g \mapsto \beta(g)[r][i]$ and $g \mapsto \alpha(g)[s][j]$ are orthogonal, when viewed as vectors indexed by the group elements. If the representations are one-dimensional (i.e., characters), then we have no freedom in indexing, and the above becomes:
\begin{aligned} \sum_{g \in G} \beta(g)^* \alpha(g) = 0 \end{aligned}
Thus, distinct irreducible characters are orthogonal.

#### § Inner product of class functions

We impose an inner product on the space of class functions (complex-valued functions $G \rightarrow \mathbb C$ constant on conjugacy classes), given by $\langle f | f' \rangle \equiv 1/|G| \sum_{g \in G} f(g) \overline{f'(g)}$, where $\overline{f'(g)}$ is the complex conjugate. Using the Schur orthogonality relations, we immediately deduce that the inner product of two distinct irreducible characters vanishes: for one-dimensional representations this is Schur orthogonality applied to their (only) matrix entry at location $(1, 1)$. Thus irreducible characters are orthogonal, and an irreducible character paired with itself has inner product 1 (by the second Schur orthogonality relation): the irreducible characters are orthonormal.
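A numerical check of orthonormality for the cyclic group $C_n$, whose $n$ irreducible characters are $\chi_a(k) = \omega^{ak}$ with $\omega = e^{2\pi i/n}$ (a standard fact; the value $n = 5$ is an arbitrary choice):

```python
import numpy as np

# The n irreducible characters of C_n: chi_a(k) = w^{a k}.
n = 5
w = np.exp(2j * np.pi / n)
chi = [np.array([w ** (a * k) for k in range(n)]) for a in range(n)]

def inner(x, y):
    """<x|y> = (1/|G|) sum_g x(g) * conj(y(g)), as defined above."""
    return (x * np.conj(y)).sum() / n

# Orthonormality: <chi_a | chi_b> = 1 if a == b, else 0.
for a in range(n):
    for b in range(n):
        expected = 1.0 if a == b else 0.0
        assert abs(inner(chi[a], chi[b]) - expected) < 1e-9
```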

#### § Regular representation

The "Cayley-style" representation one would naturally dream up. For a group $G$, build a vector space $V$ whose basis vectors $v_h$ are indexed by the elements of $G$. Have $g \in G$ act on $V$ by sending $v_h$ to $v_{gh}$; that is, $g$ acts as a permutation of the basis. This gives us a "large" representation: for example, the permutation group on $n$ letters has a regular representation with $n!$ basis vectors. This representation contains every irrep. The idea is to show that the inner product of the character of the regular representation with every irreducible character is nonzero. Furthermore, since the regular representation has finite dimension, it decomposes into finitely many irreducible summands, so there are only finitely many irreps. This makes the idea of classifying irreps a reasonable task.
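A minimal sketch of the construction for $C_4$ (the group choice is arbitrary): each group element becomes a permutation matrix, and the action $v_h \mapsto v_{gh}$ makes this a homomorphism.

```python
import numpy as np

# Regular representation of C_4: basis vector e_h for each h in {0,1,2,3};
# g acts by sending e_h to e_{(g+h) mod 4}, i.e. by a permutation matrix.
n = 4

def reg(g):
    M = np.zeros((n, n))
    for h in range(n):
        M[(g + h) % n, h] = 1.0   # column h has its 1 in row g+h
    return M

# It is a homomorphism: reg(g1 + g2) = reg(g1) @ reg(g2).
for g1 in range(n):
    for g2 in range(n):
        assert np.allclose(reg((g1 + g2) % n), reg(g1) @ reg(g2))
```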

#### § Character of the regular representation

Theorem: The character $r_G$ of the regular representation is given by $r_G(1) = |G|$, $r_G(s) = 0$ for $s \neq 1$.
• The matrix for the identity element is the identity matrix, and the size of the matrix is the size of the vector space, which is $|G|$ since there's a basis vector for each element of $G$. Thus, $r_G(1) = |G|$.
• For any other element $g \in G$, the regular representation is a permutation matrix with no fixed points (since $gh = h$ forces $g = 1$). Thus, the diagonal of the matrix is all zeros, and hence $r_G(g) = 0$.
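The theorem can be checked directly on the regular representation of $C_4$ from the previous sketch (again an arbitrary small example):

```python
import numpy as np

# Regular representation of C_4 as permutation matrices.
n = 4

def reg(g):
    M = np.zeros((n, n))
    for h in range(n):
        M[(g + h) % n, h] = 1.0
    return M

traces = [np.trace(reg(g)) for g in range(n)]
assert traces[0] == n                    # r_G(1) = |G|
assert all(t == 0 for t in traces[1:])   # fixed-point-free => zero diagonal
```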

#### § Regular representation contains all other irreps

The inner product of the character of the regular representation with the character of any irrep $\alpha$ is:
\begin{aligned} \langle r_G | \chi_\alpha \rangle &= 1/|G| \sum_{g \in G} r_G(g)^* \chi_\alpha(g) \\ &= 1/|G| \left( r_G(1) \cdot \chi_\alpha(1) \right) \\ &= 1/|G| \left( |G| \cdot \chi_\alpha(1) \right) \\ &= \chi_\alpha(1) = \dim \alpha \geq 1 \end{aligned}
Thus, the regular rep contains every irrep: its character has non-zero inner product with each irreducible character, and since irreducible characters are orthonormal, the multiplicity of the irrep $\alpha$ in the regular rep is $\langle r_G | \chi_\alpha \rangle = \dim \alpha$.
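This can be checked against the standard character table of $S_3$ (three conjugacy classes: the identity, the three transpositions, the two 3-cycles); the table values are standard facts, the variable names are mine:

```python
# Multiplicity of each irrep of S_3 in the regular representation:
# <r_G | chi> = chi(1) = dim(chi). All character values below are real.
sizes = [1, 3, 2]                 # conjugacy class sizes: e, transpositions, 3-cycles
chis = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}
r_G = [6, 0, 0]                   # regular character: |G| at identity, else 0
G_order = 6

for name, chi in chis.items():
    ip = sum(s * r * c for s, r, c in zip(sizes, r_G, chi)) / G_order
    assert ip == chi[0]           # inner product = dim of the irrep
```

Note also that the dimensions satisfy $\sum_\alpha (\dim \alpha)^2 = |G|$: here $1 + 1 + 4 = 6$.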

#### § Abelian groups are controlled by characters

Since an abelian group maps to automorphisms that all commute with each other (and are unitary, hence diagonalizable), we can simultaneously diagonalize these matrices. Thus, we only need to consider the data along the diagonal, each entry of which transforms independently. This reduces the representation to a direct sum of scalars / 1D representations / characters.
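A concrete instance of this simultaneous diagonalization: the regular representation of the abelian group $C_3$ is diagonalized by the (unitary) DFT matrix, and the diagonal entries are exactly the 1-dimensional characters $\chi_a(g) = \omega^{ag}$ (the choice of $C_3$ and the DFT basis are standard, but the specific sketch is mine):

```python
import numpy as np

n = 3
w = np.exp(2j * np.pi / n)
# Normalized DFT matrix: rows are the characters of C_3, scaled by 1/sqrt(3).
F = np.array([[w ** (a * g) for g in range(n)] for a in range(n)]) / np.sqrt(n)

def reg(g):   # regular representation of C_3 as permutation matrices
    M = np.zeros((n, n))
    for h in range(n):
        M[(g + h) % n, h] = 1.0
    return M

# One change of basis diagonalizes EVERY reg(g) simultaneously, and the
# diagonal entries are the characters chi_a(g) = w^{a g}.
for g in range(n):
    D = F @ reg(g) @ np.conj(F).T          # F is unitary: F^{-1} = F^dagger
    assert np.allclose(D, np.diag(np.diag(D)))
    assert np.allclose(np.diag(D), [w ** (a * g) for a in range(n)])
```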