
What should you be acquainted with?

  1. Linear Algebra, in particular inner product spaces over both the real and the complex numbers.
  2. Very basics of Functional Analysis and Group Theory.

For an in-depth study cf. e.g. J-P. Serre: Linear Representations of Finite Groups or I.M. Isaacs: Character Theory of Finite Groups.

Acknowledgments

I'm especially indebted to Thomas Speckhofer for his accurate revision of this chapter.

Finite Groups

Representations

Unitary representations

Let $G$ be a group and $E$ a vector-space over the complex (real) numbers; $\Gl(E)$ denotes the group of all invertible linear mappings $A:E\rar E$ - thus $\Gl(n,\R)=\Gl(\R^n)$ and $\Gl(n,\C)=\Gl(\C^n)$. A homomorphism $\Psi:G\rar\Gl(E)$ is said to be a representation of $G$ in $E$. If $E$ is a Hilbert-space and $\UU(E)$ is the group of all unitary operators $U:E\rar E$, i.e. $\UU(E)=\{U\in\Gl(E):U^*U=1\}$, in particular $\UU(n)=\UU(\C^n)$, then any homomorphism $\Psi:G\rar\UU(E)$ is said to be a unitary (orthogonal) representation. If $E$ is finite dimensional, then $\dim(E)$ is said to be the dimension of the representation.
In this chapter we are essentially confined to finite groups and finite dimensional representations. Moreover, in case of a finite group it is no loss of generality to study unitary representations only: suppose $G$ is a finite group and $\Psi:G\rar\Gl(E)$ is a representation. Employing the 'averaging trick' we will find a euclidean product on $E$ such that $\Psi$ is unitary: for this, let $(.|.)$ be any euclidean product and define $$ \la x,y\ra\colon=\frac1{|G|}\sum_{h\in G}(\Psi(h)x|\Psi(h)y) $$ then for $x\neq0$: $\la x,x\ra\geq(x|x)/|G| > 0$ and for all $g\in G$: $$ \la\Psi(g)x,\Psi(g)y\ra =\frac1{|G|}\sum_{h\in G}(\Psi(hg)x|\Psi(hg)y) =\frac1{|G|}\sum_{h\in G}(\Psi(h)x|\Psi(h)y) =\la x,y\ra~. $$ Thus we will mostly deal with unitary representations. However, there is some virtue in arbitrary representations: 1. we don't have to care about a euclidean product, which is sometimes of an artificial nature; 2. the theory extends to arbitrary fields $\bK$ different from $\R$ and $\C$.
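Numerically the averaging trick is easy to carry out; here is a minimal sketch in plain Python/numpy (not from the text - the group $\Z_3$ and the non-unitary representation $\Psi(k)=SP^kS^{-1}$ are just an illustrative choice):
import numpy as np
rng = np.random.default_rng(0)
P = np.roll(np.eye(3), 1, axis=0)                  # cyclic permutation matrix, P^3 = 1
S = np.eye(3) + 0.3 * rng.standard_normal((3, 3))  # some invertible matrix
Psi = [S @ np.linalg.matrix_power(P, k) @ np.linalg.inv(S) for k in range(3)]  # a non-unitary representation of Z_3
# the averaged product <x,y> = (1/|G|) sum_h (Psi(h)x|Psi(h)y) is encoded by its Gram matrix:
Gram = sum(A.conj().T @ A for A in Psi) / len(Psi)
# every Psi(g) is unitary with respect to <.,.>, i.e. Psi(g)* Gram Psi(g) = Gram:
for A in Psi:
    assert np.allclose(A.conj().T @ Gram @ A, Gram)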
Suppose $e_1,\ldots,e_n$ is a basis of the real vector-space $E$. Prove that there is a euclidean product $\la.,.\ra$ on $E$ such that $e_1,\ldots,e_n$ is an orthonormal basis. Let $e_1^*,\ldots,e_n^*$ denote the dual basis, i.e. $e_1^*,\ldots,e_n^*$ is defined by $e_j^*(e_k)=\d_{jk}$ (this is indeed a basis for $E^*$); then $\la x,y\ra\colon=\sum e_j^*(x)e_j^*(y)$ will do it! How would you define $\la.,.\ra$ in the complex case?

Dual and contragredient representation

The following construction is well known in linear algebra: If $A\in\Hom(E)$, then its dual $A^*\in\Hom(E^*)$ is defined by $A^*(x^*)(x)\colon=x^*(Ax)$.
Verify that the matrix of $A^*$ with respect to the dual basis $e_1^*,\ldots,e_n^*$ is the transpose of the matrix of $A$ with respect to the basis $e_1,\ldots,e_n$.
Let $\Psi:G\rar\Gl(E)$ be any representation. Then $$ \Psi^d(g)\colon=\Psi(g)^{-1*} $$ is a representation of $G$ in the dual $E^*$ of $E$. $\Psi^d$ is called the dual representation.
Verify that $\Psi^d(g)\colon=\Psi(g)^{-1*}$ is a representation of $G$ in the dual $E^*$ of $E$. Solution by T. Speckhofer.
If $E$ is a Hilbert-space, we may identify $E^*$ with $E$, since every $x^*\in E^*$ can be written as $x^*(y)=\la y,x\ra$ for exactly one $x\in E$ - beware, the map $J:x\mapsto x^*$ is anti-linear! Given an orthonormal basis $e_1,\ldots$ we may define a linear map $I:E\rar E^*$ by $Ie_j(x)=\la x,e_j\ra$ and extend linearly - since $Ie_j(e_k)=\d_{jk}$, the basis $Ie_1,\ldots$ is the dual basis of $e_1,\ldots$. Now we define $\bar\Psi(g)\colon=I^{-1}\Psi^d(g)I$. $\bar\Psi$ is called the contragredient representation. The matrix of $\bar\Psi(g)$ with respect to the orthonormal basis $e_1,\ldots$ is the matrix of $\Psi^d(g)$ with respect to the dual basis $Ie_1,\ldots$, which is the transpose of the inverse of the matrix of $\Psi(g)$. Finally, if $\Psi$ is unitary, then the transpose of the inverse of the matrix of $\Psi(g)$ is just the complex conjugate of the matrix of $\Psi(g)$. If the representation is real, then of course both coincide.
Verify that $J:E\rar E^*$ is anti-linear, i.e. $J(\l x)=\bar\l Jx$ and $J(x+y)=Jx+Jy$. Also $g\mapsto J^{-1}\Psi(g)^dJ$ is a representation, but in case $\Psi$ is unitary it coincides with $g\mapsto\Psi(g)$.
We just verify that in case $\Psi$ is unitary $J^{-1}\Psi(g)^dJ$ coincides with $\Psi(g)$: \begin{eqnarray*} \la J^{-1}\Psi(g)^dJx,y\ra &=&\cl{\la y,J^{-1}\Psi(g)^dJx\ra} =\cl{(\Psi(g)^dJx)(y)}\\ &=&\cl{(Jx)(\Psi(g)^{-1}y)} =\cl{\la\Psi(g)^{-1}y,x\ra} =\cl{\la y,\Psi(g)x\ra} =\la\Psi(g)x,y\ra~. \end{eqnarray*} Also, if $u:H\rar G$ is a homomorphism and $\Psi:G\rar\Gl(E)$ a representation, then $u^*\Psi:H\rar\Gl(E)$, $u^*\Psi(h)\colon=\Psi(u(h))$ is a representation of $H$. This in particular applies to the canonical inclusion $j:H\rar G$ of any subgroup $H$ of $G$ - in other words: the restriction of any representation of $G$ to $H$ gives a representation of $H$. If on the other hand $u$ is onto and $\Phi:H\rar\Gl(E)$ is a representation, then this descends to a representation $\wh\Phi$ of $G$ iff for all $h\in\ker u$: $\Phi(h)=1_E$.
If $\Phi$ is a representation of $(\R,+)$ in $E$ such that $\Phi(2\pi)=1_E$, then there is exactly one representation $\wh\Phi:S^1\rar\UU(E)$ such that $\wh\Phi(e^{it})=\Phi(t)$.
In this case $u:\R\rar S^1$ is evidently given by $u(t)=e^{it}$.
Let ${\cal T}_p(E)$ be the space of all $p$-linear mappings: $A:E^p\rar\R$, i.e. $A$ is linear in all its $p$ components. For $g\in\Gl(E)$ let $g^*A\in{\cal T}_p(E)$ be the pull-back of $A$ defined by $$ g^*A(x_1,\ldots,x_p)\colon=A(gx_1,\ldots,gx_p)~. $$ Then $\Psi(g)A\colon=g^{-1*}A$ is a representation of $\Gl(E)$. Moreover, if ${\cal T}^p(E)\colon={\cal T}_p(E^*)$ denotes the space of all $p$-linear mappings: $A:E^{*p}\rar\R$, then $\Phi(g)B\colon=g_*B$ defined by $$ g_*B(x_1^*,\ldots,x_p^*)\colon=B(g^{*}x_1^*,\ldots,g^{*}x_p^*) $$ is a representation of $\Gl(E)$ in ${\cal T}^p(E)$ - $g_*B$ is called the push forward of $B$.

Representations in function spaces

Some of the representations we already encountered are a particular case of the following more general construction: Suppose a group $G$ operates on a space $M$, i.e. there is a mapping $G\times M\rar M$, denoted by $(g,x)\mapsto g\cdot x$ (or likewise by $x\cdot g$), such that $(gh)\cdot x=g\cdot(h\cdot x)$ and $e\cdot x=x$ for all $g,h\in G$ and all $x\in M$, cf. wikipedia. Instead of $g\cdot x$ we will also simply write $gx$. Of course, every group $G$ acts by left- or right-translation on itself: $(g,h)\mapsto gh$ or $(g,h)\mapsto hg$. If $G$ operates on $M$, then there is an equivalence relation $R$ on $M$: $$ xRy\colon\Lrar\exists g\in G:\quad y=gx~. $$ The quotient space $M/R$ is denoted by $M/G$ and it's called the orbit space - for each $x\in M$ the set $Gx\colon=\{gx:g\in G\}$ is called the orbit of $x$.
If $\Psi:G\rar\Gl(E)$ is a representation of $G$ in $E$, then $G$ operates on $E$ by $(g,x)\mapsto\Psi(g)x$.
Let $G$ be a finite group, then $G$ acts on itself by multiplication and the map $J:G\rar S(G)$, $J(g)(h)=gh$ is an injective mapping into the group $S(G)$ of all permutations of $G$. Prove Cayley's Theorem that every finite group is isomorphic to a subgroup of some $S(n)$. Solution by T. Speckhofer.
Let $G$ be a finite group, then $G$ acts on $M=G$ by conjugation, i.e. $(g,x)\mapsto gxg^{-1}$. The orbit of $x$ is called the conjugacy class of $x$, cf. subsection.
We have for all $x\in G$: $exe^{-1}=x$ and for all $g,h\in G$: $(gh)x(gh)^{-1}=ghxh^{-1}g^{-1}=g(hxh^{-1})g^{-1}$.
Let $G$ be a finite group, $p\in\N$ and $X\colon=\{(g_0,\ldots,g_{p-1})\in G^p: g_0\cdots g_{p-1}=e\}$. Verify that $\Z_p$ operates on $X$ by means of $k\cdot(g_0,g_1,\ldots,g_{p-1})\colon=(g_{k},g_{k+1},\ldots,g_{k+p-1})$, where the indices are taken $\modul\ p$.
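To get a feeling for this action it helps to enumerate $X$ for a small group; the following sketch (plain Python, not from the text) takes the additively written group $G=\Z_6$ and $p=3$, so that $X$ consists of all triples summing to $0$ modulo $6$:
from itertools import product
n, p = 6, 3
X = {g for g in product(range(n), repeat=p) if sum(g) % n == 0}
def act(k, g):                        # k.(g_0,...,g_{p-1}) = (g_k, g_{k+1}, ..., g_{k+p-1})
    return tuple(g[(k + i) % p] for i in range(p))
assert all(act(k, g) in X for k in range(p) for g in X)   # the action leaves X invariant
assert all(act(1, act(1, g)) == act(2, g) for g in X)     # (1+1).g = 1.(1.g)
print(len(X), [g for g in X if act(1, g) == g])           # the fixed points are the constant triples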
If $G$ operates on $M$ then we can define a representation of $G$ in the space of functions $F(M)\colon=\{f:M\rar\C\}$ by \begin{equation}\label{fineq1}\tag{FIN1} \G(g)f(x)\colon=f(g^{-1}x)~. \end{equation} Indeed, $\G(g_1g_2)f(x)=f(g_2^{-1}g_1^{-1}x)$ and $\G(g_1)\G(g_2)f(x)=\G(g_2)f(g_1^{-1}x)=f(g_2^{-1}g_1^{-1}x)$. If in addition $M$ carries a topology or a smooth structure, we usually require the mappings $x\mapsto gx$ to be continuous or smooth for all $g\in G$. In these cases we get a representation of $G$ in the space $C(M)$ of continuous functions or in the space $C^\infty(M)$ of smooth functions. Instead of complex valued function spaces we may obviously take function spaces with values in any set. In the subsection on induced representations this set will be a vector space
$\Gl(n,\R)$ acts on $\R^n$ smoothly by matrix multiplication: $(A,x)\mapsto Ax$. $\UU(n)$ operates on the unit sphere $S^{2n-1}$ of $\C^n$ by means of $(U,z)\mapsto Uz$ smoothly.
However, all these spaces $C(M)$ lack the structure of a Hilbert-space. Of course we may replace $C(M)$ by some $L_2(M,\mu)$ space, but in order to make the representation unitary, we have to choose a very specific measure $\mu$: we need a so-called $G$-invariant measure on $M$, i.e. a measure $\mu$ on $M$ such that for all $g\in G$ and all measurable sets $A$ in $M$: $\mu(gA)=\mu(A)$.
If $\mu$ is $G$-invariant, then the representation \eqref{fineq1} is a unitary representation of $G$ in $L_2(M,\mu)$.
$\proof$ For arbitrary $g\in G$ let $F:M\rar M$ be the mapping $x\mapsto g^{-1}x$, then the image measure of $\mu$ under $F$ is $\mu_F(A)\colon=\mu(F\in A)=\mu(gA)=\mu(A)$ and thus by the transformation theorem in measure theory for all $f_1,f_2\in L_2(M,\mu)$: $$ \la\G(g)f_1,\G(g)f_2\ra =\int_M\G(g)f_1\cl{\G(g)f_2}\,d\mu =\int_M f_1(F)\cl{f_2(F)}\,d\mu =\int_M f_1\cl f_2\,d\mu_F =\int_M f_1\cl f_2\,d\mu =\la f_1,f_2\ra~. $$ $\eofproof$
If $M$ is finite or discrete and countable, then the counting measure on $M$ is appropriate, otherwise it's not, for in case $M$ is uncountable an integrable function is supported on an at most countable set. A measure with the property that $\mu(gA)=\mu(A)$ for all $g\in G$ and all measurable sets $A$ in $M$, is called a (left) Haar measure on $M$. In
chapter we will prove that every compact group $G$ admits a unique Haar probability measure $\mu$ and $\mu$ is both left and right invariant, i.e. for all $g\in G$ and all measurable sets $A$ of $G$: $\mu(gA)=\mu(A)=\mu(Ag)$. More generally we will verify that if $G$ acts isometrically on a compact metric space $M$, then $M$ admits a $G$-invariant probability measure $\mu$ and if $G$ acts transitively on $M$, i.e. if for every $x\in M$: $Gx=M$, then $\mu$ is unique.
$\UU(n)$ operates on the unit sphere $S^{2n-1}$ of $\C^n$ isometrically and transitively. The normalized surface measure - cf. section, is the Haar measure for the operation $(U,x)\mapsto Ux$.
$\UU(n)$ operates on $\C^n$ isometrically. The gaussian measure with density $$ (2\pi)^{-n}\exp(-\tfrac12\norm z^2) $$ is an invariant probability measure, but it's not the only one!

Left-regular representation

Another prominent example of this type of representation is the so-called left-regular representation $\g$, which will be discussed in detail in
subsection: If $G$ is finite, then the normalized counting measure is the Haar-measure on $G$ and thus $\g(g)f\colon=L_gf$, where $L_gf(x)=f(g^{-1}x)$, is a unitary representation of $G$ in the space $L_2(G)$ furnished with the euclidean product $$ \la f,g\ra\colon=\frac1{|G|}\sum_{x\in G}f(x)\cl{g(x)}~. $$ The functions $\d_g$, $g\in G$, $$ \d_g(x)\colon=\left\{\begin{array}{cl} 1&\mbox{if $x=g$}\\ 0&\mbox{if $x\neq g$} \end{array}\right. $$ evidently form an orthogonal basis for $L_2(G)$ - the norm of each is given by $\norm{\d_g}=1/\sqrt{|G|}$. Moreover, for all $g,h\in G$: $L_g\d_h(x)=\d_h(g^{-1}x)$, which is $1$ iff $g^{-1}x=h$, i.e. $x=gh$ and therefore: $L_g\d_h=\d_{gh}$. Hence the matrices of the left-regular representation are permutation matrices and they can be read straight off the Cayley table!
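For instance, here is a minimal sketch (plain Python/numpy, not from the text) of how these permutation matrices arise from the relation $L_g\d_h=\d_{gh}$ for the cyclic group $\Z_n$, written additively:
import numpy as np
n = 3                                    # the group Z_n, written additively
def L(g):                                # matrix of L_g in the basis (delta_h)
    M = np.zeros((n, n))
    for h in range(n):
        M[(g + h) % n, h] = 1            # L_g delta_h = delta_{g+h}
    return M
assert np.allclose(L(1) @ L(2), L((1 + 2) % n))   # g -> L_g is a homomorphism
print(L(1))                                       # a permutation matrix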
Prove that $g\mapsto R_g$, $R_gf(x)\colon=f(xg)$, is also a representation - the right-regular representation - and show that $R_g\d_h=\d_{hg^{-1}}$.

Induced representation

Let $H$ be a subgroup of $G$ and suppose $\Psi:H\rar\Gl(E)$ is a representation of $H$. We define $E^G$ to be the subspace of all functions $f:G\rar E$ such that for all $h\in H$ and all $x\in G$: $\Psi(h)(f(x))=f(hx)$. Then \begin{equation}\label{fineq2}\tag{FIN2} \Psi^G(g)f(x)\colon=(\Psi^G(g)f)(x)\colon=f(xg) \end{equation} is a representation of $G$ in $E^G$, it's called the induced representation: $\Psi^G(g)f\in E^G$ iff for all $h\in H$ and all $x\in G$: $\Psi^G(g)f(hx)=\Psi(h)\Psi^G(g)(f(x))$, i.e. iff $f(hxg)=\Psi(h)(f(xg))$; for $f\in E^G$ this holds by definition of $E^G$. Finally, $$ \Psi^G(g)\Psi^G(h)f(x) =\Psi^G(h)f(xg) =f(xgh) =\Psi^G(gh)f(x)~. $$ Hence it's indeed a representation of $G$ and by \eqref{fineq2} $\Psi^G$ is just right-translation on $E^G$; in other words: $G$ operates on $E^G$ by right-translation. We'll resume that discussion in section.
Alternatively, we could define $E^G$ to be the set of all functions $f:G\rar E$ such that for all $h\in H$ and all $x\in G$: $\Psi(h)(f(x))=f(xh)$ and $\Psi^G(g)f(x)=f(g^{-1}x)$. Is this another representation of $G$?
If $G=Hx_1\cup\ldots\cup Hx_m$ is a disjoint union of right cosets, then $f\in E^G$ is given by $f(hx_j)=\Psi(h)(f(x_j))$ for $h\in H$ and arbitrary values of $f(x_j)$. Thus $\dim E^G=m\dim E$ and $m=|G/H|$. Thus given any function $F:G/H\rar E$ there is exactly one function $f\in E^G$ such that $f(x_j)=F([x_j])$.
Choosing $f_u(x_1)=\cdots=f_u(x_m)=u\in E$, i.e. $F:G/H\rar E$ is constant, we may identify $E$ with a subspace of $E^G$; for these functions we have for all $h\in H$: $\Psi^G(h)f_u(x_j)=f_u(x_jh)$, which differs in general from $$ f_{\Psi(h)u}(x_j)=\Psi(h)u=\Psi(h)(f_u(x_j))=f_u(hx_j)~. $$ However, in case of a commutative group $G$, $\Psi^G$ can be viewed as a sort of extension of $\Psi$: define $J:E\rar E^G$ by $Ju\colon=f_u$, then $J$ is linear and (in case $G$ is commutative): $J\Psi(h)u=\Psi^G(h)Ju$, i.e. $\Psi$ is equivalent to the subrepresentation of $\Psi^G|H$ on the subspace $J(E)$, cf. subsection.
Suppose $G$ is abelian, $\chi$ a character of the subgroup $H$ and $\Psi(h)$ is multiplication by $\chi(h)$ on $E=\C$. Then $E^G=\{f:G\rar\C:f(hx)=\chi(h)f(x)\}$ and for $u\in\C$: $f_u(hx_j)=\chi(h)u$ and for all $h\in H$: $\Psi^G(h)f_u=f_{\chi(h)u}$.
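Here is a small sketch (plain Python/numpy, not from the text) of this example for the concrete choice $G=\Z_4$ (written additively), $H=\{0,2\}$ and the character $\chi(0)=1$, $\chi(2)=-1$; a function $f\in E^G$ is determined by its values at the coset representatives $0,1$:
import numpy as np
n, chi = 4, {0: 1, 2: -1}                # G = Z_4 and the nontrivial character of H = {0,2}
def decompose(x):                        # write x = h + r with h in H and r in {0,1}
    r = x % 2
    return (x - r) % n, r
def induced(g):                          # matrix of Psi^G(g) in the basis f_0, f_1
    M = np.zeros((2, 2))
    for r in (0, 1):
        h, r2 = decompose((r + g) % n)
        M[r, r2] = chi[h]                # (Psi^G(g)f)(r) = f(r+g) = chi(h) f(r2)
    return M
print(induced(1))                        # this matrix has order 4, as it must
assert np.allclose(np.linalg.matrix_power(induced(1), 4), np.eye(2))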

Irreducible Representations

Invariant subspaces

It may happen that a representation $\Psi:G\rar\UU(E)$ admits invariant subspaces: a subspace $F$ of $E$ is called $\Psi$-invariant or simply invariant, if for all $g\in G$: $\Psi(g)(F)=F$. Let's look for such subspaces in the following example: Every subgroup $G$ of the orthogonal group $\OO(3)$ operates on the sphere $S^2$ and by \eqref{fineq1} we get a representation of $G$ in $L_2(S^2)$: $$ \forall Y\in L_2(S^2):\quad \G(g)Y\colon=Y\circ g^{-1}~. $$ This representation is unitary and the spaces ${\cal H}_l$ of spherical harmonics, which are subspaces of $L_2(S^2)$, are invariant under this representation, cf. section. We are going to calculate the representations of $T_d$ in ${\cal H}_l$ for $l=0,1,2$. First notice that $C_3^{-1}(x,y,z)=(z,x,y)$, $C_2^{-1}(x,y,z)=(x,-y,-z)$, $S_4^{-1}(x,y,z)=(-x,-z,y)$, $\s^{-1}(x,y,z)=(-z,y,-x)$.
  1. $l=0$: That's the trivial representation $\G(g)=1$.
  2. $l=1$: We choose the orthonormal basis $p_1=\sqrt3x$, $p_2=\sqrt3 y$, $p_3=\sqrt3z$ from exam: $\G(C_3)p_1(x,y,z)=p_1\circ C_3^{-1}(x,y,z)=p_1(z,x,y)=p_3(x,y,z)$ and analogously: $\G(C_3)p_2=p_1$, $\G(C_3)p_3=p_2$; thus the matrix of $\G(C_3)$ is: $$ \left(\begin{array}{ccc} 0&1&0\\ 0&0&1\\ 1&0&0 \end{array}\right) $$ For the matrices of $\G(C_2)$, $\G(S_4)$ and $\G(\s)$ we get: $$ \left(\begin{array}{ccc} 1&0&0\\ 0&-1&0\\ 0&0&-1 \end{array}\right), \left(\begin{array}{ccc} -1&0&0\\ 0&0&1\\ 0&-1&0 \end{array}\right), \left(\begin{array}{ccc} 0&0&-1\\ 0&1&0\\ -1&0&0 \end{array}\right) $$
  3. $l=2$: We choose the orthonormal basis $d_1=\sqrt{15}xy$, $d_2=\sqrt{15}yz$, $d_3=\sqrt{15}zx$, $d_4=\sqrt{15/4}(x^2-y^2)$ and $d_5=\sqrt{5/4}(2z^2-x^2-y^2)$ from exam: $\G(C_3)d_1(x,y,z)=d_1(z,x,y)=d_3(x,y,z)$ and similarly $\G(C_3)d_2=d_1$, $\G(C_3)d_3=d_2$. Since $z^2-x^2=-\tfrac12(x^2-y^2)+\tfrac12(2z^2-x^2-y^2)$ we conclude that \begin{eqnarray*} \G(C_3)d_4(x,y,z)&=&d_4(z,x,y)=\tfrac{\sqrt{15}}2(z^2-x^2)\\ &=&\tfrac{\sqrt{15}}2\Big(-\tfrac12(x^2-y^2)+\tfrac12(2z^2-x^2-y^2)\Big) =-\tfrac12d_4+\tfrac{\sqrt3}2d_5~. \end{eqnarray*} and analogously: $\G(C_3)d_5=-\tfrac{\sqrt3}2d_4-\tfrac12d_5$. Thus the matrix of $\G(C_3)$ is: $$ \left(\begin{array}{ccccc} 0&1&0&0&0\\ 0&0&1&0&0\\ 1&0&0&0&0\\ 0&0&0&-1/2&-\sqrt3/2\\ 0&0&0&\sqrt3/2&-1/2 \end{array}\right) $$ Similarly we get for the matrices of $\G(C_2),\G(S_4),\G(\s)$: $$ \left(\begin{array}{ccccc} -1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&-1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1 \end{array}\right), \left(\begin{array}{ccccc} 0&0&-1&0&0\\ 0&-1&0&0&0\\ 1&0&0&0&0\\ 0&0&0&1/2&-\sqrt3/2\\ 0&0&0&-\sqrt3/2&-1/2 \end{array} \right), \left(\begin{array}{ccccc} 0&-1&0&0&0\\ -1&0&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1/2&\sqrt3/2\\ 0&0&0&\sqrt3/2&-1/2 \end{array} \right) $$ Obviously, the spaces $\lhull{d_1,d_2,d_3}$ and $\lhull{d_4,d_5}$ are invariant under $\G$.
To find the smallest invariant subspace of an arbitrary representation $\Psi:G\rar\UU(E)$ containing a given vector $x\in E$ you just need to determine the space generated by the set $\{\Psi(g)x:g\in G\}$. Well, that's an easy recipe but in practice it can be fairly laborious!
Let $\Psi:T_d\rar\UU({\cal H}_1)$ be the representation obtained above for $l=1$. Determine the space generated by $\{\Psi(g)e_1:g\in T_d\}$.
If a representation $\Psi:G\rar\UU(E)$ has an invariant subspace $F$ then for all $g\in G$ the operator $\Psi(g)$ must have an eigen-vector in $F$. Solution by T. Speckhofer.
Find invariant subspaces of the representation of $C_{2v}$ given in exam
Find invariant subspaces of the representation of $T_d$ given in subsection.
Find invariant subspaces of the left-regular representation of $\Z_3$.

Irreducible subspaces and representations

Suppose $\Psi:G\rar\UU(E)$ is a unitary representation of the group $G$ in a finite dimensional Hilbert-space $E$. An invariant subspace $F$ is said to be irreducible, if $\{0\}$ and $F$ are the only invariant subspaces of $F$. If $E$ is irreducible, then we say that the representation $\Psi:G\rar\UU(E)$ is irreducible.
For arbitrary Hilbert-spaces the definition of irreducible subspaces (cf. section) also requires $F$ to be closed, which is redundant in the finite dimensional case!
The dimension of an irreducible representation of a finite group $G$ is at most $|G|$, for the space generated by $\{\Psi(g)x:g\in G\}$ is invariant and its dimension is at most $|G|$. Every one-dimensional representation is clearly irreducible and since every unitary operator on a one-dimensional space is multiplication by a complex number of modulus $1$, we infer that one-dimensional unitary representations are simply homomorphisms $\psi:G\rar S^1$.
If $F$ is an invariant subspace of $E$, then the restriction $\Psi_F(g)\colon=\Psi(g)|F$ is again a unitary representation, $\Psi_F$ is said to be a subrepresentation. Moreover in this case the space $F^\perp$ is also invariant, indeed for all $x\in F^\perp$ and all $y\in F$: $$ \la\Psi(g)x,y\ra =\la x,\Psi(g)^*y\ra =\la x,\Psi(g)^{-1}y\ra =\la x,\Psi(g^{-1})y\ra=0~ \quad\mbox{i.e.}\quad \Psi(g)x\in F^\perp $$ and since $E$ is finite dimensional: $\Psi(g)(F^\perp)=F^\perp$. Beware, this is a feature of unitary representations! Now suppose $\Psi:G\rar\UU(E)$ is not irreducible, then there is a non-trivial invariant subspace $F$; let $P$ be the orthogonal projection onto $F$, then $\Psi(g)=\Psi(g)P+\Psi(g)(1-P)$ and since both $F$ and $F^\perp$ are invariant: $P\Psi(g)=\Psi(g)P$. Hence there exists a non-trivial self-adjoint operator $A=P$ commuting with all $\Psi(g)$, $g\in G$. Conversely, assume that such an operator $A$ exists, then any eigen-space $F$ of $A$ is $\Psi$-invariant, cf. section:
A finite dimensional unitary representation $\Psi:G\rar\UU(E)$ is irreducible if and only if every self-adjoint operator $A\in\Hom(E)$ commuting with all $\Psi(g)$, $g\in G$, is a multiple of the identity.
Suppose that for every orthonormal basis $b_1,\ldots,b_n$ for $E$ and all $j,k$ there is some $g\in G$ such that $\la\Psi(g)b_j,b_k\ra\neq0$. Prove that $\Psi:G\rar\UU(E)$ is irreducible. Suggested solution.
1. Suppose that $A\in\Hom(E)$ is diagonalizable and commutes with $\Psi(g_0)$. If all eigen-values of $\Psi(g_0)$ are simple, then every eigen-vector of $\Psi(g_0)$ is also an eigen-vector of $A$. 2. Let $b_1,\ldots,b_n$ be an ONB of eigen-vectors of $\Psi(g_0)$; if for all $j,k$ there is some $g\in G$ such that $\la\Psi(g)b_j,b_k\ra\neq0$, then $\Psi$ is irreducible.
In an effort to prove irreducibility of a representation $\Psi:G\rar\Gl(E)$ it is not enough that for some vector $x\in E$ the orbit $\{\Psi(g)x:g\in G\}$ of $x$ spans $E$; one has to show that for all $x\in E\sm\{0\}$ the orbit of $x$ spans $E$!
Let $F$ be an invariant subspace of $\Psi:G\rar\UU(E)$ and $P:E\rar F$ the orthogonal projection onto $F$. Then for all $g\in G$ the operators $P$ and $\Psi(g)$ are simultaneously diagonalizable. Solution by T. Speckhofer.
Take the standard representation of $S(n)$ in $\R^n$ (cf. section). The space $E$ orthogonal to $b\colon=e_1+\cdots+e_n$ is invariant and $g\mapsto\Psi(g)|E$ is an irreducible representation of $S(n)$ in $E$. This in fact establishes an isomorphism of $S(n)$ onto a subgroup of $\OO(n-1)$.
Suppose $A\in\Hom(\R^n)$ and for all $\pi\in S(n)$: $AP(\pi)=P(\pi)A$, then for all $j,k$: $$ a_{j\pi(k)} =\sum_l a_{jl}p_{lk} =\sum_l p_{jl}a_{lk} =a_{\pi^{-1}(j)k}, $$ i.e. for all $\pi\in S(n)$: $a_{\pi(j)\pi(k)}=a_{jk}$, thus all diagonal entries must equal $\a$ and all off-diagonal entries must equal $\b$. It follows that $Ab=(\a+(n-1)\b)b$ and for $x\perp b$ the $j$-component of $Ax$ is given by $$ \sum_ka_{jk}x_k =\a x_j+\sum_{k\neq j}\b x_k =\b\sum_k x_k+(\a-\b)x_j =(\a-\b)x_j, $$ i.e. $A|E=(\a-\b)1_E$. Finally if $b_1,\ldots,b_{n-1}$ is an orthonormal basis for $E$ then the matrix $(p_{jk}(\pi))$ of $P(\pi):E\rar E$ with respect to this basis is given by $p_{jk}(\pi)=\la P(\pi)b_k,b_j\ra$.
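A numerical illustration of this computation (plain Python/numpy, not from the text): averaging a random matrix over $S(4)$ lands in the commutant, and the result indeed has constant diagonal entries $\a$ and constant off-diagonal entries $\b$:
import numpy as np
from itertools import permutations
n = 4
def P(pi):                               # permutation matrix: P(pi) e_k = e_{pi(k)}
    M = np.zeros((n, n))
    for k in range(n):
        M[pi[k], k] = 1
    return M
perms = list(permutations(range(n)))
rng = np.random.default_rng(1)
A0 = rng.standard_normal((n, n))
A = sum(P(pi) @ A0 @ P(pi).T for pi in perms) / len(perms)   # average over the group
assert all(np.allclose(A @ P(pi), P(pi) @ A) for pi in perms)   # A lies in the commutant
alpha, beta = A[0, 0], A[0, 1]
assert np.allclose(A, (alpha - beta) * np.eye(n) + beta * np.ones((n, n)))
print(alpha, beta)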
Let $E$ be the space of the previous example for $n=4$. Is the standard representation restricted to $E$ an irreducible representation of the alternating group $A(3)=\{e,(123),(132)\}$?
Let $E$ be the space of the previous example for any $n$. Is the standard representation restricted to $E$ an irreducible representation of the alternating group $A(n)$?
Let $E$ be the space of all traceless $n$ by $n$ diagonal matrices, then $\Psi(\pi)A\colon=P(\pi)AP(\pi^{-1})$ is an irreducible representation of $S(n)$ in $E$. Suggested solution.
If $\Psi:G\rar\Gl(E)$ is an irreducible finite dimensional representation, then so is its dual $\Psi^d:G\rar\Gl(E^*)$.
$\proof$ For any subspace $F$ of $E$ we denote by $F^\perp\colon=\{x^*\in E^*: x^*|F=0\}$ its annihilator. Now suppose $F$ is invariant, then for all $x^*\in F^\perp$ all $x\in F$ and all $A=\Psi(g)$: $A^*(x^*)x=x^*(Ax)=0$, i.e. $A^*(F^\perp)\sbe F^\perp$. Since $E$ is finite dimensional and $A^*\in\Gl(E^*)$: $A^*(F^\perp)=F^\perp$ and thus: $A^{-1*}(F^\perp)=F^\perp$. It follows that $F^\perp$ is $\Psi^d$-invariant. Finally, the fact that $E$ is finite dimensional implies that $E^{**}=E$ and therefore also the converse holds: $\Psi$ is irreducible iff its dual $\Psi^d$ is irreducible. $\eofproof$

The ammonia molecule $\chem{NH_3}$

We want to calculate the irreducible subrepresentations of the site representation of ammonia. As in subsection we assume that the plane formed by the hydrogen atoms, labeled $1,2,3$, coincides with the $xy$-plane! The site representation including the central nitrogen atom, labeled $0$, is a four-dimensional representation of its symmetry group $C_{3v}$: Since the generating symmetries $C_3$ (rotation about $2\pi/3$ in the $xy$ plane) and $\s_1$ (reflection about the plane generated by the line through the central nitrogen atom and the first hydrogen atom $1$ and the $z$-axis) rearrange the atoms $0,1,2,3$ as follows: $0,2,3,1$ and $0,1,3,2$, we get for the matrices of $\Psi(C_3)$ and $\Psi(\s_1)$: $$ \Psi(C_3)= \left(\begin{array}{cccc} 1&0&0&0\\ 0&0&0&1\\ 0&1&0&0\\ 0&0&1&0 \end{array}\right),\quad \Psi(\s_1)= \left(\begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{array}\right) $$ We immediately recognize the subspace generated by $e_0$ to be invariant under both $\Psi(C_3)$ and $\Psi(\s_1)$ and thus under all operations $\Psi(g)$. Let us check if the space $F$ generated by $e_1,e_2$ and $e_3$ is irreducible. $$ \Psi(C_3)|F= \left(\begin{array}{ccc} 0&0&1\\ 1&0&0\\ 0&1&0 \end{array}\right),\quad \Psi(\s_1)|F= \left(\begin{array}{ccc} 1&0&0\\ 0&0&1\\ 0&1&0 \end{array}\right) $$ Both matrices are permutation matrices and therefore the unit vector $x_1\colon=(e_1+e_2+e_3)/\sqrt3$ is invariant under both $\Psi(C_3)$ and $\Psi(\s_1)$. Hence the subrepresentation $g\mapsto\Psi(g)|F$ is not irreducible. Since $\lhull{x_1}$ is invariant, the subspace $G$ of $F$ orthogonal to $x_1$ must also be invariant, cf. subsection. To identify this subspace we choose a unit vector $x_2\in\lhull{x_1}^\perp$, e.g. $x_2=(e_1-e_2)/\sqrt2$ and calculate $x_3\colon=x_1\times x_2=(e_1+e_2-2e_3)/\sqrt6$, then the vectors $x_2,x_3$ form an orthonormal basis for $G=\lhull{x_1}^\perp=\lhull{x_2,x_3}$. It follows that $\Psi(C_3)x_2=(e_2-e_3)/\sqrt2$, $\Psi(\s_1)x_2=(e_1-e_3)/\sqrt2$, $\Psi(C_3)x_3=(-2e_1+e_2+e_3)/\sqrt6$, $\Psi(\s_1)x_3=(e_1-2e_2+e_3)/\sqrt6$. With respect to the basis $x_2,x_3$ the matrix of $\Psi(C_3)|G$ is given by $$ \left(\begin{array}{cc} \la\Psi(C_3)x_2,x_2\ra&\la\Psi(C_3)x_3,x_2\ra\\ \la\Psi(C_3)x_2,x_3\ra&\la\Psi(C_3)x_3,x_3\ra \end{array}\right) = \left(\begin{array}{cc} -1/2&-\sqrt 3/2\\ \sqrt 3/2&-1/2 \end{array}\right) $$ Analogously we get for the matrix of $\Psi(\s_1)|G$ with respect to the basis $x_2,x_3$: $$ \left(\begin{array}{cc} \la\Psi(\s_1)x_2,x_2\ra&\la\Psi(\s_1)x_3,x_2\ra\\ \la\Psi(\s_1)x_2,x_3\ra&\la\Psi(\s_1)x_3,x_3\ra \end{array}\right) = \left(\begin{array}{cc} 1/2&\sqrt 3/2\\ \sqrt 3/2&-1/2 \end{array}\right)~. $$ The subrepresentations $g\mapsto\Psi(g)|\lhull{e_0}$, $g\mapsto\Psi(g)|\lhull{x_1}$ are one-dimensional and therefore irreducible. Also $g\mapsto\Psi(g)|G$ is irreducible, for if there is a non trivial invariant subspace of $G$, then it must be one-dimensional, i.e. there must be a common eigen-vector $x\in G$ of both $\Psi(C_3)$ and $\Psi(\s_1)$. Since the orthogonal complement of $x$ in $G$ is also invariant by subsection, we conclude that $\Psi(C_3)|G$ and $\Psi(\s_1)|G$ are simultaneously diagonalizable. In particular both commute, which is easily checked to be false. That's been a very simple example, nonetheless the irreducibility of $G$ was not that obvious. Below, cf. theorem, we will highlight a straightforward way to detect irreducible subspaces.
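The change of basis above is easy to check numerically; here is a short sketch (plain Python/numpy, not from the text) which conjugates $\Psi(C_3)|F$ and $\Psi(\s_1)|F$ into the basis $x_1,x_2,x_3$ and exhibits the block structure:
import numpy as np
C3 = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])   # Psi(C_3)|F: e1 -> e2 -> e3 -> e1
s1 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])   # Psi(sigma_1)|F: e1 fixed, e2 <-> e3
x1 = np.array([1, 1, 1]) / np.sqrt(3)
x2 = np.array([1, -1, 0]) / np.sqrt(2)
x3 = np.array([1, 1, -2]) / np.sqrt(6)
B = np.column_stack([x1, x2, x3])                  # orthogonal change of basis
np.set_printoptions(precision=3, suppress=True)
print(B.T @ C3 @ B)   # block diagonal: 1 and the 2x2 block [[-1/2,-sqrt(3)/2],[sqrt(3)/2,-1/2]]
print(B.T @ s1 @ B)   # block diagonal: 1 and the 2x2 block [[1/2,sqrt(3)/2],[sqrt(3)/2,-1/2]]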
Find all irreducible subrepresentations of the representations of $T_d$ given in subsection
1. We claim that $\G:T_d\rar\UU({\cal H}_1)$ doesn't have a non trivial invariant subspace: Assume to the contrary that $F$ is invariant and non trivial, i.e. $F\neq\{0\},{\cal H}_1$, then one of the invariant spaces $F$, $F^\perp$ must be one-dimensional and therefore all operators $\Psi(g)$, $g\in T_d$, must have a common eigen-vector, which can be disproved by straight calculation. 2. The irreducible subspaces of the representation $\G:T_d\rar\UU({\cal H}_2)$ are $\lhull{d_1,d_2,d_3}$ and $\lhull{d_4,d_5}$ and the argument is similar.
Suppose $\Psi$ is a unitary representation of a group $G$ in a space $E$ of dimension two at least. If all operators $\Psi(g)$ have a common eigen-vector, then $\Psi$ is not irreducible. The converse is true if $\dim E\in\{2,3\}$.
Whether a representation is irreducible or not will become much easier to determine with the help of characters!
Find all irreducible subrepresentations of the representation of $C_{2v}$ given in exam
Find all irreducible subrepresentations of the representation of $T_d$ given in subsection

Sums of representations

Suppose $\Psi_1$ and $\Psi_2$ are unitary representations of a group $G$ in the Hilbert-spaces $E_1$ and $E_2$ respectively. Putting $$ \Psi_1\oplus\Psi_2(g)x_1\oplus x_2\colon=\Psi_1(g)x_1\oplus\Psi_2(g)x_2 $$ we get another representation $\Psi_1\oplus\Psi_2$ of $G$ in $E_1\oplus E_2$, it's called the direct sum of the representations. Of course, $\Psi_1$ and $\Psi_2$ are in turn subrepresentations of $\Psi_1\oplus\Psi_2$.
Let $\Psi$ be a finite dimensional unitary representation of any group $G$ in $E$. Then there is an orthogonal decomposition $E=E_1\oplus\cdots\oplus E_n$, such that all subrepresentations $\Psi_{E_j}$ are irreducible. $\Psi$ is the direct sum of irreducible subrepresentations.
$\proof$ If $E$ is not irreducible then there is a non-trivial $\Psi$-invariant subspace $F$; since $F^\perp$ is also $\Psi$-invariant, we get a decomposition of $E$ into $\Psi$-invariant subspaces. Now we simply iterate this procedure and since $E$ is of finite dimension, we wind up with an orthogonal decomposition $E=E_1\oplus\cdots\oplus E_n$, such that all subrepresentations $\Psi_{E_j}$ are irreducible. $\eofproof$
Thus we can find an orthonormal basis of $E$, which comprises orthonormal bases for $E_1,\ldots,E_n$. The matrix of $\Psi(g)$ with respect to such a basis is block diagonal for all $g$. Our next task is to compare all these irreducible subrepresentations, and for this we first have to make precise what it means for two representations to be 'the same'. So let $E_1$, $E_2$ be finite dimensional. A linear isometry from $E_1$ onto $E_2$ is a linear bijection $u:E_1\rar E_2$, such that $$ \forall x,y\in E_1:\quad\la u(x),u(y)\ra=\la x,y\ra~. $$
A linear mapping $u:E_1\rar E_2$ is an isometry if and only if it maps an orthonormal basis onto an orthonormal basis.
Two representations $\Psi_1,\Psi_2$ of a group $G$ in $E_1$ and $E_2$ respectively are said to be equivalent, if there exists an isomorphism $J:E_1\rar E_2$, such that for all $g\in G$ and all $x\in E_1$: $\Psi_2(g)J(x)=J(\Psi_1(g)x)$, i.e.: $\Psi_2(g)=J\Psi_1(g)J^{-1}$.
If $\Psi_1$ and $\Psi_2$ are equivalent, then there exist bases for $E_1$ and $E_2$, such that the matrices of $\Psi_1(g)$ and $\Psi_2(g)$ with respect to these bases coincide for all $g$. Thus up to the choice of the bases these representations are the same! The contragredient representation is by definition equivalent to the dual representation - the map $J$ is given by sending an orthonormal basis to its dual basis. Henceforth the dual representation will also be denoted by $\bar\Psi$.
Let $\Psi,\Phi$ be equivalent irreducible unitary representations in $E$, i.e. $\Phi(g)J=J\Psi(g)$ for some isomorphism $J\in\Hom(E)$. Prove that $J$ is a multiple of an isometry.
Put $R=J^*J$; then $R$ commutes with all $\Psi(g)$, indeed $$ \Psi(g)R =\Psi(g^{-1})^*J^*J =(J\Psi(g^{-1}))^*J =(\Phi(g^{-1})J)^*J =J^*\Phi(g)J =J^*J\Psi(g) =R\Psi(g) $$ and thus by corollary: $R=\l 1_E$ and since $R$ is self-adjoint and positive: $\l > 0$, i.e. $J/\sqrt\l$ is an isometry.
Another procedure for coming up with new representations is tensorization, which we are going to investigate in the upcoming section.

Tensor products of representations

Tensorizing representations is another means to obtain further representations. So let $\Psi_1$ and $\Psi_2$ be two unitary representations of a group $G$ in the Hilbert-spaces $E_1$ and $E_2$ respectively. Then \begin{equation}\label{fineq2a}\tag{TEP1} \Psi(g)\colon=\Psi_1(g)\otimes\Psi_2(g) \end{equation} is another unitary representation of $G$ in the Hilbert-space tensor product $E_1\otimes E_2$. $\Psi$ is called the tensor product of $\Psi_1$ and $\Psi_2$. Clueless? Well, here comes a brief, dirty account on Hilbert-space tensor products, which works perfectly in the real case - cf. wikipedia: Suppose $E,F$ are two (separable) Hilbert-spaces, then $\Hom(E,F)$ will denote the space of all bounded linear operators $A:E\rar F$ and $E\otimes F$ is the space of all Hilbert-Schmidt operators $A:E\rar F$, which is a Hilbert-space with euclidean product: $$ \la A,B\ra\colon =\sum\la Ae_j,Be_j\ra =\sum\la B^*Ae_j,e_j\ra =\tr(B^*A)=\tr(AB^*), $$ where $e_j$ is any orthonormal basis for $E$. This product doesn't depend on the basis, because the trace doesn't depend on the basis (cf. Linear Algebra)! Beware that our definition differs from the usual definition in linear algebra in case one of the spaces is not finite dimensional! Denoting for $x\in E$ and $y\in F$ the operator $v\mapsto\la v,x\ra y$ by $x\otimes y$, we get by definition: $$ \la x\otimes y,u\otimes v\ra =\sum_j\la\la e_j,x\ra y,\la e_j,u\ra v\ra =\sum_j\cl{\la x,e_j\ra}\la u,e_j\ra\la y,v\ra =\cl{\la x,u\ra}\la y,v\ra =\la u,x\ra\la y,v\ra~. $$ Thus, if $f_k$ is any orthonormal basis for $F$, then the vectors $\{e_j\otimes f_k:j,k\}$ form an orthonormal basis for $E\otimes F$; moreover $\Vert x\otimes y\Vert=\Vert x\Vert\Vert y\Vert$ and in the real case: $\la x\otimes y,u\otimes v\ra=\la x,u\ra\la y,v\ra$ but in the complex case: $\la x\otimes y,u\otimes v\ra=\cl{\la x,u\ra}\la y,v\ra$ - there is the snag, because we want to have this relation without the conjugation! Anyway, these are the important relations, so from now on you should forget about the underlying Hilbert-Schmidt operators and just keep in mind that given Hilbert-spaces $E$ and $F$, $E\otimes F$ is another Hilbert-space with the properties above. Also, beware that $x\otimes y\neq y\otimes x$ in general! Here comes another important property: the universal property.
1. The mapping $\otimes:E\times F\rar E\otimes F$, $(x,y)\mapsto x\otimes y$ is a continuous mapping and if $E$ and $F$ are real it's bi-linear. 2. Moreover, if $E,F,G$ are finite dimensional real Hilbert-spaces and $B:E\times F\rar G$ is any bi-linear map, then there is a unique linear map $\wt B:E\otimes F\rar G$ such that for all $x\in E$ and all $y\in F$: $\wt B(x\otimes y)=B(x,y)$ - this is called the universal property of the tensor product.
$\proof$ 2. We simply define $\wt B$ on the basis vectors $e_j\otimes f_k$ by $\wt B(e_j\otimes f_k)\colon=B(e_j,f_k)$ and by linearity of $\wt B$ and bi-linearity of $B$ we get for all $x=\sum x_je_j$ and all $y=\sum y_kf_k$: $$ \wt B(x\otimes y) =\wt B(\sum x_jy_ke_j\otimes f_k) =\sum x_jy_k\wt B(e_j\otimes f_k) =\sum x_jy_kB(e_j,f_k) =B(x,y)~. $$ $\eofproof$
Remark: In the infinite dimensional case we need some kind of continuity of $\wt B$, yet continuity of $B$ is not sufficient. We won't elaborate on that, for we will mostly deal with finite dimensional spaces and thus we won't need it! More important to us: fiddling with the definition one may get bi-linearity of $\otimes:E\times F\rar E\otimes F$ as well as the universal property also for complex finite dimensional vector-spaces. Moreover one may define a complex euclidean product on $E\otimes F$ such that: $\la x\otimes y,u\otimes v\ra=\la x,u\ra\la y,v\ra$. So, if you don't care about existence - an attitude we'd like to adopt - you may simply assume that there exists a Hilbert-space $E\otimes F$ with these four properties: $\otimes$ is bi-linear, the universal property holds, $\la x\otimes y,u\otimes v\ra=\la x,u\ra\la y,v\ra$ and the vectors $e_j\otimes f_k$ form an orthonormal basis.
$E\oplus E$ is isomorphic to $E\otimes\C^2$.
The bi-linear mapping $B:E\times\C^2\rar E\oplus E$, $(x,(\a,\b))\mapsto\a x\oplus\b x$ gives rise to a linear mapping $\wt B:E\otimes\C^2\rar E\oplus E$, which maps the basis vectors $e_j\otimes(1,0)$ and $e_j\otimes(0,1)$ to the basis vectors $e_j\oplus0$ and $0\oplus e_j$.
For the following we will assume all these four properties and forget about the Hilbert-Schmidt stuff, which was just a handy means to establish existence (in the real case, at least).
If $x\otimes y=u\otimes v$, then $u=\l x$ and $v=\l^{-1}y$ for some $\l\in\bK$.
Since $\norm{x\otimes y}=\Vert x\Vert\norm y$, we may assume w.l.o.g. that $\Vert x\Vert=\norm y=1$; decompose $u=\a x+\a^\prime x^\prime$ and $v=\b y+\b^\prime y^\prime$ with some normed vectors $x^\prime\in E$, $y^\prime\in F$ orthogonal to $x$ and $y$, respectively. Then $$ x\otimes y=u\otimes v =\a\b x\otimes y +\a\b^\prime x\otimes y^\prime +\b\a^\prime x^\prime\otimes y +\a^\prime\b^\prime x^\prime\otimes y^\prime $$ and since the vectors $x\otimes y,x\otimes y^\prime,x^\prime\otimes y$ and $x^\prime\otimes y^\prime$ are orthonormal, they must be linearly independent. Hence: $\a\b =1$ and $\a^\prime=\b^\prime=0$.
For finite dimensional spaces $E$ and $F$ the tensor product $E^*\otimes F^*$ is isomorphic to the space $L(E,F)$ of bi-linear mappings $E\times F\rar\bK$. A canonical isomorphism is given by identifying $x^*\otimes y^*$ with the bi-linear mapping $(x,y)\mapsto x^*(x)y^*(y)$.
Define $B:E^*\times F^*\rar L(E,F)$ by $B(x^*,y^*)(x,y)\colon=x^*(x)y^*(y)$; $B$ is bi-linear and thus by the universal property there is exactly one linear mapping $\wt B:E^*\otimes F^*\rar L(E,F)$ such that $\wt B(x^*\otimes y^*)(x,y)=x^*(x)y^*(y)$. Moreover $\wt B$ maps the basis $e_j^*\otimes f_k^*$ onto linearly independent vectors of $L(E,F)$, for $\wt B(e_j^*\otimes f_k^*)(e_l,f_m)=\d_{jl}\d_{km}$ and since both spaces have the same dimension, $\wt B$ is an isomorphism.
Formulate the universal property for the $p$-fold tensor product of a finite dimensional vector-space $E$ and prove that ${\cal T}_p(E)$ can be identified with the $p$-fold tensor product of $E^*$: just construe $x_1^*\otimes\cdots\otimes x_p^*$ as the $p$-linear map: $$ (x_1,\ldots,x_p)\mapsto x_1^*(x_1)\cdots x_p^*(x_p)~. $$
The space ${\cal T}_p(E)$ is called the space of all $p$-fold covariant tensors - or covariant tensors of order $p$. Analogously the space of all $p$-fold contravariant tensors is defined to be the space of all $p$-linear mappings $E^{*p}\rar\bK$.
The representation $\Phi(g)\in\Hom({\cal T}^p(E))$ in the space of $p$-fold contravariant tensors from exam is defined by \begin{eqnarray*} \Phi(g)(x_1\otimes\cdots\otimes x_p)(x_1^*,\ldots,x_p^*) &\colon=&g^*x_1^*(x_1)\cdots g^*x_p^*(x_p)\\ &=&x_1^*(gx_1)\cdots x_p^*(gx_p) =gx_1\otimes\cdots\otimes gx_p(x_1^*,\ldots,x_p^*)~. \end{eqnarray*} Identifying ${\cal T}^p(E)$ with the $p$-fold tensor product of $E$ we get that $\Phi(g):E^{\otimes p}\rar E^{\otimes p}$ is given by $$ \Phi(g)(x_1\otimes\cdots\otimes x_p)=gx_1\otimes\cdots\otimes gx_p~. $$

A note on tensor fields

This subsection can be skipped, it won't be referred to in a subsequent chapter! In the realm of finite dimensional vector-spaces we've found a convenient alternative description of the tensor product $E^*\otimes F^*$: it's the space of all bi-linear mappings $E\times F\rar\bK$ and $x^*\otimes y^*$ is the bi-linear mapping $(x,y)\mapsto x^*(x)y^*(y)$. This approach is commonly used for e.g. covariant tensor fields $T$ of order $p$ on open subsets $M$ of $\R^n$: for vector fields $X_1,\ldots,X_p$ on $M$, $T(X_1,\ldots,X_p)$ is a smooth function on $M$ such that:
  1. $T$ is $p$-linear and in addition
  2. for all $f\in C^\infty(M)$: $T(fX_1,X_2,\ldots,X_p)=T(X_1,fX_2,\ldots,X_p)=\cdots=T(X_1,X_2,\ldots,fX_p)=fT(X_1,\ldots,X_p)$.
Thus a covariant tensor field of e.g. order $2$ operates bi-linearly on pairs of vector fields and it has an additional property: it's $C^\infty(M)$-linear in all its components - that's just the second property restated. Suppose we denote by $(x_1,\ldots,x_n)$ the coordinates of a point $x\in M$, then the projections $(x_1,\ldots,x_n)\mapsto x_j$ are typical examples of smooth functions on $M$; believe it or not, but it's very convenient to denote these projections by the same symbol $x_j$. Now suppose $X$ is a vector field on $M$ with components $\z_1,\ldots,\z_n$; for any smooth function $f\in C^\infty(M)$ the differential $df$ is defined by $$ df(X)=\sum\pa_jf\z_j~; $$ it's a covariant tensor field of order $1$ and thus $dx_j$ satisfies $dx_j(X)=\z_j$ (here $x_j$ stands for the projection and certainly not for the coordinate). Up to notation the function $x\mapsto df(X)(x)$ is just the ordinary derivative of a function $f\in C^\infty(M)$: in analysis this is usually denoted by $Df(x)(X(x))$ and called the derivative of $f$ at $x\in M$ in the direction $X(x)\in\R^n$.
If $S$ and $T$ are tensor fields on $M$ of order $p$ and $q$ respectively, then $S\otimes T$ is the tensor field defined by $$ S\otimes T(X_1,\ldots,X_p,Y_1,\ldots,Y_q)\colon= S(X_1,\ldots,X_p)T(Y_1,\ldots,Y_q) $$ which is a tensor field of order $p+q$.
Prove that every covariant tensor field $\o$ of order $1$ is of the form $$ \o=\sum_{j=1}^n f_j\,dx_j $$ for some smooth functions $f_j\in C^\infty(M)$. Suggested solution.
Prove that every covariant tensor field $T$ of order $2$ is of the form $$ T=\sum_{j,k=1}^n f_{jk}\,dx_j\otimes dx_k $$ for some smooth functions $f_{jk}\in C^\infty(M)$.

Tensor products in QM

In QM the tensor product $E\otimes F$ is the joint space of states of two quantum systems with state space $E$ and $F$ respectively. In bra-ket notation the vector $x\otimes y$ and the corresponding dual vector are usually denoted by $|x,y\ra$ and $\la x,y|$ respectively: suppose we have two spin $1/2$ particles, i.e. two two-dimensional complex Hilbert-spaces $E$ and $F$, then $E\otimes F$ is the state space of a quantum mechanical system, which consists of a couple of spin $1/2$ particles. We will prove in
section that the tensor product of spin $1/2$ representations $\SU(2)\rar\UU(E)$ and $\SU(2)\rar\UU(F)$ is the direct sum of the trivial spin $0$ representation and the spin $1$ representation. To a physicist this means that the system of two spin $1/2$ particles is either a spin $0$ or a spin $1$ particle. This subject will be covered in detail in subsection.

Tensor products of operators

We defined in \eqref{fineq2a} the tensor product of two representations $\Psi_1$ and $\Psi_2$ by the tensor product of the operators $\Psi_1(g)$ and $\Psi_2(g)$, which has not been defined yet - well, strictly speaking that's not true (cf. exam). So suppose $A:E_1\rar F_1$, $B:E_2\rar F_2$ are bounded linear operators, then there is a unique bounded linear operator $A\otimes B:E_1\otimes E_2\rar F_1\otimes F_2$, such that for all $x\in E_1$, $y\in E_2$: $$ A\otimes B(x\otimes y)=Ax\otimes By~. $$ For the sake of simplicity we assume $E_1=F_1=E$ and $E_2=F_2=F$. Suppose $e_j$ and $f_k$ are orthonormal bases of $E$ and $F$ respectively: simply define $$ A\otimes B(e_j\otimes f_k)\colon=Ae_j\otimes Bf_k~. $$ If both $E$ and $F$ are finite dimensional, then we extend $A\otimes B$ linearly and we are thus left with verifying that for all $x\in E$, $y\in F$: $A\otimes B(x\otimes y)=Ax\otimes By$. If $x=\sum a_je_j$ and $y=\sum b_kf_k$, then \begin{eqnarray*} Ax\otimes By &=&(\sum_j a_jAe_j)\otimes(\sum_k b_kBf_k) =\sum_{j,k}a_jb_k Ae_j\otimes Bf_k\\ &=&\sum_{j,k}a_jb_kA\otimes B(e_j\otimes f_k) =A\otimes B\Big(\sum_{j,k}a_jb_ke_j\otimes f_k\Big) =A\otimes B(x\otimes y)~. \end{eqnarray*} In the infinite dimensional case we need to establish continuity, i.e. boundedness. We defer this a little bit, because we actually won't need it. It's more important to us to show that the tensor product of unitary operators is again unitary. We should emphasize that the infinite dimensional case will always come second to us, so you may frankly assume all spaces to be finite dimensional.
If $A\in\Hom(E)$ and $B\in\Hom(F)$, then $(A\otimes B)^*=A^*\otimes B^*$. In particular $A\otimes B$ is self-adjoint if $A$ and $B$ are self-adjoint.
If $E$ and $F$ are finite dimensional then vectors of the form $x\otimes y$ generate the space $E\otimes F$. Thus it's sufficient to verify that $\la Ax,u\ra\la By,v\ra=\la x,A^*u\ra\la y,B^*v\ra$, which is evidently true. If one of the spaces is not of finite dimension, then the space generated by $x\otimes y$, $x\in E$ and $y\in F$, is only dense in $E\otimes F$. That's due to our requirement that $E\otimes F$ be a Hilbert-space. Anyway, density suffices since all operators are bounded (as we will shortly see)!
If $A\in\UU(E)$ and $B\in\UU(F)$, then $A\otimes B\in\UU(E\otimes F)$. Solution by T. Speckhofer.
Suppose $E,F$ are finite dimensional, then the space generated by $\{A\otimes B:A\in\Hom(E), B\in\Hom(F)\}$ coincides with $\Hom(E\otimes F)$ and it's isomorphic to the tensor product: $\Hom(E)\otimes\Hom(F)$. Suggested solution.
Now let's return to the issue of boundedness.
For every finite sequence $x_k$ in $E$ and every orthonormal sequence $f_k$ in $F$: $$ \Vert\sum x_k\otimes f_k\Vert^2=\sum\norm{x_k}^2~. $$
We have $\norm{A\otimes B}\leq\norm A\norm B$.
Indeed for $(a_{jk})\in\Ma(n,\C)$ and $x_k\colon=\sum_j a_{jk}e_j$ we get: \begin{eqnarray*} \Vert(A\otimes1)\sum_{j,k} a_{jk}e_j\otimes f_k\Vert^2 &=&\Vert(A\otimes1)\sum_k x_k\otimes f_k\Vert^2 =\Vert\sum_k Ax_k\otimes f_k\Vert^2\\ &=&\sum\norm{Ax_k}^2 \leq\norm A^2\sum\norm{x_k}^2 =\norm A^2\sum|a_{jk}|^2 \end{eqnarray*} i.e.: $\norm{A\otimes 1}\leq\norm A$ and analogously: $\norm{1\otimes B}\leq\norm B$. Since $A\otimes B=(A\otimes 1)(1\otimes B)$, we are done.
Finally we tackle the problem of positivity.
If $A,B\in\Ma(n,\C)$ are positive definite matrices, then 1. $A^t=\bar A$ is positive definite and 2. $\tr(AB)\in\R_0^+$. 3. If $B$ is strictly positive definite (i.e. all eigen-values of $B$ are strictly positive), then $\tr(AB)=0$ iff $A=0$. Suggested solution.
If $A$ and $B$ are positive (definite), then $A\otimes B$ is positive (definite).
We are going to check this for arbitrary bounded, positive linear operators $A$ and $B$ on Hilbert-spaces $E$ and $F$, respectively. The positivity implies that for all finite sequences $x_1,\ldots,x_n$ in $E$, the matrix $\wt A\colon=(\la Ax_j,x_k\ra)_{j,k}$ is positive definite; indeed for all $u_1,\ldots,u_n\in\C$ put $x\colon=\sum u_jx_j$, then $$ \sum\la Ax_j,x_k\ra u_j\bar u_k =\la A\sum_ju_jx_j,\sum_k u_kx_k\ra =\la Ax,x\ra \geq0 $$ and thus by exam: $$ \la(A\otimes B)\sum x_j\otimes y_j,\sum x_k\otimes y_k\ra =\sum_{j,k}\la Ax_j,x_k\ra\la By_j,y_k\ra =\tr(\wt A\wt B^t)\geq0~. $$ Thus $A\otimes B$ is positive definite on a dense subspace and since $A\otimes B$ is bounded, it's positive definite on $E\otimes F$.

Matrices, eigen-values and eigen-vectors

If $\dim E=m$, $\dim F=n$ and $A=(a_{jk})$, $B=(b_{jk})$ are the matrices with respect to the bases $e_1,\ldots,e_m$ and $f_1,\ldots,f_n$, then the matrix of $A\otimes B$ is given by the block matrix $$ \left(\begin{array}{cccc} a_{11}B&a_{12}B&\cdots&a_{1m}B\\ a_{21}B&a_{22}B&\cdots&a_{2m}B\\ \vdots&\vdots&\ddots&\vdots\\ a_{m1}B&a_{m2}B&\cdots&a_{mm}B \end{array}\right), $$ where the basis is ordered lexicographically: $e_1\otimes f_1$, $e_1\otimes f_2$, $\ldots$, $e_1\otimes f_n$, $e_2\otimes f_1$, $\ldots$, $e_2\otimes f_n$, $\ldots$, $e_m\otimes f_n$. This matrix-construction can be found under the name: Kronecker product, cf. wikipedia. To build the tensor product in sage use something like this ($A$ and $B$ are random gaussian $2$ by $2$ matrices):
import numpy as np
n=2
A=np.random.normal(0,1,size=(n,n))   # two random gaussian n x n matrices
B=np.random.normal(0,1,size=(n,n))
print(np.kron(A,B))                  # their Kronecker (tensor) product, an n^2 x n^2 block matrix
What about the trace and the determinant and more generally what can be said about the eigen-values of $A\otimes B$?
For every matrix $A\in\Ma(n,\C)$ and every $r > 0$ there is a matrix $B=(b_{jk})\in\Ma(n,\C)$ such that $\sum|b_{jk}|^2 < r$ and $A+B$ is diagonalizable, i.e. diagonalizable matrices are dense. Suggested solution.
Assume $E,F$ are finite dimensional, $A\in\Hom(E)$ and $B\in\Hom(F)$. Then $\tr(A\otimes B)=\tr(A)\tr(B)$ and $\det(A\otimes B)=\det(A)^n\det(B)^m$, where $m=\dim(E)$ and $n=\dim(F)$.
$\proof$ Suppose $A$ and $B$ are diagonalizable and let $x_j$ and $y_k$ be bases of eigen-vectors of $A$ and $B$ with eigen-values $a_j$ and $b_k$. Then $A\otimes B$ is diagonalizable with eigen-values $a_jb_k$ and eigen-vectors $x_j\otimes y_k$, indeed $$ A\otimes B(x_j\otimes y_k) =Ax_j\otimes By_k =a_jx_j\otimes b_ky_k =a_jb_kx_j\otimes y_k~. $$ Since diagonalizable operators are dense, we may by continuity of $\tr$ and $\det$ assume that both $A$ and $B$ are diagonalizable with eigen-values $a_1,\ldots,a_m$ and $b_1,\ldots,b_n$ and eigen-vectors $x_1,\ldots,x_m$ and $y_1,\ldots,y_n$. Therefore $\tr(A\otimes B)=\sum_{j,k}a_jb_k=\tr(A)\tr(B)$ and $$ \det(A\otimes B) =(a_1b_1\cdots a_mb_1)(a_1b_2\cdots a_mb_2)\cdots(a_1b_n\cdots a_mb_n) =\det(A)^n\det(B)^m~. $$ $\eofproof$
This kind of argument is mathematically quite unsatisfactory, for it applies to spaces over the complex field and apparently not to arbitrary fields. Anyhow, along the way we've shown that the tensor product of diagonalizable operators is again diagonalizable.
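A quick numerical check of the proposition and of the eigen-values $a_jb_k$ (plain Python/numpy, random complex matrices with $m=3$ and $n=2$):
import numpy as np
m, n = 3, 2
rng = np.random.default_rng(2)
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
K = np.kron(A, B)
assert np.isclose(np.trace(K), np.trace(A) * np.trace(B))
assert np.isclose(np.linalg.det(K), np.linalg.det(A) ** n * np.linalg.det(B) ** m)
# every product a_j*b_k shows up among the eigen-values of A (x) B:
products = (np.linalg.eigvals(A)[:, None] * np.linalg.eigvals(B)).ravel()
assert all(np.min(np.abs(np.linalg.eigvals(K) - p)) < 1e-8 for p in products)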
Suppose $A\in\Hom(F)$ and $B\in\Hom(E)$. Compute the characteristic polynomial of $A\otimes B$. Suggested solution
Prove exam for finite dimensional spaces by diagonalization.
Suppose $A\in\Hom(E)$ and $B\in\Hom(F)$ are diagonalizable with eigen-vectors $x_j$ and $y_k$, respectively. Find a basis of eigen-vectors of $A\otimes1+1\otimes B$ and their corresponding eigen-values.
Define a linear mapping $\tr_1:\Hom(E\otimes F)\rar\Hom(F)$ by $\tr_1(A\otimes B)\colon=\tr(A)B$. It's called the partial trace.
The mapping $(A,B)\mapsto\tr(A)B$ on $\Hom(E)\times\Hom(F)$ is bi-linear and by the universal property of the tensor product there is exactly one linear mapping $\tr_1:\Hom(E)\otimes\Hom(F)\rar\Hom(F)$ such that $\tr_1(A\otimes B)=\tr(A)B$. But by exam $\Hom(E)\otimes\Hom(F)$ is isomorphic to $\Hom(E\otimes F)$ and the isomorphism sends the tensor product $A\otimes B\in\Hom(E)\otimes\Hom(F)$ to the tensor product of operators $A\otimes B\in\Hom(E\otimes F)$.
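In coordinates (with the lexicographically ordered basis used above) the partial trace can be computed by reshaping; here is a sketch in plain Python/numpy, assuming $\dim E=m$ and $\dim F=n$:
import numpy as np
def tr1(M, m, n):                        # partial trace over the first factor E
    return np.einsum('ijik->jk', M.reshape(m, n, m, n))
m, n = 3, 2
rng = np.random.default_rng(3)
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
assert np.allclose(tr1(np.kron(A, B), m, n), np.trace(A) * B)   # tr_1(A (x) B) = tr(A) B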
Compute all eigen-values and all eigen-vectors of $\Psi(C_2)$ given in subsection.
By subsection $\Psi(C_2)$ is the tensor product of the permutation matrix $$ P=\left(\begin{array}{cccc} 0 & 0& 1& 0\\ 0 & 0& 0& 1\\ 1 & 0& 0& 0\\ 0 & 1& 0& 0 \end{array}\right) $$ and a $C_2$ rotation in $\R^3$. A $C_2$ rotation has eigen-values $1,-1,-1$ and $P$ has eigen-values $1,1,-1,-1$; thus $\Psi(C_2)=P\otimes C_2$ has eigen-values: $1,1,-1,-1,-1,-1,1,1,-1,-1,1,1$. The eigen-vectors of $P$ are $x_1=e_1-e_3$, $x_2=e_2-e_4$, $x_3=e_1+e_3$ and $x_4=e_2+e_4$, where $e_1,e_2,e_3,e_4$ is the canonical basis of $\C^4$. By subsection $C_2$ has eigen-vectors $y_1=b_1$, $y_2=b_2$ and $y_3=b_3$, where $b_1,b_2,b_3$ is the canonical basis of $\C^3$. Thus the eigen-vectors of $P\otimes C_2$ are given by $x_j\otimes y_k$, e.g. for $j=k=1$ and by bi-linearity of $\otimes$: $$ x_1\otimes y_1 =(e_1-e_3)\otimes b_1 =e_1\otimes b_1-e_3\otimes b_1~. $$
Compute all eigen-values of $\Psi(C_3)$ given in subsection and their multiplicities.
Compute all eigen-values of $\Psi(S_4)$ given in subsection and their multiplicities.
Compute the tensor product of the site representation (cf. subsection) of $\chem{H_2O}$ with itself.
Compute the matrices $\s_j\otimes\s_j$ for the Pauli spin matrices $\s_1,\s_2,\s_3$.
In general the tensor product of two irreducible representations need not be irreducible, unless one of these representations is one dimensional:
If $\Psi:G\rar\UU(E)$ is irreducible and $\vp:G\rar S^1$ is a homomorphism, then $g\mapsto\vp(g)\Psi(g)$ is an irreducible unitary representation.
Compute the tensor product of the spin $1/2$ representation with its dual.
The spin $1/2$ representation is just the identity $\SU(2)\rar\SU(2)$. Hence we simply have to compute the tensor product of the matrices $U=(u_{jk})$ and $\bar U=(\bar u_{jk})$ for $U\in\SU(2)$: $$ \left(\begin{array}{cc} u_{11}\bar U&u_{12}\bar U\\ u_{21}\bar U&u_{22}\bar U \end{array}\right) =\left(\begin{array}{cccc} u_{11}\bar u_{11}&u_{11}\bar u_{12}&u_{12}\bar u_{11}&u_{12}\bar u_{12}\\ u_{11}\bar u_{21}&u_{11}\bar u_{22}&u_{12}\bar u_{21}&u_{12}\bar u_{22}\\ u_{21}\bar u_{11}&u_{21}\bar u_{12}&u_{22}\bar u_{11}&u_{22}\bar u_{12}\\ u_{21}\bar u_{21}&u_{21}\bar u_{22}&u_{22}\bar u_{21}&u_{22}\bar u_{22} \end{array}\right)~. $$ Since $U\in\SU(2)$ we have $u_{21}=-\bar u_{12}$ and $u_{22}=\bar u_{11}$, so this equals $$ \left(\begin{array}{cccc} |u_{11}|^2&u_{11}\bar u_{12}&\bar u_{11}u_{12}&|u_{12}|^2\\ -u_{11}u_{12}&u_{11}^2&-u_{12}^2&u_{11}u_{12}\\ -\bar u_{11}\bar u_{12}&-\bar u_{12}^2&\bar u_{11}^2&\bar u_{11}\bar u_{12}\\ |u_{12}|^2&-u_{11}\bar u_{12}&-\bar u_{11}u_{12}&|u_{11}|^2 \end{array}\right), $$ which evidently shows that the entries of a representation need not depend linearly on the entries of $U$.
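As a consistency check (plain Python/numpy, not from the text; the numbers $a,b$ below are an arbitrary choice) one can verify that this tensor product always fixes the vector $e_1\otimes e_1+e_2\otimes e_2$, i.e. it contains the trivial representation as a subrepresentation:
import numpy as np
a, b = 0.6 + 0.3j, 0.2 - 0.7j
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
a, b = a / norm, b / norm
U = np.array([[a, b], [-np.conj(b), np.conj(a)]])   # an element of SU(2)
M = np.kron(U, U.conj())                            # the tensor product computed above
v = np.array([1, 0, 0, 1])                          # e1 (x) e1 + e2 (x) e2
assert np.allclose(M @ v, v)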
Verify that the representation discussed in subsection is the tensor product of the site representation of the molecule with the standard representations of the symmetry group of the molecule.
Prove that the representation of $C_{2v}$ discussed in subsection is not equivalent to the tensor product of the site representation of $\chem{H_2O}$ with another representations of $C_{2v}$.
Historically the following example was the prototype of the abstract notion of tensor products:
Suppose $X$ and $Y$ are finite sets and $\ell_2(X)$ the space of all functions $f:X\rar\C$ with euclidean product $\la f,g\ra=\sum_{x\in X}f(x)\cl{g(x)}$. Then $\ell_2(X)\otimes \ell_2(Y)$ is isometrically isomorphic to $\ell_2(X\times Y)$. The isomorphism is given by $f\otimes g\mapsto((x,y)\mapsto f(x)g(y))$; therefore the function $(x,y)\mapsto f(x)g(y)$ is also denoted by $f\otimes g$.
More generally:
Suppose $X$ and $Y$ are measure spaces with measures $\mu$ and $\nu$ respectively. Then $L_2(X,\mu)\otimes L_2(Y,\nu)$ is isometrically isomorphic to $L_2(X\times Y,\mu\otimes\nu)$, where $\mu\otimes\nu$ denotes the product measure defined by $\mu\otimes\nu(A\times B)\colon=\mu(A)\nu(B)$. The isomorphism is given by $f\otimes g\mapsto((x,y)\mapsto f(x)g(y))$; therefore the function $(x,y)\mapsto f(x)g(y)$ is also denoted by $f\otimes g$.
Recall that we started out by defining the Hilbert-space tensor product as the space of Hilbert-Schmidt operators. Thus in functional analysis the previous example is usually stated as follows: the space of Hilbert-Schmidt operators $A:L_2(X,\mu)\rar L_2(Y,\nu)$ is isometrically isomorphic (at least in the real case) to $L_2(X\times Y,\mu\otimes\nu)$ and this isomorphism is given by sending $K\in L_2(X\times Y,\mu\otimes\nu)$ to the integral operator $A_K:L_2(X,\mu)\rar L_2(Y,\nu)$, $$ A_Kf(y)=\int_XK(x,y)f(x)\,\mu(dx)~. $$

Tensor products in PDE

This subsection can be skipped, it won't be referred to in a subsequent chapter!
If $D$ and $H$ are dense subspaces of $E$ and $F$ respectively, then $$ D\otimes H\colon=\{x_1\otimes y_1+\cdots+x_n\otimes y_n: x_k\in D,y_k\in H,n\in\N\} $$ is dense in $E\otimes F$.
That's a convenient definition for handling tensor products of unbounded operators; we'll do that informally, without delving into that matter unnecessarily. Suppose $A$ and $B$ are linear partial differential operators on open subsets $U$ and $V$ of $\R^m$ and $\R^n$, respectively, i.e. $$ Af(x)=\sum_{\a}a_\a(x)\pa^\a f(x) \quad\mbox{and}\quad Bg(y)=\sum_{\b}b_\b(y)\pa^\b g(y) $$ On the subspace $C^\infty(U)\otimes C^\infty(V)$ of $L_2(U)\otimes L_2(V)$ (cf. exam) the operator $A\otimes B$ is defined by $(A\otimes B)(f\otimes g)=Af\otimes Bg$. Identifying $f\otimes g$ with the function $(x,y)\mapsto f(x)g(y)$ on $U\times V$ we get $$ (A\otimes B)f\otimes g(x,y)=\sum_{\a,\b} a_\a(x)b_\b(y)\pa_x^\a f(x)\pa_y^\b g(y)~. $$ Thus it's natural to define for $u\in C^\infty(U\times V)$: $$ (A\otimes B)u(x,y)=\sum_{\a,\b} a_\a(x)b_\b(y)\pa_x^\a\pa_y^\b u(x,y)~. $$ Analogously the operator $A\otimes 1+1\otimes B$ can be identified with the linear partial differential operator on $U\times V$, given by $$ (A\otimes 1+1\otimes B)u(x,y) =\sum_{\a}a_\a(x)\pa_x^\a u(x,y)+\sum_{\b}b_\b(y)\pa_y^\b u(x,y)~. $$
Write the Laplacian on $\R^n$ as a sum of tensor products of the Laplacian on $\R$ and the identity.
Suppose $(A\otimes1+C\otimes B)x\otimes y=0$, then $y$ is an eigen-vector of $B$ with some eigen-value $\l$ and $(A+\l C)x=0$. The Laplacian in spherical coordinates is an operator of this type! Solution by T. Speckhofer.
Suppose $E$ is a Hilbert-space, then $L_2(X,\mu)\otimes E$ is isometrically isomorphic to the Hilbert-space $L_2(X,\mu,E)$ of all square-integrable functions $F:X\rar E$ with norm $(\int\norm{F(x)}^2\,\mu(dx))^{1/2}$. The isomorphism is given by $f\otimes u\mapsto(x\mapsto f(x)u)$.
Identifying $L_2(\R^3)\otimes\C^2$ with $L_2(\R^3,\C^2)$ compute the tensor product $D\colon=\pa_1\otimes\s_1+\pa_2\otimes\s_2+\pa_3\otimes\s_3$, where $\s_j$ are the Pauli spin matrices.
Let $e_1,e_2$ denote the canonical basis of $\C^2$, then any vector in $C^\infty(\R^3)\otimes\C^2$ is given by $f\otimes e_1+g\otimes e_2$ and we identify this element with the function $x\mapsto(f(x),g(x))$ from $\R^3$ in $\C^2$. Since $\pa_j\otimes\s_j$ maps $f\otimes e_1+g\otimes e_2$ to $\pa_jf\otimes\s_j(e_1)+\pa_jg\otimes\s_j(e_2)$, we get for e.g. $\pa_1\otimes\s_1$ as an operator acting on $C^\infty(\R^3,\C^2)$: $$ \pa_1\otimes\s_1(f,g)=(\pa_1g,\pa_1f)=\pa_1(g,f)~. $$ Analogously: $\pa_2\otimes\s_2(f,g)=\pa_2(-ig,if)$ and $\pa_3\otimes\s_3(f,g)=\pa_3(f,-g)$. Thus we first apply $\s_j$ formally to the pair $(f,g)$ as if it were a vector in $\C^2$ and then apply $\pa_j$ on both components, e.g. \begin{eqnarray*} \pa_1\otimes\s_1&:&(f,g)\rar(g,f)\rar(\pa_1g,\pa_1f),\\ \pa_2\otimes\s_2&:&(f,g)\rar(-ig,if)\rar(-i\pa_2g,i\pa_2f),\\ \pa_3\otimes\s_3&:&(f,g)\rar(f,-g)\rar(\pa_3f,-\pa_3g)~. \end{eqnarray*}
Obviously $D^2=\sum_{j,k}\pa_j\pa_k\otimes\s_j\s_k$; conclude by the properties of the spin matrices that this equals $\D\otimes1$. This is Dirac's wonderful idea: a linear partial differential operator that essentially squares to the Laplacian.
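The following lines are a quick sanity check in sage (just a sketch, not needed for the argument): they verify the anticommutation relations $\s_j\s_k+\s_k\s_j=2\d_{jk}$, which are exactly what is needed to conclude that $D^2=\D\otimes1$:
s1=matrix(CC,[[0,1],[1,0]])
s2=matrix(CC,[[0,-I],[I,0]])
s3=matrix(CC,[[1,0],[0,-1]])
S=[s1,s2,s3]; one=identity_matrix(CC,2)
print(all(S[j]*S[k]+S[k]*S[j]==(2 if j==k else 0)*one for j in range(3) for k in range(3)))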
$D$ is called a Dirac operator and it operates on pairs of complex valued functions, sometimes called spinors, thus an equation such as $D\psi=f$, $f\in C^\infty(\R^3,\C^2)$, is actually a system of two linear PDEs for $\psi=(\psi_1,\psi_2):\R^3\rar\C^2$.
If $D\psi=0$, then $\D\psi_1=\D\psi_2=0$, i.e. both $\psi_1$ and $\psi_2$ are harmonic! More generally, if $D\psi=\l\psi$, then $\D\psi_1=\l^2\psi_1$ and $\D\psi_2=\l^2\psi_2$.
Prove that $D$ is skew-symmetric on the domain $C_c^\infty(\R^3,\C^2)\sbe L_2(\R^3,\C^2)$.
This follows from $\pa_j^*=-\pa_j$, $\s_j^*=\s_j$ and $(\pa_j\otimes\s_j)^*=\pa_j^*\otimes\s_j^*$.

Characters

Let $\Psi:G\rar\UU(E)$ be an irreducible finite dimensional unitary representation, then the function $\chi:G\rar\C$, $\chi(g)\colon=\tr\Psi(g)$, is called a character of $G$. The set of all characters of $G$ is denoted by $\wh G$ and is called the dual of $G$.
If $\Psi$ is any finite dimensional unitary representation in $E$, then by proposition $\chi\colon=\tr\Psi$ is the sum of characters, moreover: $\chi(e)=\dim E$, for $\Psi(e)=1_E$ and thus $\tr 1_E=\dim E$. If $\Phi$ is equivalent to $\Psi$, then $\tr\Phi=\tr\Psi$. We will prove later (cf. corollary) that also the converse is true, i.e. two irreducible unitary representations $\Psi_1:G\rar\UU(E_1)$ and $\Psi_2:G\rar\UU(E_2)$ are equivalent iff for all $g\in G$: $\tr\Psi_1(g)=\tr\Psi_2(g)$.
If $\chi$ is a character, then so is $\bar\chi$, because $\bar\chi$ is the trace of the contragredient (or dual) representation. Finally, if $\psi$ and $\vp$ are two characters, then their pointwise product $\psi\vp$ is the trace of the tensor product of the two representations; since the trace of any representation is a linear combination of characters with non-negative integer coefficients - such animals are called compound characters - the product of any two characters is a compound character.
If $\Psi_1$ and $\Psi_2$ are representations of $G$ and $H$ respectively, then $\Psi((g,h))\colon=\Psi_1(g)\otimes\Psi_2(h)$ is a representation of $G\times H$.
We will see in a while that all irreducible representations of $G\times H$ are of this form (cf. e.g. exam). Hence the characters of the product of two groups $G$ and $H$ are the tensor products of the characters (remember - cf. exam - for functions $f:X\rar\C$ and $g:Y\rar\C$ we have $f\otimes g:X\times Y\rar\C$: $f\otimes g(x,y)\colon=f(x)g(y)$).

Conjugacy classes

Let $g$ be any element in the group $G$ and put $C(g)\colon=\{hgh^{-1}: h\in G\}$ - this is called the conjugacy class of $g$. We say that $g,g^\prime\in G$ are conjugate, if $g^\prime\in C(g)$.
The conjugacy classes in $G\times H$ are the products of conjugacy classes in the factors.
1. Any normal subgroup $H$ of a group $G$ is the union of conjugacy classes. 2. If $H$ is a subgroup of $G$ then for all $g\in G$ the set $gHg^{-1}$ is also a subgroup of $G$, which is obviously isomorphic to $H$.
To get a list of subgroups containing just one representative from each set $\{gHg^{-1}:g\in G\}$, together with some additional info about these subgroups, you may execute the following commands in sage:
G=AlternatingGroup(4)
SubGroups=G.conjugacy_classes_subgroups()
for Group in SubGroups:
    nor=Group.is_normal()
    abe=Group.is_abelian()
    sim=Group.is_simple()
    print(Group,Group.order(),nor,abe,sim)
This will give you the output:
(Subgroup generated by [()], 1, True, True, False)
(Subgroup generated by [(1,2)(3,4)], 2, False, True, True)
(Subgroup generated by [(2,4,3)], 3, False, True, True)
(Subgroup generated by [(1,2)(3,4),(1,3)(2,4)], 4, True, True, False)
(Subgroup generated by [(2,4,3),(1,2)(3,4), (1,3)(2,4)], 12, True, False, False)
Note that permutations are written as compositions of disjoint cycles, i.e. $()$ denotes the identity, $(1,2)(3,4)$ denotes the permutation $(2143)$, $(2,4,3)$ denotes the permutation $(1423)$. Quotient and product groups can also be created easily: Take the alternating group $G=A(4)$, two elements $g_1$ and $g_2$ of this group and the subgroup $H$ generated by these two elements, check if $H$ is normal and construct the quotient group $G/H$; check if $G/H$ is isomorphic to $\Z_3$; finally create the product group $\Z_3\times\Z_3$:
G=AlternatingGroup(4)
g1=G('(1,2)(3,4)')
g2=G('(1,3)(2,4)')
H=G.subgroup([g1,g2])
H.is_normal()
GqH=G.quotient(H)
GqH.is_isomorphic(CyclicPermutationGroup(3))
K=direct_product_permgroups([CyclicPermutationGroup(3),CyclicPermutationGroup(3)])
Now suppose we have a character $\chi\in\wh G$, some $g\in G$ and $g^\prime\in C(g)$, say $g^\prime=h^{-1}gh$ for some $h\in G$, then $$ \chi(g^\prime) =\tr\Psi(h^{-1}gh) =\tr(\Psi(h^{-1})\Psi(g)\Psi(h)) =\tr\Psi(g) =\chi(g), $$ i.e. characters are constant on conjugacy classes - functions $f:G\rar\C$ with this property are called class functions (cf. proposition). We will in fact see that the number of characters, which by definition equals the number of pairwise inequivalent irreducible representations, equals the number of conjugacy classes (cf. theorem). The following is a standard result in group theory - compare Lagrange's theorem, which states that the order $|H|$ of a subgroup $H$ of a group $G$ divides the order of $G$.
For any finite group $G$ and any $g\in G$ the number $|C(g)|$ divides $|G|$ and we have: $|G|=\sum|C(g_j)|$, where $C(g_1),\ldots,C(g_N)$ are the disjoint conjugacy classes of $G$.
$\proof$ For $g\in G$ put $Z(g)\colon=\{h\in G:gh=hg\}$ - the centralizer of $g$; this is indeed a subgroup of $G$. The mapping $u:G\rar C(g)$, $h\mapsto hgh^{-1}$ is by definition of $C(g)$ onto and $u(h_1)=u(h_2)$ iff $h_2^{-1}h_1gh_1^{-1}h_2=g$, i.e. iff $h_1^{-1}h_2\in Z(g)$. It follows that: $|G|=|Z(g)||C(g)|$. $\eofproof$
To get the order of all centralizers in sage:
G=SymmetricGroup(5)
Reps=G.conjugacy_classes_representatives()
Sizes=[]
for g in Reps:
    Sizes.append(G.centralizer(g).order())
print(Sizes)
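The following lines (a sketch along the same lines, assuming the method cycle_type() is available for permutations) check the identity $|G|=|Z(g)||C(g)|$ and the class equation for $S(5)$ in sage:
G=SymmetricGroup(5)
for g in G.conjugacy_classes_representatives():
    cg=G.conjugacy_class(g)
    print(g.cycle_type(), G.centralizer(g).order()*len(cg)==G.order())
print(sum(len(G.conjugacy_class(g)) for g in G.conjugacy_classes_representatives())==G.order())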
Proposition can be generalized in the following sense:
Suppose $G$ is a finite group, which acts on a finite set $X$. Then the cardinality $|Gx|$ of the orbit $Gx$ of any point $x\in X$ divides the cardinality of $G$. More precisely: $|G|=|Gx||S(x)|$, where $S(x)\colon=\{g\in G:\,gx=x\}$ denotes the stabilizer of $x$.
This can be used to determine all finite subgroups $G$ of $\SO(3)$: Remember from section that every $g\in\SO(3)\sm\{E\}$ has a rotation axis and therefore its action on $S^2$ fixes a pair of antipodal points of $S^2$. Let $X$ be the set of points that come out as fixed points for some $g\in G\sm\{E\}$. Then $G\cdot X\sbe X$, for if $x\in X$, $g_xx=x$ and $g\in G$, then $(gg_xg^{-1})gx=gx$, i.e. $gx$ is a fixed point of $gg_xg^{-1}\in G$ - this also shows that $S(gx)=gS(x)g^{-1}$. The essential observation is that $S(x)=S(-x)$ and for $y\neq\pm x$: $(S(x)\sm\{E\})\cap(S(y)\sm\{E\})=\emptyset$. Hence, counting the pairs $(g,x)$ with $g\in G\sm\{E\}$ and $gx=x$ in two ways: $$ 2(|G|-1)=\sum_{x\in X}(|S(x)|-1)~. $$ Now for all $g\in G$: $S(gx)=gS(x)g^{-1}$ and thus $|S(gx)|=|S(x)|$; therefore the right hand side equals $$ \sum_{\wh x\in X/G}(|S(x)|-1)|Gx|~. $$ Finally by means of exam: $|S(x)|=|G|/|Gx|$ and by division by $|G|$ we obtain: $$ 2-\frac2{|G|} =\sum_{\wh x\in X/G}\Big(1-\frac{|Gx|}{|G|}\Big) =\sum_{\wh x\in X/G}\Big(1-\frac{1}{|S(x)|}\Big)~. $$ As for the solutions to this equation cf. wikipedia. It's probably a bit more obvious to let $\SO(3)$ and thus $G$ operate on the projective space $P^2(\R)$, because the latter is the space of all rotation axes and there is no double-counting.
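If you want to see the solutions of this equation emerge, you may run the following brute force search in sage (just a sketch; the bound $|G|\leq60$ and the restriction to two or three orbits are assumptions made for the search):
from itertools import combinations_with_replacement
for N in range(2,61):
    divisors=[d for d in range(2,N+1) if N%d==0]
    for k in (2,3):
        for ns in combinations_with_replacement(divisors,k):
            if sum(1-QQ(1)/n for n in ns)==2-QQ(2)/N:
                print(N,ns)
Besides the cyclic series $(N,N)$ and the dihedral series $(2,2,N/2)$ it should only return the triples $(2,3,3)$, $(2,3,4)$ and $(2,3,5)$ for $|G|=12,24,60$.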
Suppose $G$ is a finite group of order $p^m$ for some prime number $p$. If $G$ operates on a finite set $X$ and $X_0$ denotes the set of points fixed by $G$, i.e. $X_0=\{x\in X:\forall g\in G:\,gx=x\}$, then $|X|=|X_0|\modul(p)$.
The orbit $Gx$ of $x\in X$ contains exactly one point if and only if $x\in X_0$. Hence by the previous exam $X$ is the disjoint union of $X_0$ and orbits $Gx_1,\ldots,Gx_n$, whose cardinalities are divisible by $p$.
Let $G$ and $X$ be as in exam. Let $p$ be a prime number such that $p||G|$. Compute the set $X_0$ and conclude that $G$ contains an element of order $p$. This is known as Cauchy's Theorem. Suggested solution.

Conjugacy classes of symmetry groups

Since every symmetry group $G$ is a subgroup of $\OO(3)$, all elements of a conjugacy class have the same type of symmetry. The converse is not true in general: two symmetries in $G$ may have the same type (i.e. they may be conjugate in $\OO(3)$) but they need not be conjugate in $G$! Most computer algebra programs realize finite groups as subgroups of some $S(n)$ (cf. Cayley's Theorem). To get a list of representatives of all conjugacy classes of e.g. $C_{6v}$ in sage enter
c6v=DihedralGroup(6)
Reps=c6v.conjugacy_classes_representatives()
for g in Reps:
    cg=c6v.conjugacy_class(g)
    print(g.domain(),len(cg))
This gives you representatives and the cardinalities of the conjugacy classes: \begin{array}{lll} ((123456), 1),&((165432), 3),&((216543), 3),\\ ((234561), 2),&((345612), 2),&((456123), 1) \end{array}
In $\UU(n)$ every unitary matrix $U$ is conjugate to a diagonal matrix of the form $diag\{e^{i\o_1},\ldots,e^{i\o_n}\}$ for some real numbers $\o_1,\ldots,\o_n$.
By definition two matrices $A,B\in\Gl(n,\C)$ are conjugate if they are similar. Suppose $U,V$ are similar unitary $n$ by $n$ matrices. Are $U$ and $V$ conjugate in $\UU(n)$? Cf. exam, Suggested solution
1. Suppose $A,B\in\SU(n)$ are conjugate in $\UU(n)$. Prove that $A$ and $B$ are conjugate in $\SU(n)$. 2. Suppose $A,B\in\SO(n)$ are conjugate in $\OO(n)$. Are $A$ and $B$ conjugate in $\SO(n)$?

The characters of finite groups

Suppose we have a finite group, which has exactly $N$ conjugacy classes. Pick a complete set of representatives $g_1=e,g_2,\ldots,g_{N}$ and let $c_1=1,c_2,\ldots,c_{N}$ be the number of elements in the corresponding class, i.e. $c_j$ is the cardinality $|C(g_j)|$ of $C(g_j)$. The table of characters of $G$ will be written as a table with $N+1$ columns and $N+1$ rows: the symbol in the first row and first column is just a symbol for the group. In the remaining columns of the first row we write $1e,c_2g_2,\ldots,c_{N}g_{N}$, indicating the conjugacy class and its cardinality. In the remaining rows of the first column we place symbols for the characters, i.e. $\chi_1,\chi_2,\ldots,\chi_{N}$. Finally for $j,k\in\{1,\ldots,N\}$ we place in row $j+1$ and column $k+1$ the value $\chi_j(g_k)$ of the character $\chi_j$ on the conjugacy class $C(g_k)$: $$ \begin{array}{c|cccc} G&1e&c_2g_2&\ldots&c_Ng_N\\ \hline \chi_1&\chi_1(e)&\chi_1(g_2)&\ldots&\chi_1(g_N)\\ \chi_2&\chi_2(e)&\chi_2(g_2)&\ldots&\chi_2(g_N)\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \chi_N&\chi_N(e)&\chi_N(g_2)&\ldots&\chi_N(g_N) \end{array} $$ For the time being we do not explain how to produce the character table of a given group; we simply fall back to computer algebra systems. It's here that I'd suggest familiarizing yourself with the relevant commands of the computer algebra system of your choice! The actual construction of characters will be explicated in section. Here we will anticipate a salient feature of this table, which is orthogonality: the columns form an orthogonal basis for $\C^N$ with the canonical complex euclidean product - cf. convolution. Also the rows form an orthonormal set in $\C^N$ with respect to a different complex euclidean product, which will be elucidated in section. However, this is not a deep result, it's just a consequence of the following
Suppose $u_1,\ldots,u_n$ are pairwise orthogonal vectors in $\C^n$ such that $\norm{u_k}=d_k$. Let $U=(u_{jk})$ be the matrix $u_{jk}=\la u_k,e_j\ra$ and $D$ the diagonal matrix $diag\{d_1,\ldots,d_n\}$. Verify that $UD^{-1}$ is unitary and thus both its rows and its columns form orthonormal bases for $\C^n$.
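A concrete instance in sage (a sketch with rational data, so that everything is exact): the columns $u_1=(3,4,0)$, $u_2=(-4,3,0)$, $u_3=(0,0,2)$ are pairwise orthogonal with norms $5,5,2$, and dividing by these norms indeed yields an orthogonal matrix:
u1=vector(QQ,[3,4,0]); u2=vector(QQ,[-4,3,0]); u3=vector(QQ,[0,0,2])
U=column_matrix([u1,u2,u3])
D=diagonal_matrix(QQ,[5,5,2])
V=U*D^(-1)
print(V.transpose()*V==identity_matrix(QQ,3), V*V.transpose()==identity_matrix(QQ,3))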

The characters of $C_{2v}$

$C_{2v}$ is isomorphic to $\Z_2^2$ and its character table is given by $$ \begin{array}{c|rrrr} C_{2v}&1E&1C_2&1\s_1&1\s_2\\ \hline \chi_1&1&1&1&1\\ \chi_2&1&1&-1&-1\\ \chi_3&1&-1&1&-1\\ \chi_4&1&-1&-1&1 \end{array} $$
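You may reproduce this table in sage (a quick sketch; the ordering of rows and columns may differ from the table above):
K=direct_product_permgroups([CyclicPermutationGroup(2),CyclicPermutationGroup(2)])
print(K.character_table())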

The characters of $C_{3v}$

Let us return to the site representation of the ammonia molecule $\chem{NH_3}$: The irreducible representations $\Psi_1(g)\colon=\Psi(g)|[e_0]$ and $\Psi_2(g)\colon=\Psi(g)|[x_1]$ both feature the same character: the trivial character $\chi_1(g)=1$. The character $\chi_3$ of the irreducible representation $\Psi_3(g)\colon=\Psi(g)|[x_2,x_3]$ is given by $\chi_3(C_3)=-1$ and $\chi_3(\s_1)=0$. The conjugacy classes of the group $C_{3v}$ are $C(E)=\{E\}$, $C(C_3)=\{C_3,C_3^*\}$ and $C(\s_1)=\{\s_1,\s_2,\s_3\}$. A complete set of representatives is $E,C_3$ and $\s_1$ and the number of elements in the corresponding classes is $1,2$ and $3$. The table of characters looks like this: $$ \begin{array}{c|rrr} C_{3v}&1E&2C_3&3\s_1\\ \hline \chi_1&1&1&1\\ \chi_2&1&1&-1\\ \chi_3&2&-1&0 \end{array} $$ For our site representation $\Psi$ of ammonia we have: $\tr\Psi=2\chi_1+\chi_3$. This means that $\Psi$ is the sum of three irreducible subrepresentations; two of them are equivalent and feature the character $\chi_1$ and the character of the third is $\chi_3$ - this will be explicated in detail in section.
Compute all nine characters of $C_{3v}^2$, cf. exam

The characters of $T_d$

Execute in sage the subsequent lines
Td=SymmetricGroup(4)
Reps=Td.conjugacy_classes_representatives()
for g in Reps:
    cg=Td.conjugacy_class(g)
    print(g.domain(),len(cg))
Td.character_table()
and you will find that $T_d$ has $5$ conjugacy classes of respective size $1,8,3,6,6$ and the table of characters is given by: $$ \begin{array}{c|rrrrr} T_d&1E&8C_3&3C_2&6S_4&6\s\\ \hline \chi_1&1&1&1&1&1\\ \chi_2&1&1&1&-1&-1\\ \chi_3&2&-1&2&0&0\\ \chi_4&3&0&-1&1&-1\\ \chi_5&3&0&-1&-1&1 \end{array} $$ $\chi_1$ is the character of the trivial representation, $\chi_2$ the character of the sign representation, $\chi_5$ the character of the standard representation and $\chi_4$ is the character of the standard representation multiplied by the sign character, which is a character by exam.
Remark: sage (and most likely any other program) doesn't give you the types of the representatives but the permutations. Thus you have to find the types by yourself!

The characters of $O_h$

This group has $10$ conjugacy classes, representatives of which are the identity $E$, the inversion $I$, the following matrices with positive determinant: $$ C_3=\left(\begin{array}{ccc} 0&0&1\\ 1&0&0\\ 0&1&0 \end{array}\right), C_2=\left(\begin{array}{ccc} -1&0&0\\ 0&0&1\\ 0&1&0 \end{array}\right), C_4=\left(\begin{array}{ccc} 1&0&0\\ 0&0&-1\\ 0&1&0 \end{array}\right), C_4^2=\left(\begin{array}{ccc} 1&0&0\\ 0&-1&0\\ 0&0&-1 \end{array}\right) $$ and the following matrices with negative determinant: $$ S_4=\left(\begin{array}{ccc} -1&0&0\\ 0&0&-1\\ 0&1&0 \end{array}\right), S_6=\left(\begin{array}{ccc} 0&0&-1\\ 1&0&0\\ 0&1&0 \end{array}\right), \s_1=\left(\begin{array}{ccc} 1&0&0\\ 0&1&0\\ 0&0&-1 \end{array}\right), \s_2=\left(\begin{array}{ccc} 1&0&0\\ 0&0&1\\ 0&1&0 \end{array}\right)~. $$ The table of characters looks as follows: $$ \begin{array}{c|rrrrrrrrrr} O_h&1E&8C_3&6C_2&6C_4&3C_4^2&1I&6S_4&8S_6&3\s_1&6\s_2\\ \hline \chi_1 & 1& 1& 1& 1& 1& 1& 1& 1& 1& 1\\ \chi_2 & 1& 1&-1&-1& 1& 1&-1& 1& 1&-1\\ \chi_3 & 1& 1& 1& 1& 1&-1&-1&-1&-1&-1\\ \chi_4 & 1& 1&-1&-1& 1&-1& 1&-1&-1& 1\\ \chi_5 & 2&-1& 0& 0& 2& 2& 0&-1& 2& 0\\ \chi_6 & 2&-1& 0& 0& 2&-2& 0& 1&-2& 0\\ \chi_7 & 3& 0&-1& 1&-1& 3& 1& 0&-1&-1\\ \chi_8 & 3& 0& 1&-1&-1& 3&-1& 0&-1& 1\\ \chi_9 & 3& 0&-1& 1&-1&-3&-1& 0& 1& 1\\ \chi_{10}& 3& 0& 1&-1&-1&-3& 1& 0& 1&-1 \end{array} $$ $\chi_1$ is the character of the trivial representation, $\chi_3$ the character of the sign representation, $\chi_9$ the character of the standard representation and $\chi_8$ the character of the standard representation multiplied by sign, which is a character by exam. Execute the following commands in sage
Oh=PermutationGroup([(2,3,4,1),(6,4,5,2),(6,3,5,1),(3,1)])
Reps=Oh.conjugacy_classes_representatives()
for g in Reps:
    cg=Oh.conjugacy_class(g)
    print(g.domain(),len(cg))
print(Oh.character_table())
and you will get representatives $(123456)$, $(123465)$, $(143265)$, $(153624)$, $(153642)$, $(214365)$, $(234165)$, $(254613)$, $(254631)$, $(341265)$ and sizes $1,3,3,6,6,6,6,8,8,1$ of conjugacy classes as well as the table of characters of $O_h$ - of course, the order of conjugacy classes and the order of characters may differ from the orders in the table above!

The characters of $A(5)$

The following list of sage commands:
Al=AlternatingGroup(5)
Reps=Al.conjugacy_classes_representatives()
for g in Reps:
    cg=Al.conjugacy_class(g)
    print(g.domain(),len(cg))
print(Al.character_table())
gives representatives of the conjugacy classes: $E\colon=(12345)$, $A\colon=(21435)$, $B\colon=(23145)$, $C\colon=(23451)$, $D\colon=(23514)$, their sizes: $1,15,20,12,12$ and the table of characters: $$ \begin{array}{c|rrrcc} A(5)&1E&15A&20B&12C&12D\\ \hline \chi_1&1 & 1 & 1 & 1 & 1 \\ \chi_2&3 & -1 & 0 & \zeta_{5}^{3} + \zeta_{5}^{2} + 1 & -\zeta_{5}^{3} - \zeta_{5}^{2} \\ \chi_3&3 & -1 & 0 & -\zeta_{5}^{3} - \zeta_{5}^{2} & \zeta_{5}^{3} + \zeta_{5}^{2} + 1 \\ \chi_4&4 & 0 & 1 & -1 & -1 \\ \chi_5&5 & 1 & -1 & 0 & 0 \end{array} $$ where $\zeta_5\colon=e^{2\pi i/5}$. For additional character tables important in chemistry cf. wikipedia. Of course, all these groups can be handled by sage, gap, etc.

The characters of $C_{nv}$, $n\geq4$

We have $C_{nv}=\{\s^jC_n^k:j=0,1,k=0,\ldots,n-1\}$ with rotations $C_n^0=E,\ldots,C_n^{n-1}$ and reflections $\s C_n^0,\ldots,\s C_n^{n-1}$; since: $C_n^k\s=\s C_n^{n-k}$, we get the following conjugacy classes: $$ C(E)=\{E\}, C(C_n^k)=\{C_n^k,C_n^{n-k}\}, C(\s C_n^k)=\{\s C_n^{k-2l}:l\in\N_0\}~. $$ If $n$ is even, then we get $1+[n/2]+2=(n+6)/2$ conjugacy classes with $1+(2[n/2]-1)+2[n/2]=2n$ elements and if $n$ is odd we have $1+[n/2]+1=(n+3)/2$ conjugacy classes with $1+2[n/2]+n=2n$ elements. In any case we have found all conjugacy classes and a complete list of representatives thereof.
For $n=5$ the conjugacy classes are: $\{E\},\{C,C^4\},\{C^2,C^3\},\{\s,\s C^3,\s C^1,\s C^4,\s C^2\}$ and for $n=6$: $\{E\},\{C,C^5\},\{C^2,C^4\},\{C^3\},\{\s,\s C^4,\s C^2\},\{\s C^1,\s C^5,\s C^3\}$
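The following loop is a quick check in sage (just a sketch) comparing the number of conjugacy classes of $C_{nv}$ with the counts $(n+6)/2$ resp. $(n+3)/2$ derived above:
for n in range(4,10):
    N=len(DihedralGroup(n).conjugacy_classes_representatives())
    print(n, N, (n+6)/2 if n%2==0 else (n+3)/2)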
A one-dimensional unitary representation is just a homomorphism $\chi:C_{nv}\rar S^1$, thus: $\chi(\s)=\pm1$; since $\s C_n=C_n^{-1}\s$, we must have $\chi(C_n)^2=1$, i.e. $\chi(C_n)=\pm1$. Hence there are at most $4$ one-dimensional representations; in fact if $n$ is even there are exactly $4$ - we provide the values of these homomorphisms on the conjugacy classes of the two generators $C_n$ and $\s$ only: $$ \begin{array}{r|rr} C_{nv}&2C_n&(n/2)\s\\ \hline \chi_1&1&1\\ \chi_2&1&-1\\ \chi_3&-1&1\\ \chi_4&-1&-1 \end{array} $$ If $n$ is odd then only the first two extend to homomorphisms on $C_{nv}$, because $C_n^n=E$ and thus $1=\chi_3(C_n^n)=(-1)^n$. There is also a bunch of irreducible representations of dimension two: for $j\in\{0,\ldots,n-1\}$ we put $$ \Psi_j(C_n) =\left(\begin{array}{cc} e^{2\pi ij/n}&0\\ 0&e^{-2\pi ij/n} \end{array}\right),\quad \Psi_j(\s) =\left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right)~. $$ It can be verified easily that $\Psi_j(C_n)^n=1=\Psi_j(\s)^2$ and $\Psi_j(\s)\Psi_j(C_n)=\Psi_j(C_n)^{-1}\Psi_j(\s)$ and therefore all mappings $\Psi_j$ extend to homomorphisms $\Psi_j:C_{nv}\rar\UU(2)$ and we have $\Psi_{n+j}=\Psi_j$. Putting $J\colon=\Psi_j(\s)$, we get: \begin{eqnarray*} J\Psi_j(C_n) &=&\Psi_j(\s C_n)=\Psi_j(C_n^{-1}\s)=\Psi_{n-j}(C_n)J\\ J\Psi_j(\s)&=&\Psi_{-j}(\s)\Psi_j(\s)=\Psi_{n-j}(\s)J, \end{eqnarray*} thus $J\Psi_j=\Psi_{n-j}J$, consequently $\Psi_j$ and $\Psi_{n-j}$ are equivalent. Since we are only interested in inequivalent representations, we may assume: $0\leq j\leq[n/2]$. For $j=0$ we have $\tr\Psi_0=\chi_1+\chi_2$, which implies as we will see that $\Psi_0$ is the sum of two irreducible representations with characters $\chi_1$ and $\chi_2$, and indeed the corresponding subspaces are $\lhull{e_1-e_2}$ and $\lhull{e_1+e_2}$. Also for $n$ even we have: $\tr\Psi_{n/2}=\chi_3+\chi_4$, indicating that $\Psi_{n/2}$ is the sum of two irreducible representations with characters $\chi_3$ and $\chi_4$.
Find the invariant subspaces of $\Psi_{n/2}$ for $n$ even.
For $0 < j < n/2$ the representations $\Psi_j$ are irreducible.
Moreover, they are pairwise inequivalent: if $\Psi_j$ and $\Psi_k$ are equivalent, i.e. if there is some $A\in\Gl(2,\C)$ such that $A\Psi_j=\Psi_kA$, then it follows that $\Psi_j(C_n)$ and $\Psi_k(C_n)$ have the same eigen-values, i.e. $e^{2\pi ij/n}=e^{\pm2\pi ik/n}$, thus either $j+k=0\,\modul(n)$ or $j-k=0\,\modul(n)$. For even $n$ we've found $(n+6)/2$ pairwise inequivalent irreducible representations and for odd $n$ we've got $(n+3)/2$ such entities, which coincides with the number of conjugacy classes and therefore these are all pairwise inequivalent irreducible representations of $C_{nv}$. Consequently for $n$ even and $j\in\{1,\ldots,n/2-1\}$ we get the following table of characters: $$ \begin{array}{r|cr} C_{nv}&2C_n&(n/2)\s\\ \hline \chi_1&1&1\\ \chi_2&1&-1\\ \chi_3&-1&1\\ \chi_4&-1&-1\\ \psi_j&2\cos(2\pi j/n)&0 \end{array} $$ Since the $\chi_j$ are homomorphisms, the value of $\chi_j$ at any element is determined by its values at the generators $C_n$ and $\s$. However, the values of the characters $\psi_j$ at any element of $C_{nv}$ must be calculated via the representations: $$ \psi_j(\s C_n^k)=\tr(\Psi_j(\s)\Psi_j(C_n)^k)=0, \quad \psi_j(C_n^k)=\tr\Psi_j(C_n)^k=2\cos(2\pi jk/n)~. $$
Identify the character table of $C_{nv}$ for odd $n$.

The characters of commutative groups

We will see below (cf. corollary) that every irreducible representation of a commutative group $G$ is one-dimensional and that there are exactly $|G|$ inequivalent irreducible representations, for each conjugacy class contains exactly one element. Thus the characters determine all representations and every character is a homomorphism $\chi:G\rar S^1$.
For every character $\chi$ of $\Z_n^d$ there exists exactly one $y\in\Z_n^d$, such that for all $x\in\Z_n^d$: $$ \chi(x)=\chi_y(x)\colon=e^{2\pi i\sum x_jy_j/n}=e^{2\pi i\la x,y\ra/n}~. $$
For all $y\in\Z_n^d$ the mapping $\chi_y$ is obviously a homomorphism and $\chi_y=\chi_z$ iff for all $x\in\Z_n^d$: $\la x,y-z\ra=0$ in $\Z_n$, which holds iff $y=z$. Hence we've got $|\Z_n^d|$ pairwise distinct homomorphisms $\chi_y:\Z_n^d\rar S^1$, i.e. all characters of $\Z_n^d$. Therefore we can identify the dual of $\Z_n^d$ with $\Z_n^d$, i.e. $\Z_n^d$ is self-dual. Since any finite commutative group $G$ is isomorphic to a direct product of cyclic groups (cf. wikipedia) and the characters of a product group are the tensor products of the characters of the factors, all characters of $G$ can be computed!
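For $d=1$ and $n=6$ the orthonormality of these characters can be checked exactly in sage (a sketch using the cyclotomic field, where zeta plays the role of $e^{2\pi i/n}$):
n=6
K.<zeta>=CyclotomicField(n)
M=matrix(K,n,n,lambda y,z: sum(zeta^(x*y)*(zeta^(x*z)).conjugate() for x in range(n))/n)
print(M==identity_matrix(K,n))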
Find the character table of $\Z_2^3$ and $\Z_8$.
Compute the characters of $\Z_2\times\Z_3$.
Suppose $\vp:H\rar S^1$ is a homomorphism of a subgroup $H$ of the finite abelian group $G$. Then $\vp$ can be extended to a homomorphism $\wt\vp:G\rar S^1$: For $x\in G\sm H$ put $n\colon=\inf\{k\geq1:x^k\in H\}$ and let $a$ be an $n$-th root of $\vp(x^n)$. Then for all $k=0,\ldots,n-1$ and all $h\in H$: $\psi(x^kh)\colon=a^k\vp(h)$ is a homomorphism on the group $\la H,x\ra$ generated by $H$ and $x$.

The characters of $S(n)$

By Cayley's Theorem every group of order $n$ is isomorphic to some subgroup of $S(n)$. Thus it's of interest to know all characters of $S(n)$, which are not at all easy to find. We will just determine the number of characters, i.e. the number of conjugacy classes: Every permutation $\pi\in S(n)$ is a composition of pairwise disjoint cycles (cf. wikipedia) - since the cycles are disjoint, they commute. So let us first compute the conjugacy class of a cycle $\t=(n_1,\ldots,n_k)$ of length $k\geq2$, i.e. the permutation $n_1\mapsto n_2\mapsto\cdots\mapsto n_{k-1}\mapsto n_k\mapsto n_1$ - cycles of length $2$ are called transpositions. Since $\pi\t\pi^{-1}(\pi(n_j))=\pi(\t(n_j))=\pi(n_{j+1})$, it follows that $\pi\t\pi^{-1}$ is the cycle $(\pi(n_1),\ldots,\pi(n_k))$.
By a partition of $n$ we understand a representation $n=l_1+l_2+\cdots+l_m$ as a sum of natural numbers $l_1,\ldots,l_m$ such that $l_1\geq l_2\geq\cdots\geq l_m$ for some $m\in\{1,\ldots,n\}$. If $\pi=\t_1\cdots\t_k$ is a decomposition of $\pi$ into disjoint cycles of lengths $l_1\geq\cdots\geq l_k$, then the sum $L\colon=l_1+\cdots+l_k$ is usually smaller than $n$ - the difference $n-L$ is just the number of fixed points; put $l_{k+1}=\ldots=l_{k+n-L}=1$. This way we get for every permutation a partition $l_1,\ldots,l_{k+n-L}$ of $n$. For example the partition associated with a transposition is $2,1,\ldots,1$ and the partition associated with a single cycle of length $l$: $l,1,\ldots,1$.
Two permutations are conjugate iff they induce the same partition.
If $\pi,\s$ induce the same partition $(l_1,\ldots,l_k)$, then $$ \pi=\t_1\cdots\t_k,\quad \s=\r_1\cdots\r_k $$ where $\t_j=(n_{j1},\ldots,n_{jl_j})$ and $\r_j=(m_{j1},\ldots,m_{jl_j})$. Put $\a(n_{ji})\colon=m_{ji}$, then it follows that: $\a\t_j\a^{-1}=\r_j$ and therefore: $$ \a\pi\a^{-1}=(\a\t_1\a^{-1})\cdots(\a\t_k\a^{-1})=\r_1\cdots\r_k=\s~. $$ If $\pi=\t_1\cdots\t_k$ and $\s=\a\pi\a^{-1}$, then $\s=\a\t_1\a^{-1}\cdots\a\t_k\a^{-1}$ and hence $\pi$ and $\s$ induce the same partition. $\eofproof$
Thus we can identify the set of conjugacy classes of $S(n)$ with the set of partitions of $n$.
The number of conjugacy classes of $S(n)$ is the number $p(n)$ of solutions $(l_1,\ldots,l_n)\in\N_0^n$ to the equation $l_1+\cdots+l_n=n$ satisfying $l_1\geq\cdots\geq l_n$. The function $p$ is called the partition function.
Verify that $p(4)=5$ and $p(5)=7$.
Prove that for all $\t\in S(n)$: $|Z(\t)|=1^{q_1}q_1!2^{q_2}q_2!\cdots n^{q_n}q_n!$ and thus $$ |C(\t)|=\frac{n!}{1^{q_1}q_1!2^{q_2}q_2!\cdots n^{q_n}q_n!}, $$ where $q_j$ is the number of cycles of length $j$ in $\t$. In particular, if $\t$ is a cycle of length $k$, then $|Z(\t)|=k(n-k)!$. Suggested solution.
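The following lines compare this formula with the centralizer orders computed by sage for $S(5)$ (a sketch, assuming the method cycle_type() is available, which lists all cycle lengths including fixed points):
G=SymmetricGroup(5)
for t in G.conjugacy_classes_representatives():
    ct=t.cycle_type()
    q={j:list(ct).count(j) for j in set(ct)}
    predicted=prod(j^q[j]*factorial(q[j]) for j in q)
    print(ct, predicted, G.centralizer(t).order())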
As for the characters of $S(n)$ cf. e.g. Murnaghan-Nakayama Rule or Frobenius Method.
Compute the character of the $(n-1)$-dimensional irreducible representation $\pi\mapsto P(\pi)|E$ of $S(n)$, where $E=[e_1+\cdots+e_n]^\perp$. Suggested solution.

Some examples in sage

To get the character table of e.g. $C_{6v}$ in sage enter
c6v=DihedralGroup(6)
Reps=c6v.conjugacy_classes_representatives()
for g in Reps:
    cg=c6v.conjugacy_class(g)
    print(g.domain(),len(cg))
latex(c6v.character_table())
This gives you the following representatives: $E=(123456)$, $A=(165432)$, $B=(216543)$, $C=(234561)$, $D=(345612)$, $F=(456123)$ of size $1,3,3,2,2,1$ and the character table: \begin{array}{c|rrrrrr} C_{6v}&1E&3A&3B&2C&2D&1F\\ \hline \chi_1&1 & 1 & 1 & 1 & 1 & 1 \\ \chi_2&1 & -1 & -1 & 1 & 1 & 1 \\ \chi_3&1 & -1 & 1 & -1 & 1 & -1 \\ \chi_4&1 & 1 & -1 & -1 & 1 & -1 \\ \chi_5&2 & 0 & 0 & 1 & -1 & -2 \\ \chi_6&2 & 0 & 0 & -1 & -1 & 2 \end{array} For $C_{5v}$ we get the representatives: $E=(12345)$, $A=(15432)$, $B=(23451)$, $C=(34512)$ of size $1,5,2,2$ and the character table: \begin{array}{c|rrcc} C_{5v}&1E&5A&2B&2C\\ \hline \chi_1&1 & 1 & 1 & 1 \\ \chi_2&1 & -1 & 1 & 1 \\ \chi_3&2 & 0 & \zeta_{5}^{3} + \zeta_{5}^{2} & -\zeta_{5}^{3} - \zeta_{5}^{2} - 1 \\ \chi_4&2 & 0 & -\zeta_{5}^{3} - \zeta_{5}^{2} - 1 & \zeta_{5}^{3} + \zeta_{5}^{2} \end{array} where $\zeta_5\colon=e^{2\pi i/5}$. Alternatively you may use in sage the gap interface; for the subgroup $G$ of $S(4)$ generated by $(1,2)(3,4)$ and $(1,2,3)$:
gap.eval("G:=Group((1,2)(3,4),(1,2,3))")
gap.eval("T:=CharacterTable(G)")
gap.eval("SizesConjugacyClasses(T)")
print(gap.eval("irr:=Irr(G)"))
The linear group $\Gl(2,\Z_3)$ of the vector-space $\Z_3^2$ has order $48$; executing
G=GL(2,3)
Reps=G.conjugacy_classes_representatives()
for g in Reps:
    cg=G.conjugacy_class(g)
    print(g.domain(),len(cg))
print(G.character_table())
gives you $8$ conjugacy classes, representatives thereof are: $$ E=\left(\begin{array}{cc} 1&0\\ 0&1 \end{array}\right), A=\left(\begin{array}{cc} 0&2\\ 1&1 \end{array}\right), B=\left(\begin{array}{cc} 2&0\\ 0&2 \end{array}\right), C=\left(\begin{array}{cc} 0&2\\ 1&2 \end{array}\right) $$ and $$ D=\left(\begin{array}{cc} 0&2\\ 1&0 \end{array}\right), F=\left(\begin{array}{cc} 0&1\\ 1&2 \end{array}\right), G=\left(\begin{array}{cc} 0&1\\ 1&1 \end{array}\right), H=\left(\begin{array}{cc} 2&0\\ 0&1 \end{array}\right), $$ sizes: $1,8,1,8,6,6,6,12$ and the character table: $$ \begin{array}{c|rrrrrccr} \Gl(2,\Z_3)&1E&8A&1B&8C&6D&6F&6G&12H\\ \hline \chi_1&1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \chi_2&1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 \\ \chi_3&2 & -1 & 2 & -1 & 2 & 0 & 0 & 0 \\ \chi_4&2 & 1 & -2 & -1 & 0 & -\zeta_{8}^{3} - \zeta_{8} & \zeta_{8}^{3} + \zeta_{8} & 0 \\ \chi_5&2 & 1 & -2 & -1 & 0 & \zeta_{8}^{3} + \zeta_{8} & -\zeta_{8}^{3} - \zeta_{8} & 0 \\ \chi_6&3 & 0 & 3 & 0 & -1 & 1 & 1 & -1 \\ \chi_7&3 & 0 & 3 & 0 & -1 & -1 & -1 & 1 \\ \chi_8&4 & -1 & -4 & 1 & 0 & 0 & 0 & 0 \end{array} $$ where $\zeta_8\colon=e^{2\pi i/8}$. Here is another alternative: 'chi.adams_operation(k).values()' gives you the value of the character $\chi$ at the group element $g^k$, where $g$ runs through a complete set of representatives of the conjugacy classes:
G=GL(2,3)
Chars=G.irreducible_characters()
Reps=G.conjugacy_classes_representatives()
for chi in Chars:
    chi.adams_operation(1).values()
Find the table of characters of $\Gl(3,\Z_2)$. Find a cyclic subgroup of order $7$. Suggested solution.
Here are some other examples of matrix groups: A subgroup of $\Gl(2,\Z_5)$ generated by two matrices:
F=GF(5)
Gen=[matrix(F,2,[1,2,4,1]), matrix(F,2,[1,1,0,1])]
G=MatrixGroup(Gen)
Chars=G.irreducible_characters()
for chi in Chars:
    chi.adams_operation(1).values()
The groups $\Sp(4,\Z_3)$, $\SO(3,\Z_2)$, $\Sl(2,\Z_5)$ and a subgroup of $S(6)$ with two generators and certain elements:
G=Sp(4,GF(3))
H=SO(3,GF(2))
HP=H.as_permutation_group()
for g in HP:
    print(g.domain())
HF=HP.as_finitely_presented_group()
J=SL(2,GF(5))
K=PermutationGroup(['(1,2)(3,4)','(3,4,5,6)'])
g=K('(1,2)(3,5,6)')
h=K([2,1,5,4,6,3])
g==h
gh=g*h; gh
Beware, in sage $g*h$ is the permutation $h\circ g$!
Find the table of characters of the symmetry group $C_{6v}\times\Z_2$ of benzene $\chem{C_6H_6}$. Suggested solution.

The Great Orthogonality Theorem

Schur's Lemma revised

Let $\Psi:G\rar\Gl(E),\Phi:G\rar\Gl(F)$ be finite dimensional representations of $G$; the set of intertwining operators
$\Hom_G(\Psi,\Phi)$ is the set of all linear mappings $A:E\rar F$ such that for all $g\in G$: $\Phi(g)A=A\Psi(g)$. For $\Hom_G(\Psi,\Psi)$ we simply write $\Hom_G(\Psi)$. If $\Psi$ is unitary then $\Hom_G(\Psi)$ is just the set of operators $A\in\Hom(E)$, which admit the group $\{\Psi(g):g\in G\}$ as symmetries.
$\Hom_G(\Psi,\Phi)$ is a vector-space and $\Hom_G(\Psi)$ is a ring.
If $A\in\Hom_G(\Psi,\Phi)$, then $\ker A$ and $\im A$ are $G$ invariant and thus if both $\Psi$ and $\Phi$ are irreducible, then $A$ is either $0$ or $A$ is an isomorphism, in the latter case $\Psi$ and $\Phi$ are equivalent. For the time being we prove an extension of corollary. The following lemma might be of independent interest:
Let $E$ be a Hilbert-space, $A\in\Hom(E)$ and put for all $x\in E$: $Q(x)\colon=\la Ax,x\ra$. 1. If $E$ is real, $A$ self-adjoint and $Q=0$, then $A=0$. 2. If $E$ is complex and $Q=0$, then $A=0$.
$\proof$ Both statements follow by polarization: in the first case we have $4\la Ax,y\ra=Q(x+y)-Q(x-y)$ and in the second case $$ 4\la Ax,y\ra=Q(x+y)-Q(x-y)+iQ(x+iy)-iQ(x-iy)~. $$ $\eofproof$
Suppose $E$ is a real Hilbert-space and $A\in\Hom(E)$. Then $Q=0$ iff $A^*=-A$, i.e. $A$ is skew-symmetric. Solution by T. Speckhofer.
$A\in\Hom(E)$ is unitary iff for all $x\in E$: $\norm{Ax}=\norm x$.
If $E$ is a complex Hilbert-space than $A\in\Hom(E)$ is normal iff for all $x\in E$: $\norm{A^*x}=\norm{Ax}$. Solution by T. Speckhofer.
Let $E$ be a finite dimensional Hilbert-space and $\Psi:G\rar\OO(E)$ an irreducible orthogonal representation. 1. A self-adjoint operator $A\in\Hom_G(\Psi)$ is a multiple of the identity. 2. If $E$ is complex and $\Psi:G\rar\UU(E)$ unitary and irreducible, then $\Hom_G(\Psi)=\C1_E$.
$\proof$ 1. Suppose $E(\l)$ is an eigen-space of $A$, then for all $x\in E(\l)$: $A\Psi(g)x=\Psi(g)Ax=\l\Psi(g)x$, i.e. $\Psi(g)x\in E(\l)$. Since $\Psi$ is irreducible, it follows that $E(\l)=E$, i.e. $A=\l1_E$. This argument doesn't apply to the infinite dimensional case directly; however, the spectral theorem for bounded self-adjoint operators will do it!
2. If $E$ is complex, the above proof also works for an arbitrary linear operator $A$, for $A$ must have an eigen-value and an eigen-space. However, we prefer to give a different, more involved proof, which can also be applied in the infinite dimensional case: We decompose $A\in\Hom(E)$ as follows: $A=B+iC$ where $B\colon=(A+A^*)/2$ and $C\colon=(A-A^*)/2i$. $B$ and $C$ are self-adjoint and for all $g\in G$ we have: $\Psi(g)(B+iC)\Psi(g)^{-1}=B+iC$. Since $\Psi$ is unitary both $B_1\colon=\Psi(g)B\Psi(g)^{-1}$ and $C_1\colon=\Psi(g)C\Psi(g)^{-1}$ are self-adjoint and for all $x\in E$: $\la(B-B_1)x,x\ra+i\la(C-C_1)x,x\ra=0$, it follows from the facts that $\la(B-B_1)x,x\ra,\la(C-C_1)x,x\ra\in\R$, that $\la(B-B_1)x,x\ra=\la(C-C_1)x,x\ra=0$ and thus by lemma $B_1=B$ and $C_1=C$. $\eofproof$
As we already know, the converse is also true: Let $\Psi:G\rar\UU(E)$ be a unitary representation and suppose that any self-adjoint operator $A\in\Hom(E)$ which commutes with all $\Psi(g)$, $g\in G$, is necessarily a multiple of the identity. Then $\Psi$ is irreducible. Summarizing, we have
Let $\Psi:G\rar\UU(E)$ be a unitary representation (finite dimensional or not). If $\Psi$ is irreducible then $\Hom_G(\Psi)=\C1_E$ and if $\{A\in\Hom_G(\Psi): A^*=A\}=\R1_E$, then $\Psi$ is irreducible.

Isotypic Components and SALCs

Suppose $\Psi:G\rar\UU(E)$ and $\Phi:G\rar\UU(F)$ are irreducible. We want to compute the intertwining operators $H$ of the representation $\Psi\oplus\Phi$. So let $$ H=\left(\begin{array}{cc} A&B\\ C&D \end{array}\right) $$ be the block matrix of $H$ with respect to the decomposition $E\oplus F$. From $$ \left(\begin{array}{cc} \Psi(g)&0\\ 0&\Phi(g) \end{array}\right) \left(\begin{array}{cc} A&B\\ C&D \end{array}\right) =\left(\begin{array}{cc} A&B\\ C&D \end{array}\right) \left(\begin{array}{cc} \Psi(g)&0\\ 0&\Phi(g) \end{array}\right) $$ we get: $$ \left(\begin{array}{cc} \Psi(g)A&\Psi(g)B\\ \Phi(g)C&\Phi(g)D \end{array}\right) =\left(\begin{array}{cc} A\Psi(g)&B\Phi(g)\\ C\Psi(g)&D\Phi(g) \end{array}\right) $$ and thus: $A\in\Hom_G(\Psi)$, $D\in\Hom_G(\Phi)$, $B\in\Hom_G(\Phi,\Psi)$ and $C\in\Hom_G(\Psi,\Phi)$. Since both $\Psi$ and $\Phi$ are irreducible: $A=a1_E$ and $D=d1_F$. If $\Psi$ and $\Phi$ are not equivalent, then both $B$ and $C$ must vanish. If $\Phi$ and $\Psi$ are equivalent, then $B$ and $C$ need not vanish necessarily but if one of them doesn't vanish it must be an isomorphism. It turns out that the eigenvalues of $H$ are related to the eigen-values of $CB$:
Suppose $a,d\in\C$, $B,C\in\Ma(\C,n)$ and let $H\in\Ma(2n,\C)$ be the block-matrix $$ \left(\begin{array}{cc} a&B\\ C&d \end{array}\right) $$ where, of course, $a$ and $d$ denote $a$ times identity and $d$ times identity, respectively. If $\mu\in\C$ is an eigen-value of $CB$, then any zero of the quadratic polynomial $\l^2-(a+d)\l+ad-\mu$ is an eigen-value of $H$. Suggested solution.
Since $CB\in\Hom_G(\Phi)$ it must be a multiple of the identity, i.e. $H$ has in any case at most two distinct eigen-values.
For $a,d,\mu\in\C$, $B,C\in\Ma(\C,n)$ and $CB=\mu$, compute all eigen-vectors of the block-matrix $$ H\colon=\left(\begin{array}{cc} a&B\\ C&d \end{array}\right) $$
If $\Phi=\Psi$ and $n=\dim\Psi$, then $H\in\Hom_G(\Psi\oplus\Psi)$ iff for some $a,b,c,d\in\C$: $$ H\colon=\left(\begin{array}{cc} a&b\\ c&d \end{array}\right)\in\Ma(2n,\C) $$
So suppose we have an arbitrary unitary representation $\Psi:G\rar\UU(E)$ and a decomposition $E=\bigoplus E_j$ of $E$, such that $g\mapsto\Psi(g)|E_j$ is irreducible, merging all spaces $E_j$ with equivalent subrepresentations $g\mapsto\Psi(g)|E_j$, we get a decomposition $E=\bigoplus F_k$ of $E$ into pairwise orthogonal subspaces $F_k$, which are called the isotypic components of $\Psi$ - this is because all irreducible subrepresentations of $g\mapsto\Psi(g)|F_k$ are equivalent. To state it differently: $g\mapsto\Psi(g)|F_k$ is the sum of all equivalent irreducible subrepresentations of $\Psi$. A basis for $E$ which is made up of bases for the isotypic components is called a symmetry adapted linear combination, for short: SALC. Any intertwining operator $A\in\Hom_G(\Psi)$ must have block-diagonal form with respect to the decomposition $E=\bigoplus F_k$ and if all irreducible subrepresentations $g\mapsto\Psi(g)|E_j$ are pairwise inequivalent - which means $E_1,\ldots,E_l$ are the isotypic components, then SALCs are always eigen-vectors of $A$. If some of these subrepresentations are equivalent, then an arbitrary SALC may not be an eigen-vector. In any case the isotypic components are invariant by $A$ and if $n_k$ is the number of irreducible equivalent subrepresentations, which make up the isotypic component $F_k$, then $A|F_k$ has at most $n_k$ different eigen-values - we just proved that for $n_k=2$, but it's not that difficult to prove it in general:
If $\Psi=\Psi_1\oplus\cdots\oplus\Psi_n$ is a decomposition of a representation $\Psi$ of $G$ into irreducible subrepresentations, then any intertwining operator $A\in\Hom_G(\Psi)$ has at most $n$ pairwise distinct eigen-values. If all $\Psi_j$ are pairwise inequivalent then any basis of SALCs is a basis of eigen-vectors of any intertwining operator $A\in\Hom_G(\Psi)$.
$\proof$ Suppose $F$ is an isotypic component and $g\mapsto\Psi(g)|F$ is the sum of $m$ equivalent representations, then $F=E_1\oplus\cdots\oplus E_m$ and $d\colon=\dim(E_1)=\cdots=\dim(E_m)$. If $A|F$ has at least $m+1$ pairwise distinct eigen-values, then the dimension of its smallest eigen-space must be smaller than $d$. On the other hand any eigen-space of $A|F$ is $\Psi$-invariant and therefore contains an irreducible subspace; since all irreducible subrepresentations of $g\mapsto\Psi(g)|F$ have dimension $d$, every eigen-space has dimension at least $d$ - a contradiction. $\eofproof$
Suppose $\Phi$ is the sum of $m$ copies of an irreducible representation $\Psi$, i.e. $\Phi=\Psi\oplus\cdots\oplus\Psi$ and $\dim\Psi=n$. Then $H\in\Hom_G(\Phi)$ iff for some $A=(a_{jk})\in\Ma(m,\C)$: $$ H\colon=\left(\begin{array}{cccc} a_{11}&a_{12}&\cdots&a_{1m}\\ a_{21}&a_{22}&\cdots&a_{2m}\\ \vdots&\vdots&\ddots&\vdots\\ a_{m1}&a_{m2}&\cdots&a_{mm}\\ \end{array}\right)=A\otimes 1_n\in\Ma(mn,\C) $$ We remark that under the identification of the representation space with $E\otimes\C^m$ we have $\Phi=\Psi\otimes 1_m$ and $H=1_E\otimes A$.
The example is a particular case of the following result, which is just the commutative version of the previous corollary:
1. Every character of a commutative group $G$ is a homomorphism $\chi:G\rar S^1$. 2. For every finite dimensional representation $\Psi:G\rar\UU(E)$ of $G$ there is an orthonormal basis $b_1,\ldots,b_n$ of $E$ and characters, i.e. homomorphisms $\chi_1,\ldots,\chi_n:G\rar S^1$, such that $$ \forall g\in G\,\forall x\in E:\quad \Psi(g)x=\sum\chi_j(g)\la x,b_j\ra b_j~. $$ If all these characters are distinct and $A\in\Hom(E)$ commutes with all $\Psi(g)$, i.e. $A\in\Hom_G(\Psi)$, then $b_1,\ldots,b_n$ are also eigen-vectors of $A$.
$\proof$ We simply have to prove that all irreducible representations of a commutative group are one-dimensional and thus every irreducible representation is just multiplication by a homomorphism $\chi:G\rar S^1$. If $\Psi$ is irreducible, then for all $g\in G$: $\Psi(g)\in\Hom_G(\Psi)$ and thus $\Psi(g)=\chi(g)1_E$, which is irreducible iff $\dim E=1$. $\eofproof$
1. If $G$ has a commutative subgroup $H$, then every irreducible representation of $G$ has at most dimension $|G/H|=|G|/|H|$. 2. If $G$ has a normal subgroup $H$ such that $G/H$ is commutative, then $G$ has at least $|G/H|$ irreducible one-dimensional representations (cf. subsection).
$\proof$ 1. Suppose $\Psi:G\rar\UU(E)$ is irreducible, then $E=[x_1]\oplus\ldots\oplus[x_n]$ splits into one-dimensional subspaces invariant under $\Psi(h)$, $h\in H$. Let $E_j$ be the subspace generated by $\Psi(g)x_j$, $g\in G$; by irreducibility: $E_j=E$ and since $\Psi(h)x_j=\r(h)x_j$ for some scalar $\r(h)$, we have for all $g^\prime\in gH$: $\Psi(g^\prime)x_j=\Psi(gh)x_j=\Psi(g)\r(h)x_j$, i.e. $\Psi(g^\prime)x_j$ and $\Psi(g)x_j$ are collinear. Hence $E=E_j$ is spanned by at most $|G/H|$ vectors - one for each coset $gH$ - i.e. $\dim E\leq|G/H|$. 2. Every homomorphism $\vp:G/H\rar S^1$ determines a homomorphism $\wt\vp:G\rar S^1$: just put $\wt\vp(x)=\vp(\pi(x))$, where $\pi:G\rar G/H$ is the quotient map. $\eofproof$
The cyclic group $C_n$ is a commutative subgroup of the dihedral group $C_{nv}$ and $|C_{nv}/C_n|=2$, it follows that every irreducible representation of $C_{nv}$ has dimension $2$ at most.
Any irreducible representation of $O_h$ has dimension $6$ at most.
Prove that $S(m+n)$ has a commutative subgroup of order $mn$ and $S(mn)$ has a commutative subgroup of order $m^n$.

Great Orthogonality Theorem

Let $u_1,\ldots,u_n$ and $y_1,\ldots,y_n$ be vectors in $E$. Then the trace of the operator $Ax\colon=\sum\la x,u_j\ra y_j$ is given by: $\tr A=\sum_j\la y_j,u_j\ra$.
$\proof$ Let $e_k$ be an orthonormal basis of $E$, then $Ae_k=\sum_j\la e_k,u_j\ra y_j$ and therefore $$ \tr A=\sum_k\la Ae_k,e_k\ra =\sum_{j,k}\la e_k,u_j\ra\la y_j,e_k\ra =\sum_{j,k}\la y_j,\la u_j,e_k\ra e_k\ra =\sum_j\la y_j,u_j\ra~. $$ $\eofproof$
The basic tool for the proof of the main result of this section is again some sort of averaging.
Suppose $\Psi$ and $\Phi$ are two finite dimensional irreducible unitary representations of the group $G$ in the Hilbert-spaces $E$ and $F$ respectively. 1. If $\Psi$ and $\Phi$ are not equivalent, then for all $g\in G$, all $x,u\in E$ and all $y,v\in F$: \begin{eqnarray*} &&\frac1{|G|}\sum_{h\in G}\la\Psi(h)x,u\ra\la\Phi(h^{-1}g)y,v\ra=0 \quad\mbox{i.e.}\\ &&\frac1{|G|}\sum_{h\in G}\Psi(h)\otimes\Phi(h^{-1}g)=0~. \end{eqnarray*} where $\otimes$ is the tensor product, cf. section, and the euclidean product on $E\otimes F$ satisfies $\la x\otimes y,u\otimes v\ra=\la x,u\ra\la y,v\ra$.
2. If $\Psi=\Phi$ and $\dim(E)=l$, then for all $g\in G$, all $x,u,y,v\in E$: \begin{eqnarray*} &&\frac1{|G|}\sum_{h\in G}\la\Psi(h)x,u\ra\la\Psi(h^{-1}g)y,v\ra =\frac1l\la\Psi(g)y,u\ra\la x,v\ra \quad\mbox{i.e.}\\ &&\frac1{|G|}\Big(\sum_h\Psi(h)\otimes\Psi(h^{-1}g)\Big)x\otimes y =\Big(\frac1l\Psi(g)\otimes1\Big)y\otimes x~. \end{eqnarray*}
$\proof$ Fixing $g\in G$, $u\in E$ and $y\in F$ we define a linear operator $J:E\to F$ by $$ Jx\colon=\frac1{|G|}\sum_{h\in G}\la\Psi(h)x,u\ra\Phi(h^{-1}g)y~. $$ Then for all $g^\prime\in G$ we have: \begin{eqnarray*} J\Psi(g^\prime)x &=&\frac1{|G|}\sum_{h\in G}\la\Psi(h)\Psi(g^\prime)x,u\ra\Phi(h^{-1}g)y\\ &=&\frac1{|G|}\sum_{h\in G}\la\Psi(hg^\prime)x,u\ra\Phi(h^{-1}g)y\\ &=&\frac1{|G|}\sum_{h\in G}\la\Psi(h)x,u\ra\Phi(g^\prime h^{-1}g)y =\Phi(g^\prime)Jx~. \end{eqnarray*} Thus $J\in\Hom_G(\Psi,\Phi)$ and therefore $J=0$ or $J$ is an isomorphism, the latter cannot happen in case $\Psi$ and $\Phi$ are not equivalent. If $\Psi=\Phi$, then, by Schur's Lemma: $J=\l 1_E$ and thus $\l=l^{-1}\tr J$. Now by lemma the trace of $J$ is \begin{eqnarray*} \tr J &=&\frac1{|G|}\sum_{h\in G}\la\Psi(h^{-1}g)y,\Psi(h)^*u\ra\\ &=&\frac1{|G|}\sum_{h\in G}\la\Psi(h^{-1})\Psi(g)y,\Psi(h^{-1})u\ra =\frac1{|G|}\sum_{h\in G}\la\Psi(g)y,u\ra =\la\Psi(g)y,u\ra~. \end{eqnarray*} Hence for all $x,v\in E$: $$ \frac1{|G|}\sum_{h\in G}\la\Psi(h)x,u\ra\la\Psi(h^{-1}g)y,v\ra =\la Jx,v\ra =\frac1l\la\Psi(g)y,u\ra\la x,v\ra~. $$ $\eofproof$
Suppose $\Psi:G\rar\UU(E)$ is irreducible and $\dim E=n$. 1. Show that for all $x,y\in E$: $$ \frac{n}{|G|}\Big(\sum_{g\in G}\Psi(g)\otimes\Psi(g)^*\Big)x\otimes y=y\otimes x~. $$ 2. Let $e_1,\ldots,e_n$ be an orthonormal basis for $E$ and put $E^{jk}x\colon=\la x,e_k\ra e_j$ and define the exchange operator or swap operator $P_{ex}\in\Hom(E\otimes E)$ by $$ P_{ex}=\sum_{j,k}E^{kj}\otimes E^{jk}=\sum_{j,k}E^{kj}\otimes E^{kj*}~. $$ Verify that $P_{ex}(x\otimes y)=y\otimes x$, that the eigen-values of the unitary operator $P_{ex}$ are $\pm1$ and that for all $A\in\Hom(E)$: $[A\otimes A,P_{ex}]=0$. 3. Cf. exam and prove that for all $A\in\Hom(E)$: $\tr_1((A\otimes1)P_{ex})=A$. Suggested solution.
Given two distinct homomorphisms $\psi,\vp:\Z_n=\{0,\ldots,n-1\}\rar S^1$, verify by means of the great orthogonality theorem and directly that for all $m\in\Z_n$: $$ \sum_{k=0}^{n-1}\psi(k)\vp(m-k)=0 \quad\mbox{and}\quad \frac1n\sum_{k=0}^{n-1}\psi(k)\psi(m-k)=\psi(m)~. $$
What conclusions can be drawn from the Great Orthogonality Theorem for characters? So let $\psi$ and $\vp$ be the characters of $\Psi$ and $\Phi$ respectively, then we obtain from this theorem by putting $x=u=e_j$ and $y=v=f_k$ and summing over all basis vectors $e_j$ of $E$ and $f_k$ of $F$: \begin{equation}\label{goteq1}\tag{GOT1} \frac1{|G|}\sum_{h\in G}\psi(h)\vp(h^{-1}g)=0 \quad\mbox{and}\quad \frac1{|G|}\sum_{h\in G}\psi(h)\psi(h^{-1}g)=\frac1l\psi(g)~. \end{equation} In particular for $g=e$ we conclude by observing $\vp(h^{-1})=\cl{\vp(h)}$ and $\psi(e)=\tr(id_E)=\dim(E)=l$: \begin{equation}\label{goteq2}\tag{GOT2} \frac1{|G|}\sum_{h\in G}\psi(h)\cl{\vp(h)}=0 \quad\mbox{and}\quad \frac1{|G|}\sum_{h\in G}|\psi(h)|^2=1~. \end{equation}
Two finite dimensional irreducible unitary representations $\Psi$ and $\Phi$ of the group $G$ are equivalent, if and only if their characters coincide.
$\proof$ Suppose $\Psi$ and $\Phi$ are not equivalent but their characters coincide, then we infer from relation \eqref{goteq2}: $0=\sum_{h\in G}|\psi(h)|^2=|G|$. $\eofproof$
We define a more or less obvious euclidean product on the space of complex-valued functions $f$ on $G$ by $$ \la f_1,f_2\ra\colon=\frac1{|G|}\sum_{g\in G}f_1(g)\cl{f_2(g)} $$ and denote this space by $L_2(G)$. By relation \eqref{goteq2} the characters form an orthonormal set in $L_2(G)$. Now let $\Psi$ be any finite dimensional unitary representation, then by subsection: $\tr\Psi=\sum n_j\chi_j$ for some numbers $n_j\in\N_0$; $n_j$ is called the multiplicity of the irreducible representation which features the character $\chi_j$. Hence there are $n_j$ pairwise orthogonal subspaces $E_1^j,\ldots,E_{n_j}^j$ all of dimension $l_j\colon=\chi_j(e)$, such that all subrepresentations $g\mapsto\Psi(g)|E_1^j,\ldots,\Psi(g)|E_{n_j}^j$ are equivalent. We've called the space $$ E_1^j\oplus\cdots\oplus E_{n_j}^j $$ an isotypic component of the representation $\Psi$. If $g_1,\ldots,g_N$ is a complete set of representatives of the conjugacy classes, then we get by the orthonormality of characters: \begin{equation}\label{goteq3}\tag{GOT3} n_j=\la\tr\Psi,\chi_j\ra=\frac1{|G|}\sum_{k=1}^N c_k\tr\Psi(g_k)\cl{\chi_j(g_k)} \end{equation} where $c_k$ denotes the cardinality of the conjugacy class $C(g_k)$. Moreover $\la\tr\Psi,\tr\Psi\ra=\sum_j n_j^2$ and thus
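As an illustration of \eqref{goteq3} you may compute in sage the multiplicities of the natural permutation representation of $S(4)$, whose trace is the number of fixed points (a sketch; it is assumed that the columns of character_table() are ordered like conjugacy_classes_representatives()):
G=SymmetricGroup(4)
Reps=G.conjugacy_classes_representatives()
sizes=[G.order()/G.centralizer(g).order() for g in Reps]
perm=[sum(1 for i in range(1,5) if g(i)==i) for g in Reps]
T=G.character_table()
for row in T.rows():
    n=sum(sizes[k]*perm[k]*row[k].conjugate() for k in range(len(Reps)))/G.order()
    print(list(row), n)
The trivial and the standard character should each show up with multiplicity $1$, all others with multiplicity $0$.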
A finite dimensional representation $\Psi:G\rar\UU(E)$ is irreducible iff $\tr\Psi$ is a character of $G$, which holds precisely when $\norm{\tr\Psi}=1$.
The character table of $S(5)$: $$ \begin{array}{c|rrrrrrr} S(5)&1E &10A&15B&20C&20D&30F&24G \\ \hline \chi_1&1 & -1 & 1 & 1 & -1 & -1 & 1 \\ \chi_2&4 & -2 & 0 & 1 & 1 & 0 & -1 \\ \chi_3&5 & -1 & 1 & -1 & -1 & 1 & 0 \\ \chi_4&6 & 0 & -2 & 0 & 0 & 0 & 1 \\ \chi_5&5 & 1 & 1 & -1 & 1 & -1 & 0 \\ \chi_6&4 & 2 & 0 & 1 & -1 & 0 & -1 \\ \chi_7&1 & 1 & 1 & 1 & 1 & 1 & 1 \end{array} $$ Write $\chi_2^2$ and $\chi_6^2$ as the sum of characters.
If $\Psi:G\rar\UU(E)$ and $\Phi:G\rar\UU(F)$ are representations, then $$ \la\tr\Psi,\tr\Phi\ra=\dim\Hom_G(\Psi,\Phi), $$ in particular $\dim\Hom_G(\Psi)=\sum n_j^2$, where $n_j$ is the multiplicity of the irreducible representation featuring the character $\chi_j$.
$\proof$ 1. If both representations are irreducible, then the left hand side is $1$ if $\Psi$ and $\Phi$ are equivalent and zero otherwise. Now, by Schur's Lemma also the right hand side is $1$ if $\Psi$ and $\Phi$ are equivalent and zero otherwise.
2. Suppose $\Phi$ is irreducible and write $\Psi$ as sum of irreducible subrepresentations $\Psi_j$, then $$ \dim\Hom_G(\Psi,\Phi)=\sum\dim\Hom_G(\Psi_j,\Phi), $$ which by 1. coincides with the number $n_j$ of subrepresentations $\Psi_j$ equivalent to $\Phi$.
3. Similarly we decompose an arbitrary representation $\Phi$. $\eofproof$
Our next result gives us some sort of projection $\Hom(E,F)\rar\Hom_G(\Psi,\Phi)$, which takes any operator $A\in\Hom(E,F)$ and maps it to an intertwining operator $A^\sharp\in\Hom_G(\Psi,\Phi)$ - once more the intertwining operator $A^\sharp$ is given by averaging:
Suppose $\Psi,\Phi$ are unitary representations of $G$ in the finite dimensional spaces $E$ and $F$ respectively. If $A:E\rar F$ denotes any linear mapping, then putting for any fixed $g\in G$: $$ J_g(A) \colon=\frac1{|G|}\sum_{h\in G}\Phi(h^{-1}g)A\Psi(h) =\frac1{|G|}\sum_{h\in G}\Phi(h^{-1})A\Psi(gh) $$ we get an intertwining operator $J_g(A)\in\Hom_G(\Psi,\Phi)$; in particular \begin{equation}\label{goteq4}\tag{GOT4} A^\sharp\colon=J_e(A)\colon=\frac1{|G|}\sum_{h\in G}\Phi(h^{-1})A\Psi(h) \end{equation} 2. If $A\in\Hom_G(\Psi,\Phi)$, then $A=A^\sharp$. 3. The mapping $J_e:\Hom(E,F)\rar\Hom_G(\Psi,\Phi)$ is a projection onto $\Hom_G(\Psi,\Phi)$. 4. If both $\Psi$ and $\Phi$ are irreducible, then $$ A^\sharp=\left\{\begin{array}{cl} 0&\mbox{if $\Psi$ and $\Phi$ are not equivalent}\\ \frac{\tr A}{\dim E}\,1_E&\mbox{if $\Psi=\Phi$} \end{array}\right. $$
$\proof$ 1. This is a simple matter of checking: \begin{eqnarray*} J_g(A)\Psi(g^\prime) &=&\frac1{|G|}\sum_{h\in G}\Phi(h^{-1}g)A\Psi(hg^\prime)\\ &=&\frac1{|G|}\sum_{h\in G}\Phi(g^\prime(hg^\prime)^{-1}g)A\Psi(hg^\prime)\\ &=&\frac1{|G|}\sum_{h\in G}\Phi(g^\prime h^{-1}g)A\Psi(h) =\Phi(g^\prime)J_g(A)~. \end{eqnarray*} 2., 3. If $A\in\Hom_G(\Psi,\Phi)$, then $$ J_g(A)=\Big(\frac1{|G|}\sum_{h\in G}\Phi(h^{-1}gh)\Big)A \quad\mbox{and thus}\quad A^\sharp=J_e(A)=A~. $$ 4. If $\Phi=\Psi$ is irreducible, then $A^\sharp=\l 1_E$ and $\tr(A^\sharp)=\l\dim E$. $$ \tr A^\sharp =\frac1{|G|}\sum_{h\in G}\tr(\Psi(h^{-1})A\Psi(h)) =\frac1{|G|}\sum_{h\in G}\tr(A\Psi(h)\Psi(h^{-1})) =\tr A~. $$ $\eofproof$
Show that if $A\in\Hom_G(\Psi,\Phi)$, then $$ J_g(A)=\Big(\frac{|Z(g)|}{|G|}\sum_{h\in C(g)}\Phi(h)\Big)A, $$ where $Z(g)$ is the centralizer of $g\in G$.
Is proposition also true for $I_g(A)\colon=\sum_{h\in G}\Phi(h)A\Psi(h^{-1}g)$?

The Convolution Algebra $L_2(G)$

Coordinate functions

Let $\Psi:G\rar\UU(E)$ be a unitary representation in a finite dimensional Hilbert-space and let $e_1,\ldots,e_l$ be an orthonormal basis for $E$. We put $$ \psi_{jk}(g)\colon=\la\Psi(g)e_k,e_j\ra $$ i.e. $(\psi_{jk}(g))_{j,k=1}^l$ is the matrix of $\Psi(g)$ with respect to the orthonormal basis $e_1,\ldots,e_l$. Since $\Psi$ is unitary, we have: $$ \psi_{jk}(g) =\la\Psi(g)e_k,e_j\ra =\la e_k,\Psi(g)^*e_j\ra =\la e_k,\Psi(g)^{-1}e_j\ra =\cl{\la\Psi(g^{-1})e_j,e_k\ra} =\cl{\psi_{kj}(g^{-1})}~. $$ These functions are called coordinate functions
of the representation $\Psi$. For the contragredient representation we just get the complex conjugates, i.e. $$ \la\bar\Psi(g)e_k,e_j\ra=\bar\psi_{jk}(g)~. $$ Next, let $\Psi$ be irreducible and let $\Phi$ be another irreducible representation in $F$. Choose an orthonormal basis $f_1,\ldots,f_m$ for $F$ and put $\vp_{jk}(g)\colon=\la\Phi(g)f_k,f_j\ra$. If $\Psi$ and $\Phi$ are not equivalent, then we get by putting in the Great Orthogonality Theorem $x=e_k,u=e_j,y=f_s$ and taking the inner product with $f_r$: \begin{equation}\label{caleq1}\tag{CAL1} \frac1{|G|}\sum_{h\in G}\psi_{jk}(h)\vp_{rs}(h^{-1}g)=0 \quad\mbox{and}\quad \frac1{|G|}\sum_{h\in G}\psi_{jk}(h)\psi_{rs}(h^{-1}g) =\frac{\d_{kr}}l\psi_{js}(g)~. \end{equation} where $l=\dim E$; for $g=e$ this comes down to ($\vp_{rs}(h^{-1})=\bar\vp_{sr}(h)$): $$ \la\psi_{jk},\vp_{sr}\ra =\frac1{|G|}\sum_{h\in G}\psi_{jk}(h)\vp_{rs}(h^{-1})=0 \quad\mbox{and}\quad \la\psi_{jk},\psi_{sr}\ra =\frac1{|G|}\sum_{h\in G}\psi_{jk}(h)\psi_{rs}(h^{-1}) =\frac1l\d_{js}\d_{kr}~. $$ Of course, the coordinate functions depend on the basis but the space ${\cal A}_\Psi\colon=\lhull{\psi_{jk}}$ does not:
The space ${\cal A}_\Psi\colon=\lhull{\psi_{jk}}$ does not depend on the basis of $E$. This holds whether $\Psi$ is irreducible or not.
Determine the coordinate functions of the tensor product of the spin $1/2$ with its dual.
We recap the previous calculations as follows:
If $\Psi:G\rar\UU(E)$ is an irreducible unitary representation then the set of functions $\psi_{jk}(g)\colon=\la\Psi(g)e_k,e_j\ra$ is orthogonal in $L_2(G)$. If $\Phi$ is another inequivalent irreducible representation, then ${\cal A}_\Psi\colon=\lhull{\psi_{jk}}$ and ${\cal A}_\Phi\colon=\lhull{\vp_{rs}}$ are mutually orthogonal subspaces in $L_2(G)$.
Thus if $\Psi_1,\ldots,\Psi_N$ is a complete set of pairwise inequivalent irreducible representations of the finite group $G$ and $l_j$ is the dimension of $\Psi_j$, then $$ N\leq\sum_{j=1}^Nl_j^2\leq|G|~. $$ We will see (cf. theorem), that the right inequality is in fact an equality, moreover the set of all functions $$ \sqrt{l_m}\psi_{jk}^m(g) \quad j,k=1,\ldots,l_m, \quad m=1,\ldots,N, $$ where $l_m\colon=\dim\Psi^m$, will turn out to be an orthonormal basis for $L_2(G)$. In any case the left inequality is an equality iff $G$ is commutative.
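For a quick check of the right equality in sage (a sketch; it is assumed that class functions can be evaluated at group elements, so that the degrees $l_j$ are the values at the identity):
G=SymmetricGroup(4)
degrees=[chi(G.identity()) for chi in G.irreducible_characters()]
print(degrees, sum(d^2 for d in degrees)==G.order())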
The relations \eqref{caleq1} say a great deal more than what has been summarized in corollary. In order to formulate it concisely we need the notion of convolution, which, by the way, was the essence of the averaging trick we employed so many times before.

Convolution

The convolution is a bilinear mapping from $L_2(G)\times L_2(G)$ into $L_2(G)$: $(f,g)\mapsto f*g$, $$ f*g(x) \colon=\frac1{|G|}\sum_{y\in G}f(y)g(y^{-1}x) =\frac1{|G|}\sum_{y\in G}f(xy^{-1})g(y); $$ it is just the average of $\{f(y)g(z): yz=x\}$ for a given $x\in G$. In case of a finite group $G$ the algebra $(L_2(G),*)$ is in fact unital with unit $|G|\d_e$, where $\d_x(x)=1$ and for $y\neq x$: $\d_x(y)=0$. The vector-space $L_2(G)$ equipped with this binary operation is called the convolution algebra of $G$ - in algebra it's usually denoted by $\C[G]$ and called the group algebra.
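Here is a minimal sketch of this operation in sage (not needed later; functions on $G$ are stored as dictionaries indexed by group elements), which also checks that $|G|\d_e$ is a two-sided unit:
G=SymmetricGroup(3)
def convolve(f,g):
    return {x: sum(f[y]*g[y.inverse()*x] for y in G)/G.order() for x in G}
delta_e={x: (G.order() if x==G.identity() else 0) for x in G}
f={x: x.order() for x in G}
print(all(convolve(delta_e,f)[x]==f[x] for x in G))
print(all(convolve(f,delta_e)[x]==f[x] for x in G))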

Prove associativity: for all $f,g,h\in L_2(G)$: $(f*g)*h=f*(g*h)$ and bi-linearity: $f*(g+h)=f*g+f*h$ and $(f+g)*h=f*h+g*h$.
Now we can reinterpret the relations \eqref{caleq1} as follows: \begin{equation}\label{caleq2}\tag{CAL2} \psi_{jk}*\vp_{rs}=0 \quad\mbox{and}\quad \psi_{jk}*\psi_{rs}=\tfrac1l\d_{kr}\psi_{js} \end{equation} where $l$ is the dimension of the irreducible representation $\Psi$. Let $\psi\colon=\tr\Psi=\sum\psi_{jj}$ and $\vp\colon=\tr\Phi=\sum\vp_{rr}$ be the characters of two inequivalent irreducible representations, then \begin{equation}\label{caleq3}\tag{CAL3} \psi*\vp=0,\quad \psi*\psi=\tfrac1l\psi \quad\mbox{and}\quad \psi(x^{-1})=\cl{\psi(x)}, \end{equation} where the last identity follows from: $\psi_{jj}(x^{-1})=\cl{\psi_{jj}(x)}$. In particular for $x=e$ we get: \begin{equation}\label{caleq4}\tag{CAL4} \la\psi,\vp\ra=0,\quad \la\psi,\psi\ra=1, \quad\mbox{and}\quad \psi(e)=l~. \end{equation} Therefore the subspaces ${\cal A}_\Psi$ are subalgebras of the convolution algebra $L_2(G)$ such that for any pair of inequivalent irreducible representations $\Psi$ and $\Phi$: ${\cal A}_\Psi*{\cal A}_\Phi=\{0\}$.
The algebra ${\cal A}_\Psi$ has a unit: it is the multiple $l\psi$ of the character $\psi$ of $\Psi$, where $l$ is the dimension of $\Psi$.
$\proof$ By \eqref{caleq2}: $\psi_{jk}*\psi_{rs}=\d_{kr}\psi_{js}/l$ and thus: $\psi_{jk}*\psi=\sum_r\d_{kr}\psi_{jr}/l=\psi_{jk}/l$ and $\psi*\psi_{jk}=\sum_r\d_{rj}\psi_{rk}/l=\psi_{jk}/l$. $\eofproof$
Remember the center
$Z$ of an arbitrary group $G$ is the subgroup $Z\colon=\{x\in G:\forall y\in G: xy=yx\}$. Analogously the center $Z(G)$ of the convolution algebra $L_2(G)$ is the subalgebra $Z(G)\colon=\{f\in L_2(G):\forall g\in L_2(G):\,f*g=g*f\}$.
Prove that the center $\{A\in\Hom(E):\forall B\in\Hom(E): AB=BA\}$ of the algebra $\Hom(E)$ is the span of $1_E$.
A function $f:G\rar\C$ is in the center $Z(G)$ of the convolution algebra $L_2(G)$ - a so called class function - iff for all $x,y\in G$: $f(xy)=f(yx)$, and this holds if and only if the restriction $f|C$ is constant for every conjugacy class $C$.
$\proof$ Suppose $f$ is in the center, then $|G|f*\d_x(y)=f(yx^{-1})$ and $|G|\d_x*f(y)=f(x^{-1}y)$; hence $f(yx^{-1})=f(x^{-1}y)$ for all $x,y\in G$, i.e. $f(xy)=f(yx)$. Moreover the condition $f(yx)=f(xy)$ for all $x,y\in G$ is equivalent to $f(x)=f(y^{-1}xy)$ for all $x,y\in G$, i.e. to $f$ being constant on conjugacy classes. Conversely, if $f$ is constant on conjugacy classes, then $f(xy)=f(yx)$ and for any $g\in L_2(G)$: \begin{eqnarray*} f*g(x) &=&\frac1{|G|}\sum_{h\in G}f(xh^{-1})g(h) =\frac1{|G|}\sum_{h\in G}f(h^{-1}x)g(h)\\ &=&\frac1{|G|}\sum_{h\in G}g(h^{-1})f(hx) =\frac1{|G|}\sum_{h\in G}g(xh^{-1})f(h) =g*f(x)~. \end{eqnarray*} $\eofproof$
Since characters are constant on conjugacy classes, all characters are in the center $Z(G)$ of the convolution algebra $L_2(G)$.
If $f:G\rar\C$ is any function then $$ f_c(x)\colon=\frac1{|G|}\sum_{y\in G}f(yxy^{-1}) $$ is a class function.
Prove that $P:f\mapsto f_c$ is the orthogonal projection $L_2(G)\rar Z(G)$. Suggested solution.
Prove that $\la f*g,h\ra=\la g,\check f*h\ra$, where $\check f(x)\colon=\cl{f(x^{-1})}$. Conclude that the convolution operator $A_f:g\mapsto f*g$ is normal iff $\check f*f=f*\check f$ - which is always true for e.g. class functions $f$ - and self-adjoint iff for all $x\in G$: $\bar f(x)=f(x^{-1})$.
\begin{eqnarray*} \la f*g,h\ra &=&\frac1{|G|^2}\sum_{x,y}f(y)g(y^{-1}x)\bar h(x) =\frac1{|G|^2}\sum_{x,y}f(y)g(x)\bar h(yx)\\ &=&\frac1{|G|^2}\sum_x g(x)\sum_y\cl{\check f(y^{-1})h(yx)} =\frac1{|G|^2}\sum_x g(x)\sum_y\cl{\check f(y)h(y^{-1}x)}\\ &=&\frac1{|G|}\sum_x g(x)\cl{\check f*h}(x) =\la g,\check f*h\ra~. \end{eqnarray*}

Character theorem

In the sequel we are going to prove that the characters form actually an orthonormal basis of the center $Z(G)$. First we 'extend' a unitary representation $\Psi:G\rar\UU(E)$ to an algebra homomorphism $L_2(G)\rar\Hom(E)$: For any $f\in L_2(G)$ we define a linear operator $\Psi(f)\in\Hom(E)$ by $$ \Psi(f)\colon=\frac1{|G|}\sum_{x\in G} f(x)\Psi(x) $$ Since any $f\in L_2(G)$ can be written as $f=\sum_{x\in G}f(x)\d_x$, $\Psi(f)$ is just the linear extension of $$ \forall x\in G:\quad\Psi(\d_x)\colon=\frac1{|G|}\Psi(x) $$ to the convolution algebra $L_2(G)$.
  1. For all $f,g\in L_2(G)$: $\Psi(f)\Psi(g)=\Psi(f*g)$, i.e. $f\mapsto\Psi(f)$ is an algebra-homomorphism from $L_2(G)$ into $\Hom(E)$.
  2. If $f\in Z(G)$, then $\Psi(f)\in\Hom_G(\Psi)$
  3. If $f\in Z(G)$ and if $\Psi$ is irreducible with character $\psi$ and dimension $l$, then: $$ \Psi(f)=\frac1l\la f,\bar\psi\ra 1_E \quad\mbox{and in particular}\quad \Psi(l\bar\psi)=1_E~. $$
$\proof$ 1. Putting $z=xy$ i.e. $x=zy^{-1}$ we have by definition: \begin{eqnarray*} \Psi(f)\Psi(g) &=&\frac1{|G|^2}\sum_{x,y\in G} f(x)g(y)\Psi(x)\Psi(y)\\ &=&\frac1{|G|^2}\sum_{z,y\in G} f(zy^{-1})g(y)\Psi(z) =\frac1{|G|}\sum_{z\in G}f*g(z)\Psi(z) =\Psi(f*g)~. \end{eqnarray*} 2. If $f$ is in the center $Z(G)$ of $L_2(G)$, then for all $x\in G$: $f*\d_x=\d_x*f$ and thus by 1.: $\Psi(f)\Psi(x)=\Psi(x)\Psi(f)$. 3. If in addition $\Psi$ is irreducible, then by Schur’s Lemma: $\Psi(f)=\l1_E$, i.e.: $\tr\Psi(f)=\l l$; on the other hand: $$ \tr\Psi(f) =\frac1{|G|}\sum_{h\in G}f(h)\psi(h) =\la f,\bar\psi\ra~. $$ Hence $\l=\la f,\bar\psi\ra/l$. $\eofproof$
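For a specific group and representation the identity in 1. can again be verified numerically. A hedged sage sketch, taking the permutation matrices $g\mapsto g.matrix()$ of $S(3)$ as a stand-in for $\Psi$ and two randomly chosen $f,g\in L_2(G)$:
G = SymmetricGroup(3)
elts, n = G.list(), G.order()
conv = lambda f, g: {x: sum(f[y]*g[y^-1*x] for y in elts)/n for x in elts}
Psi  = lambda f: sum(f[x]*x.matrix() for x in elts)/n        # Psi(f) = (1/|G|) sum f(x) Psi(x)
f, g = [{x: QQ.random_element() for x in elts} for _ in range(2)]
print(Psi(f)*Psi(g) == Psi(conv(f, g)))                      # True: Psi(f)Psi(g) = Psi(f*g)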
Prove that for every conjugacy class $C$ and all $g\in C$: $$ \tr\Psi(I_C)=\frac{|C|}{|G|}\tr\Psi(g)~. $$
Prove that $\Psi(\check f)=\Psi(f)^*$.
Suppose $\Psi:G\rar\UU(E)$ is irreducible and $\dim E=l$. For any $A\in\Hom(E)$ put $f(x)\colon=l\tr(A\Psi(x^{-1}))$. Prove by means of exam that $A=\Psi(f)$ and conclude that $\Psi:L_2(G)\rar\Hom(E)$ is onto, i.e. any operator $A$ is a linear combination of the unitary operators $\Psi(x)$, $x\in G$. Suggested solution
It's not surprising that any operator is a linear combination of unitary operators, for any operator is a linear combination of four unitary operators (cf. the example below); the point is that for any $A\in\Hom(E)$ these unitary operators can be taken from the finite set $\Psi(G)$!
Let $E$ be a (finite dimensional) complex Hilbert-space. 1. If $A\in\Hom(E)$ is self-adjoint and $\norm A\leq1$, then $U\colon=A+i(1-A^2)^{1/2}$ is an isometry and $A=(U+U^*)/2$. 2. Every $A\in\Hom(E)$ is real linear combination of four unitary operators.
Suppose $\Psi:G\rar\UU(E)$ is irreducible. Suppose $A\in\Hom(E)$ satisfies $\tr(A\Psi(g))=0$ for all $g\in G$. Then $A=0$.
Suppose $\Psi:G\rar\UU(E)$ is a representation such that $\Psi:L_2(G)\rar\Hom(E)$ is onto, then $A$ commutes with all $\Psi(x)$, $x\in G$, iff for some $f\in Z(G)$: $A=\Psi(f)$.
Next let's show that the characters form an orthonormal basis for the subspace $Z(G)$ of $L_2(G)$: Assume that there is a function $z\in Z(G)$ orthogonal to all characters. Then for any irreducible representation $\Psi$: $$ \sum_{h\in G}\bar z(h)\Psi(h)=0~. $$ Indeed, the left hand side is in $\Hom_G(\Psi)$ by lemma and since $\Psi$ is irreducible, it must be a multiple $\l$ of the identity. Hence $$ \l\dim(E) =\sum_{h\in G}\bar z(h)\tr\Psi(h) =|G|\la\tr\Psi,z\ra=0, $$ i.e. $\l=0$. Since every representation is the sum of irreducibles, this holds for all representations. In order to show that the characters span the center, it suffices to find a representation $\Psi$, for which the only function $z\in Z(G)$ satisfying $\sum_{h\in G}\bar z(h)\Psi(h)=0$ is the null function. One such representation is the left-regular representation $$ \g:G\to\UU(L_2(G)),\quad g\mapsto\g(g)\colon= L_g,\quad\mbox{where}\quad L_gf(x)\colon=f(g^{-1}x)~. $$ As we've seen in section this is indeed a representation of $G$ in $L_2(G)$. Now suppose that for all $f\in L_2(G)$ and all $x\in G$: $$ 0=\Big(\sum_{h\in G}\bar z(h)\g(h)f\Big)(x) =\sum_{h\in G}\bar z(h)f(h^{-1}x)=|G|\,\bar z*f(x), $$ then choosing $f=\d_e$ we conclude that $\bar z$ and hence $z$ must vanish everywhere. Thus we've established the first assertion of the following
1. The characters form an orthonormal basis of the center $Z(G)$ of the convolution algebra $L_2(G)$. Hence the number $|\wh G|$ of characters coincides with the number of conjugacy classes.
2. Suppose $\psi_m$, $m\in\wh G$, are all characters and let $l_m$ be the dimension of a corresponding irreducible representation $\Psi^m$, then $$ |G|=\sum_{m\in\wh G}l_m^2 $$ 3. Two unitary representations $\Psi$ and $\Phi$ are equivalent iff $\tr\Psi=\tr\Phi$.
4. We have an orthogonal decomposition $$ L_2(G)=\bigoplus{\cal A}_m,\quad\forall m,k:\quad {\cal A}_m*{\cal A}_k=\d_{km}{\cal A}_m, $$ where ${\cal A}_m\colon={\cal A}_{\Psi^m}$ is the subalgebra generated by the coordinate functions of the irreducible representation $\Psi^m$.
5. The orthogonal projection $P_m:L_2(G)\rar{\cal A}_m$ is given by $P_mf=l_m\psi_m*f$.
$\proof$ 2. Let $n_j$ be the multiplicity of the character $\psi_j$ in the left-regular representation $\g$; by \eqref{goteq3} we have: $$ n_j =\la\tr\g,\psi_j\ra =\frac1{|G|}\sum_{g\in G}\tr L_g\cl{\psi_j(g)}~. $$ Now $L_g\d_h=\d_{gh}$, hence $\tr L_e=|G|$ and $\tr L_g=0$ if $g\neq e$; thus we conclude: $n_j=\psi_j(e)=l_j$ and therefore: $|G|=\tr\g(e)=\sum l_j^2$.
3. $\tr\Phi=\sum m_j\psi_j=\sum n_j\psi_j=\tr\Psi$ iff for all $j$: $m_j=n_j$.
4. $L_2(G)$ has dimension $|G|$ and $\bigoplus{\cal A}_m$ has dimension $\sum l_m^2$, thus both coincide.
5. This follows from lemma, the self-adjointness of $P_m$ and the orthogonality of the subalgebras ${\cal A}_m$, $m\in\wh G$. $\eofproof$
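The identity in 2. is easily confirmed by machine; a hedged sage one-liner (the choice of the group is ours):
G = SymmetricGroup(4)                                           # e.g. T_d is isomorphic to S(4)
degrees = [chi.degree() for chi in G.irreducible_characters()]  # the dimensions l_j
print(degrees, sum(d^2 for d in degrees) == G.order())          # sum of squares equals |G|=24: True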
To decide whether a given class function $\chi$ is a character and to determine the characters it is made of, just compute the coefficients $$ \la\chi,\psi_j\ra =\frac1{|G|}\sum_{C_k}c_k\chi(g_k)\bar\psi_j(g_k) $$ where $c_k=|C_k|$ and the sum is over all conjugacy classes $C_k$. E.g. for $G=\Gl(2,\Z_3)$ and a class function $\chi$ with values $-1,-1,-1,-1,0,0,0,0$ on the conjugacy classes you may utilize the following commands in sage:
G=GL(2,3)                                      # the group GL(2,Z_3)
Reps=G.conjugacy_classes_representatives()     # one representative per conjugacy class
len(Reps)                                      # the number of conjugacy classes: 8
chi=ClassFunction(G, [-1,-1,-1,-1,0,0,0,0])    # the class function with the given values
chi.is_irreducible()                           # is chi an irreducible character?
Irr=chi.irreducible_constituents()             # the irreducible characters occurring in chi
list(map(list,Irr))                            # their values on the conjugacy classes
chi.decompose()                                # chi as a combination of irreducible characters
On every commutative group $G$ there are exactly $|G|$ homomorphisms $\chi:G\rar S^1$ and they form an orthonormal basis of $L_2(G)$.
Since the conjugacy classes in $G\times H$ are the products of conjugacy classes in the factors (cf. exam), we get
If $\Psi:G\rar\UU(E)$ and $\Phi:H\rar\UU(F)$ are irreducible, then so is $\Psi\otimes\Phi:G\times H\rar\UU(E\otimes F)$. Moreover, any irreducible representation of $G\times H$ is of this form. Suggested solution.
Find all characters of $C_{3v}\simeq S(3)$.
$C_{3v}$ has three conjugacy classes, hence three characters. Of course, we already know all three irreducible representations, but for the sake of presentation we only take the trivial representation of $C_{3v}$ for granted and seek the remaining two characters $\psi_2$ and $\psi_3$: \begin{array}{c|rrr} C_{3v}&1E&2C_3&3\s\\ \hline \psi_1&1&1&1\\ \psi_2&?&?&?\\ \psi_3&?&?&? \end{array} Since the order of $C_{3v}$ is six we infer from the Character Theorem: $6=l_1^2+l_2^2+l_3^2=1+l_2^2+l_3^2$, which is only possible for $l_2=1$ and $l_3=2$, and thus $\psi_2(E)=1$ and $\psi_3(E)=2$. Since $\psi_2$ is a homomorphism, we must have: $\psi_2(C_3)^3=\psi_2(\s)^2=1$; by orthogonality of characters we also have: $1+2\psi_2(C_3)+3\psi_2(\s)=0$. Only $\psi_2(C_3)=1$ and $\psi_2(\s)=-1$ meet these conditions. By orthogonality of the characters we finally have $$ 0=2+2\psi_3(C_3)+3\psi_3(\s) =2+2\psi_3(C_3)-3\psi_3(\s) $$ which implies $\psi_3(\s)=0$ and $\psi_3(C_3)=-1$. Hence the table of characters is given by: \begin{array}{c|rrr} C_{3v}&1E&2C_3&3\s\\ \hline \psi_1&1&1&1\\ \psi_2&1&1&-1\\ \psi_3&2&-1&0 \end{array} Finally let's look at the restriction of the left-regular representation to the algebra ${\cal A}_\Psi$ of an irreducible representation $\Psi:G\rar\UU(E)$: Since $(\psi_{jk}(g))_{j,k=1}^l$ is the matrix of $\Psi(g)$, we have $$ \forall x,y\in G:\quad \psi_{jk}(yx)=\sum_m\psi_{jm}(y)\psi_{mk}(x)~. $$ Now $\g(y)\psi_{jk}(x)=\psi_{jk}(y^{-1}x)$, so it follows that $$ \psi_{jk}(y^{-1}x) =\sum_m\psi_{jm}(y^{-1})\psi_{mk}(x) =\sum_m\bar\psi_{jm}(y)\psi_{mk}(x) $$ and the character of this representation is $\sum\bar\psi_{jj}$. Moreover for any $f\in L_2(G)$ we have $$ f*\psi_{jk}(x) =\frac1{|G|}\sum_{y\in G} f(y)\psi_{jk}(y^{-1}x) =\frac1{|G|}\sum_{y\in G}\sum_m f(y)\bar\psi_{jm}(y)\psi_{mk}(x) $$ i.e. $f*\psi_{jk}\in{\cal A}_\Psi$ and analogously: $\psi_{jk}*f\in{\cal A}_\Psi$. Therefore we get the following
Let $\Psi^m$, $m\in\wh G$, be any irreducible representation with coordinate functions $\psi_{jk}^m$, $j,k=1,\ldots,l_m$ and dimension $l_m$. Then ${\cal A}_m$ is an ideal in $L_2(G)$, i.e. for all $f\in L_2(G)$ and all $\psi\in{\cal A}_m$: $f*\psi,\psi*f\in{\cal A}_m$. Moreover, for all $k=1,\ldots,l_m$ the spaces $$ \ell_k^m\colon=\lhull{\psi_{1k}^m,\ldots,\psi_{l_mk}^m} $$ are invariant under the left-regular representation $\g$ and $\g|\ell_k^m$ is equivalent to the contragredient representation $\bar\Psi^m$. Thus the restriction of the left-regular representation to ${\cal A}_m$ is the sum of $l_m$ irreducible representations all equivalent to the contragredient representation $\bar\Psi^m$.
Show that $\ell_k^m$ is a left ideal in $L_2(G)$, i.e. for all $f\in L_2(G)$: $f*\ell_k^m\sbe\ell_k^m$.
Verify that for all $f,\vp\in L_2(G)$: $$ f*\vp=\frac1{|G|}\sum_{y\in G} f(y)\g(y)\vp $$ i.e. the convolution operator $A_f:L_2(G)\rar L_2(G)$, $A_f\vp\colon=f*\vp$ is an average over the left-regular representation with weight function $f$. If $\vp$ is an eigen-function of all operators $\g(y)$, $y\in G$, with eigen-value $\l(y)$, then $\vp$ is an eigen-function of $A_f$ with eigen-value $\tfrac1{|G|}\sum f(y)\l(y)$. Prove that every homomorphism $\vp:G\rar S^1$ is an eigen-function of all operators $\g(y)$ for the eigen-value $\vp(y^{-1})$.
We know that conjugacy classes of the symmetric group $S(n)$ can be identified with the set of partitions of $n$ (cf. subsection) and therefore we can also identify each character of $S(n)$ with a partition $p$. The celebrated Frobenius Character Formula assigns to each partition $p$ a character $\chi_p$ in a one-one way.

Another form of orthogonality

In general \eqref{caleq3} is the basic tool for identifying characters. However, also the following corollary (compare exam) may help to find the character table: expanding the indicator function $I_{C(g)}$ of the conjugacy class $C(g)$ of $g$ we get: $$ I_{C(g)}(h) =\sum_{m\in\wh G}\la I_{C(g)},\psi_m\ra\psi_m(h) =\frac1{|G|}\sum_{m\in\wh G}\sum_{x\in C(g)}\bar\psi_m(x)\psi_m(h) =\frac{|C(g)|}{|G|}\sum_{m\in\wh G}\bar\psi_m(g)\psi_m(h) $$ where $\psi_m$, $m\in\wh G$, are the characters of $G$. Putting for any pair $(A,B)$ of subsets of $G$: $$ \d(A,B) =\left\{\begin{array}{cl} 1&\mbox{if $A=B$}\\ 0&\mbox{otherwise} \end{array}\right. $$ we thus get the following
For all $g,h\in G$: $$ \sum_{m\in\wh G}\psi_m(g)\bar\psi_m(h) =\d(C(g),C(h))\frac{|G|}{|C(g)|}\in\N_0~. $$ Equivalently: the columns of the character table are orthogonal vectors in $\C^N$ with its canonical complex euclidean product. In particular: $$ \forall g\neq e:\quad\sum_{m\in\wh G}l_m\psi_m(g)=0~. $$
Prove the following generalization of the relation $\sum l_j^2=|G|$: $$ \forall g\in G:\quad\sum_{m\in\wh G}|\psi_m(g)|^2=|Z(g)|~. $$
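Both the corollary and this exercise are quick machine checks for any concrete group; a hedged sage sketch for $C_{3v}\simeq S(3)$:
G = SymmetricGroup(3)                    # i.e. C_3v
M = G.character_table()                  # rows: characters, columns: conjugacy classes
print(M.conjugate().transpose()*M)       # diagonal matrix; the diagonal entries are the centralizer orders |Z(g)|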
Let $C_m$, $m\in\wh G$, be the conjugacy classes, $c_m=|C_m|$, $g_m\in C_m$ and $\Psi^m:G\rar\UU(E_m)$ a complete set of pairwise inequivalent irreducible representations of dimension $l_m$ with character $\psi_m$. Then for all $m,k\in\wh G$: $$ \Psi^m(I_{C_k})=\frac{c_k\psi_m(g_k)}{|G|l_m}\,1_{E_m} \quad\mbox{and}\quad \tr\Psi^m(I_{C_k})=\frac{c_k\psi_m(g_k)}{|G|} $$
Since $I_{C_k}$ is a class function we have by lemma: $$ \Psi^m(I_{C_k}) =\frac1{l_m}\la I_{C_k},\bar\psi_m\ra\,1_{E_m} =\frac{c_k}{|G|l_m}\psi_m(g_k)\,1_{E_m}~. $$
For all subsets $A$ and $B$ of $G$ we have: $$ I_A*I_B(g)=\frac{|A\cap gB^{-1}|}{|G|} $$
Find all characters of $S(3)$ assuming only the trivial character.
Find all characters of $A(4)$ by employing orthogonality of the columns of the character table and orthogonality of characters.
$A(4)$ has order $12$ and $4$ conjugacy classes, representatives thereof are $E=(1234)$, $A=(2143)$, $B=(2314)$ and $C=(2431)$ and sizes: $1,3,4,4$. Next we seek integer solutions to the equation $l_1^2+l_2^2+l_3^2+l_4^2=12$ and get $l_1=l_2=l_3=1$ and $l_4=3$. Putting $\psi_1(g)=1$ we need to find two homomorphisms $\psi_2,\psi_3:A(4)\rar S^1$. Since the order of $E,A,B$ and $C$ is $1,2,3$ and $3$, we must have $$ \psi_j(A)^2=1, \psi_j(B)^3=\psi_j(C)^3=1~. $$ Finally the standard representation $\Psi_4:A(4)\rar E\colon=[e_1+e_2+e_3+e_4]^\perp$ is irreducible (cf. exam) and it's easily checked that $\tr\Psi_4(A)=-1$ and $\tr\Psi_4(B)=\tr\Psi_4(C)=0$. Thus we have $$ \begin{array}{c|rccc} A(4)&1E&3A&4B&4C\\ \hline \psi_1&1&1&1&1\\ \psi_2&1&\psi_2(A)&\psi_2(B)&\psi_2(C)\\ \psi_3&1&\psi_3(A)&\psi_3(B)&\psi_3(C)\\ \psi_4&3&-1&0&0\\ \end{array} $$ By orthogonality of characters: $3-3\psi_2(A)=0$ and $3-3\psi_3(A)=0$, i.e. $\psi_2(A)=\psi_3(A)=1$. By orthogonality of columns: $\psi_2(B)+\psi_3(B)=-1$ and $\psi_2(C)+\psi_3(C)=-1$; since $\psi_j(B)^3=\psi_j(C)^3=1$, this implies: $\psi_2(B)=e^{2\pi i/3}$ and $\psi_3(B)=e^{-2\pi i/3}$ or conversely: $$ \begin{array}{c|rrcc} A(4)&1E&3A&4B&4C\\ \hline \psi_1&1&1&1&1\\ \psi_2&1&1&e^{-2\pi i/3}&e^{2\pi i/3}\\ \psi_3&1&1&e^{2\pi i/3}&e^{-2\pi i/3}\\ \psi_4&3&-1&0&0\\ \end{array} $$
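The table can be double checked in sage - note that the ordering of the rows and columns may differ from ours:
A4 = AlternatingGroup(4)
for g in A4.conjugacy_classes_representatives():
    print(g, len(A4.conjugacy_class(g)))          # class representatives and their sizes
print(A4.character_table())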
Determine all characters of the representations of $T_d$ in ${\cal H}_l$ for $l=0,1,2$ discussed in subsection. Suggested solution
Find all characters of $T_d\simeq S(4)$ by employing orthogonality of the columns of the character table and the previous example. Suggested solution

Projection theorem

This theorem will produce SALCs among other things. Before stating the theorem, we need to discuss the components we are projecting onto: the isotypic components. Given any finite dimensional unitary representation $\Psi:G\rar\UU(E)$, we can decompose $\Psi$ into irreducible subrepresentations $\Psi_j:G\rar\UU(E_j)$. For any irreducible representation $\Phi$ of $G$ with character $\chi$ the isotypic component of $\Psi$ (featuring the character $\chi$) is the sum of all irreducible subrepresentations $\Psi_j$ equivalent to $\Phi$ - that's the way we previously defined the isotypic components in subsection. From this definition it's not at all clear that the decomposition of $E$ into its isotypic components does not depend on the decomposition into irreducible subspaces and thus it's not clear if the decomposition of $E$ into isotypic components is unique.
Suppose $E=\sum E_j$ and $\Psi|E_j$ is irreducible. If $F$ is another irreducible subspace of $E$, then $\Psi|F$ is equivalent to one of $\Psi|E_j$.
$\proof$ Consider the inclusion $J:F\rar E$ and the orthogonal projection $P_j:E\rar E_j$. Then $J\Psi(g)=\Psi(g)J$ and $P_j\Psi(g)=\Psi(g)P_j$, and thus $P_jJ:F\rar E_j$ is intertwining. Since $F$ and $E_j$ are irreducible we infer from Schur's Lemma that $P_jJ$ is either an isomorphism or the zero map. If it were zero for all $j$, then $J=0$, which is impossible; hence $P_jJ$ is an isomorphism for some $j$, i.e. $\Psi|F$ is equivalent to $\Psi|E_j$. $\eofproof$
From this lemma it follows that the isotypic component is the sum of all irreducible subrepresentations with the same character. Indeed, let $E=\bigoplus E_j$ be a decomposition in irreducible subrepresentations and $F_0$ an isotypic component for the character $\chi$, then $F_0$ is a subspace of the sum $F_1$ of all irreducible subrepresentations with character $\chi$. By lemma any irreducible subrepresentation of $\Psi|F_1$ features the character $\chi$ and therefore $\Psi|F_1^\perp$ must comprise all the other irreducible representations, i.e. $\dim F_1^\perp=\dim F_0^\perp$ and thus $\dim F_1=\dim F_0$.
It immediately follows that the decomposition into isotypic components does not depend on the decomposition into irreducible subspaces and this in turn implies that the decomposition into isotypic components is unique. However, the decomposition of an isotypic component into its irreducible subspaces is not unique.
If $\Psi:G\rar\UU(E)$ is irreducible and equivalent to $\Phi:G\rar\UU(F)$, i.e. $\Phi(g)=J\Psi(g)J^{-1}$ for some isomorphism $J:E\rar F$, then $H\colon=\{x\oplus Jx:x\in E\}$ is an irreducible subspace of $\Psi\oplus\Phi$ and $H^\perp=\{-J^*y\oplus y:y\in F\}$.
Let $\Psi$ be a finite dimensional unitary representation of $G$. Then the orthogonal projection $P_m$ onto the isotypic component (with character $\chi_m$) is given by: $$ P_m\colon =l_m\Psi(\bar\chi_m) =\frac{l_m}{|G|}\sum_{h\in G}\cl{\chi_m(h)}\Psi(h) =l_m\sum_{j=1}^N\cl{\chi_m(h_j)}\Psi(I_{C_j}) $$ where $h_j\in C_j$ and $C_1,\ldots,C_N$ are all pairwise disjoint conjugacy classes.
$\proof$ 1. First of all $$ \tr P_j =\frac{l_j}{|G|}\sum_{h\in G}\cl{\chi_j(h)}\tr\Psi(h) =l_j\la\tr\Psi(.),\chi_j\ra =l_jn_j $$ is the dimension of the isotypic component; secondly $\chi(h^{-1})=\bar\chi(h)$ and thus $$ P_j^* =\frac{l_j}{|G|}\sum_{h\in G}\chi_j(h)\Psi(h)^* =\frac{l_j}{|G|}\sum_{h\in G}\bar\chi_j(h^{-1})\Psi(h^{-1}) =P_j~. $$ Finally we infer from lemma and \eqref{caleq3} that: $$ P_kP_j =l_jl_k\Psi(\bar\chi_j*\bar\chi_k) =l_j\d_{jk}\Psi(\bar\chi_j) =\d_{jk}P_j~. $$ Thus the operators $P_j$ are pairwise orthogonal projections onto subspaces of dimension $l_jn_j$. Put $E_j\colon=\im P_j$; since $\bar\chi_j$ is a class function, we have by lemma: $\Psi(g)P_j=P_j\Psi(g)$. Define a representation $\Psi_j(g)\colon=\Psi(g)|E_j$ of $G$ in $E_j$; for all $k\neq j$ we have by \eqref{caleq3}: \begin{eqnarray*} \la\tr\Psi_j(.),\chi_k\ra &=&\la\tr\Psi(.)P_j,\chi_k\ra =\la\tr(P_j\Psi(.)),\chi_k\ra\\ &=&\frac1{|G|}\sum_g\tr(P_j\Psi(g))\cl{\chi_k(g)} =\frac{l_j}{|G|^2}\sum_{g,h\in G}\cl{\chi_j(h)}\cl{\chi_k(g)}\tr\Psi(hg)\\ &=&\frac{l_j}{|G|}\sum_g\cl{\chi_j*\chi_k}(g)\tr\Psi(g)=0 \end{eqnarray*} showing that any irreducible subrepresentation of $\Psi_j$ must have character $\chi_j$. By lemma the isotypic component $F_j$ is the sum of all irreducible representations with character $\chi_j$, i.e. $E_j$ is a subspace of $F_j$. Finally both spaces have the same dimension $l_jn_j$. $\eofproof$
In case $\Psi$ is the left-regular representation $\g$ the isotypic components coincide with the algebras ${\cal A}_j$, $j\in\wh G$. Since the associated characters are the characters of contragredient representations, the orthogonal projection of $f$ at the point $x\in G$ is given by $$ \frac{l_j}{|G|}\sum_{h\in G}\chi_j(h)\g(h)(f)(x) =\frac{l_j}{|G|}\sum_{h\in G}\chi_j(h)f(h^{-1}x) =\frac{l_j}{|G|}\sum_{y\in G}\chi_j(xy^{-1})f(y) =(l_j\chi_j)*f(x), $$ i.e. we recover that $f\mapsto l_j\chi_j*f$ is the orthogonal projection onto ${\cal A}_j$.
Given a representation $\Psi:G\rar\UU(E)$. If $f\in L_2(G)$ is a class function such that $f*f=f$, then $\Psi(f)$ is a projection. If in addition $f(x^{-1})=\cl{f(x)}$, then $\Psi(f)$ is self-adjoint and thus it's an orthogonal projection.
The summation over all group elements makes the formula for the projections $P_m$ on the isotypic components in Theorem laborious in applications; on the other hand, constructing SALCs is pretty much straightforward: For $j\in\{1,\ldots,N\}$ put $d_j\colon=n_jl_j$. We choose an orthonormal basis $e_1,\ldots,e_d$ of $E$ ($d=\dim E$) and calculate the vectors $P_je_1,\ldots,P_je_d$; for each $j$ there are exactly $d_j$ linearly independent vectors among them. Using these vectors we invoke Gram-Schmidt and end up with orthonormal bases of the isotypic components $F_j\colon=E_j^1\oplus\cdots\oplus E_j^{n_j}$, which give us the required SALCs. Once we have got SALCs of a representation $\Psi$, the eigen-spaces of any Hamiltonian $H$ commuting with all $\Psi(g)$ can be computed more easily: if all irreducible subrepresentations are pairwise inequivalent, i.e. $n_j\in\{0,1\}$, then the SALCs $b_1,\ldots$ constructed above are in fact eigen-vectors of $H$ and the corresponding eigen-value is $\la Hb_k,b_k\ra$. If e.g. $n_j > 1$, then we only have to find the eigen-vectors of the operators $H|F_j$, which in general is much easier, for the spaces $F_j$ are way smaller than $E$; moreover $H|F_j$ has at most $n_j$ pairwise distinct eigen-values by corollary and the matrix of $H$ with respect to SALCs is of a very specific form, cf. exam.
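The procedure just described is easily automated. Here is a minimal sage sketch, assuming the representation is handed to us as the map $g\mapsto g.matrix()$ of a permutation group, i.e. as a site representation by permutation matrices - for any other representation simply replace that map:
G = SymmetricGroup(3)                                 # stand-in for the symmetry group
rep = lambda g: g.matrix()                            # the site representation
for chi in G.irreducible_characters():
    l = chi.degree()
    P = (l/G.order())*sum(chi(g).conjugate()*rep(g) for g in G)
    assert P*P == P                                   # P is the projection onto the isotypic component
    print(chi.values(), P.column_space().basis())     # a basis of the isotypic component; Gram-Schmidt yields the SALCs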

Some Examples

Some standard representations

The standard representation of $C_{3v}\simeq S(3)$ has the following traces: $\tr(E)=3$, $\tr(C_3)=0$, $\tr(\s_1)=1$. Comparing with the table of characters of $C_{3v}$ shows that the representation is not irreducible; its trace is the sum of two characters: $\chi_1+\chi_3$.
The standard representation of $T_d$ has the following traces: $\tr(E)=3$, $\tr(C_3)=0$, $\tr(C_2)=-1$, $\tr(S_4)=-1$ and $\tr(\s)=1$. A comparison with the table of characters of $T_d$ shows that the standard representation is irreducible and its character is $\chi_5$.
The standard representation of $O_h$ has the following traces: $\tr(E)=3$, $\tr(C_3)=0$, $\tr(C_2)=-1$, $\tr(C_4)=1$, $\tr(C_4^2)=-1$, $\tr(I)=-3$, $\tr(S_4)=-1$, $\tr(S_6)=0$, $\tr(\s_1)=1$ and $\tr(\s_2)=1$. A comparison with the table of characters of $O_h$ shows that the standard representation is irreducible and its character is $\chi_9$.
What are the characters of the irreducible subrepresentations of the tensor product $\Psi$ of the standard representation of $T_d$ with itself?
Since the trace of the tensor product is the product of the traces, we get: $\tr(E)=9$, $\tr(C_3)=0$, $\tr(C_2)=1$, $\tr(S_4)=1$ and $\tr(\s)=1$. By the table of characters of $T_d$ we get for the multiplicities of the irreducible representations: $n_1=(9+0+3+6+6)/24=1$, $n_2=(9+0+3-6-6)/24=0$, $n_3=(18+0+6+0+0)/24=1$, $n_4=(27-3+6-6)/24=1$ and $n_5=(27-3-6+6)/24=1$, i.e. the trace of the representation is the sum of the characters $\chi_1+\chi_3+\chi_4+\chi_5$.
If we want to determine the irreducible two dimensional representation $\Psi_3$ featuring the character $\chi_3$, we need to calculate the orthogonal projection $P_3$ onto a two dimensional subspace $E$ of $\R^9$. It's tedious, because it's a weighted sum over $24$ $9$ by $9$ matrices, but it can be accomplished by the Projection Theorem: $$ P_3 =2\sum_{j=1}^5\bar\chi_3(g_j)\Psi(I_{C_j}) =\frac1{12}\sum_{j=1}^5\bar\chi_3(g_j)\sum_{g\in C_j}g\otimes g~. $$ Suppose $\Psi:G\rar\UU(n)$ is a representation of a finite group $G$, then $\Psi(G)$ is a subgroup of $\UU(n)$ isomorphic to the quotient $G/\ker\Psi$ and the standard representation of $\Psi(G)$ is 'the same' as $\Psi$. Therefore in the formula of the Projection Theorem only the characters that are also characters of $G/\ker\Psi\simeq\Psi(G)$ are essential and all other projections must vanish, i.e. if $\chi$ is a character of $G$ but not a character of $G/\ker\Psi\simeq\Psi(G)$, then the corresponding projection is zero. We only need the characters of the matrix-group $\Psi(G)$. Well, if $\Psi$ is faithful this doesn't give you any advantage, but it might facilitate your handling the problem by a computer algebra system.
Use a computer algebra program to compute the projections onto the isotypic components. Suggested solution.
As we've just realized computing the trace of a tensor product is pretty easy, thus tensorization can in turn be useful to detect new characters: suppose we already knew the characters $\chi_1,\chi_4$ and $\chi_5$ in exam, then tensorization would deliver a new candidate: $$ f\colon=(\tr\Psi)^2-\chi_1-\chi_4-\chi_5 $$ In general this must be a compound character. To verify that it's in fact a character, we simply have to compute it's norm: if it's norm is $1$, we are done. In our case we of course know that it's the character $\chi_3$, but you can easily guess the outline of the general construction!

Representations in ${\cal H}_l$

Every group $G$ of order $|G|$ is a subgroup of $\OO(n)$ for some $n$ - by Cayley's Theorem and exam we know that this is always possible for $n\geq|G|-1$. Hence by restricting the representation $\G:\OO(n)\rar\Gl({\cal H}_l)$ (cf. subsection) of $\OO(n)$ in the space ${\cal H}_l$ of all harmonic and $l$-homogeneous polynomials on $\R^n$ we get a representation of $G$ in ${\cal H}_l$: $$ \forall g\in\OO(n)\forall P\in{\cal H}_l\quad \G(g)P\colon=P\circ g^{-1}~. $$ For $n=3$ the spaces ${\cal H}_l$ are discussed in section and it's not that difficult to guess what the results look like in general.
Find unitary representations of $S(3)$ in ${\cal H}_l$ for $l=1,2$ and $n=3$.
Find (not necessarily unitary) representations of $C_{mv}$ in ${\cal H}_l$ for $l\in\N$ and $n=2$. Suggested solution.

The ammonia molecule

We return to the site representation of the ammonia molecule $\chem{NH_3}$ discussed initially: we have the following conjugacy classes: $C(E)=\{E\}$, $C(C_3)=\{C_3,C_3^*\}$ and $C(\s_1)=\{\s_1,\s_2,\s_3\}$; for the trace of the representation $\Psi$ and the three characters we get $$ \begin{array}{c|rrr} C_{3v}&1E&2C_3&3\s_1\\ \hline \tr\Psi&4&1&2\\ \hline \chi_1&1&1&1\\ \chi_2&1&1&-1\\ \chi_3&2&-1&0 \end{array} $$ thus by \eqref{goteq3}: $n_1=(1.4.1+2.1.1+3.2.1)/6=2$, $n_2=(1.4.1+1.2.1-3.2.1)/6=0$ and $n_3=(1.4.2-2.1.1)/6=1$. The multiplicities of the irreducible subrepresentations are therefore $2,0$ and $1$. The next table exhibits the values of $\Psi(g)$ at the basis elements $e_0,e_1,e_2$ and $e_3$: $$ \begin{array}{c|cccc} \Psi&e_0&e_1&e_2&e_3\\ \hline E&e_0&e_1&e_2&e_3\\ C_3&e_0&e_2&e_3&e_1\\ C_3^*&e_0&e_3&e_1&e_2\\ \s_1&e_0&e_1&e_3&e_2\\ \s_2&e_0&e_3&e_2&e_1\\ \s_3&e_0&e_2&e_1&e_3 \end{array} $$ From this and the Projection Theorem we get the orthogonal projection onto the isotypic component with character $\chi_1$: $P_1=\tfrac16\sum\chi_1(g)\Psi(g)$, i.e. $P_1e_j$ is the sum of the entries of the column headed by $e_j$ divided by $6$, i.e.: $e_0$ and $(e_1+e_2+e_3)/3$; normalizing gives: $e_0$ and $(e_1+e_2+e_3)/\sqrt3$. $P_2=0$ and for $P_3=\tfrac26\sum\chi_3(g)\Psi(g)$ we get: $P_3e_0=\tfrac13(2e_0-e_0-e_0)=0$, $P_3e_1=\tfrac13(2e_1-e_2-e_3)$, $P_3e_2=\tfrac13(2e_2-e_3-e_1)$ and $P_3e_3=\tfrac13(2e_3-e_1-e_2)$; this gives us two linearly independent vectors and after normalizing we get e.g. $(2e_1-e_2-e_3)/\sqrt6$ and $(e_2-e_3)/\sqrt2$. A Hamiltonian $H$, which commutes with all $\Psi(g)$ must have $\lhull{e_0,(e_1+e_2+e_3)/\sqrt3}$ as invariant subspace and the vectors $(2e_1-e_2-e_3)/\sqrt6$ and $(e_2-e_3)/\sqrt2$ are eigen-vectors with the same eigen-value.

Vibrations of $\chem{H_2O}$

We resume the representation $\Psi$ of $C_{2v}$ given in example. As for the trace of $\Psi(g)$ we only get a non-zero contribution from those atoms which remain invariant under $g$; the number of these atoms is the number of fixed points $Fix(g)$ of the corresponding permutation. Thus $$ \tr\Psi(g)=Fix(g)\tr(g)~. $$ By comparison with the table of characters $$ \begin{array}{c|cccc} C_{2v}&1E&1C_2&1\s_1&1\s_2\\ \hline \tr\Psi&6&0&0&2\\ \hline \chi_1&1& 1& 1& 1\\ \chi_2&1& 1&-1&-1\\ \chi_3&1&-1& 1&-1\\ \chi_4&1&-1&-1& 1 \end{array} $$ we get by \eqref{goteq3} for the multiplicities: $n_1=(6+0+0+2)/4=2$, $n_2=(6+0-0-2)/4=1$, $n_3=(6-0+0-2)/4=1$ and $n_4=(6-0-0+2)/4=2$. Also because of the commutativity of $C_{2v}$: $l_1=l_2=l_3=l_4=1$; finally by the Projection Theorem: $$ 4P_j=\chi_j(E)\Psi(E)+\chi_j(C_2)\Psi(C_2)+ \chi_j(\s_1)\Psi(\s_1)+\chi_j(\s_2)\Psi(\s_2)~. $$ Thus we get for $j=1$: $$ 4P_1=\left(\begin{array}{cccccc} 0 & 0& 0& 0& 0& 0\\ 0 & 2& 0& 0&-2& 0\\ 0 & 0& 2& 0& 0& 2\\ 0 & 0& 0& 0& 0& 0\\ 0 &-2& 0& 0& 2& 0\\ 0 & 0& 2& 0& 0& 2 \end{array}\right) $$ the image of which is spanned by the SALCs: $(e_2-e_5)/\sqrt2,(e_3+e_6)/\sqrt2$. Similarly: $$ 4P_2=\left(\begin{array}{cccccc} 2 & 0& 0&-2& 0& 0\\ 0 & 0& 0& 0& 0& 0\\ 0 & 0& 0& 0& 0& 0\\ -2& 0& 0& 2& 0& 0\\ 0 & 0& 0& 0& 0& 0\\ 0 & 0& 0& 0& 0& 0 \end{array}\right), 4P_3=\left(\begin{array}{cccccc} 2 & 0& 0& 2& 0& 0\\ 0 & 0& 0& 0& 0& 0\\ 0 & 0& 0& 0& 0& 0\\ 2 & 0& 0& 2& 0& 0\\ 0 & 0& 0& 0& 0& 0\\ 0 & 0& 0& 0& 0& 0 \end{array}\right), 4P_4=\left(\begin{array}{cccccc} 0 & 0& 0& 0& 0& 0\\ 0 & 2& 0& 0& 2& 0\\ 0 & 0& 2& 0& 0&-2\\ 0 & 0& 0& 0& 0& 0\\ 0 & 2& 0& 0& 2& 0\\ 0 & 0&-2& 0& 0& 2 \end{array}\right) $$ which gives us the SALCs: $(e_1-e_4)/\sqrt2,(e_1+e_4)/\sqrt2,(e_2+e_5)/\sqrt2$ and $(e_3-e_6)/\sqrt2$; both $(e_1-e_4)/\sqrt2$ and $(e_1+e_4)/\sqrt2$ are eigen-vectors of any Hamiltonian $H$ commuting with all $\Psi(g)$. Moreover $H$ has two two-dimensional invariant spaces generated by $(e_2-e_5)/\sqrt2,(e_3+e_6)/\sqrt2$ and $(e_2+e_5)/\sqrt2,(e_3-e_6)/\sqrt2$.

Vibrations of methane $\chem{CH_4}$

In section we found a representation $\Psi$ of $T_d\simeq S(4)$ in $\R^{12}$; this time we also add the central atom and get a representation in $\R^{15}$. Now we want to find the characters of the irreducible subrepresentations. Again $\Psi(g)$ only delivers a non-zero contribution to the trace from those atoms which remain invariant under $g$. Now $T_d$ has $5$ conjugacy classes and the aforementioned symmetries $E,C_3,C_2,S_4,\s$ form a complete set of representatives. We obviously have: $Fix(E)=5$, $Fix(C_3)=2$, $Fix(C_2)=1$, $Fix(S_4)=1$, $Fix(\s)=3$ and $\tr(E)=3$, $\tr(C_3)=0$, $\tr(C_2)=-1$, $\tr(S_4)=-1$, $\tr(\s)=1$, thus, in comparison to the table of characters of $T_d$: $$ \begin{array}{r|rrrrr} T_d&1E&8C_3&3C_2&6S_4&6\s\\ \hline \tr\Psi&15& 0&-1&-1& 3\\ \hline \chi_1 & 1& 1& 1& 1& 1\\ \chi_2 & 1& 1& 1&-1&-1\\ \chi_3 & 2&-1& 2& 0& 0\\ \chi_4 & 3& 0&-1& 1&-1\\ \chi_5 & 3& 0&-1&-1& 1 \end{array} $$ Again by \eqref{goteq3} we infer that the multiplicities $n_j$ of the irreducible subrepresentations with character $\chi_j$ are given by: $n_1=1$, $n_2=0$, $n_3=1$, $n_4=1$, $n_5=3$, i.e. $$ \tr\Psi=\chi_1+\chi_3+\chi_4+3\chi_5~. $$ Thus $H$ may have at most $1+0+1+1+3=6$ distinct eigen-values. Since $\chi_1(E)=1$, $\chi_3(E)=2$, $\chi_4(E)=3$ the first of these eigen-values is simple, the second is two-fold and the third is three-fold. The remaining space has dimension $9$ and $H$ can have only three different eigen-values on this space. The computation of the SALCs is tedious, since we have to average $4!=24$ matrices, but it's straightforward.
Use a computer algebra system to determine the isotypic projections of the previous example.

Molecular orbitals of $\chem{H_2O}$

We continue on the representation $\Psi$ of $C_{2v}$ in $\C^6$ in subsection. The molecule is assumed to lie in the $yz$-plane and the $y$-direction points from the left to the right hydrogen atom. This time we have $\tr\Psi(E)=6$, $\tr\Psi(C_2)=0$, $\tr\Psi(\s_1)=4$, $\tr\Psi(\s_2)=2$. For the multiplicities of the irreducible representations featuring character $\chi_j$ we get by \eqref{goteq3}: $$ n_1=(1\cdot6+1\cdot4+1\cdot2)/4=3, n_2=0, n_3=2, n_4=1 $$ We therefore infer that $\Psi$ is built up of three irreducible subrepresentations with character $\chi_1$, two irreducible subrepresentations with character $\chi_3$ and one irreducible subrepresentation with character $\chi_4$. In order to get the SALCs we have to calculate the projections $P_j$ onto the isotypic components for $j\in\{1,3,4\}$, which can be determined by means of the Projection Theorem: $$ P_j=\tfrac14\sum_{g\in C_{2v}}\cl{\chi_j(g)}\Psi(g)~. $$ For the matrix of $4P_1$ we get: $$ \left(\begin{array}{cccccc} 2&2&0&0&0&0\\ 2&2&0&0&0&0\\ 0&0&4&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&4 \end{array}\right) $$ This gives us three SALCs: $b_1=(s_1+s_2)/\sqrt2$, $b_2=s_O$ and $b_3=p_3$. Remember, the basis vectors are the atomic orbitals $s_O,p_1,p_2$ and $p_3$ of oxygen and the two orbitals $s_1$ and $s_2$ of the hydrogen atoms (cf. subsection). Similarly the projection $P_3$ gives us two orthonormal SALCs: $b_4=(s_1-s_2)/\sqrt2$ and $b_5=p_2$ and finally we get from $P_4$: $b_6=p_1$. A molecular orbital is simply an element of one of the isotypic components. In case of $\chem{H_2O}$ all molecular orbitals featuring character $\chi_1$ are of the form: $a_1(s_1+s_2)/\sqrt2+a_2s_O+a_3p_3$, all molecular orbitals featuring the character $\chi_3$: $a_4(s_1-s_2)/\sqrt2+a_5p_2$ and there is only one molecular orbital featuring the character $\chi_4$: $p_1$. Hence the $p_1$ orbital of the central oxygen is a stable state of any Hamiltonian $H$ commuting with $\Psi(g)$, $g\in C_{2v}$. Another two stable states of $H$ form an orthonormal basis of $\lhull{b_4,b_5}$ and the remaining three form an orthonormal basis of $\lhull{b_1,b_2,b_3}$.
Suppose we have a hypothetical molecule $A_6$ with symmetry group $O_h$. How many different eigen-values may a Hamiltonian $H\in\Hom(\C^6)$ have if it commutes with all the operators of the site representation $\Psi$ of the molecule? What are the dimensions of the eigen-spaces?
The symmetries permute the atoms $1,2,3,4,5,6$ at the positions $e_1,e_2,e_3,-e_1,-e_2,-e_3$ as follows: $E:(123456)$, $C_3:(231564)$, $C_2:(432165)$, $C_4:(135462)$, $C_4^2:(156423)$, $I:(456123)$, $S_4:(435162)$, $S_6:(234561)$, $\s_1:(126453)$, $\s_2:(213546)$. This gives us the series of traces: $6,0,0,2,2,0,0,0,4,2$ and therefore the multiplicities of the irreducible representations: $n_1=n_3=n_9=1$ and all others are zero. Hence the Hamiltonian has at most three different eigen-values and the dimensions of the corresponding eigen-spaces are $1,2$ and $3$. Again, the SALCs are a bit tedious to compute!
Nonetheless it's instructive to compute them. Use a computer algebra system! Suggested solution in sage and its output.
We consider the molecular orbitals of benzene $\chem{C_6H_6}$ as linear combinations of the orbitals of six carbon atoms. Each of these atoms has four atomic orbitals, one $s$ orbital and three $p$ orbitals. Use a computer algebra system to find the multiplicities and the isotypic components of the representation of the symmetry group $C_{6v}\times\Z_2$ in $\R^6\otimes\R^4\simeq\R^{24}$. Suggested solution in sage and its output.
Given a group representation $\Psi:G\rar\OO(n)$ and the table of characters of $G$, sketch a computer program that determines the isotypic projections. Of course, you may employ any computer algebra program! Suggested solution.

Perturbations of Hamiltonians

Suppose $G_0$ is the group of symmetries of a Hamiltonian $H_0$ and we have got another Hamiltonian $H$, whose group of symmetries is assumed to be a subgroup $G$ of $G_0$. Since restrictions of irreducible representations of $G_0$ to $G$ are direct sums of irreducible representations of $G$, eigen-spaces of $H_0$ are generally split into several eigen-spaces of the perturbation $H$. In QM that's the explanation of the Stark effect and the Zeeman effect.
The operator $P_{ex}\in\Hom(\C^2\otimes\C^2)$, $P_{ex}:e_j\otimes e_k\mapsto e_k\otimes e_j$, $j,k\in\{1,2\}$, is called the exchange operator (cf. exam). Verify that for all $U\in\Hom(\C^2)$ the operator $U\otimes U$ commutes with $P_{ex}$. Therefore the symmetry group of $P_{ex}$ contains the copy $\{U\otimes U:\,U\in\UU(2)\}$ of $\UU(2)$. Show that $H_0\colon=A(2P_{ex}-1)$ has eigen-values $-3A,A,A,A$. $H_0$ is called the Hamiltonian of a spin $1/2$ couple. The Hamiltonian of this couple in a magnetic field along the third axis is given by: $$ H\colon=H_0+\mu_1B(\s_3\otimes1)+\mu_2B(1\otimes\s_3), $$ where $\s_j$ denote the Pauli spin operators (cf. subsection) and $B,\mu_1,\mu_2\in\R$ the strength of the magnetic field and the magnetic moments of the particles. Compute the eigen-values $E_1,E_2,E_3,E_4$ of $H$. Suggested solution.
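A hedged sage sketch of this computation; the symbols (we write $m_1,m_2$ for $\mu_1,\mu_2$) and the ordering of the basis $e_1\otimes e_1,e_1\otimes e_2,e_2\otimes e_1,e_2\otimes e_2$ are our choice:
var('A B m1 m2')                                              # m1, m2 stand for mu_1, mu_2
s3  = matrix(SR, [[1,0],[0,-1]])                              # the Pauli matrix sigma_3
one = identity_matrix(SR, 2)
Pex = matrix(SR, [[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]])   # the exchange operator
H0  = A*(2*Pex - identity_matrix(SR, 4))
H   = H0 + m1*B*s3.tensor_product(one) + m2*B*one.tensor_product(s3)
print(H0.eigenvalues())                                       # [-3*A, A, A, A]
print(H.eigenvalues())                                        # the eigen-values E_1,...,E_4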
Verify that $H_0=A(\s_1\otimes\s_1+\s_2\otimes\s_2+\s_3\otimes\s_3)$.
If for some state $x$: $P_{ex}x=x$, $x$ is called bosonic. If for some state $x$: $P_{ex}x=-x$, $x$ is called fermionic. Which of the eigen-states of $H_0$ are bosonic and which are fermionic?

Subgroups

Suppose $\chi$ is a character of a subgroup $H$ of $G$. In general $\chi$ is not the restriction of a character of $G$. However we can construct a compound character of $G$: we first expand $\chi:G\rar\C$ by putting $\chi|H^c=0$ and then define $\chi^G:G\rar\C$ by $$ \chi^G(g) \colon=\frac1{|H|}\sum_{x\in G}\chi(xgx^{-1})~. $$ Of course, $\chi^G$ is a class function (cf. e.g. exam) and we will actually see that it's a compound character of $G$. But first of all we'd like to indicate that $\chi$ is in general not the restriction of $\chi^G$ to $H$: take e.g. the trivial character $\chi=1$, then $$ \chi^G(g)=\frac{|\{x\in G:xgx^{-1}\in H\}|}{|H|}~. $$ The mapping $u:G\rar C(g)$, $x\mapsto xgx^{-1}$, is by definition of $C(g)$ onto and $u(x)=u(y)$ iff $y^{-1}xgx^{-1}y=g$, i.e. iff $x^{-1}y\in Z(g)$. It follows that for any subset $A$ of $G$: $$ |\{x\in G:xgx^{-1}\in A\}| =|A\cap C(g)||Z(g)| =|A\cap C(g)||G|/|C(g)| $$ Therefore we conclude \begin{equation}\label{subeq1}\tag{SUB1} \chi^G(g)=\frac{|G||H\cap C(g)|}{|H||C(g)|} \end{equation} which need not be constant on $H$. However, the following result shows amongst others that it's for all $g\in G$ a non-negative integer.
We start with an arbitary character $\chi$ of the subgroup $H$ of $G$. If $G=Hx_1\cup\ldots\cup Hx_m$ is a disjoint union of right cosets, then $$ \chi^G(g) =\sum_{j=1}^m\chi(x_jgx_j^{-1}) $$ is a compound character of $G$ - it's called the induced character. Moreover, for all characters $\psi$ of $G$ we have: $$ \la\chi^G,\psi\ra_{L_2(G)}=\la\chi,\psi\ra_{L_2(H)} $$
$\proof$ For all $x\in H$: $\chi(xgx^{-1})=\chi(g)$. Indeed, for $g\in H$ the elements $xgx^{-1}$ and $g$ are conjugate in $H$ and since $\chi$ is a class function on $H$: $\chi(xgx^{-1})=\chi(g)$. If $g\notin H$, then $xgx^{-1}\notin H$ and thus both sides vanish.
1. Now suppose $y\in Hx$, then $\chi(ygy^{-1})=\chi(hxgx^{-1}h^{-1})=\chi(xgx^{-1})$ and therefore the function $y\mapsto\chi(ygy^{-1})$ is constant on cosets $Hx$, i.e.: $$ \frac1{|H|}\sum_{x\in G}\chi(xgx^{-1}) =\sum_{j=1}^m\chi(x_jgx_j^{-1})~. $$ 2. By definition of the euclidean product we get for every character $\psi$ of $G$: \begin{eqnarray*} \la\chi^G,\psi\ra_{L_2(G)} &=&\frac1{|G|}\sum_{g\in G}\chi^G(g)\cl{\psi(g)}\\ &=&\frac1{|G||H|}\sum_{x\in G}\sum_{g\in G}\chi(xgx^{-1})\cl{\psi(g)} =\frac1{|G||H|}\sum_{x\in G}\sum_{g\in G}\chi(g)\cl{\psi(x^{-1}gx)}\\ &=&\frac1{|G||H|}\sum_{g,x\in G}\chi(g)\cl{\psi(g)} =\frac1{|H|}\sum_{g\in G}\chi(g)\cl{\psi(g)}\\ &=&\frac1{|H|}\sum_{g\in H}\chi(g)\cl{\psi(g)} =\la\chi,\psi\ra_{L_2(H)}~. \end{eqnarray*} 3. It remains to show that $\chi^G$ is a compound character of $G$. Since $\chi^G$ is a class function, $\chi^G$ is a linear combination of characters by
theorem and we are left to prove that for all characters $\psi$ of $G$: $\la\chi^G,\psi\ra_{L_2(G)}\in\N_0$. Now, by 2. we have: $$ \la\chi^G,\psi\ra_{L_2(G)} =\la\chi,\psi\ra_{L_2(H)} $$ which is a non-negative integer by \eqref{goteq3}, because the restriction of $\psi$ to $H$ is a compound character of $H$. $\eofproof$
The above construction isn't specific to characters: it can be done for an arbitrary class function $\chi$ of $H$ and thus it gives rise to a linear map $$ I:Z(H)\rar Z(G),\quad\chi\mapsto\chi^G~. $$ This map also satisfies the relation $\la I(\chi),\psi\ra_{L_2(G)}=\la\chi,\psi\ra_{L_2(H)}$, which merely says that its adjoint $I^*$ is the restriction $R:Z(G)\rar Z(H)$, $\psi\mapsto\psi|H$. Or conversely, the map $I$ is the adjoint of $R$. Thus we could have started as well with the problem of finding the adjoint of the more natural restriction map $R:Z(G)\rar Z(H)$. $I$ is not necessarily injective, because the restriction map $R:Z(G)\rar Z(H)$ may not be onto.
Since $G$ is finite, $L_2(H)$ is a subspace of $L_2(G)$. Verify that the restriction $R:L_2(G)\rar L_2(H)$, $f\mapsto f|H$ is an orthogonal projection.
Verify that $\chi^G(e)=|G/H|\dim E$.
Theorem is particularly useful if $G$ has a big commutative subgroup $H$, because these subgroups have lots of characters and all of them are homomorphisms $H\rar S^1$. Take e.g. the dihedral group $G=C_{nv}$ and the cyclic subgroup $H=\Z_n$. There are two right cosets with representatives $E$ and $\s$, i.e. $\chi^G(g)=\chi(g)+\chi(\s g\s)$: $$ \chi^G(\s)=2\chi(\s)=0 \quad\mbox{and}\quad \chi^G(C_n)=\chi(C_n)+\chi(C_n^{-1})~. $$ For $\chi(C_n^j)=e^{2\pi ikj/n}$ we get: $\chi^G(\s)=0$ and $\chi^G(C_n)=2\cos(2\pi k/n)$. This is actually a character of $C_{nv}$ of a two dimensional irreducible representation, because $\Vert\chi^G\Vert=1$ - provided $2k\not\equiv0$ mod $n$.
Suppose $G=H\cup Hx_2\cup\ldots\cup Hx_m$ is a disjoint union of right cosets. If for all $j\geq2$: $x_jHx_j^{-1}\cap H=\{e\}$, then $\chi^G(h)=\chi(h)$ for all $h\in H\sm\{e\}$.
If $G$ is commutative then $\chi^G|H=|G/H|\chi$.
Determine the character table of $S(3)\simeq C_{3v}$ using the character table of $S(2)$ and induced characters.
As $S(2)$ is commutative of order $2$ we get the following table of characters: $$ \begin{array}{c|rr} S(2)&1(12)&1(21)\\ \hline \chi_1&1&1\\ \chi_2&1&-1 \end{array} $$ $S(3)$ has three classes $C_1=\{(123)\}$, $C_2=\{(231),(312)\}$, $C_3=\{(213),(132),(321)\}$ and $|C_1|=1$, $|C_2|=2$, $|C_3|=3$. We already know two characters: the trivial character $\psi_1=1$ and $\psi_2=\sign$, so we have: $$ \begin{array}{c|rrr} S(3)&1(123)&2(231)&3(213)\\ \hline \psi_1&1&1&1\\ \psi_2&1&1&-1\\ \psi_3&x&y&z \end{array} $$ We identify $S(2)$ with the subgroup $H=\{(123),(213)\}$ of $S(3)$ and employ \eqref{subeq1} to get an induced character of $S(3)$: $|H\cap C_1|=1$, $|H\cap C_2|=0$ and $|H\cap C_3|=1$. For $\chi_1$ we get that $\chi_1^G((123))=3$, $\chi_1^G((231))=0$ and $\chi_1^G((213))=1$ is an induced character, i.e. for some $m_1,m_2,m_3\in\N_0$: $m_1\psi_1+m_2\psi_2+m_3\psi_3=\chi_1^G$, where $m_1=\la\chi_1^G,\psi_1\ra=(3+0+3)/6=1$ and $m_2=\la\chi_1^G,\psi_2\ra=(3+0-3)/6=0$. Thus we are left with $m_3\psi_3=\chi_1^G-\psi_1$, which gives us three equations: $$ m_3x=2,\quad m_3y=-1,\quad m_3z=0 $$ and only one solution: $z=0$, $m_3=1$, $y=-1$ and $x=2$.
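The same computation can be delegated to sage; we realize $S(2)$ as the subgroup of $S(3)$ generated by the transposition $(1,2)$:
G = SymmetricGroup(3)
H = G.subgroup([(1,2)])                       # a copy of S(2) inside S(3)
chi1 = ClassFunction(H, [1,1])                # the trivial character of S(2)
print(chi1.induct(G).values())                # the induced character, e.g. [3, 1, 0] - the class ordering may differ
print(chi1.induct(G).decompose())             # its decomposition into irreducible characters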
Compute the induced character $\chi_2^G$.
Try to find induced characters $\chi^G$ of $O_h$, by employing the characters of the subgroup generated by the reflections about the coordinate planes.
We consider $O_h$ as a subgroup of $S(6)$ and the subgroup of reflections about the coordinate planes is $H$. In sage this can be done as follows: To get the elements of $H$ enter
H=PermutationGroup([(5,6),(1,3),(2,4)])
for g in H:
    print(g.domain())
with eight elements of $H$: $$ (123456),(123465),(143256),(143265),(321456),(321465),(341256),(341265)~. $$ A complete list of representatives for the right cosets can be obtained from:
Oh=PermutationGroup([(2,3,4,1),(6,4,5,2),(6,3,5,1),(3,1)])
Cosets=Oh.cosets(H, side='right')
for c in Cosets:
    print(c[0].domain())
which produces the output: $$ (123456), (153624), (214356), (254613), (516324), (526413)~. $$ Now let's take the trivial character $\chi=1$ of $H$, then by \eqref{subeq1} $$ \chi^G(g)=\frac{|G||H\cap C(g)|}{|H||C(g)|}=\frac{6|H\cap C(g)|}{|C(g)|} $$ and we are essentially left with the computation of the cardinality of $H\cap C(g)$:
Reps=Oh.conjugacy_classes_representatives()
coh=len(Oh)
ch=len(H)
for g in Reps:
    cg=Oh.conjugacy_class(g)
    hsg=set(cg) & set(H)
    print(g,(coh*len(hsg))/(ch*len(cg)))
This gives the induced character $\chi^G$: it's $6$ times the indicator function of the union of the following conjugacy classes $$ C(())\cup C((5,6))\cup C((2,4)(5,6))\cup C((1,3)(2,4)(5,6)) =C(E)\cup C(\s_1)\cup C(C_4^2)\cup C(I)~. $$ Since $\Vert\chi^G\Vert_{L_2(O_h)}=\sqrt6$ it's either the sum of $6$ characters or the sum of $3$ characters one with multiplicity $2$.
Of course sage has some built-in functions to produce induced characters:
Oh=PermutationGroup([(2,3,4,1),(6,4,5,2),(6,3,5,1),(3,1)])
H=Oh.subgroup([(5,6),(1,3),(2,4)])
for chi in H.irreducible_characters():
    print(chi.induct(Oh).values())
The output exhibits the values of the induced characters on the 10 conjugacy classes. It will give us essentially four induced characters; the norm of the first two is $\sqrt6$ and the remaining two have norm $\sqrt2$. If you start with an arbitrary class function of $H$ just execute
chi=ClassFunction(H, [-1,-1,-1,-1,0,0,0,0])
print(chi.induct(Oh).values())
$\norm{I(\chi)}^2=\la RI(\chi),\chi\ra$ and if $G$ is commutative then $\norm{I(\chi)}^2=|G/H|\,\norm{\chi}^2$.

Quotients

If $H$ is a normal subgroup of $G$, then a representation $\Psi:G\rar\UU(E)$ of $G$ descends to a representation of $G/H$ iff for all $h\in H$: $\Psi(h)=1_E$. On the other hand any representation $\wh\Psi$ of $G/H$ lifts to a representation $\Psi$ of $G$: simply define $\Psi(g)\colon=\wh\Psi(\pi(g))$, where $\pi$ is the quotient map. This way the irreducible representations of the quotient $G/H$ can be identified with the subset $H^\perp$ of $\wh G$ defined by $$ H^\perp\colon=\{\Psi\in\wh G:\forall h\in H:\quad\Psi(h)=1\}~. $$ The characters of $G$ can be determined provided $G$ has plenty of normal subgroups and the characters of the quotients are known. A prominent example thereof is the quaternion group, cf. subsection.

Frobenius Reciprocity Theorem

Given a representation $\Psi$ of $H$ with character $\chi$, can we produce a representation $\Psi^G$ of $G$ with $\tr\Psi^G=\chi^G$? Remember the induced representation of $\Psi:H\rar\Gl(E)$, defined in subsection: it's a representation of $G$ in the vector-space $$ E^G=\{f:G\rar E:\forall h\in H\,\forall x\in G: f(hx)=\Psi(h)(f(x))\} $$ given by $$ \forall g,x\in G:\quad \Psi^G(g)f(x)=f(xg)~. $$
Determine the spaces $E^G$ and the representations $\Psi^G$ in the trivial cases: $H=\{e\}$ and $H=G$.
Determine the space $E^G$ and the representation $\Psi^G$ for $G=C_{nv}$ and $H=\Z_n$.
If $\Phi:G\rar\Gl(F)$ is a representation of $G$ then $\Hom_G(\Phi,\Psi^G)$ is isomorphic to $\Hom_H(\Phi,\Psi)$: $\vp\in\Hom_G(\Phi,\Psi^G)$ will be mapped to $\psi\in\Hom_H(\Phi,\Psi)$ and these two are related by: $$ \forall y\in F:\quad \psi(y)=\vp(y)(e) \quad\mbox{and}\quad \forall y\in F\,\forall g\in G\quad \vp(y)(g)=\psi(\Phi(g)y) $$
$\proof$ Remember, if $\vp\in\Hom_G(\Phi,\Psi^G)$ and $\psi\in\Hom_H(\Phi,\Psi)$, then we have: $$ \vp(\Phi(g)y)=\Psi^G(g)(\vp(y)) \quad\mbox{and}\quad \psi(\Phi(h)x)=\Psi(h)(\psi(x)) $$ 1. Suppose $\vp\in\Hom_G(\Phi,\Psi^G)$, we must prove that $\psi\in\Hom_H(\Phi,\Psi)$: \begin{eqnarray*} \psi(\Phi(h)y) &=&\vp(\Phi(h)y)(e) =\Psi^G(h)(\vp(y))(e) =\vp(y)(eh)\\ &=&\vp(y)(he) =\Psi(h)(\vp(y)(e)) =\Psi(h)(\psi(y)), \end{eqnarray*} i.e. $\psi\in\Hom_H(\Phi,\Psi)$.
2. Conversely, suppose $\psi\in\Hom_H(\Phi,\Psi)$ and put $\vp(y)(x)\colon=\psi(\Phi(x)y)$. First of all $\vp(y)\in E^G$, for $\vp(y)(hx)=\psi(\Phi(h)\Phi(x)y)=\Psi(h)(\psi(\Phi(x)y))=\Psi(h)(\vp(y)(x))$. Moreover: \begin{eqnarray*} \vp(\Phi(g)y)(x) &=&\psi(\Phi(x)\Phi(g)y) =\psi(\Phi(xg)y)\\ &=&\vp(y)(xg) =\Psi^G(g)(\vp(y))(x), \end{eqnarray*} i.e. $\vp\in\Hom_G(\Phi,\Psi^G)$.
3. These maps are inverse of each other: suppose $\vp$ is given; putting $\psi(y)=\vp(y)(e)$, we get by the definition of $\Psi^G$: $$ \psi(\Phi(g)y) =\vp(\Phi(g)y)(e) =\Psi^G(g)(\vp(y))(e) =\vp(y)(g) $$ and analogously: given $\psi$ and putting $\vp(y)(g)\colon=\psi(\Phi(g)y)$ it follows that $\vp(y)(e)=\psi(\Phi(e)y)=\psi(y)$. $\eofproof$
If $\chi$ is the character of $\Psi$, then $\tr\Psi^G$ is the induced character $\chi^G$.
$\proof$ Put $\wt\chi(g)\colon=\tr\Psi^G(g)$ and suppose $\Phi:G\rar\UU(F)$ is irreducible with character $\r$, then by corollary, theorem and theorem we conclude: $$ \la\wt\chi,\r\ra_{L_2(G)} =\dim\Hom_G(\Phi,\Psi^G) =\dim\Hom_H(\Phi,\Psi) =\la\chi,\r\ra_{L_2(H)} =\la\chi^G,\r\ra_{L_2(G)}~. $$ Since the characters of $G$ form a basis of $Z(G)$ and both $\wt\chi$ and $\chi^G$ are in $Z(G)$, it follows that $\wt\chi=\chi^G$. $\eofproof$
Let $\Psi:H\rar\Gl(E)$ be a representation of the subgroup $H$ of $G$. 1. For $x\in G$, $u\in E$ put for all $h\in H$: $f_{x,u}(hx)=\Psi(h)u$ and for all $g\notin Hx$: $f_{x,u}(g)=0$. Then $f_{x,u}\in E^G$. 2. If $u_1,\ldots,u_n$ is a basis for $E$ and $G=Hx_1\cup\ldots\cup Hx_m$ is a disjoint union of right cosets, then $f_{x_j,u_k}$ is a basis for $E^G$.
Suppose $\dim E=1$ and $G=Hx_1\cup\ldots\cup Hx_m$ is a disjoint union of right cosets, then for any $u\in E\sm\{0\}$ the functions $f_j\colon=f_{x_j,u}$ form a basis of $E^G$ and with respect to this basis $\Psi^G(g)$ is a monomial matrix, i.e. a matrix with exactly one non-zero entry in every row and column.
We just have to compute $\Psi^G(g)f_j(x)=f_j(xg)$. For any $g\in G$ the cosets $Hx_1g,\ldots,Hx_mg$ are just a permutation $Hx_{\pi(1)},\ldots,Hx_{\pi(m)}$ of the cosets $Hx_1,\ldots,Hx_m$ and thus $\Psi^G(g)$ maps each basis vector $f_j$ onto a non-zero scalar multiple (a value of $\Psi$) of exactly one basis vector $f_k$.

Abelianization and abelian subgroups

How many one-dimensional inequivalent representations of a finite group $G$ do exist? If $f:G\rar S^1$ is a homomorphism, then for all $x,y\in G$: $f(x^{-1}y^{-1}xy)=1$. Thus the commutator subgroup $[G,G]$ generated by all its commutators $[x,y]\colon=x^{-1}y^{-1}xy$ must be in the kernel of $f$.
1. Prove that $xy=yx[x,y]$, $[x,y]^{-1}=[y,x]$ and for all $g\in G$: $g[x,y]g^{-1}=[gxg^{-1},gyg^{-1}]$. 2. For every homomorphism $f:G\rar H$ we have $f([G,G])\sbe[H,H]$.
$[G,G]$ is normal and the quotient $G/[G,G]$ is commutative - it's called the abelianization of $G$ - for $xy=yx[x,y]$ and thus $yx$ and $xy$ are in the same class. Moreover, by the above considerations every homomorphism $f:G\rar H$ into a commutative group $H$ descends to a unique homomorphism $\wh f:G/[G,G]\rar H$ such that $f=\wh f\circ\pi$, where $\pi:G\rar G/[G,G]$ is the quotient map. Thus there are exactly $|G/[G,G]|$ mutually inequivalent one-dimensional representations.
Identify the commutator subgroup, the abelianization of the quaternion group $Q$ and all its one-dimensional irreducible representations. Determine the complete character table of $Q$.
In sage this can be done as follows:
Q=QuaternionGroup()
CQ=Q.cayley_table()
head=CQ.row_keys()
n=Q.order()
for i in range(n):
    print(head[i].domain())
Q.cayley_table()
Q.commutator()
F=Q.as_finitely_presented_group()
F.abelian_invariants()
This will realize $Q$ as a subgroup of $S(8)$, it will exhibit the Cayley table of $Q$ with elements $a,b,c,d,e,f,g,h$ (neutral element: $a$) and what permutations these elements stand for. It will tell you that the commutator subgroup $[Q,Q]$ is $\{a,c\}$ and that $Q/[Q,Q]$ is isomorphic to $\Z_2\times\Z_2$. Thus $Q$ has exactly four one-dimensional irreducible representations. Since the characters of $\Z_2^2$ are known, we can construct all four homomorphisms $\chi_j:Q\rar S^1$. The left cosets are: $\{a,c\}$, $\{b,d\}$, $\{e,g\}$, $\{f,h\}$, and the homomorphisms $\chi_j:Q\rar S^1$ are therefore given by $$ \begin{array}{c|rrrr} &\{a,c\}&\{b,d\}&\{e,g\}&\{f,h\}\\ \hline \chi_1&1&1&1&1\\ \chi_2&1&1&-1&-1\\ \chi_3&1&-1&1&-1\\ \chi_4&1&-1&-1&1 \end{array} $$ To get representatives and sizes of the conjugacy classes enter:
Reps=Q.conjugacy_classes_representatives()
for g in Reps:
    cg=Q.conjugacy_class(g)
    print(g.domain(),len(cg))
Now we can set up the table of homomorphisms $\chi_j:Q\rar S^1$: $$ \begin{array}{c|rrrrr} Q&1a&2d&1c&2e&2f\\ \hline \chi_1&1&1&1&1&1\\ \chi_2&1&1&1&-1&-1\\ \chi_3&1&-1&1&1&-1\\ \chi_4&1&-1&1&-1&1 \end{array} $$ The dimension of the fifth character must be $2$ (since $4+2^2=8$) and by orthogonality of the columns we conclude: $\chi_5(d)=0$, $\chi_5(c)=-2$, $\chi_5(e)=0$ and $\chi_5(f)=0$.
For the following examples you are strongly advised to use a computer algebra system!
Identify the commutator group and the abelianization of $T_d$.
Identify the commutator group and the abelianization of $O_h$.
Identify the commutator group and the abelianization of $A(5)$.
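The commands used for $Q$ above settle all three exercises; e.g. for $T_d\simeq S(4)$:
G = SymmetricGroup(4)                    # T_d is isomorphic to S(4)
C = G.commutator()                       # the commutator subgroup [G,G]
print(C.order(), C.is_isomorphic(AlternatingGroup(4)))
F = G.as_finitely_presented_group()
print(F.abelian_invariants())            # the abelianization G/[G,G]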

Dimension divides order

Algebraic integers

The following result also lends a helping hand!
The dimension $l$ of any irreducible representation of a finite group $G$ divides the order $|G|$ of $G$.
The proof is not self-contained; we need some facts from algebraic number theory: A number $a\in\C$ is said to be an algebraic integer if it is the root of a monic polynomial $p$ with integer coefficients, i.e. for some $n\in\N$ and $a_0,\ldots,a_{n-1}\in\Z$: $$ p(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0 \quad\mbox{and}\quad p(a)=0~. $$ Of course, if $a\neq0$, then we may assume $a_0\neq0$. Here are the results, which we are going to employ (cf. e.g. Theorem 3.5.4).
  1. A rational algebraic integer must be an integer. This has an obvious and elementary proof: Suppose $x=a/b\in\Q$, $p(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0$, $a_j\in\Z$, and $p(x)=0$, then we may assume that $b\in\N$, $a\in\Z\sm\{0\}$ and $\gcd(|a|,b)=1$. From $p(x)=0$ we infer that $$ a^n=-b(a_{n-1}a^{n-1}+\cdots+a_1ab^{n-2}+a_0b^{n-1}) $$ which implies that $b||a|^n$ and since $\gcd(|a|,b)=1$: $b=1$.
  2. The sum and the product of algebraic integers are algebraic integers. The proof is probably less obvious but it only needs results from subsection: Suppose $a,b\neq0$ are algebraic integers, $p(a)=q(b)=0$ and $A,B$ matrices with characteristic polynomials $p$ and $q$ respectively and integer entries (cf. exam). Then $a+b$ is a root of the characteristic polynomial of $A\otimes1+1\otimes B$ and $ab$ is a root of the characteristic polynomial of $A\otimes B$ and both characteristic polynomials are monic with integer coefficients. Thus the set $\bar\Z$ of algebraic integers is a subring of $\C$.
  3. $a_1,\ldots,a_N$ are algebraic integers iff the subgroup $\Z[a_1,\ldots,a_N]$ of $(\C,+)$ generated by $a_1^{k_1}\cdots a_N^{k_N}$, $k_1,\ldots,k_N\in\N_0$ is finitely generated, i.e. there exists $d\in\N$ and $x_1,\ldots,x_d\in\Z[a_1,\ldots,a_N]$ such that any $x\in\Z[a_1,\ldots,a_N]$ can be written as $$ x=n_1x_1+\cdots+n_dx_d \quad\mbox{with}\quad n_1,\ldots,n_d\in\Z~. $$ The proof is an application of the so called determinant trick, which is a generalization of the Cayley-Hamilton Theorem: Let $E$ be a finitely generated module over a commutative ring $R$ with unity (cf. wikipedia) and let $\Hom_R(E)$ be the set of all $R$-homomorphisms, i.e. the set of all homomorphisms $u:E\rar E$ such that for all $r\in R$ and all $x\in E$: $u(r\cdot x)=r\cdot u(x)$. Then for every $u\in\Hom_R(E)$ there is a monic polynomial $p$ with coefficients in $R$ such that $p(u)=0$.
    If $R$ is a field and $E$ a vector-space, then the Cayley-Hamilton Theorem asserts that the characteristic polynomial $c_u$ of $u$ is such a polynomial, i.e. $c_u(u)=0$.
    Now let $R=\Z$, $E=\Z[a_1,\ldots,a_N]$ and $u_j\in\Hom_R(E)$ multiplication by $a_j$, i.e. for all $x\in E$: $u_j(x)=a_jx$. Then by the determinant trick for each $j=1,\ldots,N$ there is some $d\in\N$ and $n_0,\ldots,n_{d-1}\in\Z$ such that $u_j^d+n_{d-1}u_j^{d-1}+\cdots+n_{0}$ is the zero map. Putting $$ a\colon=a_j^d+n_{d-1}a_j^{d-1}+\cdots+n_{1}a_j+n_{0} $$ this says that the $\Z$-homomorphism that sends each $x\in\Z[a_1,\ldots,a_N]$ to $ax$ is the zero map; that's only possible if $a=0$. Hence each $a_j$ is an algebraic integer.
Deduce the second assertion from the third.
For $m,n\in\N$ find a monic polynomial $p\in\Z[X]$ such that $p(\sqrt m+\sqrt n)=0$. Suggested solution.
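The tensor trick of 2. also settles this exercise in practice; a hedged sage sketch for $m=2$ and $n=3$:
R.<x> = ZZ[]
A = companion_matrix(x^2 - 2)                        # characteristic polynomial x^2-2, eigenvalues +-sqrt(2)
B = companion_matrix(x^2 - 3)                        # characteristic polynomial x^2-3, eigenvalues +-sqrt(3)
one = identity_matrix(ZZ, 2)
M = A.tensor_product(one) + one.tensor_product(B)    # has sqrt(2)+sqrt(3) among its eigenvalues
print(M.charpoly())                                  # x^4 - 10*x^2 + 1, monic with integer coefficients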
Proof of the Theorem: Let $\Psi:G\rar\UU(E)$ be any irreducible representation; by 1. we only need to prove that $|G|/\dim E$ is an algebraic integer. Since $|G|$ is finite there must be some $n\in\N$ such that for any $g\in G$: $g^n=e$, i.e. $\Psi(g)^n=1_E$ and therefore every eigen-value of $\Psi(g)$ must be an $n$-th root of unity, which is an algebraic integer. Hence $\chi(g)=\tr\Psi(g)$ is the sum of such numbers, which is also an algebraic integer by 2. Now we conclude from $\norm\chi_2^2=1$ and $\chi(e)=\dim E$: $$ \frac{|G|}{\dim E} =\frac{|G|}{\chi(e)} =\chi(e)+\sum_{j:C_j\neq\{e\}}\frac{c_j\chi(g_j)\bar\chi(g_j)}{\dim E}, $$ where $C_1,\ldots,C_N$ are all pairwise disjoint conjugacy classes, $c_j=|C_j|$ and $g_j\in C_j$. Since $\bar\chi(g_j)$ are algebraic integers it suffices by 2. to prove that $c_j\chi(g_j)/\dim E$ are algebraic integers as well: From lemma we infer that for any conjugacy class $C$: $\Psi(I_{C})=\colon\l(C)1_E$. Now for any pair of conjugacy classes $C,K$: $$ |G|I_C*I_K(x) =\sum_{y\in C}I_K(y^{-1}x) =|C\cap xK^{-1}|\in\N_0~. $$ On the other hand $I_C*I_K\in Z(G)$, i.e. $$ I_C*I_K=\sum_{j=1}^N I_C*I_K(g_j)I_{C_j}, $$ we therefore conclude that $$ I_C*I_K=\frac1{|G|}\sum n_jI_{C_j} \quad\mbox{with}\quad n_j\colon=|C\cap g_jK^{-1}|\in\N_0~. $$ Since $f\mapsto\Psi(f)$ is by lemma an algebra homomorphism we get $$ |G|^2\l(C)\l(K)1_E =|G|^2\Psi(I_C*I_K) =|G|\sum_{j=1}^N n_j\Psi(I_{C_j}) =\sum_{j=1}^N n_j|G|\l(C_j)1_E $$ i.e. $\Z[|G|\l(C_1),\ldots,|G|\l(C_N)]$ is finitely generated, i.e. by 3. $|G|\l(C_1),\ldots,|G|\l(C_N)$ are algebraic integers. Since (cf. e.g. exam) $$ \frac{c_j\chi(g_j)}{|G|} =\tr\Psi(I_{C_j}) =\l(C_j)\dim E $$ we infer that $c_j\chi(g_j)/\dim E=|G|\l(C_j)$ is an algebraic integer. $\eofproof$
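For any concrete group the divisibility is again a quick machine check; a hedged sage sketch (the choice of groups is ours):
for G in [SymmetricGroup(4), AlternatingGroup(5), QuaternionGroup()]:
    degrees = [chi.degree() for chi in G.irreducible_characters()]
    print(G.order(), degrees, all(G.order()/d in ZZ for d in degrees))   # True in each case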
For any finite dimensional unitary representation $\Psi:G\rar\UU(E)$ of a finite group $G$ and all $g\in G$ the value $\tr\Psi(g)$ is an algebraic integer.
If $a$ is an algebraic integer, then $\Z[a]$ is finitely generated.
If $a_1,\ldots,a_N$ are algebraic integers, then $\Z[a_1,\ldots,a_N]$ is finitely generated.
If $a$ is an algebraic integer, then so is $a^{1/n}$ for all $n\in\N$. $1/a$ is in general not an algebraic integer.

Algebraic numbers

We start this digression with an ostensibly trivial map: the multiplication $z\mapsto az$ for some fixed $a\in\C\sm\{0\}$, which sends the complex number $z$ to the complex number $az$. This is obviously a $\C$-linear map and $\Psi:n\mapsto(z\mapsto a^nz)$ is a representation of $\Z$ in $\C$. Now let's consider $\C$ as a vector-space over $\Q$, then $\Psi$ is a representation of $\Z$ in the infinite dimensional vector-space $\C$ over $\Q$. For what kind of numbers $a$ does $\Psi$ have a finite dimensional invariant subspace $E\neq\{0\}$? Since $E$ is of finite dimension, $aE\sbe E$ implies $aE=E$; hence $E$ is $\Psi$-invariant if and only if $\Psi(1)E\sbe E$, i.e. iff $aE\sbe E$. By the Cayley-Hamilton Theorem the characteristic polynomial $c_a\in\Q[X]$ of the $\Q$-linear map $\Psi(1):E\rar E$ satisfies $c_a(\Psi(1))=0$ and therefore $c_a(a)=0$:
A complex number $a$ is called an algebraic number if it is the root of a (monic) polynomial $p$ with rational coefficients.
Conversely, if $a$ is an algebraic number, then the $\Q$-vector-space $E$ generated by $1,a,a^2,\ldots$ is finite dimensional and $\Psi(1)E\sbe E$. Thus $a$ is an algebraic number if and only if $\Psi$ has a finite dimensional invariant subspace $E\neq\{0\}$. For the remainder of this subsection all vector-spaces are understood as vector-spaces over $\Q$!
1. The set $\bar\Q$ of all algebraic numbers is a subfield of $\C$. 2. $\bar\Q$ is algebraically closed, i.e. every root of any non-constant polynomial $p\in\bar\Q[X]$ lies in $\bar\Q$.
$\proof$ 1. Adjust the proof that $\bar\Z$ is a subring of $\C$. However, we'd like to indicate a proof based on invariance: If $a$ is algebraic, then $\Psi(1)E=E$ and thus $\Psi(-1)E=E$, which proves that $1/a$ is algebraic. For a detailed proof in this spirit cf. e.g. Primes Theorem 1.1. and Theorem 1.2.
2. Suppose $a\in\C$ is a root of $p=X^{d_0}+a_{d_0}X^{d_0-1}+\cdots+a_2X+a_1\in\bar\Q[X]$. We need to find a finite dimensional invariant subspace $E$ of $\C$ such that $aE\sbe E$. The space generated by $1,a,a^2,\ldots$ is finite dimensional as vector-space over $\bar\Q$ - since $a_j\in\bar\Q$, but in general infinite dimensional as vector-space over $\Q$. So we take the space $E$ generated by $$ a_0^{k_0}a_1^{k_1}\cdots a_{d_0}^{k_{d_0}}\quad k_j=0,\ldots,d_j-1,\quad j=0,\ldots,d_0,\quad a_0\colon=a, $$ which is of dimension at most $d_0d_1\cdots d_{d_0}$, where for $j=1,\ldots,d_0$: $p_j(a_j)=0$ for some monic $p_j\in\Q[X]$ with $\deg p_j=d_j$. To prove that $E$ is invariant under multiplication with $a$, we only have to consider elements where $k_0=d_0-1$; these are mapped to \begin{eqnarray*} a^{d_0}a_1^{k_1}\cdots a_{d_0}^{k_{d_0}} &=&-(a_{d_0}a^{d_0-1}+\cdots+a_2a+a_1)a_1^{k_1}\cdots a_{d_0}^{k_{d_0}}\\ &=&-\Big(a^{d_0-1}a_1^{k_1}\cdots a_{d_0}^{k_{d_0}+1}+\cdots +aa_1^{k_1}a_2^{k_2+1}\cdots a_{d_0}^{k_{d_0}} +a_1^{k_1+1}a_2^{k_2}\cdots a_{d_0}^{k_{d_0}}\Big)~. \end{eqnarray*} If one of the exponents $k_j+1$ equals $d_j$, then use $p_j(a_j)=0$ to replace $a_j^{d_j}$ with a linear combination of $1,a_j,\ldots,a_j^{d_j-1}$. This way you find that the right hand side is some element in $E$. $\eofproof$
The proof is constructive, for the Cayley-Hamilton Theorem actually gives you a polynomial $P\in\Q[X]$ with $P(a)=0$: $P$ is the characteristic polynomial of the $\Q$-linear map $\Psi(1):E\rar E$.
We assume $a^2+av+u=0$ for $u,v\in\bar\Q$ and $u^2+uq+p=0$, $v^2+sv+r=0$ for $p,q,r,s\in\Q$. Find some monic polynomial $P\in\Q[X]$ such that $P(a)=0$. Suggested solution.
Let $E$ and $F$, respectively, be the vector-space generated by $1,a,a^2,\ldots$ and $1,b,b^2,\ldots$, respectively. Then the vector-space generated by $\{xy:x\in E,y\in F\}$ is the vector-space generated by $a^nb^m$, $n,m\in\N_0$.
For all $A\in\Ma(n,\bar\Q)$ the eigen-values of $A$ are algebraic numbers.
Suppose $E$ is an $n$-dimensional subspace of $\C$, such that $\Psi:\Z\rar\Gl(E)$, $\Psi(n)z=a^nz$ is irreducible. Take some $z\in E\sm\{0\}$, then for all $k < n$ the vectors $z,az,\ldots,a^kz$ must be linearly independent, for otherwise - choosing $k$ minimal - the subspace generated by $z,az,\ldots,a^{k-1}z$ would be a non-trivial $\Psi$-invariant subspace. Therefore for all monic polynomials $q\in\Q[X]$ with $\deg(q) < n$: $q(a)\neq0$. It follows that the characteristic polynomial of $\Psi(1):E\rar E$ is up to a sign the unique monic polynomial $p\in\Q[X]$ of minimal degree, satisfying $p(a)=0$ - we say it's the minimal polynomial of $a$. Conversely, let $p=X^{n+1}+r_nX^n+\cdots+r_1X+r_0$ be minimal, then the vector-space $E$ generated by $1,a,\ldots,a^n$ is invariant and the matrix of $\Psi(1):z\mapsto az$ with respect to this basis is the companion matrix of $p$, i.e. the characteristic polynomial of $\Psi(1)$ is $p$. If $E$ is not irreducible, then there is a non-trivial subspace $F$ of $E$ invariant under $\Psi$ and the degree of the characteristic polynomial $q\in\Q[X]$ of $\Psi(1)|F$ is strictly smaller than the degree $n+1$ of $p$. By Cayley-Hamilton: $q(a)=0$ and thus $p$ is not minimal. Cf. also exam where we proved that the minimal polynomial of $\Psi(1):E\rar E$ is its characteristic polynomial.
A finite dimensional subrepresentation $\Psi_E$ of $\Psi:\Z\rar\Gl(\C)$, $\Psi(n)z=a^nz$, is irreducible if and only if the characteristic polynomial of $\Psi(1):E\rar E$ is the minimal polynomial of $a$.
Finally a minimal polynomial for $a$ is obviously irreducible in $\Q[X]$, i.e. it cannot be written as the product of two non-constant polynomials in $\Q[X]$. Conversely if $p$ is monic and irreducible and $p(a)=0$, then $p$ is minimal for $a$: Assume $m$ is minimal, then $p=mq+r$ with $\deg(r) < \deg(m)$ and thus $0=p(a)=m(a)q(a)+r(a)=r(a)$, i.e. if $r\neq0$, then $m$ is not minimal. Thus $r=0$ and by irreducibility of $p$: $p=m$.
A finite dimensional subrepresentation $\Psi_E$ of $\Psi:\Z\rar\Gl(\C)$, $\Psi(n)z=a^nz$, is irreducible if and only if the characteristic polynomial of $\Psi(1):E\rar E$ is irreducible.
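In sage the minimal polynomial of an algebraic number is readily available; a small illustration for $a=\sqrt2+\sqrt3$:
a = QQbar(sqrt(2) + sqrt(3))               # an algebraic number
p = a.minpoly()                            # its monic minimal polynomial over QQ
print(p, p.is_irreducible(), p(a) == 0)    # x^4 - 10*x^2 + 1, True, True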

The character table of $A(5)$ again

As an application of
Theorem we are going to determine the dimensions of all irreducible representations of $A(5)$: $A(5)$ has order $60$ and $5$ conjugacy classes; the trivial representation always has dimension $1$ and therefore we get $60=1+l_2^2+l_3^2+l_4^2+l_5^2$, where by Theorem and Theorem: $l_j\in\{1,2,3,4,5,6\}$ (we won't use the fact that $[A(5),A(5)]=A(5)$ and thus by subsection $A(5)$ only has one one dimensional character). Suppose $l_5\geq l_4\geq l_3\geq l_2$, then $4l_5^2\geq 59$, i.e. $l_5\geq 4$. If $l_5=6$, then $l_2^2+l_3^2+l_4^2=23$ and thus $3l_4^2\geq23\geq l_4^2$, i.e. $l_4\in\{3,4\}$. If $l_4=4$, then $l_2^2+l_3^2=7$ which doesn't have any solution. If $l_4=3$, then $l_2^2+l_3^2=14$, which doesn't have any solution either. Thus $l_5\leq 5$. If $l_5=4$, then $l_2^2+l_3^2+l_4^2=43$ and therefore $l_4\geq 4$, i.e. $l_4=4$ and $l_2^2+l_3^2=27$, with no solution. It follows that $l_5=5$ and hence $l_2^2+l_3^2+l_4^2=34$, which has exactly one solution: $l_4=4$ and $l_3=l_2=3$. The dimensions of the irreducible representations of $A(5)$ are: $1,3,3,4,5$.
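The case distinction may also be left to a brute force search in sage - a small sketch, the divisibility constraint $l_j\,|\,60$ is included:
sols = [(a,b,c,d) for a in range(1,8) for b in range(a,8) for c in range(b,8) for d in range(c,8)
        if a**2 + b**2 + c**2 + d**2 == 59 and all(60 % l == 0 for l in (a,b,c,d))]
print(sols)    # [(3, 3, 4, 5)]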
Verify that the values of the character of the irreducible $4$-dimensional standard representation of $A(5)$ on the conjugacy classes are: $4,-1,-1,1,0$.
Next, we'd like to construct a $3$-dimensional irreducible representation of $A(5)$. We start with a general remark, which might be helpful elsewhere: Suppose we are given any representation $\Psi:G\rar\OO(3)$. From section we know that if $\Psi(g)$ is a proper rotation, i.e. $\Psi(g)\in\SO(3)$, then $\tr\Psi(g)=1+2\cos\theta(g)$, where $\theta(g)$ is the angle of rotation of $\Psi(g)$. If $\Psi(g)$ is an improper rotation, then $\tr\Psi(g)=-1+2\cos\theta(g)$. Now we utilize the fact that $A(5)$ is a subgroup of the group of symmetries of the icosahedron (cf. subsection) and it has $5$ conjugacy classes: $1E,12C_5,12C_5^2,20C_3,15C_2$, which are all proper rotations. By the previous remark the traces of this representation on the conjugacy classes are: $$ (3,1+2\cos(2\pi/5),1+2\cos(4\pi/5),1+2\cos(2\pi/3),1+2\cos(2\pi/2)) =(3,(1+\sqrt5)/2,(1-\sqrt5)/2,0,-1)~. $$ Is the corresponding representation irreducible? Yes, because the square of the norm of its trace is $$ (9+12(1+\sqrt5)^2/4+12(1-\sqrt5)^2/4+15)/60=1~. $$ So far we have the following incomplete table: $$ \begin{array}{c|ccccc} A(5)&1E&12C_5&12C_5^2&20C_3&15C_2\\ \hline \chi_1 & 1& 1& 1& 1& 1\\ \chi_2 & 3&(1+\sqrt5)/2&(1-\sqrt5)/2& 0&-1\\ \chi_3 & 3&a&b&c&d\\ \chi_4 & 4&-1&-1& 1& 0\\ \chi_5 & 5& x& y& z& u \end{array} $$ The tensor product of the second representation $\Psi_2$ with itself has traces $9,(3+\sqrt5)/2,(3-\sqrt5)/2,0,1$, which implies: \begin{eqnarray*} \la(\tr\Psi_2)^2,\chi_1\ra &=&\tfrac1{60}(9+6(3-\sqrt5)+6(3+\sqrt5)+0+15)=1,\\ \la(\tr\Psi_2)^2,\chi_2\ra &=&\tfrac1{60}(27+3(3+5-4\sqrt5)+3(3+5+4\sqrt5)+0-15)=1,\\ \la(\tr\Psi_2)^2,\chi_4\ra &=&\tfrac1{60}(36-6(3-\sqrt5)-6(3+\sqrt5)+0+0)=0~. \end{eqnarray*} Thus the values of $(\tr\Psi_2)^2-\chi_1-\chi_2$ on the conjugacy classes are: $5,0,0,-1,1$ and since it has norm $1$, it must be a character and it must be the character $\chi_5$: $$ \begin{array}{c|ccccc} A(5)&1E&12C_5&12C_5^2&20C_3&15C_2\\ \hline \chi_1 & 1& 1& 1& 1& 1\\ \chi_2 & 3&(1+\sqrt5)/2&(1-\sqrt5)/2& 0&-1\\ \chi_3 & 3&a&b&c&d\\ \chi_4 & 4&-1&-1& 1& 0\\ \chi_5 & 5& 0& 0&-1& 1 \end{array} $$ Finally by orthogonality of columns: $a=(1-\sqrt5)/2$, $b=(1+\sqrt5)/2$, $c=0$ and $d=-1$: $$ \begin{array}{c|ccccc} A(5)&1E&12C_5&12C_5^2&20C_3&15C_2\\ \hline \chi_1 & 1& 1& 1& 1& 1\\ \chi_2 & 3&(1+\sqrt5)/2&(1-\sqrt5)/2& 0&-1\\ \chi_3 & 3&(1-\sqrt5)/2&(1+\sqrt5)/2& 0&-1\\ \chi_4 & 4&-1&-1& 1& 0\\ \chi_5 & 5& 0& 0&-1& 1 \end{array} $$ That's exactly the table of subsection for $A=C_2$, $B=C_3$, $C=C_5$ and $D=C_5^2$.
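You may confirm the table in sage - note that the ordering of the conjugacy classes and characters as well as the way the entries are displayed (as sums of roots of unity) will in general differ:
G = AlternatingGroup(5)
print(G.character_table())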
Determine the character table (cf. exam) of $S(5)$.

Fourier Transform

Fourier inversion formula

In the case of a finite commutative group $G$ we have $|\wh G|=|G|$ and the characters form an orthonormal basis of $L_2(G)$; thus every function $f:G\rar\C$ can be expanded as $$ f=\sum_{\chi\in\wh G}\la f,\chi\ra\chi~. $$ The coefficients $$ \wh f(\chi)\colon=\la f,\chi\ra =\frac1{|G|}\sum_{h\in G}f(h)\cl{\chi(h)} $$ are called the Fourier coefficients
of $f$ and the function $\wh f:\wh G\rar\C$ is called the Fourier transform of $f$. In the finite but non-commutative case let $\Psi^m$, $m\in\wh G$, denote a complete set of pairwise inequivalent irreducible representations of $G$ of dimension $l_m$. Denote by $\psi_{jk}^m$ the coordinate functions of the representation $\Psi^m$ and by $\Psi_m(g)$ the matrix $(\psi_{jk}^m(g))_{j,k=1}^{l_m}$, i.e. the homomorphism $\Psi_m:G\rar\UU(l_m)$ is just a matrix version of $\Psi^m$. Then the function $\wh f:\wh G\rar\C$ defined by \begin{equation}\label{foueq1}\tag{FOU1} \wh f(m)\colon=\frac1{|G|}\sum_{h\in G}f(h)\Psi_m(h)^* =\frac1{|G|}\sum_{h\in G}f(h)\Psi_m(h^{-1})\in\Ma(l_m,\C) \end{equation} is said to be the Fourier transform of $f$ at $m\in\wh G$. $\wh f(m)$ is more or less the same as $\Psi_m(f)$ (compare lemma). Since $\cl{\psi_{kj}(h)}=\psi_{jk}(h^{-1})$, the entries of this matrix are given by $$ \wh f(m)_{jk} =\frac1{|G|}\sum_{h\in G}f(h)\psi_{jk}^m(h^{-1}) =\la f,\psi_{kj}^m\ra~. $$ We know that the functions $\sqrt{l_m}\psi_{jk}$, $m\in\wh G$, $j,k\in\{1,\ldots,l_m\}$, form an orthonormal basis, and therefore: \begin{eqnarray*} f&=&\sum_{m,j,k}\la f,\sqrt{l_m}\psi_{jk}^m\ra\sqrt{l_m}\psi_{jk}^m\\ &=&\sum_{m,j,k}l_m\la f,\psi_{jk}^m\ra\psi_{jk}^m =\sum_{m,j,k}l_m(\wh f(m))_{kj}\psi_{jk}^m \end{eqnarray*} Thus we have the Fourier inversion formula: \begin{equation}\label{foueq2}\tag{FOU2} \forall x\in G:\quad f(x)=\sum_{m\in\wh G}l_m\tr(\wh f(m)\Psi_m(x))~. \end{equation} We will also identify $\wh G$ with the set of all inequivalent irreducible representations $\Psi$ and we will therefore write for the Fourier transform of $f\in L_2(G)$ and its inverse: \begin{eqnarray*} \forall\Psi\in\wh G&\quad& \wh f(\Psi)=\frac1{|G|}\sum_{x\in G}f(x)\Psi(x)^*\quad\mbox{and}\\ \forall x\in G&\quad& f(x)=\sum_{\Psi\in\wh G}\dim(\Psi)\tr(\wh f(\Psi)\Psi(x))~. \end{eqnarray*} where in slight abuse of notation $\Psi(x)\in\UU(\dim(\Psi))$ is the matrix of the irreducible representation $\Psi:G\rar\UU(E)$ with respect to an orthonormal basis.
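In the commutative case the inversion formula is easily checked numerically, e.g. for the cyclic group $\Z_6$ - a small sketch with an arbitrarily chosen $f$:
n = 6
K = CyclotomicField(n); zeta = K.gen()     # the characters of Z_n are x -> zeta^(k*x), k = 0,...,n-1
f = [3, -1, 4, 1, -5, 9]                   # an arbitrary function on Z_6
fhat = [sum(f[h]*zeta**(-k*h) for h in range(n))/n for k in range(n)]   # the Fourier coefficients
g = [sum(fhat[k]*zeta**(k*x) for k in range(n)) for x in range(n)]      # the inversion formula
print(g == f)                              # True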
Verify that the Fourier transform of $\check f(x)\colon=\cl{f(x^{-1})}$ is given by the adjoint $\wh f(\Psi)^*$ of $\wh f(\Psi)$.
Determine a set of coordinate functions for $S(3)\simeq C_{3v}$. Suggested solution.
If $f$ is central, i.e. $f\in Z(G)$, then by lemma: $$ \wh f(\Psi)=\frac1{\dim(\Psi)}\la f,\psi\ra\,1 $$ where $\psi=\tr\Psi$ and $1$ denotes the identity in $\Ma(\dim(\Psi),\C)$, and thus the Fourier inversion formula yields: $$ f(x) =\sum_{\Psi\in\wh G}\dim(\Psi)\tr\Big(\frac1{\dim(\Psi)}\la f,\psi\ra\Psi(x)\Big) =\sum_{\Psi\in\wh G}\la f,\psi\ra\psi(x), $$ which restates that the characters form an orthonormal basis of $Z(G)$.

Convolution and Fourier transform

Suppose $G$ is a commutative group and $f,g\in L_2(G)$, then the Fourier coefficient of the convolution of these functions is given by \begin{eqnarray*} \wh{f*g}(\chi) &=&\la f*g,\chi\ra =\frac1{|G|^2}\sum_{x,y\in G}f(xy^{-1})g(y)\cl{\chi(x)}\\ &=&\frac1{|G|^2}\sum_{x,y\in G}f(xy^{-1})\cl{\chi(xy^{-1})}g(y)\cl{\chi(y)}\\ &=&\frac1{|G|}\sum_{y\in G}\la f,\chi\ra g(y)\cl{\chi(y)} =\la f,\chi\ra\la g,\chi\ra =\wh f(\chi)\wh g(\chi)~. \end{eqnarray*} Thus in the non-commutative case we conjecture that the Fourier transform of the convolution of two functions $f,g\in L_2(G)$ is the product of the matrices $\wh f(\Psi)$ and $\wh g(\Psi)$, i.e. \begin{equation}\label{foueq3}\tag{FOU3} \forall\Psi\in\wh G:\quad \wh{f*g}(\Psi)=\wh f(\Psi)\wh g(\Psi)~. \end{equation} Indeed, we just need to copy the proof above replacing the homomorphism $\chi:G\rar S^1$ with the irreducible representations $\Psi:G\rar\UU(\dim(\Psi))$: \begin{eqnarray*} \wh{f*g}(\Psi) &=&\frac1{|G|}\sum_{x\in G}f*g(x)\Psi(x^{-1})\\ &=&\frac1{|G|^2}\sum_{x,y\in G}f(xy^{-1})g(y)\Psi(x^{-1}y)\Psi(y^{-1})\\ &=&\frac1{|G|}\sum_{y\in G} \Big(\frac1{|G|}\sum_{x\in G}f(xy^{-1})\Psi(x^{-1}y)\Big)g(y)\Psi(y^{-1})\\ &=&\frac1{|G|}\wh f(\Psi)\sum_{y\in G}g(y)\Psi(y^{-1}) =\wh f(\Psi)\wh g(\Psi)~. \end{eqnarray*} This is no surprise! It's just the first assertion of lemma!
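Again in the commutative case this is easily verified numerically in sage, e.g. on $\Z_6$ - a small sketch:
n = 6
K = CyclotomicField(n); zeta = K.gen()
f = [3, -1, 4, 1, -5, 9]; g = [2, 7, 1, -8, 2, 8]                       # two arbitrary functions on Z_6
conv = [sum(f[(x-y) % n]*g[y] for y in range(n))/n for x in range(n)]   # the convolution f*g
FT = lambda u, k: sum(u[h]*zeta**(-k*h) for h in range(n))/n            # the Fourier transform
print(all(FT(conv, k) == FT(f, k)*FT(g, k) for k in range(n)))          # True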
Let $n$ be an even integer. Find the Fourier transform of a function $f:C_{nv}\rar\C$.
We use the irreducible representations given in subsection. For the one-dimensional representations we have: $\chi_1=1$, $\chi_2(C_n^k)=1$, $\chi_2(\s C_n^k)=-1$, $\chi_3(C_n^k)=(-1)^k$, $\chi_3(\s C_n^k)=(-1)^k$ and $\chi_4(C_n^k)=(-1)^k$, $\chi_4(\s C_n^k)=-(-1)^k$ and therefore: \begin{eqnarray*} \wh f(\chi_1)&=&\frac1{2n}\sum_k\big(f(C_n^k)+f(\s C_n^k)\big)\\ \wh f(\chi_2)&=&\frac1{2n}\sum_k\big(f(C_n^k)-f(\s C_n^k)\big)\\ \wh f(\chi_3)&=&\frac1{2n}\sum_k\big((-1)^kf(C_n^k)+(-1)^kf(\s C_n^k)\big)\\ \wh f(\chi_4)&=&\frac1{2n}\sum_k\big((-1)^kf(C_n^k)-(-1)^kf(\s C_n^k)\big) \end{eqnarray*} For the two-dimensional representations $\Psi_j$, $j=1,\ldots,n/2-1$, we have: $$ \Psi_j(C_n^k) =\left(\begin{array}{cc} e^{2\pi ijk/n}&0\\ 0&e^{-2\pi ijk/n} \end{array}\right),\quad \Psi_j(\s C_n^k) =\left(\begin{array}{cc} 0&e^{-2\pi ijk/n}\\ e^{2\pi ijk/n}&0 \end{array}\right)~. $$ Let $\psi_j$ denote its character, i.e. $\psi_j(C_n^k)=2\cos(2\pi jk/n)$ and $\psi_j(\s C_n^k)=0$. \begin{eqnarray*} \wh f(\psi_j) &=&\frac1{2n}\sum_{h\in C_{nv}}f(h)\Psi_j(h)^* =\frac1{2n}\sum_{l\in\{0,1\}}\sum_{k=0}^{n-1}f(\s^lC_n^k)\Psi_j(\s^lC_n^k)^*\\ &=&\frac1{2n}\sum_{k=0}^{n-1}f(C_n^k)\Psi_j(C_n^k)^* +f(\s C_n^k)\Psi_j(\s C_n^k)^* \end{eqnarray*} Since $\Psi_j(C_n^k)^*=\Psi_j(C_n^{-k})$ and $\Psi_j(\s C_n^k)^*=\Psi_j(\s C_n^k)$, this gives $$ \wh f(\psi_j) =\frac1{2n}\sum_{k=0}^{n-1} \left(\begin{array}{cc} f(C_n^k)e^{-2\pi ijk/n}&f(\s C_n^k)e^{-2\pi ijk/n}\\ f(\s C_n^k)e^{2\pi ijk/n}&f(C_n^k)e^{2\pi ijk/n} \end{array}\right)~. $$
Let $n$ be an even integer. Find the Fourier transform of the function $f:C_{nv}\rar\C$, $f=\d_{\s}$, i.e. $f(\s)=1$ and $f(x)=0$ for all $x\neq\s$. Suggested solution.
Let $n$ be an even integer. Find the Fourier transform of the function $f:C_{nv}\rar\C$, $f=\d_{C_n}$, i.e. $f(C_n)=1$ and $f(x)=0$ for all $x\neq C_n$.
Utilizing exam find the Fourier transform of a function $f:S(3)\rar\C$.
The following example analyzes convolution operators on finite groups; it sort of generalizes exam:
Let $A_f:L_2(G)\rar L_2(G)$ be the convolution operator $A_f(g)\colon=f*g$, then $A_f({\cal A}_m)\sbe{\cal A}_m$ and therefore all eigen-values of the convolution operator $A_f$ must be eigen-values of one of the restrictions $A_f:{\cal A}_m\rar{\cal A}_m$ for some $m\in\wh G$. Moreover, for every $b\in{\cal A}_m$: $A_f(b)=A_{P_mf}(b)$, where $P_mf$ is the orthogonal projection of $f$ to ${\cal A}_m$. $b\in{\cal A}_m$ is an eigen-function of $A_f$ if and only if: $(\wh f(m)-\l)\wh b(m)=0$. If $\l$ is an eigen-value of $\wh f(m)\in\Ma(l_m,\C)$ and $(v_1,\ldots,v_{l_m})\in\C^{l_m}$ is a corresponding eigen-vector, then all functions $$ b_k\colon=\sum_jv_j\psi_{jk}\in\ell_k^m, \quad k=1,\ldots,l_m $$ are eigen-functions with the same eigen-value.
The convolution operator $A_f(g)\colon=f*g$ is normal if $\wh f(m)$ is normal for all $m\in\wh G$:
The Fourier transform $f\mapsto\wh f$ is an isometry of $L_2(G)$ onto the Hilbert-space $$ L_2(\wh G)\colon=\bigoplus_{\Psi\in\wh G}\Ma(\dim(\Psi),\C) $$ with euclidean product: $$ \la A,B\ra\colon=\sum_{\Psi\in\wh G}\dim(\Psi)\tr(A(\Psi)B(\Psi)^*)~. $$
$\proof$ First we will show that $f\mapsto\wh f$ is an isometry. By exam we have: $\wh{f*\check f}(\Psi)=\wh f(\Psi)\wh f(\Psi)^*$ and thus we conclude by the Fourier inversion formula \eqref{foueq2} that \begin{eqnarray*} \Vert f\Vert^2 &=&\frac1{|G|}\sum f(x)\cl{f(x)} =\frac1{|G|}\sum f(x)\check f(x^{-1})\\ &=&f*\check f(e) =\sum_{\Psi\in\wh G}\dim(\Psi)\tr(\wh f(\Psi)\wh f(\Psi)^*) =\Vert\wh f\Vert^2~. \end{eqnarray*} It remains to verify that this isometry is onto. To this end put for given $A\in L_2(\wh G)$: $$ f(x)=\sum_\Psi\dim(\Psi)\tr(A(\Psi)\Psi(x))~. $$ Then we need to check that $\wh f(\Phi)=A(\Phi)$. This follows immediately from exam; however we verify it by straight calculation: for any $\Phi\in\wh G$: $$ \wh f(\Phi)=\frac1{|G|}\sum_\Psi\dim(\Psi)\sum_{x}\tr(A(\Psi)\Psi(x))\Phi(x)^* $$ and therefore we need to calculate the matrix entries of the right hand side. By \eqref{caleq2} we get \begin{eqnarray*} l_m\Big(\sum_{x}\tr(A(\Psi)\Psi(x))\Phi(x)^*\Big)_{rs} &=&l_m\sum_x\sum_{j,k}a_{jk}(\Psi)\psi_{kj}(x)\vp_{rs}(x^{-1})\\ &=&l_m|G|\sum_{j,k}a_{jk}(\Psi)\psi_{kj}*\vp_{rs}(e) =|G|\d_{\Psi,\Phi}a_{rs}(\Phi) \end{eqnarray*} i.e. $\wh f(\Phi)=A(\Phi)$. $\eofproof$
Verify that $f\mapsto\wh f$ is onto by checking dimensions.
Deduce from Plancherel's Theorem: for all $f,g\in L_2(G)$: $$ \la f,g\ra=\sum_{\Psi\in\wh G}\dim(\Psi)\tr(\wh f(\Psi)\wh g(\Psi)^*)~. $$

Poisson's summation formula

This formula is sort of generalization of $$ f(e)=\sum_{\Psi\in\wh G}\dim(\Psi)\tr(\wh f(\Psi)) $$ which holds for all functions $f:G\rar\C$ by the Fourier inversion formula \eqref{foueq2}. Suppose $H$ is a normal subgroup of $G$, then the irreducible representations of the group $G/H$ can be identified with a subset $H^\perp$ of $\wh G$: $$ H^\perp\colon=\{\Psi\in\wh G:\forall h\in H:\quad\Psi(h)=1\}~. $$ Now put for any $f\in L_2(G)$: $$ f_1(x)\colon=\sum_{y\in H}f(xy), $$ then $f_1$ is constant on the cosets $xH$ and thus $f_1=F\circ\pi$ for a unique function $F\in L_2(G/H)$. For all $\Psi\in H^\perp$: $$ \wh F(\Psi) =\frac1{|G/H|}\sum_{[h]\in G/H}F([h])\Psi([h])^* =\frac1{|G/H|}\sum_{[h]\in G/H}\sum_{y\in H}f(hy)\Psi([h])^* =\frac{|H|}{|G|}\sum_{x\in G} f(x)\Psi(\pi(x))^*~. $$ On the other hand we have for all $\Psi\in H^\perp\sbe\wh G$: $$ \wh f(\Psi) =\frac1{|G|}\sum_{x\in G} f(x)\Psi(\pi(x))^* $$ and therefore: $\wh F(\Psi)=|H|\,\wh f(\Psi)$. By the Fourier inversion formula \eqref{foueq2} - applied to the group $G/H$ - we conclude that $$ f_1(x)=F(\pi(x)) =\sum_{\Psi\in H^\perp}\dim(\Psi)\tr(\wh F(\Psi)\Psi(\pi(x))) =|H|\sum_{\Psi\in H^\perp}\dim(\Psi)\tr(\wh f(\Psi)\Psi(\pi(x)))~. $$ Putting $x=e$ we get
For all $f\in L_2(G)$: $$ \sum_{y\in H}f(y)=|H|\sum_{\Psi\in H^\perp}\dim(\Psi)\tr(\wh f(\Psi)) $$
One of the points of this formula is that if for example $H$ is a huge subgroup of $G$, then $H^\perp$ is small. Provided both $f$ and $\wh f$ are known, it's probably much easier to compute the right hand side. Of course you may also use it the other way round. Finally let's have a look at the commutative case: if $G$ is abelian, then $H^\perp$ is the set of homomorphisms $\psi:G\rar S^1$, satisfying $\psi|H=1$ and thus: $$ \sum_{y\in H}f(y) =|H|\sum_{\psi\in H^\perp}\wh f(\psi) =|H|\sum_{\psi\in H^\perp}\la f,\psi\ra~. $$
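For instance for $G=\Z_{12}$ and $H$ the subgroup generated by $4$ the formula is easily checked numerically in sage - a small sketch:
n = 12
K = CyclotomicField(n); zeta = K.gen()
f = [3, -1, 4, 1, -5, 9, 2, -6, 5, 3, -5, 8]                        # an arbitrary function on Z_12
H = [0, 4, 8]                                                       # the subgroup generated by 4
Hperp = [k for k in range(n) if all(zeta**(k*h) == 1 for h in H)]   # the characters trivial on H
fhat = lambda k: sum(f[x]*zeta**(-k*x) for x in range(n))/n
print(sum(f[y] for y in H) == len(H)*sum(fhat(k) for k in Hperp))   # True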
If $G$ is commutative then $H^\perp$ is a subgroup of $\wh G$.
Moreover the dual $\wh G$ as the set of characters is again a commutative group (with pointwise multiplication). Now the dual of this group is canonically isomorphic to $G$: the isomorphism $J:G\rar\wh{\wh G}$ is given by evaluation: $J(g)(\psi)=\psi(g)$; indeed the group operation on $\wh{\wh G}$ is pointwise multiplication and thus $$ J(g)J(h)(\psi) =J(g)(\psi)\,J(h)(\psi) =\psi(g)\psi(h) =\psi(gh) =J(gh)(\psi)~. $$ If $J(g)=1$, then for all $\psi\in\wh G$: $\psi(g)=1$, i.e. $g=e$, for the characters of a finite commutative group separate points. Hence $J$ is 1-1. Finally $J$ is onto because $G$ and its bi-dual have the same finite cardinality.
1. The characters of $\wh G$ are just the evaluations of the characters of $G$. 2. Verify that under the above identification of $\wh{\wh G}$ and $G$: $H^{\perp\perp}=H$ and $H\mapsto H^\perp$ is a one to one map of the set of subgroups of $G$ onto the set of subgroups of $\wh G$.
Write down Poisson's summation formula explicitly for $G=\Z_{pq}$ and $H=\Z_p$.

Cayley Graphs

Convolution operators, random walks and adjacency matrices

Let $R\sbe G\sm\{e\}$ be a symmetric subset of a finite group $G$ with neutral element $e$, i.e. for all $x\in R$: $x^{-1}\in R$. Then we may define a symmetric relation on $G$: $xRy$ (or $(x,y)\in R$) iff $xy^{-1}\in R$. Thus $R$ may either mean a symmetric subset of $G$ or the relation $\{(x,y): xy^{-1}\in R\}$. Since the set $R$ is symmetric the relation $R$ is symmetric and $(x,y)\in R$ iff $y\in Rx$. We will also assume that the set $R$ generates the group $G$, i.e. $$ G=R^0\cup R^1\cup R^2\cup R^3\cup\cdots $$ where $R^0=\{e\}$, $R^1=R$ and $R^{k+1}=R^kR=\{xy:x\in R^k,y\in R\}$. Let's investigate the convolution operator associated with the indicator function $|G|I_R$; since for any two subsets $C$ and $K$ of $G$: $|G|I_C*I_K(y)=|C\cap yK^{-1}|$, it follows that $$ |G|I_R*\d_x(y)=\left\{\begin{array}{cl} 1&\mbox{if $yRx$ i.e. $y\in Rx$}\\ 0&\mbox{otherwise} \end{array}\right. $$ i.e. $|G|I_R*\d_x=I_{Rx}$; more generally, for all $f\in L_2(G)$: $$ |G|I_R*f(y) =\sum I_R(z)f(z^{-1}y) =\sum_{z\in R} f(z^{-1}y) =\sum_{z\in R}f(zy)~. $$
The Cayley graph $(G,R)$ of a finite group $G$ with respect to the set $R$ is a simple graph with vertex set $G$ and the edges are all pairs $(x,y)$ such that $xRy$. If $(x,y)$ is an edge, the vertices $x$ and $y$ are called adjacent.
The set $R$ is said to be the set of generators of the Cayley graph. Since $R$ generates $G$, the Cayley graph $(G,R)$ is connected, i.e. for each pair $(x,y)\in G^2$ there is some $n\in\N_0$ such that $xy^{-1}\in R^n$. It's also a regular graph, i.e. for all $x\in G$ the cardinality of $Rx$, which is the set of vertices adjacent to $x$, is constant (actually it's $|R|$). To get a 3d-plot of the Cayley graph of $S(4)$ with respect to the set of all transpositions enter in sage:
G=SymmetricGroup(4)
# the six transpositions of S(4):
e1=G((1,2));e2=G((1,3));e3=G((1,4));e4=G((2,3));e5=G((2,4));e6=G((3,4))
gens=[e1,e2,e3,e4,e5,e6]
CG=G.cayley_graph(generators=gens)   # a directed graph, one edge per generator
C=CG.to_simple()                     # remove loops and multiple edges
PC=C.plot3d()                        # a 3d-plot of the graph
PC.show()
The function $d_R(x,y)\colon=\min\{n:xy^{-1}\in R^n\}$ defines a metric on $G$ and $B_1(e)=R\cup\{e\}$.
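In other words $d_R$ is just the graph distance in the Cayley graph $(G,R)$; e.g. for $S(4)$ and the transpositions above the diameter is quickly computed in sage - a small sketch:
G = SymmetricGroup(4)
R = [G(s) for s in [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]]
CG = G.cayley_graph(generators=R).to_undirected()
print(CG.diameter())    # 3: every permutation in S(4) is a product of at most 3 transpositions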
Let $A_R$ be the convolution operator $f\mapsto I_R*f$, then the relation $|G|I_R*\d_x=I_{Rx}$ simply says that the matrix (with respect to the orthonormal basis $\sqrt{|G|}\d_x$) of the operator $|G|A_R$ is the so called adjacency matrix of the Cayley graph $(G,R)$: the rows and columns of the adjacency matrix are indexed by $x,y\in G$ and $$ a(x,y)=1 \quad\mbox{if}\quad y\in Rx \quad\mbox{and $0$ otherwise.} $$ The operator $Q\colon=|G|A_R/|R|$ is a symmetric Markov operator, i.e. for all $x,y\in G$ the matrix $q(x,y)\colon=Q\d_x(y)=a(x,y)/|R|$ has the following properties: $q(y,x)=q(x,y)\geq0$ and for all $x\in G$: $\sum_yq(x,y)=1$ - actually $q(x,y)=1/|R|$ iff $y\in Rx$ and $q(x,y)=0$ otherwise. Thus it defines a simple random walk on $G$: jump with probability $1/|R|$ from $x$ to $y$ if $y\in Rx$. We'll return to this subject at the end of this subsection.
In graph theory there is a whole field called
spectral graph theory that tries to study graphs by the eigen-values of their adjacency matrices. For the Cayley graph all eigen-values can be computed, in principle: we just have to determine all eigen-values of $A_R$. By exam $b\in{\cal A}_\Psi$ is an eigen-function with eigen-value $\l$ for $A_R$ if and only if: $(\wh I_R(\Psi)-\l)\wh b(\Psi)=0$. Thus in general we have to compute the eigen-values of $\wh I_R(\Psi)$ for all $\Psi\in\wh G$. But there is a particular class of functions $f$ for which $\wh f$ is a multiple of the identity: the class functions.
$I_R$ is a class function if and only if for all $x\in G$: $xRx^{-1}\sbe R$.
$\proof$ $I_R\in Z(G)$ if and only if $I_R$ is constant on conjugacy classes, i.e. $I_R(xyx^{-1})=I_R(y)$, which is equivalent to: $x^{-1}Rx=R$ for all $x\in G$. But this in turn is equivalent to: $xRx^{-1}\sbe R$ for all $x\in G$. $\eofproof$
The metric $d_R$ is bi-invariant, i.e. $d_R(zx,zy)=d_R(x,y)=d_R(xz,yz)$ if and only if $I_R$ is a class function.
Given any metric $d$ on $G$ construct a bi-invariant metric by averaging.
If $I_R$ is a class function, then its Fourier transform is given by $$ \wh I_R(\Psi) =\frac{1}{\dim(\Psi)}\la I_R,\psi\ra\,1_{{\cal A}_\Psi} =\frac{1}{\psi(e)}\la I_R,\psi\ra\,1_{{\cal A}_\Psi}, \mbox{where}\quad \psi=\tr\Psi $$ and $1_{{\cal A}_\Psi}$ is the identity in $\Ma(\dim(\Psi),\C)$. Thus the eigen-spaces of $A_R$ are the subalgebras ${\cal A}_\Psi$, $\Psi\in\wh G$, and the corresponding eigen-values are $\{\la I_R,\psi\ra/\psi(e)\}$, i.e. each eigen-value $\la I_R,\psi\ra/\psi(e)$ has multiplicity $\dim(\Psi)^2$. The assumptions on $R$ imply that $\check I_R=I_R$ and thus by exam $A_R$ is self-adjoint, i.e. all eigen-values are real. Moreover, since $|\psi(x)|\leq\psi(e)$: $$ \Big|\frac{1}{\psi(e)}\la I_R,\psi\ra\Big| \leq\frac1{\psi(e)|G|}\sum_{x\in R}|\psi(x)| \leq\frac{|R|}{|G|} $$ and equality holds iff $\psi$ is constant on $R\cup\{e\}$, i.e. $\Psi(x)=id$ for all $x\in R\cup\{e\}$ and since the latter set generates $G$, $\Psi$ must be the trivial irreducible representation $\Psi_0$. Moreover, all eigen-values of $A_R$ lie in the interval $[-|R|/|G|,|R|/|G|]$.
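For instance for $S(3)$ and $R$ the set of its three transpositions the spectrum of the adjacency matrix, i.e. of $|G|A_R$, is quickly checked in sage - a small sketch:
G = SymmetricGroup(3)
R = [G(s) for s in [(1,2),(1,3),(2,3)]]
A = G.cayley_graph(generators=R).to_simple().adjacency_matrix()
print(A.charpoly().factor())   # the eigen-values of |G|A_R: 3, -3 and 0 with multiplicity 4
i.e. the eigen-values of $A_R$ are $1/2$, $-1/2$ and $0$, in accordance with the formula above.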
Compute the eigen-values of $A_R$ for the Cayley graph of the group $\Z_p$ with generators $R=\{\pm1\}$.
Compute all eigen-values of $A_R$ for the Cayley graph of the group $S(4)$ with generators $R=\{(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)\}$.

The Laplacian of the Cayley graph

As of now we will only deal with real valued functions $f\in L_2(G)$!
The Laplacian $\D_R:L_2(G)\rar L_2(G)$ of $(G,R)$ is defined by $$ \D_Rf(y) \colon=-\sum_{z\in Ry}(f(z)-f(y)) =-\sum_{x\in R}\nabla_xf(y) $$ where for all $x\in R$ and all $y\in G$: $\nabla_xf(y)\colon=f(xy)-f(y)$.
It is easily checked that $\D_R=|R|-|G|A_R$ or $$ \D_Rf(x)=|R|f(x)-\sum_{y\in G}a(x,y)f(y) $$ where $(a(x,y))$ is the adjacency matrix. It follows that $\D_R$ is self-adjoint and all its eigen-values lie in the interval $[0,2|R|]$, i.e. $\D_R$ is a positive semi-definite operator. The linear differential equation $u^\prime(t)=-\D_Ru(t)$ for $u:\R_0^+\rar L_2(G)$ is called the heat equation for the Cayley graph $(G,R)$; given $u(0)=f\in L_2(G)$, the unique solution is $u(t)=e^{-t\D_R}f$.
1. Compute the Laplacian $\D_p$ of the Cayley graph of the subgroup $\Z_p\colon=\{z\in S^1: z^p=1\}$ and $R$ the set $\{e^{\pm2\pi i/p}\}$. 2. For every smooth $1$-periodic function $f:\R\rar\C$ we have $f(t)=F(e^{2\pi it})$ for some smooth function $F:S^1\rar\C$. Then for all $z=e^{2\pi it}$: $$ \lim_{p\to\infty}p^2\D_pF(z)=-f^\dprime(t) $$
If $xRx^{-1}\sbe R$, then by the Character Theorem we get for the operator $P_t\colon=e^{-t\D_R}$: $$ P_tf=\sum_{\Psi\in\wh G}e^{-\l(\Psi)t}\dim(\Psi)\psi*f \quad\mbox{where}\quad \l(\Psi)\colon=|R|-\frac{|G|\la I_R,\psi\ra}{\dim(\Psi)} =|R|-\frac{\sum_{x\in R}\bar\psi(x)}{\dim(\Psi)}~. $$ are the eigen-values of $\D_R$. If $\Psi_0$ is the trivial irreducible representation, then $\l(\Psi_0)=|R|-|G|\la I_R,\psi\ra=0$ and the corresponding eigen-space is the space of constant functions. Thus if $$ \l_1\colon=\l_1(\D_R)\colon=\min\{\l(\Psi):\Psi\in\wh G, \Psi\neq\Psi_0\} $$ denotes the smallest non-zero eigen-value of $\D_R$ - the so called spectral gap of $\D_R$, then: $$ \forall f\in L_2(G):\quad \Vert P_t(f-\la f\ra)\Vert\leq e^{-\l_1t}\Vert f-\la f\ra\Vert, $$ where we conveniently write $\la f\ra\colon=\sum_{y\in G}f(y)/|G|$. In particular $\lim_{t\to\infty}P_tf=\la f\ra$. Moreover the rate of convergence is exponentially fast!
The only functions $f$ invariant under $P_t$, i.e. $P_tf=f$ for all $t > 0$, are the constant functions. This property is called ergodicity (or irreducibility) of $P_t$ and it also holds without the assumption $xRx^{-1}\sbe R$. In our setting this might be rephrased as follows: The normalized counting measure - that's the probability measure with density $1/|G|$ on $G$ - is the only $P_t$-invariant normalized measure on $G$.
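A small numerical illustration of this convergence in sage - the matrix exponential is computed over the reals, the chosen group and generating set are of course arbitrary:
G = SymmetricGroup(3)
R = [G(s) for s in [(1,2),(1,3),(2,3)]]
A = G.cayley_graph(generators=R).to_simple().adjacency_matrix()
L = len(R)*identity_matrix(G.order()) - A      # the matrix of the Laplacian: |R| - |G|A_R
f = vector(RDF, [1,0,0,0,0,0])                 # the indicator function of a single group element
for t in [0, 0.5, 1, 2]:
    print(t, matrix(RDF, -t*L).exp()*f)        # P_t f tends to the constant vector (1/6,...,1/6)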
Let's compute the eigen-values of $(\Z_p^d,R)$ for $R=\{\pm r_j:j=1,\ldots,d\}$, $r_j=(0,\ldots,0,1,0,\ldots,0)$ with $1$ in the $j$-th slot. The characters of $\Z_p^d$ are $\psi_y(x)=\exp(2\pi i\la x,y\ra/p)$, $y\in\Z_p^d$. For $p > 2$ we have $|R|=2d$; thus the eigen-values of $\D_R$ are $$ 2d-\sum_{j=1}^d(\bar\psi_y(r_j)+\bar\psi_y(-r_j)) =2(d-\sum_{j=1}^d\cos(2\pi y_j/p)) \quad\mbox{i.e.}\quad \l_1=2(1-\cos(2\pi/p))~. $$ For $p=2$ we have $|R|=d$ and therefore the eigen-values are $$ d-\sum_{j=1}^d\exp(-\pi iy_j) =d-\sum_{j=1}^d(-1)^{y_j} \quad\mbox{i.e.}\quad \l_1=d-(d-2)=2~. $$
Compute the eigen-values of the Laplacian for the Cayley graph of the group $S(4)$ with generators $R=\{(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)\}$.
Compute the eigen-values of the Laplacian for the Cayley graph of the group $S(5)$ with $R$ the set of all transpositions.
For the Laplacian of the Cayley graph of the group $S(n)$ with $R$ the set of all transpositions we have: $$ \l_1(\D_R)=|R|\inf\Big\{1-\frac{\psi(\t)}{\psi(e)}:\psi\neq1\Big\}, $$ where $\t$ is any transposition and the infimum ranges over all irreducible characters $\psi\neq1$. In particular for $n=3,4,5$ we have: $\l_1=3,4,5$. Thus for $S(n)$ we'd expect: $\l_1=n$, equivalently: the second largest eigen-value of the adjacency matrix is $n(n-3)/2$ or for every irreducible character $\chi\neq1$ of $S(n)$: $\chi(\t)/\chi(e)\leq(n-3)/(n-1)$. This is indeed the case and its proof employs the Frobenius character formula (cf. e.g. W. Fulton and J. Harris, p. 52).
By proposition all transpositions are conjugate; hence every character is constant on $R$ and $$ \l_1(\D_R) =\inf\{|R|-|G|\la I_R,\psi\ra/\psi(e):\psi\neq1\} =|R|\inf\{1-\psi(\t)/\psi(e):\psi\neq1\}~. $$ You may check the assertion on $\l_1(\D_R)$ for some $n$:
n=6; R=n*(n-1)/2                   # |R|: the number of transpositions in S(n)
G=SymmetricGroup(n)
e=G(()); t=G((1,2))
Chars=G.irreducible_characters()
ev=2*R                             # all eigen-values of the Laplacian lie in [0,2|R|]
for chi in Chars:
    ev1=R*(1-chi(t)/chi(e))        # the eigen-value corresponding to the character chi
    if ev1 != 0:
        ev=min(ev1,ev)
print(ev)                          # the spectral gap; for n=6 this prints 6
The family of operators $P_t\colon=e^{-t\D_R}$, $t\geq0$, is a so called conservative, symmetric Markov semi-group with generator $-\D_R$, i.e. it has the following properties:
  1. $P_0=id$ and for all $s,t\geq0$: $P_{t+s}=P_tP_s$ - it's a semi-group.
  2. For all $f\in L_2(G)$: $\ttd t P_tf=-\D_RP_tf$ - its generator is $-\D_R$.
  3. $P_t$ is self-adjoint and maps non negative functions to non negative functions - it's a symmetric Markovian semi-group.
  4. $P_t1=1$ (it's conservative) and for all $p\in[1,\infty]$: $\Vert P_t:L_p(G)\rar L_p(G)\Vert=1$.
$\proof$ The first two assertions are obvious, so let's prove the third: $P_t$ is self-adjoint, since $\D_R$ is self-adjoint. The matrix of $|G|A_R$ with respect to the orthonormal basis $\sqrt{|G|}\d_x$, $x\in G$, is the adjacency matrix, thus its entries are either $1$ or $0$. Hence the diagonal entries of the matrix of $-\D_R=|G|A_R-|R|$ are bounded from below by $-|R|$ and the off-diagonal entries are bounded from below by $0$. Adding $|R|$ to all diagonal entries, we conclude that the matrix $\exp(t(-\D_R+|R|1))$ must be non-negative, because all powers of a matrix with non-negative entries have non-negative entries. Now $e^{-t\D_R}=\exp(t(-\D_R+|R|))e^{-t|R|}$. 4. $\D_R1=0$ and since $u=1$ is a solution of the heat equation, it follows by uniqueness: $P_t1=1$, i.e. the sum of each row and each column of the matrix of $P_t$ (with respect to the orthonormal basis $\sqrt{|G|}\d_x$, $x\in G$) equals $1$. Finally put $p_t(y,x)\colon=P_t\d_x(y)=\la P_t\sqrt{|G|}\d_x,\sqrt{|G|}\d_y\ra$, then $$ P_tf(y)=\sum_x f(x)P_t\d_x(y)=\sum_x f(x)p_t(y,x)~. $$ Since $p_t(x,y)\geq0$ and $\sum_xp_t(y,x)=1=\sum_y p_t(y,x)$, we infer by Jensen's inequality (cf. e.g. wikipedia): $$ \Vert P_tf\Vert_p^p =\frac1{|G|}\sum_y|\sum_x f(x)p_t(y,x)|^p \leq\frac1{|G|}\sum_y\sum_x|f(x)|^pp_t(y,x) =\frac1{|G|}\sum_x|f(x)|^p =\Vert f\Vert_p^p~. $$ $\eofproof$
For all non-negative $f$ we have $\Vert P_tf\Vert_1=\Vert f\Vert_1$.
Let us remark that the semi-group $P_t=e^{-t\D_R}$ is sort of 'Poissonization' of a simple random walk. As we've noted above (cf. the remarks following exam) the operator $Q\colon=|G|A_R/|R|$ is a symmetric Markov operator, and thus its matrix entries $q(x,y)\colon=Q\d_x(y)$ with respect to the basis $\d_x$, $x\in G$, can be thought of as conditional probabilities of a (reversible) Markov chain $X_n$ - a simple (reversible) random walk $X_n$ on $(G,R)$: $\P(X_{n+1}=y|X_n=x)=q(y,x)$. Its powers $Q^nf$ thus describe the distribution of $X_n$ given the distribution $f$ of $X_0$. Now let $N_t$ be a Poisson process (independent of the random walk) with parameter $|R|$, i.e. $\P(N_t=n)=(|R|t)^ne^{-|R|t}/n!$, then if $f=\sum f(x)\d_x$ is the distribution of $X_0$: \begin{eqnarray*} \P(X_{N_t}=x) &=&\sum_{n=0}^\infty \P(X_n=x,N_t=n)\\ &=&\sum_{n=0}^\infty \P(X_n=x)\P(N_t=n) =\sum_{n=0}^\infty Q^nf(x)\frac{(|R|t)^n}{n!}e^{-|R|t}\\ &=&e^{t|R|Q-|R|t}f(x) =e^{t(|G|A_R-|R|)}f(x) =e^{-t\D_R}f(x) =P_tf(x) \end{eqnarray*} $P_t$ describes the distribution at time $t$ of a simple random walk, where the number of steps is $N_t$. So $P_tf(x)$ is the probability of finding the 'Poissonized random walker' in position $x\in G$ at time $t$ given $X_0$ has distribution $f$.
Show that the expected number of steps $\E N_t$ is given by $|R|t$.

Spectral gap inequality

Following D. Bakry and M. Emery ([BE], [B]) we define two bi-linear operators: \begin{eqnarray*} \G(f,g)&\colon=&\tfrac12(-\D_R(fg)+f\D_Rg+g\D_Rf) \quad\mbox{and}\\ \G_2(f,g)&\colon=&\tfrac12(-\D_R\G(f,g)+\G(f,\D_Rg)+\G(g,\D_Rf))~. \end{eqnarray*} Let us note somewhat more convenient formulas for $\G$ and $\G_2$: $$ \G(f,g)=\tfrac12\sum_{x\in R}\nabla_xf\nabla_xg \quad\mbox{and}\quad \G_2(f,f)=\tfrac12\sum_{x\in R}\Big(\G(\nabla_xf,\nabla_xf)-\nabla_xf[\D_R,\nabla_x]f\Big) $$ where $[X,Y]$ denotes the commutator $XY-YX$. The first formula follows from the definition of $\G$ and the relation $$ \nabla_x(fg)-f\nabla_xg-g\nabla_xf=\nabla_xf\nabla_xg~. $$ As for the second we note that the first relation implies: $\tfrac12(-\D_R)(f^2)=\G(f,f)-f\D_Rf$ and thus $$ \tfrac12(-\D_R)\G(f,f) =\tfrac14\sum_x(-\D_R)(\nabla_xf)^2 =\tfrac12\sum_x\Big(\G(\nabla_xf,\nabla_xf)+\nabla_xf(-\D_R)\nabla_xf\Big)~. $$ Now the second formula follows by the definition of $\G_2$. $\eofproof$
We also have the following integration by parts formulas: \begin{eqnarray*} \la\G(f,g)\ra&=&\la\D_Rf,g\ra=\la f,\D_Rg\ra,\\ \la\G_2(f,g)\ra&=&\tfrac12\la\G(f,\D_Rg)+\G(\D_Rf,g)\ra=\la\D_Rf,\D_Rg\ra, \end{eqnarray*} which follow from $\la\D_R f\ra=0$ and self-adjointness of $\D_R$. In particular the first re-proves that $\D_R$ is positive semi-definite. $\eofproof$
The following assertions are equivalent:
  1. For all $x\in G$: $xRx^{-1}\sbe R$.
  2. $R$ is the union of some conjugacy classes.
  3. $I_R$ is a class function.
  4. $d_R$ is bi-invariant.
  5. For all $x\in R$: $[\nabla_x,\D_R]=0$.
$\proof$ We only need to verify that the last assertion is equivalent to the first. \begin{eqnarray*} \nabla_x\D_Rf(y) &=&\D_Rf(xy)-\D_Rf(y) =-\sum_{z\in R}\big(f(zxy)-f(xy)-f(zy)+f(y)\big) \quad\mbox{and}\\ \D_R\nabla_xf(y) &=&-\sum_{z\in R}\big(\nabla_xf(zy)-\nabla_xf(y)\big) =-\sum_{z\in R}\big(f(xzy)-f(zy)-f(xy)+f(y)\big)~. \end{eqnarray*} Thus the commutator is given by $$ [\nabla_x,\D_R]f(y)=-\sum_{z\in R}\big(f(zxy)-f(xzy)\big) =-\sum_{u\in xR}\big(\nabla_{x^{-1}ux}f(y)-\nabla_{u}f(y)\big)~. $$ This vanishes for all $f$ and all $y\in G$ if and only if $x^{-1}(xR)x=xR$, i.e. $xR=Rx$; since $R$ generates $G$, this holds for all $x\in R$ iff $xRx^{-1}=R$ holds for all $x\in G$. $\eofproof$
If for all $x\in G$: $xRx^{-1}\sbe R$, then $$ \G(f,f)=\tfrac12\sum_{x\in R}(\nabla_xf)^2\quad\mbox{and}\quad \G_2(f,f)=\tfrac14\sum_{x,y\in R}(\nabla_x\nabla_yf)^2 $$
The following assertions are equivalent:
  1. For all $f\in L_2(G)$: $\Vert P_t(f-\la f\ra)\Vert\leq e^{-\l t}\Vert f-\la f\ra\Vert$.
  2. For all $f\in L_2(G)$: $\l\Vert f-\la f\ra\Vert^2\leq\la\G(f,f)\ra$ - this sort of inequality is usually called Poincaré inequality.
  3. For all $f\in L_2(G)$: $\l\la\G(f,f)\ra\leq\la\G_2(f,f)\ra$.
$\proof$ 1. $\Lrar$ 2.: We may assume $\la f\ra=0$. Put $F(t)\colon=e^{2\l t}\la P_tf,P_tf\ra$, then by integration by parts: $$ F^\prime(t) =2e^{2\l t}(\l\la P_tf,P_tf\ra-\la P_tf,\D_RP_tf\ra) =2e^{2\l t}(\l\la P_tf,P_tf\ra-\la\G(P_tf,P_tf)\ra)~. $$ Thus 2. implies that $F$ is decreasing, in particular: $\Vert P_tf\Vert\leq e^{-\l t}\Vert f\Vert$. Conversely, the first assumption yields: $F^\prime(0)\leq0$, which is the second assertion.
2. $\Rar$ 3.: By Cauchy-Schwarz and integration by parts we have: \begin{eqnarray*} \la\G(f,f)\ra&=&\la f,\D_Rf\ra =\la f-\la f\ra,\D_Rf\ra\\ &\leq&\la(f-\la f\ra)^2\ra^{1/2}\Vert\D_Rf\Vert_2 \leq\tfrac1{\sqrt{\l}}\la\G(f,f)\ra^{1/2}\la\G_2(f,f)\ra^{1/2}~. \end{eqnarray*} 3. $\Rar$ 2.: Put $F(t)=\la\G(P_tf,P_tf)\ra$, then by integration by parts: $$ F^\prime(t)=-2\la\G(\D_RP_tf,P_tf)\ra=-2\la\G_2(P_tf,P_tf)\ra, $$ i.e.: $F^\prime(t)\leq-2\l F(t)$ and thus: $F(t)\leq F(0)e^{-2\l t}$, in particular: $$ \Vert f-\la f\ra\Vert^2 =\la f^2\ra-\la f\ra^2 =2\int_0^\infty F(s)\,ds \leq2\int_0^\infty e^{-2\l s}\,ds\,\la\G(f,f)\ra =\frac1{\l}\la\G(f,f)\ra~. $$ $\eofproof$
Since $\D_R$ is self-adjoint and $\ker\D_R$ is the one dimensional space of constant functions, we have by the minimax principle of self-adjoint operators: $$ \l_1(\D_R) =\inf\{\la\D_Rf,f\ra:\norm f=1,\la f\ra=0\} =\inf\{\la\G(f,f)\ra:\norm f=1,\la f\ra=0\} $$
For $S(n)$ with $R$ the set of all transpositions we have $\l_1(\D_R)=n$ and thus for all $f\in L_2(S(n))$: $$ 0\leq\Vert P_tf\Vert^2-\la f\ra^2\leq e^{-2nt}\Vert f-\la f\ra\Vert^2 $$ 2. Let $Q$ denote the transition operator of the simple random walk on $(S(n),R)$, i.e. $Q=1-\D_R/|R|$; then for all $f\in L_2(S(n))$ orthogonal to the sign character: $$ 0\leq\Vert Q^kf\Vert^2-\la f\ra^2\leq(1-\tfrac2{n-1})^{2k}\Vert f-\la f\ra\Vert^2~. $$ 3. The diameter of the metric space $(S(n),d_R)$ is $n-1$.
1. This follows from $\la P_tf\ra=\la f\ra$ and $\norm{f-\la f\ra}^2=\norm f^2-\la f\ra^2$. 2. Since $Q=1-\D_R/|R|$, $\la Qf\ra=\la f\ra$ and on the orthogonal complement of the constants and the sign character all eigen-values of $Q$ are bounded in absolute value by $1-\l_1/|R|$ (cf. the following proposition and exam), we conclude as before: $$ \Vert Q^kf\Vert^2-\la f\ra^2 =\Vert Q^k(f-\la f\ra)\Vert^2 \leq(1-\l_1/|R|)^{2k}\Vert f-\la f\ra\Vert^2 =(1-\tfrac2{n-1})^{2k}\Vert f-\la f\ra\Vert^2~. $$ Remember that $P_t$ is the semi-group of the 'Poissonization' of a simple random walk on $(S(n),R)$ and $\E N_t=|R|t$, so we guess that it takes a number of steps $k$ of order $(n-1)\log(1/\e)$ for the simple random walk to get $\e$-close to the uniform distribution. Indeed for $k\geq\tfrac12(n-1)\log(1/\e)$ we have $$ (1-\tfrac2{n-1})^{2k} =(1-\tfrac2{n-1})^{4k(n-1)/2(n-1)} \leq e^{-4k/(n-1)} \leq\e^2~. $$ 3. Each permutation is the composition of disjoint cycles and a cycle of length $l$ is the composition of $l-1$ transpositions but not fewer; the maximum $n-1$ is attained at the $n$-cycles.
If $(G,R)$ is bipartite, i.e. if there is a decomposition $G=X\cup Y$ such that edges join only vertices in $X$ with vertices in $Y$, then $\l$ is an eigen-value of $\D_R$ iff $-\l+2|R|$ is. Thus the spectrum of $\D_R$ is symmetric about $|R|$.
$\proof$ Assume $f$ is an eigen-function of $\D_R$ for the eigen-value $\l$. Put $$ g(x)\colon=\left\{\begin{array}{cl} f(x)&\mbox{if $x\in X$}\\ -f(x)&\mbox{if $x\in Y$} \end{array}\right. $$ Then for $x\in X$: \begin{eqnarray*} \D_Rg(x) &\colon=&-\sum_{z\in Rx}(g(z)-g(x)) =-\sum_{z\in Rx}(-f(z)-f(x)) =-\sum_{z\in Rx}(-f(z)+f(x))+2|R|f(x)\\ &=&(-\D_Rf+2|R|f)(x) =(-\l+2|R|)f(x) =(-\l+2|R|)g(x) \end{eqnarray*} and for $x\in Y$: \begin{eqnarray*} \D_Rg(x) &\colon=&-\sum_{z\in Rx}(g(z)-g(x)) =-\sum_{z\in Rx}(f(z)+f(x)) =-\sum_{z\in Rx}(f(z)-f(x))-2|R|f(x)\\ &=&(\D_Rf-2|R|f)(x) =(\l-2|R|)f(x) =(-\l+2|R|)g(x) \end{eqnarray*} $\eofproof$
$S(n)$ with $R$ the set of all transpositions is bipartite with $X=[\sign=+1]$ and $Y=[\sign=-1]$. Moreover, in this case the sign character is an eigen-function of $\D_R$ for the eigen-value $2|R|$.
For the remaining part of this subsection let us assume that there exists a positive number $\l_1$ such that for all $f$ the following spectral gap (or Poincaré) inequality holds: \begin{equation}\label{cgreq1}\tag{CGR1} \la(f-\la f\ra)^2\ra\leq\tfrac1{\l_1}\la\G(f,f)\ra~. \end{equation}
For all convex functions $\vp:[a,b]\rar\R$ we have: $$ \vp\Big(\frac{a+b}2\Big) \leq\frac1{b-a}\int_a^b\vp(x)\,dx \leq\frac{\vp(a)+\vp(b)}2 $$ Cf. e.g. wikipedia
For $\vp(x)=e^x$ the right hand inequality implies: $$ \frac{e^b-e^a}{b-a}\leq\frac{e^b+e^a}2 \quad\mbox{and thus}\quad \Big(\frac{e^b-e^a}{b-a}\Big)^2\leq\frac12(e^{2b}+e^{2a})~. $$
Suppose $f$ satisfies $\G(f,f)\leq1$. Then for all $|t| < \sqrt{2\l_1}$: $$ \la e^{t(f-\la f\ra)}\ra \leq\Big(1-\tfrac{t^2}{2\l_1}\Big)^{-2}~. $$
$\proof$ We essentially follow the proof of S. Bobkov and M. Ledoux (Poincaré Inequalities and Talagrand's Concentration Phenomenon for the Exponential Measure). W.l.o.g. we may assume $\la f\ra=0$. By the previous inequality applied to $a=tf(y)/2$ and $b=tf(xy)/2$ we have for all $x,y\in G$: $$ \big(\nabla_xe^{tf/2}(y)\big)^2 \leq\frac{t^2}4\big(\nabla_xf(y)\big)^2\,\frac{e^{tf(xy)}+e^{tf(y)}}2~. $$ Averaging over $y\in G$, summing over all $x\in R$ and using the symmetry of $R$ (substitute $y\to x^{-1}y$ in the terms containing $e^{tf(xy)}$) we get $$ \la\G(e^{tf/2},e^{tf/2})\ra\leq\frac{t^2}{4}\la\G(f,f)e^{tf}\ra \leq\frac{t^2}{4}\la e^{tf}\ra~. $$ Put $y(t)\colon=\la e^{tf}\ra$; by the spectral gap inequality \eqref{cgreq1} applied to $e^{tf/2}$: $$ y(t)-y(t/2)^2 =\la(e^{tf/2}-\la e^{tf/2}\ra)^2\ra \leq\frac1{\l_1}\la\G(e^{tf/2},e^{tf/2})\ra \leq\frac{t^2}{4\l_1}\,y(t), $$ i.e. $y(t)\leq(1-\tfrac{t^2}{4\l_1})^{-1}y(t/2)^2$, in particular $y(t)\leq(1-\tfrac{t^2}{2\l_1})^{-1}y(t/2)^2$. Iterating this relation we obtain by employing $(1-x)^{2^n}\geq1-2^nx$, $\prod(1-x_n)\geq1-\sum x_n$ and $\lim_{N\to\infty}y(2^{-N}t)^{2^N}=1$ - which follows from $y(0)=1$ and $y^\prime(0)=0$: \begin{eqnarray*} y(t) &\leq&\Big(1-\tfrac{t^2}{2\l_1}\Big)^{-1} \Big(1-\tfrac{(t/2)^2}{2\l_1}\Big)^{-2}y(t/4)^4 \leq\cdots \leq\prod_{n=0}^{N-1}\Big(1-\tfrac{(t/2^n)^2}{2\l_1}\Big)^{-2^n}y(2^{-N}t)^{2^N}\\ &\leq&\Big(1-\tfrac{t^2}{2\l_1}\Big)^{-1} \prod_{n=1}^\infty\Big(1-\tfrac{t^2}{2^n\,2\l_1}\Big)^{-1} \leq\Big(1-\tfrac{t^2}{2\l_1}\Big)^{-1} \Big(1-\tfrac{t^2}{2\l_1}\sum_{n=1}^\infty 2^{-n}\Big)^{-1} =\Big(1-\tfrac{t^2}{2\l_1}\Big)^{-2}~. \end{eqnarray*} $\eofproof$
Using Chebyshev's inequality and an optimization argument we get the following version of a result by M. Gromov and V. Milman:
Let $\mu$ be the normalized counting measure. Then for all $f$ satisfying $\G(f,f)\leq1$: $$ \mu(|f-\la f\ra| > \e)\leq2 e^{-2\vp(\e\sqrt{\l_1})} \quad\mbox{where}\quad \vp(x)\colon=\sqrt{1+\tfrac12x^2}-1-\log\tfrac{1+\sqrt{1+\frac12x^2}}2~. $$
$\proof$ W.l.o.g. $\la f\ra=0$. By proposition and Chebyshev's inequality we get for all $t > 0$: $$ \mu(f > \e) =\mu(e^{tf-t\e} > 1) \leq\la e^{tf}\ra e^{-\e t} \leq\exp(-(t\e+2\log(1-t^2/2\l_1)))~. $$ For $\a,K>0$ the supremum $\sup\{t\e+\a\log(1-t^2/K):\,t>0\}$ is given by $$ \a\left(\sqrt{1+\tfrac{K\e^2}{\a^2}}-1- \log\tfrac12\left(1+\sqrt{1+\tfrac{K\e^2}{\a^2}}\right)\right)~. $$ i.e. for $\a=2$ and $K=2\l_1$: $$ 2\left(\sqrt{1+\tfrac{2\l_1\e^2}{4}}-1- \log\tfrac12\left(1+\sqrt{1+\tfrac{2\l_1\e^2}{4}}\right)\right)~. $$ $\eofproof$
Since $\lim_{x\to0}\vp(x)/x^2=1/8$ and $\lim_{x\to\infty} \vp(x)/x=1/\sqrt2$, the exponent is of order $\l_1\e^2/4$ for $\l_1\e^2 < 1$ and for $1\ll\l_1\e^2$ of order $\e\sqrt{2\l_1}$.
The notion 'concentration of measure' (cf. M. Ledoux; The Concentration of Measure Phenomenon) probably dates back to the late 60's and early 70's when V. Milman put forth local Banach Space Theory. It simply means that a certain type of functions is "almost constant". Usually this type of functions are Lipschitz functions; in our case these functions $f$ are characterized by the uniform boundedness of $\G(f,f)$.