← Index → Representation of Finite Groups

What should you be acquainted with?

1. Linear Algebra, in particular inner product spaces, also called spaces with a euclidean product, both over the real and the complex numbers. Our standard reference is Introduction to Linear Algebra by Serge Lang. You may also be interested in Linear Algebra with Python by J. Brownlee. 2. Basics in Functional Analysis and Measure Theory. 3. Very basics in Group Theory, Linear Differential Equations and Complex Analysis.

Acknowledgments

Thanks to J. Obrovsky and T. Speckhofer for valuable corrections!

Symmetries

Subgroups of $\OO(3)$

Cf. the students' version in German

Symmetries of a molecule

The symmetries of a molecule are, of course, the symmetries of a geometric model of a molecule. As for this model, we assume that it comprises pointlike atoms at fixed positions in euclidean space. A symmetry is just a linear isometry (also called orthogonal transformation) $g:\R^3\rar\R^3$ which maps the model onto itself in such a way that the original model and its image are indistinguishable. The set of all these operations forms a subgroup of the orthogonal group $\OO(3)$; it is called the group of symmetries of the molecule, cf. wikipedia. We say that two symmetries $g_1$ and $g_2$ are of the same type if they are conjugate in $\OO(3)$, i.e. $g_2=ug_1u^{-1}$ for some $u\in\OO(3)$. Equivalently: $g_1$ and $g_2$ are of the same type if there exist orthonormal bases such that the matrix of $g_1$ with respect to the first basis and the matrix of $g_2$ with respect to the second basis coincide. The concept of conjugacy (cf. subsection), which is widely used in group theory, is more restrictive than the notion of similarity of matrices. The type of an orthogonal transformation $g\in\OO(3)$ can essentially be characterized by two numbers, $\det g$ and $\tr g$: by the spectral theorem of linear algebra we can find for every orthogonal operator $g\in\OO(3)$ a positively oriented orthonormal basis $b_1,b_2,b_3$, a sign $\e\in\{\pm1\}$ and a unique real number $\theta\in(-\pi,\pi]$ such that: $$ g(b_1)=\e b_1,\quad g(b_2)=\cos(\theta)\,b_2+\sin(\theta)b_3,\quad g(b_3)=-\sin(\theta)\,b_2+\cos(\theta)b_3~. $$ In other words: the matrix of $g$ with respect to the basis $b_1,b_2,b_3$ has the following form \begin{equation}\label{dsr3eq1}\tag{SGO1} \left(\begin{array}{ccc} \e&0&0\\ 0&\cos\theta&-\sin\theta\\ 0&\sin\theta&\cos\theta \end{array}\right), \end{equation} thus $\det g=\e$ and $\tr g=\e+2\cos\theta$.
If $\det g=1$ then $g$ is said to be a proper rotation and the subgroup $\SO(3)\colon=\{g\in\OO(3):\det g=1\}$ is called the special orthogonal group; the number $\theta\in(-\pi,\pi]$ is essentially determined by $\cos\theta=(\tr g-1)/2$ and is called the angle of rotation. Since $g(b_1)=b_1$, $b_1$ or $\R b_1$ is said to be the axis of rotation. If $\e=\det g=-1$ then $g$ is called an improper rotation, $b_1$ or $\R b_1$ the (improper) axis of rotation, because $g(b_1)=-b_1$, and $\theta$, which satisfies $\cos\theta=(\tr g+1)/2$, the (improper) angle of rotation. Of course symmetries of the same type have the same trace and the same determinant, but a little bit more is true:
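Both invariants are easy to test numerically. The following plain Python sketch (not sage, just the standard library) builds the canonical matrix \eqref{dsr3eq1} and verifies $\det g=\e$ and $\tr g=\e+2\cos\theta$:

```python
import math

def canonical(eps, theta):
    # matrix (SGO1) of an orthogonal map with respect to the adapted basis b1, b2, b3
    c, s = math.cos(theta), math.sin(theta)
    return [[eps, 0, 0], [0, c, -s], [0, s, c]]

def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion along the first row
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def tr(m):
    return m[0][0] + m[1][1] + m[2][2]

for eps in (1, -1):
    for theta in (0.0, math.pi/3, math.pi/2, 2.0, math.pi):
        g = canonical(eps, theta)
        assert abs(det3(g) - eps) < 1e-12
        assert abs(tr(g) - (eps + 2*math.cos(theta))) < 1e-12
```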
Similar symmetries have the same determinant and the same trace. Suggested solution.
In general we do not distinguish notationally a linear operator $A\in\Hom(E)$ of an $n$-dimensional vector space $E$ over the field $\bK$ from the matrix $A\in\Ma(n,\bK)$ of this operator with respect to a fixed basis: usually the context dictates what is meant! In the sequel we will stick to the notions common in chemistry for symmetries. Usually a group of symmetries is a finite subgroup of the orthogonal group; there are only a few exceptions: for homonuclear diatomic molecules like $\chem{O_2}$ or linear molecules like $\chem{CO_2}$ the groups of symmetries are infinite! Suppose the group of symmetries is finite; then each symmetry of the molecule is a composition of a finite set of elementary types of symmetries: the identity $E$, the inversion $I:x\mapsto-x$, rotations $C_n$ by the angle $2\pi/n$, reflections $\s$ about a plane, and improper rotations $S_n$, i.e. rotations by $2\pi/n$ followed by the reflection about the plane orthogonal to the axis. Of course, $E$, $I$ and $\s$ are special cases of rotations or improper rotations. A molecule may have different $C_n$- or $S_n$-axes; in any case, in chemistry the axis with the largest value of $n$ is called the main axis.
Prove that $\tr C_n^k=1+2\cos(2\pi k/n)$, $\tr S_n^k=(-1)^k+2\cos(2\pi k/n)$.
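Assuming $C_n$ and $S_n$ are written in the canonical form \eqref{dsr3eq1} with $\theta=2\pi/n$ and $\e=+1$ resp. $\e=-1$, the claimed traces can be spot-checked numerically; a plain Python sketch:

```python
import math

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def rot(eps, theta):
    # canonical matrix (SGO1): eps = +1 gives C_n, eps = -1 gives S_n for theta = 2*pi/n
    c, s = math.cos(theta), math.sin(theta)
    return [[eps, 0, 0], [0, c, -s], [0, s, c]]

def power(m, k):
    r = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for _ in range(k):
        r = matmul(r, m)
    return r

def tr(m):
    return m[0][0] + m[1][1] + m[2][2]

n = 5
for k in range(1, 2*n + 1):
    assert abs(tr(power(rot(1, 2*math.pi/n), k)) - (1 + 2*math.cos(2*math.pi*k/n))) < 1e-9
    assert abs(tr(power(rot(-1, 2*math.pi/n), k)) - ((-1)**k + 2*math.cos(2*math.pi*k/n))) < 1e-9
```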
Let $e_1,e_2,e_3$ be any positive orthonormal basis of $\R^3$. Then $b_1=(e_1-e_2)/\sqrt2$, $b_2=(e_1+e_2-2e_3)/\sqrt6$ and $b_3=(e_1+e_2+e_3)/\sqrt3$ is another orthonormal basis. Determine the type of symmetry of the isometry $U$ defined by $Ue_j=b_j$.
Remember the following definitions from group theory: 1. A mapping $F:G\rar H$ of a group $G$ into a group $H$ is called a homomorphism if for all $x,y\in G$: $F(xy)=F(x)F(y)$. If in addition $F$ is a bijection, then $F$ is said to be an isomorphism. 2. If $G,H$ are groups then $G\times H$ with the binary operation $(g_1,h_1)(g_2,h_2)\colon=(g_1g_2,h_1h_2)$ is a group. 3. A subset $H$ of a group $(G,\cdot)$ is called a subgroup (of $G$) if for all $x,y\in H$: $x\cdot y^{-1}\in H$; this is the case if and only if $(H,\cdot)$ is a group. 4. If $A$ is a subset of a group $G$ then the "smallest" subgroup $H$ of $G$ containing $A$ is called the group generated by $A$, i.e. $H$ is the intersection of all subgroups of $G$ containing the set $A$. The elements of $A$ are called generators of $H$.
The subgroup of $(\R,+)$ generated by $A=\{1\}$ is $\Z$.
Determine the group table, usually called Cayley table (cf. wikipedia), of the group $\Z_2^2$.
In sage this can be done as follows:
G=direct_product_permgroups([CyclicPermutationGroup(2),CyclicPermutationGroup(2)])
labels=['(0,0)','(1,0)','(0,1)','(1,1)']
CG=G.cayley_table(names=labels)
print(CG)
This will give you the Cayley table of $\Z_2^2$, elements labeled $(0,0),(1,0),(0,1),(1,1)$: $$ \begin{array}{c|cccc} \Z_2^2&(0,0)&(1,0)&(0,1)&(1,1)\\ \hline (0,0)&(0,0)&(1,0)&(0,1)&(1,1)\\ (1,0)&(1,0)&(0,0)&(1,1)&(0,1)\\ (0,1)&(0,1)&(1,1)&(0,0)&(1,0)\\ (1,1)&(1,1)&(0,1)&(1,0)&(0,0) \end{array} $$ Put $G=\{e,a,b,c\}$ and define a group structure on $G$ by the following Cayley table $$ \begin{array}{c|cccc} G&e&a&b&c\\ \hline e&e&a&b&c\\ a&a&e&c&b\\ b&b&c&e&a\\ c&c&b&a&e \end{array} $$ Then $F(e)\colon=(0,0)$, $F(a)\colon=(1,0)$, $F(b)\colon=(0,1)$, $F(c)\colon=(1,1)$ defines an isomorphism $F:G\rar\Z_2^2$.
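The table can also be reproduced without sage; a minimal plain Python sketch with pairs of bits as group elements:

```python
from itertools import product

elements = [(0, 0), (1, 0), (0, 1), (1, 1)]
add = lambda g, h: tuple((x + y) % 2 for x, y in zip(g, h))

# print the Cayley table row by row
for g in elements:
    print(g, [add(g, h) for h in elements])

# Z_2^2 is abelian and every element is its own inverse
assert all(add(g, h) == add(h, g) for g, h in product(elements, repeat=2))
assert all(add(g, g) == (0, 0) for g in elements)
```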
Determine the group table of the subgroup $G$ of $\OO(3)$ generated by the reflections $\s(xy)$, $\s(yz)$ and $\s(zx)$.
Prove that the group in the previous example is isomorphic to $\Z_2^3$. Suggested solution

Matrices of symmetries

For $t\in\R$ let $A\in\Hom(\R^2)$ be the linear map with matrix (with respect to a fixed ONB) $$ \left(\begin{array}{cc} 0&-t\\ t&0 \end{array}\right) $$ Prove that $A^*=-A$ and that the matrix of $\exp(A)\colon=\sum_{k\geq0} A^k/k!$ is given by $$ \left(\begin{array}{cc} \cos t&-\sin t\\ \sin t&\cos t \end{array}\right) $$
Given a matrix of a rotation we can easily figure out its axis and its angle of rotation. But sometimes we have to go the other way round; we need to find a matrix representation $U$ of a rotation with given axis $b$, $\norm b=1$, and angle of rotation $t$, with respect to a given positive orthonormal basis $e_1,e_2,e_3$. How to accomplish this? First we compute the matrices $A$ and $P$ (with respect to $e_1,e_2,e_3$) of the linear map $x\mapsto b\times x$ and the orthogonal projection $x\mapsto x-\la x,b\ra b$ onto $b^\perp$ respectively.
By employing vector product rules (cf. wikipedia) prove that $A^*=-A$ and $A^2=-P$. Suggested solution
We claim that $U$ is given by: $$ U=\exp(tA)=1+(\cos(t)-1)P+\sin(t)A~. $$ Indeed, we have $Ub=b$ and for a unit vector $x\perp b$: $Ux=\cos(t)x+\sin(t)b\times x$; now $b\times x$ is in the plane $b^\perp$, it's orthogonal to $x$ and its norm is $1$; thus $b,x,b\times x$ is a positive orthonormal basis and it's mapped to $b$, $\cos(t)x+\sin(t)b\times x$ and $\cos(t)b\times x-\sin(t)x$. Hence $U|b^\perp$ is a rotation in $b^\perp$ by $t$. Alternatively one may verify: $U^*U=1$, $\det U=+1$ and $\tr U=1+2\cos t$, which implies that $U$ is a proper rotation by $t$ or $-t$.
Compute $A$ and $P$ for $b=(e_1+2e_2-e_3)/\sqrt6$, where $e_1,e_2,e_3$ is a positive orthonormal basis of $\R^3$.
$Ae_1=b\times e_1=(-e_2-2e_3)/\sqrt6$, $Ae_2=(e_1+e_3)/\sqrt6$, $Ae_3=(2e_1-e_2)/\sqrt6$ and thus the matrix of $A$ with respect to the basis $e_1,e_2,e_3$: $$ A=\frac1{\sqrt6}\left(\begin{array}{ccc} 0&1&2\\ -1&0&-1\\ -2&1&0 \end{array}\right) \quad\mbox{and}\quad P=-A^2=\frac1{6}\left(\begin{array}{ccc} 5&-2&1\\ -2&2&2\\ 1&2&5 \end{array}\right) $$
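These hand computations are easy to double-check; a plain Python sketch with matrices as nested lists, which also verifies the trace formula $\tr U=1+2\cos t$ for a sample angle:

```python
import math

b = [1/math.sqrt(6), 2/math.sqrt(6), -1/math.sqrt(6)]

# matrix of the map x -> b × x
A = [[0, -b[2], b[1]],
     [b[2], 0, -b[0]],
     [-b[1], b[0], 0]]

# matrix of the orthogonal projection onto the plane b-perp: P = 1 - b b^T
P = [[(1 if i == j else 0) - b[i]*b[j] for j in range(3)] for i in range(3)]

# compare with the matrices computed above
assert abs(A[0][1] - 1/math.sqrt(6)) < 1e-12 and abs(A[2][0] + 2/math.sqrt(6)) < 1e-12
assert all(abs(P[i][j] - [[5, -2, 1], [-2, 2, 2], [1, 2, 5]][i][j]/6) < 1e-12
           for i in range(3) for j in range(3))

# the rotation U = 1 + (cos t - 1) P + sin t A has trace 1 + 2 cos t
t = 1.2
U = [[(1 if i == j else 0) + (math.cos(t) - 1)*P[i][j] + math.sin(t)*A[i][j]
      for j in range(3)] for i in range(3)]
assert abs(sum(U[i][i] for i in range(3)) - (1 + 2*math.cos(t))) < 1e-12
```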
Compute $A$ and $P$ for $b=(-1,1,-1)/\sqrt3$.
For the improper rotation about the axis $b$, $\norm b=1$, by the angle $t$ we get: $$ V\colon=-1+(\cos(t)+1)P+\sin(t)A~. $$ Of course this can be verified along the same lines. However, this time we establish the identity $V^*V=1$ by brute force calculation: notice that $A^*=-A$, $P^*=P$, $A^2=-P$ and $AP=PA$: \begin{eqnarray*} V^*V &=&(-1+(1+\cos t)P-\sin t\,A)(-1+(1+\cos t)P+\sin t\,A)\\ &=&1+(1+\cos t)^2P+\sin^2tP -(1+\cos t)P-\sin t\,A\\ &&-(1+\cos t)P+\sin t(1+\cos t)PA +\sin t\,A-\sin t(1+\cos t)AP\\ &=&1+(1+2\cos t+\cos^2t+\sin^2t-2-2\cos t)P=1~. \end{eqnarray*} As for the determinant choose some normalized vector $e\in b^\perp$, then $b,e,b\times e$ is a positive orthonormal basis and it's mapped by $V$ to $-b,b\times e,-e$, which is a negative orthonormal basis. It follows that $\det V=-1$.
Prove that $\tr V=-1+2\cos t$.
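All three identities - $V^*V=1$, $\det V=-1$ and $\tr V=-1+2\cos t$ - can also be checked numerically; a plain Python sketch for a sample axis and angle:

```python
import math

def improper(b, t):
    # V = -1 + (cos t + 1) P + sin t A for the unit axis b
    A = [[0, -b[2], b[1]], [b[2], 0, -b[0]], [-b[1], b[0], 0]]
    P = [[(1 if i == j else 0) - b[i]*b[j] for j in range(3)] for i in range(3)]
    return [[-(1 if i == j else 0) + (math.cos(t) + 1)*P[i][j] + math.sin(t)*A[i][j]
             for j in range(3)] for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

s3 = math.sqrt(3)
V = improper([-1/s3, 1/s3, -1/s3], 0.7)
Vt = [[V[j][i] for j in range(3)] for i in range(3)]
VtV = matmul(Vt, V)
assert all(abs(VtV[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(3) for j in range(3))
assert abs(det3(V) + 1) < 1e-12
assert abs(sum(V[i][i] for i in range(3)) - (-1 + 2*math.cos(0.7))) < 1e-12
```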
Compute the matrix of the rotation about the axis $\R(1,1,1)$ by $\pi/3$ with respect to the canonical basis.
Compute the matrix of the improper rotation about the axis $\R(1,-1,1)$ by $\pi/2$.
A particular case $(t=0)$ of an improper rotation is a reflection $\s$ about the plane orthogonal to $b$: $$ \s(x)=-x+2Px=-x+2(x-\la x,b\ra b)=x-2\la x,b\ra b~. $$
The composition of reflections about three mutually orthogonal planes through $0$ equals the inversion. Suggested solution

The group $C_{nv}$

This group is generated by a $C_n$-rotation and a reflection $\s_1$ about a plane vertical to the plane of rotation such that $\s_1C_n=C_n^{-1}\s_1$ - the latter implies: $\s_1C_n^k=C_n^{-k}\s_1$. In math it's called the dihedral group: the group of symmetries of the regular $n$-gon, and thus it's in fact a subgroup of $\OO(2)$, cf. wikipedia. $C_{nv}$ contains exactly $2n$ symmetries: the rotations $C_n^0=E,\ldots,C_n^{n-1}$ and the reflections $\s_l=\s_1C_n^{l-1}$, $l=1,\ldots,n$.
Verify that $C_n^k\s_l=\s_{l-k}$, $\s_lC_n^k=\s_{l+k}$ and $\s_l\s_k=C_n^{k-l}$ with addition (subtraction) modulo $n$, where $l,k\in\{1,2,\ldots,n\}$.
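One way to verify these relations is to model $C_{nv}$ in the plane of rotation, letting $C_n$ act as multiplication by $e^{2\pi i/n}$ and $\s_1$ as complex conjugation (this complex picture is spelled out below); a plain Python sketch:

```python
import cmath

n = 7
w = cmath.exp(2j*cmath.pi/n)

def C(k):                 # the rotation C_n^k: z -> w^k z
    return lambda z: w**k * z

def S(l):                 # the reflection sigma_l = sigma_1 C_n^{l-1}, sigma_1(z) = conj(z)
    return lambda z: (w**(l - 1) * z).conjugate()

def same(f, g):           # compare two maps on a few sample points
    pts = [1 + 0j, 0.3 + 0.8j, -1.1 + 0.2j]
    return all(abs(f(z) - g(z)) < 1e-9 for z in pts)

for l in range(1, n + 1):
    for k in range(n):
        assert same(lambda z: C(k)(S(l)(z)), S((l - k - 1) % n + 1))  # C_n^k s_l = s_{l-k}
        assert same(lambda z: S(l)(C(k)(z)), S((l + k - 1) % n + 1))  # s_l C_n^k = s_{l+k}
for l in range(1, n + 1):
    for m in range(1, n + 1):
        assert same(lambda z: S(l)(S(m)(z)), C((m - l) % n))          # s_l s_m = C_n^{m-l}
```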
Only $C_{2v}$ is commutative, it's the symmetry group of water $\chem{H_2O}$ and its Cayley table is given by: $$ \begin{array}{c|cccc} C_{2v}&E&C_2&\s_1&\s_2\\ \hline E &E &C_2 &\s_1&\s_2\\ C_2 &C_2 &E &\s_2&\s_1\\ \s_1&\s_1&\s_2&E &C_2\\ \s_2&\s_2&\s_1&C_2 &E \end{array} $$ where $\s_1(x,y,z)=(x,-y,z)$, $\s_2(x,y,z)=(-x,y,z)$, $C_2(x,y,z)=(-x,-y,z)$ and the water molecule being placed in the $yz$-plane! The mapping $E\mapsto(0,0)$, $C_2\mapsto(1,1)$, $\s_1\mapsto(0,1)$, $\s_2\mapsto(1,0)$ is easily checked to establish an isomorphism $C_{2v}\rar\Z_2^2$.
[Figure: water]
In complex notation the reflection about the line generated by a unit vector $p$ - i.e. $|p|=1$ - is given by $z\mapsto p^2\bar z$, and the rotation by an angle $\vp$ by $z\mapsto e^{i\vp}z$. For $C_{nv}$, $n\geq3$, we have $\vp=2\pi/n$ and $p=1$, i.e. $C_n(z)=e^{2\pi i/n}z$ and $\s(z)=\bar z$.
1. Every rotation in $\R^2$ is the composition of two reflections. 2. Every proper rotation in $\R^3$ is the composition of two reflections about planes. 3. Every improper rotation in $\R^3$ is the composition of three reflections about planes. These are special cases of the Cartan-Dieudonné Theorem
$C_{3v}$ is the group of symmetries of ammonia $\chem{NH_3}$ - we assume that the hydrogen atoms form an equilateral triangle in the $xy$-plane and the nitrogen atom is located right over the center of the triangle. Denoting the rotations by $E$, $C_3$, $C_3^*=C_3^{-1}$ and the reflections by $\s_1,\s_2$ and $\s_3$, the following sage code
D=DihedralGroup(3)
CD=D.cayley_table()
head=CD.row_keys()
print(head)
labels=['E',r'\s_1',r'\s_3','C_3','C_3^*',r'\s_2']
CD=D.cayley_table(names=labels)
latex(CD)
will give you the Cayley table of $C_{3v}$: $$ \begin{array}{c|cccccc} C_{3v}&E&\s_1&\s_3&C_3&C_3^*&\s_2\\ \hline E&E&\s_1&\s_3&C_3&C_3^*&\s_2\\ \s_1&\s_1&E&C_3&\s_3&\s_2&C_3^*\\ \s_3&\s_3&C_3^*&E&\s_2&\s_1&C_3\\ C_3&C_3&\s_2&\s_1&C_3^*&E&\s_3\\ C_3^*&C_3^*&\s_3&\s_2&E&C_3&\s_1\\ \s_2&\s_2&C_3&C_3^*&\s_1&\s_3&E\\ \end{array} $$ It's isomorphic to $S(3)$: indeed, every symmetry $g\in C_{3v}$ permutes the hydrogen atoms and this association establishes an isomorphism!
[Figure: ammonia]
The dihedral group $C_{nv}$ is isomorphic to a subgroup of $\SO(3)$. Suggested solution.

The group $C_{nh}$

This commutative group is generated by a $C_n$ rotation and a reflection $\s$ about the plane of rotation. The group comprises $2n$ isometries: $\s^jC_n^k$, $k=0,\ldots,n-1$, $j=0,1$. $C_{2h}$ is again isomorphic to $\Z_2^2$.

$C_n$: the cyclic group of order $n$

This group needs just one $C_n$ generator; it's commutative and isomorphic to $\Z_n$, i.e. $\{0,\ldots,n-1\}$ with addition modulo $n$.

$T_d$: the group of symmetries of the tetrahedron

Let $e_1,e_2,e_3$ be an orthonormal basis of $\R^3$. The vectors $a_1=e_1+e_2+e_3$, $a_2=-e_1+e_2-e_3$, $a_3=e_1-e_2-e_3$ and $a_4=-e_1-e_2+e_3$ constitute the vertices of a regular tetrahedron $T$ in $\R^3$.
[Figure: tetrahedron]
Which types of symmetries do the following matrices represent and how do they act on the vertices? $$ \left(\begin{array}{ccc} 1&0&0\\ 0&1&0\\ 0&0&1 \end{array}\right), \left(\begin{array}{ccc} 0&1&0\\ 0&0&1\\ 1&0&0 \end{array}\right), \left(\begin{array}{ccc} 1&0&0\\ 0&-1&0\\ 0&0&-1 \end{array}\right), \left(\begin{array}{ccc} -1&0&0\\ 0&0&1\\ 0&-1&0 \end{array}\right), \left(\begin{array}{ccc} 0&0&-1\\ 0&1&0\\ -1&0&0 \end{array}\right) $$ The group generated by these symmetries is the group of symmetries of the tetrahedron; it's actually isomorphic to $S(4)$. The list above is of course not a minimal list of generators!
Project the canonical basis $e_1,\ldots,e_{n+1}$ of $\R^{n+1}$ orthogonally to the subspace $H$ orthogonal to the unit vector $N\colon=(e_1+\cdots+e_{n+1})/\sqrt{n+1}$, i.e. $H=[\la.,N\ra=0]\colon=\{x\in\R^{n+1}:\la x,N\ra=0\}$. Prove that this gives $n+1$ points $a_1,\ldots,a_{n+1}\in H$ such that for all $j\neq k$: $\norm{a_j-a_k}=\sqrt2$.
The orthogonal projection $P$ is an isometry (in the euclidean sense) from the positive face $K\colon=\convex{e_1,\ldots,e_{n+1}}=B_1^{n+1}\cap(H+N/\sqrt{n+1})$ of the convex set $B_1^{n+1}\colon=\{x\in\R^{n+1}:\sum|x_j|\leq1\}$ onto $C\colon=\convex{a_1,\ldots,a_{n+1}}$, for $Px=x-\la x,N\ra N$ simply moves the hyperplane $H+N/\sqrt{n+1}=[\la.,N\ra=1/\sqrt{n+1}]$ to the subspace $H=[\la.,N\ra=0]$. Thus for all $j\neq k$: $\norm{a_j-a_k}=\norm{e_j-e_k}=\sqrt 2$.
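For the skeptical reader, a quick numerical check of this construction; a plain Python sketch for $n=3$, i.e. the tetrahedron obtained by projecting the canonical basis of $\R^4$:

```python
import math
from itertools import combinations

n = 3
N = [1/math.sqrt(n + 1)]*(n + 1)            # unit normal of the hyperplane H

def proj(x):                                 # P x = x - <x, N> N
    c = sum(xi*ni for xi, ni in zip(x, N))
    return [xi - c*ni for xi, ni in zip(x, N)]

# project the canonical basis vectors e_1, ..., e_{n+1}
a = [proj([1 if i == j else 0 for i in range(n + 1)]) for j in range(n + 1)]
for p, q in combinations(a, 2):
    d = math.sqrt(sum((pi - qi)**2 for pi, qi in zip(p, q)))
    assert abs(d - math.sqrt(2)) < 1e-12     # all pairwise distances are sqrt(2)
```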
The group of symmetries of the convex hull $C\colon=\convex{a_1,\ldots,a_{n+1}}$ is isomorphic to the symmetric group $S(n+1)$ of all permutations of $n+1$ distinct objects.

$O_h$: the group of symmetries of the cube and the octahedron

Octahedron is just another name for the set $B_1^3$. Suppose $\pi\in S(3)$ is a permutation and $\e_1,\e_2,\e_3\in\{\pm1\}$. The group of symmetries of the cube consists of all linear isometries $g\in\OO(3)$, given by $g:e_k\mapsto\e_k e_{\pi(k)}$. The order of this group is thus $2^3\cdot 3!=48$.
[Figure: octahedron]
Obviously the trace and the determinant of the symmetry $g$ are given by: $\tr g=\sum\e_k\d_{k\pi(k)}$ and $\det g=\e_1\e_2\e_3\sign(\pi)$. A permutation $\pi\in S(3)$ can only have one or three fixed points or none; in the first case the trace may have the values $\pm1$; in the second case its possible values are $\pm1,\pm3$ and in the latter case it can only have the value $0$. Thus $\tr g=\pm1+2\cos(2\pi/n)\in\{-3,-1,0,1,3\}$ or $2\cos(2\pi/n)\in\{-2,-1,0,1,2\}$, implying: $2\pi/n\in\{\pi,2\pi/3,\pi/2,\pi/3\}$ or $n\in\{2,3,4,6\}$ - the remaining value $2\cos(2\pi/n)=2$ corresponds to the identity.
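Since the group of the cube consists exactly of the signed permutation matrices, its order and the list of possible traces can be verified by brute force; a plain Python sketch:

```python
from itertools import permutations, product

group = []                                   # all signed 3x3 permutation matrices
for pi in permutations(range(3)):
    for eps in product((1, -1), repeat=3):
        g = [[0]*3 for _ in range(3)]
        for k in range(3):
            g[pi[k]][k] = eps[k]             # g: e_k -> eps_k e_{pi(k)}
        group.append(tuple(map(tuple, g)))

assert len(set(group)) == 48                 # the order is 2^3 * 3! = 48
traces = {sum(g[i][i] for i in range(3)) for g in group}
assert traces == {-3, -1, 0, 1, 3}
```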
Identify the symmetry type and the axes of the following matrices for $\e=\pm1$: $$ \left(\begin{array}{ccc} -1&0&0\\ 0&\e&0\\ 0&0&1 \end{array}\right), \left(\begin{array}{ccc} 1&0&0\\ 0&0&\e\\ 0&1&0 \end{array}\right), \left(\begin{array}{ccc} -1&0&0\\ 0&0&\e\\ 0&1&0 \end{array}\right), \left(\begin{array}{ccc} 0&0&\e\\ 1&0&0\\ 0&1&0 \end{array}\right), \left(\begin{array}{ccc} 0&0&\e\\ -1&0&0\\ 0&1&0 \end{array}\right). $$
Again, this is not a minimal list of generators! We remark that the reflections $\s_1$ and $\s_2$ are of course conjugate in $\OO(3)$ and hence of the same type; however, they are not conjugate in $O_h$, i.e. there is no $g\in O_h$ such that $\s_2=g\s_1g^{-1}$!
Computer algebra programs usually generate groups as subgroups of some $S(n)$, cf. Cayley's Theorem or exam. In e.g. sage the full group $O_h$ can be generated as a subgroup of $S(6)$:
Oh=PermutationGroup([(2,3,4,1),(6,4,5,2),(6,3,5,1),(3,1)])
To determine the representatives of the conjugacy classes of $O_h$ and the size of their classes use the series of commands:
Reps=Oh.conjugacy_classes_representatives()
for g in Reps:
    cg=Oh.conjugacy_class(g)
    print(g.domain(),len(cg))
The last command should give you a list of representatives and the size of their conjugacy class; in our case there are 10 conjugacy classes and they contain $1,3,3,6,6,6,6,8,8,1$ elements.
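Since $O_h$ consists precisely of the signed permutation matrices (cf. above), these class sizes can also be cross-checked without sage by brute force; a plain Python sketch, using that the inverse of an orthogonal matrix is its transpose:

```python
from itertools import permutations, product

def matmul(a, b):
    return tuple(tuple(sum(a[i][k]*b[k][j] for k in range(3)) for j in range(3))
                 for i in range(3))

def transpose(a):
    return tuple(tuple(a[j][i] for j in range(3)) for i in range(3))

group = []                                   # O_h as signed permutation matrices
for pi in permutations(range(3)):
    for eps in product((1, -1), repeat=3):
        g = [[0]*3 for _ in range(3)]
        for k in range(3):
            g[pi[k]][k] = eps[k]
        group.append(tuple(map(tuple, g)))

remaining, sizes = set(group), []
while remaining:
    g = next(iter(remaining))
    cls = {matmul(matmul(u, g), transpose(u)) for u in group}   # conjugacy class of g
    sizes.append(len(cls))
    remaining -= cls

assert sorted(sizes) == [1, 1, 3, 3, 6, 6, 6, 6, 8, 8]          # 10 classes, 48 elements
```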

$I_h$: the group of symmetries of the icosahedron

The symmetry group of the icosahedron ($12$ vertices, $30$ edges and $20$ faces) and its dual the dodecahedron consists of $60$ proper rotations: 1. Choose any pair of antipodal vertices and rotate by multiples of $2\pi/5$ - this gives $6\cdot4=24$ rotations. 2. Choose any pair of centers of opposite faces and rotate by $2\pi/3$ and $4\pi/3$ - this gives $10\cdot2=20$ rotations. 3. Choose any pair of midpoints of opposite edges and rotate by $\pi$ - this gives $15\cdot1=15$ rotations. Adding the identity we therefore get $60$ rotations. The group of proper rotations is isomorphic to $A(5)$, the subgroup of $S(5)$ which is made up of all even permutations (cf. e.g. construction and properties of the icosahedron). There are another $60$ improper rotations and the whole group of symmetries of the icosahedron is isomorphic to $A(5)\times\Z_2$.
[Figure: icosahedron]
Put $a=(1+\sqrt5)/2$ (the golden ratio), then the vertices obtained by cyclical permutation of the coordinates of the four vertices $(\pm a,\pm1,0)$ form the vertex set of an icosahedron. Cf. e.g. coordinates. A typical face is given by the vertices: $(a,1,0),(0,a,1),(1,0,a)$
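A short Python check that these twelve points really form an icosahedron with edge length $2$, i.e. that every vertex has exactly five nearest neighbours at distance $2$:

```python
import math

a = (1 + math.sqrt(5))/2                     # the golden ratio
verts = []
for s1 in (1, -1):
    for s2 in (1, -1):
        v = (s1*a, s2*1.0, 0.0)
        for _ in range(3):                   # cyclic permutations of the coordinates
            verts.append(v)
            v = (v[2], v[0], v[1])

assert len(set(verts)) == 12

def dist(p, q):
    return math.sqrt(sum((x - y)**2 for x, y in zip(p, q)))

for v in verts:
    neighbours = [w for w in verts if w != v and abs(dist(v, w) - 2) < 1e-9]
    assert len(neighbours) == 5              # five edges meet in every vertex
```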
Determine all even permutations $A(4)$ in $S(4)$ and the Cayley table of $A(4)$. The subgroup $A(n)$ of $S(n)$ consisting of all even permutations is called the alternating group.
The following sage commands will do it (the last command will generate a $\LaTeX$ code of the Cayley table).
G=AlternatingGroup(4)
n=G.order()
labels=[str(i+1) for i in range(n)]
GT=G.cayley_table(names=labels)
head=GT.row_keys()
for i in range(n):
    string=labels[i]+':= '
    string+=str(head[i].domain())
    print(string)
latex(GT)
$A(n)$ is a normal subgroup of $S(n)$ and a cycle $\s\colon=(j_1,\ldots,j_k)$ is in $A(n)$ iff its length $k$ is odd. Remember, $\s$ is the permutation $j_1\to j_2\to j_3\to\cdots\to j_k\to j_1$.
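The parity criterion for cycles can be checked with a few lines of Python, the sign being computed by counting inversions:

```python
def sign(p):                      # p in one-line notation on {0, ..., len(p)-1}
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return (-1)**inv

def cycle(n, k):                  # the k-cycle (0 1 ... k-1) inside S(n)
    p = list(range(n))
    for j in range(k):
        p[j] = (j + 1) % k
    return tuple(p)

for k in range(2, 9):
    assert (sign(cycle(9, k)) == 1) == (k % 2 == 1)   # a k-cycle is even iff k is odd
```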

Molecular Vibrations

Cf. the students' version in German

Some additional groups

We already mentioned the orthogonal group $\OO(n)$ of all linear isometries (also called orthogonal transformations) of the euclidean space $\R^n$; it can be identified with a subgroup of the group $\Gl(n,\R)$ - the set of all invertible $n\times n$-matrices - more precisely $\OO(n)=\{U\in\Gl(n): U^*U=1\}$. The special orthogonal group $\SO(n)$ is the subgroup of $\OO(n)$ comprising all matrices $U\in\OO(n)$ satisfying $\det U=+1$. The complex analogue of $\OO(n)$ is the unitary group $\UU(n)$: it's the set of all linear isometries (also called unitary transformations) of the complex euclidean space $\C^n$ and can in turn be identified with the set of matrices $U\in\Gl(n,\C)$ such that $U^*U=1$. Remember: if $A=(a_{jk})$ is in $\Ma(n,\C)$, then $A^*$ is given by $(\bar a_{kj})$, i.e. $A^*$ is the complex conjugate of the transpose of $A$!
$\OO(n)$ is a subgroup of $\UU(n)$
Prove that for all $U\in\UU(n)$: $|\det U|=1$.
Prove that for odd $n$ the group $\OO(n)$ is isomorphic to $\Z_2\times\SO(n)$.
$\SU(n)\colon=\{U\in\UU(n):\det U=+1\}$ is called the special unitary group. $\UU(1)$ is the unit circle $S^1\colon=\{z\in\C:|z|=1\}$ (the group operation is multiplication) which is isomorphic to the torus $\TT\colon=\R/2\pi\Z$, i.e. addition in $\R$ $\modul\ 2\pi$. The space of functions $f:\TT\rar\C$ may thus be identified with the space of $2\pi$-periodic functions $f:\R\rar\C$: if $f:\R\rar\C$ is $2\pi$-periodic then there is exactly one function $F:S^1\rar\C$ such that for all $x\in\R$: $F(e^{ix})=f(x)$. $F$ is continuous, smooth, etc. if and only if $f$ is continuous, smooth, etc.
Suppose $E$ is an $n$-dimensional complex euclidean space and $\UU(E)$ the group of all linear isometries $U:E\rar E$. Prove that $\UU(E)$ is isomorphic to $\UU(n)$.
Let $e_1,\ldots,e_n$ be the standard ONB of $\C^n$ and $b_1,\ldots,b_n$ any ONB of $E$. Define an isomorphism $J:\C^n\rar E$ by $Je_j=b_j$, then $U\mapsto JUJ^{-1}$ is an isomorphism of $\UU(n)$ onto $\UU(E)$.

The site representation of a molecule

The symmetry group of a molecule simply permutes atoms of the same type. This gives rise to what is called the site representation of the molecule in some space $\C^n$. It is a particular case of a unitary representation, i.e. a homomorphism $\Psi:G\rar\UU(n)$ into the group of unitary matrices. Of course, if the group of symmetries is generated by $g_1,\ldots,g_n$, then it suffices to determine $\Psi(g_1),\ldots,\Psi(g_n)$. In case $\Psi$ is injective the representation is called faithful. The site representation of a molecule need not be faithful: the reflection $\s_2$ in the symmetry group of $\chem{H_2O}$ (cf. subsection) leaves all three atoms invariant, i.e. $\Psi(\s_2)=1$. If $\Psi$ is a faithful representation, then the subgroup $\Psi(G)$ of $\UU(n)$ is isomorphic to $G$.
$\Psi:G\rar\UU(n)$ is faithful iff $\ker\Psi\colon=\{g\in G:\Psi(g)=1\}$ contains only the neutral element of $G$.
For example, any $g\in T_d$ permutes the hydrogen atoms of methane $\chem{CH_4}$ and leaves the carbon atom invariant, cf. the group $T_d$. Thus we obtain the following representation $\Psi$ of $T_d$ in $\C^4$ - we disregard the central carbon atom: $$ \Psi(C_3)=\left(\begin{array}{cccc} 1&0&0&0\\ 0&0&0&1\\ 0&1&0&0\\ 0&0&1&0 \end{array}\right), \Psi(C_2)=\left(\begin{array}{cccc} 0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0 \end{array}\right), \Psi(S_4)=\left(\begin{array}{cccc} 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0 \end{array}\right), \Psi(\s)=\left(\begin{array}{cccc} 0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{array}\right), $$
Check that the site representation of methane is faithful.
Determine the site representation of benzene $\chem{C_6H_6}$, i.e. the regular hexagon.
In mathematics, groups are the abstract structures that describe transformations without referring to any particular objects, and a representation of a group is an action of the group on a particular set of objects. In the previous example the objects form a model of the methane molecule.

The standard representation, $\det$ and the trivial representation

Since any symmetry group $G$ is a subgroup of $\OO(3)$, the canonical inclusion map $G\rar\OO(3)\rar\UU(3)$ is a unitary representation; it's called the standard representation - it's clearly faithful. Also $g\mapsto\det(g)$ is a representation and it's only 'inequivalent' to the trivial representation $g\mapsto1$ if $G$ contains both proper and improper rotations - both are in general not faithful.

The standard representation of $S(n)$

Assigning each permutation $\pi$ the matrix $P(\pi)=(p_{jk})$, where $p_{\pi(k)k}=1$ and otherwise $p_{jk}=0$, i.e. $P(\pi)e_k=e_{\pi(k)}$, we get a unitary representation $P$ of $S(n)$ in $\C^n$ (or an orthogonal representation of $S(n)$ in $\R^n$). Indeed for all $k$: $P(\s)P(\pi)e_k=P(\s)e_{\pi(k)}=e_{\s(\pi(k))}=e_{\s\pi(k)}=P(\s\pi)e_k$.
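The homomorphism property $P(\s)P(\pi)=P(\s\pi)$ is easy to test exhaustively for small $n$; a plain Python sketch with permutations in one-line notation:

```python
from itertools import permutations

def pmat(p):                          # the permutation matrix: P(p) e_k = e_{p(k)}
    n = len(p)
    m = [[0]*n for _ in range(n)]
    for k in range(n):
        m[p[k]][k] = 1
    return m

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k]*b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def compose(s, p):                    # (s p)(k) = s(p(k))
    return tuple(s[p[k]] for k in range(len(p)))

for s in permutations(range(4)):
    for p in permutations(range(4)):
        assert matmul(pmat(s), pmat(p)) == pmat(compose(s, p))
```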

Vibrations

Let us assume that a molecule comprises $n$ atoms and that the potential energy of the molecule attains a (local) minimum when the atoms are located at positions $s_1,\ldots,s_n$ - that means the molecule is stable. To the $j$-th atom we assign a $3$-dimensional euclidean space $E_j(=\R^3)$ and put $E\colon=E_1\oplus\cdots\oplus E_n(=\R^{3n})$. Let $x_1\in E_1,\ldots,x_n\in E_n$ be small perturbations of the atoms, so that the $j$-th atom is located at $s_j+x_j$; then, putting $$ X\colon=(x_1,\ldots,x_n) =(x_{11},x_{12},x_{13},x_{21},\ldots,x_{n3}) =(X_1,\ldots,X_{3n})\in\R^{3n}, $$ we get by Taylor's Theorem for sufficiently small perturbations $x_j$: $$ V(X)=V(x_1,\ldots,x_n) \sim V_0+\tfrac12\Hess V(0,\ldots,0)(X,X) =V_0+\tfrac12\la X,HX\ra $$ where $V_0=V(0,\ldots,0)$, $H$ is a positive linear operator on $E$ and $\la.,.\ra$ the canonical euclidean product on $E=\R^{3n}$, i.e. $H$ is self-adjoint and for all $X\in E$: $\la HX,X\ra\geq0$. The latter property follows from the fact that $V$ attains a local minimum at $X=(0,\ldots,0)$. Elements $X\in E$ essentially represent the positions of the atoms and are therefore called configurations, making $E$ the configuration space.
Suppose $E$ is real euclidean and $B:E\times E\rar\R$ a bi-linear symmetric mapping, i.e. for all $x,y\in E$ the mappings $u\mapsto B(x,u)$ and $v\mapsto B(v,y)$ are linear and $B(x,y)=B(y,x)$. Then there is exactly one self-adjoint operator $H\in\Hom(E)$ such that for all $x,y\in E$: $B(x,y)=\la Hx,y\ra$.
Compute the operator $H$ for $V(x_1,\ldots,x_n)=-(1+\sum x_j^2)^{-2}$.
We try to sidestep partial derivatives, which are a last resort! Since the Taylor expansion of $(1+x^2)^{-2}$ up to order $2$ is given by $1-2x^2$, we get $V(X)=-1+2\sum X_j^2+\cdots$ and thus $HX=4X$.
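If one distrusts the Taylor shortcut, the Hessian can also be approximated by central finite differences; a plain Python sketch confirming $H=4\cdot1$ (up to discretization error) for $n=3$:

```python
def V(x):
    return -(1 + sum(t*t for t in x))**(-2)

def hessian(f, x, h=1e-4):
    # second partial derivatives by central differences
    n = len(x)
    def shifted(i, si, j, sj):
        y = list(x)
        y[i] += si*h
        y[j] += sj*h
        return y
    return [[(f(shifted(i, 1, j, 1)) - f(shifted(i, 1, j, -1))
            - f(shifted(i, -1, j, 1)) + f(shifted(i, -1, j, -1)))/(4*h*h)
             for j in range(n)] for i in range(n)]

H = hessian(V, [0.0, 0.0, 0.0])
assert all(abs(H[i][j] - (4.0 if i == j else 0.0)) < 1e-4
           for i in range(3) for j in range(3))
```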
Compute the operator $H$ for $V(x_1,\ldots,x_n)=1-\exp(-\sum jx_j^2)$.
Compute the operator $H$ for $V(x_1,x_2,x_3)=\tfrac12\sum_{j < k}\norm{x_k-x_j}^2$.
A motion of the molecule is just a smooth curve $X:\R\rar E$ describing the positions of the atoms over time. For simplicity's sake let us assume that all atoms have the same mass; then the classical equation of motion (cf. example) gives us the following linear differential equation: $X^\dprime(t)=-HX(t)$, which has the unique solution: $$ X(t)=\cos(t\sqrt H)X(0)+\frac{\sin(t\sqrt H)}{\sqrt H}X^\prime(0)~. $$ where $\sqrt H$ is the unique positive linear operator $A$ satisfying $A^2=H$. The operator $\sin(t\sqrt H)/\sqrt H$ is defined by the power series expansion of the analytic function $\sin(tz)/z$, replacing $z$ with $\sqrt H$. In our context the uniqueness is not that significant; it's more important to know how to calculate $\sqrt H$:
If $H=UDU^*$ for some $U\in\OO(n)$ and some diagonal matrix $D=diag\{a_1,\ldots,a_n\}$ with non negative entries $a_j$, then $\sqrt H=U\sqrt D U^*$, where $\sqrt D=diag\{\sqrt{a_1},\ldots,\sqrt{a_n}\}$.
Check by series expansion that for $A\in\Hom(E)$ (or $A\in\Ma(n,\C)$) and $x\in E$ ($x\in\C^n$) the derivative of the curve $c(t)\colon=\cos(tA)x$ is given by $c^\prime(t)=-\sin(tA)Ax$.
The square roots of the eigen-values of $H$ are called the eigen-frequencies of the molecule, cf. wikipedia. An important tool for computing the eigen-spaces of $H$ (or modes, as a physicist would put it) is the group of symmetries $G$ of the molecule: Recall, a symmetry $g\in G$ just permutes the atoms, thus we may think of it as a permutation $j\mapsto\pi(j)$ assigning each displacement vector $x_j$ of the $j$-th atom the displacement vector $g(x_j)$ of the $\pi(j)$-th atom. Now, putting $$ \Prn_k(\Psi(g)X) \colon=\sum_j\d_{k\pi(j)}g(\Prn_j(X)) $$ we get a map $\Psi(g)$ of $E$ and it's pretty obvious that $\Psi(g)\in\OO(E)$ and for all $g,h\in G$: $\Psi(gh)=\Psi(g)\Psi(h)$; to state it differently: $\Psi$ is a homomorphism of the group $G$ into the group $\OO(E)$ of isometries of $E$, i.e. $\Psi$ is an orthogonal representation of $G$ in $E$ and a fortiori a unitary representation in $\C^{3n}$ - we will explicate this in more detail when discussing vibrations of $\chem{H_2O}$ and $\chem{CH_4}$ below. Since $G$ is the group of symmetries of the molecule, we hypothesize that the potential energy doesn't change under $\Psi(g)$, i.e.: $V(\Psi(g)X)=V(X)$. This is of course not a statement that can be proved; it's rather an assumption that can be made plausible. In our context it's equivalent (cf. lemma) to the statement: \begin{equation}\label{mvi1}\tag{MVI1} \forall g\in G:\qquad H\Psi(g)=\Psi(g)H \end{equation} i.e. the operator $H$ commutes with all isometries $\Psi(g)$, $g\in G$.

Approximate stable states

Remarkably, a variety of problems in QM comes down to the same mathematical problem: suppose $n$ electrons, which only interact electrically, are bound by some molecule. What are the "stable states" of the $n$ electrons in the vicinity of positions $s_1,\ldots,s_n\in\R^3$ with minimal electric potential (we assume the atoms are pinned down, thus in this problem $x_1,\ldots,x_n$ are displacements of the electrons, not of the atoms)? Approximating the electric potential by its second order Taylor polynomial we have to solve Schrödinger's eigen-equation on $E=\R^{3n}$: \begin{equation}\label{mvi2}\tag{MVI2} -\D\psi(X)+\tfrac12\la X,HX\ra\psi(X)=(E-V_0)\psi(X), \end{equation} where $V_0\colon=V(0,\ldots,0)$ and $E$ is the energy of the state - mathematically speaking: $\psi$ is an eigen-function and $E-V_0$ the corresponding eigen-value. To solve it, we only need to find the eigen-vectors and eigen-values of $H$; once we've got them, the eigen-functions $\psi$ are well known: Hermite polynomials multiplied by a Gaussian. If $\psi\in L_2(\R^{3n})$ is a normalized solution of \eqref{mvi2}, i.e. $\int|\psi|^2=1$, then the function $X\mapsto|\psi(X)|^2$ is a probability density and $|\psi(x_1,x_2,\ldots,x_n)|^2$ is the probability density for finding one electron in position $s_1+x_1$, another in position $s_2+x_2$ and so on! The stable state with the lowest energy $E$ is called the ground state; mathematically it's the eigen-function with the lowest eigen-value!
Prove that the functions $e^{-ax^2}$ and $xe^{-ax^2}$ are eigen-functions of the operator $-\psi^\dprime(x)+4a^2x^2\psi(x)$. What are the corresponding eigen-values?
This just needs two lines in maxima: enter
diff(-exp(-a*x^2),x,2)+4*a^2*x^2*exp(-a*x^2);
diff(-x*exp(-a*x^2),x,2)+4*a^2*x^3*exp(-a*x^2);
and you will see that these two functions are indeed eigen-functions with eigen-values $2a$ and $6a$, respectively.
Prove that the function $\psi_0(x)\colon=e^{-\sum a_jx_j^2}$ on $\R^n$ is an eigen-function of the operator $-\D\psi(x)+4(\sum a_j^2x_j^2)\psi(x)$. What is the corresponding eigen-value? The function $\psi_0$ is actually the ground state!

Vibrations of $\chem{H_2O}$

We assume that the molecule lies in the $yz$-plane (cf. subsection).
Fixing the central oxygen atom our state space reduces to $\R^6$. Compute the matrices of the above representation (cf. subsection) of $C_{2v}$ in this state space.
Both the $C_2$ rotation and the reflection $\s_1$ interchange the hydrogen atoms and thus the matrix of the site-representation of these symmetries is $$ \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right) $$ To get the matrices of $\Psi(C_2)$ and $\Psi(\s_1)$ we just have to substitute the matrices of $C_2$ and $\s_1$, respectively, for $1$ in the above matrix; this gives us two block matrices: $$ \left(\begin{array}{cc} 0 &C_2\\ C_2& 0 \end{array}\right), \left(\begin{array}{cc} 0&\s_1\\ \s_1&0 \end{array}\right), $$ which are tensor products - cf. section; in unblocked form they are given by $$ \left(\begin{array}{cccccc} 0 & 0&0&-1& 0&0\\ 0 & 0&0& 0&-1&0\\ 0 & 0&0& 0& 0&1\\ -1& 0&0& 0& 0&0\\ 0 &-1&0& 0& 0&0\\ 0 & 0&1& 0& 0&0 \end{array}\right), \left(\begin{array}{cccccc} 0& 0&0&1& 0&0\\ 0& 0&0&0&-1&0\\ 0& 0&0&0& 0&1\\ 1& 0&0&0& 0&0\\ 0&-1&0&0& 0&0\\ 0& 0&1&0& 0&0 \end{array}\right) $$ The reflection $\s_2$ leaves both hydrogen atoms invariant and therefore the matrix of $\Psi(\s_2)$ is given by $$ \left(\begin{array}{cc} \s_2&0\\ 0&\s_2 \end{array}\right) =\left(\begin{array}{cccccc} -1& 0&0& 0& 0&0\\ 0& 1&0& 0& 0&0\\ 0& 0&1& 0& 0&0\\ 0& 0&0&-1& 0&0\\ 0& 0&0& 0& 1&0\\ 0& 0&0& 0& 0&1 \end{array}\right) $$
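These block matrices are just Kronecker (tensor) products of the $2\times2$ site matrices with the $3\times3$ symmetry matrices; a plain Python sketch of the construction:

```python
def kron(a, b):
    # Kronecker product: block (i, j) of the result is a[i][j] * b
    return [[a[i][j]*b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

swap  = [[0, 1], [1, 0]]                 # site matrix: C_2 and sigma_1 swap the two H atoms
ident = [[1, 0], [0, 1]]                 # site matrix: sigma_2 fixes both H atoms
C2 = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]]
s1 = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]
s2 = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]

PsiC2, Psis1, Psis2 = kron(swap, C2), kron(swap, s1), kron(ident, s2)

# spot-check against the unblocked 6x6 matrices above
assert PsiC2[0][3] == -1 and PsiC2[2][5] == 1
assert Psis1[0][3] == 1 and Psis1[1][4] == -1
assert Psis2[0][0] == -1 and Psis2[4][4] == 1
```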

Vibrations of tetrahedral molecules like methane $\chem{CH_4}$

We retain the notation from subsection. Fixing the central $C$-atom and labeling the surrounding $H$ atoms $1,2,3,4$ we see that $g\in\{E,C_3,C_2,S_4,\s\}$ permutes the hydrogen atoms as follows: $$ E:(1234),\quad C_3:(1342),\quad C_2:(3412),\quad S_4:(2341),\quad \s:(2134)~. $$ These give rise to associated permutation matrices in $\OO(4)$ - the site representation of $\chem{CH_4}$ (disregarding the central $C$ atom); we list the matrices of $C_3$, $C_2$, $S_4$ and $\s$, the matrix of $E$ being the identity: $$ \left(\begin{array}{cccc} 1 & 0& 0& 0\\ 0 & 0& 0& 1\\ 0 & 1& 0& 0\\ 0 & 0& 1& 0 \end{array}\right), \left(\begin{array}{cccc} 0 & 0& 1& 0\\ 0 & 0& 0&1\\ 1& 0& 0& 0\\ 0 &1& 0& 0 \end{array}\right), \left(\begin{array}{cccc} 0 & 0& 0&1\\ 1& 0& 0& 0\\ 0 &1& 0& 0\\ 0 & 0&1& 0 \end{array}\right), \left(\begin{array}{cccc} 0 & 1& 0& 0\\ 1 & 0& 0& 0\\ 0 & 0& 1& 0\\ 0 & 0& 0& 1 \end{array}\right)~. $$ Thus we get an orthogonal representation of $T_d$ in $\R^{12}$ and a fortiori a unitary representation of $T_d$ in $\C^{12}$. We write out the matrices of $\Psi(C_3)$, $\Psi(C_2)$, $\Psi(S_4)$ and $\Psi(\s)$ in block form: $$ \left(\begin{array}{cccc} C_3& 0& 0& 0\\ 0 & 0& 0&C_3\\ 0 &C_3& 0& 0\\ 0 & 0&C_3& 0 \end{array}\right), \left(\begin{array}{cccc} 0 & 0&C_2& 0\\ 0 & 0& 0&C_2\\ C_2& 0& 0& 0\\ 0 &C_2& 0& 0 \end{array}\right), \left(\begin{array}{cccc} 0 & 0& 0&S_4\\ S_4& 0& 0& 0\\ 0 &S_4& 0& 0\\ 0 & 0&S_4& 0 \end{array}\right), \left(\begin{array}{cccc} 0 & \s& 0& 0\\ \s & 0& 0& 0\\ 0 & 0& \s& 0\\ 0 & 0& 0& \s \end{array}\right)~. $$ Again, these are just the tensor products of the permutation matrices of the site representation and the symmetries themselves, cf. section.
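A quick consistency check of the listed permutations in Python (a sketch; `perm_matrix` is a hypothetical helper turning the list of images into the corresponding permutation matrix):

```python
import numpy as np

def perm_matrix(images):
    """Permutation matrix sending e_j to e_{images[j-1]}."""
    n = len(images)
    P = np.zeros((n, n), dtype=int)
    for j, i in enumerate(images):
        P[i - 1, j] = 1
    return P

E  = perm_matrix([1, 2, 3, 4])
C3 = perm_matrix([1, 3, 4, 2])
C2 = perm_matrix([3, 4, 1, 2])
S4 = perm_matrix([2, 3, 4, 1])
s  = perm_matrix([2, 1, 3, 4])

# the orders match those of the geometric symmetries
assert np.array_equal(np.linalg.matrix_power(C3, 3), E)
assert np.array_equal(C2 @ C2, E)
assert np.array_equal(np.linalg.matrix_power(S4, 4), E)
assert np.array_equal(s @ s, E)
```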
Compute all eigen-values and eigen-spaces of $\Psi(\s)\in\UU(12)$.
The following series of maxima commands will essentially do it:
s: matrix([0,0,-1],[0,1,0],[-1,0,0]);
n: zeromatrix(3,3);
A: matrix([n,s,n,n],[s,n,n,n],[n,n,s,n],[n,n,n,s]);
a: mat_unblocker(A);
eigenvectors(a);
or in sage:
s=matrix([[0,0,-1],[0,1,0],[-1,0,0]])
a=block_matrix([[0,s,0,0],[s,0,0,0],[0,0,s,0],[0,0,0,s]])
a.eigenvalues()
a.eigenspaces_right()
instead of the last two commands you may use:
a.eigenmatrix_right()
or by means of the NumPy library in sage:
import numpy
E,V=numpy.linalg.eig(a)
print(E);print(V)
In any case: the eigen-values are $-1$ ($5$-fold) and $+1$ ($7$-fold) and the eigen-vectors are the columns of the following matrix: $$ \left(\begin{array}{cccccccccccc} 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \end{array}\right) $$
Compute all eigen-values and eigen-spaces of $\Psi(C_3)\in\UU(12)$.
Determine the block matrices $\Psi(C_6)$ and $\Psi(\s_1)$ of the representation $\Psi:C_{6v}\rar\UU(18)$ for the vibrations of the benzene molecule $\chem{C_6H_6}$. $C_6$ and $\s_1$ denote the two generators of the dihedral group $C_{6v}$, cf. subsection.
Determine the block matrices $\Psi(C_4)$ and $\Psi(S_6)$ of the representation $\Psi:O_h\rar\UU(18)$ for the vibrations of sulfur hexafluoride $\chem{SF_6}$ (cf. wikipedia). $C_4$ and $S_6$ denote two generators of the octahedral group $O_h$, cf. subsection.

Computing eigenvalues exactly in sage

Usually the unitary matrices $U$ we will run across have finite order, i.e. there is some $n\in\N$ such that $U^n=1$. To compute their eigen-values exactly in sage you may consider these matrices as matrices over the cyclotomic field $\Q(e^{2\pi i/N})$ - the field generated by $\Q$ and the complex number $e^{2\pi i/N}$, also known as the cyclotomic field $\Q(N)$ of order $N$, where $N$ is some suitable multiple of $n$:
Compute all eigen-values and eigen-spaces of $U\in\UU(5)$: $$ U=\left(\begin{array}{ccccc} 0&1&0&0&0\\ 0&0&0&0&1\\ 0&0&0&1&0\\ 0&0&1&0&0\\ 1&0&0&0&0 \end{array}\right) $$
The order of the matrix is $6$ and since all the entries of the matrix are certainly elements of $\Q(e^{2\pi i/6})$, we choose the cyclotomic field of order $6$:
F.<zeta>=CyclotomicField(6)
U=matrix(F,[[0,1,0,0,0],[0,0,0,0,1],[0,0,0,1,0],[0,0,1,0,0],[1,0,0,0,0]])
U.eigenspaces_right()
This will give you the eigen-values: $\z-1,-\z,-1,1,1$, where $\z=e^{2\pi i/6}$ and the corresponding eigen-vectors, which we omit.
Compute all eigen-values and eigen-spaces of $U\in\UU(3)$: $$ U=\left(\begin{array}{ccc} 0&i&0\\ 0&0&1\\ -i&0&0 \end{array}\right) $$
The order of the matrix is $3$ but $i$ is not an element of $\Q(e^{2\pi i/3})$, thus we choose the cyclotomic field of order $12$, because this field contains both $e^{2\pi i/3}$ and $\pm i$:
F.<zeta>=CyclotomicField(12)
U=matrix(F,[[0,I,0],[0,0,1],[-I,0,0]])
U.eigenspaces_right()
The eigen-values of $U$ are: $1,\z^2-1,-\z^2$ where $\z=e^{2\pi i/12}$. The eigen-values must be numbers in $\Q(3)$, indeed: $\z^2-1=e^{2\pi i/3}$ and $-\z^2=e^{4\pi i/3}$.
Compute all eigen-values and eigen-spaces of $U\in\UU(5)$: $$ U=\left(\begin{array}{ccccc} 0&0&1&0&0\\ 0&0&0&0&1\\ 0&1&0&0&0\\ 1&0&0&0&0\\ 0&0&0&1&0 \end{array}\right) $$

Hamiltonians

Diagonalizable operators

Let $H$ be a self-adjoint operator
on a real or complex Hilbert-space $E$, i.e. $H^*=H$. The group $G$ of symmetries of $H$ is the set of all orthogonal or unitary operators $U:E\rar E$, such that (cf. \eqref{mvi1}): $$ [H,U]\colon=HU-UH=0~. $$ $[H,U]$ is called the commutator of $H$ and $U$. This has some easy but important consequences: Suppose $E(\l)$ is the eigen-space of $H$ with eigen-value $\l$, then for all $U\in G$ and all $x\in E(\l)$: $HUx=UHx=\l Ux$, i.e. $U(E(\l))=E(\l)$, thus $E(\l)$ is invariant under all symmetries of $H$. Similarly, the assumption $Ux=e^{i\o}x$ implies: $UHx=HUx=e^{i\o}Hx$ and therefore all eigen-spaces of symmetries of $H$ are invariant under $H$. Thus if $E_1=\lhull{x_1}$ happens to be a one-dimensional eigen-space of some symmetry $U$, then $x_1$ has got to be an eigen-vector of $H$. For this to hold the operator $U$ need not be unitary; it suffices that the eigen-space in question is one-dimensional.
Suppose that $B$ and $A$ are commuting linear operators on any vector space $E$. If $a$ is a simple eigen-value of $A$, i.e. $\dim\ker(A-a)=1$, with eigen-vector $x$, then $x$ is also an eigen-vector of $B$.
Remember from linear algebra that a linear operator $A\in\Hom(E)$ on a finite dimensional complex vector-space $E$ is said to be diagonalizable if there exist some basis vectors $b_1,\ldots,b_n$ in $E$ and numbers $\l_1,\ldots,\l_n\in\C$ such that for all $j$: $Ab_j=\l_jb_j$, in other words: $A$ is similar to a diagonal operator. The spaces $\ker(A-\l_j)$ are called the eigen-spaces of $A$, thus $\ker(A-\l)=\{0\}$ iff $\l\notin\{\l_1,\ldots,\l_n\}$ and $A$ is diagonalizable iff $$ \sum\ker(A-\l_j)=E, $$ where the sum is over all pairwise distinct eigen-values; obviously the sum is direct. Let $E$ be a finite dimensional complex vector-space and $A\in\Hom(E)$, $$ \forall z\in\C:\qquad c_A(z)\colon=\det(A-z)=\pm(z-\l_1)^{k_1}\ldots(z-\l_m)^{k_m} $$ its characteristic polynomial with pairwise distinct roots $\l_1,\ldots,\l_m\in\C$ of multiplicities $k_1,\ldots,k_m$. Then a variant of the Cayley-Hamilton Theorem states that $$ E=\ker(A-\l_1)^{k_1}\oplus\cdots\oplus\ker(A-\l_m)^{k_m} \quad\mbox{and}\quad \dim\ker(A-\l_j)^{k_j}=k_j~. $$ This assertion is a bit stronger than the usual statement: $c_A(A)=0$. However, the following lemma shows that both are equivalent:
Suppose $p,q$ are polynomials such that $\gcd(p,q)=1$, i.e. the only common divisors of $p$ and $q$ are the constant polynomials. Then for any $A\in\Hom(E)$: $$ \ker(pq)(A)=\ker p(A)\oplus\ker q(A)~. $$
$\proof$ We obviously have $\ker(pq)(A)\spe\ker p(A)+\ker q(A)$. Since $\gcd(p,q)=1$, there are polynomials $r,s$ satisfying $pr+qs=1$ and thus $p(A)r(A)+q(A)s(A)=1$. It follows that for all $x\in E$: $x=p(A)r(A)x+q(A)s(A)x=\colon x_1+x_2$ and in particular for $x\in\ker(pq)(A)$: $q(A)x_1=0$ and $p(A)x_2=0$, i.e. $\ker(pq)(A)\sbe\ker p(A)+\ker q(A)$. Finally for $x\in\ker p(A)\cap\ker q(A)$ we have: $x=0$, i.e. $\ker (pq)(A)=\ker p(A)\oplus\ker q(A)$. $\eofproof$
Remember from linear algebra that in the complex case the roots $\l_1,\ldots,\l_m$ are the eigen-values, but the dimension $d_j$ of the corresponding eigen-space $\ker(A-\l_j)$ is generally smaller than $k_j$. $d_j$ and $k_j$ are called the geometric and the algebraic multiplicity, respectively.
If for all $j=1,\ldots,m$: $d_j=k_j$, then $A$ is diagonalizable and vice versa.
$A$ is diagonalizable iff $\sum d_j=\dim E=\sum k_j$ and since $d_j\leq k_j$, this is tantamount to: $d_j=k_j$ for all $j$.
Let $E$ be a finite dimensional complex vector-space and $A\in\Hom(E)$. $A$ is diagonalizable iff for all $\l\in\C$: $\ker(A-\l)=\ker(A-\l)^2$.
$\proof$ If $\ker(A-\l)=\ker(A-\l)^2$, then by induction on $k$: $\ker(A-\l)^k=\ker(A-\l)$. Indeed, if $(A-\l)^{k+1}x=0$ then by induction hypothesis: $(A-\l)x\in\ker(A-\l)^k=\ker(A-\l)$, i.e. $x\in\ker(A-\l)^2=\ker(A-\l)$. Now the assertion follows from the Cayley-Hamilton Theorem. Conversely if $A$ is diagonalizable, then $E=\bigoplus_j\ker(A-\l_j)$, where the sum is taken over all pairwise distinct eigenvalues. Thus for all $x\in\ker(A-\l_k)^2$: $x=\sum x_j$, $x_j\in\ker(A-\l_j)$ and $$ 0=\sum_j(A-\l_k)^2x_j =\sum(A-\l_j+\l_j-\l_k)^2x_j =\sum(\l_j-\l_k)^2x_j~. $$ Since the sum is direct, it follows that for all $j\neq k$: $x_j=0$, i.e. $x=x_k\in\ker(A-\l_k)$. $\eofproof$
Let $E$ be a finite dimensional complex vector-space and $A\in\Hom(E)$. Prove that $A$ is diagonalizable iff for all $\l\in\C$: $E=\ker(A-\l)+\im(A-\l)$. Suggested solution
For $\l\in\C$ the Jordan matrix $J(\l)\in\Ma(n,\C)$: $$ J(\l)=\left(\begin{array}{cccccc} \l&1&0&\cdots&0&0\\ 0&\l&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1&0\\ 0&0&0&\cdots&\l&1\\ 0&0&0&\cdots&0&\l \end{array}\right) $$ is not diagonalizable unless $n=1$. Suggested solution
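For the smallest non-trivial case $n=2$ the failure of the criterion from the proposition above can be checked numerically (a sketch, with an arbitrarily chosen $\l=2$):

```python
import numpy as np

lam = 2.0
J = np.array([[lam, 1.0], [0.0, lam]])  # 2x2 Jordan block J(lam)
N = J - lam * np.eye(2)                 # nilpotent part J - lam

# ker(J - lam) is one-dimensional, but (J - lam)^2 = 0, so ker(J - lam)^2
# is all of C^2: the kernels differ and J(lam) is not diagonalizable
assert np.linalg.matrix_rank(N) == 1
assert np.allclose(N @ N, 0)
```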
Two linear operators $A,B\in\Hom(E)$ are said to be similar if there exists an isomorphism $P\in\Hom(E)$ such that $B=PAP^{-1}$. Verify that if $A$ and $B$ are similar, then 1. $A$ is diagonalizable if and only if $B$ is diagonalizable, 2. $x$ is an eigen-vector of $A$ iff $Px$ is an eigen-vector of $B$.
For $a_0,\ldots,a_{n-1}\in\C$ prove that the characteristic polynomial $c_A(z)=\det(A-z)$ of the matrix $$ A=\left(\begin{array}{ccccc} 0&0&\cdots&0&-a_0\\ 1&0&\cdots&0&-a_1\\ 0&1&\cdots&0&-a_2\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&1&-a_{n-1} \end{array}\right)\in\Ma(n,\C) $$ is given by $(-1)^n(z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0)$. $A$ is called the companion matrix (cf. e.g. companion matrix) of the monic polynomial $p(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0$. 2. Check that for all $k=0,\ldots,n-1$: $A^ke_1=e_{k+1}$ and conclude that the minimal polynomial of $A$ is $c_A$. 3. $A$ is diagonalizable iff $c_A$ has only simple roots. Suggested solution.
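For a concrete cubic the claims can be verified numerically (a sketch; the polynomial $(z-1)(z-2)(z-3)$ is our choice, and `numpy.poly` returns the monic characteristic polynomial of a matrix):

```python
import numpy as np

# companion matrix of p(z) = z^3 - 6z^2 + 11z - 6 = (z-1)(z-2)(z-3)
a0, a1, a2 = -6.0, 11.0, -6.0
A = np.array([[0.0, 0.0, -a0],
              [1.0, 0.0, -a1],
              [0.0, 1.0, -a2]])

# the characteristic polynomial of A recovers the coefficients of p
assert np.allclose(np.poly(A), [1.0, a2, a1, a0])
# the roots 1, 2, 3 are simple, so this companion matrix is diagonalizable
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 2.0, 3.0])
```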
For $n\in\N$ put $\o\colon=e^{2\pi i/n}$. Verify that for all $A\in\Ma(N,\C)$ and all $z\in\C$: $$ A^n-z^n=(A-z)(A-\o z)(A-\o^2z)\cdots(A-\o^{n-1}z)~. $$ 2. Find a formula for the characteristic polynomial of $A^n$ in terms of the characteristic polynomial for $A$. 3. Prove that if $A^n=1$, then $A$ is diagonalizable.
Let $E$ be a finite dimensional complex Hilbert-space, i.e. $E$ is a complex euclidean space. Then any normal operator $A\in\Hom(E)$, i.e. $A^*A=AA^*$ (or $[A^*,A]=0$), is diagonalizable.
$\proof$ Since $A-\l$ is normal iff $A$ is normal, it suffices by example to prove $\ker A^2=\ker A$: So assume $A^2x=0$, then $A^2A^*x=A^*A^2x=0$ and therefore by definition of $A^*$ and normality: $$ 0=\la A^2A^*x,Ax\ra =\la AA^*x,A^*Ax\ra =\la A^*Ax,A^*Ax\ra =\norm{A^*Ax}^2~. $$ Hence $A^*Ax=0$, which implies: $0=\la A^*Ax,x\ra=\norm{Ax}^2$, i.e. $x\in\ker A$. $\eofproof$
Let $E$ be a finite dimensional complex Hilbert-space. $A\in\Hom(E)$ is diagonalizable if $A$ is self-adjoint or skew-symmetric (i.e. $A^*=-A$) or unitary.
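For a random self-adjoint matrix the spectral decomposition can be checked numerically (a sketch; `numpy.linalg.eigh` computes the eigen-decomposition of a Hermitian matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                       # self-adjoint, hence normal
assert np.allclose(A @ A.conj().T, A.conj().T @ A)

w, V = np.linalg.eigh(A)                 # spectral decomposition
# the columns of V form an orthonormal basis of eigen-vectors: A = V diag(w) V*
assert np.allclose(V.conj().T @ V, np.eye(4))
assert np.allclose(A, V @ np.diag(w) @ V.conj().T)
```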
Suppose $A\in\Ma(n,\C)$ is such that $A^t=PAP^{-1}$ for some strictly positive definite matrix $P\in\Ma(n,\C)$. Then $A$ is diagonalizable and all eigen-values are real. Hint: $A$ is self-adjoint with respect to the euclidean product $\la x,y\ra\colon=(Px|y)$, where $(.|.)$ denotes the canonical euclidean product.

Weights and weight vectors

Let $E$ be a finite dimensional vector-space, $A\in\Hom(E)$ diagonalizable and $F$ an $A$-invariant subspace. Then $A|F$ is diagonalizable.
$\proof$ Suppose $x\in F$ is in the kernel of $(A-\l)^2$; since $A\in\Hom(E)$ is diagonalizable, it follows that $x\in\ker(A-\l)$, i.e. $\ker(A-\l)^2|F\sbe F\cap\ker(A-\l)=\ker(A-\l)|F$. $\eofproof$
Suppose ${\cal A}$ is a commuting family of linear operators on a finite dimensional vector-space $E$ such that each of them is diagonalizable, then $E$ has a basis $e_1,\ldots,e_n$ such that each basis-vector $e_j$ is an eigen-vector of all operators $A\in{\cal A} $ - we say the family of operators ${\cal A}$ is simultaneously diagonalizable.
$\proof$ Since the space ${\cal H}$ generated by ${\cal A}$ is finite dimensional we may assume that ${\cal A}$ is finite, because if the assertion holds for a basis of ${\cal H}$, then it obviously holds for all $A\in{\cal H}$! Suppose both $A,B\in\Hom(E)$ are diagonalizable and $AB=BA$; for each vector $x$ in an eigen-space $F=\ker(A-\l)$ of $A$ we have: $ABx=BAx=\l Bx$, i.e. $F$ is $B$-invariant and since $B|F$ is diagonalizable by lemma, $A$ and $B$ are simultaneously diagonalizable on all eigen-spaces of $A$. The general case follows by induction on the number of operators in ${\cal A}$. $\eofproof$
Beware, this doesn't mean that a given basis of eigen-vectors of one operator $A\in{\cal A}$ is also a basis of eigen-vectors of another operator $B\in{\cal A}$. Moreover, none of the eigen-vectors of $A$ need to be an eigen-vector of $B$! However, if one operator $A\in{\cal A}$ only has simple eigen-values, then by the previous subsection all of its eigen-vectors are also eigen-vectors of any other $B\in{\cal A}$ - this will be the case in an example we analyze in subsection. The following definition will turn out pivotal in chapter!
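The simple-eigen-value case is easy to check numerically (a sketch; $B$ is a polynomial in $A$, our choice, so $[A,B]=0$, and generically a random symmetric $A$ has simple spectrum):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4))
A = X + X.T                  # self-adjoint with (generically) simple eigen-values
B = A @ A @ A + 2 * A        # a polynomial in A, hence [A, B] = 0
assert np.allclose(A @ B, B @ A)

# since the eigen-values of A are simple, every eigen-vector of A is
# automatically an eigen-vector of B, with eigen-value w^3 + 2w
w, V = np.linalg.eigh(A)
assert np.allclose(V.T @ B @ V, np.diag(w**3 + 2 * w))
```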
Suppose ${\cal A}$ is a family of linear operators on $E$ and let ${\cal H}$ be the vector-space generated by ${\cal A}$. A linear functional $\l:{\cal H}\rar\C$ is called a weight of ${\cal A}$ if there is some vector $x\in E\sm\{0\}$ such that for all $A\in{\cal A}$: $Ax=\l(A)x$. The vector $x$ is called a weight vector (of ${\cal A}$).
If ${\cal A}$ is simultaneously diagonalizable, then proposition can be restated as follows: There is a basis for $E$, which consists of weight vectors of ${\cal A}$. Now let's assume in addition that $E$ is a finite dimensional Hilbert-space, then the space $\Hom(E)$ and hence ${\cal H}$ carry a euclidean product: $(A,B)\mapsto\tr(AB^*)$. Thus every linear functional $\l$ on ${\cal H}$ is of the form $\l(A)=\tr(AL^*)$ for some $L\in{\cal H}$. If ${\cal A}$ is simultaneously diagonalizable, there are $L_1,\ldots,L_n\in{\cal H}$ and a basis $x_1,\ldots,x_n\in E$ such that $$ \forall A\in{\cal H}\,\forall j=1,\ldots,n:\quad Ax_j=\tr(AL_j^*)x_j~. $$
Find $L_1,\ldots,L_n$ in the case ${\cal A}=\{A_1,\ldots,A_m\}$ is simultaneously diagonalizable and $\tr(A_jA_k^*)=\d_{jk}\norm{A_j}_{HS}^2$, i.e. the operators $A_1,\ldots,A_m$ form an orthogonal set in $\Hom(E)$ equipped with the Hilbert-Schmidt norm.
$L_j=\sum_k c_{jk}A_k,$ and thus $$ \l_j(A_l) =\sum_k\bar c_{jk}\tr(A_lA_k^*) =\bar c_{jl}\norm{A_l}_{HS}^2 \quad\mbox{i.e.}\quad c_{jl}=\cl{\l_j(A_l)}/\norm{A_l}_{HS}^2 $$ Therefore the operators $L_j$ are given by $$ L_j=\sum_k\frac{\cl{\l_j(A_k)}}{\norm{A_k}_{HS}^2}A_k~. $$ In this example $\dim{\cal H}=m$ and if $n > m$ then the operators $L_1,\ldots,L_n$ are obviously linearly dependent, i.e. the weights $\l_1,\ldots,\l_n$ are linearly dependent!
Suppose ${\cal A}$ is simultaneously diagonalizable, then we may extend all the weights to homomorphisms $\l_j:\wt{\cal A}\rar\C$ on the algebra $\wt{\cal A}$ generated by ${\cal A}$.
The algebra $\wt{\cal A}$ is the subspace generated by all monomials $A_1^{k_1}\cdots A_m^{k_m}$, $k_j\in\N_0$; it is commutative and any weight vector of ${\cal A}$ is a weight vector of $\wt{\cal A}$. If $x_1,\ldots,x_n$ is a basis of weight vectors for ${\cal A}$, then for all $A,B\in{\cal A}$: $ABx_j=\l_j(B)\l_j(A)x_j$, i.e. $\l_j(AB)=\l_j(A)\l_j(B)$.
If $A\in\Hom(E)$ is normal, then there is an orthonormal basis $e_1,\ldots,e_n$ for $E$ and complex numbers $a_1,\ldots,a_n$ such that $Ae_j=a_je_j$ and $A^*e_j=\bar a_j e_j$. Suggested solution
Suppose $e_1,\ldots,e_n$ is an orthonormal basis for $E$ and $a_1,\ldots,a_n$ are complex numbers, then the diagonal operator $A$ defined by $Ae_j\colon=a_je_j$ is normal.

An example

We are going to discuss a unitary representation of the cyclic group $\Z_n$ based on the following
Let $e_1,\ldots,e_n$ be any orthonormal basis of $\C^n$. By putting $Ue_1\colon=e_n$, $Ue_2=e_1,\ldots,Ue_n\colon=e_{n-1}$ we get a linear isometry on $\C^n$. What are the eigen-values and eigen-vectors of $U$?
Suppose $u_1,\ldots,u_n$ are the components of a vector $u$, then $u$ is an eigen-vector with eigen-value $\l$ iff $u_{j+1}=\l u_j$, thus: $u_j=\l^{j-1}u_1$. Since $u_{n+1}=u_1$ we conclude: $\l^n=1$, i.e. $\l=w^k\colon=\exp(2\pi ik/n)$ for some $k\in\{0,\ldots,n-1\}$ and the normed eigen-vector is $$ b_k\colon=\tfrac1{\sqrt n}(e_1+w^ke_2+\cdots+w^{k(n-1)}e_n)~. $$ The mapping $\Psi(k)\colon=U^k$ defines a unitary representation $\Psi:\Z_n\rar\UU(n)$ and for all $k\in\Z_n$ we have: $$ \forall x\in\C^n:\quad \Psi(k)x=\sum_{j=0}^{n-1} e^{2\pi ijk/n}\la x,b_j\ra b_j~. $$
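The eigen-vectors $b_k$ can be verified numerically for the standard basis of $\C^n$ (a sketch; `np.roll` of the identity realizes the cyclic shift $U$):

```python
import numpy as np

n = 6
U = np.roll(np.eye(n), -1, axis=0)   # U e_1 = e_n, U e_j = e_{j-1} otherwise
w = np.exp(2j * np.pi / n)

for k in range(n):
    b = w ** (k * np.arange(n)) / np.sqrt(n)   # the eigen-vector b_k
    assert np.allclose(U @ b, w**k * b)        # eigen-value w^k
```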
Let $f_j$ be any cyclic sequence in $\C$ with period $n$ and define $H\in\Hom(\C^n)$ by its matrix $H_{jk}\colon=f_{j-k}$ with respect to the orthonormal basis $e_1,\ldots,e_n$. Compute all eigen-values and eigen-vectors of $H$.
Since $$ \la U^{-1}HUe_k,e_j\ra =\la HUe_k,Ue_j\ra =\la He_{k-1},e_{j-1}\ra =H_{j-1,k-1} =f_{j-k} =\la He_k,e_j\ra, $$ the operators $H$ and $U$ commute. Hence the vectors $b_0,\ldots,b_{n-1}$ are all eigen-vectors of $H$ and the corresponding eigen-value can be easily computed - recall that the sequences $j\mapsto w^{kj}$ and $j\mapsto f_j$ are $n$-periodic: \begin{eqnarray*} \l_k =\la Hb_k,b_k\ra &=&\frac1n\sum_{j,l=0}^{n-1} w^{kj}\bar w^{kl}\la He_{j+1},e_{l+1}\ra\\ &=&\frac1n\sum_{j,l=0}^{n-1} w^{k(j-l)}f_{l-j} =\frac1n\sum_{j,l=0}^{n-1} w^{-kl}f_l =\sum_{l=0}^{n-1}f_lw^{-kl} =\sqrt n\wh f_k~. \end{eqnarray*} Thus up to the multiplicative constant $\sqrt n$ the sequence of eigen-values coincides with the (unitary) discrete Fourier transform $\wh f_0,\ldots,\wh f_{n-1}$ of the sequence $f_0,\ldots,f_{n-1}$, cf. e.g. wikipedia. Compute the unitary Fourier transform of a random sequence and its inverse in e.g. sage (cf. NumPy), and check that the inverse is close to the original sequence:
import numpy
n=11
f=numpy.random.random(n)
whf=numpy.fft.fft(f,norm='ortho')
g=numpy.fft.ifft(whf,norm='ortho')
numpy.allclose(f,g)
Again: the crucial property of the operator $H$ can be stated as follows: we have a unitary representation $\Psi:\Z_n\rar\UU(n)$, $\Psi(k)\colon=U^k$ and for all $k\in\Z_n$ $\Psi(k)$ is a symmetry of $H$.
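The identity $\l_k=\sqrt n\,\wh f_k$ can also be checked directly with NumPy (a sketch; note that `numpy.fft.fft` without the `norm` keyword computes the unnormalized sum $\sum_l f_lw^{-kl}$):

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(n)

# circulant matrix H_{jk} = f_{j-k mod n}
j, k = np.indices((n, n))
H = f[(j - k) % n]

w = np.exp(2j * np.pi / n)
lam = np.fft.fft(f)            # lambda_k = sum_l f_l w^{-kl} = sqrt(n) * hat f_k
for m in range(n):
    b = w ** (m * np.arange(n)) / np.sqrt(n)   # the eigen-vector b_m
    assert np.allclose(H @ b, lam[m] * b)
```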
Find all eigen-values of the difference operator $U-1$ and verify that $H$ commutes with $U$ iff $H$ commutes with $U-1$. Suggested solution
Verify that $H$ is normal. Find a necessary and sufficient condition on the sequence $f_j$ such that 1. $H$ is self-adjoint, 2. $H$ is skew-symmetric
Of course, we may view $\Z_n$ as the subgroup $\{z\in S^1:z^n=1\}$ of $n$-th roots of unity of the unit circle $S^1(\sbe\C)$, in which case rotation by $2\pi/n$ in the group corresponds to the operator $U$, which 'rotates' the basis vectors $e_n\to e_{n-1}\to\ldots\to e_2\to e_1\to e_n$.
Prove that the mapping $\vp:\R\rar S^1$, $\vp(t)=e^{it}$ descends to an isomorphism $\wh\vp:\R/2\pi\Z\rar S^1$, i.e. $\vp=\wh\vp\circ q$, where $q:\R\rar\R/2\pi\Z$ denotes the quotient map. The quotient group $\R/2\pi\Z$ is called the (one-dimensional) torus and denoted by $\TT$. Which subgroup of $\TT$ is isomorphic to $\Z_n$?
As $n$ tends to infinity, we get an 'infinite-dimensional' analogue: $\Z_n$ is replaced with the compact group $\TT\simeq S^1$ and it acts on $L_2(S^1)$ by sending each $f\in L_2(S^1)$ to the function $U(z)f(w)\colon=f(zw)$, which is readily seen to be a representation $U$ of $S^1$ in $L_2(S^1)$. If we identify $L_2(S^1)$ with the space of $2\pi$-periodic functions with the euclidean product $$ \la f,g\ra\colon=\frac1{2\pi}\int_{-\pi}^\pi f(y)\cl{g(y)}\,dy $$ and put $z=e^{ix}$, $x\in(-\pi,\pi]$ and $\Psi(x)\colon=U(e^{ix})$, then this representation is given by $\Psi(x)f(y)=f(x+y)$ - addition $\modul\ 2\pi$. Is there an obvious operator, which plays a role analogous to the difference operator in the case of $\Z_n$? A good candidate is the operator: $\lim_{x\to0}(\Psi(x)f-f)/x$, which is simply the differential operator; let us take the physicists version: $P:f\mapsto-if^\prime$.
Let $f$ be a smooth $2\pi$-periodic function, i.e. a smooth function on the torus $\TT$, $A_f$ the convolution operator $$ A_f\psi(x) \colon=\frac1{2\pi}\int_{-\pi}^\pi f(x-y)\psi(y)\,dy =\frac1{2\pi}\int_{-\pi}^\pi\psi(x-y)f(y)\,dy $$ and $P$ the differential operator $P\psi\colon=-i\psi^\prime$ for every smooth $2\pi$-periodic function $\psi$. Prove the following assertions: 1. $A_f$ and $P$ commute. 2. For all $n\in\Z$ $\psi_n(x)=e^{inx}$ is an eigen-function of $P$. 3. $\psi_n$ is an eigen-function of $A_f$ and the corresponding eigen-value is $$ \wh f(n)=\frac1{2\pi}\int_{-\pi}^\pi e^{-iny}f(y)\,dy $$ i.e. the $n$-th Fourier coefficient of $f$. 4. For all $2\pi$-periodic smooth functions $\vp,\psi$: $\la P\psi,\vp\ra=\la\psi,P\vp\ra$. Suggested solution.
It should be noted that $P$ is not defined on the whole space $L_2(\TT)$ but only on a dense subspace, e.g. $C^\infty(\TT)$. In functional analysis it's called an essentially self-adjoint operator, cf. wikipedia. Also the functions $\psi_n$ form a Hilbert-space basis of $L_2(\TT)$ and not a basis of the vector-space $L_2(\TT)$. However, these are only technical issues, distracting from the important property: in all of the above calculations we have a representation $\Psi$ of a group $G$ and operators $H,A_f$, for which all $\Psi(g)$ are symmetries and all eigen-values of $\Psi(g)$ are simple. The difference operator as well as the differential operator are simply "localized" versions of these representations. In case $G$ has some differentiable structure, as is the case with $G=S^1$, this marks the transition from the Lie-group $S^1$ to its Lie-algebra $\R$, cf. chapter!

Observables

The fundamental objects of QM are states; they play the role in QM that configurations, i.e. positions, play in classical mechanics - but states are not literally positions. Mathematically the state space is in general a complex Hilbert-space $E$ (for convenience we will assume $E$ to be finite dimensional) and its normalized vectors are called states. The evolution in time of a state is governed by the time-dependent Schrödinger equation: $$ \hbar\psi^\prime(t)=-iH\psi(t),\quad\psi(0)=x, $$ where $H$ is a self-adjoint linear operator on $E$ - the Hamiltonian of the system. Schrödinger's equation is to QM what Newton's equation is to Classical Mechanics! Unlike Newton's equation, the time dependent Schrödinger equation is pretty easy to solve in principle: putting $\hbar=1$, the solution to this linear differential equation is given by $\psi(t)=U(t)x\colon=\exp(-itH)x$ and $t\mapsto U(t)$ is a homomorphism of $\R$ into the group $\UU(E)$ of unitary operators on $E$ - this is called a one-parameter group, i.e. a representation of $\R$.
Let $c_1,c_2:\R\rar E$ be two smooth curves in the Hilbert-space $E$. Verify that $$ \ftd t\la c_1(t),c_2(t)\ra=\la c_1^\prime(t),c_2(t)\ra+\la c_1(t),c_2^\prime(t)\ra~. $$
Conversely, suppose we are given a one-parameter group $t\mapsto U(t)$ mapping the group $\R$ homomorphically (and smoothly) into the unitary group $\UU(E)$ of some Hilbert-space $E$, then its generator $U^\prime(0)$ is skew-symmetric, for $t\mapsto\la U(t)x,U(t)y\ra$ is constant and its derivative at $t=0$ equals $$ \la U^\prime(0)x,y\ra+\la x,U^\prime(0)y\ra~. $$ Thus there is a self-adjoint operator $H\in\Hom(E)$ such that $U^\prime(0)=-iH$ and since $U^\prime(t)=-iHU(t)$ and $U(0)=1$, it follows by uniqueness of solutions of linear differential equations that $U(t)=\exp(-itH)$. The space of all these generators is a subspace of $\Hom(E)$, closed under the commutator (cf. the following exercise); it's called the Lie-algebra $\uu(E)$ of the Lie-group $\UU(E)$ and it consists of all skew-symmetric operators $A\in\Hom(E)$, i.e. $\uu(E)=\{A\in\Hom(E):A^*=-A\}$. Hence $\uu(E)$ is up to the factor $-i$ just the space of all possible Hamiltonians on the space $E$, cf. chapter 3.
$\uu(E)$ is not closed under composition, i.e. if $A,B\in\uu(E)$, then $AB$ need not be skew-symmetric. Apart from addition the second important algebraic operation on $\uu(E)$ is not composition, it's the commutator $[A,B]\colon=AB-BA$. Prove that if $A,B\in\uu(E)$, then $[A,B]\in\uu(E)$.
Suppose $e_1,e_2$ is an orthonormal basis and $He_1=e_2$, $He_2=e_1$. Prove that for $t\in\R$ the unitary operator $U(t)\colon=\exp(-itH)$ is given by: $U(t)=\cos(t)1_E-i\sin(t)H$.
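A quick numerical sanity check of this formula (a sketch; since $H^2=1$, the exponential series splits into $\cos$ and $\sin$, and we exponentiate via the spectral decomposition):

```python
import numpy as np

H = np.array([[0.0, 1.0], [1.0, 0.0]])       # H e1 = e2, H e2 = e1, so H^2 = 1
t = 0.7                                      # an arbitrary time (our choice)
w, V = np.linalg.eigh(H)                     # spectral decomposition of H
U = V @ np.diag(np.exp(-1j * t * w)) @ V.T   # U(t) = exp(-itH)

assert np.allclose(U, np.cos(t) * np.eye(2) - 1j * np.sin(t) * H)
```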
An observable is merely another self-adjoint operator $A\in\Hom(E)$, i.e. $iA\in\uu(E)$; the mean value $\la A\ra_x$ of this observable in a particular state $x$ and its variance $\variance_x(A)$ in state $x$ are defined to be \begin{eqnarray*} \la A\ra_x&\colon=&\la Ax,x\ra\quad\mbox{and}\\ \variance_x(A)&\colon=&\norm{(A-\la Ax,x\ra)x}^2 =\norm{Ax}^2-\la A\ra_x^2 =\la A^2\ra_x-\la A\ra_x^2~. \end{eqnarray*} The observable $A$ is said to have a definite value in state $x$ if $\variance_x(A)=0$ - i.e. if $x$ is a normalized eigen-vector of $A$, a so called eigen-state or stable state. The definite values of $A$ are the observable values, i.e. the values a measurement may yield! So is an observable just another Hamiltonian? Yes, but usually in physics the observable values of Hamiltonians are energies whereas the observable values of observables can be moments, angular moments, positions and so on. The term observable is used in a broader sense than the term Hamiltonian, anyhow these are more or less sheer conventions! In case $E$ is not finite dimensional an observable on $E$ need not be defined on all of $E$, in most cases it's only defined on a dense subspace. But that's a mathematical issue we do not pursue!
Verify that $\psi_0(x)\colon=(\pi a)^{-1/4}e^{-x^2/2a}$ is a state in the Hilbert-space $L_2(\R)$. Compute the mean values and the variances of the observables $P$ and $X$ given by $P\psi(x)\colon=-i\psi^\prime(x)$ and $X\psi(x)\colon=x\psi(x)$ in the state $\psi_0$. Neither $P$ nor $X$ admit stable states.
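One way to carry out these computations is symbolically with SymPy (a sketch, with the normalizing constant $(\pi a)^{-1/4}$; the variance of $P$ is computed as $\norm{P\psi_0}^2$, which equals $\variance(P)$ since $\la P\ra_{\psi_0}=0$):

```python
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)
psi = (sp.pi * a) ** sp.Rational(-1, 4) * sp.exp(-x**2 / (2 * a))  # the state psi_0

# psi_0 is normalized
assert sp.simplify(sp.integrate(psi**2, (x, -sp.oo, sp.oo)) - 1) == 0

# mean and variance of X: <X> = 0, var(X) = a/2
meanX = sp.integrate(x * psi**2, (x, -sp.oo, sp.oo))
varX = sp.integrate(x**2 * psi**2, (x, -sp.oo, sp.oo)) - meanX**2
assert meanX == 0 and sp.simplify(varX - a / 2) == 0

# mean and variance of P: <P> = 0, var(P) = 1/(2a)
Ppsi = -sp.I * sp.diff(psi, x)
meanP = sp.integrate(Ppsi * psi, (x, -sp.oo, sp.oo))
varP = sp.integrate(sp.expand(Ppsi * sp.conjugate(Ppsi)), (x, -sp.oo, sp.oo))
assert sp.simplify(meanP) == 0 and sp.simplify(varP - 1 / (2 * a)) == 0
```

Note that the product of the two variances is the constant $1/4$, independently of $a$ - the borderline case of the uncertainty relation.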
Suppose $e_1,e_2$ is an orthonormal basis and $He_1=e_2$, $He_2=e_1$. Determine the stable states of $H$ and its definite values - i.e. its eigen-vectors and its eigen-values.
Over time the original state $x$ evolves according to the time dependent Schrödinger equation $\psi^\prime(t)=-iH\psi(t)$, $\psi(0)=x$; at time $t$ we have the state $U(t)x\colon=\exp(-itH)x$ and the mean value of $A$ in this state is $$ \la AU(t)x,U(t)x\ra=\la U(t)^*AU(t)x,x\ra=\la U(t)^{-1}AU(t)x,x\ra, $$ where in the last step we used unitarity of $U(t)$, i.e. $U(t)^*=U(t)^{-1}$. Now assume $[A,H]=0$, then we also have $[A,U(t)]=0$:
If $[A,H]=0$, then for all $n\in\N$: $[A,H^n]=0$ and $[A,U(t)]=0$.
Thus if $[A,H]=0$, then $U(t)^{-1}AU(t)=A$; in other words: as $x$ evolves, the mean value of $A$ remains constant - we therefore have a conservation law. By lemma also the converse holds: if the mean value of $A$ remains constant in the states $U(t)x$, then $[A,U(t)]=0$ and hence $0=\ttdl t0[A,e^{-itH}]=[A,-iH]$, i.e. $[A,H]=0$. Interchanging the roles of $A$ and $H$ we get:
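This conservation law is easy to check numerically (a sketch; $H$ and the commuting observable $A$ are our arbitrary choices, and $U(t)$ is computed via the spectral decomposition of $H$):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 3))
H = X + X.T                    # a Hamiltonian
A = H @ H + np.eye(3)          # an observable commuting with H
assert np.allclose(A @ H, H @ A)

w, V = np.linalg.eigh(H)

def U(t):                      # U(t) = exp(-itH) via the spectral decomposition
    return V @ np.diag(np.exp(-1j * t * w)) @ V.T

x = np.ones(3) / np.sqrt(3)    # some state
# the mean value <A> in the state U(t)x does not depend on t
means = [np.vdot(U(t) @ x, A @ (U(t) @ x)).real for t in (0.0, 0.5, 2.0)]
assert np.allclose(means, means[0])
```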
$[H,A]=0$ iff $V(t)\colon=\exp(-itA)$ is for all $t\in\R$ a symmetry of $H$ and this holds iff for all $x\in E$ the function $t\mapsto\la H\ra_{V(t)x}$ is constant. If in addition $H$ happens to have a definite value $\l$ in state $x$, then $H$ has the same definite value in all states $V(t)x$, $t\in\R$.
$\proof$ The last assertion follows from the fact that if $[H,V(t)]=0$, then $V(t)$ leaves the eigen-spaces of $H$ invariant. $\eofproof$
Verify that the time dependent operator $A(t)\colon=U(t)^*AU(t)$ satisfies Heisenberg's evolution equation: $A^\prime(t)=i[H,A(t)]$ and that for all states $x$: $\la A(t)\ra_x=\la A\ra_{U(t)x}$, cf. Schrödinger vs. Heisenberg picture.
Suppose the observable $A$ has eigen-values $a_1,\ldots,a_n\in\Z$, then $V(t)\colon=\exp(-itA)$ has eigen-values $e^{-ita_1},\ldots,e^{-ita_n}$ and the same eigen-vectors. Putting $z=e^{-it}$ and $\Psi(z)\colon=V(t)$, we get a well-defined homomorphism $\Psi:S^1\rar\UU(E)$, i.e. a unitary representation of $S^1$ in $E$.

The bra-ket notation in QM

In QM the
"bra-ket" notation $\la x|$ and $|y\ra$ is widely used; the notion is just a play on words: it splits up the bracket $\la x,y\ra$, which is written as $\la x|y\ra$, into two constituents $\la x|$ and $|y\ra$, the bra-vector and the ket-vector. Suppose we are given an orthonormal basis of $E$. In linear algebra we would denote it by e.g. $e_1,\ldots,e_n$, but in QM the simplest way would be $|1\ra,\ldots,|n\ra$ (or $\la1|,\ldots\la n|$); if you want to give a more detailed description you may denote them by $|$"description of the first basis vector"$\ra$, $|$"description of the second basis vector"$\ra$ an so on. It should also be noted that the mapping $(x,y)\mapsto\la x|y\ra$ is commonly assumed to be linear in the second component and anti-linear in the first, contrary to our convention! Expansion of any bra-vector $\la y|$ and ket-vector $|x\ra$ respectively reads as: $$ \la y|=\sum\la y|j\ra\la j| \quad\mbox{and}\quad |x \ra=\sum|j\ra\la j|x\ra~. $$ The action of a linear operator $A$ on a ket $|x\ra$ is denoted by $A|x\ra$ and the product with a bra $\la y|$ is written as $\la y|A|x\ra$ (that's our $\la Ax,y\ra$). Moreover, some physicists prefer $A^\dagger$ over $A^*$, i.e. instead of $\la Ax,y\ra=\la x,A^*y\ra$ they write: $\cl{\la y|A|x\ra}=\la x|A^\dagger|y\ra$. Applying $A$ to a ket $|x\ra$ and expanding $|x\ra$ with respect to the orthonormal basis $|j\ra$, we get: $$ A|x\ra=\sum_j A|j\ra\la j|x\ra,\quad \la y|A|x\ra=\sum_j\la y|A|j\ra\la j|x\ra =\sum_{j,k}\la y|k\ra\la k|A|j\ra\la j|x\ra~. $$ Once you get used to this notation it's quite convenient, however we stick to the notations of linear algebra and put up with some inconveniences! If you feel curious about QM, there is an outstandingly simple account on quantum behavior: Feynman Lectures III, 1.

Emmy Noether's Theorem on Symmetries and Conservation Laws

Cf. wikipedia.

Coordinate transforms

An important family of symmetries for operators can be traced back to coordinate transformations: As ambient space let us take (instead of $\R^3$) more generally an open subset $M$ of some $\R^n$. Symmetries of operators may emerge from coordinate transformations as follows: Suppose $H:C^\infty(M)\rar C^\infty(M)$ is a linear operator on $M$ (in abuse of notation) and $F$ a smooth bijection (i.e. a diffeomorphism or a coordinate transformation) $F:M\rar M$, such that for all $f\in C^\infty(M)$: $H(f\circ F^{-1})=(Hf)\circ F^{-1}$, i.e. $H$ commutes with the linear operator $\Psi(F):f\mapsto f\circ F^{-1}$ or, equivalently: $\Psi(F):f\mapsto f\circ F^{-1}$ is a symmetry of $H$. There is a good reason for defining $\Psi(F)(f)$ by $f\circ F^{-1}$ and not by $f\circ F$: the former is a representation whereas the latter is in general not, for $\Psi(F\circ G)f=f\circ(F\circ G)^{-1}=(f\circ G^{-1})\circ F^{-1}=\Psi(F)\Psi(G)f$, whereas $f\mapsto f\circ F$ reverses the order of composition.
1. The operators $\Psi(g)$, $g\in\OO(n)$ are symmetries of the Laplace operator on $M=\R^n$. 2. If $\Psi(F)$ is for some diffeomorphism $F:\R^n\rar\R^n$ a symmetry of the Laplacian, then $F$ is an isometry of the euclidean space $\R^n$.
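The first assertion can be checked symbolically in the plane (a sketch with SymPy; the test function $f$ and the rotation angle are our choices):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
f = x**3 * y + y**2            # an arbitrary test function (our choice)

# the components of g^{-1}(x, y), g the rotation by the angle t
xr = sp.cos(t) * x + sp.sin(t) * y
yr = -sp.sin(t) * x + sp.cos(t) * y

lap = lambda u: sp.diff(u, x, 2) + sp.diff(u, y, 2)
lhs = lap(f.subs({x: xr, y: yr}, simultaneous=True))   # Laplacian of f o g^{-1}
rhs = lap(f).subs({x: xr, y: yr}, simultaneous=True)   # (Laplacian of f) o g^{-1}
assert sp.simplify(lhs - rhs) == 0
```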
Let's just indicate the importance of this result. Take e.g. Poisson's equation $\D U=4\pi G\r$, which in classical physics describes the gravitational potential $U$ of a mass distribution $\r$, cf. e.g. wikipedia; below we absorb the constant $4\pi G$ into $\r$. To compute the Laplacian $\D$ a physicist has to select an orthonormal basis $e_1,\ldots$ of the euclidean space and he's got to express both $U$ and $\r$ as functions of the coordinates $x_1,\ldots$ of a point $x$ with respect to the chosen basis, i.e. $x=\sum x_je_j$ and $U(x)=U_1(x_1,\ldots)$, $\r(x)=\r_1(x_1,\ldots)$. Another physicist may choose at discretion another orthonormal basis $b_1,\ldots$ and she will also express both $U$ and $\r$ as functions of the coordinates $y_1,\ldots$ of the point $x$ with respect to the chosen basis, i.e. $x=\sum y_jb_j$ and $U(x)=U_2(y_1,\ldots)$, $\r(x)=\r_2(y_1,\ldots)$. Obviously: $\r_1(x_1,\ldots)=\r_2(y_1,\ldots)$ and $U_1(x_1,\ldots)=U_2(y_1,\ldots)$. Now, let $F$ be the mapping $(x_1,\ldots)\mapsto(y_1,\ldots)$, then $U_1=U_2\circ F$ and $\r_1=\r_2\circ F$. Poisson's equation for the first and the second physicist comes down to $$ \sum\pa_{x_j}^2U_1=\r_1 \quad\mbox{and}\quad \sum\pa_{y_j}^2U_2=\r_2 $$ respectively. Now if Poisson's equation isn't just mathematical poppycock, we must have: $\sum\pa_{y_j}^2U_2(y_1,\ldots)=\sum\pa_{x_j}^2U_1(x_1,\ldots)$, for the left hand side equals $\r_2(y_1,\ldots)$ and the right hand side equals $\r_1(x_1,\ldots)$ and both must coincide; equivalently: $\D U_2(F(x_1,\ldots))=\D(U_2\circ F)(x_1,\ldots)$, i.e. $\Psi(F^{-1})$ must be a symmetry of the Laplacian! Sometimes this result shows up in disguise: the Laplacian is computed the same way, no matter what orthonormal basis you take! Now it should be evident that all of this originates in the geometric structure of the ambient space.
Just like $\SO(n)$ is a subgroup of the linear isometries $\OO(n)$ on $\R^n$ the Lorentz transformations are a subgroup of the group of linear isometries $\OO(n+1,1)$ on Minkowski space $\R_1^{n+1}$: This space is equipped with the following inner product (beware, this product is not euclidean: it doesn't satisfy $\la x,x\ra>0$ for $x\neq0$, but instead it is non-degenerate: for all $x\neq0$ there is some $y\in\R^{n+1}$ such that $\la x,y\ra\neq0$): $$ \la x,y\ra\colon=-x_0y_0+x_1y_1+\cdots+x_ny_n~. $$ It's called the canonical Lorentz-product on the vector-space $\R^{n+1}$ and $\R_1^{n+1}$ is just shorthand for the vector-space $\R^{n+1}$ with this inner product. $L\in\Hom(\R^{n+1})$ is a linear isometry of $\R_1^{n+1}$ iff for all $x,y\in\R_1^{n+1}$: $\la Lx,Ly\ra=\la x,y\ra$; if in addition $\det L=+1$ ($L$ is orientation preserving) and $\la Le_0,e_0\ra < 0$ ($L$ is not time reversing), then $L$ is said to be a Lorentz transformation.
$L\in\Hom(\R^2)$ is a Lorentz transformation if and only if there is some $t\in\R$ such that the matrix of $L$ with respect to the canonical basis $e_0,e_1$ is given by: $$ \left(\begin{array}{cc} \cosh t&\sinh t\\ \sinh t&\cosh t \end{array}\right)~. $$
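A quick numerical check (an added sketch, not part of the text; the rapidity $t$ and the test vectors are arbitrary choices) that such a matrix preserves the Lorentz product on $\R_1^2$; this boils down to $\cosh^2t-\sinh^2t=1$:

```python
import math

def lorentz(x, y):
    # the canonical Lorentz product on R^2_1
    return -x[0] * y[0] + x[1] * y[1]

t = 1.3  # an arbitrary parameter ("rapidity")
L = [[math.cosh(t), math.sinh(t)],
     [math.sinh(t), math.cosh(t)]]

def apply(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

x, y = [2.0, -1.0], [0.5, 3.0]
print(lorentz(apply(L, x), apply(L, y)), lorentz(x, y))  # equal up to rounding
# moreover det L = cosh(t)^2 - sinh(t)^2 = 1 and <L e_0, e_0> = -cosh(t) < 0
```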
If $F:\R^{n+1}\rar\R^{n+1}$ is a diffeomorphism, then $\Psi(F)$ is a symmetry of the wave operator if and only if $F$ is an isometry of Minkowski space $\R_1^{n+1}$.
So, what the Laplacian is to euclidean geometry the wave operator is to Lorentz geometry!
Let $D_v$ be the directional derivative $D_v=\sum v_j\pa_j$ on $\R^n$. What condition must a linear transformation $A\in\Hom(\R^n)$ meet for $\Psi(A^{-1})$ to be a symmetry of $D_v$? Determine a necessary and sufficient condition for a diffeomorphism $F:\R^n\rar\R^n$ so that $\Psi(F^{-1})$ is a symmetry of $D_v$.
By the chain rule we have $$ D_v(f\circ A)(x) =Df(Ax)\,D_vA(x) =Df(Ax)Av =D_{Av}f(Ax), $$ therefore $\Psi(A^{-1})$ is a symmetry of $D_v$ iff $Av=v$. Similarly, if $F$ is a diffeomorphism, then by the chain rule $D_v(f\circ F)(x)=D_{D_vF(x)}f(F(x))$ and thus $\Psi(F^{-1})$ is a symmetry iff for all $x\in\R^n$: $DF(x)v=v$.

A glance at Emmy Noether's theorem: equations of motion

In large part theoretical physics involves critical points of something called a Lagrangian
. In the most elementary case the Lagrangian is a smooth function ${\cal L}$ on the cartesian product of $\R$, an open subset $M$ of $\R^n$ and the vector-space $\R^n$ and the equation of motion is just a critical point (here a smooth curve $\g:t\mapsto\g(t)\in M$, $t\in[0,T]$) of the action functional $$ W(\g)\colon=\int_0^T{\cal L}(t,\g(t),\g^\prime(t))\,dt, $$ i.e. if $h:(-\e,\e)\times[0,T]\rar M$ is smooth, such that $h(0,t)=\g(t)$, $h(s,0)=\g(0)$ and $h(s,T)=\g(T)$, then: $\ttdl s0W(h_s)=0$ - $h$ is called a variation of $\g$ fixing endpoints (cf. Least Action Principle). It is well known - actually it's an application of integration by parts and a simple lemma - that $\g$ is critical for the action functional if and only if it satisfies the Euler-Lagrange equations: \begin{equation}\label{enteq1}\tag{ENT1} \forall j=1,\ldots,n:\quad \ftd t\Big(\pa_{v_j}{\cal L}(t,\g(t),\g^\prime(t))\Big) =\pa_{x_j}{\cal L}(t,\g(t),\g^\prime(t)), \end{equation} where the left hand side is the derivative of the function $t\mapsto\pa_{v_j}{\cal L}(t,\g(t),\g^\prime(t))$, which is often simply written in ambiguous form: $\ttd t\pa_{v_j}{\cal L}$, not to mention physicists' notation (cf. wikipedia)! If ${\cal L}$ doesn't depend on $t$ explicitly, then ${\cal L}$ is invariant under the group of transformations $(t,x,v)\mapsto(t+s,x,v)$, $s\in\R$ and in this case Noether's Theorem implies energy conservation, i.e. the (generalized) energy function (called the Hamiltonian, cf. wikipedia) $$ {\cal H}(x,v)\colon=-{\cal L}(x,v)+\sum_{j=1}^n\pa_{v_j}{\cal L}(x,v)v_j $$ is a first integral of ${\cal L}$, meaning that for every critical curve $\g$ the function $t\mapsto{\cal H}(\g(t),\g^\prime(t))$ is constant.
Indeed, if $\pa_t{\cal L}=0$, then we have by the chain rule, the Euler-Lagrange equations \eqref{enteq1} and the product rule: $$ \begin{eqnarray*} 0=\pa_t{\cal L}&=&\ftd t{\cal L}(\g,\g^\prime) -\sum_j\pa_{x_j}{\cal L}(\g,\g^\prime)\g_j^\prime -\sum_j\pa_{v_j}{\cal L}(\g,\g^\prime)\g_j^\dprime\\ &=&\ftd t{\cal L}(\g,\g^\prime) -\sum_j\Big(\ftd t\Big(\pa_{v_j}{\cal L}(\g,\g^\prime)\Big)\g_j^\prime +\pa_{v_j}{\cal L}(\g,\g^\prime)\g_j^\dprime\Big)\\ &=&\ftd t\Big({\cal L}(\g,\g^\prime) -\sum_j\pa_{v_j}{\cal L}(\g,\g^\prime)\g_j^\prime\Big) =-\ftd t{\cal H}(\g,\g^\prime)~. \end{eqnarray*} $$
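Energy conservation can also be watched numerically. The sketch below (an added illustration, not part of the text; the potential $V(x)=x^4/4$, the initial data and the step size are arbitrary choices) integrates the Euler-Lagrange equation $\g^\dprime=-V^\prime(\g)$ for ${\cal L}(x,v)=\tfrac12v^2-V(x)$ with a velocity Verlet scheme and monitors the Hamiltonian ${\cal H}(x,v)=\tfrac12v^2+V(x)$:

```python
def V(x):   # an arbitrary potential for this sketch
    return x ** 4 / 4

def Vp(x):  # V'(x)
    return x ** 3

def hamiltonian(x, v):
    # H = -L + v * dL/dv = v^2/2 + V(x) for L = v^2/2 - V(x)
    return 0.5 * v * v + V(x)

# integrate gamma'' = -V'(gamma) by velocity Verlet
x, v, dt = 1.0, 0.0, 1e-3
E0 = hamiltonian(x, v)
a = -Vp(x)
for _ in range(20000):
    x += v * dt + 0.5 * a * dt * dt
    a_new = -Vp(x)
    v += 0.5 * (a + a_new) * dt
    a = a_new
print(abs(hamiltonian(x, v) - E0))  # stays small: the energy is a first integral
```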
Find the Euler-Lagrange equations \eqref{enteq1} for ${\cal L}(x,v)=\tfrac12\Vert v\Vert^2-V(x)$, $x,v\in\R^n$, and compute the Hamiltonian.
Find the Euler-Lagrange equations \eqref{enteq1} for ${\cal L}:\R^+\times(-\pi,\pi)\times\R\times\R\rar\R$ given by $(r,\theta,v_r,v_\theta)\mapsto\tfrac12(v_r^2+r^2v_\theta^2)-V(r)$ and compute the Hamiltonian. 2. Prove that the function $L(r,\theta,v_r,v_\theta)\colon=r^2v_\theta$ is another first integral of ${\cal L}$ - $L$ is called the angular momentum.
We have $\pa_r{\cal L}=rv_\theta^2-V^\prime(r)$, $\pa_{v_r}{\cal L}=v_r$ and $\pa_\theta{\cal L}=0$, $\pa_{v_\theta}{\cal L}=r^2v_\theta$. Thus a curve $\g:t\mapsto(r(t),\theta(t))$ is critical iff $$ r^\dprime=r\theta^{\prime2}-V^\prime(r), \quad\mbox{and}\quad \ftd t(r^2\theta^\prime)=0~. $$ The second equation shows that the angular momentum is a first integral of ${\cal L}$. Finally the Hamiltonian is given by $$ {\cal H} =-\tfrac12(v_r^2+r^2v_\theta^2)+V(r)+(v_r^2+r^2v_\theta^2) =\tfrac12(v_r^2+r^2v_\theta^2)+V(r)~. $$ Having two first integrals the equations can be integrated: ${\cal H}(\g,\g^\prime)=E$ and $L(\g,\g^\prime)=L_0$ imply: $$ r^{\prime2}=2E-2V(r)-L_0^2/r^2~. $$ Notice that we didn't use the equation $r^\dprime=r\theta^{\prime2}-V^\prime(r)$ at all!
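Conservation of the angular momentum can be observed numerically in Cartesian coordinates, where $L=x_1v_2-x_2v_1$. The sketch below (an added illustration, not part of the text; the Kepler potential $V(r)=-1/r$, the initial data and the step size are arbitrary choices) integrates the equation of motion with velocity Verlet:

```python
# central potential V(r) = -1/r; motion in the plane, acceleration -x/r^3
x1, x2 = 1.0, 0.0
v1, v2 = 0.0, 1.2   # a bound orbit for these initial data
dt = 1e-3

def accel(x1, x2):
    r3 = (x1 * x1 + x2 * x2) ** 1.5
    return -x1 / r3, -x2 / r3

L0 = x1 * v2 - x2 * v1  # angular momentum x x v = r^2 theta'
a1, a2 = accel(x1, x2)
for _ in range(20000):
    x1 += v1 * dt + 0.5 * a1 * dt * dt
    x2 += v2 * dt + 0.5 * a2 * dt * dt
    b1, b2 = accel(x1, x2)
    v1 += 0.5 * (a1 + b1) * dt
    v2 += 0.5 * (a2 + b2) * dt
    a1, a2 = b1, b2
print(abs(x1 * v2 - x2 * v1 - L0))  # tiny: angular momentum is conserved
```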
Suppose $H$ is a positive linear operator on Euclidean space $\R^n$. Verify that the Euler-Lagrange equations \eqref{enteq1} for ${\cal L}(x,v)=\tfrac12(\Vert v\Vert^2-\la Hx,x\ra)$ come down to $\g^\dprime(t)=-H\g(t)$ and the solution is given by: $$ \g(t)=\cos(t\sqrt H)\g(0)+\frac{\sin(t\sqrt H)}{\sqrt H}\g^\prime(0)~. $$
That was sort of 'time'-symmetry, next we are going to discuss a 'space'-symmetry version of Noether's Theorem. To start with, we need the following
Let ${\cal L}$ be a Lagrangian on $M\times\R^n$ and $F:U\rar M$ a diffeomorphism. Define $F^*{\cal L}:U\times\R^n\rar\R$ by $F^*{\cal L}(x,v)\colon={\cal L}(F(x),DF(x)v)$. Then $\g$ is critical for $F^*{\cal L}$ iff $F\circ\g$ is critical for ${\cal L}$. $F^*{\cal L}$ is called the pull-back of the Lagrangian ${\cal L}$, because the function ${\cal L}$ on $M$ is pulled back via $F:U\rar M$ to $U$.
$\proof$ Let $h_s$ be a variation of a smooth curve $\g$ in $U$, then $F\circ h_s$ is a variation of $F\circ\g$ and by the chain rule we have: \begin{eqnarray*} W(F\circ h_s) &=&\int_0^T{\cal L}(t,F(h_s(t)),DF(h_s(t))h_s^\prime(t))\,dt\\ &=&\int_0^T F^*{\cal L}(t,h_s(t),h_s^\prime(t))\,dt =W^*(h_s)~. \end{eqnarray*} $\eofproof$
Suppose that there is a vector field
, i.e. a map $\z:M\rar\R^n$ cf. wikipedia, such that its flow $\theta_s:M\rar M$ is a symmetry of the Lagrangian, i.e. for all $s\in\R$: $\theta_s^*{\cal L}={\cal L}$, then the following quantity is a first integral of ${\cal L}$: $$ F(x,v) \colon=\sum_{j=1}^n\pa_{v_j}{\cal L}(x,v)\z_j(x) =d{\cal L}_x(v)(\z(x))~. $$ where $d{\cal L}_x$ denotes the differential of the function $v\mapsto{\cal L}(x,v)$. Remember that the flow of a vector field $\z:M\rar\R^n$ is a mapping $\theta:(t,x)\mapsto\theta(t,x)$, such that for all $x\in M$: $\theta(0,x)=x$ and the curve $c:t\mapsto\theta(t,x)$ satisfies the ordinary differential equation $c^\prime(t)=\z(c(t))$.

Indeed, since ${\cal L}$ is $\theta_s$-invariant, we conclude from the lemma that for all $s\in\R$ the curve $\vp(s,t)\colon=\theta_s(\g(t))$ is critical and hence it must be a solution to the Euler-Lagrange equations \eqref{enteq1}: $$ \forall j:\quad \pa_t\Big(\pa_{v_j}{\cal L}(\vp,\pa_t\vp)\Big)=\pa_{x_j}{\cal L}(\vp,\pa_t\vp)~. $$ By invariance we also have: ${\cal L}(\vp(s,t),\pa_t\vp(s,t))={\cal L}(\g(t),\g^\prime(t))$; it follows that: $$ 0=\pa_s{\cal L}(\vp,\pa_t\vp) =\sum_j\pa_{x_j}{\cal L}(\vp,\pa_t\vp)\pa_s\vp_j +\sum_j\pa_{v_j}{\cal L}(\vp,\pa_t\vp)\pa_s\pa_t\vp_j~. $$ Since $\pa_s\pa_t=\pa_t\pa_s$, we infer from the above equations: $$ 0=\sum_j\pa_t\Big(\pa_{v_j}{\cal L}(\vp,\pa_t\vp)\Big)\pa_s\vp_j +\sum_j\pa_{v_j}{\cal L}(\vp,\pa_t\vp)\pa_t\pa_s\vp_j =\pa_t\Big(\sum_j\pa_{v_j}{\cal L}(\vp,\pa_t\vp)\pa_s\vp_j\Big), $$ i.e. $t\mapsto\sum_j\pa_{v_j}{\cal L}(\vp,\pa_t\vp)\pa_s\vp_j$ is constant. Finally, $\pa_s\vp(0,t)=\z(\g(t))$.

If ${\cal L}$ doesn't depend on the first coordinate $x_1$ explicitly, then $\pa_{v_1}{\cal L}$ is a first integral of ${\cal L}$: this is called (generalized) momentum conservation in the first component.
Since the flow is given by $\theta(s,x)=x+se_1$, we get the vector field $\z(x)=\ttdl s0\theta(s,x)=e_1$ and therefore $F(x,v)=\pa_{v_1}{\cal L}(x,v)$ is a first integral of ${\cal L}$. Of course, that's a very simple example, which is also an immediate consequence of the Euler-Lagrange equations \eqref{enteq1}.
Suppose $b\in\R^3$ is a unit vector, $A$ the linear, skew symmetric operator $Ax\colon=b\times x$ and $\theta_t(x)\colon=\exp(tA)x$, i.e. $\theta_t$ is the rotation about the axis $b$ with angle $t$. Suppose $\theta_t$ is a symmetry of the Lagrangian ${\cal L}:\R^3\times\R^3\rar\R$, then $L(x,v)\colon=d{\cal L}_x(v)(b\times x)$ is a first integral of ${\cal L}$ - $d{\cal L}_x$ denotes the differential of the function $v\mapsto{\cal L}(x,v)$.
The vector field $\z$ is simply given by $\z(x)=Ax$ and its flow is $\theta$, thus $$ d{\cal L}_x(v)(\z(x)) =d{\cal L}_x(v)(b\times x)~. $$ In particular, if $V:\R_0^+\rar\R$ is any smooth function, then the Lagrangian ${\cal L}(x,v)\colon=\tfrac12\Vert v\Vert^2-V(\Vert x\Vert)$ is invariant under all rotations; hence for all unit vectors $b$ we have a first integral of ${\cal L}$: $$ L(x,v) =\la v,b\times x\ra =\la b,x\times v\ra~. $$ Since this holds for all unit vectors $b$, the vector valued function $(x,v)\mapsto x\times v$, which is called the angular momentum, is a first integral of ${\cal L}$.

A glance at Emmy Noether's theorem: field equations

Obviously, there is a similar formulation for 'fields' cf.
wikipedia; again we stick to the simplest case of a function: Let $M$ be a bounded domain in $\R^n$ and ${\cal L}:\R\times M\times\R^n\rar\R$, $(u,x,v)\mapsto{\cal L}(u,x,v)$ a so called Lagrange density. A smooth function $\psi:M\rar\R$ is said to be critical for the action functional $$ W(\psi)\colon=\int_M{\cal L}(\psi(x),x,d\psi(x))\,dx $$ if for each variation $h(s,x)$ of $\psi$ with fixed boundary, i.e. $h(0,x)=\psi(x)$ and for all $x\in\pa M$: $h(s,x)=\psi(x)$: $\ttdl s0W(h_s)=0$. A simple application of the integration by parts formula gives the Euler-Lagrange field equation: \begin{equation}\label{enteq2}\tag{ENT2} \sum_{j=1}^n\pa_{x_j}\Big(\pa_{v_j}{\cal L}(\psi,.,d\psi)\Big) =\pa_u{\cal L}(\psi,.,d\psi)~. \end{equation}
Show that the Euler-Lagrange field equation \eqref{enteq2} for the Lagrange density ${\cal L}(u,x,v)=\tfrac12\Vert v\Vert^2+\r(x)u$ is Poisson's equation $\sum\pa_{x_j}^2\psi=\r$. Solution by T. Speckhofer.
Physicists often use complex fields, in particular in QM, so the Lagrange density is defined on $\C\times M\times\C^n$. Since ${\cal L}$ is still real valued, the corresponding Euler-Lagrange field equation \eqref{enteq2} doesn't change but the derivatives $\pa_u$ and $\pa_{v_j}$ are to be interpreted as Wirtinger operators and not as ordinary complex derivatives, i.e. the (real) differential of a smooth (in the real sense) complex or real valued function $f$ defined on an open subset $\O$ of $\C^n=\R^{2n}$ at a point $z\in \O$ is the $\R$-linear map $df(z):\C^n\rar\C$ given by $$ \begin{eqnarray*} \forall z\in \O:\quad df(z)&=&\sum_k\pa_{z_k}f(z)\,dz_k+\sum_k\pa_{\bar z_k}f(z)\,d\bar z_k,\quad\mbox{i.e.}\\ \forall z\in \O,\forall w\in\C^n:\quad df(z)w&=&\sum_k\pa_{z_k}f(z)\,w_k+\sum_k\pa_{\bar z_k}f(z)\,\bar w_k \end{eqnarray*} $$ where as usual $dz_k$ and $d\bar z_k$ are the differentials of the mappings $z\mapsto z_k$ and $z\mapsto\bar z_k$, respectively, and $\pa_{z_k}$ and $\pa_{\bar z_k}$ denote the Wirtinger derivatives. Thus for all $w=(w_1,\ldots,w_n)\in\C^n$: $dz_k(w)=w_k$ and $d\bar z_k(w)=\bar w_k$ - we write $dz_k(w)$ instead of $dz_k(z)w$ because $dz_k$ does not depend on $z$. Observe that $dz_k$ is $\C$-linear, i.e. $dz_k(\l w)=\l w_k=\l\,dz_k(w)$ for all $\l\in\C$ and all $w\in\C^n$, whereas $d\bar z_k$ is NOT $\C$-linear, for $d\bar z_k(\l w)=\bar\l\bar w_k=\bar\l\,d\bar z_k(w)$. $f$ is said to be complex differentiable or holomorphic if for all $k$: $\pa_{\bar z_k}f=0$ or equivalently: $f$ is holomorphic iff $df(z)$ is $\C$-linear for all $z\in \O$. How to compute the Wirtinger derivatives? Usually $f$ is given as a function of the real variables $x_k$ and $y_k$: $z_k=x_k+iy_k$; then the Wirtinger derivatives are given by $$ \pa_{z_k}f=\tfrac12(\pa_{x_k}f-i\pa_{y_k}f),\quad \pa_{\bar z_k}f=\tfrac12(\pa_{x_k}f+i\pa_{y_k}f)~. $$
Compute $\pa_zf$ and $\pa_{\bar z}f$ for $f(x+iy)=e^x(\cos(y)+i\sin(y))$.
We have $\pa_xf(z)=e^x(\cos y+i\sin y)$ and $\pa_yf(z)=e^x(-\sin y+i\cos y)$ and thus: $$ \begin{eqnarray*} \pa_zf(z)&=&\tfrac12\big(e^x(\cos y+i\sin y)-ie^x(-\sin y+i\cos y)\big) =e^x(\cos y+i\sin y)\\ \pa_{\bar z}f(z)&=&\tfrac12\big(e^x(\cos y+i\sin y)+ie^x(-\sin y+i\cos y)\big) =0 \end{eqnarray*} $$
Compute $\pa_zf$ and $\pa_{\bar z}f$ for $f(x+iy)=\sin(2(x+y))$.
We have $\pa_xf(z)=2\cos(2(x+y))$ and $\pa_yf(z)=2\cos(2(x+y))$ and thus: $$ \pa_zf(z)=(1-i)\cos(2(x+y)) \quad\mbox{and}\quad \pa_{\bar z}f(z)=(1+i)\cos(2(x+y))~. $$
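The formulas $\pa_{z}f=\tfrac12(\pa_xf-i\pa_yf)$ and $\pa_{\bar z}f=\tfrac12(\pa_xf+i\pa_yf)$ lend themselves to a numerical check. The sketch below (an added illustration, not part of the text; the sample point and step size are arbitrary choices) approximates the Wirtinger derivatives of the exercise's $f(x+iy)=\sin(2(x+y))$ by central differences:

```python
import math, cmath

def f(z):
    # f(x + iy) = sin(2(x + y)) as in the exercise
    return cmath.sin(2 * (z.real + z.imag))

def wirtinger(f, z, h=1e-6):
    fx = (f(z + h) - f(z - h)) / (2 * h)            # partial wrt x
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # partial wrt y
    return 0.5 * (fx - 1j * fy), 0.5 * (fx + 1j * fy)

z0 = 0.3 + 0.4j  # an arbitrary sample point
dz, dzbar = wirtinger(f, z0)
expected = math.cos(2 * (z0.real + z0.imag))
print(abs(dz - (1 - 1j) * expected))     # ~ 0
print(abs(dzbar - (1 + 1j) * expected))  # ~ 0
```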
Verify that for all smooth $f:\R^2\rar\C$: $4\pa_z\pa_{\bar z}f=\pa_x^2f+\pa_y^2f$.
Alternatively: once you've got $f$ as a function of the $2n$ real variables $x_k$ and $y_k$, further express it as a function of the $2n$ variables $z_k$ and $\bar z_k$ by replacing $x_k$ and $y_k$ with $(z_k+\bar z_k)/2$ and $(z_k-\bar z_k)/2i$. Finally take the partial derivatives with respect to these variables as if $z_k$ and $\bar z_k$ were independent real variables.
Compute $\pa_zf$ and $\pa_{\bar z}f$ for $f(x+iy)=\sin(2(x+y))$.
Since $f(x+iy)=\sin(z+\bar z-i(z-\bar z))=\sin((1-i)z+(1+i)\bar z)$, it follows that $$ \pa_zf(z)=(1-i)\cos((1-i)z+(1+i)\bar z) \quad\mbox{and}\quad \pa_{\bar z}f(z)=(1+i)\cos((1-i)z+(1+i)\bar z)~. $$ Reverting to the variables $x$ and $y$ we are back to: $$ \pa_zf(z)=(1-i)\cos(2(x+y)) \quad\mbox{and}\quad \pa_{\bar z}f(z)=(1+i)\cos(2(x+y))~. $$
Compute $\pa_zf$ and $\pa_{\bar z}f$ for $f(x+iy)=|x+iy|^{2p}$, $p\in\R$.
Obviously $f(z)=z^p\bar z^p$ and therefore: $\pa_zf(z)=pz^{p-1}\bar z^p=p|z|^{2(p-1)}\bar z$, $\pa_{\bar z}f(z)=pz^p\bar z^{p-1}=p|z|^{2(p-1)}z$.
Assume $f:\O(\sbe\C^n)\rar\C$ is smooth (in the real sense) and $c:\R\rar \O$ is a smooth curve, then $$ \ttd t f\circ c(t) =\sum\pa_{z_j}f(c(t))c_j^\prime(t)+\pa_{\bar z_j}f(c(t))\bar c_j^\prime(t)~. $$
By the chain rule we have $(f\circ c)^\prime(t)=df(c(t))c^\prime(t)$ and since $df(z)=\sum_k\pa_{z_k}f(z)\,dz_k+\sum_k\pa_{\bar z_k}f(z)\,d\bar z_k$, the result follows.
Compute $(f\circ c)^\prime(t)$ for $f(z)=\exp(z-\bar z^2)$ and $c(t)=e^{it}$.
Find the Euler-Lagrange field equation \eqref{enteq2} for ${\cal L}(u,x,v)=\tfrac1{2}\sum|v_j|^{2}+V(x)|u|^{2}$.
Since ${\cal L}=\tfrac12\sum v_j\bar v_j+Vu\bar u$, it follows that $\pa_u{\cal L}=\bar u V$ and $\pa_{v_j}{\cal L}=\bar v_j/2$ and thus $$ -\tfrac1{2}\sum_j\pa_{x_j}^2\bar\psi+\bar\psi V=0 $$
Prove using a Lagrange multiplier that the Euler-Lagrange field equation \eqref{enteq2} for ${\cal L}(u,x,v)=\tfrac12\sum|v_j|^2+V(x)|u|^2$ under the condition $\int_M|\psi(x)|^2\,dx=1$ comes down to Schrödinger's equation: $$ -\tfrac12\sum\pa_{x_j}^2\bar\psi+V\bar\psi=\l\bar\psi~. $$
Now let us suppose we have a Lagrange density ${\cal L}:\C\times M\times\C^n\rar\R$ with $\UU(1)$-symmetry, i.e. for all $t\in\R$, all $z\in\C$ and all $v\in\C^n$: ${\cal L}(e^{it}z,.,e^{it}v)={\cal L}(z,.,v)$. Taking the derivative of $t\mapsto{\cal L}(e^{it}\psi,.,e^{it}d\psi)$ at $t=0$ we get by means of the chain rule for Wirtinger derivatives and the Euler-Lagrange field equation \eqref{enteq2}: \begin{eqnarray*} 0&=& i\pa_u{\cal L}(\psi,.,d\psi)\psi-i\pa_{\bar u}{\cal L}(\psi,.,d\psi)\bar\psi\\ &&+\sum_ji\pa_{v_j}{\cal L}(\psi,.,d\psi)\pa_{x_j}\psi -i\pa_{\bar v_j}{\cal L}(\psi,.,d\psi)\pa_{x_j}\bar\psi\\ &=&i\sum_j\pa_{x_j}\Big(\pa_{v_j}{\cal L}(\psi,.,d\psi)\Big)\psi +\pa_{v_j}{\cal L}(\psi,.,d\psi)\pa_{x_j}\psi\\ &&-i\sum_j\pa_{x_j}\Big(\pa_{\bar v_j}{\cal L}(\psi,.,d\psi)\Big)\bar\psi +\pa_{\bar v_j}{\cal L}(\psi,.,d\psi)\pa_{x_j}\bar\psi\\ &=&i\sum_j\pa_{x_j}\Big(\psi\pa_{v_j}{\cal L}(\psi,.,d\psi) -\bar\psi\pa_{\bar v_j}{\cal L}(\psi,.,d\psi)\Big), \end{eqnarray*} i.e. the divergence of the complex vector field $J$ with the components $$ i\Big(\psi\pa_{v_j}{\cal L}(\psi,.,d\psi) -\bar\psi\pa_{\bar v_j}{\cal L}(\psi,.,d\psi)\Big) $$ vanishes and, of course, $\psi$ obeys the field equation \eqref{enteq2}. In QM $J$ is usually called the probability current and the equation $\divergence J=0$ is tantamount to volume conservation, i.e. the flow of $J$ is volume preserving, which in physics comes down to mass or charge conservation: charge conservation is due to this kind of $\UU(1)$-symmetry. Take for example $$ {\cal L}(u,x,v)=\tfrac12\sum v_j\bar v_j+V(x)u\bar u, $$ then $\pa_{v_j}{\cal L}=\tfrac12\bar v_j$ and $\pa_{\bar v_j}{\cal L}=\tfrac12v_j$ thus the components of the vector field $J$ are $\tfrac12i(\psi\pa_{x_j}\bar\psi-\bar\psi\pa_{x_j}\psi)$. $\psi$ satisfies the Euler-Lagrange field equation $$ 0=-\tfrac12\D\bar\psi+V\bar\psi \quad\mbox{and}\quad 0=\sum_j\pa_{x_j}(\psi\pa_{x_j}\bar\psi-\bar\psi\pa_{x_j}\psi) =\psi\D\bar\psi-\bar\psi\D\psi. $$
Verify that $\psi(x)=e^{i\la k,x\ra}$ is a solution of the Euler-Lagrange field equation: $\tfrac12\sum\pa_{x_j}^2\bar\psi+\tfrac12\norm k^2\bar\psi=0$ and compute the associated probability current.
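For the plane wave one expects $J_j=\tfrac12i(\psi\pa_{x_j}\bar\psi-\bar\psi\pa_{x_j}\psi)=k_j$, a constant current. The sketch below (an added illustration, not part of the text; the wave vector, sample point and step size are arbitrary choices) confirms this in $\R^2$ via central differences:

```python
import cmath

k = (0.8, -0.3)  # an arbitrary wave vector

def psi(x):
    # the plane wave psi(x) = exp(i <k, x>)
    return cmath.exp(1j * (k[0] * x[0] + k[1] * x[1]))

def partial(f, x, j, h=1e-6):
    # central difference for the j-th partial derivative
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (f(xp) - f(xm)) / (2 * h)

x0 = (0.2, 0.5)  # an arbitrary sample point
J = [(0.5j * (psi(x0) * partial(lambda y: psi(y).conjugate(), x0, j)
              - psi(x0).conjugate() * partial(psi, x0, j))).real
     for j in range(2)]
print(J)  # approximately equal to k
```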
We conclude this section by stating another version of Noether's theorem: if ${\cal L}$ doesn't depend on $x$ explicitly, i.e. ${\cal L}:\R\times\R^n\rar\R$, then Noether's Theorem implies that $\divergence T\colon=\sum_j\pa_j T_{jk}=0$, where $T$ is the so-called energy-momentum tensor field (cf. subsection): $$ T=\sum_{jk}T_{jk}\,dx_j\otimes dx_k,\quad T_{jk}\colon= \Big(\pa_{x_k}\psi\pa_{v_j}{\cal L}(\psi,d\psi) -\d_{kj}{\cal L}(\psi,d\psi)\Big) $$ and as usual $\psi$ satisfies the Euler-Lagrange field equation \eqref{enteq2}. The proof is very similar to the previous proofs: We put $F\colon={\cal L}(\psi,\pa_1\psi,\ldots,\pa_n\psi)$; by the chain rule we get: $$ \pa_kF =\pa_u{\cal L}(\psi,\pa_1\psi,\ldots,\pa_n\psi)\pa_k\psi +\sum_j\pa_{v_j}{\cal L}(\psi,\pa_1\psi,\ldots,\pa_n\psi)\pa_k\pa_j\psi~. $$ Again we invoke the Euler-Lagrange field equation \eqref{enteq2} and conclude that: \begin{eqnarray*} \pa_kF&=&\sum_j\pa_j(\pa_{v_j}{\cal L}(\psi,\pa_1\psi,\ldots,\pa_n\psi))\pa_k\psi +\sum_j\pa_{v_j}{\cal L}(\psi,\pa_1\psi,\ldots,\pa_n\psi)\pa_k\pa_j\psi\\ &=&\sum_j\pa_j\Big(\pa_k\psi\pa_{v_j}{\cal L}(\psi,\pa_1\psi,\ldots,\pa_n\psi)\Big), \end{eqnarray*} which, by definition of $T$, exactly amounts to $\divergence T=0$.
Compute the energy-momentum tensor field for the Lagrange density ${\cal L}(u,x,v)=\tfrac12\Vert v\Vert^2-u^2$.
Cf. wikipedia; Y. Kosmann-Schwarzbach, The Noether Theorems; H. Fritzsch, Quantum Field Theory; J. Schwichtenberg, Physics from Symmetries.

Homogeneous and Harmonic Homogeneous Polynomials

Cf. students' version in German

Homogeneous Polynomials

This section is not about representations per se but only about particular spaces in which the groups may be represented. For $l\in\N_0$ let ${\cal P}_l$ be the space of $l$-homogeneous complex-valued polynomial functions on $\R^3$. Then ${\cal P}_l$ is a complex vector-space and for $\a=(\a_1,\a_2,\a_3)\in\N_0^3$ satisfying $|\a|\colon=\a_1+\a_2+\a_3=l$ the polynomials $P_\a=x^\a\colon=x_1^{\a_1}x_2^{\a_2}x_3^{\a_3}$ form a basis of ${\cal P}_l$. Its dimension is $\dim{\cal P}_l=(l+2)(l+1)/2$: indeed, encode $\a_1,\a_2,\a_3$ as a string $1\ldots101\ldots101\ldots1$ of ones and zeros: first $\a_1$ ones followed by a zero, then $\a_2$ ones followed by a zero, and finally $\a_3$ ones. Thus we have to choose the positions of the two zeros among $l+2$ slots!
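The stars-and-bars count is readily confirmed by brute-force enumeration (an added sketch, not part of the text):

```python
from itertools import product

def dim_P(l, n=3):
    # count the multi-indices alpha in N_0^n with |alpha| = l
    return sum(1 for a in product(range(l + 1), repeat=n) if sum(a) == l)

# the stars-and-bars formula (l+2)(l+1)/2 for n = 3
for l in range(6):
    assert dim_P(l) == (l + 2) * (l + 1) // 2
print([dim_P(l) for l in range(6)])  # [1, 3, 6, 10, 15, 21]
```

Since `dim_P` takes the dimension $n$ as a parameter, it can also be used to experiment with the following exercise.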
What is the dimension of the space of $l$-homogeneous polynomial functions on $\R^n$?
Suppose $G$ is any subgroup of the orthogonal group $\OO(3)$; for $P\in{\cal P}_l$ we put \begin{equation}\label{hhpeq1}\tag{HHP1} \G(g)P(x)\colon=P(g^{-1}x) \end{equation} Then $\G$ is a representation of $G$ similar to the one we encountered in subsection. In order to turn it into a unitary representation we need to define a euclidean product on ${\cal P}_l$. So let us take the following product: \begin{equation}\label{hhpeq2}\tag{HHP2} \la P,Q\ra \colon=\frac1{c_l}\int_{\R^3}P(x)\cl{Q(x)}e^{-\Vert x\Vert^2/2}\,dx =\int_{S^2}P(x)\cl{Q(x)}\s(dx)~. \end{equation} where $c_l$ is a normalizing constant and $\s$ denotes the normalized surface measure on $S^2$, i.e. for all measurable subsets $A$ of $S^2$: $\s(A)\colon=\l((0,1]A)/\l(B)$, where $\l$ denotes Lebesgue measure in $\R^3$, $(0,1]A\colon=\{t\z:\,t\in(0,1],\z\in A\}$ and $B=(0,1]S^2$ the euclidean unit ball:
surface measure
With respect to this euclidean product $\G$ is unitary, because of the rotational invariance of Lebesgue measure.
The constant $c_l$ is given by $4\pi\G(l+3/2)2^{l+1/2}$, where $\G$ denotes the gamma function (cf. wikipedia); in particular: $c_0=(2\pi)^{3/2}$ and $c_{l+1}=(2l+3)c_l$.
We do not check the last two statements. By integration in spherical coordinates: $r\in\R^+$, $\z\in S^2$, we have for all integrable functions $f:\R^3\rar\C$: $$ \int_{\R^3} f(x)\,dx=4\pi\int_0^\infty r^2\int_{S^2}f(r\z)\,\s(d\z)\,dr $$ For $P,Q\in{\cal P}_l$ we put $f(x)=P(x)\bar Q(x)e^{-\Vert x\Vert^2/2}$; the above formula yields: $$ \la P,Q\ra =\frac{4\pi}{c_l}\int_0^\infty r^{2+2l}e^{-r^2/2} \,dr\int_{S^2}P(\z)\bar Q(\z)\,\s(d\z) $$ and therefore $$ c_l=4\pi\int_0^\infty r^{2+2l}e^{-r^2/2}\,dr=4\pi\G(l+3/2)2^{l+1/2}~. $$
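The closed form of the radial integral can be cross-checked numerically (an added sketch, not part of the text; the quadrature range and resolution are arbitrary choices):

```python
import math

def c_quad(l, R=12.0, N=100000):
    # midpoint rule for 4*pi * int_0^R r^(2+2l) e^(-r^2/2) dr; the tail beyond R is negligible
    h = R / N
    total = 0.0
    for i in range(N):
        r = (i + 0.5) * h
        total += r ** (2 + 2 * l) * math.exp(-r * r / 2)
    return 4 * math.pi * h * total

for l in range(4):
    closed = 4 * math.pi * math.gamma(l + 1.5) * 2 ** (l + 0.5)
    assert abs(c_quad(l) - closed) / closed < 1e-6
print(c_quad(0), (2 * math.pi) ** 1.5)  # c_0 = (2 pi)^(3/2)
```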
The polynomials $x^\a$, $|\a|=l$, form an orthogonal basis of ${\cal P}_l$ for $l=1$ but not for $l=2$. Solution by T. Speckhofer
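Both statements can be probed by numerical quadrature of the surface integral in \eqref{hhpeq2} (an added sketch, not part of the text; the grid resolution is an arbitrary choice): $\la x,y\ra=0$, whereas $\la x^2,y^2\ra=\int_{S^2}x^2y^2\,\s(d\z)\neq0$.

```python
import math

def sphere_avg(F, N=400):
    # midpoint quadrature of int_{S^2} F d(sigma) with the normalized surface measure
    s, hp, ht = 0.0, math.pi / N, 2 * math.pi / N
    for i in range(N):
        phi = (i + 0.5) * hp
        sp, cp = math.sin(phi), math.cos(phi)
        for j in range(N):
            th = (j + 0.5) * ht
            s += F(sp * math.cos(th), sp * math.sin(th), cp) * sp
    return s * hp * ht / (4 * math.pi)

print(sphere_avg(lambda x, y, z: x * y))          # ~ 0: x and y are orthogonal
print(sphere_avg(lambda x, y, z: x * x * y * y))  # ~ 1/15: x^2 and y^2 are not
```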
The mapping $\Psi(z)f(w)\colon=f(zw)$ is a unitary representation of $S^1=\UU(1)$ in the space $L_2(\C,\g)$ with the gaussian measure $\g(dw)=(2\pi)^{-1}e^{-|w|^2/2}\,dw$. 2. Verify by means of exam that for smooth functions $f$: $$ \ttdl t0\Psi(e^{it})f(w)=iw\pa_wf(w)-i\bar w\pa_{\bar w}f(w) $$ Solution by T. Speckhofer
Let $E$ be the space of all complex valued polynomials on $\R$ of degree less or equal $n$ and $P\in\Hom(E)$ the operator $Pf\colon=-if^\prime$. Compute $\tr P$ and $\det P$.

Harmonic homogeneous polynomials

Let ${\cal H}_l$ be the subspace of ${\cal P}_l$ containing all harmonic polynomials, i.e. $P\in{\cal H}_l$ iff $P\in{\cal P}_l$ and $\D P=0$, cf. wikipedia. Since for all $g\in\OO(3)$: $\D(P\circ g)=(\D P)\circ g$ - remember: all $g\in\OO(3)$ are symmetries of the Laplacian - the above representation $\G(g)P\colon=P\circ g^{-1}$ restricts to a representation of $G$ in the spaces ${\cal H}_l$, $l\in\N_0$.
The constant polynomial and the polynomials $x,y,z$ form orthogonal bases of the spaces ${\cal H}_0$ and ${\cal H}_1$ respectively. Moreover the polynomials $xy,yz,zx,x^2-y^2,2z^2-x^2-y^2$ form an orthogonal basis of ${\cal H}_2$. The corresponding normalized polynomials will be denoted by $s_1$, $p_1,p_2,p_3$ and $d_1,\ldots,d_5$ respectively. We have e.g. $s_1=1$, $p_1=\sqrt3x$, $d_1=\sqrt{15}xy$, $d_4=\sqrt{15/4}(x^2-y^2)$, $d_5=\sqrt{5/4}(2z^2-x^2-y^2)$.
We do not check orthogonality! By putting $Q=P\in{\cal P}_l$ in \eqref{hhpeq2} we obtain: $$ \int_{S^2}|P(\z)|^2\,\s(d\z) =\frac1{c_l}\int_{\R^3}|P(x)|^2e^{-\Vert x\Vert^2/2}\,dx~. $$ For $P(\z)=x^2$ the right hand side equals $3(2\pi)^{3/2}/c_2=1/5$ and for $P(\z)=xy$: $(2\pi)^{3/2}/c_2=1/15$, it follows that $$ \int_{S^2}x^4\,\s(d\z)=\frac1{5} \quad\mbox{and}\quad \int_{S^2}x^2y^2\,\s(d\z)=\frac1{15} $$ Our first goal is to determine the dimensions of the spaces ${\cal H}_l$. We'll employ the following simple result in linear algebra:
Suppose $E$ is a finite dimensional Hilbert-space and $A\in\Hom(E)$. Then $(\im A)^\perp=\ker A^*$ and $\im A^*=(\ker A)^\perp$. What can you say in the infinite dimensional case for bounded $A$?
For every $Q\in{\cal P}_l$ there exist uniquely determined polynomials $H\in{\cal H}_l$ and $P\in{\cal P}_{l-2}$, such that $Q=H+\Vert x\Vert^2 P$. The dimension of ${\cal H}_l$ thus equals $\dim{\cal P}_l-\dim{\cal P}_{l-2}=2l+1$.
$\proof$ We introduce a new complex euclidean product: Suppose $P=\sum_{|\a|=l}c_\a x^\a\in{\cal P}_l$ is a homogeneous polynomial of degree $l$; let $P(\pa)$ be the partial differential operator $\sum_{|\a|=l}c_\a\pa^\a$, then for $Q=\sum_{|\a|=l}d_\a x^\a$ we put $$ [P,Q]\colon=P(\pa)\bar Q =\sum_{|\a|=|\b|=l} c_\a \bar d_\b\pa^\a x^\b =\sum_{|\a|=l}\a!c_\a\bar d_\a $$ and get another euclidean product on ${\cal P}_l$. Now the kernel of $\D:{\cal P}_l\rar{\cal P}_{l-2}$ is ${\cal H}_l$ and for all $P\in{\cal P}_{l-2}$ and all $Q\in{\cal P}_l$ we have $$ [\D^*P,Q] =[P,\D Q] =P(\pa)\D\bar Q =\D P(\pa)\bar Q =[\Vert x\Vert^2P,Q] $$ and thus the adjoint $\D^*$ of $\D$ is $P\mapsto\Vert x\Vert^2P$. Since all spaces are finite dimensional, we conclude: $$ {\cal P}_l =\ker\D\oplus(\ker\D)^\perp ={\cal H}_l\oplus\im\D^*~. $$ $\eofproof$
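The dimension count $\dim{\cal H}_l=2l+1$ can be double-checked by exact linear algebra (an added sketch, not part of the text): build the matrix of $\D:{\cal P}_l\rar{\cal P}_{l-2}$ in the monomial bases and compute the dimension of its kernel by Gaussian elimination over the rationals.

```python
from fractions import Fraction
from itertools import product

def monomials(l, n=3):
    return [a for a in product(range(l + 1), repeat=n) if sum(a) == l]

def laplacian_matrix(l):
    # matrix of Delta: P_l -> P_{l-2} in the monomial bases x^alpha
    dom, cod = monomials(l), monomials(l - 2)
    idx = {a: i for i, a in enumerate(cod)}
    M = [[Fraction(0)] * len(dom) for _ in cod]
    for col, a in enumerate(dom):
        for j in range(3):  # d^2/dx_j^2 x^a = a_j (a_j - 1) x^(a - 2 e_j)
            if a[j] >= 2:
                b = list(a)
                b[j] -= 2
                M[idx[tuple(b)]][col] += a[j] * (a[j] - 1)
    return M

def rank(M):
    # Gaussian elimination with exact rational arithmetic
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                q = M[i][col] / M[r][col]
                M[i] = [a - q * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

for l in range(2, 7):
    dim_H = len(monomials(l)) - rank(laplacian_matrix(l))  # dim ker Delta
    assert dim_H == 2 * l + 1
print("dim H_l = 2l + 1 checked for l = 2,...,6")
```

Incidentally, the rank of the matrix always equals $\dim{\cal P}_{l-2}$, i.e. $\D:{\cal P}_l\rar{\cal P}_{l-2}$ is surjective, exactly as the lemma asserts.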
The polynomials $x^\a/\sqrt{\a!}$, $|\a|=l$, form an orthonormal basis of $({\cal P}_l,[.,.])$.
The polynomials $xy,yz,zx,x^2-y^2,2z^2-x^2-y^2$ form an orthogonal basis of $({\cal H}_2,[.,.])$. Normalize these polynomials.
What is the dimension of the space of $l$-homogeneous harmonic polynomials on $\R^n$?
In dimension $n=2$ we have $\dim{\cal H}_l=2$. Show by means of exam that the polynomials $p_l(x,y)=\Re(x+iy)^l$ and $q_l(x,y)\colon=\Im(x+iy)^l$ form an orthogonal basis.
A homogeneous harmonic polynomial $H$ is certainly determined by its restriction $H|S^2$, which is called a spherical harmonic on $S^2$. Apparently, the previous lemma implies that the restriction of every homogeneous polynomial to $S^2$ is a linear combination of spherical harmonics. But we are going to prove that much more is true: the spaces ${\cal H}_l$ form an orthogonal decomposition of the Hilbert-space $L_2(S^2)\colon=L_2(S^2,\s)$:
Suppose $f$ is a smooth function and $X$ a vector field on $\R^n$. Prove that $\divergence(fX)=\la X,\nabla f\ra+f\divergence(X)$.
The spaces ${\cal H}_l$ of spherical harmonics of degree $l\in\N_0$ form an orthogonal decomposition of the Hilbert-space $L_2(S^2)$.
$\proof$ Assume $k\neq l$, $P\in{\cal H}_k$, $Q\in{\cal H}_l$ and let $N$ be the outer unit normal field of $S^2=\pa B$. Then we get by homogeneity: $\la N,\nabla P\ra=kP$ and $\la N,\nabla Q\ra=lQ$. By the divergence theorem we therefore have \begin{eqnarray*} (k-l)\int_{S^2} P\bar Q\,d\s &=&\int_{S^2}\la N,\bar Q\nabla P-P\nabla\bar Q\ra\,d\s\\ &=&\frac1{4\pi}\int_{B}\divergence(\bar Q\nabla P-P\nabla\bar Q)\,d\l =\frac1{4\pi}\int_{B}\bar Q\D P-P\D\bar Q\,d\l=0~. \end{eqnarray*} By the previous remark and the Stone-Weierstraß Theorem the sum $\sum{\cal H}_l$ is dense in $C(S^2)$ and thus dense in $L_2(S^2)$. $\eofproof$
Can you explain the last argument of the previous proof in detail?
Remark: Apparently the theorem holds in any dimension!
The significance of spherical harmonics in chemistry is due to the fact that they in a way describe the directional dependency of what is called an atomic orbital
of a spherically symmetric nucleus. Atomic orbital is nothing but another name for a particular stable state: a stable state of an electron bound by a nucleus. The state space is the Hilbert-space $L_2(\R^3)$ and it's known that all stable states $\psi$ can be described by functions of the form $\psi(r\z)\colon=u(r)H(\z)$, $r>0$, $\z\in S^2$ - this is due to the fact that in spherical coordinates $(r,\z)$ the Hamiltonian of a spherically symmetric nucleus is a sum of tensor products (cf. section in particular exam). $u$ reflects the radial dependency and $H$ the directional dependency; the latter are exactly the spherical harmonics. Thus if $H:S^2\rar\C$ denotes a normalized spherical harmonic, then $|H(\z)|^2$ is interpreted as the probability density to come across the electron in state $\psi$ in the direction $\z\in S^2$: for any (measurable) subset $A$ of $S^2$ the integral $\int_A|H(\z)|^2\s(d\z)$ is the probability of finding the electron in the cone $\R^+A\colon=\{ra:r > 0,a\in A\}$ generated by $A$, cf. wikipedia.
Work out the representation \eqref{hhpeq1} for $C_{2v}$ in ${\cal H}_l$ for $l=0,1,2$.
We determine the matrices of these representations with respect to the orthonormal bases given in example. First of all we have $C_2^{-1}(x,y,z)=(-x,-y,z)$, $\s_1^{-1}(x,y,z)=(x,-y,z)$, $\s_2^{-1}(x,y,z)=(-x,y,z)$.
For $l=0$ we get the trivial representation: $\G(g)=1$.
For $l=1$ the matrices of $\G(C_2),\G(\s_1),\G(\s_2)$ are given by $$ \left(\begin{array}{ccc} -1&0&0\\ 0&-1&0\\ 0&0&1 \end{array}\right),\quad \left(\begin{array}{ccc} 1&0&0\\ 0&-1&0\\ 0&0&1 \end{array}\right),\quad \left(\begin{array}{ccc} -1&0&0\\ 0&1&0\\ 0&0&1 \end{array}\right) $$ For $l=2$ we have: $\G(C_2)d_1=d_1$, $\G(C_2)d_2=-d_2$, $\G(C_2)d_3=-d_3$, $\G(C_2)d_4=d_4$ and $\G(C_2)d_5=d_5$, $\G(\s_1)d_1=-d_1$, $\G(\s_1)d_2=-d_2$, $\G(\s_1)d_3=d_3$, $\G(\s_1)d_4=d_4$ and $\G(\s_1)d_5=d_5$, $\G(\s_2)d_1=-d_1$, $\G(\s_2)d_2=d_2$, $\G(\s_2)d_3=-d_3$, $\G(\s_2)d_4=d_4$ and $\G(\s_2)d_5=d_5$. Thus we get for the matrices of $\G(C_2),\G(\s_1),\G(\s_2)$: $$ \left(\begin{array}{ccccc} 1&0&0&0&0\\ 0&-1&0&0&0\\ 0&0&-1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1 \end{array}\right),\quad \left(\begin{array}{ccccc} -1&0&0&0&0\\ 0&-1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1 \end{array}\right),\quad \left(\begin{array}{ccccc} -1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&-1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1 \end{array}\right) $$
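These sign patterns are easy to verify mechanically (an added sketch, not part of the text; the test point is an arbitrary generic choice): evaluate each basis polynomial at a point and at its image under $g^{-1}$, and record whether the sign flips.

```python
# the H_2 basis polynomials (up to normalization) and the C_2v operations
basis = {
    "d1": lambda x, y, z: x * y,
    "d2": lambda x, y, z: y * z,
    "d3": lambda x, y, z: z * x,
    "d4": lambda x, y, z: x * x - y * y,
    "d5": lambda x, y, z: 2 * z * z - x * x - y * y,
}
gens = {  # the maps g^{-1} listed above
    "C2":     lambda x, y, z: (-x, -y, z),
    "sigma1": lambda x, y, z: (x, -y, z),
    "sigma2": lambda x, y, z: (-x, y, z),
}
pt = (0.3, 0.7, -1.1)  # an arbitrary generic point
signs = {g: [] for g in gens}
for gname, g in gens.items():
    for P in basis.values():
        val, gval = P(*pt), P(*g(*pt))  # (Gamma(g)P)(pt) = P(g^{-1} pt)
        signs[gname].append(1 if abs(gval - val) < 1e-12 else -1)
print(signs)  # the diagonals of the three matrices above
```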

Atomic orbitals

The stable states of a hydrogen atom are usually expressed by a triple of natural numbers: $n,l$ and $m$ - the quantum numbers: 1. $n\in\N$ is called the first quantum number, it determines the energy $E_n$ of the stable state: $E_n=-\g/n^2$. 2. $l\in\N_0$ is called the angular momentum quantum number and, of course, indicates the angular momentum; mathematically it's just an index for the spherical harmonics of degree $l$. Given $n$ the number $l$ must be one of the numbers $0,1,\ldots,n-1$. 3. $m\in\Z$ is called the magnetic quantum number: for given $l$ we have $m\in\{-l,-l+1,\ldots,0,1,\ldots,l\}$; these are exactly $2l+1$ numbers and thus they build a convenient index set for a basis of the space ${\cal H}_l$ of spherical harmonics of degree $l$.
The energy $E_n$ of a stable state is an eigen-value of an operator - the Hamiltonian of the hydrogen atom - and the state itself is a corresponding eigen-function. Let us say a few words on the multiplicity of the eigen-value $E_n$: For every $n$ there are exactly $n$ values of $l$, namely $0,\ldots,n-1$; for each of these values $l$ there are $2l+1$ pairwise orthonormal eigen-functions - an orthonormal basis of ${\cal H}_l$. All these functions give rise to eigen-functions of the Hamiltonian of the hydrogen atom and together they form a basis of the eigen-space corresponding to $E_n$. Therefore the multiplicity of $E_n$ is: $1+3+5+\cdots+(2n-1)=n^2$. Now there is an additional quantum number: the spin; it can only take two values, which we simply denote by $\pm1/2$. It follows that the multiplicity of $E_n$ is $2n^2$ - we see that each energy level $E_n$ can be occupied by at most $2n^2$ electrons. This is one of the basic principles of QM (Pauli's exclusion principle): no two electrons bound to a nucleus are described by the same four quantum numbers.
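The counting argument above is easy to replay in a few lines; nothing here is specific to hydrogen beyond the degeneracy count:

```python
# Degeneracy of E_n: the l-values 0,...,n-1 contribute 2l+1 states each,
# and the odd numbers 1 + 3 + ... + (2n-1) sum to n^2; spin doubles this.
for n in range(1, 6):
    deg = sum(2 * l + 1 for l in range(n))
    assert deg == n**2
    print(f"n={n}: multiplicity {deg}, with spin {2 * deg}")
```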
A note on notational inconveniences: Let us label the stable states of an electron in a hydrogen atom: we need a label for the energy - we take $n\in\N$, a label for the angular momentum - we take $l\in\{0,1,\ldots,n-1\}$, a label for an orthonormal basis of ${\cal H}_l$ - we take $m\in\{-l,\ldots,l\}$ and a label for the spin - we take $s\in\{\pm1/2\}$. In math these are labels for some function, so we would come up with something like $\psi_{n,l,m,s}$, but physicists familiar with the bra-ket notation just use $|n,l,m,s\ra$ or $|n,l,m,\uar\ra$ or any combination of four symbols which properly labels the stable states. Apropos of notation: in chemistry and physics electrons whose first quantum number is $n$ and whose angular momentum quantum number is $l=0$, $l=1$, $l=2$ and $l=3$ respectively are called $ns$-, $np$-, $nd$- and $nf$-electrons. Electrons with rotationally invariant, i.e. directionally independent, orbitals are $s$ electrons. Sample points of a typical $f$-orbital are animated below: a high density of points indicates a larger probability of finding the electron in the corresponding direction (the distribution was generated by C code).
Now, what about more complex atoms like nitrogen $\chem{N}$? In a first approximation nitrogen will be viewed as a helium atom with $5$ additional protons. As a consequence the two electrons are bound more tightly. This gives us a $5$-fold positively charged nucleus, whose potential is assumed to be rotationally invariant. This assumption in turn implies that the directional dependency of the eigen-functions is exactly the same as the directional dependency of the eigen-functions of hydrogen. We won't elaborate on this statement, we just take it for granted! It remains to adjoin $5$ electrons to this nucleus. For these electrons there are $2\cdot 2^2=8$ possible states of minimal energy. Now we further assume that all these $5$ electrons can be adjoined independently, i.e. without changing the potential essentially. Well, that's obviously a crude approximation, but nevertheless it works quite well. Cf. e.g. Feynman Lectures III, 19, Periodic Table. In summary, nitrogen in its base configuration is written as $\chem{He.2s^2.2p^3}$, meaning: it's perceived as helium $\chem{He}$ plus two electrons in a $2s$ orbital and three electrons in a $2p$ orbital. Here are the configurations of the first elements: $\chem{H}$: $\chem{1s^1}$, $\chem{He}$: $\chem{1s^2}$, $\chem{Li}$: $\chem{He.2s^1}$, $\chem{Be}$: $\chem{He.2s^2}$, $\chem{B}$: $\chem{He.2s^2.2p^1}$, $\chem{C}$: $\chem{He.2s^2.2p^2}$, $\chem{N}$: $\chem{He.2s^2.2p^3}$, $\chem{O}$: $\chem{He.2s^2.2p^4}$, $\chem{F}$: $\chem{He.2s^2.2p^5}$, $\chem{Ne}$: $\chem{He.2s^2.2p^6}$, $\chem{Na}$: $\chem{Ne.3s^1}$, etc.
There is one point we have to keep in mind when simplifying things this way: in our simplified model of nitrogen all $8$ possible states have the same energy, but except for $\chem{H}$ and $\chem{He}$ the energy of the states increases as their angular momentum quantum number increases, i.e. a $2s$ electron has lower energy than a $2p$ electron. Consequently it may happen that the energy of a $3d$-state is higher than the energy of a $4s$-state: for example potassium $\chem{K}$ has configuration $\chem{Ne.3s^2.3p^6.4s^1}$ and not $\chem{Ne.3s^2.3p^6.3d^1}$; likewise calcium $\chem{Ca}$ has configuration $\chem{Ne.3s^2.3p^6.4s^2}$ and not $\chem{Ne.3s^2.3p^6.3d^2}$. Also the next inert gas argon $\chem{Ar}$ has configuration $\chem{Ne.3s^2.3p^6}$, although there are still $2(2\cdot2+1)=10$ possible $3d$-states. This indicates that there is a big energy gap separating $3d$ electrons from $3p$ electrons.
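The filling order just described ($4s$ before $3d$) is usually codified as the Madelung rule: subshells are filled in order of increasing $n+l$, ties broken by smaller $n$. A minimal sketch of this rule (it reproduces the configurations listed above, though it ignores the well-known exceptions such as chromium and copper):

```python
def configuration(Z):
    """Electron configuration of an atom with Z electrons,
    filled by the Madelung (n+l, n) ordering of subshells."""
    subshells = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    letters = "spdfghi"
    parts, left = [], Z
    for n, l in subshells:
        if left == 0:
            break
        k = min(2 * (2 * l + 1), left)     # subshell capacity is 2(2l+1)
        parts.append(f"{n}{letters[l]}^{k}")
        left -= k
    return ".".join(parts)

print(configuration(7))    # N:  1s^2.2s^2.2p^3
print(configuration(19))   # K:  ends in 4s^1 - 4s fills before 3d
print(configuration(20))   # Ca: ends in 4s^2
```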
Figure: energy levels.

Molecular orbitals: $\chem{H_2O}$

The molecular orbital method, which we refer to (cf. wikipedia, ZLibrary), considers valence electrons as being distributed over the whole molecule. Instead of giving a detailed account on this theory, we highlight the method by a simple example: $\chem{H_2O}$. Again, we retain the notions from subsection and assume that the molecule lies in the $yz$-plane with the oxygen placed at the center! In its base configuration oxygen $\chem{O}$ is essentially helium $\chem{He}$ with $6$ additional electrons, two in a $2s$-orbital and four in a $2p$-orbital. For the state space $E_O$ of oxygen we therefore choose $E_O\colon={\cal H}_0\oplus{\cal H}_1$ - one $2s$-orbital and three $2p$-orbitals (notice that each of these orbitals can be occupied by two electrons with opposite spin). As orthonormal basis we choose $$ s_O=1,\quad p_1=\sqrt3x,\quad p_2=\sqrt3y\quad\mbox{and}\quad p_3=\sqrt3z~. $$ Each of the two hydrogen atoms has a $1s$ electron, thus we choose one dimensional state spaces $E_1=\C s_1$ and $E_2=\C s_2$. The symmetry group of water $\chem{H_2O}$ is $C_{2v}$; for each symmetry $g\in C_{2v}$ we define a unitary operator $\Psi(g)\in\UU(E_1\oplus E_2\oplus E_O)$ as follows: $g$ leaves the central oxygen atom invariant and for $Y\in E_O$ we set $\Psi(g)Y\colon=Y\circ g^{-1}$; $g$ either interchanges the hydrogen atoms or leaves them invariant; in the former case $\Psi(g)$ interchanges the spaces $E_1$ and $E_2$ thus: \begin{eqnarray*} &&\Psi(C_2)s_1=s_2, \Psi(C_2)s_2=s_1, \Psi(C_2)s_O=s_O, \Psi(C_2)p_1=-p_1, \Psi(C_2)p_2=-p_2, \Psi(C_2)p_3=p_3,\\ &&\Psi(\s_1)s_1=s_2, \Psi(\s_1)s_2=s_1, \Psi(\s_1)s_O=s_O, \Psi(\s_1)p_1=p_1, \Psi(\s_1)p_2=-p_2, \Psi(\s_1)p_3=p_3,\\ &&\Psi(\s_2)s_1=s_1, \Psi(\s_2)s_2=s_2, \Psi(\s_2)s_O=s_O, \Psi(\s_2)p_1=-p_1, \Psi(\s_2)p_2=p_2, \Psi(\s_2)p_3=p_3~. \end{eqnarray*} This representation is similar to the one discussed in subsection and in particular subsection. 
Eventually we get for the matrices of $\Psi(C_2),\Psi(\s_1),\Psi(\s_2)$: $$ \left(\begin{array}{cccccc} 0&1&0&0&0&0\\ 1&0&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&-1&0&0\\ 0&0&0&0&-1&0\\ 0&0&0&0&0&1 \end{array}\right), \left(\begin{array}{cccccc} 0&1&0&0&0&0\\ 1&0&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&-1&0\\ 0&0&0&0&0&1 \end{array}\right), \left(\begin{array}{cccccc} 1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&-1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1 \end{array}\right) $$ So far we've got only a representation of the symmetry group of water in a six dimensional space. The molecular orbitals we are looking for are linear combinations of the six chosen atomic orbitals - that's what molecular orbital theory postulates, but which linear combinations? Well, the exact answer involves the Hamiltonian of the molecule or at least some simplification of this animal, and that's not the kind of thing we're after. We simply want to exploit symmetry to limit the possible variety of linear combinations without even knowing a smidgen about the Hamiltonian, besides its symmetries of course, and that means: the Hamiltonian $H$ commutes with the representation $\Psi$, i.e. for all $g\in C_{2v}$: $[H,\Psi(g)]=0$. That's our only prerequisite (cf. section)!
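As a quick sanity check one can verify that the three matrices really define a representation of $C_{2v}$: $\s_1\s_2=C_2$ holds in the group, so $\Psi(\s_1)\Psi(\s_2)=\Psi(C_2)$ must hold for the matrices, and every nontrivial element of $C_{2v}$ is an involution. A minimal sketch in the basis $(s_1,s_2,s_O,p_1,p_2,p_3)$:

```python
import numpy as np

swap = np.array([[0., 1.], [1., 0.]])   # interchanges the two hydrogen orbitals

def psi(h_block, p_signs):
    """Block matrix: 2x2 hydrogen block, 1 for s_O, signs on p_1, p_2, p_3."""
    M = np.zeros((6, 6))
    M[:2, :2] = h_block
    M[2, 2] = 1.0
    M[3:, 3:] = np.diag(p_signs)
    return M

Psi_C2 = psi(swap,      [-1., -1., 1.])
Psi_s1 = psi(swap,      [ 1., -1., 1.])
Psi_s2 = psi(np.eye(2), [-1.,  1., 1.])

# sigma_1 sigma_2 = C_2 in C_2v, and each symmetry is an involution:
print(np.allclose(Psi_s1 @ Psi_s2, Psi_C2))                 # True
print(all(np.allclose(M @ M, np.eye(6))
          for M in (Psi_C2, Psi_s1, Psi_s2)))               # True
```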
Find the representation $\Psi:C_{3v}\rar\UU(E_N\oplus E_1\oplus E_2\oplus E_3)$ of the symmetry group of ammonia $\chem{NH_3}$, where $E_N={\cal H}_0\oplus{\cal H}_1$ and $E_1=E_2=E_3={\cal H}_0$.
$C_3$ permutes the hydrogen atoms cyclically: $1\to2\to3\to1$; thus $\Psi(C_3)$ maps the $s$-orbital $s$ of nitrogen onto itself and the $s$-orbitals $s_1,s_2,s_3$ of the hydrogen atoms are permuted: $\Psi(C_3)s_1=s_2$, $\Psi(C_3)s_2=s_3$ and $\Psi(C_3)s_3=s_1$. As for the remaining three $p$-orbitals $p_1=\sqrt3x,p_2=\sqrt3y,p_3=\sqrt3z$ of nitrogen we get: \begin{eqnarray*} \Psi(C_3)p_1(x,y,z) &=&p_1\circ C_3^{-1}(x,y,z)\\ &=&p_1(-\tfrac12x+\tfrac{\sqrt3}2y,-\tfrac{\sqrt3}2x-\tfrac12y,z)\\ &=&\sqrt3(-\tfrac12x+\tfrac{\sqrt3}2y) =-\tfrac12p_1+\tfrac{\sqrt3}2p_2 \end{eqnarray*} and $\Psi(C_3)p_2=-\tfrac{\sqrt3}2p_1-\tfrac12p_2$, $\Psi(C_3)p_3=p_3$. Similarly, $\s_1$ permutes the hydrogen atoms as follows: $1\to1,2\to3,3\to2$. $\Psi(\s_1)s_1=s_1$, $\Psi(\s_1)s_2=s_3$ and $\Psi(\s_1)s_3=s_2$. $$ \Psi(\s_1)p_1(x,y,z) =p_1\circ\s_1^{-1}(x,y,z) =p_1(x,-y,z) =\sqrt3x =p_1 $$ and $\Psi(\s_1)p_2=-p_2$, $\Psi(\s_1)p_3=p_3$. This gives us the matrices: $$ \Psi(C_3)= \left(\begin{array}{ccccccc} 1&0&0&0&0&0&0\\ 0&-\tfrac12&-\tfrac{\sqrt3}2&0&0&0&0\\ 0&\tfrac{\sqrt3}2&-\tfrac12&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&0&0&1\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0 \end{array}\right),\quad \Psi(\s_1)=\left(\begin{array}{ccccccc} 1&0&0&0&0&0&0\\ 0&1&0&0&0&0&0\\ 0&0&-1&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1\\ 0&0&0&0&0&1&0 \end{array}\right)~. $$
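The two matrices above can likewise be checked against the defining relations of $C_{3v}$: $\Psi(C_3)^3=1$, $\Psi(\s_1)^2=1$ and $\Psi(\s_1)\Psi(C_3)\Psi(\s_1)=\Psi(C_3)^{-1}$. A short numerical sketch in the basis $(s,p_1,p_2,p_3,s_1,s_2,s_3)$:

```python
import numpy as np

c, s = -0.5, np.sqrt(3) / 2

# Psi(C_3) in the basis (s, p_1, p_2, p_3, s_1, s_2, s_3):
Psi_C3 = np.zeros((7, 7))
Psi_C3[0, 0] = 1.0                                   # s -> s
Psi_C3[1:3, 1:3] = [[c, -s], [s, c]]                 # rotation block on p_1, p_2
Psi_C3[3, 3] = 1.0                                   # p_3 -> p_3
Psi_C3[4:, 4:] = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]   # s_1 -> s_2 -> s_3 -> s_1

Psi_s1 = np.diag([1., 1., -1., 1., 1., 0., 0.])      # p_2 -> -p_2
Psi_s1[5, 6] = Psi_s1[6, 5] = 1.0                    # s_2 <-> s_3

# Group relations of C_3v; Psi(C_3) is orthogonal, so its inverse is Psi_C3.T:
print(np.allclose(np.linalg.matrix_power(Psi_C3, 3), np.eye(7)))   # True
print(np.allclose(Psi_s1 @ Psi_C3 @ Psi_s1, Psi_C3.T))             # True
```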
Find the representation $\Psi:T_d\rar\UU(E_C\oplus E_1\oplus E_2\oplus E_3\oplus E_4)$ of the symmetry group of methane $\chem{CH_4}$, where $E_C={\cal H}_0\oplus{\cal H}_1$ and $E_1=E_2=E_3=E_4={\cal H}_0$.

Spin $l/2$ particles

A spin $l/2$ particle, $l\in\N_0$, is a particular quantum mechanical system: a complex Hilbert space $E$ of dimension $l+1$ - the state space of the particle - and an irreducible (cf. section) unitary representation $\Psi:\SU(2)\rar\UU(E)$ of $\SU(2)$ in $E$. The unitary representation is just the way the angular momentum operators act on the particle (cf. section). Analogously, irreducible unitary representations of $\SU(3)$ play a prominent role in e.g. quantum chromodynamics, cf. wikipedia. In the simplest non trivial case of a spin $1/2$ particle (also called qubit) the representation is the standard representation of $\SU(2)$ and the angular momentum operators (in the direction of an orthonormal basis of $\R^3$) are given by the Pauli spin matrices: $$ \s_1\colon= \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right),\quad \s_2\colon= \left(\begin{array}{cc} 0&-i\\ i&0 \end{array}\right),\quad \s_3\colon= \left(\begin{array}{cc} 1&0\\ 0&-1 \end{array}\right) $$ multiplied by $1/2$.
Verify that $[\s_1,\s_2]=2i\s_3$, $[\s_2,\s_3]=2i\s_1$ and $[\s_3,\s_1]=2i\s_2$.
The matrices $\s_j$ are unitary, but that's not the point, it's rather the fact that the matrices $\s_j/2i$ generate the Lie-algebra of $\SU(2)$: Suppose $a=(a_1,a_2,a_3)$ is a normalized vector in $\R^3$ and put $$ S_a\colon=\tfrac12(a_1\s_1+a_2\s_2+a_3\s_3)~. $$ In QM the operator $S_a$ is an observable (i.e. it's self-adjoint on $\C^2$) and it's called the angular momentum operator of a spin $1/2$ particle in the direction of $a$, so $\s_j/2$ is the angular momentum operator in the direction of the $j$-th axis.
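Both the commutation relations and the stated properties of $S_a$ are easy to verify numerically; a minimal sketch for one (hypothetical) choice of unit vector $a$:

```python
import numpy as np

# The Pauli spin matrices:
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

comm = lambda A, B: A @ B - B @ A
# Cyclic commutation relations [s_j, s_k] = 2i s_l:
print(np.allclose(comm(s1, s2), 2j * s3),
      np.allclose(comm(s2, s3), 2j * s1),
      np.allclose(comm(s3, s1), 2j * s2))    # True True True

a = np.array([1.0, 2.0, 2.0]) / 3.0          # a unit vector in R^3
Sa = 0.5 * (a[0] * s1 + a[1] * s2 + a[2] * s3)
print(np.allclose(Sa, Sa.conj().T))          # S_a is self-adjoint: True
print(np.linalg.eigvalsh(Sa))                # eigenvalues -1/2 and 1/2
```

The eigenvalues $\pm1/2$ hold for every unit vector $a$, since $S_a^2=\tfrac14$ and $\tr S_a=0$.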
Prove that for all $t\in\R$: $\exp(-itS_a)\in\SU(2)$. Suggested solution.
Remember: for any $x\in\C^2$ the function $\psi:t\mapsto\exp(-itS_a)x$ is the solution to the time dependent Schrödinger equation $\psi^\prime(t)=-iS_a\psi(t)$, $\psi(0)=x$. In math the operators $\s_1/2i,\s_2/2i,\s_3/2i$ are called generators of the Lie-algebra $\su(2)$ of $\SU(2)$, cf. chapter. For the time being this means that $\exp:\su(2)\rar\SU(2)$ and every element of $\su(2)$ can be written as a (unique) linear combination of these operators and since $$ -itS_a=t\sum_j a_j\tfrac1{2i}\s_j $$ the operators $iS_a$ are elements of $\su(2)$.
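Since $(a_1\s_1+a_2\s_2+a_3\s_3)^2=1$ for a unit vector $a$, the exponential has the closed form $\exp(-itS_a)=\cos(t/2)-i\sin(t/2)(a_1\s_1+a_2\s_2+a_3\s_3)$; the following sketch (with an arbitrarily chosen unit vector) checks that these matrices lie in $\SU(2)$ and form a one-parameter group:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

a = np.array([2.0, 3.0, 6.0]) / 7.0              # a unit vector
a_sigma = a[0] * s1 + a[1] * s2 + a[2] * s3      # (a.sigma)^2 = I since |a| = 1

def U(t):
    """exp(-it S_a) in closed form, using S_a = (a.sigma)/2 and (a.sigma)^2 = I."""
    return np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * a_sigma

for t in (0.3, 1.0, 2.5):
    u = U(t)
    print(np.allclose(u @ u.conj().T, I2),       # unitary
          np.isclose(np.linalg.det(u), 1.0))     # determinant 1
print(np.allclose(U(0.3) @ U(1.0), U(1.3)))      # one-parameter group: True
```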
Compute the eigen-values and eigen-vectors of $S_a$ for $a=(\cos\vp\sin\theta,\sin\vp\sin\theta,\cos\theta)\in S^2$. Suggested solution.
For a spin $l/2$ particle we will see that the angular momentum operator in any direction admits the eigen-values: $l/2,l/2-1,l/2-2,\ldots,-l/2$ (cf. exam, exam and the subsequent remarks). We'll take this up again in the chapter on matrix groups!
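The eigen-value pattern can already be illustrated numerically; the sketch below builds the spin-$l/2$ operators via the standard ladder-operator construction (anticipating the chapter on matrix groups) and checks the eigen-values of the angular momentum operator in an arbitrarily chosen direction:

```python
import numpy as np

def spin_ops(l):
    """Angular momentum operators of a spin-l/2 particle on C^{l+1},
    built via the standard ladder-operator construction."""
    j = l / 2
    m = np.arange(j, -j - 1, -1)                    # j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    c = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))  # matrix elements of J_+
    Jp = np.diag(c, 1).astype(complex)              # raising operator
    Jx = (Jp + Jp.T) / 2
    Jy = (Jp - Jp.T) / (2 * 1j)
    return Jx, Jy, Jz

Jx, Jy, Jz = spin_ops(3)                            # a spin 3/2 particle
print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))      # su(2) relation: True

a = np.array([2.0, 3.0, 6.0]) / 7.0                 # any unit direction
Ja = a[0] * Jx + a[1] * Jy + a[2] * Jz
print(np.linalg.eigvalsh(Ja).round(8))              # -3/2, -1/2, 1/2, 3/2
```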

Further references

M. Robinson: Symmetry and the Standard Model: Mathematics and Particle Physics; A. Flournoy: Particle Physics.

Last modified: Mon Oct 21 16:17:50 CEST 2024