
What should you be acquainted with?

1. Linear algebra, in particular inner product spaces both over the real and the complex numbers.
2. Basics in differential equations, in particular linear differential equations.
3. Very basics in group theory.
4. Very basics of differential geometry.

An elementary introduction and loads of additional information can be found in Brian Hall, Lie Groups, Lie Algebras, and Representations, or John Stillwell, Naive Lie Theory. See also A. Henriques, Lie Algebras and their Representations, and A. Baker, Matrix Groups.

Acknowledgments

I'm especially indebted to Thomas Speckhofer for his accurate revision of this chapter.

Matrix Lie-Groups

Subgroups of $\Gl(n,\bK)$

In this section we are going to discuss some matrix subgroups, which are in general not finite but carry an additional structure: real differentiability. First let us fix some notation. By $\Ma(n,\bK)$ we denote the algebra of $n\times n$-matrices with entries in a field $\bK$ - in this chapter we will deal exclusively with the cases $\bK=\R$ or $\bK=\C$. The set $$ \Gl(n,\bK)\colon=\{A\in\Ma(n,\bK):\det A\neq0\} $$ with matrix multiplication is the so called general linear group over the field $\bK$. In particular we notice the special linear, the orthogonal and the special orthogonal group: \begin{eqnarray*} \Sl(n,\bK)&\colon=&\{A\in\Gl(n,\bK):\det A=1\},\\ \OO(n)&\colon=&\{A\in\Gl(n,\R):A^tA=1\},\\ \SO(n)&\colon=&\OO(n)\cap\Sl(n,\R)~. \end{eqnarray*} The last two groups are subgroups of $\Gl(n,\R)$. Replacing the real with the complex numbers we similarly get the unitary and the special unitary group: \begin{eqnarray*} \UU(n)&\colon=&\{A\in\Gl(n,\C):\bar A^tA=1\},\\ \SU(n)&\colon=&\{A\in\UU(n):\det A=1\}, \end{eqnarray*} where as usual $A^t$ denotes the transpose and $\bar A$ the complex conjugate of the matrix $A$. Also, the $n$-dimensional torus $\TT^n$ is isomorphic to a commutative subgroup of $\UU(n)$: it's the set of all diagonal matrices $diag\{z_1,\ldots,z_n\}$, $z_1,\ldots,z_n\in S^1$.

All these groups admit the standard representation: every $g=(g_{jk})\in G\sbe\Gl(n,\bK)$ operates on $\bK^n$ by mapping $(x_1,\ldots,x_n)\in\bK^n$ to the vector $(y_1,\ldots,y_n)\in\bK^n$, where $$ y_j=\sum_{k=1}^n g_{jk}x_k \quad\mbox{or more concisely}\quad y=g\cdot x, $$ where $x$ and $y$ are interpreted as $n\times1$ matrices, i.e. column-vectors, and $\cdot$ is matrix multiplication.

We know that every representation of a finite group can be decomposed into irreducibles and we will see that this also holds for compact groups; however, it's no longer true for arbitrary groups, even for the additive group $(\R,+)$ this doesn't hold! Take e.g. the following representation $A:\R\rar\Gl(2,\R)$ of the additive group $\R$: $$ A(a)\colon=\left(\begin{array}{cc} 1&a\\ 0&1 \end{array}\right),\quad\mbox{then}\quad A(a)A(b)=A(a+b), $$ so it's indeed a representation. Obviously $\R e_1$ is an invariant subspace of this representation, but for every vector $x\colon=ue_1+ve_2$, $v\neq0$, the vectors $A(a)x=(u+av)e_1+ve_2$, $a\in\R$, generate the whole space $\R^2$. So $\R^2$ cannot be decomposed into irreducible subspaces.
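This failure of complete reducibility is easy to check by machine as well; the following snippet (Python with numpy - a sketch for illustration, not part of the notes) verifies the homomorphism property $A(a)A(b)=A(a+b)$, the invariance of $\R e_1$ and the fact that the orbit of any vector off that line spans $\R^2$:

```python
# Sanity checks for the representation A of (R,+); numpy only.
import numpy as np

def A(a):
    return np.array([[1.0, a],
                     [0.0, 1.0]])

a, b = 0.7, -2.3
assert np.allclose(A(a) @ A(b), A(a + b))       # A(a)A(b) = A(a+b)

e1 = np.array([1.0, 0.0])
assert np.allclose(A(a) @ e1, e1)               # R*e1 is invariant

v = np.array([1.0, 1.0])                        # v = e1 + e2, v not in R*e1
orbit = np.stack([A(t) @ v for t in (0.0, 1.0)])
assert np.linalg.matrix_rank(orbit) == 2        # the orbit of v spans R^2
```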

Prove that the set of all matrices $A\in\Gl(3,\R)$ of the form $$ \left(\begin{array}{ccc} 1&a&c\\ 0&1&b\\ 0&0&1 \end{array}\right) $$ is a non commutative subgroup of $\Gl(3,\R)$, it's called the Heisenberg group. Cf. wikipedia.
Prove that the standard representation of the Heisenberg group is not decomposable into irreducible representations.
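For concreteness, multiplying two such matrices shows that the parameters compose as $(a,b,c)\cdot(a',b',c')=(a+a',b+b',c+c'+ab')$; the following numpy sketch (not part of the notes) checks this group law, the formula for inverses and noncommutativity:

```python
# The Heisenberg group: group law, inverses and noncommutativity.
import numpy as np

def H(a, b, c):
    return np.array([[1.0, a, c],
                     [0.0, 1.0, b],
                     [0.0, 0.0, 1.0]])

g, h = H(1, 2, 3), H(4, 5, 6)
assert np.allclose(g @ h, H(1 + 4, 2 + 5, 3 + 6 + 1 * 5))   # (a,b,c)(a',b',c')
assert np.allclose(np.linalg.inv(g), H(-1, -2, 1 * 2 - 3))  # inverse: (-a,-b,ab-c)
assert not np.allclose(g @ h, h @ g)                        # noncommutative
```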

Tangent spaces

All of the matrix groups we'll be considering are real submanifolds
of some space $\R^{n+k}$. Thus we need a handy definition of a submanifold: A subset $M$ of $\R^{n+k}$ is called an $n$-dimensional submanifold of $\R^{n+k}$ if for every point $x\in M$ there is a smooth function $F:U\rar\R^k$ defined on an open neighborhood $U$ of $x$ in $\R^{n+k}$ such that $M\cap U=[F=0]$ and $DF(x)$ is onto. Most if not all of our submanifolds will actually be of the form $[F=0]$, i.e. there will only be one function $F$. The kernel $\ker DF(x)$ is called the tangent space of $M$ at $x$, denoted by $T_xM$; it's evidently a real vector-space. $T_xM$ coincides with the set of derivatives of smooth curves, i.e. $$ T_xM=\{c^\prime(0):c:(-\d,\d)\rar M\mbox{ is smooth and }c(0)=x\}~. $$ The right hand side is a subspace of the left hand side by the chain rule, but as for the converse we need some form of the implicit function theorem, which implies that there is some smooth function $H$ in a neighborhood $V$ of $0$ in $\R^n$ such that $H(0)=x$ and $\im DH(0)=\ker DF(x)$ - actually $[F=0]$ is locally the graph of $H$.

Now let $G$ be any closed subgroup of $\Gl(n,\bK)$ - we simply call it a matrix Lie-group. It's indeed a submanifold of $\Gl(n,\bK)$ (this is not at all obvious!) and thus we may put: $$ \GG\colon=T_1G\colon=\{c^\prime(0):c:(-\d,\d)\rar G, c(0)=1\}~. $$ This is the tangent space of $G$ at the identity $1$, i.e. the unit matrix, and it's generally called the Lie-algebra of $G$. $\GG$ is a real vector-space, i.e. if $t\in\R$, $X,Y\in\GG$, then $tX,X+Y\in\GG$. Moreover, we'll shortly see that it's furnished with an additional algebraic structure.
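As a quick numerical illustration (a Python/numpy sketch, not part of the notes): the rotation curve $c(t)$ in $\SO(2)$ satisfies $c(0)=1$, and its derivative $c^\prime(0)$ indeed lies in $\ker DF(1)$ for $F(A)\colon=A^tA-1$, i.e. it is skew-symmetric:

```python
# Tangent vectors as derivatives of curves: c'(0) is skew-symmetric.
import numpy as np

def c(t):  # a smooth curve in SO(2) with c(0) = identity
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

h = 1e-6
X = (c(h) - c(-h)) / (2 * h)                # central difference for c'(0)
assert np.allclose(X + X.T, 0, atol=1e-8)   # DF(1)X = X^t + X = 0
```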
If $M,N$ are two submanifolds of e.g. $\R^n$ and $F:M\rar N$ is smooth, i.e. $F$ is the restriction of a smooth map from an open superset of $M$ into $\R^n$ such that $F(M)\sbe N$, then it's derivative $DF(x)$ at any point $x\in M$ maps the tangent space $T_xM$ of $M$ at $x$ into the tangent space $T_{F(x)}N$. Suggested solution
We set out to label the most important Lie-algebras: \begin{eqnarray*} \gl(n,\bK)=T_1\Gl(n,\bK)&=&\Ma(n,\bK)\\ \sla(n,\bK)=T_1\Sl(n,\bK)&=&\{A\in\Ma(n,\bK):\tr A=0\}\\ \oh(n)=\so(n)=T_1\SO(n)&=&\{A\in\Ma(n,\R):A^t+A=0\}\\ \uu(n)=T_1\UU(n)&=&\{A\in\Ma(n,\C):\bar A^t+A=0\}\\ \su(n)=T_1\SU(n)&=&\{A\in\Ma(n,\C):\bar A^t+A=0,\tr A=0\}~. \end{eqnarray*} We should note that $\sla(n,\C)$ is also a complex vector-space but both $\uu(n)$ and $\su(n)$ are only real vector-spaces, for $\cl{\l A}^t=\bar\l\bar A^t$ and thus for $A\in\uu(n)$: $\cl{\l A}^t+\l A$ need not vanish if $\Im\l\neq0$!
Being Lie-algebras, not merely vector-spaces, these spaces carry an additional algebraic structure: it's not matrix multiplication, it's the Lie-bracket $[A,B]\colon=AB-BA$, cf. lemma. Generally, an algebra will be a vector-space $A$ with an additional binary operation $(x,y)\mapsto xy$, which is assumed to be bi-linear; in our case this operation is the Lie-bracket. So be careful, the Lie-algebra $\Ma(n,\bK)$ is not the same as the algebra $\Ma(n,\bK)$ with matrix multiplication!
Verify that for all spaces $\GG\in\{\sla(n,\bK),\so(n),\uu(n),\su(n)\}$, ${\cal G}$ is closed under the Lie-bracket, i.e. if $A,B\in\GG$ then $[A,B]\in\GG$.
Find a real basis of $\uu(n)$ and $\su(n)$, respectively, and prove that their (real) dimension is $n^2$ and $n^2-1$, respectively.
Let $E^{jk}$ denote the standard basis of $\Ma(n,\C)$, i.e. there is only one non vanishing entry: the entry in the $j$-th row and $k$-th column is $1$. For $\uu(n)$ we may take: $iE^{11},\ldots,iE^{nn}$, and if $j < k$: $E^{jk}-E^{kj}$ and $iE^{jk}+iE^{kj}$. For $\su(n)$ we take instead of the first $n$ basis vectors the vectors $i(E^{11}-E^{22}),\ldots,i(E^{n-1,n-1}-E^{nn})$.
If $e_1,\ldots$ is an orthonormal basis of a Hilbert-space, then $$ b_k\colon=\sqrt{\frac k{k+1}}\Big(\frac{e_1+\cdots+e_k}k-e_{k+1}\Big) $$ is also an orthonormal basis.
Let $D\in\Ma(n+1,\R)$ be the diagonal matrix $diag\{-1,1,\ldots,1\}$. The group $\OO(n,1)\colon=\{U\in\Gl(n+1,\R): U^tDU=D\}$ is called the Lorentz group. Show that its tangent space $T_1\OO(n,1)$ is given by $\{X\in\Ma(n+1,\R): X^t=-DXD\}$.
Let $1\in\Ma(n,\bK)$ be the identity matrix and $J\in\Ma(2n,\bK)$ the block matrix $$ \left(\begin{array}{cc} 0&-1\\ 1&0 \end{array}\right)~. $$ The group $\Sp(n,\bK)\colon=\{S\in\Gl(2n,\bK): S^tJS=J\}$ is called the symplectic group. Prove that its tangent space $T_1\Sp(n,\bK)$ is given by $\{X\in\Ma(2n,\bK): X^t=JXJ\}$. The group $\Sp(n)\colon=\Sp(n,\C)\cap\UU(2n)$ is called the compact symplectic group and its tangent space is denoted by $\spa(n)=\{X\in\Ma(2n,\C): X^t=JXJ,\bar X^t+X=0\}$.
Identify the tangent space of the Heisenberg group.
Prove that the space $T_1\OO(n,1)$ is closed under Lie-brackets.
Since $D^2=1$, we get: $[A,B]^t=B^tA^t-A^tB^t=DBDDAD-DADDBD=-D[A,B]D$.
Identify the tangent space of the torus $\TT^{n-1}$ as a subspace of $\SU(n)$.
$\TT^{n-1}$ is the set of all diagonal matrices $diag\{e^{it_1},\ldots,e^{it_{n-1}},e^{-i(t_1+\cdots+t_{n-1})}\}$ and thus its tangent space is the subspace of all traceless diagonal matrices $diag\{iu_1,\ldots,iu_n\}$, i.e. $u_1,\ldots,u_n\in\R$ such that $\sum u_j=0$.
Prove that $\{A\in\Gl(n):\det A\in\Q\sm\{0\}\}$ is not a closed subgroup of $\Gl(n)$.

Left invariant vector fields

As in the case of a submanifold we need a working definition of vector fields
on submanifolds of $\R^{n+k}$: Given a submanifold $M$ of $\R^{n+k}$ a vector field $X$ on $M$ is a smooth mapping $m\mapsto X_m$ defined in a neighborhood of $M$ with values in $\R^{n+k}$, such that for all $m\in M$: $X_m\in T_mM$. The flow of $X$ is a smooth map $\theta:{\cal D}\rar M$ defined on an open subset ${\cal D}$ of $\R\times M$ such that $\theta(0,m)=m$ and $\pa_t\theta(t,m)=X_{\theta(t,m)}$. By the well known Picard-Lindelöf Theorem from ODE the flow exists and is uniquely determined by $X$. If ${\cal D}=\R\times M$, then the vector field $X$ is said to be complete. Here we are only concerned with the special case where $M$ is a matrix Lie-group $G$.

So we set out to discuss the simplest case - the group $\Gl(n,\R)$, which is an open subset of $\Ma(n,\R)=\R^{n^2}$. For any $X\in\Ma(n,\R)$ we define a vector field $X_A\colon=AX$ on $\Gl(n,\R)$, i.e. a smooth mapping $A\mapsto X_A$ from $\Gl(n,\R)$ into $\Ma(n,\R)$. $X_A$ is uniquely defined by the following condition: if $L_A$ denotes left translation, i.e.: $L_A(B)=AB$, then $X_A=DL_A(1)X=AX$ - this is called left invariance of the vector field. Similarly $R_A(B)\colon=BA$ is right translation! The flow $\theta(t,A)$ of the vector field $X_A$ is defined by $\theta(0,A)=A$ and $\pa_t\theta(t,A)=X_{\theta(t,A)}$, i.e. $\theta_t(A)\colon=\theta(t,A)$ is the solution to the initial value problem $$ \ttd t\theta_t(A)=X_{\theta_t(A)}\colon=\theta_t(A)X,\quad \theta_0(A)=A~. $$ This is a linear ODE whose solution can be calculated easily: $\theta(t,A)=A\exp(tX)$, for $$ \ttd tA\exp(tX) =A\exp(tX)X =DL_{A\exp(tX)}(1)X =X_{A\exp(tX)}~. $$ In particular: left invariant vector fields are complete.

The general case of an arbitrary matrix Lie-group $G$ with Lie-algebra $\GG$ turns out to be almost obvious: Since for $X\in\GG$ the restriction of the left invariant vector field $A\mapsto X_A=DL_A(1)X=AX$ to $G$ is a vector field on $G$, i.e. $X_A\in T_AG\colon=L_A(T_1G)$, we have: $$ X\in\GG\quad\Rar\quad\exp(tX)\in G, $$ $\exp:\GG\rar G$ is called the exponential map of the Lie-algebra $\GG$ of $G$ into $G$, cf. e.g. wikipedia. Thus for all matrix Lie-groups $G$ the exponential map $\exp:\GG\rar G$ is just the restriction of the exponential map $\exp:\Ma(n)\rar\Gl(n)$.
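Here is a small numerical check of this computation (a numpy/scipy sketch, not part of the notes): we verify by central differences that $\theta_t(A)=A\exp(tX)$ solves the initial value problem above; scipy.linalg.expm provides the matrix exponential.

```python
# The flow of the left invariant vector field X_A = AX is theta_t(A) = A exp(tX).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
A = expm(rng.standard_normal((3, 3)))    # some invertible starting point

def theta(t):
    return A @ expm(t * X)

t, h = 0.4, 1e-6
dtheta = (theta(t + h) - theta(t - h)) / (2 * h)   # numerical d/dt theta_t(A)
assert np.allclose(dtheta, theta(t) @ X, atol=1e-4)
```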
Verify by series expansion of $\exp(tX)$ that $X\in\GG$ implies: $\exp(tX)\in G$, for $G=\OO(n)$ and $G=\UU(n)$.
Thus for $X\in\GG$ the mapping $t\mapsto\exp(tX)$ is a smooth curve in $G$. Since every $X\in\GG$ determines a unique left invariant vector field $X_A$ on $G$ satisfying $X_1=X$, left invariant vector fields may be identified with elements in the Lie-algebra $\GG$.
Finally $\exp:\GG\rar G$ is a local diffeomorphism at $0\in\GG$, because its differential $D\exp(0):\GG\rar\GG$ at $0$ is given by $D\exp(0)X=\ttdl t0\exp(tX)=X$, i.e. $D\exp(0)=id$ and thus the assertion follows from the inverse function theorem (for manifolds), cf. e.g.
wikipedia. In general the exponential map $\exp:\GG\rar G$ is not onto, but there are notable cases where $\exp:\GG\rar G$ is onto: 1. if $G=\Gl(n,\C)$ and 2. if $G$ is connected and compact. The discussion following exam anticipated that $\exp:\so(3)\rar\SO(3)$ is onto.
A manifold $M$ is called connected, if for all $x,y\in M$ there is a continuous function $c:[0,1]\rar M$ such that $c(0)=x$ and $c(1)=y$. In topology this is usually called path-connected.
From topology we know that a manifold $M$ is connected iff it cannot be decomposed into two non-empty disjoint open subsets.
1. $\Sl(2,\R)$ is connected. 2. Suppose $\tr X=0$ and put $D=-\det X$, then $\exp(X)=\cosh(\sqrt D)1+X\sinh(\sqrt D)/\sqrt D$. 3. Show that the exponential mapping $\exp:\sla(2,\R)\rar\Sl(2,\R)$ is not onto.
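A plausibility check for the formula in 2. (a Python sketch assuming numpy/scipy; for $D<0$ the complex square root makes sense of $\cosh(\sqrt D)$ and $\sinh(\sqrt D)/\sqrt D$):

```python
# exp on sl(2,R): for tr X = 0 and D = -det X we have X^2 = D*1, hence
# exp(X) = cosh(sqrt(D)) 1 + sinh(sqrt(D))/sqrt(D) X.  Compare with scipy.
import cmath
import numpy as np
from scipy.linalg import expm

def exp_sl2(X):
    D = -np.linalg.det(X)
    s = cmath.sqrt(D)
    coef = cmath.sinh(s) / s if s != 0 else 1.0
    return (cmath.cosh(s) * np.eye(2) + coef * X).real

for X in (np.array([[1.0, 2.0], [3.0, -1.0]]),    # D > 0
          np.array([[0.0, -2.0], [2.0, 0.0]])):   # D < 0: a rotation
    assert np.allclose(exp_sl2(X), expm(X))
```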
Show that the exponential mapping $\exp:\sla(2,\C)\rar\Sl(2,\C)$ is not onto.
Show by means of the spectral theorem that the exponential map $\exp:\su(n)\rar\SU(n)$ is onto and therefore $\SU(n)$ is connected.
Show by means of the spectral theorem that the exponential map $\exp:\so(n)\rar\SO(n)$ is onto and therefore $\SO(n)$ is connected.
Prove that for every symmetric matrix $A\in\Ma(n,\R)$ the matrix $\exp(A)$ is strictly positive definite.
Verify the following assertions for $n=2$: 1. $\lim_{t\to0}(\det(1+tB)-1)/t=\tr B$ and conclude that for all $A\in\Gl(n,\bK)$: $$ D\det(A)B=\det(A)\tr(A^{-1}B)~. $$ 2. Find the formula for $A\notin\Gl(n,\bK)$ using the adjugate matrix $A^{ad}$ (cf. wikipedia).
Liouville's formula is well known in ODE:
For all $X\in\Ma(n,\bK)$ we have: $$ \det\exp(X)=e^{\tr X}~. $$
$\proof$ Define a smooth function $f(t)\colon=\det\exp(tX)$ and use $D\det(A)B=\det(A)\tr(A^{-1}B)$, then by the chain rule: $$ f^\prime(t) =D\det(\exp(tX))e^{tX}X =\det(\exp(tX))\tr(e^{-tX}e^{tX}X) =f(t)\tr X $$ and by uniqueness of solutions to linear ODEs: $f(t)=f(0)e^{t\tr X}$. $\eofproof$
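A one-line numerical confirmation (numpy/scipy sketch, not part of the notes):

```python
# Liouville's formula: det(exp X) = e^{tr X}.
import numpy as np
from scipy.linalg import expm

X = np.random.default_rng(1).standard_normal((4, 4))
assert np.isclose(np.linalg.det(expm(X)), np.exp(np.trace(X)))
```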
Prove using Liouville's formula that $X\in\GG$ implies: $\exp(tX)\in G$, for $G=\Sl(n,\R)$ and $G=\Sl(n,\C)$.
Suppose we are given any representation $\Psi:G\rar\Gl(E)$ of $G$, then we get for every $X\in T_1G$ a one parameter group (cf. wikipedia): $t\mapsto\Psi(\exp(tX))$, whose generator is a linear operator defined on a subspace of $E$ by: $$ x\mapsto\ftdl t0\Psi(\exp(tX))x =\lim_{t\to0}\frac{\Psi(\exp(tX))x-x}t~; $$ the set of all $x\in E$ for which this limit exists comprises a subspace.
Verify that the set of all $x\in E$ for which the above limit exists comprises a subspace.
If $\Psi$ is differentiable, then, of course, this limit exists for all $x\in E$ and is given by $D\Psi(1)(X)x$. As for matrix Lie-groups we will tacitly assume all representations to be smooth! So let $\Psi:G\rar\Gl(E)$ be such a representation and assume that $E$ is finite dimensional; since $\Gl(E)$ may also be considered as a matrix Lie-group, $\Psi$ is a homomorphism of matrix Lie-groups. In the following section we compute $\psi(X)\colon=D\Psi(1)X$ for a particular representation. In any case, we are aiming at examining the simpler linear map $\psi$ instead of the representation $\Psi$, which in general is not linear!

The adjoint representation

Let $\Psi:G\rar\Gl(E)$ be a smooth representation of $G$ in a finite dimensional space $E$, then for all $X\in\GG$: $t\mapsto\Psi(\exp(tX))$ is a one parameter group with generator $$ \ttdl t0\Psi(\exp(tX)) =D\Psi(1)X=\colon\psi(X)\in\Hom(E) $$ and thus $$ \ttd t\Psi(\exp(tX)) =\Psi(\exp(tX))\psi(X)~. $$ Hence by uniqueness of solutions to linear ODEs: \begin{equation}\label{adreq1}\tag{ADR1} \forall X\in\GG:\quad \Psi(\exp(tX)) =\exp(t\psi(X)), \end{equation} where the exponential on the right is just the exponential of the endomorphism $t\psi(X)\in\Hom(E)$. In this section we study a basic representation: the adjoint representation $\Ad:G\rar\Gl(\GG)$, it's defined as follows: for $g\in G$ and $X\in\GG$: \begin{equation}\label{adreq2}\tag{ADR2} \Ad(g)X\colon=gXg^{-1}~. \end{equation} $\Ad:G\rar\Gl(\GG)$ is indeed a representation of $G$: first of all we have to verify that $\Ad(g)$ maps $\GG$ into itself; this is because the inner automorphism $i_g(h)\colon=ghg^{-1}$ maps $G$ into $G$, $i_g(1)=1$ and $\Ad(g)$ is just the derivative $Di_g(1)$, so $\Ad(g):T_1G\rar T_1G$; secondly $$ \Ad(gh)X=(gh)X(gh)^{-1}=ghXh^{-1}g^{-1}=\Ad(g)(hXh^{-1})=\Ad(g)\Ad(h)X~. $$
Compute the adjoint representation of $\SU(2)$ explicitly. Suggested solution.
Next we are going to compute the generator of the one parameter group: $t\mapsto\Ad(\exp(tX))$. It will turn out that it's the operator $\ad(X)\in\Hom(\GG)$ defined by \begin{equation}\label{adreq3}\tag{ADR3} \ad(X)Y=[X,Y]\colon=XY-YX, \end{equation} i.e. $\ad(X)Y$ is just the commutator of $X$ and $Y$. To establish this, we need to calculate $$ \ad(X) \colon=D\Ad(1)X =\ftdl t0\Ad(\exp(tX)) $$
1. For all $X,Y\in\GG$ we have $[X,Y]\in\GG$, \begin{eqnarray*} [X,Y]&=&\pa_t|_{t=0}\pa_s|_{s=0}\exp(tX)\exp(sY)\exp(-tX) \quad\mbox{and}\\ [X,Y]&=&\ftdl t0\Ad(\exp(tX))Y, \end{eqnarray*} i.e. $Y\mapsto[X,Y]$ is indeed the generator of the one parameter group $t\mapsto\Ad(\exp(tX))$. 2. Let $\theta_t(g)=g\exp(tX)$ be the flow of the left invariant vector field $X_g$, then $$ [X,Y] =\ftdl t0D\theta_{-t}(1)(Y_{\theta_t(1)}), $$ Geometrically: Starting at $1\in G$ follow the flow of $X$ for some short time $t$ - you arrive at $\theta_t(1)$; determine the vector field $Y$ at this point and see which vector in $T_1G$ you get when you apply the inverse of the tangent map $D\theta_t(1):T_1G\rar T_{\theta_t(1)}G$. $[X,Y]$ describes the rate of change of this vector.
$\proof$ 1. For $X,Y\in\GG$ we clearly have: $\exp(tX)\exp(sY)\exp(-tX)\in G$ and since $\ttdl s0\exp(sY)=Y$ we get $\exp(tX)Y\exp(-tX)\in\GG$, which in turn implies that $$ [X,Y]=XY-YX=\ftdl t0\exp(tX)Y\exp(-tX)\in\GG $$ in particular $\GG$ is closed under the Lie-bracket and $$ [X,Y] =\ftdl t0\exp(tX)Y\exp(-tX) =\ftdl t0\Ad(\exp(tX))Y =\ad(X)Y~. $$ 2. Since for all $Z\in\GG$: $D\theta_t(1)(Z)=Z\exp(tX)$ and $Y_g=gY$, we infer that \begin{eqnarray*} [X,Y] &=&\ftdl t0\exp(tX)Y\exp(-tX)\\ &=&\ftdl t0D\theta_{-t}(1)(\exp(tX)Y) =\ftdl t0D\theta_{-t}(1)(Y_{\theta_t(1)})~. \end{eqnarray*} $\eofproof$
Thus for $\Psi=\Ad$ we have $\psi=\ad$ and therefore by \eqref{adreq1}: \begin{equation}\label{adreq4}\tag{ADR4} \Ad(\exp(tX))=e^{t\ad(X)}~. \end{equation} It might be a good exercise to repeat the argument at the beginning of this section in the particular case of the adjoint representation: For $X,Y\in\GG$ let $a(t)$ be the curve $a(t)\colon=\Ad(\exp(tX))Y$. Since $a(0)=Y$ and \begin{eqnarray*} a^\prime(t) &=&\ftd t\exp(tX)Y\exp(-tX)\\ &=&X\exp(tX)Y\exp(-tX)-\exp(tX)YX\exp(-tX)\\ &=&Xa(t)-a(t)X =\ad(X)a(t) \end{eqnarray*} we conclude that $a(t)$ is the solution of the ODE $a^\prime(t)=\ad(X)a(t)$ satisfying $a(0)=Y$; but this solution is simply $t\mapsto e^{t\ad(X)}Y$, i.e. we recover \eqref{adreq4}.
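Equation \eqref{adreq4} can also be tested numerically: under column-stacking $Y\mapsto\mbox{vec}(Y)$ the operator $\ad(X)$ becomes the $n^2\times n^2$ matrix $1\otimes X-X^t\otimes1$ (Kronecker products). A numpy/scipy sketch, not part of the notes:

```python
# Check Ad(exp(tX)) = e^{t ad(X)} on a random example.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n, t = 3, 0.7
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))

adX = np.kron(np.eye(n), X) - np.kron(X.T, np.eye(n))  # ad(X) acting on vec(Y)
lhs = expm(t * X) @ Y @ expm(-t * X)                   # Ad(exp(tX))Y
rhs = (expm(t * adX) @ Y.flatten(order="F")).reshape(n, n, order="F")
assert np.allclose(lhs, rhs)
```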
Let's assume for example that $\ad(X)^2Y=0$, i.e. $[X,[X,Y]]=0$, then \eqref{adreq4} implies: $$ e^{tX}Ye^{-tX}=\Ad(e^{tX})Y=e^{t\ad(X)}Y=Y+t\ad(X)Y~. $$ As a byproduct of the proof of
lemma we got closedness of $\GG$ under the Lie-bracket, i.e. the Lie-bracket maps $\GG\times\GG$ into $\GG$ and is thus just another binary operation on $\GG$; since the Lie-bracket is bi-linear, the vector-space $(\GG,[.,.])$ is an algebra. It's this algebra that will be in the focus of our study!
Verify that for all $U\in\Gl(n,\C)$ the map $v:\Ma(n,\C)\rar\Ma(n,\C)$, $X\mapsto UXU^{-1}$ is an algebra-isomorphism, i.e. $v([X,Y])=[v(X),v(Y)]$.
In the following example we utilize the (intuitively clear) fact that the mapping that sends $X\in\Ma(n,\C)$ to its eigen-values $\l_1,\ldots,\l_n$ is continuous; formally the range space is the orbit space $\C^n/S(n)$, because there is no natural order of the eigen-values. However locally you may think of some continuous way to order eigen-values and then you can think of a continuous map into $\C^n$. Anyhow, if you don't feel comfortable with this sort of argument you should fall back on Jordan's normal form of $X$. Let $E^{jk}$ be the standard basis of $\Ma(n,\C)$, i.e. $E_{lm}^{jk}=\d_{lj}\d_{mk}$, then \begin{eqnarray*} [X,E^{jk}] &=&\sum_{l,m}((XE^{jk})_{lm}-(E^{jk}X)_{lm})E^{lm}\\ &=&\sum_{l,m,p}(X_{lp}E_{pm}^{jk}-E_{lp}^{jk}X_{pm})E^{lm} =\sum_{l,m,p}(X_{lp}\d_{pj}\d_{mk}-\d_{lj}\d_{pk}X_{pm})E^{lm}\\ &=&\sum_{l,m}(X_{lj}\d_{mk}-\d_{lj}X_{km})E^{lm} =\sum_lX_{lj}E^{lk}-\sum_mX_{km}E^{jm} =\sum_r(X_{rj}E^{rk}-X_{kr}E^{jr})~. \end{eqnarray*}
Suppose $X\in\Ma(n,\C)$ has eigen-values $\l_1,\ldots,\l_n$. Prove that $\ad(X):\Ma(n,\C)\rar\Ma(n,\C)$ has eigen-values $\l_{jk}\colon=\l_j-\l_k$.
\begin{eqnarray*} [X,E^{jk}]_{lm} &=&\sum_r(X_{rj}E_{lm}^{rk}-X_{kr}E_{lm}^{jr})\\ &=&\sum_r(X_{rj}\d_{lr}\d_{mk}-X_{kr}\d_{lj}\d_{mr}) =X_{lj}\d_{mk}-X_{km}\d_{lj} =\l_j\d_{lj}\d_{mk}-\l_k\d_{lj}\d_{km} =(\l_j-\l_k)E_{lm}^{jk}~. \end{eqnarray*} In particular $\ad(X)$ is an isomorphism iff $X$ has pairwise distinct eigen-values.
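With the same matrix representation of $\ad(X)$ as in the sketch above one can confirm this numerically (numpy sketch, not part of the notes):

```python
# Every eigen-value of ad(X) is a difference of two eigen-values of X.
import numpy as np

rng = np.random.default_rng(3)
n = 3
X = rng.standard_normal((n, n))
lam = np.linalg.eigvals(X)
adX = np.kron(np.eye(n), X) - np.kron(X.T, np.eye(n))

for mu in np.linalg.eigvals(adX):
    assert min(abs(mu - (lj - lk)) for lj in lam for lk in lam) < 1e-8
```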
For all $X,Y,Z\in\Hom(E)$ we have $[XY,Z]=X[Y,Z]+[X,Z]Y$.
However the mapping $(X,Y)\mapsto[X,Y]$ is not associative, instead it satisfies Jacobi’s-identity: $$ \forall X,Y,Z\in\Ma(n,\C):\quad [X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0 $$ or equivalently $$ \ad([X,Y])=[\ad(X),\ad(Y)]\colon=\ad(X)\ad(Y)-\ad(Y)\ad(X)~. $$
Verify that the above identities are in fact equivalent, i.e. Jacobi's identity holds if and only if $\ad([X,Y])=[\ad(X),\ad(Y)]$.
Endowing $\Hom(\GG)$ with the standard Lie-bracket $(A,B)\mapsto AB-BA$, this simply says that $\ad:\GG\rar\Hom(\GG)$ is an algebra-homomorphism. More generally, we will see below - cf. theorem - that for every representation $\Psi:G\rar\Gl(E)$ the map $\psi:\GG\rar\Hom(E)$, $\psi(X)\colon=D\Psi(1)X$, is a homomorphism of Lie-algebras - any algebra-homomorphism $\GG\rar\Hom(E)$ will be called a representation of the Lie-algebra $\GG$. Since the differential $\psi$ is linear, it's much easier to handle than the representation $\Psi$ itself. For example: irreducibility; can we read off irreducibility from $\psi$? Yes, for suppose $F$ is an invariant subspace of $\Psi:G\rar\Gl(E)$, then for all $X\in\GG$ and all $x\in F$: $$ \psi(X)x=\ftdl t0 \Psi(\exp(tX))x\in F, $$ i.e. for all $X\in\GG$: $\psi(X)(F)\sbe F$. We say that $\psi$ is irreducible, if the only subspaces of $E$ invariant under all mappings $\psi(X)$, $X\in\GG$, are $\{0\}$ and the whole space $E$. Conversely, if $F$ is invariant under all mappings $\psi(X)$, then for all $x\in F$ and all $X\in\GG$: $$ \Psi(\exp(tX))x=\exp(t\psi(X))x\in F~. $$ Since $\exp$ is a local diffeomorphism at $0$ and $G$ is connected, it follows by corollary that $F$ is invariant under all mappings $\Psi(g)$, $g\in G$. Also two representations $\Psi$ and $\Phi$ are equivalent if for some isomorphism $J$: $\Phi(g)=J\Psi(g)J^{-1}$, which implies $\vp(X)=J\psi(X)J^{-1}$ - we say $\psi$ and $\vp$ are equivalent - and a similar argument as before proves the converse implication in case $G$ is connected.
If $F$ is an invariant subspace of the representation $\Psi:G\rar\Gl(E)$ then $F$ is an invariant subspace of $\psi$, i.e. for all $X\in\GG$: $\psi(X)F\sbe F$. The converse holds if $G$ is connected.
Finally $\Psi$ is unitary iff $\la\Psi(g)x,\Psi(g)y\ra=\la x,y\ra$; taking derivatives at $g=1$ we get $$ \la\psi(X)x,y\ra+\la x,\psi(X)y\ra=0, $$ i.e. $\psi(X)$ is skew-symmetric. Again, if $G$ is connected, then the converse also holds. Thus we have the following
A representation $\Psi:G\rar\UU(E)$ of a connected matrix Lie-group $G$ in a finite dimensional Hilbert-space $E$ is irreducible if and only if its differential $\psi:\GG\rar\Hom(E)$ is irreducible. $\Phi$ is equivalent to $\Psi$ iff their differentials are equivalent. $\Psi$ is unitary iff $\psi$ is skew-symmetric, i.e. for all $X\in\GG$: $\psi(X)^*=-\psi(X)$.
Suppose $\Psi$ is a representation of the matrix group $G$ with Lie-algebra $\GG$, i.e. $\Psi(\exp(X))=e^{\psi(X)}$. If $H\in\Hom_G(\Psi)$ is an intertwining operator (in case of a unitary representation this is equivalent to: all $\Psi(g)$, $g\in G$, are symmetries of $H$), then $$ H=\Psi(\exp(tX))H\Psi(\exp(-tX))=e^{t\psi(X)}He^{-t\psi(X)} $$ taking the derivative at $t=0$ gives: $[H,\psi(X)]=0$; conversely, if $H$ and $\psi(X)$ commute for all $X\in\GG$, then also $H$ and $\Psi(\exp(tX))$ commute and if $G$ is connected: $H\in\Hom_G(\Psi)$.

Traces of adjoint representations

Characters are traces of irreducible representations; so let us compute the trace of the adjoint representation $\Ad$ of $\Gl(n,\bK)$: let $E^{jk}$ denote the standard basis of $\Ma(n,\bK)$ and write $g^{jk}$ for the entries of $g^{-1}$; then $(gE^{jk}g^{-1})_{lm}=g_{lj}g^{km}$ and thus $$ \tr\Ad(g) =\sum_{j,k}(gE^{jk}g^{-1})_{jk} =\sum_{j,k} g_{jj}g^{kk} =\tr(g)\tr(g^{-1})~. $$
Use the euclidean product $(A,B)\mapsto\tr(AB^t)$ on the tangent spaces to compute the trace of the adjoint representation of $\Sl(n,\R)$ and $\SO(n)$.
1. Since $\sla(n)$ is orthogonal to $e\colon=id/\sqrt n$, we have in the first case: $$ \tr\Ad(g)=\tr(g)\tr(g^{-1})-\la geg^{-1},e\ra=\tr(g)\tr(g^{-1})-1~. $$ 2. The matrices $(E^{jk}-E^{kj})/\sqrt2$, $j < k$, form an orthonormal basis for $\so(n)$ and for $g\in\SO(n)$ we have: $g^{jk}=g_{kj}$: \begin{eqnarray*} \tr\Ad(g)&=&\tfrac12\sum_{j < k}\la g(E^{jk}-E^{kj})g^{-1},(E^{jk}-E^{kj})\ra\\ &=&\tfrac12\sum_{j < k} (g(E^{jk}-E^{kj})g^{-1})_{jk}-(g(E^{jk}-E^{kj})g^{-1})_{kj}\\ &=&\tfrac14\sum_{j,k}(gE^{jk}g^{-1})_{jk}-(gE^{kj}g^{-1})_{jk} -(gE^{jk}g^{-1})_{kj}+(gE^{kj}g^{-1})_{kj}\\ &=&\tfrac14\sum_{j,k}(g_{jj}g^{kk}-g_{jk}g^{jk}-g_{kj}g^{kj}+g_{kk}g^{jj})\\ &=&\tfrac12\sum_{j,k}(g_{jj}g_{kk}-g_{jk}g_{kj}) =\tfrac12\Big(\tr(g)^2-\tr(g^2)\Big)~. \end{eqnarray*}
Compute the trace of the adjoint representation of $\SU(n)$.
Prove by induction on $m\in\N$: for all $X,Y\in\Ma(n,\bK)$: $$ \ad(X)^m(Y)=\sum_{k=0}^m{m\choose k}X^kY(-X)^{m-k}~. $$
Now suppose that $I$ is a subspace of $\GG$ invariant under $\Ad$, i.e. for all $Y\in I\sbe\GG$ and all $X\in\GG$: $\ad(X)Y\in I$, i.e. $[X,Y]\in I$. That precisely means that $I$ is an ideal in $\GG$. Hence the adjoint representation $\Ad:G\rar\Gl(\GG)$ is irreducible iff $\GG$ has trivial ideals only. More generally we say that a subalgebra $I$ of an algebra $A$ is an ideal if $$ \forall x\in A:\qquad xI\sbe I\quad\mbox{and}\quad Ix\sbe I~. $$ If the algebra $A$ only admits the trivial ideals $\{0\}$ and $A$, then $A$ is called simple. Usually in case of a Lie-algebra we also demand that $\dim A\geq2$, for one dimensional Lie-algebras are commutative and thus more or less trivial.
The algebra $\Ma(n,\bK)$ with matrix multiplication is an example of a simple algebra, for if $Y\in I\sm\{0\}$, then for all $j,k=1,\ldots,n$: $$ Y_{jk}1=\sum_{p=1}^n E^{pj}YE^{kp} $$ and thus $1\in I$, i.e. $I=\Ma(n,\bK)$. The Lie-algebra $\Ma(n,\bK)$ is not simple, for it contains the ideal generated by $1$. However the Lie-algebra $\sla(n)$ is simple. The following example covers the case $n=2$, for the general case cf.
exam
The Lie-algebra $\sla(2)$ is simple.
A basis of $\sla(2)$ is given by $$ H\colon=\left(\begin{array}{ccc} 1&0\\ 0&-1 \end{array}\right), X\colon=\left(\begin{array}{ccc} 0&1\\ 0&0 \end{array}\right), Y\colon=\left(\begin{array}{ccc} 0&0\\ 1&0 \end{array}\right)~. $$ The crucial commutator relations are $$ [H,X]=2X,\quad [H,Y]=-2Y,\quad [X,Y]=H~. $$ Assume $Z=aH+bX+cY\in I\sm\{0\}$ and $c\neq0$, then $[X,Z]=-2aX+cH$ and $[X,[X,Z]]=-2cX$, i.e. $X\in I$ and thus $H=[X,Y]\in I$, it follows that $Y=-\tfrac12[Y,H]\in I$. It remains to verify that if $Z=aH+bX\in I\sm\{0\}$, then again $X,Y,H\in I$.
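The commutator relations are quickly verified by machine as well (numpy sketch, not part of the notes):

```python
# The basis H, X, Y of sl(2) and its commutator relations.
import numpy as np

H = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [0, 0]])
Y = np.array([[0, 0], [1, 0]])
br = lambda A, B: A @ B - B @ A

assert np.array_equal(br(H, X), 2 * X)
assert np.array_equal(br(H, Y), -2 * Y)
assert np.array_equal(br(X, Y), H)
```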

Some Lie-algebras

Abstract Lie-algebras

Let us summarize the properties of the Lie-bracket $[X,Y]\colon=XY-YX$ on the Lie-algebra $\GG$ of a matrix Lie-group $G$: it is bi-linear, skew-symmetric, i.e. $[X,Y]=-[Y,X]$, and it satisfies Jacobi's identity. Every vector-space $A$ endowed with a Lie-bracket structure is called a Lie-algebra
and any homomorphism $\psi:A\rar\Hom(E)$ of a Lie-algebra $A$ into the Lie-algebra $\Hom(E)$ is called a representation of $A$. If $E$ is finite dimensional the representation is said to be finite dimensional. Theorem states that every representation $\Psi:G\rar\Gl(E)$ of the matrix Lie-group $G$ induces a representation $\psi:\GG\rar\Hom(E)$ of its Lie-algebra $\GG$ defined by $$ \psi(X)x\colon=\ftdl t0\Psi(e^{tX})x~. $$
If $A$ is a Lie-algebra, then for all $X,Y\in A$: $[X,[Y,[X,Y]]]=[Y,[X,[X,Y]]]$.
A Lie-algebra $A$ is said to be commutative (abelian) if for all $x,y\in A$: $[x,y]=0$. Recall (cf. subsection) that a Lie-algebra $A$ is called simple if $\dim(A)\geq2$ and if the trivial ideals $\{0\}$ and $A$ are the only ideals in $A$.
A subspace $I$ of Lie-algebra $A$ is an ideal if for all $X\in I$ and all $Y\in A$: $[X,Y]\in I$, i.e. $\im\ad(X)\sbe I$, in particular $I$ is invariant under the adjoint representation.
If $G$ is a commutative matrix Lie-group, then its Lie-algebra is commutative.
In the finite dimensional case a Lie-algebra can be described conveniently by a set of constants: the structure constants. Suppose $e_1,\ldots,e_n$ is a vector-space basis of the Lie-algebra $A$, then for some structure constants $c_{lk}^j\in\R$ or $\C$: $$ [e_j,e_k]=\sum_l c_{lk}^je_l, $$ i.e. for all $j$ the matrix $(c_{lk}^j)_{l,k=1}^n$ is the matrix of the linear map $\ad(e_j):A\rar A$. The coefficients $c_{lk}^j$ must satisfy some relations, e.g. $c_{lj}^k=-c_{lk}^j$ and some additional relations in order to guarantee Jacobi's identity: $$ \forall i,j,k,l:\quad \sum_m(c_{mj}^ic_{lk}^m+c_{mk}^jc_{li}^m+c_{mi}^kc_{lj}^m)=0~. $$
Compute the structure constants for the Lie-algebra $\Ma(n)$ with respect to the standard basis $E^{jk}$.
We simply have to compute the commutators $[E^{rs},E^{jk}]$: \begin{eqnarray*} [E^{rs},E^{jk}]_{lm} &=&(E^{rs}E^{jk}-E^{jk}E^{rs})_{lm} =E_{lj}^{rs}\d_{mk}-\d_{lj}E_{km}^{rs}\\ &=&\d_{lr}\d_{js}\d_{mk}-\d_{lj}\d_{kr}\d_{ms} =\d_{js}E_{lm}^{rk}-\d_{rk}E_{lm}^{js} \end{eqnarray*} Thus we have $$ \forall j,k,r,s:\quad [E^{rs},E^{jk}]=\d_{js}E^{rk}-\d_{rk}E^{js}~; $$ in particular: $[E^{kj},E^{jk}]=E^{kk}-E^{jj}$ and $[E^{jj},E^{kk}]=0$.
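A brute force check of this identity over all indices (Python/numpy sketch for $n=3$, not part of the notes):

```python
# [E^{rs}, E^{jk}] = d_{js} E^{rk} - d_{rk} E^{js} for all index combinations.
import itertools
import numpy as np

n = 3
def E(j, k):
    M = np.zeros((n, n)); M[j, k] = 1.0; return M

for r, s, j, k in itertools.product(range(n), repeat=4):
    lhs = E(r, s) @ E(j, k) - E(j, k) @ E(r, s)
    rhs = (j == s) * E(r, k) - (r == k) * E(j, s)
    assert np.array_equal(lhs, rhs)
```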
Compute for any $X\in\Ma(n)$ and $j,k,l,m\in\{1,\ldots,n\}$: $[[X,E^{jk}],E^{lm}]$. Verify that the Lie-algebra $\sla(n)$ is simple.
Find all two dimensional real Lie-algebras.
Suppose $\la.,.\ra$ is a euclidean product on the Lie-algebra $A$ and $c_{lk}^j$ the structure constants with respect to an orthonormal basis $e_1,\ldots,e_n$. Find a necessary and sufficient condition on these constants such that for all $j$ the maps $x\mapsto[e_j,x]$ are skew-symmetric with respect to $\la.,.\ra$.
Suppose $E$ is a finite dimensional real vector-space and $A\in\Hom(E)$. Then there exists a euclidean product $\la.,.\ra$ on $E$ such that $A:(E,\la.,.\ra)\rar(E,\la.,.\ra)$ is self-adjoint if and only if $A$ is diagonalizable and all eigen-values are real.
If $A$ is self-adjoint then there exists a basis $e_1,\ldots,e_n$ and $a_1,\ldots,a_n\in\R$ such that $Ae_j=a_je_j$. Conversely if there exists a basis $e_1,\ldots,e_n$ and $a_1,\ldots,a_n\in\R$ such that $Ae_j=a_je_j$, then we can find a euclidean product $\la.,.\ra$ such that this basis is orthonormal and therefore $A$ is self-adjoint.
Let $\psi:A\rar\Hom(E)$ be a representation of a finite dimensional Lie-algebra $A$ and denote by $b_1,\ldots,b_n$ an ordered basis of $A$. Then any expression of the form $\psi(b_{j_1})\cdots\psi(b_{j_m})$ can be expressed as a linear combination of terms of the form $\psi(b_1)^{k_1}\cdots\psi(b_n)^{k_n}$.
$\proof$ We just explain the idea: whenever we encounter a product like $\psi(b_k)\psi(b_j)$ for some $j < k$ we use the commutation relation: $$ \psi(b_k)\psi(b_j) =\psi(b_j)\psi(b_k)+\psi([b_k,b_j]) =\psi(b_j)\psi(b_k)+\sum_lc_{lj}^k\psi(b_l) $$ and handle the term involving the sum by induction on the number of factors. $\eofproof$
Schur's lemma plays an essential role in group representation theory; here's the Lie-algebra version of this lemma: Let $\psi:A\rar\Hom(E)$, $\vp:A\rar\Hom(F)$ be finite dimensional irreducible representations of a Lie-algebra $A$. If $J:E\rar F$ is an intertwining operator, i.e. $J$ is linear and for all $X\in A$: $\vp(X)J=J\psi(X)$, then either $J=0$ or $J$ is an isomorphism, i.e. $\vp$ and $\psi$ are equivalent. Moreover, if in addition $\psi$ is a complex representation, then any intertwining operator $J:E\rar E$ is a multiple of the identity. The proof of this statement is just a copy of the proof of the group version with obvious changes, cf.
lemma.

The Lie-algebra $\R^3$

Probably the most familiar Lie-algebra is the vector-space $\R^3$ with the usual vector product: $(x,y)\mapsto x\times y$, which satisfies Jacobi's identity (cf. e.g. wikipedia): $$ x\times(y\times z)+y\times(z\times x)+z\times(x\times y)=0~. $$
Compute the structure constants for the Lie-algebra $(\R^3,\times)$.
Verify geometrically that the Lie-algebra $(\R^3,\times)$ is simple.
This Lie-algebra appears in several disguises: 1. It is isomorphic to the Lie-algebra $\so(3)=\{A\in\Ma(3,\R):\,A^t=-A\}$ - an isomorphism is given by $\R^3\rar\so(3)$, $a\mapsto l_a$, where $l_a(x)\colon=a\times x$; in particular $e_j\mapsto l_j$, where $l_j\colon=l_{e_j}$, i.e. $$ l_1\colon= \left(\begin{array}{ccc} 0&0&0\\ 0&0&-1\\ 0&1&0 \end{array}\right),\quad l_2\colon= \left(\begin{array}{ccc} 0&0&1\\ 0&0&0\\ -1&0&0 \end{array}\right),\quad l_3\colon= \left(\begin{array}{ccc} 0&-1&0\\ 1&0&0\\ 0&0&0 \end{array}\right) $$
Verify that $a\mapsto l_a$ is indeed an isomorphism of Lie-algebras.
Since $e_1\times e_2=e_3$, $e_2\times e_3=e_1$ and $e_3\times e_1=e_2$, it remains to check that $[l_1,l_2]=l_3$, $[l_2,l_3]=l_1$ and $[l_3,l_1]=l_2$, which is pretty much straightforward.
For any $a\in\R^3$ the isometry $u(t)\colon=\exp(tl_a)\in\SO(3)$ is a rotation about the axis $\R a$ through the angle $t\norm a$. For all $x\in\R^3$ the function $t\mapsto u(t)x$ is the solution to the initial value problem: $y^\prime(t)=l_ay(t)$, $y(0)=x$.
Prove that a linear map $A\in\Hom(\R^3)$ is an algebra-homomorphism of $(\R^3,\times)$ if $A\in\SO(3)$.
Prove that $l^2\colon=l_1^2+l_2^2+l_3^2\in\Hom(\R^3)$ is a multiple of the identity; in math it is known as the quadratic Casimir invariant of the Lie-algebra $\so(3)$. Beware $l^2\notin\so(3)$.
2. $\so(3)$ in turn is isomorphic to the Lie-algebra $\su(2)=\{A\in\Ma(2,\C):\,\tr A=0,\bar A^t=-A\}$; in this case an isomorphism is given by: $l_j\mapsto\tfrac1{2i}\s_j$, where $\s_1,\s_2,\s_3$ denote the so called Pauli spin matrices $$ \s_1\colon= \left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right),\quad \s_2\colon= \left(\begin{array}{cc} 0&-i\\ i&0 \end{array}\right),\quad \s_3\colon= \left(\begin{array}{cc} 1&0\\ 0&-1 \end{array}\right) $$
Verify that $\s_1^2=\s_2^2=\s_3^2=1$, $\s_1\s_2=-\s_2\s_1=i\s_3$ and $[\s_1,\s_2]=2i\s_3$. Moreover, the spin matrices are unitary and self-adjoint (hermitian).
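These identities are easily confirmed numerically (numpy sketch, not part of the notes):

```python
# Pauli matrices: involutions, hermitian, unitary, and sigma_1 sigma_2 = i sigma_3.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

for s in (s1, s2, s3):
    assert np.allclose(s @ s, I)            # sigma^2 = 1
    assert np.allclose(s.conj().T, s)       # self-adjoint (hermitian)
    assert np.allclose(s.conj().T @ s, I)   # unitary
assert np.allclose(s1 @ s2, 1j * s3)
assert np.allclose(s1 @ s2 - s2 @ s1, 2j * s3)
```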
We can make these isomorphisms unitary by endowing the corresponding spaces with the following euclidean products: $(A,B)\mapsto\tr(AB^t)/2$ on $\so(3)$ and $(A,B)\mapsto2\tr(A\bar B^t)$ on $\su(2)$. With respect to these products the bases $l_1,l_2,l_3$ of $\so(3)$ and $\s_1/2i,\s_2/2i,\s_3/2i$ of $\su(2)$ form orthonormal bases.
The Lie-algebras $\su(2)$ and $\so(3)$ are simple.

Poisson bracket in Hamiltonian mechanics

Let $M$ be an open set in $\R^{2n}$ and $C^\infty(M)$ the set of all smooth maps $f:M\rar\R$. Denoting by $(q_1,\ldots,q_n,p_1,\ldots,p_n)$ the coordinates of a point in $M$, the Poisson-bracket of $f,g\in C^\infty(M)$ is defined by $$ \{f,g\}\colon=\sum_{j=1}^n\pa_{q_j}f\pa_{p_j}g-\pa_{p_j}f\pa_{q_j}g; $$ it makes the vector-space $C^\infty(M)$ a Lie-algebra. In Hamiltonian mechanics the dynamics of a system is described as the evolution of a point $t\mapsto(q(t),p(t))=(q_1(t),\ldots,q_n(t),p_1(t),\ldots,p_n(t))$ in what is called the 'phase space' $M$; it evolves according to Hamilton's equations: $$ q_j^\prime(t)=\pa_{p_j}H(q(t),p(t)), \quad p_j^\prime(t)=-\pa_{q_j}H(q(t),p(t))~. $$ In other words: if $X^H\colon=(\pa_{p_1}H,\ldots,\pa_{p_n}H,-\pa_{q_1}H,\ldots,-\pa_{q_n}H)$ denotes the Hamiltonian vector field associated with $H$, then the flow of $X^H$ is just the evolution of the system. Now for any smooth function $F:M\rar\R$, the derivative of $t\mapsto F(q(t),p(t))$ is given by: $$ \sum_j\pa_{q_j}Fq_j^\prime+\pa_{p_j}Fp_j^\prime =\sum_j\pa_{q_j}F\pa_{p_j}H-\pa_{p_j}F\pa_{q_j}H =\{F,H\}~. $$ Thus $F$ is a first integral of the evolution, i.e. $t\mapsto F(q(t),p(t))$ is constant for every solution of Hamilton's equations, iff $\{F,H\}=0$. Moreover, if $F$ and $G$ are first integrals, then so is $\{F,G\}$, for Jacobi's identity implies $$ \{H,\{F,G\}\}=-\{F,\{G,H\}\}-\{G,\{H,F\}\}=0~. $$ Hence the set of first integrals forms a Lie-subalgebra of the Lie-algebra $(C^\infty(M),\{.,.\})$.
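The Poisson-bracket is easy to experiment with in a computer algebra system; the following sympy sketch (not part of the notes) checks that $L_{12}\colon=q_1p_2-q_2p_1$ is a first integral of the harmonic oscillator Hamiltonian and verifies Jacobi's identity on sample functions:

```python
# Poisson bracket on R^{2n} and two of its Lie-algebra properties, via sympy.
import sympy as sp

n = 3
q = sp.symbols(f"q1:{n + 1}")
p = sp.symbols(f"p1:{n + 1}")

def pb(f, g):
    return sum(sp.diff(f, q[j]) * sp.diff(g, p[j])
               - sp.diff(f, p[j]) * sp.diff(g, q[j]) for j in range(n))

H = sum(pj**2 for pj in p) / 2 + sum(qj**2 for qj in q) / 2   # harmonic oscillator
L12 = q[0] * p[1] - q[1] * p[0]
L23 = q[1] * p[2] - q[2] * p[1]

assert sp.simplify(pb(L12, H)) == 0     # L12 is a first integral
jacobi = pb(H, pb(L12, L23)) + pb(L12, pb(L23, H)) + pb(L23, pb(H, L12))
assert sp.simplify(jacobi) == 0         # Jacobi's identity
```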
For $H(q,p)=\tfrac1{2m}\sum p_j^2+V(q)$ and $j\neq k$ the functions $L_{jk}\colon=q_jp_k-q_kp_j$ are first integrals. Compute $\{L_{12},L_{23}\}$.
It's a well known fact that the Euler-Lagrange equations for a Lagrangian ${\cal L}$ are equivalent to Hamilton's equations for a certain Hamiltonian; we give a brief description of this equivalence: Suppose ${\cal L}:M\times\R^n\rar\R$ is a Lagrangian and $$ (v_1,\ldots,v_n)\mapsto(p_1,\ldots,p_n) \colon=(\pa_{v_1}{\cal L}(q,v),\ldots,\pa_{v_n}{\cal L}(q,v)) $$ is for all $q\in M$ a diffeomorphism of $\R^n$. Writing the energy function $$ {\cal H}(q,v) \colon=-{\cal L}(q,v)+\sum_{j=1}^n\pa_{v_j}{\cal L}(q,v)v_j =-{\cal L}(q,v)+\sum_{j=1}^np_jv_j $$ as a function of $q\in M$ and $p\in\R^n$, we get a smooth function $(q,p)\mapsto H(q,p)$ and the Euler-Lagrange equations for $t\mapsto q(t)$ are equivalent to Hamilton's equations for $t\mapsto(q(t),p(t))$: $$ \forall j:\quad q_j^\prime(t)=\pa_{p_j}H(q(t),p(t)), \quad p_j^\prime(t)=-\pa_{q_j}H(q(t),p(t))~. $$ Thus Hamilton's equations may be considered as a sort of non-straightforward reduction of differential equations of order two - the Euler-Lagrange equations - to differential equations of order one.
For ${\cal L}(q,v)=\tfrac12m\sum v_j^2-V(q)$ we have $p_j=mv_j$ and ${\cal H}(q,v)=\tfrac12m\sum v_j^2+V(q)$, i.e. $H(q,p)=\tfrac1{2m}\sum p_j^2+V(q)$. The Euler-Lagrange equations $q_j^\dprime(t)=-\pa_{q_j}V(q(t))$ are equivalent to Hamilton's equations: $q_j^\prime(t)=p_j(t)/m$ and $p_j^\prime(t)=-\pa_{q_j}V(q(t))$.
The Poisson brackets of classical mechanics look "an awful lot like the canonical commutation relations" in QM. To see this let us denote the projections onto the coordinates also by $q_j$ and $p_k$ respectively, then $$ \{q_j,p_k\}=\d_{jk},\quad \{q_j,q_k\}=\{p_j,p_k\}=0 $$ and this obviously resembles the canonical commutation relations in QM $$ [Q_j,P_k]=i\d_{jk}1_E,\quad [Q_j,Q_k]=[P_j,P_k]=0, $$ where $Q_j,P_j$ are linear operators on (a dense subspace of) a Hilbert-space $E$.
These commutation relations can never hold on a finite dimensional space $E$. Hint: consider traces.
We may wonder if there is a Lie-algebra homomorphism $\psi$ which maps functions in $q_j$ and $p_k$ to operators on the Hilbert-space $E$. This can be achieved but only for polynomial functions of degree two at most: $$ \psi(1)\colon=-i1_E,\quad \psi(q_j)\colon=-iQ_j,\quad \psi(p_j)\colon=-iP_j, $$ and for the quadratic terms: $$ \psi(q_jp_j)\colon=-\tfrac12i(Q_jP_j+P_jQ_j),\quad \psi(q_j^2)\colon=-iQ_j^2,\quad \psi(p_j^2)\colon=-iP_j^2~. $$ Finally, for $j\neq k$ define: $$ \psi(q_jq_k)\colon=-iQ_jQ_k,\quad \psi(q_jp_k)\colon=-iQ_jP_k,\quad \psi(p_jp_k)\colon=-iP_jP_k~. $$ The $-i$ is just a convention, we may take $+i$ as well. Let's check by example that this is actually a homomorphism: $\{q_jp_j,q_j\}=-q_j$ and $[Q_jP_j+P_jQ_j,Q_j]=(P_jQ_j-Q_jP_j)Q_j-Q_j(Q_jP_j-P_jQ_j)=-2iQ_j$, i.e. we indeed have $[\psi(q_jp_j),\psi(q_j)]=iQ_j=-\psi(q_j)$.
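The verification above can be done by machine, realizing $Q$ as multiplication by $x$ and $P$ as $-i\,d/dx$ on smooth functions; a one dimensional sympy sketch (not part of the notes):

```python
# Check [psi(qp), psi(q)] = -psi(q) on a test function.
import sympy as sp

x = sp.symbols("x")
Q = lambda f: x * f                      # position operator
P = lambda f: -sp.I * sp.diff(f, x)      # momentum operator

psi_q = lambda f: -sp.I * Q(f)
psi_qp = lambda f: -sp.Rational(1, 2) * sp.I * (Q(P(f)) + P(Q(f)))

test = sp.exp(-x**2) * (1 + x + x**3)    # an arbitrary smooth test function
comm = psi_qp(psi_q(test)) - psi_q(psi_qp(test))
assert sp.simplify(comm + psi_q(test)) == 0
```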
Suppose $F,H$ are polynomials in $q,p$ of degree $2$ at most. If $F$ is a first integral of $H$, then $[\psi(F),\psi(H)]=0$ and by
proposition the mean value of $\psi(F)$ remains constant along the solution $t\mapsto x(t)$ of the time-dependent Schrödinger equation $x^\prime(t)=\psi(H)x(t)$. A typical example thereof is the harmonic oscillator:
For the functions $H$ and $L_{jk}$ in exam compute $\psi(H)$ and $\psi(L_{jk})$.

Further examples

$\Ma(n,\R)$ is a Lie-algebra with $[A,B]=AB-BA$, more generally, every associative algebra $(A,+,.)$ with $[x,y]\colon=xy-yx$ is a Lie-algebra. Remember, by an algebra $(A,+,.)$ we always understand a vector-space $(A,+)$ over the real or complex numbers and a bi-linear mapping $A\times A\rar A$: $(x,y)\mapsto xy$. Associativity means that for all $x,y,z\in A$: $x(yz)=(xy)z$.
For an open subset $M$ of $\R^n$ let $\G(M)$ be the set of all differential operators $X=\sum f_j\pa_j$ such that $f_j\in C^\infty(M)$ and for $\psi\in C^\infty(M)$: $X\psi\colon=\sum f_j\pa_j\psi$. Then $\G(M)$ is a $C^\infty(M)$-module and for $X,Y\in\G(M)$ we have: $[X,Y]\colon=XY-YX\in\G(M)$ and thus $\G(M)$ is a Lie-algebra.
For $X=\sum f_j\pa_j$, $Y=\sum g_j\pa_j$ and $\psi\in C^\infty(M)$ we get: $$ XY\psi =\sum_j\sum_kf_j\pa_j(g_k\pa_k\psi) =\sum_j\sum_kf_j\pa_jg_k\pa_k\psi+f_jg_k\pa_j\pa_k\psi, $$ and since $\pa_j\pa_k=\pa_k\pa_j$, it follows that $$ XY-YX=\sum_k\Big(\sum_j(f_j\pa_jg_k-g_j\pa_jf_k)\Big)\pa_k\in\G(M)~. $$ In modern differential geometry a vector field on a manifold $M$ is defined as a linear mapping $X:C^\infty(M)\rar C^\infty(M)$ such that for all $f,g\in C^\infty(M)$: $$ X(fg)=(Xf)g+f(Xg)~. $$ It's easily verified that for any pair of vector fields $X,Y$ the commutator $[X,Y]\colon=XY-YX$ is again a vector field and therefore the space of vector fields on $M$ is a Lie-algebra. In case $M$ is an open subset of $\R^n$ it can be proved that for any vector field $X$ there are functions $\z_1,\ldots,\z_n\in C^\infty(M)$ such that $$ X=\sum\z_j\pa_j~. $$ This algebraic approach to vector fields is way easier to work with than the allegedly intuitive geometric approach we used to follow before.
Let $X^H$ and $X^F$ be the Hamiltonian vector fields for the functions $H$ and $F$, i.e. $$ X^H\colon=\sum_j\pa_{p_j}H\pa_{q_j}-\pa_{q_j}H\pa_{p_j} $$ Verify that $[X^H,X^F]$ is the Hamiltonian vector fields for the function $-\{H,F\}$. Suggested solution.

Representations of $\SO(3)$ in ${\cal H}_l$

For $l\in\N_0$ let us denote by ${\cal H}_l$ the space of $l$-homogeneous and harmonic polynomials on $\R^3$. From lemma we know that $\dim{\cal H}_l=2l+1$. We consider these spaces as subspaces of $L_2(\R^3,\g)$ or $L_2(S^2,\s)$: $$ \la P,Q\ra \colon=\frac1{c_l}\int_{\R^3}P(x)\cl{Q(x)}e^{-\norm x^2/2}\,dx =\int_{S^2}P\cl{Q}\,d\s~. $$ Let us look at the representation $\G:\SO(3)\rar\UU(L_2(S^2))$, $\G(U)f\colon=f\circ U^{-1}$ from subsection. For a smooth function $f$ and $X\in\so(3)$ we get: $$ \g(X)f(x) =\ftdl t0 f(\exp(-tX)x) =-Df(x)(Xx) =-(\pa_1f(x),\pa_2f(x),\pa_3f(x))Xx~. $$ In particular for $X=l_1,l_2,l_3$: \begin{eqnarray*} \g(l_1)f(x)&=&x_3\pa_2f(x)-x_2\pa_3f(x),\\ \g(l_2)f(x)&=&x_1\pa_3f(x)-x_3\pa_1f(x),\\ \g(l_3)f(x)&=&x_2\pa_1f(x)-x_1\pa_2f(x)~. \end{eqnarray*} Though the operators $\g(l_j)$ are not bounded on $L_2(S^2)$, they are obviously bounded on all subspaces ${\cal H}_1,{\cal H}_2,\ldots$, which form an orthogonal decomposition of the Hilbert-space $L_2(S^2)$.

In QM the self-adjoint operators $L_j\colon=i\g(l_j)$ are called the angular momentum operators in the directions of the canonical basis of $\R^3$ and for $a=(a_1,a_2,a_3)\in\R^3$ the operator $L_a\colon=a_1L_1+a_2L_2+a_3L_3=i\g(l_a)$ is called the angular momentum operator in the direction of $a$. The representations $\G:\SO(3)\rar\UU({\cal H}_l)$ are exactly the irreducible subrepresentations: $\G:\SO(3)\rar\UU({\cal H}_l)$ is irreducible iff $\g:\R^3\rar\Hom({\cal H}_l)$ is, i.e. the only non trivial subspace of ${\cal H}_l$ invariant under all operators $L_a$, $a\in\R^3$, is ${\cal H}_l$ - we will defer this to another theorem. Therefore angular momentum operators just stem from a representation of the Lie-algebra of $\SO(3)$ (up to the constant $i$): $l_a\mapsto L_a=i\g(l_a)$, and the building blocks of these representations are the irreducibles. The operators $L_a$ obey the rule: $[L_a,L_b]=iL_{a\times b}$. This follows from the facts that $\g$ and $a(\in\R^3)\mapsto l_a$ are Lie-algebra homomorphisms: $$ [L_a,L_b] =[i\g(l_a),i\g(l_b)] =-[\g(l_a),\g(l_b)] =-\g([l_a,l_b]) =-\g(l_{a\times b}) =iL_{a\times b}~. $$

As we will see, not every irreducible representation of the Lie-algebra $\R^3$ is induced by a representation of $\SO(3)$, only the representations with integer 'spin' originate in an irreducible representation of $\SO(3)$ - cf. theorem. The half integer spin representations do not occur in the representation $\G$ of $\SO(3)$ in $L_2(S^2)$ and thus spin $1/2$ particles are not described by the usual Schrödinger model of e.g. the hydrogen atom. It needs $\SU(2)$ representations to get all of them and it's more conveniently described in an augmented space: for spin $1/2$ particles we usually take the space $L_2(S^2,\C^2)$ of square integrable functions with values in $\C^2$.
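The operators $L_j$ can be implemented symbolically; the following sympy sketch (not part of the notes) checks $[L_1,L_2]=iL_3$ on a sample polynomial and anticipates the exam below: $(x+iy)^l$ is an eigen-function of $L_3$ with eigen-value $l$.

```python
# Angular momentum operators L_j = i*gamma(l_j) acting on polynomials.
import sympy as sp

x, y, z = sp.symbols("x y z")
L1 = lambda f: sp.I * (z * sp.diff(f, y) - y * sp.diff(f, z))
L2 = lambda f: sp.I * (x * sp.diff(f, z) - z * sp.diff(f, x))
L3 = lambda f: sp.I * (y * sp.diff(f, x) - x * sp.diff(f, y))

f = x**2 * y + y * z**2 + x * y * z                           # a sample polynomial
assert sp.expand(L1(L2(f)) - L2(L1(f)) - sp.I * L3(f)) == 0   # [L1,L2] = i L3

l = 4
P = (x + sp.I * y)**l
assert sp.expand(L3(P) - l * P) == 0                          # L3 P = l P
```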
Find a solution $u$ to the initial value problem $\pa_tu=-iL_ju$, $u(0,x)=f(x)$ for a smooth function $f:\R^3\rar\C$. Suggested solution
Let $a,b\in\R^3$ be unit vectors, $l\in\N_0$ and put $P(x)\colon=(\la a,x\ra+i\la b,x\ra)^l$. Then $P$ is an $l$-homogeneous polynomial and for $l\geq2$ it is harmonic iff $a\perp b$.
Let $P(x,y,z)\colon=(x+iy)^l$. Verify that $P\in{\cal H}_l$ and $P$ is an eigen-function of $L_3$ with eigen-value $l$.

From Group Representations to Algebra Representations

Group and Algebra representations

For every smooth representation $\Psi:G\rar\Gl(E)$ its differential $\psi:\GG\rar\Hom(E)$, $\psi(X)\colon=D\Psi(1)X$, is by definition linear. We are going to show that $\psi:\GG\rar\Hom(E)$ is in fact an algebra-homomorphism, i.e.: $$ \forall X,Y\in\GG\quad \psi([X,Y])=[\psi(X),\psi(Y)]\colon=\psi(X)\psi(Y)-\psi(Y)\psi(X)~. $$
For every representation $\Psi:G\rar\Gl(E)$ the differential $\psi(X)\colon=D\Psi(1)X$ defines an algebra-homomorphism $\psi:\GG\rar\Hom(E)$, i.e. $\psi$ is linear and $\psi([X,Y])=[\psi(X),\psi(Y)]$.
$\proof$ By lemma, linearity of $\psi$ and the chain rule we have \begin{eqnarray*} \psi([X,Y]) &=&\psi\Big(\pa_s|_{s=0}\pa_t|_{t=0}e^{tX}e^{sY}e^{-tX}\Big)\\ &=&\pa_s|_{s=0}\psi\Big(\pa_t|_{t=0}e^{tX}e^{sY}e^{-tX}\Big)\\ &=&\pa_s|_{s=0}D\Psi(1)\Big(\pa_t|_{t=0}e^{tX}e^{sY}e^{-tX}\Big)\\ &=&\pa_s|_{s=0}\pa_t|_{t=0}\Psi(e^{tX}e^{sY}e^{-tX})\\ &=&\pa_t|_{t=0}\pa_s|_{s=0}\Psi(e^{tX})\Psi(e^{sY})\Psi(e^{-tX})\\ &=&\pa_t|_{t=0}\Psi(e^{tX})\psi(Y)\Psi(e^{-tX})\\ &=&D\Psi(1)X\psi(Y)-\psi(Y)D\Psi(1)X =\psi(X)\psi(Y)-\psi(Y)\psi(X)~. \end{eqnarray*} $\eofproof$
What we really need is the fact that a representation of the group always gives rise to a representation of the corresponding Lie-algebra. As we will see, the reverse implication isn't necessarily true, i.e. a representation of the Lie-algebra of a matrix Lie-group $G$ is not necessarily the derivative of a representation of $G$. For this to hold there is a topological obstruction: the group $G$ must be simply connected.
A manifold $M$ is said to be simply connected, if it is connected and every continuous function $c:S^1(=[|z|=1])\rar M$ is the restriction of a continuous function $f:D(=[|z|\leq1])\rar M$.
For $r\in[0,1]$ the loops $c_r:t(\in[0,2\pi])\mapsto f(re^{it})$ are continuous deformations of $c=c_1$ and $c_0$ is constant. Thus a connected manifold $M$ is simply connected iff every loop $c:S^1(=[|z|=1])\rar M$ can be continuously deformed into a constant loop. $S^n$, $n\geq2$, is simply connected and every star-shaped open subset of $\R^n$ is simply connected. $S^1$ and $\C\sm\{0\}$ are not simply connected. Simply connected open subsets of $\C$ are of importance in the celebrated Residue Theorem of complex analysis.
Show by means of the Residue Theorem that $\C\sm\{0\}$ is not simply connected.
The function $f:z\mapsto1/z$ is holomorphic on $\C\sm\{0\}$; if $\C\sm\{0\}$ were simply connected, then by the Residue Theorem: $\int_cf(z)\,dz=0$, where $c(t)=e^{it}$, $t\in(0,2\pi)$, but $\int_cf(z)\,dz=2\pi i$.
Show that $S^1$ is not simply connected.
If the matrix Lie-group $G$ is simply connected, then for every representation $\psi:\GG\rar\Hom(E)$ there is a representation $\Psi:G\rar\Gl(E)$ such that $D\Psi(1)=\psi$. Fortunately all groups $\SU(n)$, $n\geq2$, are simply connected, but $\SO(n)$, $n\geq2$, is not!

Tensor product representation

Suppose $\Psi$ and $\Phi$ are representations of $G$ in some finite dimensional spaces $E$ and $F$, respectively; what's the differential of the tensor product representation $\Psi\otimes\Phi$? Since $(x,y)\mapsto x\otimes y$ is bi-linear, it follows from the product rule that for all $X\in\GG$ and all $x\in E$, $y\in F$: \begin{eqnarray*} D\Psi\otimes\Phi(1)(X)(x\otimes y) &=&\ftdl t0\Psi(e^{tX})x\otimes\Phi(e^{tX})y\\ &=&D\Psi(1)(X)x\otimes\Phi(1)y+\Psi(1)x\otimes D\Phi(1)(X)y\\ &=&\psi(X)x\otimes y+x\otimes\vp(X)y \end{eqnarray*} and therefore: \begin{equation}\label{grareq1}\tag{GRA1} D\Psi\otimes\Phi(1)(X) =\psi(X)\otimes1+1\otimes\vp(X)~. \end{equation} This also tells us how to define the tensor product of two representations $\psi$ and $\vp$ of a Lie-algebra $A$ - obviously, we will NOT denote it by $\psi\otimes\vp$! - namely as the representation $$ x\mapsto \psi(x)\otimes1+1\otimes\vp(x)~. $$
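For the standard representations $\Psi(g)=\Phi(g)=g$, formula \eqref{grareq1} can be tested numerically via $\exp(t(X\otimes1+1\otimes X))=\exp(tX)\otimes\exp(tX)$; a numpy/scipy sketch, not part of the notes:

```python
# The generator of t -> exp(tX) (x) exp(tX) is X (x) 1 + 1 (x) X.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
n, t = 2, 0.6
X = rng.standard_normal((n, n))
I = np.eye(n)

lhs = np.kron(expm(t * X), expm(t * X))
rhs = expm(t * (np.kron(X, I) + np.kron(I, X)))
assert np.allclose(lhs, rhs)
```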

The dual representation

The matrix of the dual $A^*$ of $A\in\Hom(E)$ with respect to the dual basis $e_j^*$ of a basis $e_j$ of $E$ is just the transpose $A^t$ of $A$ and therefore: $\Psi^d(e^{X})=\Psi(e^{-X})^t$. For the differential $\psi^d(X)$ we get: $$ \psi^d(X) \colon=\ftdl s0\Psi^d(e^{sX}) =\ftdl s0\Psi(e^{-sX})^t =-\psi(X)^t, $$ i.e. $\psi^d$ is the negative of the transpose of $\psi$. If $\Psi$ is unitary, then $\Psi^d$ is of course equivalent to the contragredient representation, cf. example. Thus we will also use the notation $\bar\psi\colon=\psi^d$. Moreover, from section we know that in case of a unitary representation $\Psi:G\rar\UU(E)$ the differential $\psi(X)$ is skew symmetric, i.e. for all $X\in\GG$: $\psi(X)^*=-\psi(X)$.
If $\Psi:G\rar\Gl(E)$ is a representation, then for all $g\in G$: $\Psi\circ i_g$ is another (equivalent) representation. If $\psi=D\Psi(1)$ is the differential of $\Psi$ at $1$, then $\psi\circ\Ad(g)$ is the differential of $\Psi\circ i_g$ at $1$ and we have: $$ \psi(\Ad(g)X) =\Psi(g)\psi(X)\Psi(g)^{-1} =\Ad(\Psi(g))\psi(X)~. $$
Since $\Psi$ is a representation $\Psi(i_g(x))=\Psi(gxg^{-1})=\Psi(g)\Psi(x)\Psi(g)^{-1}$ and the differential of this map at $1$ is $X\mapsto\Psi(g)\psi(X)\Psi(g^{-1})$.

Complex structures on real vector-spaces

Let $E$ be an $n$-dimensional real vector-space. Sometimes you may find it more convenient to work with "the same space" but with complex linear combinations. In case of e.g. $\R^n$ you may simply take $\C^n$ but for an arbitrary real vector-space $E$ we need some kind of construction to get from the real to the complex space: The space $\C\otimes E$ is a good prospect; it's an $n$-dimensional complex vector-space with scalar multiplication $a(b\otimes x)\colon=(ab)\otimes x$. If $b\in\R$, then $b\otimes x=1\otimes bx$ and therefore the subspace $1\otimes E$ is canonically isomorphic to $E$. $E^\C\colon=\C\otimes E$ is called the complexification of $E$. Obviously, any linear map $u\in\Hom(E,F)$ has a unique extension $1\otimes u$ to a $\C$-linear map $E^\C\rar F^\C$, which we will also denote by $u$. More generally, any $p$-linear map $u:E_1\times\cdots\times E_p\rar F$ has a unique $p$-linear extension $u:E_1^\C\times\cdots\times E_p^\C\rar F^\C$.
If the bi-linear map $u:E_1\times E_2\rar F$ is symmetric (skew-symmetric), then its extension $u:E_1^\C\times E_2^\C\rar F^\C$ is symmetric (skew-symmetric).
In particular, if $\la.,.\ra$ is a euclidean product on $E$ then there is exactly one bi-linear extension $(a\otimes x,b\otimes y)\mapsto ab\la x,y\ra$, but that's not a complex euclidean product; however it just needs a minor correction: there is only one complex euclidean product, which extends $\la.,.\ra$: $$ \la a\otimes x,b\otimes y\ra\colon=a\bar b\la x,y\ra $$ and it's easily verified that this product restricted to $1\otimes E$ makes this space isometrically isomorphic to $E$. Usually we will write for $(a+ib)\otimes x$: $ax+ibx$ and not $a\otimes x+i\otimes bx$.
If $e_1,\ldots,e_n$ is a basis for $E$, then $1\otimes e_1,\ldots,1\otimes e_n$ is a basis for $E^\C$ and the matrix of $A\in\Hom(E)$ with respect to $e_1,\ldots,e_n$ and the matrix of its extension with respect to $1\otimes e_1,\ldots,1\otimes e_n$ coincide.
If $E$ is a finite dimensional real euclidean space and $A\in\Hom(E)$ is skew symmetric, then there exists an orthonormal basis $x_1,\ldots,x_n,y_1,\ldots,y_n$ of $\ker A^\perp$ and $a_j > 0$ such that: $Ax_j=-a_jy_j$ and $Ay_j=a_jx_j$. Suggested solution.
If $E$ is a finite dimensional real euclidean space and $U\in\Hom(E)$ is orthogonal, then there exists an orthonormal basis $x_1,\ldots,x_n,y_1,\ldots,y_n$ of $(\ker(1+U)+\ker(1-U))^\perp$ and $\vp_1,\ldots,\vp_n\in(0,2\pi)$ such that: $$ Ux_j=\cos(\vp_j)x_j+\sin(\vp_j)y_j \quad\mbox{and}\quad Uy_j=-\sin(\vp_j)x_j+\cos(\vp_j)y_j~. $$
We will commonly need to complexify a real subspace $E$ of a complex vector-space $W$.
Suppose $W$ is a complex vector-space and $E\sbe W$ is a real subspace, such that for all $x\in E\sm\{0\}$: $ix\notin E$. Then $\C\otimes E$ is isomorphic to the subspace $\{x+iy:x,y\in E\}$ of $W$ and the mapping $x+i\otimes y\mapsto x+iy$ is an isomorphism.
$\proof$ $(a+ib)(X+i\otimes Y)=aX-bY+i\otimes(bX+aY)$ is mapped to $aX-bY+i(bX+aY)=(a+ib)(X+iY)$ and by assumption $X+iY=0$ iff $X=0$ and $Y=0$. $\eofproof$
Given a Lie-algebra $A$, there is a unique bi-linear and skew-symmetric extension of the bracket to the complexification $A^\C$. The map $$ (u,v,w)\rar[u,[v,w]]+[v,[w,u]]+[w,[u,v]] $$ is $3$-linear on $A^\C\times A^\C\times A^\C$ and vanishes for $u=1\otimes x$, $v=1\otimes y$ and $w=1\otimes z$. Therefore it vanishes identically, i.e. $A^\C$ is a Lie-algebra.
1. For all $x,y,u,v\in A$ we have $$ [x+i\otimes y,u+i\otimes v] =[x,u]-[y,v]+i\otimes([x,v]+[y,u])~. $$ 2. The center of $A$ is trivial iff the center of $A^\C$ is trivial.
Let's have a look at the case $A=\uu(n)$, i.e. $\uu(n)\sbe\Ma(n,\C)$ is the space of all matrices $(a_{jk})$ such that $a_{kj}+\bar a_{jk}=0$. If $(a_{jk})\in\uu(n)$, then $i(a_{jk})\in\uu(n)$ iff $a_{kj}-\bar a_{jk}=0$, which holds if and only if $a_{jk}=0$. Therefore the complexification of $\uu(n)$ is isomorphic to the set $\{X+iY:X,Y\in\uu(n)\}$, which is the whole space $\Ma(n,\C)$, i.e. $\uu(n)^\C\simeq\gl(n,\C)=\Ma(n,\C)$.
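Concretely, the decomposition behind $\uu(n)^\C\simeq\gl(n,\C)$ is $Z=X+iY$ with $X\colon=(Z-\bar Z^t)/2$ and $Y\colon=(Z+\bar Z^t)/2i$, both skew-hermitian; a numpy sketch (not part of the notes):

```python
# Every complex matrix is X + iY with X, Y skew-hermitian.
import numpy as np

rng = np.random.default_rng(5)
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
X = (Z - Z.conj().T) / 2
Y = (Z + Z.conj().T) / 2j

skew = lambda A: np.allclose(A.conj().T, -A)
assert skew(X) and skew(Y)
assert np.allclose(X + 1j * Y, Z)
```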
The complexification of $\su(n)$ is isomorphic to $\sla(n,\C)$.
$X\in\su(n)$ iff $X\in\uu(n)$ and $\tr X=0$, thus by the preceding considerations: $$ \su(n)^\C\simeq\{X\in\gl(n,\C):\tr X=0\}=\sla(n,\C)~. $$
The complexification of $\spa(n)$ is isomorphic to $\{X\in\Ma(2n,\C):X^t=JXJ\}$.
Sometimes a real vector-space itself carries a complex structure: the standard example is $\R^2$; identifying $\R^2$ via $(x,y)\mapsto x+iy$ with $\C$, we see that there is a distinguished map - multiplication by $i$. As a map in $\Hom(\R^2)$ this is the map $J:(x,y)\mapsto(-y,x)$ and since $i^2=-1$, we must have: $J^2=-1$. Now suppose $E$ is an $N$-dimensional real vector-space. An isomorphism $J:E\rar E$ satisfying $J^2=-1$ is said to be a complex structure on $E$. If $E$ admits a complex structure $J$, then $$ (-1)^N=\det(-1)=\det J^2=(\det J)^2 $$ and thus $N=2n$ must be even. Also, there is a euclidean product $\la.,.\ra$ on $E$, such that $J:(E,\la.,.\ra)\rar(E,\la.,.\ra)$ is an isometry: define for an arbitrary euclidean product $(.|.)$ on $E$: $$ \la x,y\ra\colon=(x|y)+(J(x)|J(y))~. $$ Now if $J$ is both an isometry of $(E,\la.,.\ra)$ and a complex structure on $E$, then the bi-linear map $$ \o^2(x,y)\colon=\la x,Jy\ra $$ is a so called symplectic form, i.e. $\o^2(x,y)=-\o^2(y,x)$: $$ \o^2(y,x)=\la y,Jx\ra=\la J^*y,x\ra=\la J^{-1}y,x\ra=-\la Jy,x\ra=-\o^2(x,y) $$ and for all $x\neq0$ there is some $y\in E$ such that $\o^2(x,y)\neq0$ - i.e. $\o^2$ is non-singular: $$ \o^2(x,-Jx)=\la x,-J^2x\ra=\la x,x\ra~. $$
If $E$ is finite dimensional and $B:E\times E\rar\bK$ bi-linear and non-singular, then for every linear functional $x^*\in E^*$ there is exactly one $x^{*\sharp}\in E$ such that for all $y\in E$: $x^*(y)=B(x^{*\sharp},y)$. The mapping $x\mapsto x^{*\sharp}$ is an isomorphism and its inverse is denoted by $x\mapsto x^\flat$, i.e. $x^\flat(y)=B(x,y)$. Of course, these mappings are called musical isomorphisms.
A linear map $u\in\Hom(E)$ is said to be symplectic, if $\o^2(u(x),u(y))=\o^2(x,y)$. This is the case iff $u^*Ju=J$. Therefore the group $\Sp(n,\R)$ is the set of symplectic transformations with respect to the so called standard symplectic form on $\R^{2n}$: $$ \o^2(x,y)=\la x,Jy\ra \quad\mbox{where}\quad Je_j=e_{n+j}, Je_{n+j}=-e_j~. $$
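The interplay of orthogonal, symplectic and $\C$-linear maps described in the proposition below can be illustrated numerically: the real form $\left(\begin{array}{cc}C&-S\\S&C\end{array}\right)$ of a unitary matrix $C+iS$ commutes with the standard $J$, is orthogonal, and is therefore symplectic. A numpy sketch, not part of the notes:

```python
# Standard J on R^{2n}; the real form of a unitary is orthogonal + symplectic.
import numpy as np

n = 2
J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n), np.zeros((n, n))]])

rng = np.random.default_rng(6)
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
u = np.block([[U.real, -U.imag], [U.imag, U.real]])   # real form of U = C + iS

assert np.allclose(u.T @ u, np.eye(2 * n))   # orthogonal
assert np.allclose(u @ J, J @ u)             # commutes with J, i.e. C-linear
assert np.allclose(u.T @ J @ u, J)           # symplectic: u^t J u = J
```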
Let $(E,\la.,.\ra)$ be a real euclidean vector-space of dimension $2n$ and let $J:E\rar E$ be both an isometry and a complex structure on $E$.
  1. For all $\l=a+ib\in\C$ and all $x\in E$: $\l x\colon=ax+bJ(x)$ is a scalar multiplication and $J:E\rar E$ is $\C$-linear, i.e. for all $\l\in\C$ and all $x\in E$: $J(\l x)=\l J(x)$.
  2. Put $\la x,y\ra_\C\colon=\la x,y\ra+i\la x,Jy\ra$, then this is a complex euclidean product on $E$, i.e. for all $\l\in\C$ and all $x,y,z\in E$: $$ \la\l x,y\ra_\C=\l\la x,y\ra_\C,\quad \la x+y,z\ra_\C=\la x,z\ra_\C+\la y,z\ra_\C,\quad \la y,x\ra_\C=\cl{\la x,y\ra}_\C~. $$ Moreover, the mapping $J:(E,\la.,.\ra_\C)\rar(E,\la.,.\ra_\C)$ is an isometry.
  3. There is an orthogonal decomposition $E=F\oplus G$ of $(E,\la.,.\ra)$, such that $J(F)=G$ and $J(G)=F$.
  4. The complexification $\C\otimes F$ is isometrically isomorphic to $(E,\la.,.\ra_\C)$ and $U:(a+ib)\otimes x\mapsto ax+bJx$ is an isometric isomorphism.
  5. An $\R$-linear map $u:E\rar E$ is $\C$-linear iff $uJ=Ju$, i.e. $\Hom_\C(E)=\{u\in\Hom_\R(E): uJ=Ju\}$. In that case the adjoint of $u:(E,\la.,.\ra_\C)\rar(E,\la.,.\ra_\C)$ coincides with the adjoint of $u:(E,\la.,.\ra)\rar(E,\la.,.\ra)$.
  6. If $u$ is $\C$-linear and symplectic, then $u^*u=1$, i.e. $u$ is orthogonal. Conversely, if $u$ is $\R$-linear, symplectic and orthogonal, then $u$ is $\C$-linear.
$\proof$ 1. Since $\l(x+y)=\l x+\l y$ and $J^2=-1$, we conclude for $\mu=c+id$ that \begin{eqnarray*} \l(\mu x) &=&\l(cx+dJ(x)) =\l(cx)+\l(dJ(x))\\ &=&acx+bcJ(x)+adJ(x)-bdx =(ac-bd)x+(bc+ad)J(x) =(\l\mu)x \end{eqnarray*} 3. Since $J$ is a real isometry and $J^2=-1$, there exists an orthonormal basis $x_1,\ldots,x_n,y_1,\ldots,y_n$ for $E$ such that $Jx_j=y_j$ and $Jy_j=-x_j$, i.e.: $ix_j=y_j$ and $iy_j=-x_j$. Put $F\colon=[x_1,\ldots,x_n]$ and $G=[y_1,\ldots,y_n]$, then $E=F\oplus G$ is an orthogonal decomposition and $J(F)=G$, $J(G)=F$.
4. $U$ is $\C$-linear, because by definition of the scalar multiplication on $\C\otimes F$ and 1. we have $$ U(\l(\mu\otimes x)) =U((\l\mu)\otimes x) =(\l\mu)x =\l(\mu x) =\l U(\mu\otimes x)~. $$ Moreover, for all $x,y\in F$ and all $\l,\mu\in\C$: $$ \la U(\l\otimes x),U(\mu\otimes y)\ra_\C =\la\l x,\mu y\ra_\C =\l\bar\mu\la x,y\ra_\C =\l\bar\mu\la x,y\ra =\la\l\otimes x,\mu\otimes y\ra~. $$ 5. $u\in\Hom_\C(E)$ iff $u((a+ib)x)=(a+ib)u(x)$ and this holds if and only if $au(x)+buJ(x)=au(x)+bJu(x)$, i.e. iff $uJ=Ju$. Let $u^*$ be the adjoint of $u:(E,\la.,.\ra)\rar(E,\la.,.\ra)$, then \begin{eqnarray*} \la u^*(x),y\ra_\C &=&\la u^*(x),y\ra+i\la u^*(x),J(y)\ra =\la x,u(y)\ra+i\la x,uJ(y)\ra\\ &=&\la x,u(y)\ra+i\la x,Ju(y)\ra =\la x,u(y)\ra_\C~. \end{eqnarray*} 6. If $u\in\Hom_\R(E)$ is $\C$-linear and symplectic, then $u^*u=-u^*JJu=-u^*JuJ=-JJ=1$. Conversely if $u\in\Hom_\R(E)$ is orthogonal and symplectic, then: $u^{-1}Ju=u^*Ju=J$, i.e. $Ju=uJ$. $\eofproof$
The map $A:(x,y)\mapsto(x,-y)$ on $\R^2$ is $\R$-linear but not $\C$-linear.
1. As a map on $\C$ this is given by $z\mapsto\bar z$, which is not $\C$-linear.
2. $AJ(x,y)=A(-y,x)=(-y,-x)$ but $JA(x,y)=J(x,-y)=(y,x)$.
Determine the matrix of any symplectic transformation (with respect to the standard symplectic form) on $\R^2$.
In general any non-singular and anti-symmetric bi-linear form $\o:E\times E\rar\R$ on a real vector-space $E$ is called a symplectic form on $E$. Thus the symplectic group $\Sp(n,\R)$ is just a particular case of a group of symplectic transformations; however, it's also the general case, because any other group of symplectic transformations is isomorphic to some $\Sp(n,\R)$. Indeed for any euclidean product $(.|.)$ on $E$ there is a skew-symmetric isomorphism $A\in\Hom(E)$ such that $\o(x,y)=(x|Ay)$. By exam we can find an orthonormal basis $x_1,\ldots,x_n,y_1,\ldots,y_n$ of $E$ such that $Ax_j=-a_jy_j$ and $Ay_j=a_jx_j$ for some $a_j > 0$. Now put $b_j\colon=x_j/\sqrt{a_j}$ and $e_j\colon=y_j/\sqrt{a_j}$, then $$ \o(e_j,b_k)=(e_j|Ab_k)=(y_j|-a_ky_k)/\sqrt{a_ja_k}=-\d_{jk} $$ i.e. the matrix of $\o$ with respect to the basis $e_1,\ldots,e_n,b_1,\ldots,b_n$ is the block matrix: $$ \left(\begin{array}{cc} 0&-1\\ 1&0 \end{array}\right) $$ which coincides with the matrix of the standard symplectic form on $\R^{2n}$ with respect to the canonical basis.
1. Let $\o$ be the standard symplectic form on $\R^{2n}$. Verify that for $j=1,\ldots,2n$: $e_j^{*\sharp}=Je_j$ and $e_j^\flat=J^*e_j^*$. In particular if $x^*(e_j)=x_j$, then $$ x^{*\sharp} =x_1e_{n+1}+\cdots+x_ne_{2n}-x_{n+1}e_1-\cdots-x_{2n}e_n =(-x_{n+1},\ldots,-x_{2n},x_1,\ldots,x_n)~. $$ 2. Verify that the Hamiltonian vector field for the function $H$ on an open subset $M$ of $\R^{2n}$ satisfies for all $x\in M$: $-dH(x)v=\o(X_x^H,v)$.

The groups $\SU(2)$ and $\SO(3)$

The covering map $\SU(2)\rar\SO(3)$

$\so(3)$ and $\su(2)$ are isometrically isomorphic to the Lie-algebra $\R^3$, but $\SO(3)$ and $\SU(2)$ are not even homeomorphic.
$\SU(2)$ is homeomorphic to $S^3$.
$\proof$ Let $\s_0\in\SU(2)$ denote the unit matrix and $\s_1,\s_2,\s_3$ the Pauli spin matrices (cf. subsection). Every matrix $U\in\SU(2)$ has the following form: $$ \left(\begin{array}{cc} x_0+ix_3&x_2+ix_1\\ -x_2+ix_1&x_0-ix_3 \end{array}\right) =x_0\s_0+ix_3\s_3+ix_1\s_1+ix_2\s_2 $$ with $x_0,\ldots,x_3\in\R$, and for a matrix of this form $\det U=1$ is tantamount to: $x_0^2+x_1^2+x_2^2+x_3^2=1$. Thus $\SU(2)$ is homeomorphic to $S^3$ and in fact they are isometric if $\SU(2)$ inherits the Hilbert-Schmidt metric multiplied by $1/\sqrt2$ and $S^3$ the euclidean metric. $\eofproof$
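A quick numerical illustration of the proof (assuming numpy; the parametrization is the one displayed above):

```python
import numpy as np

# Sketch: a point on S^3 yields an element of SU(2), with Hilbert-Schmidt
# norm sqrt(2), so (1/sqrt2)*SU(2) sits on a unit sphere.
rng = np.random.default_rng(2)
v = rng.standard_normal(4)
x0, x1, x2, x3 = v / np.linalg.norm(v)
U = np.array([[x0 + 1j * x3,  x2 + 1j * x1],
              [-x2 + 1j * x1, x0 - 1j * x3]])

assert np.allclose(U.conj().T @ U, np.eye(2))     # U is unitary
assert np.isclose(np.linalg.det(U), 1)            # det U = 1 iff sum x_j^2 = 1
assert np.isclose(np.linalg.norm(U), np.sqrt(2))  # ||U||_HS = sqrt(2)
```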
Of course, $S^3$ is simply connected and therefore $\SU(2)$ must also be simply connected, whereas $\SO(3)$ is not, meaning that there is a curve $c:S^1\rar\SO(3)$ which cannot be continuously deformed into a constant curve within $\SO(3)$. This can be depicted as follows: take a belt, fix one of its ends and twist the other end by an angle of $2\pi$ about its long axis. Then it's impossible to straighten out the belt by moving it in space keeping the ends parallel; however, this can be done if the other end is rotated by an angle of $4\pi$!
Nevertheless we will try to explain that there is a surjective homomorphism $\Phi:\SU(2)\rar\SO(3)$ satisfying \begin{equation}\label{suteq1}\tag{SUT1} \Phi\Big(\exp\Big(\frac t{2i}(a_1\s_1+a_2\s_2+a_3\s_3)\Big)\Big) =\exp\Big(t(a_1l_1+a_2l_2+a_3l_3)\Big)~. \end{equation} Thus $\Phi$ is an orthogonal representation of $\SU(2)$ in $\R^3$. First of all, for a unit vector $a\in\R^3$ and $t\in\R$ we put \begin{eqnarray*} U(t/2,a) &\colon=&\exp\Big(\frac t{2i}(a_1\s_1+a_2\s_2+a_3\s_3)\Big)\\ &=&\cos(t/2)-i\sin(t/2)(a_1\s_1+a_2\s_2+a_3\s_3)\in\SU(2)~. \end{eqnarray*} For $(x_1,x_2,x_3)\in\R^3$ define $(y_1,y_2,y_3)\in\R^3$ by \begin{eqnarray*} y_1\s_1+y_2\s_2+y_3\s_3 &\colon=&U(t/2,a)(x_1\s_1+x_2\s_2+x_3\s_3)U(t/2,a)^{-1}\\ &=&\Ad(U(t/2,a))(x_1\s_1+x_2\s_2+x_3\s_3)\in i\,\su(2)~. \end{eqnarray*} Then the mapping $(x_1,x_2,x_3)\mapsto(y_1,y_2,y_3)$ is a positive isometry $R(t,a)$: its axis is $a$ and its angle of rotation is $t$ (this needs some computation, which we skip, but it's a good exercise to do it), i.e. $$ R(t,a)=\exp\Big(t(a_1l_1+a_2l_2+a_3l_3)\Big)~. $$ More generally, defining $\Phi(U)\in\SO(3)$, $\Phi(U):(x_1,x_2,x_3)\mapsto(y_1,y_2,y_3)$, where $$ y_1\s_1+y_2\s_2+y_3\s_3\colon=\Ad(U)(x_1\s_1+x_2\s_2+x_3\s_3), $$ we get a surjective homomorphism $\Phi:\SU(2)\rar\SO(3)$ with kernel $\{\pm1\}$. $\Phi$ maps $\exp(t\s_j/2i)$ to the rotation about the $j$-th axis through the angle $t$. Since $|\ker\Phi|=2$ the homomorphism $\Phi$ is said to be a double covering map and $\SU(2)$ is said to be a double cover of $\SO(3)$.
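The homomorphism $\Phi$ can be computed numerically: expanding $\Ad(U)$ in the basis $\s_1,\s_2,\s_3$ via the trace pairing $\tr(\s_j\s_k)=2\d_{jk}$ gives the matrix of $\Phi(U)$, which can be compared with Rodrigues' formula for $R(t,a)$. A sketch (assuming numpy and scipy; all names are ours):

```python
import numpy as np
from scipy.linalg import expm

# Sketch: Phi(U) in the basis sigma_1, sigma_2, sigma_3, compared with
# Rodrigues' formula for the rotation R(t,a); note Phi(-U) = Phi(U).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

def Phi(U):
    # column j: coefficients of U sigma_j U^* in the basis sigma_k,
    # extracted via the pairing tr(sigma_k sigma_j) = 2 delta_{jk}
    R = np.empty((3, 3))
    for j in range(3):
        A = U @ sigma[j] @ U.conj().T
        R[:, j] = [np.trace(sigma[k] @ A).real / 2 for k in range(3)]
    return R

t, a = 0.9, np.array([1.0, 2.0, 2.0]) / 3.0          # angle and unit axis
U = expm((t / (2 * 1j)) * (a[0] * s1 + a[1] * s2 + a[2] * s3))

K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
R = np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)  # exp(t sum a_j l_j)

assert np.allclose(Phi(U), R)
assert np.allclose(Phi(-U), Phi(U))                  # ker Phi = {1,-1}
```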
The above procedure mimics a general construction: if the Lie-algebras of two closed matrix subgroups are isomorphic and the first group is simply connected, then there is a homomorphism of the first group onto the second, provided the second is connected.

Spin $l/2$ Representations of $\SU(2)$ in ${\cal P}_l$

For $l\in\N_0$ denote by ${\cal P}_l$ the space of $l$-homogeneous polynomials on $\C^2$. Then $\dim{\cal P}_l=l+1$ and for $m=0,\ldots,l$ the polynomials $P_m(z)\colon=z_1^mz_2^{l-m}$ form a basis. All these spaces are of course subspaces of $L_2(\C^2,\g)$ or $L_2(S^3)$ endowed with the euclidean product $$ \la P,Q\ra \colon=\frac1{C_l}\int_{\C^2}P(z)\cl{Q(z)}e^{-\norm z^2/2}\,dz =\int_{S^3}P(z)\cl{Q(z)}\,\s(dz)~. $$
Compute the constant $C_l$, cf. exam.
Verify that the polynomials $P_0,\ldots,P_l$ are orthogonal in $L_2(\C^2,\g)$. Compare exam.
Let us consider the representation $\G$ of $\SU(2)$ in $L_2(\C^2,\g)$: $\G(U)f\colon=f\circ U^{-1}$. For a holomorphic function $f$ and $X\in\su(2)$ we get: $$ \g(X)f(z) =\ftdl t0 f(\exp(-tX)z) =-Df(z)(Xz) =-(\pa_1f(z),\pa_2f(z))Xz $$ In particular for $X=\s_1/2i,\s_2/2i,\s_3/2i$: \begin{eqnarray*} \g(\s_1/2i)f(z)&=&-\tfrac1{2i}(\pa_1f(z)z_2+\pa_2f(z)z_1),\\ \g(\s_2/2i)f(z)&=&\tfrac12(\pa_1f(z)z_2-\pa_2f(z)z_1),\\ \g(\s_3/2i)f(z)&=&\tfrac1{2i}(-\pa_1f(z)z_1+\pa_2f(z)z_2)~. \end{eqnarray*} The mapping $U:S^1\rar\SU(2)$, $U(a)\colon=diag\{a,a^{-1}\}$ is an isomorphism onto a subgroup of $\SU(2)$. By the spectral theorem of unitary operators, every element $u$ of $\SU(2)$ is conjugate to some $U(a)$, i.e. $gug^{-1}=U(a)$ for some $g\in\SU(2)$. Moreover, $U(a)$ and $U(b)$ are conjugate in $\SU(2)$ if and only if $b=a$ or $b=\bar a$.
Now we will show that the representations $\G:\SU(2)\rar\UU({\cal P}_l)$ are irreducible: For all $a\in S^1$ and all $l,m$ we have $$ \G(U(a))P_m(z) =P_m(U(a)^{-1}z) =P_m(a^{-1}z_1,az_2) =a^{l-2m}P_m(z) $$ i.e. $P_m$ is an eigen-function of $\G(U(a))$ with eigen-value $a^{l-2m}$. Next we choose $a\in S^1$ such that all values $a^l,a^{l-2},\ldots,a^{-l}$ are pairwise distinct; it follows that all eigen-values of $\G(U(a))$ are simple. Let $A\in\Hom({\cal P}_l)$ be an operator commuting with all $\G(U)$, $U\in\SU(2)$. Since all eigen-values of $\G(U(a))$ are simple, $P_m$ must also be an eigen-function of $A$, i.e. $AP_m=c_mP_m$. Finally, for $t\in\R$ let $R(t)$ be the rotation: $$ R(t)\colon=\left(\begin{array}{cc} \cos t&-\sin t\\ \sin t&\cos t \end{array}\right)\in\SU(2)~. $$ It follows that $$ \G(R(t))P_l(z) =P_l(R(-t)z) =(z_1\cos t+z_2\sin t)^l =\sum_mP_m(z){l\choose m}\cos^mt\sin^{l-m}t $$ and therefore: $$ A\G(R(t))P_l=\sum_mc_mP_m{l\choose m}\cos^mt\sin^{l-m}t~. $$ On the other hand we have $$ \G(R(t))AP_l =c_l\G(R(t))P_l =\sum_mc_lP_m{l\choose m}\cos^mt\sin^{l-m}t, $$ but this is only possible if for all $m=0,\ldots,l$: $c_m=c_l$, i.e. $A$ is a multiple of the identity.
Verify that the above representation $\G$ of $\SU(2)$ in ${\cal P}_l$ is irreducible by employing exam and exam.
For all $l\in\N_0$ the $(l+1)$-dimensional representation $\G:\SU(2)\rar\UU({\cal P}_l)$ is irreducible. Since the largest eigen-value of $L_3\colon=i\g(\s_3/2i)$ on ${\cal P}_l$ is $l/2$ it's called the spin $l/2$ representation.
$\proof$ We simply have to verify that the eigen-values of $L_3$ on ${\cal P}_l$ are $-l/2,-l/2+1,\ldots,l/2-1,l/2$: $$ L_3P_m(z) =i\g(\s_3/2i)P_m(z) =\tfrac1{2}(-\pa_1P_m(z)z_1+\pa_2P_m(z)z_2) =\tfrac12(l-2m)P_m(z)~. $$ $\eofproof$
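This eigen-value computation can be replayed symbolically (a minimal sketch, assuming sympy is available; all names are ours):

```python
import sympy as sp

# Sketch: L_3 P_m = ((l-2m)/2) P_m for P_m = z1^m z2^(l-m),
# with L_3 f = (-f_z1*z1 + f_z2*z2)/2 as derived above.
z1, z2 = sp.symbols('z1 z2')
l = 5
for m in range(l + 1):
    P = z1**m * z2**(l - m)
    L3P = (-sp.diff(P, z1) * z1 + sp.diff(P, z2) * z2) / 2
    assert sp.expand(L3P - sp.Rational(l - 2 * m, 2) * P) == 0
```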
Let us look at the spin $1/2$ representation $\G:\SU(2)\rar\UU({\cal P}_1)$ directly: $\G(U)$ maps the orthonormal basis $az_1$, $az_2$ onto $a(u_{11}^*z_1+u_{12}^*z_2)$, $a(u_{21}^*z_1+u_{22}^*z_2)$ and thus the matrix of $\G(U)$ with respect to this basis is given by $$ \left(\begin{array}{cc} u_{11}^*&u_{21}^*\\ u_{12}^*&u_{22}^* \end{array}\right) =\left(\begin{array}{cc} \bar u_{11}&\bar u_{12}\\ \bar u_{21}&\bar u_{22} \end{array}\right) $$ i.e. it's the contragredient representation of the standard representation of $\SU(2)$, which is equivalent to the standard representation of $\SU(2)$, cf. example.

Irreducible representations of $\su(2)$

Our final task will be to show that any finite dimensional irreducible unitary representation $\Psi:\SU(2)\rar\UU(E)$ is equivalent to one of the previous spin $l/2$ representations $\G:\SU(2)\rar\UU({\cal P}_l)$. Since $\SU(2)$ is simply connected it suffices to prove that an arbitrary irreducible representation $\psi:\su(2)\rar\Hom(E)$ is equivalent to the Lie-algebra representation $\g:\su(2)\rar\Hom({\cal P}_l)$. So we start with an arbitrary irreducible representation $\psi:\su(2)\rar\Hom(E)$. In order to guarantee the existence of eigen-vectors we need to complexify $\su(2)$, i.e. we start with a complex representation $\psi:\sla(2,\C)\rar\Hom(E)$. We choose the following three matrices as basis for $\sla(2,\C)$ (none of these matrices lies in $\su(2)$!) $$ H\colon=\left(\begin{array}{cc} 1&0\\ 0&-1 \end{array}\right),\quad X\colon=\left(\begin{array}{cc} 0&1\\ 0&0 \end{array}\right),\quad Y\colon=\left(\begin{array}{cc} 0&0\\ 1&0 \end{array}\right)~. $$ The crucial commutator relations are \begin{equation}\label{suteq2}\tag{SUT2} [H,X]=2X,\quad [H,Y]=-2Y,\quad [X,Y]=H~. \end{equation}
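The relations \eqref{suteq2} are of course immediate to verify, e.g. numerically (a minimal sketch, assuming numpy; all names are ours):

```python
import numpy as np

# Sketch: the commutator relations (SUT2) for the basis H, X, Y of sl(2,C).
H = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
comm = lambda A, B: A @ B - B @ A

assert np.allclose(comm(H, X), 2 * X)
assert np.allclose(comm(H, Y), -2 * Y)
assert np.allclose(comm(X, Y), H)
```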
The Lie-algebras $\sla(2,\R)$ and $\su(2)$ are not isomorphic.
$H,X$ and $Y$ form a basis for $\sla(2,\R)$ and by the above commutation properties the algebra generated by $H$ and $X$ is two dimensional. A basis of $\su(2)$ is given by $a_j\colon=\tfrac1{2i}\s_j$, $j=1,2,3$ (cf. exam), which satisfy the commutation relations: $$ [a_1,a_2]=a_3,\quad [a_2,a_3]=a_1,\quad [a_3,a_1]=a_2~. $$ It remains to prove that given two linearly independent linear combinations $x$ and $y$ of $a_1,a_2,a_3$, their commutator $[x,y]$ is linearly independent of $x$ and $y$, so that $\su(2)$ has no two dimensional subalgebra. This follows from the fact that the mapping $a_j\to e_j$ is a Lie-algebra isomorphism of $\su(2)$ onto the Lie-algebra $\R^3$ with $[x,y]\colon=x\times y$ (cf. section) and the fact that the vector product of two linearly independent vectors $x$ and $y$ is a non-zero vector orthogonal to both $x$ and $y$.
The standard representation $A\mapsto A$ of $\sla(2,\C)$ is equivalent to its dual representation $A\mapsto-A^t$.
For all $A\in\sla(2,\C)$ we have $-A^t=JAJ^{-1}$, where $$ J=\left(\begin{array}{cc} 0&1\\ -1&0 \end{array}\right)~. $$
Suppose $A,B\in\Hom(E)$ satisfy $[A,B]=\b B$. If $x$ is an eigen-vector for $A$ with eigen-value $\a$, then $B^nx$ is either zero or it is an eigen-vector for $A$ with corresponding eigen-value $\a+n\b$. $B$ is called a ladder operator for $A$.
$\proof$ We have: $ABx=[A,B]x+BAx=\b Bx+\a Bx$ and by induction on $n$: $$ AB^{n+1}x =[A,B]B^nx+BAB^nx =\b B^{n+1}x+(\a+n\b)B^{n+1}x =(\a+(n+1)\b)B^{n+1}x~. $$ $\eofproof$
Suppose $A,B\in\Hom(E)$ satisfy $[A,B]=\b B$. Prove by induction on $n$ that for all $n\in\N$ and all $\l\in\C$: $(A-\l-\b)^nB=B(A-\l)^n$. Suggested solution.
Now let $\psi:\sla(2,\C)\rar\Hom(E)$ be any finite dimensional representation. From the above commutation relations \eqref{suteq2} we infer that if $x$ is an eigen-vector for $\psi(H)$ with eigen-value $\l$, then $\psi(X)x$ and $\psi(Y)x$ are eigen-vectors of $\psi(H)$ with corresponding eigen-values $\l+2$ and $\l-2$, for $\psi(X)$ and $\psi(Y)$ are ladder operators for $\psi(H)$. Now choose an eigen-vector $u_\l$ of $\psi(H)$ whose eigen-value has maximal real part. The vector $u_{\l-2m}\colon=\psi(Y)^mu_\l$ is either zero or an eigen-vector of $\psi(H)$ with eigen-value $\l-2m$ and since $[X,Y]=H$, we conclude that \begin{eqnarray*} \psi(X)u_{\l-2(m+1)} &=&[\psi(X),\psi(Y)]u_{\l-2m}+\psi(Y)\psi(X)u_{\l-2m}\\ &=&\psi(H)u_{\l-2m}+\psi(Y)\psi(X)u_{\l-2m} =(\l-2m)u_{\l-2m}+\psi(Y)\psi(X)u_{\l-2m} \end{eqnarray*} For $m=0$ we have by maximality of $\Re\l$: $\psi(X)u_\l=0$ and thus: $\psi(X)u_{\l-2}=\l u_\l$, $\psi(X)u_{\l-4}=(\l-2)u_{\l-2}+\l \psi(Y)u_{\l}=2(\l-1)u_{\l-2}$ and by induction: \begin{equation}\label{suteq3}\tag{SUT3} \psi(X)u_{\l-2m}=m(\l-m+1)u_{\l-2(m-1)}, \end{equation} which implies that if $u_{\l-2(m-1)}\neq0$ and $m\notin\{0,\l+1\}$ then $u_{\l-2m}\neq0$. However, since $E$ is finite-dimensional and the vectors $u_\l,u_{\l-2},\ldots$ are eigen-vectors of $\psi(H)$ for different eigen-values, i.e. they are linearly independent (cf. e.g. lemma), there must be some $l$ such that $u_{\l-2l}\neq0$ and $u_{\l-2(l+1)}=\psi(Y)u_{\l-2l}=0$, implying: $\psi(X)u_{\l-2(l+1)}=0$, which by \eqref{suteq3} can only happen if $(l+1)(\l-l)=0$, i.e. $\l=l$. It follows that the space generated by $u_l,u_{l-2},\ldots,u_{-l}$ is invariant and \begin{eqnarray*} \psi(H)u_{l-2m}&=&(l-2m)u_{l-2m},\\ \psi(X)u_{l-2m}&=&m(l-m+1)u_{l-2(m-1)},\\ \psi(Y)u_{l-2m}&=&u_{l-2m-2}~. \end{eqnarray*} Since $u_l,u_{l-2},\ldots,u_{-l}$ are linearly independent, they span a space of dimension $l+1$.
Suppose $\psi:\sla(2,\C)\rar\Hom(E)$ is an arbitrary finite dimensional representation. Then the following holds: 1. Every eigen-value of $\psi(H)$ is an integer and if an eigen-vector $x$ of $\psi(H)$ with eigen-value $\l$ satisfies $\psi(X)x=0$, then $\l\in\N_0$. 2. If $\l\in\Z$ is an eigen-value of $\psi(H)$, then so are $-|\l|,-|\l|+2,\ldots,|\l|-2,|\l|$. 3. $\psi(X)$ and $\psi(Y)$ are nilpotent.
$\proof$ We only need to prove the third statement: By the following exam we have that if $x\in\ker(\psi(H)-\l)^n$, then $\psi(X)x\in\ker(\psi(H)-\l-2)^n$. Hence $\psi(X)$ must be nilpotent, for otherwise the eigen-values of $\psi(H)$ would not be bounded from above. $\eofproof$
From exam infer that if $x\in E(\l)\colon=\ker(\psi(H)-\l)^n$, then $\psi(X)x\in E(\l+2)$ and $\psi(Y)x\in E(\l-2)$.
If $\psi$ is in addition irreducible, then we conclude that: $\dim E=l+1$. Conversely, given a basis $u_l,u_{l-2},\ldots,u_{-l}$ of $\C^{l+1}$ we define $\psi_l(H)$, $\psi_l(X)$ and $\psi_l(Y)$ by \begin{eqnarray}\label{suteq4}\tag{SUT4} \psi_l(H)u_{l-2m}&=&(l-2m)u_{l-2m},\\ \psi_l(X)u_{l-2m}&=&m(l-m+1)u_{l-2(m-1)},\\ \psi_l(Y)u_{l-2m}&=&u_{l-2m-2}, \quad u_{-l-2}=0~. \end{eqnarray} Then it is easily checked that $\psi_l$ is an irreducible representation. So for every $l\in\N_0$ there is exactly one irreducible representation of $\SU(2)$ of dimension $l+1$ and since the spin $l/2$ representation is irreducible and of dimension $l+1$ we thus have established the first two parts of the following
1. For every irreducible representation $\psi:\sla(2,\C)\rar\Hom(E)$ there is some $l\in\N_0$ such that $\psi$ is equivalent to $\psi_l$. 2. Every irreducible representation $\Psi:\SU(2)\rar\UU(E)$ is equivalent to a spin $l/2$ representation $\G:\SU(2)\rar\UU({\cal P}_l)$ for some $l\in\N_0$. 3. The character $\chi_l$ corresponding to the irreducible representation $\Psi_l:\SU(2)\rar\UU(l+1)$ is given by $$ \chi_l(e^{itH})=\frac{\sin((l+1)t)}{\sin t}~. $$
$\proof$ 3. Every $U\in\SU(2)$ is conjugate to $e^{itH}$ for some $t\in\R$ and thus $\chi_l(U)=\chi_l(e^{itH})=\tr\exp(it\psi_l(H))$. Now, the eigen-values of $\psi_l(H)$ are $l,l-2,\ldots,-l$ and therefore $$ \chi_l(e^{itH}) =\tr\exp(it\psi_l(H)) =e^{itl}+e^{it(l-2)}+\cdots+e^{-itl} =\frac{\sin((l+1)t)}{\sin t}~. $$ $\eofproof$
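Both the relations \eqref{suteq2} for the matrices defined by \eqref{suteq4} and the character formula of part 3 can be checked numerically (a minimal sketch, assuming numpy; the function name psi and all other names are ours):

```python
import numpy as np

# Sketch: matrices of psi_l in the basis u_l, u_{l-2}, ..., u_{-l}
# (column m <-> u_{l-2m}), built from (SUT4); check (SUT2) and the character.
def psi(l):
    H = np.diag([l - 2.0 * m for m in range(l + 1)])
    X = np.zeros((l + 1, l + 1))
    Y = np.zeros((l + 1, l + 1))
    for m in range(1, l + 1):
        X[m - 1, m] = m * (l - m + 1)   # psi_l(X) u_{l-2m} = m(l-m+1) u_{l-2(m-1)}
        Y[m, m - 1] = 1.0               # psi_l(Y) u_{l-2(m-1)} = u_{l-2m}
    return H, X, Y

comm = lambda A, B: A @ B - B @ A
t = 0.63
for l in range(1, 7):
    H, X, Y = psi(l)
    assert np.allclose(comm(H, X), 2 * X)
    assert np.allclose(comm(H, Y), -2 * Y)
    assert np.allclose(comm(X, Y), H)
    chi = np.exp(1j * t * np.diag(H)).sum()   # tr exp(it psi_l(H))
    assert np.isclose(chi, np.sin((l + 1) * t) / np.sin(t))
```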
The spin $l/2$ representations for $l=1,2,3$ are given by: $\psi_1(H)$ sends $e_{-1}$ to $-e_{-1}$ and $e_1$ to $e_1$, also: $\psi_1(X):e_{-1}\to e_1,e_1\to0$, $\psi_1(Y):e_{-1}\to0,e_1\to e_{-1}$.
$\psi_2(H):e_{-2}\to-2e_{-2},e_{0}\to0,e_2\to2e_2$, $\psi_2(X):e_{-2}\to2e_0,e_0\to2e_2,e_2\to0$, $\psi_2(Y):e_{-2}\to0,e_0\to e_{-2},e_2\to e_0$.
$\psi_3(H):e_{-3}\to-3e_{-3},e_{-1}\to-e_{-1},e_1\to e_1,e_3\to 3e_3$, $\psi_3(X):e_{-3}\to3e_{-1},e_{-1}\to4e_1,e_1\to3e_3,e_3\to0$, $\psi_3(Y):e_{-3}\to0,e_{-1}\to e_{-3},e_1\to e_{-1},e_3\to e_1$.
The following diagrams depict these representations: The red circle indicates the action of $\psi_l(H)$, it maps each of the basis vectors to a multiple of this vector and this multiple is the encircled number. $\psi_l(X)$ and $\psi_l(Y)$ act on the basis vectors as indicated by the blue and the green arrows. If there is no outgoing blue or green arrow from one of the circles, then the corresponding basis vector is mapped by $\psi_l(X)$ or $\psi_l(Y)$ to the null vector!
[Figure: the spin $l/2$ representations for $l=1,2,3$]
Compute for any representation $\psi:\sla(2,\C)\rar\Hom(E)$: $e^{\ad(\psi(X))}(\psi(H))$ and $e^{\ad(\psi(X))}(\psi(Y))$.
Since $\ad(\psi(X))(\psi(H))=\psi([X,H])=-2\psi(X)$ and $$ \ad(\psi(X))^2(\psi(H)) =\ad(\psi(X))(\ad(\psi(X))(\psi(H))) =-2\ad(\psi(X))(\psi(X))=0 $$ it follows that $e^{\ad(\psi(X))}(\psi(H))=\psi(H)-2\psi(X)$. In an analogous manner: $\ad(\psi(X))(\psi(Y))=\psi([X,Y])=\psi(H)$ and $$ \ad(\psi(X))^2(\psi(Y)) =\ad(\psi(X))(\ad(\psi(X))(\psi(Y))) =\ad(\psi(X))(\psi(H))=-2\psi(X) $$ and thus $\ad(\psi(X))^3(\psi(Y))=0$; it follows that $$ e^{\ad(\psi(X))}(\psi(Y))=\psi(Y)+\psi(H)-\psi(X)~. $$
Prove that the operator $L^2\colon=\psi(H)^2+2\psi(X)\psi(Y)+2\psi(Y)\psi(X)$ commutes with all operators $\psi(H),\psi(X),\psi(Y)$ (Casimir invariant). If $\psi$ is the irreducible representation of dimension $l+1$, then $L^2$ is just multiplication by $l(l+2)$.
$[\psi(H)^2,\psi(X)]=\psi(H)[\psi(H),\psi(X)]+[\psi(H),\psi(X)]\psi(H)=2\psi(H)\psi(X)+2\psi(X)\psi(H)$, $[\psi(X)\psi(Y),\psi(X)]=\psi(X)[\psi(Y),\psi(X)]=-\psi(X)\psi(H)$, $[\psi(Y)\psi(X),\psi(X)]=[\psi(Y),\psi(X)]\psi(X)=-\psi(H)\psi(X)$ and thus $[L^2,\psi(X)]=0$. Moreover, $$ L^2u_l =\psi(H)^2u_l+2\psi(X)\psi(Y)u_l =l^2u_l+2\psi(X)u_{l-2} =l^2u_l+2lu_l =l(l+2)u_l $$ and thus: $L^2\psi(Y)^mu_l=\psi(Y)^mL^2u_l=l(l+2)\psi(Y)^mu_l$.
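A numerical version of this computation (assuming numpy; the matrices are built from \eqref{suteq4} as in the sketch above):

```python
import numpy as np

# Sketch: H^2 + 2XY + 2YX = l(l+2)*Id for the irreducible psi_l of (SUT4).
for l in range(1, 7):
    H = np.diag([l - 2.0 * m for m in range(l + 1)])
    X = np.zeros((l + 1, l + 1)); Y = np.zeros((l + 1, l + 1))
    for m in range(1, l + 1):
        X[m - 1, m] = m * (l - m + 1)
        Y[m, m - 1] = 1.0
    L2 = H @ H + 2 * (X @ Y + Y @ X)
    assert np.allclose(L2, l * (l + 2) * np.eye(l + 1))
```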

Tensor products: Clebsch-Gordan Theory

There's another, more general way of producing all irreducible representations of $\sla(2,\C)$ (cf. e.g. section): We start with the spin $1/2$ representation, i.e. the standard representation of $\sla(2,\C)$, and build the tensor product: for all $A\in\sla(2,\C)$: $\pi(A)\colon=A\otimes1+1\otimes A$. The vector $w\colon=e_1\otimes e_1$ is an eigen-vector of $\pi(H)$ with maximal eigen-value $2$ and therefore $\pi(X)w=0$; $\pi(Y)w$ and $\pi(Y)^2w$ are eigen-vectors for $\pi(H)$ with eigen-values $0$ and $-2$ and $\pi(Y)^3w=0$. By the previous subsection the space $E$ generated by $w,\pi(Y)w,\pi(Y)^2w$ is $\pi$-invariant (cf. also section) and the representation $A\mapsto\pi(A)|E$ is equivalent to the spin $1$ representation. The same procedure applied to the $l$-fold tensor product of the standard representation and $$ \pi(A)\colon=A\otimes1\otimes\cdots\otimes1 +\cdots +1\otimes\cdots\otimes1\otimes A $$ gives a representation equivalent to the spin $l/2$ representation: we start with $e_1\otimes\cdots\otimes e_1$ and apply $\pi(Y)$ repeatedly until we end up with $0$.
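A small numerical sketch of this construction on $\C^2\otimes\C^2$ (assuming numpy; all names are ours):

```python
import numpy as np

# Sketch: pi(A) = A⊗1 + 1⊗A on C^2⊗C^2; the orbit of w = e_1⊗e_1 under pi(Y)
# carries the spin 1 representation. Basis order: e_1 = (1,0), e_{-1} = (0,1).
H = np.diag([1., -1.]); X = np.array([[0., 1.], [0., 0.]]); Y = X.T
I2 = np.eye(2)
pi = lambda A: np.kron(A, I2) + np.kron(I2, A)

w = np.kron([1., 0.], [1., 0.])
v0, v1, v2 = w, pi(Y) @ w, pi(Y) @ (pi(Y) @ w)

assert np.allclose(pi(X) @ w, 0)           # w is a highest weight vector
assert np.allclose(pi(H) @ v0, 2 * v0)     # eigen-values 2, 0, -2
assert np.allclose(pi(H) @ v1, 0 * v1)
assert np.allclose(pi(H) @ v2, -2 * v2)
assert np.allclose(pi(Y) @ v2, 0)          # pi(Y)^3 w = 0
```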
1. Suppose an operator $L\in\Hom(\C^2\otimes\C^2)$ commutes with all $\pi(A)$, $A\in\sla(2,\C)$; show that $L$ can have at most two distinct eigen-values and that $e_1\otimes e_{-1}-e_{-1}\otimes e_1$ must be an eigen-vector. 2. What does $L$ look like?
The space carrying the spin $1$ representation is generated by $$ e_1\otimes e_1,\quad \pi(Y)(e_1\otimes e_1)=e_{-1}\otimes e_1+e_1\otimes e_{-1}\quad\mbox{and}\quad \pi(Y)^2(e_1\otimes e_1)=2e_{-1}\otimes e_{-1}~. $$ Thus the space carrying the trivial representation is generated by $e_{-1}\otimes e_1-e_1\otimes e_{-1}$ - check it! 2. The operator $L$ is a multiple $2a$ of the orthogonal projection on the three dimensional subspace plus another multiple $2b$ of the orthogonal projection on the one dimensional subspace. Hence we get for the matrix of $L$ with respect to the basis $e_1\otimes e_1$, $e_1\otimes e_{-1}$, $e_{-1}\otimes e_1$, $e_{-1}\otimes e_{-1}$: $$ \left(\begin{array}{cccc} 2a&0&0&0\\ 0&a+b&a-b&0\\ 0&a-b&a+b&0\\ 0&0&0&2a \end{array}\right) $$ For $a=-b=1/2$ this is called the spin exchange operator, cf. exam. Apart from multiples of the identity the spin exchange operator is essentially the only operator which commutes with the representation $\pi$; since $\sla(2,\C)$ is also the complexification of $\so(3)\simeq\su(2)$ this can be interpreted as follows: the spin exchange operator is essentially the only self-adjoint operator in $\Hom(\C^2\otimes\C^2)$ that doesn't depend on the choice of an orthonormal basis for $\R^3$, i.e. it commutes with the group representation $\Pi:\SU(2)\rar\Gl(4,\C)$ associated with the algebra representation $\pi:\su(2)\rar\Ma(4,\C)$.
For $n\in\N_0$ let $\psi_n:\sla(2,\C)\rar\Hom(E_n)$ be a spin $n/2$ representation. For $m\geq n$ put $\pi=\psi_m\otimes1+1\otimes\psi_n$. Then $\pi$ is equivalent to the sum of irreducible representations: $$ \pi\simeq\psi_{m+n}\oplus\psi_{m+n-2}\oplus\cdots\oplus\psi_{m-n}~ $$
$\proof$ We will assume that we have chosen euclidean products such that all representations restricted to $\su(2)$ are skew-symmetric; this implies that $\psi_m(H)$, $\psi_n(H)$ and $\pi(H)$ are self-adjoint! Indeed, $iH\in\su(2)$ and $\psi(iH)=i\psi(H)$ is skew-symmetric by assumption for $\psi=\psi_m,\psi_n,\pi$. Now we choose bases $e_{-m},e_{-m+2},\ldots,e_m$ and $b_{-n},b_{-n+2},\ldots,b_n$ of $E_m$ and $E_n$, respectively, such that $\psi_m(H)e_j=je_j$, $\psi_m(Y)e_j=e_{j-2}$ and $\psi_n(H)b_k=kb_k$, $\psi_n(Y)b_k=b_{k-2}$ (cf. \eqref{suteq4}). Then $$ \pi(H)e_j\otimes b_k=(j+k)e_j\otimes b_k, $$ i.e. the eigen-values of $\pi(H)$ are $m+n-2l$, $l\in\{0,1,\ldots,m+n\}$, with multiplicities $d_{m+n-2l}$; these are given by the number of pairs $(j,k)$ such that $j+k=l$, $j\in\{0,1,\ldots,m\}$, $k\in\{0,1,\ldots,n\}$. From the picture below we infer that this number is $1$ for $l=0$ and it increases by $1$ until we reach $l=n$, where we get $n+1$ such pairs. From there it remains constant until we reach $l=m$, where it is still $n+1$. Eventually it decreases by $1$ until we end up with $l=m+n$, where it is $1$. Thus we have $$ d_{n+m-2l} =\left\{\begin{array}{cl} l+1&\mbox{if $l\leq n$}\\ n+1&\mbox{if $n\leq l\leq m$}\\ n+m-l+1&\mbox{if $m\leq l$}~. \end{array}\right. $$
[Figure: the pairs $(j,k)$ with $j+k=l$, illustrating the multiplicities $d_{m+n-2l}$]
Applying $\pi(Y)$ to $w_1\colon=e_m\otimes b_n$ repeatedly generates $n+m+1$ eigen-vectors of $\pi(H)$ with eigen-values $m+n,m+n-2,\ldots,-m-n$. The restriction of $\pi$ to this subspace $F$ gives a representation equivalent to the spin $(m+n)/2$ representation of $\sla(2,\C)$. Since $\pi(Z)$ is skew-symmetric for all $Z\in\su(2)$, it follows that $F^\perp$ is invariant under $\su(2)$; but $F^\perp$ is a complex subspace and thus it must be invariant under the complexification $\sla(2,\C)$ of $\su(2)$. Moreover, since $\pi(H)$ is self-adjoint the restriction $\pi(H)|F^\perp$ must have the eigen-values $m+n-2,\ldots,-(m+n-2)$ with multiplicities lowered by one - because for each of these eigen-values we have taken exactly one eigen-vector to generate $F$. In particular the eigen-value $m+n-2$ must be simple; let $w_2$ be the corresponding eigen-vector, then $\pi(X)w_2=0$ and thus we get a subspace $G$ of $F^\perp$ by applying $\pi(Y)$ repeatedly to $w_2$; the restriction of $\pi$ to this subspace is equivalent to $\psi_{m+n-2}$, etc. $\eofproof$
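The multiplicity count in the proof can be verified mechanically: the weights of $\pi(H)$, counted with multiplicity, coincide with the union of the weights of $\psi_{m+n},\psi_{m+n-2},\ldots,\psi_{m-n}$. A sketch (plain Python, using only the standard library; all names are ours):

```python
from collections import Counter

# Sketch: weights of psi_m(H)⊗1 + 1⊗psi_n(H), with multiplicity, equal the
# union of the weights of psi_{m+n}, psi_{m+n-2}, ..., psi_{m-n}.
m, n = 5, 3
tensor = Counter(j + k for j in range(-m, m + 1, 2)
                       for k in range(-n, n + 1, 2))

summands = Counter()
for l in range(m - n, m + n + 1, 2):
    summands.update(range(-l, l + 1, 2))    # weights of psi_l

assert tensor == summands
```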
In QM this is called spin addition or addition of angular momentum: if you combine a spin $m/2$ particle and a spin $n/2$ particle you get a spin $l/2$ particle, but only the values $l=m+n,m+n-2,\ldots,|m-n|$ are possible.

Irreducible representations of $\SO(3)$

Let $\Phi:\SU(2)\rar\SO(3)$ be the homomorphism constructed in subsection. If $\Psi:\SO(3)\rar\UU(E)$ is an irreducible representation, then $\wt\Psi\colon=\Psi\circ\Phi$ is an irreducible representation of $\SU(2)$ such that $\wt\Psi(-1)=1_E$, i.e. $\wt\Psi$ is equivalent to a spin $l/2$ representation $\Psi_l:\SU(2)\rar\UU(l+1)$ and since $\Psi_l(-1)=1$ iff $(-z_1)^{l-m}(-z_2)^m=z_1^{l-m}z_2^m$, it follows that $l\in2\N_0$. Conversely, each spin $l$ representation $\Psi_{2l}$ of $\SU(2)$, $l\in\N_0$, descends to a representation $\wt\Psi_{2l}$ of $\SO(3)$ - which we will call the spin $l$ representation of $\SO(3)$ - i.e. for all $g\in\SU(2)$ $$ \Psi(\Phi(g))=\wt\Psi(g)=J\Psi_{2l}(g)J^{-1}=J\wt\Psi_{2l}(\Phi(g))J^{-1}, $$ i.e. for all $U\in\SO(3)$ the spin $l$ representation of $\SO(3)$ is given by: $\Psi(U)=J\wt\Psi_{2l}(U)J^{-1}$.
Suppose $\Psi:\SO(3)\rar\UU(E)$ is a finite dimensional irreducible representation. Then $\dim E=2l+1$ for some $l\in\N_0$ and $\Psi$ is equivalent to the spin $l$ representation of $\SO(3)$.
Finally we want to show that the spin $l$ representation of $\SO(3)$ is equivalent to the representation $\G:\SO(3)\rar\UU({\cal H}_l)$ from subsection. To this end it suffices to verify that the corresponding Lie-algebra representation $\g:\so(3)\rar\Hom({\cal H}_l)$ is equivalent to the irreducible representation of $\su(2)$ in dimension $2l+1$. The construction will essentially duplicate the construction for the irreducible representations of $\su(2)$ given in subsection. The differential $\g(X)\colon=D\G(1)$ is given by (cf. subsection): \begin{eqnarray*} \g(l_1)f&=&z\pa_yf-y\pa_zf,\\ \g(l_2)f&=&x\pa_zf-z\pa_xf,\\ \g(l_3)f&=&y\pa_xf-x\pa_yf~. \end{eqnarray*} Formally these are the components of the vector product of $\nabla f$ and the position vector $(x,y,z)$! As in subsection we define the operators $L_j\colon=i\g(l_j)$ - in QM these are the angular momentum operators (cf. wikipedia).
Why don't we need to check that all these operators $L_j$ map ${\cal H}_l$ into itself? Why are these operators $L_j$ self-adjoint?
Defining the complex variable $w\colon=x+iy$, verify that $L_3f=w\pa_wf-\bar w\pa_{\bar w}f$, where $\pa_w$ and $\pa_{\bar w}$ denote the Wirtinger derivatives.
Compute $L_jf$ for $f(x,y,z)=u(x^2+y^2+z^2)$ and some smooth function $u:\R^+\rar\C$.
In addition we define operators $$ L_+\colon=L_1+iL_2 \quad\mbox{and}\quad L_-\colon=L_1-iL_2~. $$ The commutation properties of $l_1,l_2,l_3$ and the fact that $\g$ is a Lie-algebra homomorphism imply the following relations: $$ [L_1,L_2]=iL_3,\quad [L_2,L_3]=iL_1,\quad [L_3,L_1]=iL_2 $$ and therefore $$ [L_3,L_-]=-L_-,\quad [L_3,L_+]=L_+,\quad [L_+,L_-]=2L_3, $$ i.e. $L_-$ and $L_+$ are ladder operators for $L_3$ and since the operators $L_j$ are self-adjoint: $L_+=L_-^*$. By the way, the operators $2L_3,L_+,L_-$ satisfy the same commutation relations as the operators $\psi(H),\psi(X),\psi(Y)$, cf. \eqref{suteq2}. If we knew that $\g:\so(3)\rar\Hom({\cal H}_l)$ is irreducible then, by the fact that its dimension equals $2l+1$, we'd infer that $\g$ is equivalent to the spin $l$ representation $\psi_{2l}$ of $\su(2)$. We will prove that $L_3$ has only simple eigen-values and thus an irreducible subrepresentation of $\g$ must contain at least one of its eigen-vectors. Moreover, we will also prove that $L_1$ and $L_2$ applied to this eigen-vector generate the whole space ${\cal H}_l$, which demonstrates that $\g$ is irreducible.
The operator $L^2\colon=L_1^2+L_2^2+L_3^2$ commutes with all operators $L_j$. In QM $L^2$ is called the square of the magnitude of the angular momentum.
With the help of the operator $L^2$ we also get: \begin{equation}\label{suteq5}\tag{SUT5} L_+L_-=L_1^2+L_2^2+iL_2L_1-iL_1L_2=L^2-L_3^2+L_3 \end{equation} As in subsection we start with the eigen-function of $L_3$ with maximal eigen-value. This time it's the polynomial $P_l\colon=(x+iy)^l$; we have $P_l\in{\cal H}_l$, for $x+iy\mapsto(x+iy)^l$ is holomorphic and thus harmonic. Furthermore $$ L_3P_l =i(y\pa_xP_l-x\pa_yP_l) =i(yl(x+iy)^{l-1}-xli(x+iy)^{l-1}) =lP_l $$ i.e. $P_l$ is an eigen-function of $L_3$ with eigen-value $l$; next we calculate $L_+P_l$ and $L_-P_l$: \begin{eqnarray*} L_1P_l&=&i(z\pa_yP_l-y\pa_zP_l)=-lz(x+iy)^{l-1}\\ L_2P_l&=&i(x\pa_zP_l-z\pa_xP_l)=-ilz(x+iy)^{l-1}~. \end{eqnarray*} It follows that we indeed have $L_+P_l=0$. Thus the eigen-vectors of $L_3$ should be obtained by applying $L_-$ to $P_l$ successively: since $L_-$ is a ladder operator for $L_3$, we infer from $[L_3,L_-]=-L_-$ that $L_3P_l^m=(l-m)P_l^m$, where $P_l^m\colon=L_-^mP_l$. Now we calculate $L^2P_l$.
Verify the following formulas: \begin{eqnarray*} L_1^2P_l&=&l(l-1)(x+iy)^{l-2}z^2+il(x+iy)^{l-1}y\\ L_2^2P_l&=&l(x+iy)^{l-1}x-l(l-1)(x+iy)^{l-2}z^2\\ L^2P_l&=&l(l+1)P_l \end{eqnarray*}
This implies that the operator $L^2$ is just multiplication by $l(l+1)$ on the space generated by $P_l^m$, $m\in\N_0$, indeed $$ L^2P_l^m=L^2L_-^mP_l=L_-^mL^2P_l=l(l+1)L_-^mP_l=l(l+1)P_l^m~. $$ Finally we just need to prove that the polynomials $P_l^m$, $m=0,\ldots,2l$ are linearly independent. Since these polynomials are eigen-functions of $L_3$ with pairwise different eigen-values, it suffices to show that for all $m=0,\ldots,2l$: $P_l^m\neq0$. Here we could just copy the argument from subsection, however, the following argument allows the computation of the norms: from $L^2P_l^m=l(l+1)P_l^m$, $L_3P_l^m=(l-m)P_l^m$, $L_-^*=L_+$ and \eqref{suteq5} we conclude: \begin{eqnarray*} \Vert P_l^{m+1}\Vert^2 &=&\Vert L_-P_l^m\Vert^2 =\la L_+L_-P_l^m,P_l^m\ra =\la(L^2-L_3^2+L_3)P_l^m,P_l^m\ra\\ &=&(l(l+1)-(l-m)^2+(l-m))\Vert P_l^m\Vert^2 =(2l-m)(m+1)\Vert P_l^m\Vert^2 \end{eqnarray*} which readily implies that for all $m=0,\ldots,2l$: $P_l^m\neq0$ and $P_l^{2l+1}=0$.
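The basic formulas of this subsection lend themselves to a symbolic check (a minimal sketch, assuming sympy is available; all names are ours):

```python
import sympy as sp

# Sketch: P_l = (x+iy)^l is harmonic, L_3 P_l = l P_l, L_+ P_l = 0 and
# L^2 P_l = l(l+1) P_l, with L_j = i*gamma(l_j) as above.
x, y, z = sp.symbols('x y z')
l = 4
P = (x + sp.I * y)**l

L1 = lambda f: sp.I * (z * sp.diff(f, y) - y * sp.diff(f, z))
L2 = lambda f: sp.I * (x * sp.diff(f, z) - z * sp.diff(f, x))
L3 = lambda f: sp.I * (y * sp.diff(f, x) - x * sp.diff(f, y))

assert sp.expand(sp.diff(P, x, 2) + sp.diff(P, y, 2) + sp.diff(P, z, 2)) == 0
assert sp.expand(L3(P) - l * P) == 0
assert sp.expand(L1(P) + sp.I * L2(P)) == 0              # L_+ P_l = 0
Casimir = L1(L1(P)) + L2(L2(P)) + L3(L3(P))
assert sp.expand(Casimir - l * (l + 1) * P) == 0         # L^2 P_l
```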
Compute the norm of $P_l^m$ in terms of the norm of $P_l$.
For all $f,g\in C^\infty(\R^3)$ we have: $L_j(fg)=(L_jf)g+f(L_jg)$.
Let $P$ be the polynomial $P=(x+iy)^n(x-iy)^mz^k$ for $n,m,k\in\N_0$. Verify that $L_jP$ is always a linear combination of polynomials of this type. For which values of $n,m,k$ is $P$ harmonic? Suggested solution.
What does it mean to a physicist that $L^2$ is just multiplication by $l(l+1)$ on ${\cal H}_l$? Well, this space is a model for the state space of a spin $l$ particle and the magnitude of the angular momentum of that particle is $\sqrt{l(l+1)}$.
Let $a=(a_1,a_2,a_3)$ be a unit vector in $\R^3$; $L_a\colon=a_1L_1+a_2L_2+a_3L_3$ is an observable, it's the angular momentum operator in the direction of $a$. Verify that: $$ L_a\G(U)=\G(U)L_3, $$ where $U\in\SO(3)$ satisfies $Ue_3=a$. Conclude that the eigen-values of $L_a$ are: $l,l-1,\ldots,-l+1,-l$.
The point is that $l_a=Ul_3U^{-1}$ (cf. subsection), because for all $x\in\R^3$: $l_a(x)=a\times x$ and therefore $$ Ul_3U^{-1}x=U(e_3\times U^{-1}x)=Ue_3\times x=a\times x=l_a(x)~. $$ Now by exam for all $U\in\SO(3)$ and all $X\in\so(3)$: $\psi(\Ad(U)X)=\Ad(\Psi(U))\psi(X)$; from this it follows that $$ L_a =i\g(l_a) =i\g(Ul_3U^{-1}) =i\G(U)\g(l_3)\G(U)^{-1} =\G(U)L_3\G(U)^{-1}~. $$ If $L_af=\l f$, then for $g\colon=f\circ U=\G(U^{-1})f$: $L_3g=\l g$.
Let $a=(a_1,a_2,a_3)$ be a unit vector in $\R^3$; $S_a\colon=\tfrac12(a_1\s_1+a_2\s_2+a_3\s_3)$. Verify that there is some $U\in\SU(2)$ such that $S_a=U(\s_3/2)U^{-1}$, i.e. $2S_a$ and $\s_3$ are similar.
For the spin $l/2$ representation $\psi_l:\sla(2,\C)\rar\Hom(\C^{l+1})$ the operator $\psi_l(S_a)$ is called the angular momentum operator of a spin $l/2$ particle in the direction of $a$. The above example proves that $\psi_l(S_a)$ has eigen-values: $l/2,l/2-1,l/2-2,\ldots,-l/2$.
Verify that the character $\chi$ of the spin $l$ representation $\SO(3)\rar\UU({\cal H}_l)$ is given by $$ \chi(e^{tl_3})=\frac{\sin((2l+1)t/2)}{\sin(t/2)}~. $$
From \eqref{suteq1} and $\s_3=H$ we infer that $$ \exp(tl_3) =\Phi(\exp(t\s_3/2i)) =\Phi(\exp(-i(t/2)H))~. $$ Finally theorem implies that $$ \chi(e^{tl_3}) =\chi_{2l}(e^{-i(t/2)H}) =\frac{\sin((2l+1)t/2)}{\sin(t/2)}~. $$ $\eofproof$

Baker-Campbell-Hausdorff Formula

For commuting matrices $X,Y\in\Ma(n,\C)$ we have $e^Xe^Y=e^{X+Y}$ or in case $X$ and $Y$ are sufficiently close to $0$: $\log(e^Xe^Y)=X+Y$. The BCH-formula will give us an expression for $\log(e^Xe^Y)$ in terms of the Lie-algebra generated by $X$ and $Y$, for $X,Y\in\Ma(n,\C)$ sufficiently close to $0$!

A special case

For $X,Y\in\Ma(n,\C)$ such that $[Y,[X,Y]]=0$, the following holds: 1. For all $t\in\R$: $[X,e^{tY}]=t[X,Y]e^{tY}$. 2. For all $n\in\N$: $[X,Y^n]=n[X,Y]Y^{n-1}$. 3. If moreover $[X,[X,Y]]=0$, then we have the Campbell-Hausdorff formula: $$ e^{Y}e^{X}=e^{Y+X+\frac12[Y,X]}~. $$
$\proof$ 1. Put $f(t)=[X,e^{tY}]$, then $$ f^\prime(t)=[X,e^{tY}Y]=e^{tY}[X,Y]+[X,e^{tY}]Y=f(t)Y+e^{tY}[X,Y] $$ and the relation $[Y,[X,Y]]=0$ implies: $$ f^\dprime(t)=f^\prime(t)Y+e^{tY}[X,Y]Y $$ i.e. $f$ is a solution of the linear ODE $f^\dprime-2f^\prime Y+fY^2=0$. It is easily verified that $g(t)\colon=t[X,Y]e^{tY}$ is also a solution of this ODE; since $f(0)=g(0)$ and $f^\prime(0)=g^\prime(0)$, both must coincide by uniqueness of solutions of linear ODEs.
2. follows from 1. by taking $n$-fold derivatives at $t=0$.
3. Put $f(t)\colon=e^{tY}e^{tX}$, then we have (cf. section): $e^{t\ad(Y)}X=X+t[Y,X]$ and thus we get the ODE \begin{eqnarray*} f^\prime(t) &=&Ye^{tY}e^{tX}+e^{tY}Xe^{tX} =Yf(t)+e^{tY}Xe^{-tY}e^{tY}e^{tX}\\ &=&(Y+\Ad(e^{tY})X)f(t) =(Y+e^{t\ad(Y)}X)f(t) =(Y+X+t[Y,X])f(t)~. \end{eqnarray*} Since $[Y,X]$ commutes with $X+Y$ and $f(0)=1$, the solution is given by $f(t)=\exp(t(X+Y)+\tfrac12t^2[Y,X])$. $\eofproof$

Derivative of the exponential

Cf. e.g. wikipedia. For $X,Y\in\Ma(n,\C)$ put $U(s,t)\colon=e^{t(X+sY)}$, then $\pa_tU=(X+sY)U$ and therefore $$ \pa_t\pa_sU =\pa_s\pa_tU =\pa_s((X+sY)U) =YU+(X+sY)\pa_sU $$ Thus putting for fixed $s$: $V(t)\colon=\pa_sU(s,t)$, $V$ solves the following inhomogeneous linear ODE: $$ V^\prime(t)=(X+sY)V(t)+YU(s,t)~. $$ Since $V(0)=\pa_sU(s,0)=0$, it follows from the solution formula for inhomogeneous linear ODE: \begin{eqnarray*} \pa_sU(s,t) =V(t) &=&U(s,t)\int_0^t\exp(-r(X+sY))YU(s,r)\,dr\\ &=&U(s,t)\int_0^t\Ad(e^{-r(X+sY)})Y\,dr\\ &=&U(s,t)\int_0^te^{-r\ad(X+sY)}Y\,dr =U(s,t)\frac{1-e^{-t\ad(X+sY)}}{\ad(X+sY)}Y~. \end{eqnarray*} This in particular yields for $t=1$ and $s=0$: \begin{eqnarray*} D\exp(X)Y &=&\ftdl s0e^{X+sY} =e^X\Big(\frac{1-e^{-\ad(X)}}{\ad(X)}\Big)Y\\ &=&e^X\Big(\sum_{k=0}^\infty\frac{(-1)^k}{(k+1)!}\ad(X)^k\Big)Y\\ &=&e^X\Big(Y-\tfrac1{2!}[X,Y]+\tfrac1{3!}[X,[X,Y]]-+\cdots\Big)~. \end{eqnarray*}
$D\exp(X):\Ma(n,\C)\rar\Ma(n,\C)$ is an isomorphism iff no two eigen-values $\l_j,\l_k$ of $X$ satisfy $\l_j-\l_k\in2\pi i\Z\setminus\{0\}$.
By the formula above $D\exp(X)$ is an isomorphism iff $$ \Big(\frac{1-e^{-\ad(X)}}{\ad(X)}\Big) $$ is an isomorphism. Since the eigen-values of $\ad(X)$ are $\l_j-\l_k$, $j,k=1,\ldots,n$ (cf. exam), this holds if and only if for all $\l_j\neq\l_k$: $1-e^{\l_k-\l_j}\neq0$, i.e. $\l_j-\l_k\notin2\pi i\Z$.
Suppose $X,Y\in\Ma(n,\C)$, $[X,Y]=aY$ and $a\notin2\pi i\Z$. Prove that $$ e^{X}e^{tY}=\exp\Big(X+\frac{at}{1-e^{-a}}\,Y\Big)~. $$
Let $B$ be the open subset of $\Ma(n,\C)$ such that for all $X\in B$ the singular values of $X$ are strictly smaller than $2\pi$. Then for any matrix Lie-group $G$ the map $\exp:B\cap\GG\rar G$ is open. If $G=\SO(n)$, then $\exp:B\cap\so(n)\rar\SO(n)$ is onto.

BCH formula

Cf. e.g. wikipedia. By the chain rule we get for any smooth curve $Z:\R\rar\Ma(n,\C)$: $$ \ftd s e^{Z(s)} =D\exp(Z(s))Z^\prime(s) =e^{Z(s)}\Big(\frac{1-e^{-\ad(Z(s))}}{\ad(Z(s))}\Big)Z^\prime(s) $$ Now if $\exp(Z(s))=\exp(X)\exp(sY)$, then $$ \ftd s e^{Z(s)}=e^Xe^{sY}Y=e^{Z(s)}Y $$ and therefore $$ \Big(\frac{1-e^{-\ad(Z(s))}}{\ad(Z(s))}\Big)Z^\prime(s)=Y, \quad\mbox{i.e.}\quad Z^\prime(s)=\Big(\frac{1-e^{-\ad(Z(s))}}{\ad(Z(s))}\Big)^{-1}Y=g(e^{\ad(Z(s))})Y, $$ where $g:\{|z-1| < 1\}\rar\C$, $g(z)=\log(z)/(1-1/z)$ is holomorphic. So let us compute $\ad(Z(s))$: Since $\Ad$ is a representation, we have by definition $\Ad(e^{Z(s)})=\Ad(e^X)\Ad(e^{sY})$ and therefore: $e^{\ad(Z(s))}=e^{\ad(X)}e^{s\ad(Y)}$. This gives us the Baker-Campbell-Hausdorff formula: \begin{equation}\label{bcheq1}\tag{BCH1} \log(e^Xe^Y)=Z(1)=X+\int_0^1g\big(e^{\ad(X)}e^{s\ad(Y)}\big)Y\,ds, \end{equation} where $\log:\{|z-1| < 1\}\rar\C$ is defined by power series expansion: $\log(z)=(z-1)-(z-1)^2/2+-\cdots$. By ignoring terms of order four and higher we get: \begin{equation}\label{bcheq2}\tag{BCH2} \log(e^Xe^Y)=X+Y+\tfrac12[X,Y]+\tfrac1{12}[X,[X,Y]]+\tfrac1{12}[Y,[Y,X]]+\cdots \end{equation}
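Finally, \eqref{bcheq2} can be tested numerically for small $X,Y$: the error of the truncated series should be of fourth order (a minimal sketch, assuming numpy and scipy; all names are ours):

```python
import numpy as np
from scipy.linalg import expm, logm

# Sketch: log(e^X e^Y) agrees with the series (BCH2) up to fourth-order terms.
comm = lambda A, B: A @ B - B @ A
rng = np.random.default_rng(5)
X = 0.01 * rng.standard_normal((3, 3))
Y = 0.01 * rng.standard_normal((3, 3))

Z = logm(expm(X) @ expm(Y))
bch = (X + Y + comm(X, Y) / 2
       + comm(X, comm(X, Y)) / 12 + comm(Y, comm(Y, X)) / 12)
err = np.linalg.norm(Z - bch)
assert err < (np.linalg.norm(X) + np.linalg.norm(Y)) ** 4   # O(|X|+|Y|)^4
```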
← Representation of Finite Groups → Representations of SU(3)

<Home>
Last modified: Thu Aug 1 15:01:37 CEST 2024