What should you be acquainted with? Linear algebra, in particular inner product spaces over both the real and the complex numbers. This chapter essentially follows Brian Hall, Lie Groups, Lie Algebras, and Representations, Chapter 7.

Semi-simple Lie-Algebras


We will always assume that $A$ is a real Lie-algebra admitting a euclidean product $\la.,.\ra$ such that $$ \forall x,y,z\in A:\qquad \la\ad(x)y,z\ra=-\la y,\ad(x)z\ra $$ i.e. $\ad(x)$ is skew-symmetric on $(A,\la.,.\ra)$. Assume further that $A\cap iA=\{0\}$; then by proposition: $A^\C=A+iA$ and $$ \forall x\in A\ \forall z,w\in A^\C:\qquad \la\ad(x)z,w\ra=-\la z,\ad(x)w\ra, $$ where $\ad(x)(u+iv)\colon=\ad(x)u+i\ad(x)v$ is the unique $\C$-linear extension and $\la.,.\ra$ the unique complex euclidean product, i.e. $$ \la u+iv,x+iy\ra\colon=\la u,x\ra-i\la u,y\ra+i\la v,x\ra+\la v,y\ra~. $$ For $z=x+iy\in A^\C$, $x,y\in A$, we put \begin{equation}\label{ssleq1}\tag{SSL1} z^*\colon=-x+iy \end{equation} Then $\ad(z)^*=\ad(z^*)$, indeed: $$ \ad(x+iy)^*=\ad(x)^*-i\ad(y)^*=-\ad(x)+i\ad(y)=\ad(z^*) $$
For $z=u+iv$ show that $\la z,z^*\ra=-\Vert u\Vert^2+\Vert v\Vert^2-2i\Re\la u,v\ra$.
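The identity in this exercise can be checked numerically. The following sketch (Python with numpy, purely illustrative and not part of the text) builds the complex euclidean product from the real one exactly as in the definition above and tests the claimed formula for random $u,v$:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(5)

def herm(u1, v1, u2, v2):
    """The complex euclidean product <u1+iv1, u2+iv2> built from the real one."""
    return (u1 @ u2 + v1 @ v2) + 1j * (v1 @ u2 - u1 @ v2)

# z = u + iv, so z* = -u + iv
lhs = herm(u, v, -u, v)
rhs = -u @ u + v @ v - 2j * (u @ v)
assert np.isclose(lhs, rhs)
```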
The center $Z(A)$ of the Lie-algebra $A$ is an ideal, i.e. $Z(A)$ is invariant under the adjoint representation: indeed, if $z\in Z(A)$, then $[x,z]=0$ for all $x\in A$ and we get for any $y\in A$ by Jacobi's identity $$ [x,[y,z]]=-[y,[z,x]]-[z,[x,y]]=0 \quad\mbox{i.e.}\quad \forall y\in A:\quad[y,z]\in Z(A)~. $$ Moreover, the orthogonal complement $I^\perp$ of an ideal $I$ is again an ideal: if $I$ is $\ad$-invariant, then so is $I^\perp$, for by skew-symmetry of $\ad(x)$ we get for all $x\in A$, all $y\in I^\perp$ and all $z\in I$: $$ \la\ad(x)y,z\ra =-\la y,\ad(x)z\ra =0 \quad\mbox{i.e.}\quad \ad(x)I^\perp\sbe I^\perp~. $$ Moreover $A$ is the Lie-algebra direct sum of $I$ and $I^\perp$, because for all $y\in I^\perp$ and all $z\in I$: $[y,z]\in I\cap I^\perp=\{0\}$.
A Lie-algebra $A$ is said to be semi-simple if $A$ has trivial center.
As $Z(A)^\C=Z(A^\C)$, $A$ is semi-simple iff $A^\C$ is semi-simple. Decomposing $A=Z(A)\oplus Z(A)^\perp$ we assert that $Z(A)^\perp$ is semi-simple: any element $x$ in the center of $Z(A)^\perp$ satisfies $[x,y]=0$ for all $y\in Z(A)^\perp$ and, by definition of the center, $[x,y]=0$ for all $y\in Z(A)$; hence $x$ lies in the center of $A$, i.e. $x\in Z(A)\cap Z(A)^\perp=\{0\}$.
Every semi-simple Lie-algebra $A$ is the Lie-algebra direct sum of pairwise orthogonal simple Lie-algebras.
$\proof$ We know that $A$ is the direct sum of pairwise orthogonal $\ad$-invariant sub-spaces $A_j$, irreducible under the action of the adjoint representation. As $A_j$ is $\ad$-invariant, $A_j$ is an ideal and irreducibility exactly means that the Lie-Algebra $A_j$ is simple: indeed, for all $k\neq j$: $[A_k,A_j]=\{0\}$ and therefore $A_j$ is irreducible under the action of the adjoint representation iff it is simple as a sub-algebra of $A$. $\eofproof$
For all $n\geq2$ the Lie-algebra $\sla(n,\C)$ is semi-simple.
Let $X$ be any element in the center of $\sla(n,\C)=\{A\in\Ma(n,\C):\tr A=0\}$, i.e. $[X,Z]=0$ for all $Z\in\sla(n,\C)$, then by exam for all $j\neq k$: $$ 0 =[X,E^{jj}-E^{kk}] =\sum_r(X_{rj}E^{rj}-X_{jr}E^{jr}-X_{rk}E^{rk}+X_{kr}E^{kr}) $$ and in particular we infer for the $(j,k)$ and $(k,j)$ entries: $$ -2X_{jk}=0 \quad\mbox{and}\quad 2X_{kj}=0, $$ i.e. $X$ must be diagonal. Finally we get for diagonal $X$: $$ 0=[X,E^{jk}]=(X_{jj}-X_{kk})E^{jk} $$ which proves that $X=0$.
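The entry computation above is easy to test numerically. A purely illustrative sketch (Python with numpy, not part of the text; note the $0$-based indices) checking that the $(j,k)$ and $(k,j)$ entries of $[X,E^{jj}-E^{kk}]$ are $-2X_{jk}$ and $2X_{kj}$ for a random traceless $X$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X -= np.trace(X) / n * np.eye(n)        # make X traceless, i.e. X in sl(n,C)

def E(j, k):
    """Matrix unit E^{jk}: a single 1 in row j, column k."""
    M = np.zeros((n, n))
    M[j, k] = 1.0
    return M

j, k = 0, 2
D = E(j, j) - E(k, k)
B = X @ D - D @ X                       # the bracket [X, E^{jj}-E^{kk}]
assert np.isclose(B[j, k], -2 * X[j, k])
assert np.isclose(B[k, j],  2 * X[k, j])
```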

Roots

Cartan-algebras

A sub-algebra $H$ of a semi-simple Lie-algebra $A$ is said to be a Cartan-algebra if the following holds:
  1. For all $x,y\in H$: $[x,y]=0$.
  2. If for some $x\in A$ and all $h\in H$: $[x,h]=0$, then $x\in H$.
  3. For all $h\in H$: $\ad(h)$ is diagonalizable.
That is, $H$ is a maximal commutative sub-algebra of $A$ and the set of linear mappings $\{\ad(h):h\in H\}$ is jointly diagonalizable.
If any two such Cartan sub-algebras of $A$ are isomorphic, their common dimension is said to be the rank of $A$.
Suppose $H$ is a Cartan-algebra of the real semi-simple Lie-algebra $A$. Then $H^\C$ is a Cartan-algebra of the Lie-algebra $A^\C$.
$\proof$ Obviously $H^\C$ is commutative. Assume $z=x+iy\in A^\C$ and $[z,h]=0$ for all $h\in H^\C$. In particular for $h\in H$: $0=[z,h]=[x,h]+i[y,h]$. As $A\cap iA=\{0\}$ we infer that $[x,h]=[y,h]=0$ and by the maximality of $H$: $x,y\in H$ and thus $z\in H^\C$. Finally, for all $x,y\in H$ the mappings $\ad(x)$ and $\ad(y)$ are diagonalizable and commute; hence $\ad(x+iy)=\ad(x)+i\ad(y)$ is diagonalizable. $\eofproof$
Suppose $H$ is a Cartan sub-algebra of the real semi-simple Lie-algebra $A$. A vector $r\in H^\C\sm\{0\}$ is called a root if $$ \exists x\in A\sm\{0\}\,\forall h\in H:\quad \ad(h)x=\la h,r\ra x~. $$ $x$ is said to be a root vector and the set of all roots is denoted by $R$.
As we assume that for a real Lie-algebra $A$ all mappings $\ad(x)$, $x\in A$, are skew-symmetric, all eigen-values of $\ad(x)$ are purely imaginary. Moreover $\la.,.\ra$ is real on $H\times H$ and therefore any root $r$ must be in $iH\sbe H^\C$. Hence for any root $r$ the linear mapping $$ \ad(r):A^\C\rar A^\C \quad\mbox{is self-adjoint.} $$ Given any $r\in H^\C$ we define the vector-space $A_r$ to be the space of all $x$ such that for all $h\in H$: $\ad(h)x=\la h,r\ra x$, i.e. \begin{equation}\label{rooeq1}\tag{ROO1} A_r\colon=\bigcap_{h\in H}\ker(\ad(h)-\la h,r\ra)~. \end{equation} For $r=0$ we conclude by the maximality of $H$: $A_0=H$ and for $r\notin R\cup\{0\}$ we have $A_r=\{0\}$. Since all $\ad(h)$, $h\in H^\C$, are jointly diagonalizable, we get a decomposition $$ A^\C=H^\C\oplus\bigoplus_{r\in R}A_r^\C $$ As $H^\C$ is spanned by its self-adjoint elements, i.e. elements $h$ satisfying $h^*=h$, we may assume that this decomposition is orthogonal.
1. For all $r,s\in H^\C$ we have $[A_r,A_s]\sbe A_{r+s}$. 2. If $z\in A_r$, then $z^*\in A_{-r}$. 3. $\lhull{R}=H$.
$\proof$ 1. Cf. e.g. lemma. Suppose $x\in A_r$ and $y\in A_s$; we have to verify that for all $h\in H$: $[h,[x,y]]=\la h,r+s\ra[x,y]$. By Jacobi's identity we have: $$ [h,[x,y]] =-[x,[y,h]]-[y,[h,x]]~. $$ Now $[h,x]=\la h,r\ra x$ and $[h,y]=\la h,s\ra y$ and thus $$ [h,[x,y]] =[x,\la h,s\ra y]-[y,\la h,r\ra x] =\la h,s+r\ra [x,y]~. $$ 2. Suppose $z=x+iy$ and $[h,x+iy]=\la h,r\ra(x+iy)$. As $(\l z)^*=\bar\l z^*$ and $\la h,r\ra$ is purely imaginary we get: $$ -\la h,r\ra(-x+iy) =\cl{\la h,r\ra}(x+iy)^* =[h,x+iy]^* =([h,x]+i[h,y])^* =-[h,x]+i[h,y] =[h,-x+iy]~. $$ This shows that $-r$ is a root with root-vector $z^*$.
3. If $\lhull{R}\neq H$ then there is some normed vector $h\in H$ such that for all $r\in R$: $\la h,r\ra=0$; since $r$ is a root we must have for all $h_1\in H$ and all $x_r\in A_r$: $[h_1,x_r]=\la h_1,r\ra x_r$, in particular: $[h,A_r]=0$. As $[h,A_0]=0$ we infer that $h$ commutes with all $x\in A$, i.e. $h$ is in the center of $A$. On the other hand $A$ is semi-simple, i.e. its center is trivial, and thus $h=0$ - contradicting $\Vert h\Vert=1$. Hence $\lhull{R}=H$. $\eofproof$

Sub-algebras isomorphic to $\sla(2,\C)$

Given a root $r\in H$ and a root vector $x\in A_r$, we will show that the sub-algebra generated by $r,x$ and $x^*$ is isomorphic to $\sla(2,\C)$. By definition we have $[r,x]=\Vert r\Vert^2 x$ and $[r,x^*]=-\Vert r\Vert^2 x^*$ and thus both $[r,x]$ and $[r,x^*]$ lie in this sub-algebra, but what about $[x,x^*]$?
For all $x\in A_r$, all $y\in A_{-r}$ we have: $[x,y]=\la y,x^*\ra r$.
$\proof$ By proposition: $[A_r,A_{-r}]\sbe A_0=H$ and thus $[x,y]\in H$. Next we notice that $\ad(x)^*=\ad(x^*)$ and by proposition for all $h\in H$: $\ad(h)x^*=\la h,-r\ra x^*$; therefore: \begin{equation}\label{rooeq2}\tag{ROO2} \la h,[x,y]\ra =\la h,\ad(x)y\ra =\la\ad(x^*)h,y\ra =-\la\ad(h)x^*,y\ra =\la\la h,r\ra x^*,y\ra =\la h,r\ra\la x^*,y\ra~. \end{equation} Hence for $h\perp r$ we also have $[x,y]\perp h$; as $[x,y]$ is in $H$ it follows that $[x,y]=cr$ for some $c\in\C$. Putting $h=r$ in \eqref{rooeq2} we get: $$ \bar c\Vert r\Vert^2 =\la r,[x,y]\ra =\Vert r\Vert^2\la x^*,y\ra, \quad\mbox{i.e.}\quad \bar c=\la x^*,y\ra~. $$ $\eofproof$
For all roots $r$, all $x\in A_r\sm\{0\}$ and all $y\in A_{-r}\sm\{0\}$ the sub-space $\lhull{r,x,y}$ is a sub-algebra satisfying $$ [r,x]=\Vert r\Vert^2 x,\quad [r,y]=-\Vert r\Vert^2 y,\quad [x,y]=\la y,x^*\ra r~. $$ As $r,x,y$ are eigen-vectors of the self-adjoint operator $\ad(r):A\rar A$ with eigen-values $0,\Vert r\Vert^2$ and $-\Vert r\Vert^2$, they must also be linearly independent. To verify that this sub-algebra is isomorphic to $\sla(2,\C)$ we are looking for scalars $\a,\b,\g$, such that $h_r\colon=\a r$, $x_r\colon=\b x$, $y_r\colon=\g x^*$ satisfy the commutation relations: \begin{equation}\label{rooeq3}\tag{ROO3} [h_r,x_r]=2x_r,\quad [h_r,y_r]=-2y_r,\quad [x_r,y_r]=h_r~. \end{equation} This implies that: \begin{eqnarray*} [h_r,x_r]&=&\a\b[r,x]=\a\b\Vert r\Vert^2 x=\a \Vert r\Vert^2x_r\\ [h_r,y_r]&=&-\a\g[r,x^*]=-\a\g\Vert r\Vert^2 x^*=-\a\Vert r\Vert^2y_r\\ [x_r,y_r]&=&\b\g[x,x^*]=\b\g c_rr=\b\g c_rh_r/\a~. \end{eqnarray*} Thus the numbers $\a,\b,\g$ simply need to satisfy: $\a\Vert r\Vert^2=2$ and $\b\g c_r/\a=1$, e.g.: $$ \a=\frac2{\Vert r\Vert^2} \quad\mbox{and}\quad \b=\g=\sqrt{\frac{\a}{c_r}}=\sqrt{\frac{2}{c_r\Vert r\Vert^2}}~. $$ The vector $r^\vee\colon=h_r\colon=2r/\Vert r\Vert^2$ is called the co-root of the root $r$ and the vectors $x_r\in A_r$ and $y_r\in A_{-r}$ satisfy \eqref{rooeq3} and: \begin{equation}\label{rooeq4}\tag{ROO4} y_r=x_r^*\quad\mbox{and}\quad \la r,h_r\ra=2~. \end{equation} Suppose $s=cr$ is another root for some $c\in\C\sm\{0\}$, then $h_s=2cr/\Vert cr\Vert^2=h_r/c$. Now $\ad:\lhull{h_r,x_r,x_r^*}\rar\Hom(A)$ is a representation and \begin{equation}\label{rooeq5}\tag{ROO5} \ad(h_r)x_s =\ad(ch_s)x_s =2cx_s \end{equation} i.e. $x_s$ is an eigen-vector of $\ad(h_r)$ with eigen-value $2c$. By proposition: $2c\in\Z$. Exchanging the roles of $r$ and $s$, we get: $2/c\in\Z$. Hence $c\in\{\pm1/2,\pm1,\pm2\}$ and the only possible roots are $\pm r/2,\pm r,\pm2r$. 
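The relations \eqref{rooeq3} are exactly the commutation relations of the standard basis of $\sla(2,\C)$. As a purely illustrative sanity check (Python with numpy, not part of the text), one may verify them for the defining $2\times2$ matrices:

```python
import numpy as np

# the standard sl(2,C) triple
h = np.array([[1, 0], [0, -1]], dtype=complex)
x = np.array([[0, 1], [0, 0]], dtype=complex)
y = np.array([[0, 0], [1, 0]], dtype=complex)

def br(a, b):
    """The commutator [a,b] = ab - ba."""
    return a @ b - b @ a

assert np.allclose(br(h, x),  2 * x)   # [h,x] = 2x
assert np.allclose(br(h, y), -2 * y)   # [h,y] = -2y
assert np.allclose(br(x, y),  h)       # [x,y] = h
```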
Choosing among these the one having minimal norm and renaming this to be $r$, we conclude that only $\pm r,\pm2r$ are possible roots. So let $B$ be the sub-space generated by $h_r$, $A_r$, $A_{2r}$, $A_{-r}$ and $A_{-2r}$. Then by proposition and lemma $B$ is in fact a sub-algebra and therefore $$ \ad:\lhull{h_r,x_r,x_r^*}\rar\Hom(B) $$ is a representation. Of course $B$ contains the sub-space $\lhull{h_r,x_r,x_r^*}$ invariant under this representation. The orthogonal complement $Z$ of $\lhull{h_r,x_r,x_r^*}$ in $B$ is also invariant. What are the eigen-values of the self-adjoint operator $\ad(h_r):Z\rar Z$? As a self-adjoint mapping on $B$, $\ad(h_r)$ has the eigen-values $0,\pm2$ and by \eqref{rooeq5} possibly $\pm4$, and the only eigen-vector with eigen-value $0$ is $h_r$. The self-adjoint mapping $$ \ad(h_r):\lhull{h_r,x_r,x_r^*}\rar\lhull{h_r,x_r,x_r^*} $$ has the eigen-values $-2,0,2$. Now, if $Z\neq\{0\}$, then $Z$ contains an eigen-vector of $\ad(h_r)$ with eigen-value in $\{\pm2,\pm4\}$, and since all these eigen-values are even, proposition implies that $\ad(h_r):Z\rar Z$ must also have the eigen-value $0$. However, the only eigen-vector of $\ad(h_r):B\rar B$ with eigen-value $0$ is $h_r$, which does not lie in $Z$. Therefore $Z=\{0\}$, i.e. $B=\lhull{h_r,x_r,x_r^*}$, $A_r=\lhull{x_r}$ and $A_{-r}=\lhull{x_r^*}$.
1. For each root $r$, the only multiples of $r$ that are roots are $r$ and $-r$. 2. For each root $r$: $\dim A_r=1$ and thus $\dim A=\dim H+|R|$. 3. For all roots $r,s$ we have $$ \la h_r,s\ra=\frac{2\la s,r\ra}{\Vert r\Vert^2}\in\Z~. $$
$\proof$ We only have to verify the third statement. The adjoint action restricted to the algebra $\lhull{h_r,x_r,x_r^*}$ gives a representation of this algebra in $\Hom(A)$. If $y$ is a root vector for the root $s$, then in particular $[h_r,y]=\la h_r,s\ra y$, i.e. $y$ is an eigen-vector for $\ad|\lhull{h_r,x_r,x_r^*}$. By proposition this implies that $\la h_r,s\ra\in\Z$. $\eofproof$

The Weyl group

The roots of $\sla(3,\C)$ are given by $\pm H_j$, $j=1,2,3$, and the Weyl group of $\sla(3,\C)$ turned out to be the group generated by the reflections about the lines orthogonal to the roots - cf. subsection.
The Weyl group $W(A)$ of a semi-simple Lie-algebra $A$ is the group generated by the reflections about the hyperplanes orthogonal to $r\in R$: $$ R_r:H\rar H,\quad R_r(h)\colon=h-\frac{2\la h,r\ra}{\Vert r\Vert^2}r~. $$
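The reflection formula is easy to check numerically. A small illustrative sketch (Python with numpy; the helper name `reflect` is ours, not the text's) verifying that $R_r$ is an involutive isometry reversing $r$:

```python
import numpy as np

def reflect(h, r):
    """Reflection of h about the hyperplane orthogonal to the root r."""
    return h - 2 * np.dot(h, r) / np.dot(r, r) * r

rng = np.random.default_rng(1)
r = rng.standard_normal(4)
h = rng.standard_normal(4)

assert np.allclose(reflect(reflect(h, r), r), h)                      # involution
assert np.allclose(reflect(r, r), -r)                                 # reverses r
assert np.isclose(np.linalg.norm(reflect(h, r)), np.linalg.norm(h))   # isometry
```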
In the previous discussion of the Weyl group of $\sla(3,\C)$ a representation of $\SU(3)$ had been our starting point. In proposition we verified that the linear map $\Ad(g):\sla(3,\C)\rar\sla(3,\C)$, $X\mapsto gXg^{-1}$ maps root vectors for the root $s\in{\cal H}$ - now we denote this space by $H$ - to root vectors for the root $\Ad(g)s$, provided ${\cal H}$ is $\Ad(g)$ invariant, i.e. $g\in N({\cal H})$. Inspired by this result and the fact that for any linear operator $W\in\Hom(E)$: $\Ad(e^W)=e^{\ad(W)}$, cf. subsection, a substitute for the mapping $\Ad(g)$ will be a mapping of type $e^{\ad(x)}\in\Hom(A)$.
Now we are given the reflection $R_r$, mapping the root $s$ to $R_r(s)$ - which we'd like to prove to be a root as well - and we need to find a linear mapping (actually an isomorphism) $S_r^{-1}:A\rar A$ such that for all root vectors $x$ for the root $s$: $$ \ad(h)(S_r^{-1}(x))=\la h,R_r(s)\ra S_r^{-1}(x), $$ i.e. $S_r^{-1}(x)$ is a root vector for the root $R_r(s)$. Now $$ \la h,R_r(s)\ra S_r^{-1}(x) =\la R_r(h),s\ra S_r^{-1}(x) =S_r^{-1}(\la R_r(h),s\ra x) =S_r^{-1}\ad(R_r(h))x $$ which evidently shows $R_r(s)$ to be a root with root vector $S_r^{-1}(x)$ provided $S_r$ meets the following condition: \begin{equation}\label{rooeq6}\tag{ROO6} \forall h\in H:\qquad S_r\ad(h)S_r^{-1}=\ad(R_r(h))~. \end{equation}
For any representation $\psi:A\rar\Hom(E)$ in a finite dimensional space $E$: $$ \forall x,y,z\in A:\quad e^{\psi(x)}\psi(y)e^{-\psi(x)}=\psi(e^{\ad(x)}y) \quad\mbox{and}\quad e^{\psi(z)}e^{\psi(x)}\psi(y)e^{-\psi(x)}e^{-\psi(z)} =\psi(e^{\ad(z)}e^{\ad(x)}y) $$
$\proof$ By subsection we have for any linear operator $W\in\Hom(E)$: $\Ad(e^W)=e^{\ad(W)}$, i.e. $\Ad(e^{\psi(x)})\psi(y)=e^{\ad(\psi(x))}\psi(y)$. As $\psi:A\rar\Hom(E)$ is an algebra homomorphism, i.e. $\ad(\psi(x))\psi(y)=\psi(\ad(x)y)$, we also have: $$ e^{\ad(\psi(x))}\psi(y)=\psi(e^{\ad(x)}y) $$ It follows that $$ \forall x,y\in A:\quad e^{\psi(x)}\psi(y)e^{-\psi(x)} =\Ad(e^{\psi(x)})\psi(y) =e^{\ad(\psi(x))}\psi(y) =\psi(e^{\ad(x)}y)~. $$ and applying the formula twice: $$ e^{\psi(z)}e^{\psi(x)}\psi(y)e^{-\psi(x)}e^{-\psi(z)} =e^{\psi(z)}\psi(e^{\ad(x)}y)e^{-\psi(z)} =\psi(e^{\ad(z)}e^{\ad(x)}y) $$ $\eofproof$
We are going to apply the previous lemma to the representation $\ad:\lhull{h_r,x_r,y_r}\rar\Hom(A)$. Can we find a vector $v\in\lhull{h_r,x_r,y_r}$ such that $S_r=e^{\ad(v)}$? By \eqref{rooeq6} and lemma $v$ must satisfy \begin{equation}\label{rooeq7}\tag{ROO7} \forall h\in H:\qquad \ad(e^{\ad(v)}h)=\ad(R_r(h))~. \end{equation} Assume $S_r=e^{\ad(v)}$ for arbitrary $v\in\lhull{h_r,x_r,y_r}$. If $h\perp r$ then $[h,h_r]=0$ and $[h,x_r]=\la h,r\ra x_r=0=\la h,-r\ra y_r=[h,y_r]$ and thus $[h,v]=0$. Hence $e^{\ad(v)}h=h$; on the other hand $R_r(h)=h$ and we conclude that \eqref{rooeq7} holds for all $v\in\lhull{h_r,x_r,y_r}$ and all $h\perp r$. Thus our only requirement \eqref{rooeq7} is a consequence of \begin{equation}\label{rooeq8}\tag{ROO8} e^{\ad(v)}h_r=-h_r~. \end{equation} This is just a matter of solving an equation in the three dimensional space $\lhull{h_r,x_r,y_r}$. The matrices of $\ad(h_r)$, $\ad(x_r)$ and $\ad(y_r)$ with respect to the basis $h_r,x_r,y_r$ are easily calculated: $$ \left(\begin{array}{ccc} 0&0&0\\ 0&2&0\\ 0&0&-2 \end{array}\right), \left(\begin{array}{ccc} 0&0&1\\ -2&0&0\\ 0&0&0 \end{array}\right), \left(\begin{array}{ccc} 0&-1&0\\ 0&0&0\\ 2&0&0 \end{array}\right) $$ Now put $v=\a h_r+\b x_r+\g y_r$, then \eqref{rooeq8} is equivalent to the matrix equation $$ \exp\left(\begin{array}{ccc} 0&-\g&\b\\ -2\b&2\a&0\\ 2\g&0&-2\a \end{array}\right) \left(\begin{array}{c} 1\\ 0\\ 0 \end{array}\right) = \left(\begin{array}{c} -1\\ 0\\ 0 \end{array}\right) $$ This equation doesn't have a solution if we assume that only one of the coefficients $\a,\b$ or $\g$ differs from zero. So let's assume $\a=0$ but $\b,\g\neq0$; putting $\b\g=p^2$ the matrix equation comes down to two equations: $$ e^{-2p}(e^{4p}+1)=-2\quad\mbox{and}\quad e^{-2p}(e^{4p}-1)=0~. $$ Thus $e^{-2p}=-1$, i.e. $p\in\{i\pi/2,3i\pi/2\}$ modulo $2\pi i$, e.g.
$p=i\pi/2$ and we choose $\b=\pi/2$ and $\g=-\pi/2$; for this choice we get $$ \exp\left(\begin{array}{ccc} 0&\frac12\pi&\frac12\pi\\ -\pi&0&0\\ -\pi&0&0 \end{array}\right) =\left(\begin{array}{ccc} -1&0&0\\ 0&0&-1\\ 0&-1&0 \end{array}\right) $$ Thus we have finally found our operator $S_r$: \begin{equation}\label{rooeq9}\tag{ROO9} S_r=e^{\ad(v)}=e^{\frac12\pi\ad(x_r-y_r)}~. \end{equation} On $\lhull{h_r,x_r,y_r}$ this mapping is given by $$ h_r\mapsto-h_r,\quad x_r\mapsto-y_r,\quad y_r\mapsto-x_r~. $$
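The matrix exponential above can be confirmed numerically, e.g. with scipy's `expm` (an illustrative check, not part of the text):

```python
import numpy as np
from scipy.linalg import expm

pi = np.pi
# ad(v) for v = (pi/2)(x_r - y_r), with respect to the basis h_r, x_r, y_r
M = np.array([[0.0,  pi/2,  pi/2],
              [-pi,  0.0,   0.0],
              [-pi,  0.0,   0.0]])
S = expm(M)

# S_r: h_r -> -h_r, x_r -> -y_r, y_r -> -x_r
expected = np.array([[-1.0,  0.0,  0.0],
                     [ 0.0,  0.0, -1.0],
                     [ 0.0, -1.0,  0.0]])
assert np.allclose(S, expected)
```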
The mapping $\ad(x_r-y_r)$ is skew-symmetric on the whole space $A$ and therefore $S_r\in\Hom(A)$ is an isometry - actually on $\lhull{h_r,x_r,y_r}$ it's a rotation through the angle $\pi$ about the axis defined by the vector $x_r-y_r$.
The Weyl group $W(A)$ is finite and for each root $r$, the mapping $R_r$ maps the set of roots onto itself. Hence the set of roots is invariant under the Weyl group $W(A)$. Moreover for each root vector $x$ with root $s$ the vector $e^{-\frac12\pi\ad(x_r-y_r)}x$ is a root vector with root $R_r(s)$.
$\proof$ It remains to prove the finiteness of the Weyl group: as any element $g$ of the Weyl group is a linear transformation on $H$ and the set of roots $R$ spans the Cartan algebra $H$, $g$ is determined by its action on $R$. But $g$ simply permutes the roots. Hence $W(A)$ is isomorphic to a subgroup of the group of permutations of $R$. $\eofproof$
Remember, if $\ad(h_r)$ is self-adjoint, then the vectors $h_r,x_r,y_r$ are pairwise orthogonal but not orthonormal, for $\norm{h_r}\neq1$ in general. The vectors $x_r$ and $y_r$ can always be chosen to be normalized and thus both $h_r/\norm{h_r}=r/\Vert r\Vert,x_r,y_r$ and $$ b_0\colon=ih_r/\norm{h_r}=ir/\Vert r\Vert,\quad b_1\colon=(x_r-y_r)/\sqrt2,\quad b_2\colon=i(x_r+y_r)/\sqrt2 $$ form orthonormal bases of $\lhull{h_r,x_r,y_r}$. The advantage of the latter over the former: the elements $b_j$ satisfy $b_j^*=-b_j$, i.e. the linear mappings $\ad(b_j)$, $j=0,1,2$, are skew-adjoint and thus generate isometries $e^{t\ad(b_j)}$.
Compute the matrix of $\ad(\a b_0+\b b_1+\g b_2)$ with respect to the basis $b_0,b_1,b_2$.
Other choices of $v$ are possible, e.g. $v=\frac12i\pi(x_r+y_r)$. For this choice we get: $$ \exp\left(\begin{array}{ccc} 0&-\frac12i\pi&\frac12i\pi\\ -i\pi&0&0\\ i\pi&0&0 \end{array}\right) =\left(\begin{array}{ccc} -1&0&0\\ 0&0&1\\ 0&1&0 \end{array}\right) $$
Employing the second formula in lemma we may look for the linear operator $S_r$ in the group generated by the set $\{e^{\psi(x)}:x\in\lhull{h_r,x_r,y_r}\}$. Verify that the matrices of the linear operators $e^{\ad(sx_r)}$ and $e^{\ad(ty_r)}$ with respect to the basis $h_r,x_r,y_r$ are given by: $$ \left(\begin{array}{ccc} 1&0&s\\ -2s&1&-s^2\\ 0&0&1 \end{array}\right), \left(\begin{array}{ccc} 1&-t&0\\ 0&1&0\\ 2t&-t^2&1 \end{array}\right) $$ Show that $e^{\psi(x_r)}e^{-\psi(y_r)}e^{\psi(x_r)}$ and $e^{\psi(y_r)}e^{-\psi(x_r)}e^{\psi(y_r)}$ are possible choices for $S_r$.
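A numerical check of the last claim for $S_r=e^{\ad(x_r)}e^{-\ad(y_r)}e^{\ad(x_r)}$, using the $3\times3$ matrices of $\ad(x_r)$ and $\ad(y_r)$ from above (Python with scipy, illustrative only):

```python
import numpy as np
from scipy.linalg import expm

# matrices of ad(x_r) and ad(y_r) with respect to the basis h_r, x_r, y_r
ad_x = np.array([[0.0, 0.0, 1.0],
                 [-2.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
ad_y = np.array([[0.0, -1.0, 0.0],
                 [0.0, 0.0, 0.0],
                 [2.0, 0.0, 0.0]])

S = expm(ad_x) @ expm(-ad_y) @ expm(ad_x)

# same action as S_r: h_r -> -h_r, x_r -> -y_r, y_r -> -x_r
expected = np.array([[-1.0, 0.0, 0.0],
                     [0.0, 0.0, -1.0],
                     [0.0, -1.0, 0.0]])
assert np.allclose(S, expected)
```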
Suppose $\b,\g\in\C$. Find all eigen-values and eigen-vectors of the matrix $$ \left(\begin{array}{ccc} 0&-\g&\b\\ -2\b&0&0\\ 2\g&0&0 \end{array}\right) $$
The characteristic polynomial: $p(z)=(4\b\g-z^2)z$. Hence the eigen-values are $0$, $-2\sqrt{\b\g}$ and $2\sqrt{\b\g}$. The corresponding eigen-vectors are the columns of the matrix: $$ \left(\begin{array}{ccc} 0&\sqrt{\b\g}&\sqrt{\b\g}\\ \b&\b&-\b\\ \g&-\g&\g \end{array}\right) $$
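The characteristic polynomial and the eigen-vectors can be double-checked symbolically, e.g. with sympy (an illustrative sketch, not part of the text):

```python
import sympy as sp

b, g, z = sp.symbols('beta gamma z')
M = sp.Matrix([[0, -g, b], [-2*b, 0, 0], [2*g, 0, 0]])

# monic version of p(z) = (4*beta*gamma - z^2) z
p = M.charpoly(z).as_expr()
assert sp.simplify(p - (z**3 - 4*b*g*z)) == 0

s = sp.sqrt(b*g)
eigen = [(sp.Matrix([0, b, g]), 0),          # columns of the matrix above
         (sp.Matrix([s, b, -g]), -2*s),
         (sp.Matrix([s, -b, g]), 2*s)]
for vec, lam in eigen:
    assert (M*vec - lam*vec).applyfunc(sp.simplify) == sp.zeros(3, 1)
```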
Suppose $\a,\b,\g\in\C$. Find all eigen-values and eigen-vectors of the matrix $$ \left(\begin{array}{ccc} 0&-\g&\b\\ -2\b&2\a&0\\ 2\g&0&-2\a \end{array}\right) $$
For any Lie algebra $A$ define the Killing form $B:A\times A\rar\bK$ by $$ B(x,y)\colon=\tr(\ad(x)\ad(y))~. $$ 1. Show that $B$ is symmetric and $B(\ad(x)y,z)=-B(y,\ad(x)z)$. 2. If $A$ is semi-simple then $B$ is non-degenerate, i.e. for all $x\neq0$ there is some $y$ such that $B(x,y)\neq0$. Moreover, if all mappings $\ad(x)$ are skew-adjoint, then $-B$ is a euclidean product. 3. If $A$ is nilpotent, then for all $x,y\in A$: $B(x,y)=0$.
1. By Jacobi's identity we have: $\ad([x,y])=\ad(x)\ad(y)-\ad(y)\ad(x)$ and thus $$ \ad(\ad(x)y)\ad(z) =\ad([x,y])\ad(z) =\ad(x)\ad(y)\ad(z)-\ad(y)\ad(x)\ad(z) $$ and similarly: $$ \ad(y)\ad(\ad(x)z) =\ad(y)\ad(x)\ad(z)-\ad(y)\ad(z)\ad(x) $$ It follows that $$ \tr(\ad(\ad(x)y)\ad(z)+\ad(y)\ad(\ad(x)z)) =\tr(\ad(x)\ad(y)\ad(z)-\ad(y)\ad(z)\ad(x)) =0~. $$ 2. Assume $\ad(x)^*=\ad(x^*)$, then $B(x,x^*)=\tr(\ad(x)\ad(x)^*)$, which vanishes iff $\ad(x)=0$, i.e. iff $x$ is in the center of $A$. If in addition $\ad(x)^*=-\ad(x)$, then $-B(x,x)=\tr(\ad(x)\ad(x)^*)\geq0$. 3. As $A$ is nilpotent there is some $n\in\N$ such that for all $x_1,\ldots,x_n\in A$: $\ad(x_1)\cdots\ad(x_n)=0$. Hence the operator $\ad(x)\ad(y)$ is nilpotent, which implies that its only eigen-value is $0$, in particular $\tr(\ad(x)\ad(y))=0$.
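For $\sla(2,\C)$ the Killing form works out to $B(X,Y)=4\tr(XY)$ (a standard fact, for $\sla(n,\C)$ one has $B(X,Y)=2n\tr(XY)$). The sketch below (Python with numpy, illustrative; the helper names are ours) computes $B$ from the matrices of $\ad$ in the basis $h,x,y$ and checks this:

```python
import numpy as np

# basis of sl(2,C)
h = np.array([[1, 0], [0, -1]], dtype=complex)
x = np.array([[0, 1], [0, 0]], dtype=complex)
y = np.array([[0, 0], [1, 0]], dtype=complex)
basis = [h, x, y]

def coords(M):
    """Coordinates of a traceless 2x2 matrix in the basis h, x, y."""
    return np.array([M[0, 0], M[0, 1], M[1, 0]])

def ad(M):
    """Matrix of ad(M) with respect to the basis h, x, y."""
    return np.column_stack([coords(M @ b - b @ M) for b in basis])

def killing(X, Y):
    """The Killing form B(X,Y) = tr(ad(X) ad(Y))."""
    return np.trace(ad(X) @ ad(Y))

for X in basis:
    for Y in basis:
        assert np.isclose(killing(X, Y), 4 * np.trace(X @ Y))
```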

Some Examples

The Lie algebra $\sla(n,\C)$

As underlying real Lie-algebra we choose $\su(n)$ and a Cartan subalgebra $$ \Big\{diag\{ia_1,\ldots,ia_n\}:\,a_1,\ldots,a_n\in\R,\sum a_j=0\Big\}~. $$ Any matrix $T\in\su(n)$ is similar to a matrix in this Cartan subalgebra, i.e. there exists a matrix $U\in\SU(n)$ such that $T=\Ad(U)D$ for some matrix $D$ in this Cartan subalgebra.
Anyway, $\sla(n,\C)=\su(n)_\C$ and the complexification ${\cal H}$ of the Cartan subalgebra is the set of all diagonal matrices $diag\{a_1,\ldots,a_n\}$ satisfying $a_j\in\C$ and $\sum a_j=0$. Obviously for all $H_1,H_2\in{\cal H}$: $[H_1,H_2]=0$. Thus ${\cal H}$ is commutative. Suppose for all $H\in{\cal H}$: $[X,H]=0$; then for all $j < k$: $$ (XE^{jj})_{lm}=\sum_r X_{lr}E^{jj}_{rm}=X_{lj}\d_{jm} \quad\mbox{and}\quad (E^{jj}X)_{lm}=\sum_r E^{jj}_{lr}X_{rm}=X_{jm}\d_{jl} $$ and therefore by e.g.
exam: $$ [X,E^{jj}-E^{kk}]_{lm} =X_{lj}\d_{jm}-X_{jm}\d_{jl}-X_{lk}\d_{km}+X_{km}\d_{kl} $$ For $l,m\notin\{j,k\}$ or $l=m=j$ or $l=m=k$ the right hand side equals $0$. Finally, for $l=j$ and $m=k$ we get $-2X_{jk}$ and for $l=k$ and $m=j$: $2X_{kj}$. Hence, if $[X,E^{jj}-E^{kk}]=0$ for all $j < k$, then $X$ must be diagonal, proving that ${\cal H}$ is maximal. Finally, from exam we infer that for all $X\in\su(n)$ the mapping $\ad(X)$ is skew-symmetric with respect to the euclidean inner product $\la X,Y\ra\colon=\tr(XY^*)$ - where $X^*\colon=\bar X^t$ - which implies that all mappings $\ad(H)$ are diagonalizable. Thus ${\cal H}$ is indeed a Cartan sub-algebra.
In order to compute the roots of $\sla(n,\C)$ we take any $H\in{\cal H}$ and calculate $\ad(H)E^{jk}$; by e.g. exam we get: $$ \ad(H)E^{jk}=(a_j-a_k)E^{jk}, $$ i.e. for all $j\neq k$ the matrix $E^{jk}$ is a root vector and we are left to find the corresponding root $R^{jk}\in{\cal H}$ such that $a_j-a_k=\tr(H(R^{jk})^*)$. So obviously $$ R^{jk}=E^{jj}-E^{kk}~. $$ By theorem we know that the root system encompasses exactly $\dim\sla(n,\C)-\dim{\cal H}=n^2-1-(n-1)=n(n-1)$ roots, which is exactly the number of matrices $E^{jk}$, $j\neq k$. Hence we've got all the roots and all root vectors. Moreover we have $$ \la R^{jk},R^{lm}\ra\in\{0,\pm1,\pm2\}\sbe\Z $$ Finally for $j\neq k$ the reflection $S$ about the hyperplane $\{X:\tr(XR^{jk})=0\}$ operates on the set of all diagonal matrices by interchanging the $j$-th and the $k$-th diagonal entries. Hence the Weyl group $W(\sla(n,\C))$ is isomorphic to the group of permutations $S(n)$ of $n$ elements. In particular the reflection $S$ about the hyperplane $\{X:\tr(XR^{jk})=0\}$ permutes the roots as follows: for all $l,m\notin\{j,k\}$: $$ R^{jk}\mapsto R^{kj},\quad R^{jm}\mapsto R^{km},\quad R^{lk}\mapsto R^{lj},\quad R^{lm}\mapsto R^{lm}~. $$
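A direct numerical verification of $\ad(H)E^{jk}=(a_j-a_k)E^{jk}$ and of the root count $n(n-1)$ (Python with numpy, purely illustrative):

```python
import numpy as np

n = 3
a = np.array([1.0, 2.0, -3.0])          # diagonal entries, trace zero
H = np.diag(a)                           # element of the Cartan algebra

roots = []
for j in range(n):
    for k in range(n):
        if j != k:
            E = np.zeros((n, n)); E[j, k] = 1.0
            # ad(H)E^{jk} = (a_j - a_k) E^{jk}
            assert np.allclose(H @ E - E @ H, (a[j] - a[k]) * E)
            roots.append((j, k))
assert len(roots) == n * (n - 1)
```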
We may identify ${\cal H}$ via $E^{jj}\mapsto e_j$ with the hyperplane orthogonal to $e_1+\cdots+e_n$ of $\C^n$ and then the roots are the vectors $e_j-e_k$, $j\neq k$ and the Weyl group is just the group of permutation matrices in $n$ dimensions.

The Lie algebra $\so(2n,\C)$

Of course, the underlying real algebra is $\so(2n)\colon=\{A\in\Ma(2n,\R): A+A^t=0\}$ and a Cartan subalgebra in $\so(2n)$ is the set of block diagonal matrices $$ diag\{a_1J,\ldots,a_nJ\},\quad a_1,\ldots,a_n\in\R, \quad\mbox{where}\quad J\colon=\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)\in\Ma(2,\R)~. $$ Any matrix $T\in\so(2n)$ is similar to a matrix in this Cartan subalgebra, i.e. there exists a matrix $U\in\SO(2n)$ such that $T=\Ad(U)D$ for some matrix $D$ in this Cartan subalgebra. By definition $\so(2n,\C)\colon=\so(2n)_\C=\{A\in\Ma(2n,\C): A+A^t=0\}$ and the complexification ${\cal H}$ of the Cartan subalgebra is the set of all block diagonal matrices $$ diag\{a_1J,\ldots,a_nJ\} \quad\mbox{where}\quad a_1,\ldots,a_n\in\C~. $$ As $J^2=-1$ and $J^*=-J$ it follows that $H_1H_2^*=diag\{a_1\bar b_1I,\ldots,a_n\bar b_nI\}$ where $I$ is the identity in $\Ma(2,\C)$. Defining the euclidean product on $\so(2n,\C)$ by $\la X,Y\ra\colon=\frac12\tr(XY^*)$, we make all mappings $\ad(H)$, $H\in{\cal H}$, diagonalizable. Moreover ${\cal H}$ is commutative and for ${\cal H}$ to be a Cartan sub-algebra only maximality needs to be proved: So suppose $X\in\so(2n,\C)$ satisfies $[X,H]=0$ for all $H\in{\cal H}$. Writing $X$ as an $n\times n$ block matrix with $2\times2$ blocks $X_{jk}$ and denoting by $H^j\in{\cal H}$ the block diagonal matrix with $a_j=1$ and for $l\neq j$: $a_l=0$, we get $$ (XH^j)_{lm}=\sum_r X_{lr}H^j_{rm}=X_{lj}H^j_{jm} \quad\mbox{and}\quad (H^jX)_{lm}=\sum_r H^j_{lr}X_{rm}=X_{jm}H^j_{lj}~. $$ which implies: $$ [X,H^j]_{lm} =X_{lj}H^j_{jm}-X_{jm}H^j_{lj} $$ For $l,m\neq j$ or $l=m=j$ this equals $0$. For $l=j\neq m$ we get $-X_{jm}J$ and for $m=j\neq l$: $X_{lj}J$. As $J$ is invertible, we conclude that $X$ must be a block diagonal matrix, each block a $2\times2$ matrix. However, such a matrix in $\so(2n,\C)$ must be an element of ${\cal H}$. Hence ${\cal H}$ is a Cartan algebra.
For $n\geq2$ the Lie-algebra $\so(2n,\C)$ is semi-simple.
Next we want to find root vectors $X\in\so(2n,\C)$; again we assume $j < k$ and write $X$ as an $n\times n$ block matrix consisting of $2\times2$ blocks, but this time we assume that the only non-zero block is the $(j,k)$ block, which we will denote by $C\in\Ma(2,\C)$ - and of course the $(k,j)$ block, which must equal $-C^t$. For any $H\in{\cal H}$ we have $[H,X]_{lm}=0$ if $l\neq j$ and $m\neq k$ (or $l\neq k$ and $m\neq j$): $$ [H,X]_{jk} =H_{jj}C-CH_{kk} $$ Now put $$ C=\left(\begin{array}{cc}a&b\\c&d\end{array}\right) $$ Then the equation $[H,X]=\l X$ comes down to the matrix equation $$ \left(\begin{array}{cc}-a_kb-a_jc&-a_jd+a_ka\\-a_kd+a_ja&a_kc+a_jb\end{array}\right) =\l\left(\begin{array}{cc}a&b\\c&d\end{array}\right), $$ which in turn is an eigen-value equation: $$ \left(\begin{array}{cccc} 0&-a_k&-a_j&0\\ a_k&0&0&-a_j\\ a_j&0&0&-a_k\\ 0&a_j&a_k&0 \end{array}\right) \left(\begin{array}{c} a\\ b\\ c\\ d \end{array}\right) =\l \left(\begin{array}{c} a\\ b\\ c\\ d \end{array}\right) $$ This gives us four matrices $C$: \begin{equation}\label{rooeq10}\tag{ROO10} \left(\begin{array}{cc}1&-i\\i&1\end{array}\right),\quad \left(\begin{array}{cc}1&i\\-i&1\end{array}\right),\quad \left(\begin{array}{cc}1&-i\\-i&-1\end{array}\right),\quad \left(\begin{array}{cc}1&i\\i&-1\end{array}\right) \end{equation} with eigen-values $\l$: $$ -i(a_j-a_k),\quad i(a_j-a_k),\quad i(a_j+a_k),\quad -i(a_j+a_k)~. $$ Finally we get for each pair $j < k$ four roots $R$ by solving: $\l=\frac12\tr(HR^*)$. Put $R=diag\{b_1J,\ldots,b_nJ\}$, then $R^*=diag\{-\bar b_1J,\ldots,-\bar b_nJ\}$ and $\frac12\tr(HR^*)=\sum_la_l\bar b_l$. It follows that $b_l=0$ for all $l\notin\{j,k\}$ and $$ b_j=i,b_k=-i\quad\mbox{or}\quad b_j=-i,b_k=i\quad\mbox{or}\quad b_j=-i,b_k=-i\quad\mbox{or}\quad b_j=i,b_k=i~. $$ These roots will be denoted by $R^{jkp}$ for $j < k$ and $p=1,2,3,4$. 
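The four matrices $C$ and their eigen-values can be checked numerically. In the sketch below (Python with numpy, illustrative only) the $4\times4$ matrix acts on the flattened vector $(a,b,c,d)$ for sample values of $a_j,a_k$:

```python
import numpy as np

aj, ak = 2.0, 5.0    # sample entries a_j, a_k of H
K = np.array([[0, -ak, -aj, 0],
              [ak, 0, 0, -aj],
              [aj, 0, 0, -ak],
              [0, aj, ak, 0]], dtype=complex)

i = 1j
# the matrices C of (ROO10), flattened row-wise to (a, b, c, d)
Cs = [np.array([1, -i,  i,  1]),
      np.array([1,  i, -i,  1]),
      np.array([1, -i, -i, -1]),
      np.array([1,  i,  i, -1])]
lams = [-i*(aj-ak), i*(aj-ak), i*(aj+ak), -i*(aj+ak)]

for C, lam in zip(Cs, lams):
    assert np.allclose(K @ C, lam * C)
```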
Thus typical roots are - we put $j=1$ and $k=2$: $$ diag\{iJ,-iJ,0,\ldots,0\},\quad diag\{-iJ,iJ,0,\ldots,0\},\quad diag\{-iJ,-iJ,0,\ldots,0\},\quad diag\{iJ,iJ,0,\ldots,0\}~. $$ As $\dim\so(2n,\C)=2n(2n-1)/2$ and $\dim{\cal H}=n$ we must have $n(2n-1)-n=2n(n-1)$ roots. On the other hand we've found $4n(n-1)/2=2n(n-1)$ roots $R^{jkp}$, $j < k\in\{1,\ldots,n\}$ and $p\in\{1,2,3,4\}$, i.e. we've got all roots! For $\{l,m\}\cap\{j,k\}=\emptyset$ we have $\la R^{jkp},R^{lmq}\ra=0$; for $l=j$ and $m=k$ the four roots $R^{jkp}$, $p=1,2,3,4$, form the vertices of a square; if $\{l,m\}\cap\{j,k\}$ contains exactly one point, then $$ \la R^{jkp},R^{lmq}\ra=\pm1~. $$
Identifying ${\cal H}$ via $iH^{j}\mapsto e_j$ with $\C^n$ the roots become the vectors $\pm(e_j+e_k),\pm(e_j-e_k)$, $j < k$. In particular the root system of $\so(4,\C)$ is isometric to the set of vertices $\pm e_1\pm e_2$ of a square.
Figure: the root system of $\so(4,\C)$.
If $r,s$ are roots such that $s\perp r$ and neither $r+s$ nor $r-s$ is a root, then the sub-space $\lhull{r,x_r,x_r^*,s,x_s,x_s^*}$ is a sub-algebra isomorphic to $\sla(2,\C)\oplus\sla(2,\C)$.
$\so(4,\C)$ is isomorphic to $\sla(2,\C)\oplus\sla(2,\C)$.
Denote the matrices in \eqref{rooeq10} by $C_1,C_2=\bar C_1,C_3$ and $C_4=\bar C_3$, then the root spaces of $\so(4,\C)$ are generated by four matrices: $$ X^1\colon=\left(\begin{array}{cc}0&C_1\\-C_1^t&0\end{array}\right),\quad Y^1\colon=-\left(\begin{array}{cc}0&C_2\\-C_2^t&0\end{array}\right),\quad X^2\colon=\left(\begin{array}{cc}0&C_3\\-C_3^t&0\end{array}\right),\quad Y^2\colon=-\left(\begin{array}{cc}0&C_4\\-C_4^t&0\end{array}\right)~. $$ i.e. $Y^j=X^{j*}$, with roots: $$ R^1\colon=\left(\begin{array}{cc}iJ&0\\0&-iJ\end{array}\right),\quad -R^1=\left(\begin{array}{cc}-iJ&0\\0&iJ\end{array}\right),\quad R^2\colon=\left(\begin{array}{cc}iJ&0\\0&iJ\end{array}\right),\quad -R^2=\left(\begin{array}{cc}-iJ&0\\0&-iJ\end{array}\right), $$ which are self-adjoint and ${\cal H}$ is generated by $R^1$ and $R^2$. Moreover $$ \la R^1,R^2\ra=\tfrac12\tr(R^1R^2)=0 \quad\mbox{and}\quad \la R^1,R^1\ra=2 $$ The sub-algebras $\lhull{R^1,X^1,Y^1}$ and $\lhull{R^2,X^2,Y^2}$ are both isomorphic to $\sla(2,\C)$ cf. the discussion preceding theorem. Moreover, as $R^1\pm R^2$ is not a root and $R^1\perp R^2$, we must have for all $X\in\lhull{R^1,X^1,Y^1}$ and all $Y\in\lhull{R^2,X^2,Y^2}$: $[X,Y]=0$. Hence $\so(4,\C)$ is isomorphic to $\sla(2,\C)\oplus\sla(2,\C)$.
Prove that $\so(4)$ is isomorphic to $\su(2)\oplus\su(2)$.
$$ H^1\colon=\left(\begin{array}{cccc} 0&-1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0 \end{array}\right),\quad X^1\colon=\left(\begin{array}{cccc} 0&0&1&0\\ 0&0&0&1\\ -1&0&0&0\\ 0&-1&0&0 \end{array}\right),\quad Y^1\colon=\left(\begin{array}{cccc} 0&0&0&-1\\ 0&0&1&0\\ 0&-1&0&0\\ 1&0&0&0 \end{array}\right), $$ It follows that $[H^1,X^1]=2Y^1$, $[H^1,Y^1]=-2X^1$ and $[X^1,Y^1]=2H^1$. On the other hand putting $$ h\colon=i\s_3=\left(\begin{array}{cc} i&0\\ 0&-i \end{array}\right),\quad x\colon=i\s_2=\left(\begin{array}{cc} 0&1\\ -1&0 \end{array}\right),\quad y\colon=i\s_1=\left(\begin{array}{cc} 0&i\\ i&0 \end{array}\right), $$ where $\s_j$ denote the Pauli spin matrices -cf. subsection, we get: $[h,x]=2y$, $[h,y]=-2x$ and $[x,y]=2h$ and thus the sub-algebra $\lhull{H^1,X^1,Y^1}$ is isomorphic to $\su(2)$. Analogously the sub-algebra generated by $$ H^2\colon=\left(\begin{array}{cccc} 0&-1&0&0\\ 1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0 \end{array}\right),\quad X^2\colon=\left(\begin{array}{cccc} 0&0&1&0\\ 0&0&0&-1\\ -1&0&0&0\\ 0&1&0&0 \end{array}\right),\quad Y^2\colon=\left(\begin{array}{cccc} 0&0&0&1\\ 0&0&1&0\\ 0&-1&0&0\\ -1&0&0&0 \end{array}\right) $$ is also isomorphic to $\su(2)$. Finally for all $Z^1\in\lhull{H^1,X^1,Y^1}$ and all $Z^2\in\lhull{H^2,X^2,Y^2}$: $[Z^1,Z^2]=0$.
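The stated commutation relations and the vanishing of all mixed brackets can be checked numerically. In the following sketch (Python with numpy, illustrative; the block-matrix helpers are ours) the six matrices above are rebuilt from $2\times2$ blocks:

```python
import numpy as np

J = np.array([[0., -1.], [1., 0.]])
I = np.eye(2)
D = np.diag([1., -1.])
S = np.array([[0., 1.], [1., 0.]])
Z = np.zeros((2, 2))

def blk(A, B, C, E):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    return np.block([[A, B], [C, E]])

def br(a, b):
    return a @ b - b @ a

H1, X1, Y1 = blk(J, Z, Z, -J), blk(Z, I, -I, Z), blk(Z, J, J, Z)
H2, X2, Y2 = blk(J, Z, Z, J), blk(Z, D, -D, Z), blk(Z, S, -S, Z)

# each triple satisfies the su(2)-type relations [h,x]=2y, [h,y]=-2x, [x,y]=2h
for H, X, Y in [(H1, X1, Y1), (H2, X2, Y2)]:
    assert np.allclose(br(H, X), 2 * Y)
    assert np.allclose(br(H, Y), -2 * X)
    assert np.allclose(br(X, Y), 2 * H)

# and the two copies commute
for A in (H1, X1, Y1):
    for B in (H2, X2, Y2):
        assert np.allclose(br(A, B), 0)
```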

The Lie algebra $\so(2n+1,\C)$

We get a Cartan sub-algebra of $\so(2n+1)$ by taking all elements of the Cartan sub-algebra of $\so(2n)$ augmented by one row and one column of zeros. As in the previous subsection it can be proved that this is indeed a Cartan sub-algebra of $\so(2n+1)$.
For $n\geq1$ the Lie-algebra $\so(2n+1,\C)$ is semi-simple.
The Cartan sub-algebra of $\so(2n+1)$ is clearly isomorphic to the Cartan sub-algebra of $\so(2n)$. However there are additional roots and additional root vectors. Let's assume these additional root vectors have the following form: $$ \left(\begin{array}{cc} 0&-x^t\\ x&0 \end{array}\right) $$ for some $x=(x_1,\ldots,x_{2n})\in\C^{2n}$. Thus for all $H\in{\cal H}\sbe\so(2n,\C)$: $$ \left(\begin{array}{cc} H&0\\ 0&0 \end{array}\right) \left(\begin{array}{cc} 0&-x^t\\ x&0 \end{array}\right) - \left(\begin{array}{cc} 0&-x^t\\ x&0 \end{array}\right) \left(\begin{array}{cc} H&0\\ 0&0 \end{array}\right) =\l \left(\begin{array}{cc} 0&-x^t\\ x&0 \end{array}\right) $$ i.e. $$ \left(\begin{array}{cc} 0&-Hx^t\\ 0&0 \end{array}\right) - \left(\begin{array}{cc} 0&0\\ xH&0 \end{array}\right) = \left(\begin{array}{cc} 0&-Hx^t\\ -xH&0 \end{array}\right) = \l \left(\begin{array}{cc} 0&-x^t\\ x&0 \end{array}\right)~. $$ As $H^t=-H$ we end up solving $Hx^t=\l x^t$. Now the eigen-values of $J$ are $\pm i$ with eigen-vectors $(1,-i)^t$ and $(1,i)^t$, respectively, and thus the eigen-values of $H=diag\{a_1J,\ldots,a_nJ\}$ are $\pm ia_1,\ldots,\pm ia_n$ with eigen-vectors $$ (1,-i,0,\ldots,0,0)^t,\ldots,(0,0,\ldots,0,1,-i)^t \quad\mbox{and}\quad (1,i,0,\ldots,0,0)^t,\ldots,(0,0,\ldots,0,1,i)^t~. $$ Finally, $ia_j=\frac12\tr(HR^{j*})$ and $R^j=diag\{b_1J,\ldots,b_nJ\}$ implies $b_l=0$ for all $l\neq j$ and $b_j=-i$. Hence the corresponding roots are $\pm iH^j$. We've found $2n^2=2n(n-1)+2n=(2n+1)2n/2-n=\dim\so(2n+1,\C)-\dim{\cal H}$ roots, which means we've found all of them.
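For $n=2$ the eigen-value claim can be checked numerically. In the following sketch the values of $a_1,a_2$ are arbitrary sample choices; it verifies which sign of the eigen-value goes with which eigen-vector:

```python
import numpy as np

# H = diag{a1*J, a2*J}, i.e. the n = 2 case; a1, a2 are arbitrary samples
J = np.array([[0, -1], [1, 0]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
a1, a2 = 2.0, 5.0
H = np.block([[a1 * J, Z], [Z, a2 * J]])

v_plus = np.array([1, -1j, 0, 0])    # eigen-vector for the eigen-value +i*a1
v_minus = np.array([1, 1j, 0, 0])    # eigen-vector for the eigen-value -i*a1

assert np.allclose(H @ v_plus, 1j * a1 * v_plus)
assert np.allclose(H @ v_minus, -1j * a1 * v_minus)
```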
Identifying ${\cal H}$ isometrically via $iH^{j}\mapsto e_j$ with $\C^n$ the roots become the vectors $\pm(e_j+e_k),\pm(e_j-e_k)$, $j < k$, and $\pm e_j$, $j=1,\ldots,n$. In particular the root system of $\so(5,\C)$ is isometric to the set comprising the four vertices $\pm e_1\pm e_2$ of a square and the four midpoints of its edges.
so(5)

The Lie algebra $\spa(n,\C)$

First, we define the Lie algebra $\spa(n,\C)=\{X\in\Ma(2n,\C): X^t=JXJ\}$, where $J\in\Ma(2n,\C)$ now denotes the block matrix $$ J\colon=\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)~. $$ The intersection $\spa(n)\colon=\spa(n,\C)\cap\su(2n)$ is called the compact symplectic algebra and its complexification coincides with $\spa(n,\C)$. Writing $X\in\Ma(2n,\C)$ as block matrix $$ X=\left(\begin{array}{cc}A&B\\C&D\end{array}\right) \quad\mbox{then $X\in\spa(n,\C)$ iff}\quad B^t=B,C^t=C,D=-A^t, $$ and $X\in\spa(n)$ iff $$ \bar A^t=-A,B^t=B,C=-\bar B^t,D=-A^t; $$ the conditions $\bar D^t=-D$ and $C^t=C$ follow from these. Thus for $\spa(n)$ we choose $A\in\Ma(n,\C)$ anti-hermitian and $B\in\Ma(n,\C)$ symmetric and put $C\colon=-\bar B^t$ and $D\colon=-A^t$; hence the real dimension of $\spa(n)$ (and likewise the complex dimension of $\spa(n,\C)$) is $n^2+n(n+1)$. The Cartan sub-algebra ${\cal H}$ will be the complexification of the sub-space $$ \{diag\{ia_1,\ldots,ia_n,-ia_1,\ldots,-ia_n\}:a_1,\ldots,a_n\in\R\}, $$ which is of course commutative and its dimension is $n$. Taking the euclidean product $\tr(XY^*)/2$ we have for all $H\in{\cal H}$: $\ad(H)^*=\ad(H^*)=-\ad(H)$ and thus all mappings $\ad(H)$ are diagonalizable. Finally, ${\cal H}$ is maximal: suppose $B,C\in\Ma(n,\C)$ are symmetric, $A\in\Ma(n,\C)$ and $D\in\Ma(n,\C)$ is diagonal, then \begin{equation}\label{rooeq11}\tag{ROO11} H\colon=\left(\begin{array}{cc}D&0\\0&-D\end{array}\right),\quad X\colon=\left(\begin{array}{cc}A&B\\C&-A^t\end{array}\right),\quad [H,X]=\left(\begin{array}{cc}[D,A]&DB+BD\\-CD-DC&-[D,A]^t\end{array}\right)~. \end{equation} Hence $[H,X]=0$ for all diagonal $D$ iff $A$ is diagonal and for all diagonal matrices $D\in\Ma(n,\C)$: $CD+DC=BD+DB=0$, which holds if and only if $B=C=0$. Hence $X\in{\cal H}$ and therefore ${\cal H}$ is a Cartan sub-algebra.
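The block description of $\spa(n,\C)$ can be verified numerically: the sketch below builds a random $X$ from the recipe above (arbitrary $A$, symmetric $B,C$, $D=-A^t$; the seed and $n=3$ are arbitrary sample choices) and tests the defining relation $X^t=JXJ$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
Zn = np.zeros((n, n))
J = np.block([[Zn, -np.eye(n)], [np.eye(n), Zn]])

# the recipe: A arbitrary, B and C symmetric, D = -A^t
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)); B = B + B.T
C = rng.standard_normal((n, n)); C = C + C.T
X = np.block([[A, B], [C, -A.T]])

# defining relation of sp(n, C)
assert np.allclose(X.T, J @ X @ J)
```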
For $n\geq1$ the Lie-algebra $\spa(n,\C)$ is semi-simple.
Next we need to find $n^2+n(n+1)-n=2n^2$ root vectors $X$ and roots $R$: 1. According to the previous subsection we should try for $j\neq k$: $$ X\colon=\left(\begin{array}{cc}E^{jk}&0\\0&-E^{kj}\end{array}\right)~. $$ By the previous formula \eqref{rooeq11} for the commutator we get: $$ \ad(H)X =\left(\begin{array}{cc}[D,E^{jk}]&0\\0&-[D,E^{jk}]^t\end{array}\right) $$ and since $[D,E^{jk}]=(D_{jj}-D_{kk})E^{jk}$ this equals $$ (D_{jj}-D_{kk})\left(\begin{array}{cc}E^{jk}&0\\0&-E^{kj}\end{array}\right) =(D_{jj}-D_{kk})X~. $$ The associated root $R\in{\cal H}$ is determined by $\frac12\tr(HR^*)=D_{jj}-D_{kk}$, i.e. $$ R=\left(\begin{array}{cc}E^{jj}-E^{kk}&0\\0&-E^{jj}+E^{kk}\end{array}\right)~. $$ 2. As the off-diagonal blocks of any root vector $X$ must be symmetric matrices we will try the simplest: $$ X\colon=\left(\begin{array}{cc}0&E^{jk}+E^{kj}\\0&0\end{array}\right) \quad\mbox{and}\quad Y\colon=\left(\begin{array}{cc}0&0\\E^{jk}+E^{kj}&0\end{array}\right)~. $$ Putting $E\colon=E^{jk}+E^{kj}$ we get by \eqref{rooeq11}: $$ [H,X]=\left(\begin{array}{cc}0&DE+ED\\0&0\end{array}\right) \quad\mbox{and}\quad [H,Y]=\left(\begin{array}{cc}0&0\\-ED-DE&0\end{array}\right)~. $$ Finally $ED+DE=(D_{jj}+D_{kk})E$ and therefore $$ [H,X]=(D_{jj}+D_{kk})X \quad\mbox{and}\quad [H,Y]=-(D_{jj}+D_{kk})Y $$ i.e. $X$ and $Y$ are root vectors with roots $$ \left(\begin{array}{cc}E^{jj}+E^{kk}&0\\0&-E^{jj}-E^{kk}\end{array}\right) \quad\mbox{and}\quad \left(\begin{array}{cc}-E^{jj}-E^{kk}&0\\0&E^{jj}+E^{kk}\end{array}\right)~. $$ Together we've found $(n^2-n)+2(n(n+1)/2)=2n^2$ roots of $\spa(n,\C)$, which means we've found all roots.
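Both families of root vectors can be checked numerically. A sketch for $n=3$ with an arbitrary sample diagonal $D$ (indices are $0$-based in the code):

```python
import numpy as np

n, j, k = 3, 0, 2                     # j != k, 0-based indices
D = np.diag([1.0, 4.0, 9.0])          # arbitrary sample diagonal
Z = np.zeros((n, n))
H = np.block([[D, Z], [Z, -D]])

def br(A, B):
    # the Lie bracket [A,B] = AB - BA
    return A @ B - B @ A

Ejk = np.zeros((n, n)); Ejk[j, k] = 1
Ekj = Ejk.T

# first family: root value D_jj - D_kk
X = np.block([[Ejk, Z], [Z, -Ekj]])
assert np.allclose(br(H, X), (D[j, j] - D[k, k]) * X)

# second family: symmetric off-diagonal blocks, root values +-(D_jj + D_kk)
E = Ejk + Ekj
X2 = np.block([[Z, E], [Z, Z]])
Y2 = np.block([[Z, Z], [E, Z]])
assert np.allclose(br(H, X2), (D[j, j] + D[k, k]) * X2)
assert np.allclose(br(H, Y2), -(D[j, j] + D[k, k]) * Y2)
```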
Identifying ${\cal H}$ isometrically via $$ \left(\begin{array}{cc}E^{jj}&0\\0&-E^{jj}\end{array}\right) \mapsto e_j $$ with $\C^n$ the roots become the vectors $\pm(e_j+e_k),\pm(e_j-e_k)$, $j < k$, and $\pm 2e_j$, $j=1,\ldots,n$. In particular the root system of $\spa(2,\C)$ is isometric to the set comprising the four vertices $\pm2e_1$ and $\pm2e_2$ of a square and the four midpoints of its edges.
sp(2)
The complexification of the Lie-algebra of the Heisenberg group is the sub-algebra $A$ of $\sla(3,\C)$ comprising all matrices $$ \left(\begin{array}{ccc} 0&a_1&a_2\\ 0&0&a_3\\ 0&0&0 \end{array}\right),\quad a_1,a_2,a_3\in\C~. $$ Compute the center of $A$ and determine all maximal commutative sub-algebras. Compute the adjoint representation. Conclude that $A$ doesn't have a Cartan sub-algebra. Compute $\ad(A)^*$ with respect to the euclidean product $\tr(AB^*)$.
We have $$ \ad(A)X=\left(\begin{array}{ccc} 0&0&a_1x_3-a_3x_1\\ 0&0&0\\ 0&0&0 \end{array}\right), $$ from which we conclude that the center of $A$ is $\lhull{E^{13}}$. Suppose $X\colon=x_1E^{12}+x_3E^{23}$ and $Y\colon=y_1E^{12}+y_3E^{23}$ are elements of a commutative sub-algebra, then $[X,Y]=(x_1y_3-x_3y_1)E^{13}=0$, i.e. $Y=\l X$. Hence any maximal commutative sub-algebra must be a sub-space of the form $\lhull{E^{13},x_1E^{12}+x_3E^{23}}$. Finally the matrix of $\ad(A)$ with respect to the basis $E^{12},E^{13},E^{23}$ is $$ \left(\begin{array}{ccc} 0&0&0\\ -a_3&0&a_1\\ 0&0&0 \end{array}\right)~. $$ Now $\ad(A)^2=0$ and this means that $\ad(A)$ is diagonalizable iff $\ad(A)=0$, i.e. iff $A$ lies in the center $\lhull{E^{13}}$; since every maximal commutative sub-algebra contains elements outside the center, $A$ has no Cartan sub-algebra. As $E^{12},E^{13},E^{23}$ is an orthonormal basis we get for the matrix of $\ad(A)^*$: $$ \left(\begin{array}{ccc} 0&-\bar a_3&0\\ 0&0&0\\ 0&\bar a_1&0 \end{array}\right)~. $$
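A quick numerical check of these computations (with random sample coefficients): $[A,X]$ is always a multiple of $E^{13}$, hence $\ad(A)^2=0$:

```python
import numpy as np

rng = np.random.default_rng(1)

def heis(a1, a2, a3):
    # generic element of the complexified Heisenberg algebra
    return np.array([[0, a1, a2], [0, 0, a3], [0, 0, 0]], dtype=complex)

def br(X, Y):
    # the Lie bracket [X,Y] = XY - YX
    return X @ Y - Y @ X

A = heis(*(rng.standard_normal(3) + 1j * rng.standard_normal(3)))
X = heis(*(rng.standard_normal(3) + 1j * rng.standard_normal(3)))

C = br(A, X)
E13 = np.zeros((3, 3)); E13[0, 2] = 1

assert np.allclose(C, C[0, 2] * E13)   # [A,X] lies in the center <E^13>
assert np.allclose(br(A, C), 0)        # hence ad(A)^2 = 0
```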
Let $A$ be the sub-space of all $$ \left(\begin{array}{ccc} c_{11}&c_{12}&b_1\\ c_{21}&c_{22}&b_2\\ 0&0&0\end{array}\right) \in\Ma(3,\C) $$ such that $c_{11}+c_{22}=0$. Then $A$ is a sub-algebra of the Lie-algebra $\Ma(3,\C)$. Choose the following basis: $$ H\colon=\left(\begin{array}{ccc} 1&0&0\\ 0&-1&0\\ 0&0&0\end{array}\right), X\colon=\left(\begin{array}{ccc} 0&1&0\\ 0&0&0\\ 0&0&0\end{array}\right), Y\colon=\left(\begin{array}{ccc} 0&0&0\\ 1&0&0\\ 0&0&0\end{array}\right), E\colon=\left(\begin{array}{ccc} 0&0&1\\ 0&0&0\\ 0&0&0\end{array}\right), F\colon=\left(\begin{array}{ccc} 0&0&0\\ 0&0&1\\ 0&0&0\end{array}\right)~. $$ 1. Prove that the only non-trivial ideal in $A$ is $\lhull{E,F}$. 2. Prove that $A$ is not semi-simple. 3. Prove that $\lhull{H}$ is a maximal commutative sub-algebra of $A$ and that $\ad(H)$ is diagonalizable with eigen-values $0,\pm1$ and $\pm2$.
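The eigen-value claim for $\ad(H)$ can be checked numerically on the basis above; in this sketch the elementary matrix $E$ of the text is named `Emat` to avoid a clash with the helper function:

```python
import numpy as np

def unit(i, j):
    # elementary matrix E^{ij} (0-based indices)
    M = np.zeros((3, 3))
    M[i, j] = 1.0
    return M

H = np.diag([1.0, -1.0, 0.0])
X, Y = unit(0, 1), unit(1, 0)
Emat, F = unit(0, 2), unit(1, 2)

def br(A, B):
    # the Lie bracket [A,B] = AB - BA
    return A @ B - B @ A

# ad(H) acts diagonally on the basis with eigen-values 2, -2, 1, -1 and 0
assert np.allclose(br(H, X), 2 * X)
assert np.allclose(br(H, Y), -2 * Y)
assert np.allclose(br(H, Emat), Emat)
assert np.allclose(br(H, F), -F)
assert np.allclose(br(H, H), np.zeros((3, 3)))
```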

Simple Lie-Algebras


Last modified: Wed Jun 16 13:31:42 CEST 2021