1 SU(3)
What should you be acquainted with? 1. Linear Algebra, in particular inner product spaces over both the real and the complex numbers. 2. The very basics of Group Theory and Complex Analysis. This chapter is essentially taken from Brian Hall, Lie Groups, Lie Algebras, and Representations, Chapter 6.

Representations of $\su(3)$ and $\SU(3)$

Weights and Roots

Complexification of su(n)

Let $E^{jk}$ denote the standard basis of $\Ma(n,\C)$, then the matrices $iH_1\colon=i(E^{11}-E^{22})$, $\ldots$, $iH_{n-1}\colon=i(E^{n-1,n-1}-E^{nn})$ and for $j < k$: $A_{jk}\colon=E^{jk}-E^{kj}$ and $B_{jk}\colon=iE^{jk}+iE^{kj}$ form a basis of $\su(n)$. Hence for any $X\in\su(n)$ there are real numbers $h_j,a_{jk},b_{jk}$ such that $$ X=\sum_{j < n} ih_jH_j+\sum_{j < k}(a_{jk}A_{jk}+b_{jk}B_{jk}) $$ The complexification of $\su(n)$ is the space of these linear combinations but with the reals $h_j,a_{jk},b_{jk}$ replaced by complex numbers. As we've seen (exam) the complexification coincides with the complex Lie-algebra $\sla(n,\C)$ of traceless complex $n$ by $n$ matrices.
The matrices $iH_j/\sqrt2$, $A_{jk}/\sqrt2$ and $B_{jk}/\sqrt2$ form a normalized basis of $\su(n)$ with respect to its canonical euclidean product: $\la A,B\ra=\tr(AB^*)$ and the extension of this product to $\sla(n,\C)$ is also given by $\la A,B\ra=\tr(AB^*)$.
For $n=3$ we choose the following eight matrices as basis for $\sla(3,\C)$: $$ H_1\colon=\left(\begin{array}{ccc} 1&0&0\\ 0&-1&0\\ 0&0&0 \end{array}\right), X_1\colon=\left(\begin{array}{ccc} 0&1&0\\ 0&0&0\\ 0&0&0 \end{array}\right)=E^{12}, Y_1\colon=\left(\begin{array}{ccc} 0&0&0\\ 1&0&0\\ 0&0&0 \end{array}\right)=E^{21} $$ $$ H_2\colon=\left(\begin{array}{ccc} 0&0&0\\ 0&1&0\\ 0&0&-1 \end{array}\right), X_2\colon=\left(\begin{array}{ccc} 0&0&0\\ 0&0&1\\ 0&0&0 \end{array}\right)=E^{23}, Y_2\colon=\left(\begin{array}{ccc} 0&0&0\\ 0&0&0\\ 0&1&0 \end{array}\right)=E^{32} $$ $$ X_3\colon=\left(\begin{array}{ccc} 0&0&1\\ 0&0&0\\ 0&0&0 \end{array}\right)=E^{13}, Y_3\colon=\left(\begin{array}{ccc} 0&0&0\\ 0&0&0\\ 1&0&0 \end{array}\right)=E^{31} $$ Notice: $H_j\in i\su(3)$ and $X_j,Y_j\notin\su(3)$! The spaces $\lhull{H_1,X_1,Y_1}$ and $\lhull{H_2,X_2,Y_2}$ are obviously sub-algebras and isomorphic to $\sla(2,\C)$, i.e. $$ \forall j\in\{1,2\}\quad [H_j,X_j]=2X_j, [H_j,Y_j]=-2Y_j, [X_j,Y_j]=H_j $$ Also, the complex subspace $\lhull{H_1,H_2}$ is the (complexified) Lie-algebra of the torus $\TT^2$, i.e. the traceless diagonal matrices; this sub-algebra is commutative and thus $[H_1,H_2]=0$. Moreover we have the following commutation relations: \begin{eqnarray*} &&[H_1,X_2]=-X_2, [H_1,Y_2]=Y_2, [H_1,X_3]=X_3, [H_1,Y_3]=-Y_3\\ &&[H_2,X_1]=-X_1, [H_2,Y_1]=Y_1, [H_2,X_3]=X_3, [H_2,Y_3]=-Y_3 \end{eqnarray*}
Verify that $X_1X_2=X_3$, $Y_2Y_1=Y_3$ and all other products $X_jX_k$ and $Y_jY_k$ vanish. Thus we have $X_3=[X_1,X_2]$ and $Y_3=-[Y_1,Y_2]$.
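These identities are quick to confirm by machine; here is a minimal sketch with numpy (the helper names `E` and `comm` are ours, everything else is just the basis defined above).

```python
import numpy as np

def E(j, k):
    """Standard basis matrix E^{jk} of M(3,C), 1-based indices."""
    M = np.zeros((3, 3))
    M[j - 1, k - 1] = 1
    return M

X1, X2, X3 = E(1, 2), E(2, 3), E(1, 3)
Y1, Y2, Y3 = E(2, 1), E(3, 2), E(3, 1)
comm = lambda A, B: A @ B - B @ A

assert np.array_equal(X1 @ X2, X3)                # X_1 X_2 = X_3
assert np.array_equal(Y2 @ Y1, Y3)                # Y_2 Y_1 = Y_3
assert np.array_equal(X2 @ X1, np.zeros((3, 3)))  # the reversed products vanish
assert np.array_equal(Y1 @ Y2, np.zeros((3, 3)))
assert np.array_equal(comm(X1, X2), X3)           # hence X_3 = [X_1, X_2]
assert np.array_equal(comm(Y1, Y2), -Y3)          # and Y_3 = -[Y_1, Y_2]
print("product and commutator identities verified")
```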
Put $H_3\colon=-H_1-H_2$, then $$ [H_3,X_3]=-2X_3, [H_3,Y_3]=2Y_3, [X_3,Y_3]=-H_3 $$ and the space $\lhull{H_3,X_3,Y_3}$ is a sub-algebra isomorphic to $\sla(2,\C)$.
As $\sla(3,\C)$ is simple, the adjoint representation $\ad(X)Y=[X,Y]$ of $\su(3)$ is irreducible: $$ \begin{array}{r|cccccccc} \ad&H_1&H_2&X_1&X_2&X_3&Y_1&Y_2&Y_3\\ \hline \ad(H_1)&0&0&2X_1&-X_2&X_3&-2Y_1&Y_2&-Y_3\\ \ad(H_2)&0&0&-X_1&2X_2&X_3&Y_1&-2Y_2&-Y_3\\ \ad(X_1)&-2X_1&X_1&0&X_3&0&H_1&0&-Y_2\\ \ad(X_2)&X_2&-2X_2&-X_3&0&0&0&H_2&Y_1\\ \ad(X_3)&-X_3&-X_3&0&0&0&-X_2&X_1&-H_3\\ \ad(Y_1)&2Y_1&-Y_1&-H_1&0&X_2&0&-Y_3&0\\ \ad(Y_2)&-Y_2&2Y_2&0&-H_2&-X_1&Y_3&0&0\\ \ad(Y_3)&Y_3&Y_3&Y_2&-Y_1&H_3&0&0&0\\ \end{array} $$

Weights

The classification of the irreducible representations of $\SU(2)$ heavily relied on the fact that both $X_1$ and $Y_1$ are ladder operators for $H_1$. This is still true for $\SU(3)$ but there are several additional commutation relations, one of which is $[H_1,H_2]=0$, which implies that given any representation $\psi:\sla(n,\C)\rar\Hom(E)$ in a finite dimensional complex vector-space $E$, the linear mappings $\psi(H_1)$ and $\psi(H_2)$ have at least one joint eigen-vector - if both $\psi(H_1)$ and $\psi(H_2)$ are diagonalizable, then $E$ has a basis of joint eigen-vectors. Indeed, we can always assume that these operators are jointly diagonalizable, because $\su(n)$ is the Lie-algebra of the compact group $\SU(n)$ and hence for any representation $\psi:\sla(n,\C)\rar\Hom(E)$ there is a euclidean product on $E$ such that for all $X\in\su(n)$: $\psi(X)^*=-\psi(X)$ (cf. section).
This implies that $\psi$ is the direct sum of irreducible representations: if $F$ is invariant, then so is $F^\perp$, because for all $y\in F^\perp$ and all $x\in F$: $$ \la\psi(X)y,x\ra =\la y,\psi(X)^*x\ra =-\la y,\psi(X)x\ra=0~. $$ From this it follows (cf. proposition) that $\psi$ is indeed the direct sum of irreducible representations.
Let ${\cal A}$ be the space generated by $\psi(H_1)$ and $\psi(H_2)$, then by subsection a weight of ${\cal A}$ can be identified with a pair $\l\colon=(\l(H_1),\l(H_2))\in\C^2$, such that there exists a common eigen-vector $x\in E\sm\{0\}$: $$ \psi(H_1)x=\l(H_1)x \quad\mbox{and}\quad \psi(H_2)x=\l(H_2)x~. $$ $\l$ will be called a weight of the representation $\psi$ and $x$ is called a weight vector corresponding to the weight $\l$. The space of all weight vectors for the weight $\l$ is called the weight space of $\l$ and the dimension of the weight space of $\l$ is called the multiplicity of the weight $\l$. By restricting $\psi$ to the sub-algebra generated by $H_1,X_1,Y_1$ and $H_2,X_2,Y_2$ respectively, we know from proposition that both components of a weight must be integers.
The weights of a representation $\psi:\sla(2,\C)\rar\Hom(E)$ are just the eigen-values of $\psi(H)$ -cf. subsection. Thus the weights of the standard representation of $\sla(2,\C)$ are $\pm1$.
Let us calculate the weights of the standard representation of $\sla(3,\C)$ and the weights of its dual representation: obviously the canonical basis vectors $e_1,e_2,e_3$ of $\C^3$ are weight vectors with weights $(1,0)$, $(-1,1)$ and $(0,-1)$. Since the dual representation is the negative of the transpose, the dual basis vectors $e_1^*,e_2^*$ and $e_3^*$ are weight vectors of the dual with weights $(-1,0)$, $(1,-1)$, $(0,1)$. Thus the weights of the dual of the standard representation are the negatives of the weights of the standard representation. This is actually true for any representation if we assume that $\psi(H_1)$ and $\psi(H_2)$ are jointly diagonalizable, which is no loss of generality. Under this condition the weights of the dual representation $\psi^d$ of $\psi:\sla(3,\C)\rar\Hom(E)$ are the negatives of the weights of $\psi$: Choose a basis $e_1,\ldots,e_n$ of $E$ such that both $\psi(H_1)$ and $\psi(H_2)$ are diagonal with respect to this basis and $\psi(H_j)e_k=m_{jk}e_k$. Then the matrix of $\psi^d(H_j)$ with respect to the dual basis $e_1^*,\ldots,e_n^*$ is the negative of the transpose of the matrix of $\psi(H_j)$ with respect to the basis $e_1,\ldots,e_n$ (cf. section). Hence $\psi^d(H_j)e_k^*=-m_{jk}e_k^*$.

Roots

A pair $r\colon=(r(H_1),r(H_2))\in\C^2\sm\{(0,0)\}$ is said to be a root if for some $Z\in\sla(3,\C)$: $[H_1,Z]=r(H_1)Z$ and $[H_2,Z]=r(H_2)Z$, i.e. $Z$ is a ladder operator for both $H_1$ and $H_2$ which does not commute with both of them. $Z\in\sla(3,\C)$ is called a root vector. Equivalently we may say that $Z$ is an eigen-vector for both $\ad(H_1)$ and $\ad(H_2)$ and the corresponding eigen-values are $r(H_1)$ and $r(H_2)$ and this in turn means:
Roots are the non zero weights of the adjoint representation.
The above commutation relations give us the following roots $$ \begin{array}{r|c} \mbox{root}&\mbox{root vector}\\ \hline r_1=(2,-1)&X_1\\ r_2=(-1,2)&X_2\\ r_3=r_1+r_2=(1,1)&X_3\\ -r_1=(-2,1)&Y_1\\ -r_2=(1,-2)&Y_2\\ -r_3=(-1,-1)&Y_3 \end{array} $$ i.e. all vectors $X_j,Y_j$, $j=1,2,3$ are root vectors. As both $H_1$ and $H_2$ are weight vectors, the vectors $X_j,Y_j$, $j=1,2,3$ and $H_1/\sqrt2,H_2/\sqrt2$ form a basis of normed weight vectors in $\sla(3,\C)$ furnished with the euclidean product $\tr(XY^*)$.
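This root table is easily cross-checked numerically; a small sketch (numpy, with the matrices defined above):

```python
import numpy as np

def E(j, k):
    M = np.zeros((3, 3))
    M[j - 1, k - 1] = 1
    return M

H1, H2 = np.diag([1., -1., 0.]), np.diag([0., 1., -1.])
ad = lambda A, B: A @ B - B @ A

roots = {  # root vector -> root (r(H_1), r(H_2)) as listed above
    "X1": (E(1, 2), (2, -1)), "X2": (E(2, 3), (-1, 2)), "X3": (E(1, 3), (1, 1)),
    "Y1": (E(2, 1), (-2, 1)), "Y2": (E(3, 2), (1, -2)), "Y3": (E(3, 1), (-1, -1)),
}
for name, (Z, (r1, r2)) in roots.items():
    assert np.allclose(ad(H1, Z), r1 * Z) and np.allclose(ad(H2, Z), r2 * Z)
print("root table verified")
```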
The roots of $\sla(2,\C)$ are $2$ and $-2$ with root vectors $X$ and $Y$, respectively - cf. subsection.
The following is a straightforward generalization of example:
Let $r$ be a root with corresponding root vector $Z$. Suppose $\psi:\sla(3,\C)\rar\Hom(E)$ is a finite dimensional representation, $\l$ a weight for $\psi$ and $x\neq0$ a weight vector. Then for $j=1,2,3$: $$ \psi(H_j)\psi(Z)x=(\l(H_j)+r(H_j))\psi(Z)x~. $$ Therefore, either $\psi(Z)x=0$ or it's a new weight vector with weight $\l+r$. In the particular case of the adjoint representation we get: if $Z,X$ are root vectors with roots $r$ and $s$, respectively, then: $$ \ad(H_j)[Z,X]=(s(H_j)+r(H_j))[Z,X], $$ i.e. either $[Z,X]=0$ or $s+r$ is another root with root vector $[Z,X]$.
$\proof$ Since $\psi$ is an algebra homomorphism and $[H_j,Z]=r(H_j)Z$, it follows that \begin{eqnarray*} \psi(H_j)\psi(Z)x &=&[\psi(H_j),\psi(Z)]x+\psi(Z)\psi(H_j)x\\ &=&r(H_j)\psi(Z)x+\l(H_j)\psi(Z)x =(\l(H_j)+r(H_j))\psi(Z)x \end{eqnarray*} $\eofproof$
Just like eigen-vectors for pairwise distinct eigen-values, weight vectors for pairwise distinct weights are linearly independent:
Suppose $x_1,\ldots,x_n$ are weight vectors for mutually distinct weights $\l_1,\ldots,\l_n$. If $x\colon=x_1+\cdots+x_n$ is a weight vector for the weight $\l$, then for some $l$: $\l=\l_l$, $x=x_l$ and for all $k\neq l$: $x_k=0$.
$\proof$ 1. Suppose $x_1+\cdots+x_n=0$. We proceed by induction on $n$ to prove that this implies that all vectors $x_1,\ldots,x_n$ are $0$. For $n=1$ the assertion is obvious. So assume $n\geq2$; since $\l_1\neq\l_2$, there is some $j$ such that $\l_1(H_j)\neq\l_2(H_j)$ and $$ 0=(\psi(H_j)-\l_1(H_j))\sum_k x_k =\sum_{k\geq2}(\l_k(H_j)-\l_1(H_j))x_k~. $$ By induction hypothesis all terms of the last sum vanish, in particular $x_2=0$ and thus: $x_1+x_3+\cdots+x_n=0$ which again by induction hypothesis implies that all vectors $x_k$ vanish.
2. If $x\colon=x_1+\cdots+x_n$ is a non zero weight vector with weight $\l$, then for all $j$: $$ 0=(\psi(H_j)-\l(H_j))x=\sum_k(\l_k(H_j)-\l(H_j))x_k $$ and by 1. it follows that for all $k$: $(\l_k(H_j)-\l(H_j))x_k=0$. Since $x\neq0$, there is some $l$ such that $x_l\neq0$ and thus for all $j$: $\l_l(H_j)=\l(H_j)$, i.e. $\l=\l_l$. If $k\neq l$ then there is some $j$ such that $\l_k(H_j)\neq\l_l(H_j)$ and therefore $x_k=0$. $\eofproof$
Suppose $\psi:\sla(3,\C)\rar\Hom(E)$ is a finite dimensional irreducible complex representation. Then $E$ is the direct sum of its weight spaces.
$\proof$ This is obvious if $\psi(H_j)$ are diagonalizable, but we actually don't need this assumption. We already observed that $\psi(H_1)$ and $\psi(H_2)$ have at least one common complex eigen-vector. Now let $W\sbe E$ be the sum of all weight spaces, then $W\neq\{0\}$ and since $X_j$ and $Y_j$, $j=1,2,3$, are root vectors, the operators $\psi(X_j)$ and $\psi(Y_j)$ map $W$ into itself by lemma. Obviously, $W$ is invariant under $\psi(H_j)$ and thus it's invariant under all operators $\psi(X)$, $X\in\sla(3,\C)$. By irreducibility $W=E$, i.e. $E$ is the sum of its weight spaces, and by lemma the sum is direct. $\eofproof$

Highest Weight Theorem

Similar to the $\sla(2,\C)$ case we may start with some weight $\l$ and some weight vector $x$ and apply the operators $\psi(X_j)$ and $\psi(Y_j)$ in any order to get some new weight vectors according to lemma. However, since $E$ is finite dimensional we must eventually get the null vector. Is there a way to start with some "highest" weight and work down to get all the others? Yes, this can be done, but it's not as easy as in the $\sla(2,\C)$ case: First let us single out the roots $r_1=(2,-1)$ and $r_2=(-1,2)$ (with root vectors $X_1$ and $X_2$); it's easily checked that all the other roots are linear combinations of these two: e.g. the root corresponding to the root vector $X_3$ is just the sum of $r_1$ and $r_2$.
Let $\psi:\sla(3,\C)\rar\Hom(E)$ be a representation. On the set of all weights we define an order by putting: $\l_1\preceq\l_2$ if the difference $\l_2-\l_1$ is a non negative (i.e. $a_1,a_2\geq0$) real linear combination $a_1r_1+a_2r_2$ of $r_1$ and $r_2$ - we say $\l_2$ is higher than $\l_1$. If $\psi:\sla(3,\C)\rar\Hom(E)$ is a representation and $\l$ a weight of $\psi$, then $\l$ is said to be a highest weight if for all weights $\mu$ of $\psi$ we have: $\mu\preceq\l$.
In the picture below the weight (0,0) is higher than any weight in the blue sector!
[Figure: highest weight]
The weight $(1,0)$ for the standard representation of $\sla(3,\C)$ is higher than the weight $(0,0)$, because $(1,0)-(0,0)=\tfrac23r_1+\tfrac13r_2$. Also, since $(1,0)-(-1,1)=(2,-1)=r_1$ and $(1,0)-(0,-1)=(1,1)=r_1+r_2$, $(1,0)$ is a highest weight for the standard representation. Similarly $(0,1)$ is a highest weight for the dual representation.
A representation can only have one highest weight.
A highest weight is of course a maximal element (with respect to $\preceq$) but in general it's not the other way round; even if the set of weights is finite - which is the case if the representation is finite dimensional - a highest weight may not exist! So let us start with a maximal weight $\l$ of a representation $\psi:\sla(3,\C)\rar\Hom(E)$ with corresponding weight vector $x$, then $\psi(X_1)x=\psi(X_2)x=0$, for otherwise $\l$ wouldn't be maximal. Moreover, since the root $(1,1)$ for the root vector $X_3$ can be written as $(1,1)=r_1+r_2$, we must also have $\psi(X_3)x=0$. We say $\psi:\sla(3,\C)\rar\Hom(E)$ is a highest weight cyclic representation if the following holds:
  1. There exists a weight vector $x\in E\sm\{0\}$ with weight $\l$.
  2. For all $j=1,2,3$: $\psi(X_j)x=0$.
  3. $x$ is a cyclic vector, i.e. the space generated by $\psi(X)x$, $X\in\sla(3,\C)$ is all of $E$.
$\psi:\sla(2,\C)\rar\Hom(E)$ is a highest weight cyclic representation if there exists a weight vector $x\in E\sm\{0\}$ with weight $\l$, $\psi(X)x=0$ and the space generated by $\psi(Z)x$, $Z\in\sla(2,\C)$ is all of $E$. This holds if and only if $\psi$ is irreducible and $\l$ is the maximal eigen-value of $\psi(H)$.
Restricting a highest weight cyclic representation to the sub-algebras generated by $H_1,X_1,Y_1$ and $H_2,X_2,Y_2$ respectively, we get representations of $\sla(2,\C)$ such that $\psi(X_1)x=\psi(X_2)x=0$ and thus by proposition we infer that both $\l(H_1)$ and $\l(H_2)$ must be non negative integers. Also, if a weight vector for a maximal weight is cyclic, then $\psi$ is a highest weight cyclic representation and both components of the weight are non negative integers. But more is true and this also explains why a highest weight cyclic representation is called highest weight representation:
Let $\psi:\sla(3,\C)\rar\Hom(E)$ be a highest weight cyclic representation with weight $\l$ and weight vector $x$. Then $\l$ is the highest weight of $\psi$ and the corresponding weight space is one dimensional.
$\proof$ First we prove that the space $W$ generated by $\psi(Y_{j_1})\cdots\psi(Y_{j_m})x$ is invariant: By the reordering lemma it suffices to prove that all elements of the form $$ \psi(Y_{j_1})\cdots\psi(Y_{j_m}) \psi(H_1)^{k_1}\psi(H_2)^{k_2} \psi(X_1)^{l_1}\psi(X_2)^{l_2}\psi(X_3)^{l_3}x $$ live in $W$, but $\psi(X_j)x=0$ and $\psi(H_j)x=\l(H_j)x$ and thus $W$ is $\psi$-invariant by its definition. Since $x$ is cyclic, we conclude: $W=E$. Now suppose $y$ is any vector in $E=W$ of the form $y=\psi(Y_{j_1})\cdots\psi(Y_{j_m})x$; since $Y_1,Y_2,Y_3$ are root vectors with roots $-r_1,-r_2,-r_1-r_2$, $y$ is a weight vector and by lemma its weight is strictly lower than $\l$ unless $y=x$. Therefore $W$ has a basis $x,y_1,\ldots,y_m$ of weight vectors and each weight of $y_j$ is strictly lower than $\l$. By lemma we are done. $\eofproof$
Now assume moreover that $\psi$ is irreducible, then any vector in $E\sm\{0\}$ is cyclic and thus any non zero weight vector for a maximal weight is cyclic and satisfies $\psi(X_j)x=0$, i.e. every irreducible representation is a highest weight cyclic representation. Also the converse holds:
Let $\psi:\sla(3,\C)\rar\Hom(E)$ be a finite dimensional representation. Then $\psi$ is irreducible if and only if $\psi$ is a highest weight cyclic representation.
$\proof$ As $E$ is finite dimensional it decomposes into sub-spaces $E=\bigoplus E_j$, such that $\psi:\sla(3,\C)\rar\Hom(E_j)$ is irreducible. Each of these spaces $E_j$ in turn decomposes by proposition into its weight spaces and by lemma the weight vector $x$ must lie in one of these spaces and consequently in one of the spaces $E_j$. Since $x$ is cyclic and $E_j$ is invariant, we must have $E=E_j$. $\eofproof$
So far we have established that an irreducible representation is the same as a highest weight cyclic representation, moreover the weight space of the highest weight of this representation is one dimensional and the components of the highest weight are non negative integers. Our next goal is to verify the following:
Two irreducible representations with the same highest weight $\l$ are equivalent.
$\proof$ Let $\psi:\sla(3,\C)\rar\Hom(E)$ and $\vp:\sla(3,\C)\rar\Hom(F)$ be irreducible representations with the highest weight $\l$ and let $u\in E$, $v\in F$ be weight vectors with this weight. Then $(u,v)$ is a weight vector of the representation $\pi:\sla(3,\C)\rar\Hom(E\times F)$, $$ \pi(X)(x,y)=(\psi(X)x,\vp(X)y). $$ Indeed, for $j=1,2$ we have $\pi(H_j)(u,v)=\l(H_j)(u,v)$ and $\pi(X_j)(u,v)=(0,0)$. Thus if $W$ denotes the subspace generated by $\{\pi(X)(u,v):\,X\in\sla(3,\C)\}$, then $\pi:\sla(3,\C)\rar\Hom(W)$ is a highest weight cyclic representation; by proposition it's irreducible. Further, the projections $P:W\rar E$, $(x,y)\mapsto x$ and $Q:W\rar F$, $(x,y)\mapsto y$ are both intertwining operators, i.e. $P\pi(X)(x,y)=\psi(X)x=\psi(X)P(x,y)$ and $Q\pi(X)(x,y)=\vp(X)y=\vp(X)Q(x,y)$, and since $P(u,v)=u\neq0$ and $Q(u,v)=v\neq0$, both $P:W\rar E$ and $Q:W\rar F$ must be isomorphisms by Schur's lemma for Lie algebras, proving that both irreducible representations $\psi:\sla(3,\C)\rar\Hom(E)$ and $\vp:\sla(3,\C)\rar\Hom(F)$ are equivalent to the irreducible representation $\pi:\sla(3,\C)\rar\Hom(W)$. $\eofproof$
Our last step is the construction of irreducible representations
For every pair of non negative integers $(m_1,m_2)$ there is an irreducible representation of $\sla(3,\C)$, whose highest weight $\l$ satisfies $\l(H_j)=m_j$.
$\proof$ The trivial representation $X\mapsto0$ has highest weight $(0,0)$, the standard representation $X\mapsto X$ has highest weight $(1,0)$ and its dual $X\mapsto\bar X$ has highest weight $(0,1)$. Put $E=F=\C^3$ and let $u\in E$, $v\in F$ be weight vectors for these two representations with the highest weights $(1,0)$ and $(0,1)$, respectively. Let $\pi$ be the following representation $$ \pi:\sla(3,\C)\rar\Hom(E\otimes\cdots\otimes E\otimes F\otimes\cdots\otimes F) $$ where we take $m_1$-fold tensor products of $E$ and $m_2$-fold tensor products of $F$. $\pi$ is defined by the sum of $m_1+m_2$ terms: \begin{eqnarray*} \pi(X)\colon &=&X\otimes1\otimes\cdots\otimes1 +\cdots +1\otimes\cdots\otimes1\otimes X\otimes1\otimes\cdots\otimes1\\ &&+1\otimes\cdots\otimes1\otimes\bar X\otimes1\otimes\cdots\otimes1 +\cdots +1\otimes\cdots\otimes1\otimes\bar X \end{eqnarray*} Finally put $w\colon=u\otimes\cdots\otimes u\otimes v\otimes\cdots\otimes v$. Then it follows that $$ \pi(H_j)w=m_jw \quad\mbox{and}\quad \pi(X_j)w=0~. $$ Thus $\pi$ restricted to the smallest subspace $W$ containing $w$ and invariant under $\pi(Y_j)$, $j=1,2,3$, is an irreducible representation with highest weight $(m_1,m_2)$. In order to get $W$ we observe that $Y_3=[Y_2,Y_1]$ and thus we only need to apply the operators $\pi(Y_j)$, $j=1,2$, in every order to $w$. $\eofproof$
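The construction in the proof is easy to experiment with numerically. The following sketch (numpy) builds $\pi(X)$ as a sum of Kronecker products and checks that $w$ is a highest weight vector of weight $(m_1,m_2)$; the helper names `E` and `pi` and the concrete pair $(m_1,m_2)=(2,1)$ are just our choices for illustration.

```python
import numpy as np
from functools import reduce

def E(j, k):
    M = np.zeros((3, 3))
    M[j - 1, k - 1] = 1
    return M

H1, H2 = np.diag([1., -1., 0.]), np.diag([0., 1., -1.])
X1, X2, X3 = E(1, 2), E(2, 3), E(1, 3)
I = np.eye(3)

def pi(X, m1, m2):
    """pi(X): X acts on the m1 E-factors, -X^t on the m2 F-factors."""
    terms = []
    for slot in range(m1 + m2):
        factors = [I] * (m1 + m2)
        factors[slot] = X if slot < m1 else -X.T
        terms.append(reduce(np.kron, factors))
    return sum(terms)

m1, m2 = 2, 1                        # an arbitrary example
u, v = np.eye(3)[0], np.eye(3)[2]    # highest weight vectors of E and of its dual F
w = reduce(np.kron, [u] * m1 + [v] * m2)

assert np.allclose(pi(H1, m1, m2) @ w, m1 * w)   # pi(H_1)w = m_1 w
assert np.allclose(pi(H2, m1, m2) @ w, m2 * w)   # pi(H_2)w = m_2 w
for X in (X1, X2, X3):
    assert np.allclose(pi(X, m1, m2) @ w, 0)     # pi(X_j)w = 0
print("w is a highest weight vector of weight", (m1, m2))
```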
Suppose $u$ and $v$ are weight vectors for $\psi$ and $\vp$ with weights $\l$ and $\mu$. Then $u\otimes v$ is a weight vector for $\pi\colon=\psi\otimes 1+1\otimes\vp$ with weight $\l+\mu$.
$$ \pi(H_j)(u\otimes v) =\psi(H_j)u\otimes v+u\otimes\vp(H_j)v =(\l(H_j)+\mu(H_j))u\otimes v $$
The adjoint representation of $\su(3)$ has highest weight $(1,1)$ with cyclic highest weight vector $X_3$. Thus the adjoint representation is irreducible.
We have $[H_1,X_3]=[H_2,X_3]=X_3$ and $[X_1,X_3]=[X_2,X_3]=[X_3,X_3]=0$. Finally, from the table of the adjoint representation we infer that $X_3$ is a cyclic vector. Hence $(1,1)$ is the highest weight with weight vector $X_3$. Alternatively: from the root table (or the table of the adjoint representation) we see that all the weights are given by $r_1,r_2,r_1+r_2,-r_1,-r_2,-r_1-r_2$ and $0$, which has multiplicity $2$; obviously $(1,1)=r_1+r_2$ is the highest weight.

Some Examples

$1$-, $3$- and $\bar 3$- representation, quark notation

These just denote the trivial, the standard and its dual representation. In particle physics the weight vectors $e_1,e_2,e_3$ of the standard representation, i.e. the common eigen-vectors of $H_1$ and $H_2$ are denoted by $u,d,s$ - up, down and strange quark. Hence $u,d,s$ form an orthonormal basis of the $\mathbf{3}$-representation; accordingly, the antiparticles $\bar u,\bar d,\bar s$ form an orthonormal basis of the $\mathbf{\bar3}$-representation. Mathematically they are simply the weight vectors of the standard representation and its dual. In physics particles made of any number of quarks are called hadrons.
Also the matrices $$ I_3\colon=\tfrac12H_1,\quad Y\colon=\tfrac13(H_1+2H_2),\quad Q\colon=\tfrac12Y+I_3=\tfrac23H_1+\tfrac13H_2 $$ are used. Their eigen-values corresponding to the eigen-vectors $u,d,s$ are called the isospin, the hypercharge and the (electric) charge, respectively, cf. wikipedia. We will see below (cf. section) that these are in a way dual operators to $H_1$ and $H_2$.
Verify that $I_3$, $Y$ and $Q$ are given by: $$ \left(\begin{array}{ccc} 1/2&0&0\\ 0&-1/2&0\\ 0&0&0 \end{array}\right),\quad \left(\begin{array}{ccc} 1/3&0&0\\ 0&1/3&0\\ 0&0&-2/3 \end{array}\right),\quad \left(\begin{array}{ccc} 2/3&0&0\\ 0&-1/3&0\\ 0&0&-1/3 \end{array}\right) $$ Hence the isospins, the hypercharges and the charges of $u,d,s$ are $(1/2,-1/2,0)$, $(1/3,1/3,-2/3)$ and $(2/3,-1/3,-1/3)$. The diagonal matrices $B\colon=diag\{1/3,1/3,1/3\}$, $\s\colon=diag\{1/2,1/2,1/2\}$ and $S\colon=diag\{0,0,1\}$ are called baryon number, spin and strangeness and none of them belongs to $\sla(3,\C)$!
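A quick machine check of these matrices and of the quantum numbers of $u,d,s$ (a minimal sketch with numpy):

```python
import numpy as np

H1, H2 = np.diag([1., -1., 0.]), np.diag([0., 1., -1.])
I3 = H1 / 2              # isospin
Y  = (H1 + 2 * H2) / 3   # hypercharge
Q  = Y / 2 + I3          # electric charge

print("I3 of u,d,s:", np.diag(I3))   # 1/2, -1/2, 0
print("Y  of u,d,s:", np.diag(Y))    # 1/3, 1/3, -2/3
print("Q  of u,d,s:", np.diag(Q))    # 2/3, -1/3, -1/3
```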

Highest Weight $(2,0)$ - the $\mathbf{6}$-representation

If $m_1=2$ and $m_2=0$, then $\pi(X)=X\otimes1+1\otimes X$ and the highest weight vector is $e_1\otimes e_1$. We have to find the smallest space $W$ containing $w\colon=e_1\otimes e_1$ and invariant under the action of the operators $$ \pi(Y_1)=Y_1\otimes1+1\otimes Y_1 \quad\mbox{and}\quad \pi(Y_2)=Y_2\otimes1+1\otimes Y_2 $$ applied in any order. \begin{eqnarray*} \pi(Y_1)(e_1\otimes e_1)&=&e_2\otimes e_1+e_1\otimes e_2\\ \pi(Y_2)(e_1\otimes e_1)&=&0\\ \pi(Y_1)(e_2\otimes e_1+e_1\otimes e_2)&=&2e_2\otimes e_2\\ \pi(Y_2)(e_2\otimes e_1+e_1\otimes e_2)&=&e_3\otimes e_1+e_1\otimes e_3\\ \pi(Y_1)(e_2\otimes e_2)&=&0\\ \pi(Y_2)(e_2\otimes e_2)&=&e_3\otimes e_2+e_2\otimes e_3\\ \pi(Y_1)(e_3\otimes e_2+e_2\otimes e_3)&=&0\\ \pi(Y_2)(e_3\otimes e_2+e_2\otimes e_3)&=&2e_3\otimes e_3\\ \pi(Y_1)(e_3\otimes e_3)&=&0\\ \pi(Y_2)(e_3\otimes e_3)&=&0 \end{eqnarray*} Thus the irreducible representation with highest weight $(2,0)$ is of dimension $6$ - hence it`s called the $\mathbf{6}$-representation - with basis: $$ e_1\otimes e_1, e_2\otimes e_2, e_3\otimes e_3, e_1\otimes e_2+e_2\otimes e_1, e_2\otimes e_3+e_3\otimes e_2, e_3\otimes e_1+e_1\otimes e_3. $$ In particle physics the vectors $e_1\otimes e_1,\ldots,e_3\otimes e_3$ are designated as $uu,\ldots,ss$, which flag particles composed of two quarks. Hence we have the following orthonormal basis in "quark"-notation: $$ uu,dd,ss, \frac{ud+du}{\sqrt2}, \frac{ds+sd}{\sqrt2}, \frac{us+su}{\sqrt2}~. $$ Since $u,d,s$ have weights $(1,0),(-1,1),(0,-1)$, these vectors have the weights: $$ (1,0)+(1,0)=(2,0),(-2,2),(0,-2),(0,1),(-1,0),(1,0)+(0,-1)=(1,-1)~. $$ The following diagram depicts the $\mathbf{6}$-representation $\pi$: the mappings $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$ map each basis vector, represented by a circle enclosing its weight, to a multiple of another basis vector. This multiple is attached to the correspondingly colored arrow, i.e. the blue, cyan, red and green arrow indicates the action of $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$. If there is e.g. no outgoing blue arrow from one of the circles, then the corresponding basis vector is mapped by $\pi(Y_1)$ to the null vector!
[Figure: the 6-representation]
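The computation above can also be reproduced mechanically: the following sketch (numpy) generates the smallest $\pi(Y_1),\pi(Y_2)$-invariant subspace containing $e_1\otimes e_1$ and confirms that its dimension is $6$; the helper `E` and the rank-based independence test are our own choices.

```python
import numpy as np

def E(j, k):
    M = np.zeros((3, 3))
    M[j - 1, k - 1] = 1
    return M

I3x3 = np.eye(3)
piY = [np.kron(Y, I3x3) + np.kron(I3x3, Y) for Y in (E(2, 1), E(3, 2))]  # pi(Y_1), pi(Y_2)

basis = [np.kron(I3x3[0], I3x3[0])]            # start with e_1 ⊗ e_1
changed = True
while changed:
    changed = False
    for A in piY:
        for b in list(basis):
            v = A @ b
            M = np.column_stack(basis)
            # keep v only if it enlarges the span of the vectors found so far
            if np.linalg.matrix_rank(np.column_stack([M, v])) > np.linalg.matrix_rank(M):
                basis.append(v)
                changed = True
print("dimension of the generated subspace:", len(basis))   # 6
```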

Highest Weight $(1,1)$ - the $\mathbf{8}$- or adjoint representation

This time we have $\pi(X)=X\otimes1+1\otimes\bar X$, where $\bar X\colon=-X^t$. Hence the matrices $\bar H_j$, $\bar X_j$, $\bar Y_j$ are given by: $$ \bar H_1\colon=\left(\begin{array}{ccc} -1&0&0\\ 0&1&0\\ 0&0&0 \end{array}\right), \bar X_1\colon=\left(\begin{array}{ccc} 0&0&0\\ -1&0&0\\ 0&0&0 \end{array}\right), \bar Y_1\colon=\left(\begin{array}{ccc} 0&-1&0\\ 0&0&0\\ 0&0&0 \end{array}\right) $$ $$ \bar H_2\colon=\left(\begin{array}{ccc} 0&0&0\\ 0&-1&0\\ 0&0&1 \end{array}\right), \bar X_2\colon=\left(\begin{array}{ccc} 0&0&0\\ 0&0&0\\ 0&-1&0 \end{array}\right), \bar Y_2\colon=\left(\begin{array}{ccc} 0&0&0\\ 0&0&-1\\ 0&0&0 \end{array}\right) $$ $$ \bar X_3\colon=\left(\begin{array}{ccc} 0&0&0\\ 0&0&0\\ -1&0&0 \end{array}\right), \bar Y_3\colon=\left(\begin{array}{ccc} 0&0&-1\\ 0&0&0\\ 0&0&0 \end{array}\right) $$ Again, let $e_1,e_2,e_3$ be the standard basis of $\C^3$, then $e_1$ is the weight vector for the standard representation with the highest weight $(1,0)$ and $\bar e_3$ is the weight vector for the dual representation with the highest weight $(0,1)$. We are left to identify the smallest space $W$ containing $w\colon=e_1\otimes \bar e_3$ and invariant under $$ \pi(Y_1)=Y_1\otimes1+1\otimes\bar Y_1 \quad\mbox{and}\quad \pi(Y_2)=Y_2\otimes1+1\otimes\bar Y_2~. $$ We start out with the vector $e_1\otimes \bar e_3$: \begin{eqnarray*} Y_1e_1\otimes \bar e_3+e_1\otimes\bar Y_1\bar e_3&=&e_2\otimes \bar e_3\\ Y_2e_1\otimes \bar e_3+e_1\otimes\bar Y_2\bar e_3&=&-e_1\otimes \bar e_2 \end{eqnarray*} with weights $(-1,2)$ and $(2,-1)$, respectively. Thus we have to apply the operators to $e_2\otimes \bar e_3$ and $e_1\otimes \bar e_2$: \begin{eqnarray*} Y_1e_2\otimes \bar e_3+e_2\otimes\bar Y_1\bar e_3&=&0\\ Y_2e_2\otimes \bar e_3+e_2\otimes\bar Y_2\bar e_3&=&e_3\otimes \bar e_3-e_2\otimes \bar e_2\\ Y_1e_1\otimes \bar e_2+e_1\otimes\bar Y_1\bar e_2&=&e_2\otimes \bar e_2-e_1\otimes \bar e_1\\ Y_2e_1\otimes \bar e_2+e_1\otimes\bar Y_2\bar e_2&=&0 \end{eqnarray*} This gives another two vectors: $e_3\otimes \bar e_3-e_2\otimes \bar e_2$ and $e_2\otimes \bar e_2-e_1\otimes \bar e_1$: \begin{eqnarray*} \pi(Y_1)(e_3\otimes \bar e_3-e_2\otimes \bar e_2)&=&e_2\otimes \bar e_1\\ \pi(Y_2)(e_3\otimes \bar e_3-e_2\otimes \bar e_2)&=&-2e_3\otimes \bar e_2\\ \pi(Y_1)(e_2\otimes \bar e_2-e_1\otimes \bar e_1)&=&-2e_2\otimes \bar e_1\\ \pi(Y_2)(e_2\otimes \bar e_2-e_1\otimes \bar e_1)&=&e_3\otimes \bar e_2 \end{eqnarray*} which gives the two vectors: $e_2\otimes \bar e_1$ and $e_3\otimes \bar e_2$: \begin{eqnarray*} \pi(Y_1)(e_2\otimes \bar e_1)&=&0\\ \pi(Y_2)(e_2\otimes \bar e_1)&=&e_3\otimes \bar e_1\\ \pi(Y_1)(e_3\otimes \bar e_2)&=&-e_3\otimes \bar e_1\\ \pi(Y_2)(e_3\otimes \bar e_2)&=&0 \end{eqnarray*} Finally: \begin{eqnarray*} \pi(Y_1)(e_3\otimes \bar e_1)&=&0\\ \pi(Y_2)(e_3\otimes \bar e_1)&=&0 \end{eqnarray*} The space $W$ is generated by the eight vectors and therefore it's called the $\mathbf{8}$-representation: $$ e_1\otimes \bar e_2, e_1\otimes \bar e_3, e_2\otimes \bar e_1, e_2\otimes \bar e_3, e_3\otimes \bar e_1, e_3\otimes \bar e_2, e_3\otimes \bar e_3-e_2\otimes \bar e_2, e_2\otimes \bar e_2-e_1\otimes \bar e_1~. $$ Finally we compute the weights of these vectors: the weights of $e_1,e_2,e_3$ are $(1,0),(-1,1),(0,-1)$ and the weights of $\bar e_1,\bar e_2,\bar e_3$ are $(-1,0),(1,-1),(0,1)$.
Since the weight of $e_j\otimes\bar e_k$ is the sum of the weights of $e_j$ and $\bar e_k$, we get the weights: \begin{eqnarray*} &&(1,0)+(1,-1)=(2,-1),\quad (1,0)+(0,1)=(1,1),\quad (-1,1)+(-1,0)=(-2,1),\\ &&(-1,1)+(0,1)=(-1,2),\quad (0,-1)+(-1,0)=(-1,-1),\quad (0,-1)+(1,-1)=(1,-2), \end{eqnarray*} and the last two vectors have the weight $(0,0)$. Here comes a diagrammatic visualization: $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$ are again depicted as blue, cyan, red and green arrows. But this time the weight space with weight $(0,0)$ is two dimensional, thus it has two basis vectors and the numbers attached to an outgoing arrow from this space denote the multiples of the weight vectors onto which these basis vectors are mapped. Also an arrow pointing to the weight space with weight $(0,0)$ has an additional number put in parentheses, indicating the first or the second basis vector of the weight space.
[Figure: the 8-representation]
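The same sketch works for the $\mathbf{8}$-representation; the only changes are the operators $\pi(Y_j)=Y_j\otimes1+1\otimes\bar Y_j$ with $\bar Y_j=-Y_j^t$ and the starting vector $e_1\otimes\bar e_3$.

```python
import numpy as np

def E(j, k):
    M = np.zeros((3, 3))
    M[j - 1, k - 1] = 1
    return M

I3x3 = np.eye(3)
piY = [np.kron(Y, I3x3) + np.kron(I3x3, -Y.T) for Y in (E(2, 1), E(3, 2))]

basis = [np.kron(I3x3[0], I3x3[2])]            # e_1 ⊗ \bar e_3
changed = True
while changed:
    changed = False
    for A in piY:
        for b in list(basis):
            v = A @ b
            M = np.column_stack(basis)
            if np.linalg.matrix_rank(np.column_stack([M, v])) > np.linalg.matrix_rank(M):
                basis.append(v)
                changed = True
print("dimension of the generated subspace:", len(basis))   # 8
```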
In particle physics the vectors $e_1\otimes\bar e_1,\ldots,e_3\otimes\bar e_3$ are designated as $u\bar u,\ldots,s\bar s$, which again are particles composed of a quark and an anti-quark and are called mesons. This notation saves you a lot of writing!
Show that $(s\bar s-d\bar d)/\sqrt2$ and $(d\bar d-2u\bar u+s\bar s)/\sqrt6$ is an orthonormal basis of the weight space with weight $(0,0)$. Compute $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$ with respect to this basis.
Compute the charge of the states $u\bar d,u\bar s,d\bar u,d\bar s,s\bar u,s\bar d$, $(u\bar u-d\bar d)/\sqrt2$ and $(d\bar d-u\bar u)/\sqrt2$.
We have to compute $\pi(Q)u\bar d=(Qu)\bar d+u(\bar Q\bar d)$, which comes down to $(2/3+1/3)u\bar d$, so the charge of $u\bar d$ is $1$.
For this orthogonal basis we get the following table of mesons: $$ \begin{array}{r|ccc|r} \mbox{quark content}&\mbox{isospin}&\mbox{hypercharge}&\mbox{charge}&\mbox{particle}\\ \hline u\bar d&1&0&1&\pi^+\mbox{-meson}\\ u\bar s&1/2&1&1&K^+\mbox{-meson}\\ d\bar u&-1&0&-1&\pi^-\mbox{-meson}\\ d\bar s&-1/2&1&0&K^0\mbox{-meson}\\ s\bar u&-1/2&-1&-1&K^-\mbox{-meson}\\ s\bar d&1/2&-1&0&\bar K^0\mbox{-meson}\\ s\bar s-d\bar d&0&0&0&\eta\mbox{-meson}\\ d\bar d-2u\bar u+s\bar s&0&0&0&\pi^0\mbox{-meson}\\ \end{array} $$

Highest Weight $(3,0)$ - the $\mathbf{10}$-representation

This time we use the "quark" notation; it will really make the calculations more clearly laid out: The highest weight vector is $uuu$, $Y_1$ sends $u$ to $d$ and $d,s$ to $0$, $Y_2$ sends $d$ to $s$ and $u,s$ to $0$. Applying $$ \pi(Y_j)=Y_j\otimes1\otimes1+1\otimes Y_j\otimes1+1\otimes1\otimes Y_j $$ for $j=1$ to $uuu$ repeatedly gives: \begin{eqnarray*} uuu&\to&duu+udu+uud,\\ duu+udu+uud&\to&2(ddu+dud+udd),\\ ddu+dud+udd&\to&3ddd,\\ ddd&\to&0 \end{eqnarray*} with weights $(3,0),(1,1),(-1,2),(-3,3)$. Applying $\pi(Y_2)$ recursively gives: \begin{eqnarray*} uuu&\to&0\\ duu+udu+uud&\to&suu+usu+uus,\\ ddu+dud+udd&\to&sdu+dsu+sud+dus+usd+uds,\\ ddd&\to&sdd+dsd+dds,\\ suu+usu+uus&\to&0,\\ sdu+dsu+sud+dus+usd+uds&\to&2(ssu+sus+uss),\\ sdd+dsd+dds&\to&2(ssd+sds+dss),\\ ssu+sus+uss&\to&0,\\ ssd+sds+dss&\to&3sss,\\ sss&\to&0. \end{eqnarray*} Finally apply $\pi(Y_1)$ to the newly obtained vectors: \begin{eqnarray*} suu+usu+uus&\to&sdu+sud+dsu+usd+dus+uds,\\ sdu+dsu+sud+dus+usd+uds&\to&2(sdd+dsd+dds),\\ sdd+dsd+dds&\to&0,\\ ssu+sus+uss&\to&ssd+sds+dss,\\ ssd+sds+dss&\to&0,\\ sss&\to&0. \end{eqnarray*} Since we have not produced any new vector, the irreducible representation lives in the space generated by $10$ weight vectors.
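Again the whole calculation can be cross-checked by machine; this sketch (numpy) generates the invariant subspace from $uuu$, confirms that its dimension is $10$ and prints the weights of the generated weight vectors (the helpers `E` and `pi3` are our own names).

```python
import numpy as np

def E(j, k):
    M = np.zeros((3, 3))
    M[j - 1, k - 1] = 1
    return M

I = np.eye(3)
def pi3(A):
    """A⊗1⊗1 + 1⊗A⊗1 + 1⊗1⊗A acting on C^27."""
    return (np.kron(np.kron(A, I), I) + np.kron(np.kron(I, A), I)
            + np.kron(np.kron(I, I), A))

piY = [pi3(E(2, 1)), pi3(E(3, 2))]                       # pi(Y_1), pi(Y_2)
piH = [pi3(np.diag([1., -1., 0.])), pi3(np.diag([0., 1., -1.]))]

u = I[0]
basis = [np.kron(np.kron(u, u), u)]                      # uuu
changed = True
while changed:
    changed = False
    for A in piY:
        for b in list(basis):
            v = A @ b
            M = np.column_stack(basis)
            if np.linalg.matrix_rank(np.column_stack([M, v])) > np.linalg.matrix_rank(M):
                basis.append(v)
                changed = True

print("dimension of the generated subspace:", len(basis))   # 10
for b in basis:                 # each generated vector is a weight vector
    i = np.flatnonzero(b)[0]
    print(tuple(int(round((A @ b)[i] / b[i])) for A in piH))
```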
We consider particles $q_1\cdots q_n$, $q_j\in\{u,d,s\}$, made of $n$ quarks. Two of these particles are said to have the same quark content if both have the same number of $u$, $d$ and $s$ quarks. Show that there are $(n+2)(n+1)/2$ classes of different quark content. By the way, this is the number of ways to distribute $n$ indistinguishable presents among $3$ kids!
$$ \begin{array}{ccccccc} \mbox{weight}&\mbox{vector}&\mbox{quark content}&\mbox{isospin}&\mbox{hypercharge}&\mbox{charge}&\mbox{particle}\\ \hline (3,0)&uuu&uuu&3/2&1&2&\D^{++}\\ (1,1)&duu+udu+uud&uud&1/2&1&1&\D^+\\ (-1,2)&ddu+dud+udd&udd&-1/2&1&0&\D^0\\ (-3,3)&ddd&ddd&-3/2&1&-1&\D^-\\ (2,-1)&suu+usu+uus&uus&1&0&1&\Sigma^{*+}\\ (0,0)&sdu+dsu+sud+dus+usd+uds&uds&0&0&0&\Sigma^{*0}\\ (1,-2)&ssu+sus+uss&uss&1/2&-1&0&\Xi^{*0}\\ (-2,1)&sdd+dsd+dds&dds&-1&0&-1&\Sigma^{*-}\\ (-1,-1)&ssd+sds+dss&dss&-1/2&-1&-1&\Xi^{*-}\\ (0,-3)&sss&sss&0&-2&-1&\O^- \end{array} $$ Particles made of three quarks are called baryons.

Decomposing $\pi:\su(3)\rar\Hom(\C^{27})$ into irreducibles

Let us find all irreducible sub-representations of $\pi$: 1. We already got the $\mathbf{10}$-representation.
2. Subtracting the orthogonal projection of $uud$ on the space $E$ carrying the $\mathbf{10}$-representation we get: $$ uud-(1/3)(duu+udu+uud)=(2uud-udu-duu)/3. $$ Hence the vector $2uud-udu-duu$ is orthogonal to $E$ and since all the maps $\pi(X_j)$ send this vector to $0$, we get another highest weight cyclic representation. Now $$ \pi(H_1)(2uud-udu-duu)=2uud-udu-duu=\pi(H_2)(2uud-udu-duu) $$ and therefore its highest weight is $(1,1)$, i.e. it's equivalent to the adjoint or $\mathbf{8}$-representation. To get the space generated by $2uud-udu-duu$ we need to apply $\pi(Y_j)$, $j=1,2$, repeatedly to $2uud-udu-duu$. Applying $\pi(Y_1)$ repeatedly gives: \begin{eqnarray*} 2uud-udu-duu&\to&2dud+2udd-ddu-udd-ddu-dud=dud+udd-2ddu,\\ dud+udd-2ddu&\to&0, \end{eqnarray*} generating one new vector with weight $(-1,2)$. Applying $\pi(Y_2)$ repeatedly to the two vectors $2uud-udu-duu$ and $dud+udd-2ddu$: \begin{eqnarray*} 2uud-udu-duu&\to&2uus-usu-suu\\ 2uus-usu-suu&\to&0,\\ dud+udd-2ddu&\to&sud+dus+usd+uds-2sdu-2dsu,\\ sud+dus+usd+uds-2sdu-2dsu&\to&sus+sus+uss+uss-2ssu-2ssu=2(sus+uss-2ssu),\\ sus+uss-2ssu&\to&0, \end{eqnarray*} generating three new vectors with weights $(2,-1)$, $(0,0)$, $(1,-2)$. Now again apply $\pi(Y_1)$ to the newly obtained vectors: \begin{eqnarray*} 2uus-usu-suu&\to&2dus+2uds-dsu-usd-sdu-sud,\\ 2dus+2uds-dsu-usd-sdu-sud&\to&2dds+2dds-dsd-dsd-sdd-sdd=2(2dds-dsd-sdd)\\ sud+dus+usd+uds-2sdu-2dsu&\to&sdd+dds+dsd+dds-2sdd-2dsd=-sdd-dsd+2dds,\\ -sdd-dsd+2dds&\to&0, \end{eqnarray*} generating two new vectors with weights $(0,0)$, $(-2,1)$. Apply $\pi(Y_2)$ once more to the newly obtained vectors: \begin{eqnarray*} 2dus+2uds-dsu-usd-sdu-sud&\to&2sus+2uss-ssu-uss-ssu-sus=sus+uss-2ssu,\\ sus+uss-2ssu&\to&0,\\ -sdd-dsd+2dds&\to&-ssd-sds-ssd-dss+2sds+2dss=-2ssd+sds+dss,\\ -2ssd+sds+dss&\to&0, \end{eqnarray*} only the second vector is new with weight $(-1,-1)$ and since it's in the kernel of $\pi(Y_1)$, we are done! $$ \begin{array}{ccccccc} \mbox{weight}&\mbox{vector}&\mbox{quark content}&\mbox{isospin}&\mbox{hypercharge}&\mbox{charge}&\mbox{particle}\\ \hline (1,1)&2uud-udu-duu&uud&1/2&1&1&p\\ (-1,2)&dud+udd-2ddu&udd&-1/2&1&0&n\\ (2,-1)&2uus-usu-suu&uus&1&0&1&\Sigma^+\\ (0,0)&uds+usd+dus-2dsu+sud-2sdu&uds&0&0&0&\Sigma^0\\ (1,-2)&sus+uss-2ssu&uss&1/2&-1&0&\Xi^0\\ (0,0)&2uds-usd+2dus-dsu-sud-sdu&uds&0&0&0&\Lambda^0\\ (-2,1)&-sdd-dsd+2dds&dds&-1&0&-1&\Sigma^-\\ (-1,-1)&-2ssd+sds+dss&dss&-1/2&-1&-1&\Xi^- \end{array} $$ The weight space of the weight $(0,0)$ is of dimension $2$ and the states displayed in the table above are not orthogonal!
3. The space comprising states of quark content $uud$ is three dimensional, but the sum of the spaces of the two irreducible sub-representations above intersects it only in the two dimensional subspace spanned by $duu+udu+uud$ and $2uud-udu-duu$. Thus we try e.g. $udu$; its component orthogonal to the sum of the first two spaces is $$ udu-(1)(duu+udu+uud)/3-(-1)(2uud-udu-duu)/6 =(udu-duu)/2 $$ Since $\pi(X_1)(udu-duu)=0=\pi(X_2)(udu-duu)$ and $\pi(H_1)(udu-duu)=udu-duu$ and $\pi(H_2)(udu-duu)=udu-duu$, the sub-representation in the space generated by $udu-duu$ is again a highest weight $(1,1)$ cyclic representation, i.e. an $\mathbf{8}$-representation.
4. Now $27-10-8-8=1$ and thus the last irreducible sub-representation of $\pi$ is equivalent to the trivial representation. Since the only weight of this sub-representation is $(0,0)$ there must be a vector of quark content $uds$ which gets mapped to $0$ by all operators $\pi(X_j),\pi(Y_j)$. This vector is $$ uds-dus+dsu-sdu+sud-usd~. $$
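That this vector indeed spans a trivial sub-representation is easy to confirm numerically; a sketch (numpy) with the sign convention of the antisymmetric sum taken from the formula above and our own helpers `E`, `pi3`, `parity`:

```python
import numpy as np
from itertools import permutations

def E(j, k):
    M = np.zeros((3, 3))
    M[j - 1, k - 1] = 1
    return M

I = np.eye(3)
def pi3(A):
    return (np.kron(np.kron(A, I), I) + np.kron(np.kron(I, A), I)
            + np.kron(np.kron(I, I), A))

def parity(p):                          # sign of a permutation of (0, 1, 2)
    return 1 if p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)] else -1

# uds - dus + dsu - sdu + sud - usd with u, d, s = e_1, e_2, e_3
singlet = sum(parity(p) * np.kron(np.kron(I[p[0]], I[p[1]]), I[p[2]])
              for p in permutations(range(3)))

for Z in (E(1, 2), E(2, 3), E(1, 3), E(2, 1), E(3, 2), E(3, 1)):  # X_j and Y_j
    assert np.allclose(pi3(Z) @ singlet, 0)
print("the antisymmetric vector is annihilated by all pi(X_j) and pi(Y_j)")
```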

Irreducible representation of $\SU(3)$ in ${\cal P}_n$

Let ${\cal P}_n$ be the homogeneous polynomials on $\C^3$ of degree $n\in\N_0$ and $\G:\SU(3)\rar\Hom({\cal P}_n)$ the representation $\G(U)f\colon=f\circ U^{-1}$. Let us first compute $\g(X)\colon=D\G(1)X$: For any holomorphic function $f:\C^3\rar\C$ we have (cf. section) $$ \g(X)f(z)=\ftdl t0f(e^{-tX}z)=-(\pa_1f(z),\pa_2f(z),\pa_3f(z))Xz $$ and thus $$ \begin{array}{ccc} \g(X_1)f(z)=-\pa_1f(z)z_2,&\g(X_2)f(z)=-\pa_2f(z)z_3,&\g(X_3)f(z)=-\pa_1f(z)z_3,\\ \g(Y_1)f(z)=-\pa_2f(z)z_1,&\g(Y_2)f(z)=-\pa_3f(z)z_2,&\g(Y_3)f(z)=-\pa_3f(z)z_1~. \end{array} $$ Next we will verify that any non trivial invariant subspace contains the polynomial $z_3^n$: $\g(X_2)$ and $\g(X_3)$ act on a polynomial by decreasing the degree in $z_2$ and $z_1$, respectively, by one and increasing the degree of $z_3$ by one. $z_1^{n_1}z_2^{n_2}z_3^{n_3}$ will be mapped by $\g(X_3)^{n_1}$ to a non zero multiple of $z_2^{n_2}z_3^{n_1+n_3}$ and this gets mapped by $\g(X_2)^{n_2}$ to a non zero multiple of $z_3^n$. All monomials $z_1^{l_1}z_2^{l_2}z_3^{l_3}$ with $l_1 < n_1$ or $l_2 < n_2$ are mapped to $0$ by $\g(X_2)^{n_2}\g(X_3)^{n_1}$. Now if $f$ is a polynomial in an invariant subspace it must be a linear combination of monomials; we pick one of lowest degree in $z_3$, $n_3$ say. Then $\g(X_2)^{n_2}\g(X_3)^{n_1}f$ is just a non zero multiple of $z_3^n$. The space generated by $\g(Y_2)^{l_2}\g(Y_3)^{l_1}z_3^n$ is just ${\cal P}_n$, i.e. $\g$ and thus $\G$ is irreducible. Finally $\g(X_1)z_3^n=\g(X_2)z_3^n=0$, i.e. $z_3^n$ is a highest weight cyclic vector with weight $(0,n)$, because $\g(H_1)z_3^n=0$, $\g(H_2)z_3^n=-(z_2\pa_2z_3^n-z_3\pa_3z_3^n)=nz_3^n$.
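The action of $\g$ on polynomials is also convenient to explore with a computer algebra system; a sketch (sympy) checking the statements about $z_3^n$ for one arbitrarily chosen value of $n$:

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
z = sp.Matrix([z1, z2, z3])

def gamma(X, f):
    """gamma(X)f(z) = -(grad f)(z) * (X z)."""
    grad = sp.Matrix([[sp.diff(f, v) for v in (z1, z2, z3)]])
    return sp.expand(-(grad * sp.Matrix(X) * z)[0, 0])

H1 = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
H2 = [[0, 0, 0], [0, 1, 0], [0, 0, -1]]
X1 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
X2 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]

n = 4
f = z3**n
assert gamma(H1, f) == 0 and gamma(H2, f) == n * f   # weight (0, n)
assert gamma(X1, f) == 0 and gamma(X2, f) == 0       # highest weight vector
print("z3^n is a highest weight vector of weight (0, n), n =", n)
```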

The Weyl Group

Cartan algebra and its normalizer

Once a weight vector of a representation $\psi$ is known, we can find other weights by applying the operators $\psi(Y_j)$ or $\psi(X_j)$; but for the time being we don't know if this gives us another weight vector or the null vector. The Weyl group is some sort of a symmetry group of the weights and thus given any weight - we don't need to know a corresponding weight vector - the Weyl group gives us other weights, which in addition have the same multiplicity.
We start by another more geometric approach to weights: The two dimensional sub-algebra ${\cal H}$ generated by $H_1$ and $H_2$ is a maximal commutative sub-algebra of $\sla(3,\C)$; it's called the Cartan sub-algebra of $\sla(3,\C)$. The set $N({\cal H})$ of all $g\in\SU(3)$ that leave ${\cal H}$ invariant under the adjoint representation $\Ad$, i.e. $\Ad(g)({\cal H})\sbe{\cal H}$, is called the normalizer of ${\cal H}$; it's obviously a subgroup of $\SU(3)$ (cf. section). Let us investigate this subgroup more closely: for any $g\in N({\cal H})$ the matrices $gH_jg^{-1}$ are in ${\cal H}$, hence they must be diagonal, i.e. the standard basis vectors $e_1,e_2,e_3$ are eigen-vectors. Since on the other hand the eigen-vectors of $gH_jg^{-1}$ are $ge_1,ge_2$ and $ge_3$, we infer that $ge_1,ge_2$ and $ge_3$ must be a permutation of the standard basis $e_1,e_2$ and $e_3$ up to factors of modulus $1$, i.e.: there are $\theta_1,\theta_2,\theta_3\in\R$ and a permutation $\pi\in S_3$ such that $$ ge_j=e^{i\theta_j}e_{\pi(j)} \quad\mbox{and}\quad \sign(\pi)e^{i\sum\theta_j}=1~. $$ Conversely, any such $g$ lies in $N({\cal H})$ and therefore $$ N({\cal H})=\{g\in\SU(3):ge_j=e^{i\theta_j}e_{\pi(j)}, \sign(\pi)e^{i\sum\theta_j}=1\} \sim S^1\times S^1\times S_3~. $$

Cartan algebra as weight space

A weight $\l$ was defined as a pair of complex numbers: the eigen-values of $\psi(H_1)$ and $\psi(H_2)$ for some common eigen-vector $x$, but in fact for every $H\in{\cal H}$ we have $H=a_1H_1+a_2H_2$ and $\l(H)\colon=a_1\l(H_1)+a_2\l(H_2)$ is the eigen-value of $\psi(H)$ with eigen-vector $x$. Thus we may conveniently think of a weight as a linear functional $\l$ on the space ${\cal H}$. Moreover, endowing ${\cal H}$ with the euclidean product: $$ \la H,G\ra\colon=\tr(HG^*) $$ we know that any linear functional $\l$ can be written as $\l(H)=\la H,G\ra$ for a unique $G\in{\cal H}$. Thus we've got our new definition: a weight of a representation $\psi$ of $\sla(3,\C)$ in some finite dimensional vector-space $E$ is a vector $\l\in{\cal H}$ for which there exists some $x\in E\sm\{0\}$ such that $$ \forall H\in {\cal H}:\quad \psi(H)x=\la H,\l\ra x $$ Hence weights are particular vectors in the Cartan algebra ${\cal H}$ of $\sla(3,\C)$!
Compute $\l_j\in{\cal H}$, $j=1,2$, such that 1. $\la H_j,\l_k\ra=\d_{jk}$, i.e. $\l_1,\l_2$ is the dual basis to $H_1,H_2$. Beware, none of them is an orthogonal basis! 2. Show that $H_3\colon=-H_1-H_2$ is orthogonal to $\l_3\colon=\l_1-\l_2$. 3. Express both $\l_1,\l_2$ and $\l_3$ as linear combinations of $H_1$ and $H_2$ and vice versa. 4. Compute the norms of $H_j$ and $\l_j$.
1. If $\l_1=diag\{x,y,z\}$, then $x+y+z=0$, $x-y=1$ and $y-z=0$, i.e. $y=-1/3$, $z=-1/3$ and $x=2/3$, i.e. $\l_1$ is the electric charge operator $Q$; analogously we get: $\l_2=diag\{1/3,1/3,-2/3\}$ which is the hypercharge operator $Y$. Moreover: $\la H_1+H_2,\l_1-\l_2\ra=1-0+0-1=0$
3. By definition of the hypercharge and the charge we have: $$ \l_1=Q\colon=\tfrac23H_1+\tfrac13H_2\perp H_2,\quad \l_2=Y\colon=\tfrac13H_1+\tfrac23H_2\perp H_1,\quad \l_3=\tfrac13H_1-\tfrac13H_2\perp H_3 $$ $\norm{H_j}=\sqrt2$ and $\norm{\l_j}=\sqrt{2/3}$. $$ H_1=2\l_1-\l_2=2Q-Y,\quad H_2=2\l_2-\l_1=2Y-Q,\quad H_3=-\l_1-\l_2=-Q-Y~. $$ Suppose we are given two weights $\l$ and $\mu$ in ${\cal H}$. What does it mean in our new notation that $\l$ is higher than $\mu$? In our old notation it means that $\l-\mu=a_1(2,-1)+a_2(-1,2)$ for some $a_j\geq0$, which now says, that $\l,\mu\in{\cal H}$ can be written as $$ \l-\mu=a_1H_1+a_2H_2 $$ for some real non-negative numbers $a_1,a_2$. Since $\l_1,\l_2\in{\cal H}$ is the dual basis to $H_1,H_2$, this is equivalent to \begin{equation}\label{weyeq1}\tag{WEY1} \la\l-\mu,\l_1\ra\geq0 \quad\mbox{and}\quad \la\l-\mu,\l_2\ra\geq0~. \end{equation}
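A short numerical check of these relations (a sketch with numpy; the helper `solve_dual` is our own name):

```python
import numpy as np

H1, H2 = np.diag([1., -1., 0.]), np.diag([0., 1., -1.])
H3 = -H1 - H2
ip = lambda A, B: np.trace(A @ B.conj().T)          # <A,B> = tr(A B*)

def solve_dual(a, b):
    """Traceless diagonal matrix diag(x,y,-x-y) with <H1,.> = a and <H2,.> = b."""
    # <H1, diag(x,y,z)> = x - y and <H2, diag(x,y,z)> = y - z = x + 2y for z = -x-y
    x, y = np.linalg.solve(np.array([[1., -1.], [1., 2.]]), [a, b])
    return np.diag([x, y, -x - y])

l1, l2 = solve_dual(1, 0), solve_dual(0, 1)
assert np.allclose(l1, np.diag([2/3, -1/3, -1/3]))  # lambda_1 = Q
assert np.allclose(l2, np.diag([1/3, 1/3, -2/3]))   # lambda_2 = Y
assert np.allclose(2*l1 - l2, H1) and np.allclose(2*l2 - l1, H2)
assert np.allclose(-l1 - l2, H3)
assert np.isclose(ip(H1, H1), 2) and np.isclose(ip(l1, l1), 2/3)
print("dual basis identities verified")
```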

$N({\cal H})$ acting on ${\cal H}$ via $\Ad$

Since $\SU(3)\sbe\UU(3)$ we infer from exam that $\Ad:\SU(3)\rar\Gl(\su(3))$ is unitary. Hence for all $g\in N({\cal H})$ the mapping $\Ad(g)$ acts isometrically on $({\cal H},\la.,.\ra)$. Also, roots are weights of the adjoint representation; hence, in our new interpretation, a root is an element $r\in{\cal H}$ such that for some root vector $X\in\sla(3,\C)$: $$ \forall H\in{\cal H}:\quad \ad(H)X=\la H,r\ra X~. $$
The Cartan algebra of $\sla(2,\C)$ is just the linear hull of $H$ and there are only two roots: $H$ and $-H$ with root vectors $X$ and $Y$, respectively.
Determine $r_1,r_2\in{\cal H}$ such that 1. $\la H_1,r_1\ra=2$ and $\la H_2,r_1\ra=-1$, 2. $\la H_1,r_2\ra=-1$ and $\la H_2,r_2\ra=2$.
Since $\l_1,\l_2$ is the dual basis to $H_1,H_2$, we must have: $r_1=2\l_1-\l_2=diag\{1,-1,0\}=H_1$ and $r_2=-\l_1+2\l_2=diag\{0,1,-1\}=H_2$.
Prove that for all $j=1,2$ and all $H\in{\cal H}$: $\ad(H)X_j=\la H,H_j\ra X_j$ and $\ad(H)Y_j=\la H,-H_j\ra Y_j$. Thus $H_j$ and $-H_j$ are roots and the corresponding root vectors are $X_j$ and $Y_j$. $-H_3$ and $H_3$ are also roots with root vectors $X_3$ and $Y_3$, respectively.
The root of $X_1$ is $(2,-1)$, which is in our new interpretation: $2\l_1-\l_2=H_1$; similarly the root of $X_2$ is $(-1,2)$, which now becomes $-\l_1+2\l_2=H_2$. The root of $X_3$ is $(1,1)$, which now becomes $H_1+H_2=-H_3$. Finally the roots of the root vectors $Y_j$ are just the negatives of the roots of $X_j$.
Determine $r\in{\cal H}$ such that $\la H_1,r\ra=1$ and $\la H_2,r\ra=0$.
1. $H_1,H_2$ and $H_3\colon=-H_1-H_2$ form an equilateral triangle in ${\cal H}$. 2. If $\l\in{\cal H}$ is the highest weight of an irreducible representation $\psi$, then all other weights $\mu$ must be of the form $\l-n_1H_1-n_2H_2$ and $n_j=\la\l-\mu,\l_j\ra\in\N_0$. In particular there exists a weight vector $x$ such that $$ \psi(H_1)x=(\la\l,H_1\ra-2n_1+n_2)x,\quad \psi(H_2)x=(\la\l,H_2\ra+n_1-2n_2)x~. $$
Let $\Psi:\SU(3)\rar\Gl(E)$ be any finite dimensional representation with derivative $\psi\colon=D\Psi(1)$. If $\l\in{\cal H}$ is a weight for $\psi$, then for all $g\in N({\cal H})$ $\Ad(g)\l$ is also a weight for $\psi$ with the same multiplicity.
$\proof$ First we show that for all $g\in N({\cal H})$ the vector $\Psi(g)x$ is a weight vector with weight $\Ad(g)\l$: for all $H\in{\cal H}$ we have $g^{-1}Hg\in{\cal H}$ and $\psi(g^{-1}Hg)=\Psi(g)^{-1}\psi(H)\Psi(g)$ (cf. example) and thus \begin{eqnarray*} \psi(H)\Psi(g)x &=&\Psi(g)\Psi(g)^{-1}\psi(H)\Psi(g)x\\ &=&\Psi(g)\psi(g^{-1}Hg)x =\Psi(g)(\la g^{-1}Hg,\l\ra x)\\ &=&\la g^{-1}Hg,\l\ra \Psi(g)x =\la\Ad(g^{-1})H,\l\ra \Psi(g)x =\la H,\Ad(g)\l\ra \Psi(g)x, \end{eqnarray*} where the last equality follows from the fact that $\Ad$ acts isometrically on $({\cal H},\la.,.\ra)$. Therefore $\Psi(g)$ maps the weight space with weight $\l$ into the weight space with weight $\Ad(g)\l$ and since $\Ad$ and $\Psi$ are representations $\Psi(g^{-1})$ is the inverse of this map, i.e. the dimensions of the weight spaces with weight $\l$ and $\Ad(g)\l$ coincide. $\eofproof$

The Weyl group

Of course only those elements $g\in N({\cal H})$ are of interest for which $\Ad(g)|{\cal H}$ is different from the identity, thus we factor the kernel $Z$ of $\Ad:N({\cal H})\rar\Gl({\cal H})$: $Z$ is given by $$ Z=\{g\in N({\cal H}):\forall H\in{\cal H}:\ \Ad(g)H=H\}, $$ which is the centralizer of ${\cal H}$ in $N({\cal H})$, cf. section. $Z$ is the set of all diagonal matrices $g\in N({\cal H})$ satisfying $$ ge_j=e^{i\theta_j}e_j \quad\mbox{and}\quad \sum\theta_j\in2\pi\Z~. $$ Factorizing $\Ad:N({\cal H})\rar\Gl({\cal H})$ gives us an injective homomorphism $N({\cal H})/Z\rar\Gl({\cal H})$. The group $N({\cal H})/Z$ is called the Weyl group $W(\su(3))$ of $\SU(3)$; it's isomorphic to $S_3$: to every permutation $\pi\in S_3$ we associate its permutation matrix and - in order to get a determinant one matrix - we may multiply it by $\sign(\pi)$; for the calculation of $\Ad(g)H$ this doesn't matter. We define the action of $W(\su(3))$ on ${\cal H}$ by \begin{equation}\label{weyeq2}\tag{WEY2} w\cdot H\colon=\Ad(g)H, \end{equation} where $[g]=w$ is identified with a permutation of three elements. Let us compute the action of the permutation $w=(231)$ on ${\cal H}$: $$ \Ad(g)H =\left(\begin{array}{ccc} 0&0&1\\ 1&0&0\\ 0&1&0 \end{array}\right) \left(\begin{array}{ccc} h_1&0&0\\ 0&h_2&0\\ 0&0&h_3 \end{array}\right) \left(\begin{array}{ccc} 0&1&0\\ 0&0&1\\ 1&0&0 \end{array}\right) =\left(\begin{array}{ccc} h_3&0&0\\ 0&h_1&0\\ 0&0&h_2 \end{array}\right)~. $$ In general if $H=diag\{h_1,h_2,h_3\}$ then $w\cdot H=diag\{h_{w^{-1}(1)},h_{w^{-1}(2)},h_{w^{-1}(3)}\}$.
Let $\Psi:\SU(3)\rar\Gl(E)$ be any finite dimensional representation with derivative $\psi\colon=D\Psi(1)$. If $\l=diag\{h_1,h_2,h_3\}$ is a weight for $\psi$, then for any permutation $w\in S_3$: $diag\{h_{w(1)},h_{w(2)},h_{w(3)}\}$ is also a weight for $\psi$ with the same multiplicity.
The permutations $w\in\{(213),(132),(321),(231),(312)\}$ act on the pair of vectors $(H_1,H_2)$, yielding pairs $(w\cdot H_1,w\cdot H_2)$, which are given by $$ (-H_1,-H_3), (-H_3,-H_2), (-H_2,-H_1), (H_3,H_1), (H_2,H_3)~. $$ In other words: if $H_1$ is a weight of the representation $\psi$ of multiplicity $m_1$, then so are $-H_1,-H_3,-H_2,H_3,H_2$ and if $H_2$ is a weight of the representation $\psi$ of multiplicity $m_2$, then so are $-H_3,-H_2,-H_1,H_1,H_3$.
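A sketch (numpy) confirming that the $\Ad$-orbit of $H_1$ - and likewise of $H_2$ - under the six permutation matrices is exactly $\{\pm H_1,\pm H_2,\pm H_3\}$; the phases $e^{i\theta_j}$ and the sign factor drop out of $\Ad(g)H$, so plain permutation matrices suffice here.

```python
import numpy as np
from itertools import permutations

H1, H2 = np.diag([1., -1., 0.]), np.diag([0., 1., -1.])
H3 = -H1 - H2
targets = [H1, H2, H3, -H1, -H2, -H3]

def ad_images(H):
    """Ad(g)H = g H g^{-1} for the six 3x3 permutation matrices g."""
    out = []
    for p in permutations(range(3)):
        g = np.zeros((3, 3))
        for j, i in enumerate(p):
            g[i, j] = 1                  # g e_j = e_{p(j)}
        out.append(g @ H @ g.T)          # g^{-1} = g^t for permutation matrices
    return out

for H in (H1, H2):
    images = ad_images(H)
    # every image is one of ±H_1, ±H_2, ±H_3 ...
    assert all(any(np.allclose(im, t) for t in targets) for im in images)
    # ... and every one of the six actually occurs
    assert all(any(np.allclose(im, t) for im in images) for t in targets)
print("the Weyl group orbit of H_1 and of H_2 is {±H_1, ±H_2, ±H_3}")
```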
We are now going to describe the action of the Weyl group on the two dimensional space ${\cal H}$ geometrically: The action of the first permutation $(213)$ is a reflection about the line $\R\l_2=[\la.,H_1\ra=0]$; the action of the second $(132)$ is a reflection about the line $\R\l_1=[\la.,H_2\ra=0]$ and the action of the third $(321)$ is a reflection about the line $\R(\l_1-\l_2)=[\la.,H_3\ra=0]$. The action of the fourth $(231)$ is a rotation by $4\pi/3$ and the action of the fifth $(312)$ is a rotation by $2\pi/3$. Hence the Weyl group is isomorphic to the symmetry group $C_{3v}$ of the regular triangle formed by $\l_3\colon=\l_1-\l_2,\l_2,-\l_1$ and the reflections about the three lines orthogonal to $H_j$, $j=1,2,3$ generate this group of symmetries. That's the way we look at the Weyl group: a subgroup of the group of isometries of the two dimensional space ${\cal H}$. The picture below illustrates the lattice points $\Z\l_1+\Z\l_2$ (which contains all possible weights), the regular triangle (of which the Weyl group is the symmetry group) and the positive cone $$ P\colon=\R_0^+\l_1+\R_0^+\l_2 =[\la.,H_1\ra\geq0]\cap[\la.,H_2\ra\geq0] $$ of non negative weights bounded by half-lines through $\l_1$ and $\l_2$. In addition, the cone indicates all weights lower than $-H_3$.
[Figure: weyl1]
Some conclusions can be drawn from this interpretation immediately:
  1. If $\l\neq0$ is any weight, then the set $\{w\cdot\l:w\in W(\su(3))\}$ contains either three or six points, depending on whether $\l$ lies on one of the lines generated by $\l_1,\l_2$ or $\l_1-\l_2$ or not.
  2. The vectors $\pm H_j$, $j=1,2,3$, form a regular hexagon and for every lattice point in $\l\in\Z\l_1+\Z\l_2$ there is some $w$ in the Weyl group $W(\su(3))$ such that $w\cdot\l$ lies in the positive cone $P$ generated by $\l_1$ and $\l_2$, i.e. both $\la w\cdot\l,H_1\ra$ and $\la w\cdot\l,H_2\ra$ are non negative.
$\l_1$ is a weight of the $\mathbf{3}$-representation. Applying the Weyl group gives the additional weights: $-\l_1+\l_2$ and $-\l_2$.
The $\mathbf{3}$-representation has the highest weight $(1,0)$, i.e. $\l_1$, hence it must also have the weights $-\l_1+\l_2$ and $-\l_2$, which in the old notation are $(-1,1)$ and $(0,-1)$.
The $\mathbf{8}$-representation is the highest weight $(1,1)$ cyclic representation, i.e. the highest weight in ${\cal H}$ is $\l_1+\l_2$ and its multiplicity is $1$. Applying the Weyl group yields the new weights $$ -\l_1+2\l_2,\quad -\l_1-\l_2,\quad 2\l_1-\l_2,\quad -2\l_1+\l_2,\quad \l_1-2\l_2, $$ all of which have multiplicity $1$.
Assume a representation $\psi=D\Psi(1)$ has weights $(0,3)$ and $(1,1)$. What additional weights $\psi$ must have? These weights in our new approach are given by $3\l_2$ and $\l_1+\l_2=-H_3$, thus it must have the additional weights: $3(\l_1-\l_2)$, $-3\l_1$ and $H_3,\pm H_1,\pm H_2$.
Assume a representation $\psi=D\Psi(1)$ has weights $(3,2)$ and $(1,2)$. What are the additional weights $\psi$ must have?
The reflections about $\R\l_1$ and $\R\l_2$, respectively, are given by $w_1\cdot H=H-\la H,H_2\ra H_2$ and $w_2\cdot H=H-\la H,H_1\ra H_1$, in particular: \begin{eqnarray*} w_1\cdot(n_1H_1+n_2H_2) &=&n_1H_1+n_2H_2-(-n_1+2n_2)H_2 =n_1H_1+(n_1-n_2)H_2 =-n_1H_3-n_2H_2\\ w_2\cdot(n_1H_1+n_2H_2) &=&n_1H_1+n_2H_2-(2n_1-n_2)H_1 =(n_2-n_1)H_1+n_2H_2 =-n_2H_3-n_1H_1~. \end{eqnarray*} Similarly we get for the reflection $w_3$ about $\R\l_3$: $$ w_3\cdot(n_1H_1+n_2H_2) =n_1H_1+n_2H_2-\la n_1H_1+n_2H_2,H_3\ra H_3 =n_1H_1+n_2H_2-(n_1+n_2)(H_1+H_2) =-n_2H_1-n_1H_2~. $$
Compute $w_j\cdot(\l_1+2\l_2)$ for $j=1,2,3$.
$$ \l_1+2\l_2-\la\l_1+2\l_2,H_2\ra(2\l_2-\l_1) =\l_1+2\l_2-2(2\l_2-\l_1) =3\l_1-2\l_2 $$
Let $\l$ be the highest weight of the irreducible representation $\psi$ and let $w_3$ be the reflection about the line $\R\l_3=[\la.,H_3\ra=0]$. 1. $w_3\cdot\l$ is the lowest weight of $\psi$. 2. $-w_3\cdot\l$ is the highest weight of its dual $\bar\psi$.
Denote by $\mu$ the lowest weight; as the representation is irreducible, it must be of the form $\mu=\l-n_1H_1-n_2H_2$. Assume $w_3\cdot\l=\l-2\la\l,H_3\ra H_3/2$ is strictly greater than $\mu$, i.e. $n_1H_1+n_2H_2 > \la\l,H_3\ra H_3$. Hence we have: \begin{eqnarray*} w_3\cdot\mu&=&\mu-\la\mu,H_3\ra H_3 =\l-n_1H_1-n_2H_2-2\la\l-n_1H_1-n_2H_2,H_3\ra H_3/2\\ &=&\l-n_1H_1-n_2H_2-2\la\l,H_3\ra H_3/2+2\la n_1H_1+n_2H_2,H_3\ra H_3/2\\ &>&\l+2(-n_1H_1-n_2H_2+\la n_1H_1+n_2H_2,H_1+H_2\ra(H_1+H_2)/2)\\ &=&\l+(n_1+n_2)(H_1+H_2)-2n_1H_1-2n_2H_2=\l+(n_2-n_1)H_1+(n_1-n_2)H_2~. \end{eqnarray*} On the other hand $w_3\cdot\mu$ must be smaller than $\l$.
2. The weights of the dual representation are the negatives of the weights of the original representation!
The Weyl group as a subgroup of the isometries on the Cartan algebra ${\cal H}$ does not contain the inversion. Show that $\psi$ and its dual $\bar\psi$ are equivalent iff the weights of $\psi$ are invariant under the inversion.

The weights of irreducible representations

Starting with the highest weight of an irreducible representation, can we determine all its weights? The following examples give a hint about finding the dimensions of the weight spaces.
Suppose an irreducible representation $\psi$ has highest weight $\l$ with weight vector $x$. Prove that the weight space with weight $\l-H_1-H_2$ is spanned by $x_1\colon=\psi(Y_1)\psi(Y_2)x$ and $x_2\colon=\psi(Y_2)\psi(Y_1)x$ and is thus at most two dimensional.
Suppose $E$ carries a euclidean product $\la.,.\ra$ such that we have for an irreducible representation $\psi:\sla(3,\C)\rar\Hom(E)$: $\psi(X)^*=-\psi(X)$ for all $X\in\su(3)$. Let $\l=(m_1,m_2)$ be the highest weight of $\psi$ with normed weight vector $x$. 1. Show that $\psi(X_j)^*=\psi(Y_j)$, 2. $\la x_1,x_1\ra=m_2(m_1+1)$, $\la x_2,x_2\ra=m_1(m_2+1)$ and $\la x_1,x_2\ra=m_1m_2$. 3. If $m_1\geq1$ and $m_2\geq1$, then $x_1$ and $x_2$ are linearly independent. 4. If $m_1=0$ and $m_2\geq1$ or conversely, then $x_1$ and $x_2$ are linearly dependent.
$X_j-Y_j,iX_j+iY_j\in\su(3)$ and thus \begin{eqnarray*} \psi(X_j)^*-\psi(Y_j)^*&=&-\psi(X_j)+\psi(Y_j)\quad\mbox{and}\\ -i\psi(X_j)^*-i\psi(Y_j)^*&=&-i\psi(X_j)-i\psi(Y_j) \end{eqnarray*} i.e. $\psi(X_j)^*=\psi(Y_j)$. Moreover, as $\psi(H_j)x=m_jx$ and $\psi(X_j)x=0$: $$ \norm{\psi(Y_j)x}^2 =\la x,\psi(X_j)\psi(Y_j)x\ra =\la x,\psi(H_j)x+\psi(Y_j)\psi(X_j)x\ra =m_j $$ For clarity we drop $\psi$ in the subsequent calculations, i.e. we write $X_1$ for $\psi(X_1)$, etc. As $[X_1,Y_2]=[X_2,Y_1]=0$ and $X_jx=0$ we get $X_1Y_2x=0$ and therefore by the commutation relations: \begin{eqnarray*} \la x_1,x_1\ra &=&\la Y_1Y_2x,Y_1Y_2x\ra =\la x,X_2X_1Y_1Y_2x\ra\\ &=&\la x,X_2([X_1,Y_1]+Y_1X_1)Y_2x\ra =\la x,X_2H_1Y_2x\ra\\ &=&\la x,([X_2,H_1]+H_1X_2)Y_2x\ra =\la x,(X_2Y_2+H_1([X_2,Y_2]+Y_2X_2))x\ra\\ &=&\la x,([X_2,Y_2]+Y_2X_2+H_1H_2)x\ra =m_2+m_1m_2 \end{eqnarray*} The verification of the other relations is quite similar.
3.,4. We compute $$ a^2\colon=\norm{x_1}^2\norm{x_2}^2-\la x_1,x_2\ra^2 =m_1m_2(m_1+1)(m_2+1)-m_1^2m_2^2 =m_1m_2(m_1+m_2+1)~. $$ If $m_1,m_2\geq1$, then $a^2 > 0$ and thus by the equality case in the Cauchy-Schwarz-inequality: $x_1$ and $x_2$ are not collinear. On the other hand, if $m_1=0$ or $m_2=0$, then $x_1$ and $x_2$ are collinear.
Suppose $\l$ is the highest weight of an irreducible representation $\psi$. Then the convex hull $C$ of $\{w\cdot\l:w\in W(\su(3))\}$ contains all weights of $\psi$.
$\proof$ In the previous subsection we found that for any weight $\mu$ of $\psi$ there is some $w\in W(\su(3))$ such that $w\cdot\mu\in P$. Reflecting the weight $\l$ about the lines $\R\l_1=[\la.,H_2\ra=0]$ and $\R\l_2=[\la.,H_1\ra=0]$ gives us two weights $w_1\cdot\l$ and $w_2\cdot\l$ in the set $$ D_0\colon=[\la.,\l_1\ra\leq\la\l,\l_1\ra]\cap [\la.,\l_2\ra\leq\la\l,\l_2\ra]~. $$ Since a reflection about $\R\l_j$ doesn't alter the inner product with $\l_j$, these two weights are on the boundary of $D_0$.
[Figure: convex]
We claim that $w\cdot\mu$ is in the intersection $$ D=P\cap D_0, $$ which follows from the fact that $w\cdot\mu$ is lower than $\l$, i.e. $\la w\cdot\mu,\l_j\ra\leq\la\l,\l_j\ra$. Now put $K\colon=\convex{0,w_1\cdot\l,w_2\cdot\l,\l}$; as $0=\tfrac16\sum_w w\cdot\l\in C$, we have $K\sbe C$. Finally we see from the above picture that $D\sbe K$. Hence $D\sbe C$ and $$ \mu \in\bigcup_{u\in W(\su(3))} u\cdot D \sbe\bigcup u\cdot C =C~. $$ $\eofproof$
Let $\mu$ be a weight of an irreducible representation $\psi:\sla(3,\C)\rar\Hom(E)$. If $w\in W(\su(3))$ is a reflection about the line orthogonal to $H_j$, $j=1,2,3$, then any point on the line segment joining $\mu$ to $w\cdot\mu$ of the form $\mu+\Z H_j$ is a weight of $\psi$. Moreover $\mu$ must be an element in $\Z\l_1+\Z\l_2$.
$\proof$ The sub-algebras $A_j\colon=\lhull{H_j,X_j,Y_j}$, $j=1,2,3$ are all isomorphic to $\sla(2,\C)$. W.l.o.g. we may assume $j=1$. So let $F$ be the subspace of $E$ generated by all weight vectors in $E$ whose weights lie on the line $\mu+\R H_1$. These weights are shifted by $\psi(X_1)$ and $\psi(Y_1)$ by $\pm H_1$ and therefore the restriction $\psi|A_1$ gives a representation $\vp$ of $\sla(2,\C)$ in $F$. Since $w\cdot H_1=-H_1$ and $w^*=w$, we have: $$ \la H_1,w\cdot\mu\ra=\la w\cdot H_1,\mu\ra=-\la H_1,\mu\ra, $$ i.e. $\vp$ has at least the weights $\la H_1,\mu\ra$ and $-\la H_1,\mu\ra$. By proposition: $\la H_1,\mu\ra\in\Z$ and $\vp(H_1)\in\Hom(F)$ must have the eigen-values $$ -|\la H_1,\mu\ra|,-|\la H_1,\mu\ra|+2,\ldots,|\la H_1,\mu\ra|-2,|\la H_1,\mu\ra|, $$ which coincides with the set $\{\la H_1,\nu\ra\}$, where $\nu$ runs through the points $\nu=\mu+nH_1$, $n\in\Z$, on the line segment joining $\mu$ to $w\cdot\mu$. Thus for any such $\nu$ there must be an eigen-vector $x\in F$ of $\psi(H_1)$ such that: $\psi(H_1)x=\la H_1,\nu\ra x$ and this eigen-vector can be obtained by starting with a normed weight vector $x_0$ for the weight $\mu$ (or $w\cdot\mu$) and applying $\psi(Y_1)$ $n$ times, i.e. $x=\psi(Y_1)^nx_0$; as $x\neq0$, it's a weight vector for $\psi$. For the moreover part we remark that we just proved: $\la H_1,\mu\ra,\la H_2,\mu\ra\in\Z$. Since $\l_1,\l_2$ is the dual basis of $H_1,H_2$ this means: $\mu\in\Z\l_1+\Z\l_2$. $\eofproof$
Let $\psi:\sla(3,\C)\rar\Hom(E)$ be an irreducible representation with highest weight $\l$. $\mu$ is a weight of $\psi$ if and only if the following three conditions hold:
  1. $\mu\in\Z\l_1+\Z\l_2$,
  2. $\mu\in\l-\N_0H_1-\N_0H_2$,
  3. $\mu\in C\colon=\convex{w\cdot\l:w\in W(\su(3))}$,
$\proof$ The necessity is clear from lemma, lemma and the fact that every weight smaller than $\l$ must be of the form: $\l-n_1H_1-n_2H_2$, $n_1,n_2\in\N_0$. Thus we only need to prove that these conditions are sufficient. By lemma each point in $\l-\N_0H_1-\N_0H_2$ on the boundary of $C$ is a weight. So assume $\mu=\l-n_1H_1-n_2H_2$ is in the interior of $C$; w.l.o.g. we may also assume: $n_2\leq n_1$, then $$ \mu=\l-mH_1+n_2H_3, \quad\mbox{with}\quad m=n_1-n_2\geq0~. $$ Starting at $\mu$ in the direction of $-H_3$ we end up in $\nu\colon=\l-(n_1-n_2)H_1$ after $n_2$ steps. We claim that $\nu$ lies on the boundary of the convex set $C$. Firstly it cannot lie on a boundary part of $C$ parallel to $H_3$. Secondly the intersection with the line passing through $\l$ and parallel to $H_2$ is given by the equation $\l-mH_1+x_3H_3=\l-x_2H_2$, i.e.: $mH_1=x_2H_2+x_3H_3$, i.e. $x_2=x_3=-m$. Hence the intersection point $\l-x_2H_2=\l+mH_2$ is not in $C$ unless $m=0$. Thus $\nu$ is a weight and so is its reflection $w\cdot\nu$ about the line orthogonal to $H_3$; by lemma $\mu$ must also be a weight. $\eofproof$
The weight diagram for the highest weight $\l_1+2\l_2$ irreducible representation $\psi$: the blue points represent the points in the lattice $\Z\l_1+\Z\l_2$ and the orange encircled points represent the weights of $\psi$. The light blue area is the convex hull of the set $\{w\cdot\l:w\in C_{3v}\}$, where $C_{3v}$ denotes the group of symmetry of the dashed triangle in the center, i.e. the Weyl group of $\su(3)$. For all weights on the boundary of the convex hull the dimension of the associated weight spaces equals $1$ and for the remaining three weights in the interior the dimension equals $2$. Adding up to a dimension of $15$.
[Figure: wh1-2]
Assume an irreducible representation $\psi$ of $\sla(3,\C)$ has highest weight $3\l_1+2\l_2$. Identify all additional weights of $\psi$.
