What should you be acquainted with? 1. Linear Algebra, in particular inner product spaces over both the real and the complex numbers. 2. The very basics of Group Theory and Complex Analysis. This chapter is essentially taken from Brian Hall, Lie Groups, Lie Algebras, and Representations, Chapter 6.
For $n=3$ we choose the following eight matrices as a basis for $\sla(3,\C)$:
$$
H_1\colon=\left(\begin{array}{ccc}
1&0&0\\
0&-1&0\\
0&0&0
\end{array}\right),
X_1\colon=\left(\begin{array}{ccc}
0&1&0\\
0&0&0\\
0&0&0
\end{array}\right)=E^{12},
Y_1\colon=\left(\begin{array}{ccc}
0&0&0\\
1&0&0\\
0&0&0
\end{array}\right)=E^{21}
$$
$$
H_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&1&0\\
0&0&-1
\end{array}\right),
X_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&1\\
0&0&0
\end{array}\right)=E^{23},
Y_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&0\\
0&1&0
\end{array}\right)=E^{32}
$$
$$
X_3\colon=\left(\begin{array}{ccc}
0&0&1\\
0&0&0\\
0&0&0
\end{array}\right)=E^{13},
Y_3\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&0\\
1&0&0
\end{array}\right)=E^{31}
$$
Notice: $H_j\in i\su(3)$ and $X_j,Y_j\notin\su(3)$! The spaces $\lhull{H_1,X_1,Y_1}$ and $\lhull{H_2,X_2,Y_2}$ are obviously sub-algebras isomorphic to $\sla(2,\C)$, i.e.
$$
\forall j\in\{1,2\}\quad
[H_j,X_j]=2X_j,
[H_j,Y_j]=-2Y_j,
[X_j,Y_j]=H_j
$$
Also, the complex subspace $\lhull{H_1,H_2}$ is the (complexified) Lie-algebra of the torus $\TT^2$, i.e. the traceless diagonal matrices; this sub-algebra is commutative and thus $[H_1,H_2]=0$. Moreover we have the following commutation relations:
\begin{eqnarray*}
&&[H_1,X_2]=-X_2,
[H_1,Y_2]=Y_2,
[H_1,X_3]=X_3,
[H_1,Y_3]=-Y_3\\
&&[H_2,X_1]=-X_1,
[H_2,Y_1]=Y_1,
[H_2,X_3]=X_3,
[H_2,Y_3]=-Y_3
\end{eqnarray*}
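All of these relations can be checked mechanically. The following sketch (Python with NumPy, an addition to the text, not the author's code) verifies them for the explicit matrices above:

```python
import numpy as np

def E(j, k):  # matrix unit E^{jk} (1-based indices)
    M = np.zeros((3, 3)); M[j-1, k-1] = 1.0; return M

H1 = np.diag([1., -1., 0.]); H2 = np.diag([0., 1., -1.])
X1, Y1 = E(1, 2), E(2, 1)
X2, Y2 = E(2, 3), E(3, 2)
X3, Y3 = E(1, 3), E(3, 1)

def br(A, B):  # the commutator [A, B]
    return A @ B - B @ A

# the two sl(2,C)-triples
for H, X, Y in [(H1, X1, Y1), (H2, X2, Y2)]:
    assert np.array_equal(br(H, X), 2 * X)
    assert np.array_equal(br(H, Y), -2 * Y)
    assert np.array_equal(br(X, Y), H)

# the mixed relations
assert np.array_equal(br(H1, H2), np.zeros((3, 3)))
assert np.array_equal(br(H1, X2), -X2) and np.array_equal(br(H1, Y2), Y2)
assert np.array_equal(br(H1, X3), X3) and np.array_equal(br(H1, Y3), -Y3)
assert np.array_equal(br(H2, X1), -X1) and np.array_equal(br(H2, Y1), Y1)
assert np.array_equal(br(H2, X3), X3) and np.array_equal(br(H2, Y3), -Y3)
print("all commutation relations verified")
```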
As $\sla(3,\C)$ is simple, the adjoint representation $\ad(X)Y=[X,Y]$ of $\su(3)$ is irreducible. The following table lists $\ad$ applied to the basis elements, where $H_3\colon=-H_1-H_2$:
$$
\begin{array}{r|cccccccc}
\ad&H_1&H_2&X_1&X_2&X_3&Y_1&Y_2&Y_3\\
\hline
\ad(H_1)&0&0&2X_1&-X_2&X_3&-2Y_1&Y_2&-Y_3\\
\ad(H_2)&0&0&-X_1&2X_2&X_3&Y_1&-2Y_2&-Y_3\\
\ad(X_1)&-2X_1&X_1&0&X_3&0&H_1&0&-Y_2\\
\ad(X_2)&X_2&-2X_2&-X_3&0&0&0&H_2&Y_1\\
\ad(X_3)&-X_3&-X_3&0&0&0&-X_2&X_1&-H_3\\
\ad(Y_1)&2Y_1&-Y_1&-H_1&0&X_2&0&-Y_3&0\\
\ad(Y_2)&-Y_2&2Y_2&0&-H_2&-X_1&Y_3&0&0\\
\ad(Y_3)&Y_3&Y_3&Y_2&-Y_1&H_3&0&0&0\\
\end{array}
$$
Weights
The classification of the irreducible representations of $\SU(2)$ relied heavily on the fact that both $X_1$ and $Y_1$ are ladder operators for $H_1$. This is still true for $\SU(3)$, but there are several additional commutation relations, one of which is $[H_1,H_2]=0$. This relation implies that given any representation $\psi:\sla(n,\C)\rar\Hom(E)$ in a finite dimensional complex vector-space $E$, the linear mappings $\psi(H_1)$ and $\psi(H_2)$ have at least one joint eigen-vector; if both $\psi(H_1)$ and $\psi(H_2)$ are diagonalizable, then $E$ even has a basis of joint eigen-vectors. Indeed, we may always assume that these operators are jointly diagonalizable, because $\su(n)$ is the Lie-algebra of the compact group $\SU(n)$ and hence for any representation $\psi:\sla(n,\C)\rar\Hom(E)$ there is a euclidean product on $E$ such that for all $X\in\su(n)$: $\psi(X)^*=-\psi(X)$ (cf section). This in turn implies that $\psi$ is the direct sum of irreducible representations: if $F$ is invariant, then so is $F^\perp$, because for all $y\in F^\perp$ and all $x\in F$:
$$
\la\psi(X)y,x\ra
=\la y,\psi(X)^*x\ra
=-\la y,\psi(X)x\ra=0~.
$$
From this it follows (cf. proposition) that $\psi$ is the direct sum of irreducible representations.
Let ${\cal A}$ be the space generated by $\psi(H_1)$ and $\psi(H_2)$, then by subsection a weight of ${\cal A}$ can be identified with a pair $\l\colon=(\l(H_1),\l(H_2))\in\C^2$, such that there exists a common eigen-vector $x\in E\sm\{0\}$:
$$
\psi(H_1)x=\l(H_1)x
\quad\mbox{and}\quad
\psi(H_2)x=\l(H_2)x~.
$$
$\l$ will be called a weight of the representation $\psi$ and $x$ is called a weight vector corresponding to the weight $\l$. The space of all weight vectors for the weight $\l$ is called the weight space of $\l$ and the dimension of the weight space of $\l$ is called the multiplicity of the weight $\l$.
By restricting $\psi$ to the sub-algebras generated by $H_1,X_1,Y_1$ and by $H_2,X_2,Y_2$, respectively, we know from proposition that both components of a weight must be integers.
Let us calculate the weights of the standard representation of $\sla(3,\C)$ and the weights of its dual representation: obviously the canonical basis vectors $e_1,e_2,e_3$ of $\C^3$ are weight vectors with weights $(1,0)$, $(-1,1)$ and $(0,-1)$. Since the dual representation is the negative of the transpose, the dual basis vectors $e_1^*,e_2^*,e_3^*$ are weight vectors of the dual with weights $(-1,0)$, $(1,-1)$, $(0,1)$. Thus the weights of the dual of the standard representation are the negatives of the weights of the standard representation. This is actually true for any representation if we assume that $\psi(H_1)$ and $\psi(H_2)$ are jointly diagonalizable, which is no loss of generality. Under this condition the weights of the dual representation $\psi^d$ of $\psi:\sla(3,\C)\rar\Hom(E)$ are the negatives of the weights of $\psi$: choose a basis $e_1,\ldots,e_n$ of $E$ such that both $\psi(H_1)$ and $\psi(H_2)$ are diagonal with respect to this basis and $\psi(H_j)e_k=m_{jk}e_k$. Then the matrix of $\psi^d(H_j)$ with respect to the dual basis $e_1^*,\ldots,e_n^*$ is the negative of the transpose of the matrix of $\psi(H_j)$ with respect to the basis $e_1,\ldots,e_n$ (cf. section). Hence $\psi^d(H_j)e_k^*=-m_{jk}e_k^*$.
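Since both representations act diagonally on $H_1,H_2$ in the canonical and dual bases, the weights can simply be read off the diagonals. A small numerical sketch (Python/NumPy, an illustration added here, not part of the text):

```python
import numpy as np

# Weights of the standard representation of sl(3,C) and of its dual,
# read off as joint eigenvalues of H_1 and H_2.
H1 = np.diag([1, -1, 0])
H2 = np.diag([0, 1, -1])

# standard representation: psi(H_j) = H_j, so e_k is a joint eigenvector
std_weights = [(int(H1[k, k]), int(H2[k, k])) for k in range(3)]
# dual representation: psi^d(H_j) = -H_j^t, so the weights flip sign
dual_weights = [(-w1, -w2) for (w1, w2) in std_weights]

print(std_weights)   # [(1, 0), (-1, 1), (0, -1)]
print(dual_weights)  # [(-1, 0), (1, -1), (0, 1)]
```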
The above commutation relations give us the following roots
$$
\begin{array}{r|c}
\mbox{root}&\mbox{root vector}\\
\hline
r_1=(2,-1)&X_1\\
r_2=(-1,2)&X_2\\
r_3=r_1+r_2=(1,1)&X_3\\
-r_1=(-2,1)&Y_1\\
-r_2=(1,-2)&Y_2\\
-r_3=(-1,-1)&Y_3
\end{array}
$$
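The root table can likewise be verified numerically: each root vector $Z$ with root $r$ must satisfy $[H_j,Z]=r(H_j)Z$. A sketch (Python/NumPy, an addition to the text):

```python
import numpy as np

def E(j, k):  # matrix unit E^{jk} (1-based indices)
    M = np.zeros((3, 3)); M[j-1, k-1] = 1.0; return M

H = [np.diag([1., -1., 0.]), np.diag([0., 1., -1.])]
# root vector together with its root (r(H_1), r(H_2))
roots = {
    "X1": (E(1, 2), (2, -1)), "X2": (E(2, 3), (-1, 2)), "X3": (E(1, 3), (1, 1)),
    "Y1": (E(2, 1), (-2, 1)), "Y2": (E(3, 2), (1, -2)), "Y3": (E(3, 1), (-1, -1)),
}
for name, (Z, r) in roots.items():
    for j in range(2):
        assert np.array_equal(H[j] @ Z - Z @ H[j], r[j] * Z), name
print("root table verified")
```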
i.e. all vectors $X_j,Y_j$, $j=1,2,3$ are root vectors. As both $H_1$ and $H_2$ are weight vectors, the vectors $X_j,Y_j$, $j=1,2,3$ and $H_1/\sqrt2,H_2/\sqrt2$ form a basis of normed weight vectors in $\sla(3,\C)$ furnished with the euclidean product $\tr(XY^*)$.
$\proof$
Since $\psi$ is an algebra homomorphism and $[H_j,Z]=r(H_j)Z$, it follows that
\begin{eqnarray*}
\psi(H_j)\psi(Z)x
&=&[\psi(H_j),\psi(Z)]x+\psi(Z)\psi(H_j)x\\
&=&r(H_j)\psi(Z)x+\l(H_j)\psi(Z)x
=(\l(H_j)+r(H_j))\psi(Z)x
\end{eqnarray*}
$\eofproof$
Just like eigen-vectors for pairwise distinct eigen-values are linearly independent, weight vectors for pairwise distinct weights are linearly independent:
$\proof$
1. Suppose $x_1,\ldots,x_n$ are weight vectors with pairwise distinct weights $\l_1,\ldots,\l_n$ and $x_1+\cdots+x_n=0$. We proceed by induction on $n$ to prove that all vectors $x_1,\ldots,x_n$ vanish. For $n=1$ the assertion is obvious. So assume $n\geq2$; since $\l_1\neq\l_2$, there is some $j$ such that $\l_1(H_j)\neq\l_2(H_j)$ and
$$
0=(\psi(H_j)-\l_1(H_j))\sum_k x_k
=\sum_{k\geq2}(\l_k(H_j)-\l_1(H_j))x_k~.
$$
By induction hypothesis all terms of the last sum vanish, in particular $x_2=0$ and thus: $x_1+x_3+\cdots+x_n=0$ which again by induction hypothesis implies that all vectors $x_k$ vanish.
2. If $x\colon=x_1+\cdots+x_n$ is a non zero weight vector with weight $\l$, then for all $j$:
$$
0=(\psi(H_j)-\l(H_j))x=\sum_k(\l_k(H_j)-\l(H_j))x_k
$$
and by 1. it follows that for all $k$: $(\l_k(H_j)-\l(H_j))x_k=0$. Since $x\neq0$, there is some $l$ such that $x_l\neq0$ and thus for all $j$: $\l_l(H_j)=\l(H_j)$, i.e. $\l=\l_l$. If $k\neq l$ then there is some $j$ such that $\l_k(H_j)\neq\l_l(H_j)$ and therefore $x_k=0$.
$\eofproof$
$\proof$
This is obvious if the $\psi(H_j)$ are diagonalizable, but we actually don't need this assumption. We already observed that $\psi(H_1)$ and $\psi(H_2)$ have at least one common complex eigen-vector. Now let $W\sbe E$ be the sum of all weight spaces, then $W\neq\{0\}$ and since $X_j$ and $Y_j$, $j=1,2,3$, are root vectors, the operators $\psi(X_j)$ and $\psi(Y_j)$ map $W$ into itself by lemma. Obviously, $W$ is invariant under $\psi(H_j)$ and thus it's invariant under all operators $\psi(X)$, $X\in\sla(3,\C)$. By irreducibility $W=E$ is the sum of its weight spaces and by lemma the sum is direct.
$\eofproof$
In the picture below the weight (0,0) is higher than any weight in the blue sector!
The weight $(1,0)$ for the standard representation of $\sla(3,\C)$ is higher than the weight $(0,0)$, because $(1,0)-(0,0)=\tfrac23r_1+\tfrac13r_2$. Also, since $(1,0)-(-1,1)=(2,-1)=r_1$ and $(1,0)-(0,-1)=(1,1)=r_1+r_2$, $(1,0)$ is a highest weight for the standard representation. Similarly $(0,1)$ is a highest weight for the dual representation.
A highest weight is of course a maximal element (with respect to $\preceq$) but in general it's not the other way round; even if the set of weights is finite - which is the case if the representation is finite dimensional - a highest weight may not exist! So let us start with a maximal weight $\l$ of a representation $\psi:\sla(3,\C)\rar\Hom(E)$ with corresponding weight vector $x$, then $\psi(X_1)x=\psi(X_2)x=0$, for otherwise $\l$ wouldn't be maximal. Moreover, since the root $(1,1)$ for the root vector $X_3$ can be written as $(1,1)=r_1+r_2$, we must also have $\psi(X_3)x=0$. We say $\psi:\sla(3,\C)\rar\Hom(E)$ is a highest weight cyclic representation if the following holds:
Restricting a highest weight cyclic representation to the sub-algebras generated by $H_1,X_1,Y_1$ and $H_2,X_2,Y_2$ respectively, we get representations of $\sla(2,\C)$ such that $\psi(X_1)=\psi(X_2)=0$ and thus by proposition we infer that both $\l(H_1)$ and $\l(H_2)$ must be non negative integers. Also, if a weight vector for a maximal weight is cyclic, then $\psi$ is a highest weight cyclic representation and both components of the weight are non negative integers. But more is true and this also explains why a highest weight cyclic representation is called highest weight representation:
$\proof$
First we prove that the space $W$ generated by the vectors $\psi(Y_{j_1})\cdots\psi(Y_{j_m})x$ is invariant: by the reordering lemma it suffices to prove that all elements of the form
$$
\psi(Y_{j_1})\cdots\psi(Y_{j_m})
\psi(H_1)^{k_1}\psi(H_2)^{k_2}
\psi(X_1)^{l_1}\psi(X_2)^{l_2}\psi(X_3)^{l_3}x
$$
lie in $W$; but $\psi(X_j)x=0$ and $\psi(H_j)x=\l(H_j)x$ and thus $W$ is $\psi$-invariant by its definition. Since $x$ is cyclic, we conclude: $W=E$. Now suppose $y$ is any vector in $E=W$ of the form $y=\psi(Y_{j_1})\cdots\psi(Y_{j_m})x$; since $Y_1,Y_2,Y_3$ are root vectors with roots $-r_1,-r_2,-r_1-r_2$, $y$ is a weight vector and by lemma its weight is strictly lower than $\l$ unless $y=x$. Therefore $W$ has a basis $x,y_1,\ldots,y_m$ of weight vectors and each weight of $y_j$ is strictly lower than $\l$. By lemma we are done.
$\eofproof$
Now assume moreover that $\psi$ is irreducible, then any vector in $E\sm\{0\}$ is cyclic and thus any non zero weight vector $x$ for a maximal weight is cyclic and satisfies $\psi(X_j)x=0$, i.e. every irreducible representation is a highest weight cyclic representation. The converse also holds:
$\proof$
As $E$ is finite dimensional it decomposes into sub-spaces $E=\bigoplus E_j$, such that $\psi:\sla(3,\C)\rar\Hom(E_j)$ is irreducible. Each of these spaces $E_j$ in turn decomposes by proposition into its weight spaces and by lemma the weight vector $x$ must lie in one of these spaces and consequently in one of the spaces $E_j$. Since $x$ is cyclic and $E_j$ is invariant, we must have $E=E_j$.
$\eofproof$
So far we have established that an irreducible representation is the same as a highest weight cyclic representation; moreover, the weight space of the highest weight of such a representation is one dimensional and the components of the highest weight are non negative integers. Our next goal is to verify the following
$\proof$
Let $\psi:\sla(3,\C)\rar\Hom(E)$ and $\vp:\sla(3,\C)\rar\Hom(F)$ be irreducible representations with the highest weight $\l$ and let $u\in E$, $v\in F$ be weight vectors with this weight. Then $(u,v)$ is a weight vector of the representation $\pi:\sla(3,\C)\rar\Hom(E\times F)$,
$$
\pi(X)(x,y)=(\psi(X)x,\vp(X)y).
$$
Indeed, we have $\pi(H)(u,v)=\l(H)(u,v)$ and for $j=1,2$: $\pi(X_j)(u,v)=(0,0)$. Thus if $W$ denotes the subspace generated by $\{\pi(X)(u,v):\,X\in\sla(3,\C)\}$, then $\pi:\sla(3,\C)\rar\Hom(W)$ is a highest weight cyclic representation; by proposition it's irreducible. Further, the projections $P:W\rar E$, $(x,y)\mapsto x$ and $Q:W\rar F$, $(x,y)\mapsto y$ are both intertwining operators, i.e. $P\pi(X)(x,y)=\psi(X)x=\psi(X)P(x,y)$ and $Q\pi(X)(x,y)=\vp(X)y=\vp(X)Q(x,y)$, and since $P(u,v)=u\neq0$ and $Q(u,v)=v\neq0$, both $P:W\rar E$ and $Q:W\rar F$ must be isomorphisms by Schur's lemma for Lie algebras, proving that both irreducible representations $\psi:\sla(3,\C)\rar\Hom(E)$ and $\vp:\sla(3,\C)\rar\Hom(F)$ are equivalent to the irreducible representation $\pi:\sla(3,\C)\rar\Hom(W)$.
$\eofproof$
Our last step is the construction of irreducible representations
$\proof$
The trivial representation $X\mapsto0$ has highest weight $(0,0)$, the standard representation $X\mapsto X$ has highest weight $(1,0)$ and its dual $X\mapsto\bar X$ has highest weight $(0,1)$. Put $E=F=\C^3$ and let $u\in E$, $v\in F$ be weight vectors for these two representations with the highest weights $(1,0)$ and $(0,1)$, respectively. Let $\pi$ be the following representation
$$
\pi:\sla(3,\C)\rar\Hom(E\otimes\cdots\otimes E\otimes F\otimes\cdots\otimes F)
$$
where we take $m_1$-fold tensor products of $E$ and $m_2$-fold tensor products of $F$. $\pi$ is defined by the sum of $m_1+m_2$ terms:
\begin{eqnarray*}
\pi(X)\colon
&=&X\otimes1\otimes\cdots\otimes1
+\cdots
+1\otimes\cdots\otimes1\otimes X\otimes1\otimes\cdots\otimes1\\
&&+1\otimes\cdots\otimes1\otimes\bar X\otimes1\otimes\cdots\otimes1
+\cdots
+1\otimes\cdots\otimes1\otimes\bar X
\end{eqnarray*}
Finally put $w\colon=u\otimes\cdots\otimes u\otimes v\otimes\cdots\otimes v$. Then it follows that
$$
\pi(H_j)w=m_jw
\quad\mbox{and}\quad
\pi(X_j)w=0~.
$$
Thus $\pi$ restricted to the smallest subspace $W$ containing $w$ and invariant under the operators $\pi(Y_j)$, $j=1,2,3$, is an irreducible representation with highest weight $(m_1,m_2)$. In order to get $W$ we observe that $Y_3=[Y_2,Y_1]$ and thus we only need to apply the operators $\pi(Y_j)$, $j=1,2$, in every order to $w$.
$\eofproof$
We have $[H_1,X_3]=[H_2,X_3]=X_3$ and $[X_1,X_3]=[X_2,X_3]=[X_3,X_3]=0$. Finally, from the table of the adjoint representation we infer that $X_3$ is a cyclic vector. Hence $(1,1)$ is the highest weight with weight vector $X_3$. Alternatively: from the root table (or the table of the adjoint representation) we see that all the weights are given by $r_1,r_2,r_1+r_2,-r_1,-r_2,-r_1-r_2$ and $0$, which has multiplicity $2$; obviously $(1,1)=r_1+r_2$ is the highest weight.
Highest Weight $(2,0)$ - the $\mathbf{6}$-representation
If $m_1=2$ and $m_2=0$, then $\pi(X)=X\otimes1+1\otimes X$ and the highest weight vector is $e_1\otimes e_1$. We have to find the smallest space $W$ containing $w\colon=e_1\otimes e_1$ and invariant under the action of the operators
$$
\pi(Y_1)=Y_1\otimes1+1\otimes Y_1
\quad\mbox{and}\quad
\pi(Y_2)=Y_2\otimes1+1\otimes Y_2
$$
applied in any order.
\begin{eqnarray*}
\pi(Y_1)(e_1\otimes e_1)&=&e_2\otimes e_1+e_1\otimes e_2\\
\pi(Y_2)(e_1\otimes e_1)&=&0\\
\pi(Y_1)(e_2\otimes e_1+e_1\otimes e_2)&=&2e_2\otimes e_2\\
\pi(Y_2)(e_2\otimes e_1+e_1\otimes e_2)&=&e_3\otimes e_1+e_1\otimes e_3\\
\pi(Y_1)(e_2\otimes e_2)&=&0\\
\pi(Y_2)(e_2\otimes e_2)&=&e_3\otimes e_2+e_2\otimes e_3\\
\pi(Y_1)(e_3\otimes e_2+e_2\otimes e_3)&=&0\\
\pi(Y_2)(e_3\otimes e_2+e_2\otimes e_3)&=&2e_3\otimes e_3\\
\pi(Y_1)(e_3\otimes e_3)&=&0\\
\pi(Y_2)(e_3\otimes e_3)&=&0
\end{eqnarray*}
Thus the irreducible representation with highest weight $(2,0)$ is of dimension $6$, hence it's called the $\mathbf{6}$-representation, with basis:
$$
e_1\otimes e_1,
e_2\otimes e_2,
e_3\otimes e_3,
e_1\otimes e_2+e_2\otimes e_1,
e_2\otimes e_3+e_3\otimes e_2,
e_3\otimes e_1+e_1\otimes e_3.
$$
In particle physics the vectors $e_1\otimes e_1,\ldots,e_3\otimes e_3$ are designated as $uu,\ldots,ss$, labeling particles composed of two quarks. Hence we have the following orthonormal basis in "quark"-notation:
$$
uu,dd,ss,
\frac{ud+du}{\sqrt2},
\frac{ds+sd}{\sqrt2},
\frac{us+su}{\sqrt2}~.
$$
Since $u,d,s$ have weights $(1,0),(-1,1),(0,-1)$, these vectors have the weights:
$$
(1,0)+(1,0)=(2,0),(-2,2),(0,-2),(0,1),(-1,0),(1,0)+(0,-1)=(1,-1)~.
$$
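The whole lowering computation can be replayed in a few lines. The following sketch (Python/NumPy, an illustration added here, not the author's code) builds $\pi(X)=X\otimes1+1\otimes X$ and confirms that the orbit of $e_1\otimes e_1$ under $\pi(Y_1),\pi(Y_2)$ spans a $6$-dimensional space:

```python
import numpy as np
from itertools import product

def E(j, k):  # matrix unit E^{jk} (1-based indices)
    M = np.zeros((3, 3)); M[j-1, k-1] = 1.0; return M

Y1, Y2 = E(2, 1), E(3, 2)
I3 = np.eye(3)
pi = lambda X: np.kron(X, I3) + np.kron(I3, X)  # pi(X) = X(x)1 + 1(x)X on C^9
w = np.kron(I3[0], I3[0])                       # the highest weight vector e1(x)e1

# apply all words in pi(Y_1), pi(Y_2) of length <= 4 and collect the images
vectors = [w]
for word in product([pi(Y1), pi(Y2)], repeat=4):
    v = w
    for op in word:
        v = op @ v
        if np.any(v):
            vectors.append(v)
print(np.linalg.matrix_rank(np.array(vectors)))  # 6
```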
The following diagram depicts the $\mathbf{6}$-representation $\pi$: the mappings $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$ map each basis vector, represented by a circle enclosing its weight, to a multiple of another basis vector. This multiple is attached to the correspondingly colored arrow, i.e. the blue, cyan, red and green arrows indicate the actions of $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$, respectively. If e.g. there is no outgoing blue arrow from one of the circles, then the corresponding basis vector is mapped by $\pi(Y_1)$ to the null vector!
Highest Weight $(1,1)$ - the $\mathbf{8}$- or adjoint representation
This time we have $\pi(X)=X\otimes1+1\otimes\bar X$, where $\bar X\colon=-X^t$. Hence the matrices $\bar H_j$, $\bar X_j$, $\bar Y_j$ are given by:
$$
\bar H_1\colon=\left(\begin{array}{ccc}
-1&0&0\\
0&1&0\\
0&0&0
\end{array}\right),
\bar X_1\colon=\left(\begin{array}{ccc}
0&0&0\\
-1&0&0\\
0&0&0
\end{array}\right),
\bar Y_1\colon=\left(\begin{array}{ccc}
0&-1&0\\
0&0&0\\
0&0&0
\end{array}\right)
$$
$$
\bar H_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&-1&0\\
0&0&1
\end{array}\right),
\bar X_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&0\\
0&-1&0
\end{array}\right),
\bar Y_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&-1\\
0&0&0
\end{array}\right)
$$
$$
\bar X_3\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&0\\
-1&0&0
\end{array}\right),
\bar Y_3\colon=\left(\begin{array}{ccc}
0&0&-1\\
0&0&0\\
0&0&0
\end{array}\right)
$$
Again, let $e_1,e_2,e_3$ be the standard basis of $\C^3$, then $e_1$ is the weight vector for the standard representation with the highest weight $(1,0)$ and $\bar e_3$ is the weight vector for the dual representation with the highest weight $(0,1)$. We are left to identify the smallest space $W$ containing $w\colon=e_1\otimes \bar e_3$ and invariant under
$$
\pi(Y_1)=Y_1\otimes1+1\otimes\bar Y_1
\quad\mbox{and}\quad
\pi(Y_2)=Y_2\otimes1+1\otimes\bar Y_2~.
$$
We start out with the vector $e_1\otimes \bar e_3$:
\begin{eqnarray*}
Y_1e_1\otimes \bar e_3+e_1\otimes\bar Y_1\bar e_3&=&e_2\otimes \bar e_3\\
Y_2e_1\otimes \bar e_3+e_1\otimes\bar Y_2\bar e_3&=&-e_1\otimes \bar e_2
\end{eqnarray*}
with weights $(-1,2)$ and $(2,-1)$, respectively.
Thus we have to apply the operators to $e_2\otimes \bar e_3$ and $e_1\otimes \bar e_2$:
\begin{eqnarray*}
Y_1e_2\otimes \bar e_3+e_2\otimes\bar Y_1\bar e_3&=&0\\
Y_2e_2\otimes \bar e_3+e_2\otimes\bar Y_2\bar e_3&=&e_3\otimes \bar e_3-e_2\otimes \bar e_2\\
Y_1e_1\otimes \bar e_2+e_1\otimes\bar Y_1\bar e_2&=&e_2\otimes \bar e_2-e_1\otimes \bar e_1\\
Y_2e_1\otimes \bar e_2+e_1\otimes\bar Y_2\bar e_2&=&0
\end{eqnarray*}
This gives another two vectors: $e_3\otimes \bar e_3-e_2\otimes \bar e_2$ and $e_2\otimes \bar e_2-e_1\otimes \bar e_1$:
\begin{eqnarray*}
\pi(Y_1)(e_3\otimes \bar e_3-e_2\otimes \bar e_2)&=&e_2\otimes \bar e_1\\
\pi(Y_2)(e_3\otimes \bar e_3-e_2\otimes \bar e_2)&=&-2e_3\otimes \bar e_2\\
\pi(Y_1)(e_2\otimes \bar e_2-e_1\otimes \bar e_1)&=&-2e_2\otimes \bar e_1\\
\pi(Y_2)(e_2\otimes \bar e_2-e_1\otimes \bar e_1)&=&e_3\otimes \bar e_2
\end{eqnarray*}
which gives the two vectors: $e_2\otimes \bar e_1$ and $e_3\otimes \bar e_2$:
\begin{eqnarray*}
\pi(Y_1)(e_2\otimes \bar e_1)&=&0\\
\pi(Y_2)(e_2\otimes \bar e_1)&=&e_3\otimes \bar e_1\\
\pi(Y_1)(e_3\otimes \bar e_2)&=&-e_3\otimes \bar e_1\\
\pi(Y_2)(e_3\otimes \bar e_2)&=&0
\end{eqnarray*}
Finally:
\begin{eqnarray*}
\pi(Y_1)(e_3\otimes \bar e_1)&=&0\\
\pi(Y_2)(e_3\otimes \bar e_1)&=&0
\end{eqnarray*}
The space $W$ is generated by these eight vectors and therefore it's called the $\mathbf{8}$-representation:
$$
e_1\otimes \bar e_2,
e_1\otimes \bar e_3,
e_2\otimes \bar e_1,
e_2\otimes \bar e_3,
e_3\otimes \bar e_1,
e_3\otimes \bar e_2,
e_3\otimes \bar e_3-e_2\otimes \bar e_2,
e_2\otimes \bar e_2-e_1\otimes \bar e_1~.
$$
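As before, the computation can be cross-checked numerically. The following sketch (Python/NumPy, added here, not part of the text) builds $\pi(X)=X\otimes1+1\otimes\bar X$ with $\bar X=-X^t$, lowers from $e_1\otimes\bar e_3$ in every order, and confirms that the span is $8$-dimensional:

```python
import numpy as np

def E(j, k):  # matrix unit E^{jk} (1-based indices)
    M = np.zeros((3, 3)); M[j-1, k-1] = 1.0; return M

Y1, Y2 = E(2, 1), E(3, 2)
I3 = np.eye(3)
pi = lambda X: np.kron(X, I3) + np.kron(I3, -X.T)  # X(x)1 + 1(x)Xbar
w = np.kron(I3[0], I3[2])                          # e1 (x) ebar3

# breadth-first application of pi(Y_1), pi(Y_2); six levels exhaust the orbit
vectors, frontier = [w], [w]
for _ in range(6):
    frontier = [op @ v for v in frontier for op in (pi(Y1), pi(Y2))]
    frontier = [v for v in frontier if np.any(v)]
    vectors += frontier
print(np.linalg.matrix_rank(np.array(vectors)))  # 8
```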
Finally we compute the weights of these vectors: the weights of $e_1,e_2,e_3$ are $(1,0),(-1,1),(0,-1)$ and the weights of $\bar e_1,\bar e_2,\bar e_3$ are $(-1,0),(1,-1),(0,1)$. Since the weight of $e_j\otimes\bar e_k$ is the sum of the weights of $e_j$ and $\bar e_k$, we get the weights:
\begin{eqnarray*}
&&(1,0)+(1,-1)=(2,-1),\quad
(1,0)+(0,1)=(1,1),\quad
(-1,1)+(-1,0)=(-2,1),\\
&&(-1,1)+(0,1)=(-1,2),\quad
(0,-1)+(-1,0)=(-1,-1),\quad
(0,-1)+(1,-1)=(1,-2),
\end{eqnarray*}
and the last two vectors have the weight $(0,0)$. Here comes a diagrammatic visualization: $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$ are again depicted as blue, cyan, red and green arrows. But this time the weight space with weight $(0,0)$ is two dimensional, so it has two basis vectors; the numbers attached to an arrow leaving this space denote the coefficients of the weight vector it is mapped to, and an arrow pointing to the weight space with weight $(0,0)$ carries an additional number in parentheses, indicating whether the image is the first or the second basis vector of this weight space.
In particle physics the vectors $e_1\otimes\bar e_1,\ldots,e_3\otimes\bar e_3$ are designated as $u\bar u,\ldots,s\bar s$, which again are particles composed of a quark and an anti-quark and are called mesons. This notation saves you a lot of writing!
We have to compute $\pi(Q)u\bar d=(Qu)\bar d+u(\bar Q\bar d)$, which comes down to $(2/3+1/3)u\bar d=u\bar d$; so the charge of $u\bar d$ is $1$.
For this orthogonal basis we get the following table of mesons:
$$
\begin{array}{r|ccc|r}
\mbox{quark content}&\mbox{isospin}&\mbox{hypercharge}&\mbox{charge}&\mbox{particle}\\
\hline
u\bar d&1&0&1&\pi^+\mbox{-meson}\\
u\bar s&1/2&1&1&K^+\mbox{-meson}\\
d\bar u&-1&0&-1&\pi^-\mbox{-meson}\\
d\bar s&-1/2&1&0&K^0\mbox{-meson}\\
s\bar u&-1/2&-1&-1&K^-\mbox{-meson}\\
s\bar d&1/2&-1&0&\bar K^0\mbox{-meson}\\
s\bar s-d\bar d&0&0&0&\eta\mbox{-meson}\\
d\bar d-2u\bar u+s\bar s&0&0&0&\pi^0\mbox{-meson}\\
\end{array}
$$
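The quantum numbers in the table follow from the additivity of isospin, hypercharge and charge; an antiquark carries the negated values of its quark. A small check (pure Python, added here, assuming the standard assignments $u:(1/2,1/3,2/3)$, $d:(-1/2,1/3,-1/3)$, $s:(0,-2/3,-1/3)$):

```python
from fractions import Fraction as F

# (isospin I_3, hypercharge Y, charge Q) per quark
qn = {"u": (F(1, 2), F(1, 3), F(2, 3)),
      "d": (F(-1, 2), F(1, 3), F(-1, 3)),
      "s": (F(0), F(-2, 3), F(-1, 3))}

def meson(q, aq):  # quark q together with antiquark aq-bar
    return tuple(a - b for a, b in zip(qn[q], qn[aq]))

assert meson("u", "d") == (1, 0, 1)            # pi^+
assert meson("u", "s") == (F(1, 2), 1, 1)      # K^+
assert meson("d", "u") == (-1, 0, -1)          # pi^-
assert meson("s", "u") == (F(-1, 2), -1, -1)   # K^-
print("meson table verified")
```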
Highest Weight $(3,0)$ - the $\mathbf{10}$-representation
This time we use the "quark" notation; it makes the calculations much clearer: the highest weight vector is $uuu$; $Y_1$ sends $u$ to $d$ and $d,s$ to $0$, and $Y_2$ sends $d$ to $s$ and $u,s$ to $0$. Applying
$$
\pi(Y_j)=Y_j\otimes1\otimes1+1\otimes Y_j\otimes1+1\otimes1\otimes Y_j
$$
for $j=1$ to $uuu$ repeatedly gives:
\begin{eqnarray*}
uuu&\to&duu+udu+uud,\\
duu+udu+uud&\to&2(ddu+dud+udd),\\
ddu+dud+udd&\to&3ddd,\\
ddd&\to&0
\end{eqnarray*}
with weights $(3,0),(1,1),(-1,2),(-3,3)$. Applying $\pi(Y_2)$ recursively gives:
\begin{eqnarray*}
uuu&\to&0\\
duu+udu+uud&\to&suu+usu+uus,\\
ddu+dud+udd&\to&sdu+dsu+sud+dus+usd+uds,\\
ddd&\to&sdd+dsd+dds,\\
suu+usu+uus&\to&0,\\
sdu+dsu+sud+dus+usd+uds&\to&2(ssu+sus+uss),\\
sdd+dsd+dds&\to&2(ssd+sds+dss),\\
ssu+sus+uss&\to&0,\\
ssd+sds+dss&\to&3sss,\\
sss&\to&0.
\end{eqnarray*}
Finally apply $\pi(Y_1)$ to the newly obtained vectors:
\begin{eqnarray*}
suu+usu+uus&\to&sdu+sud+dsu+usd+dus+uds,\\
sdu+dsu+sud+dus+usd+uds&\to&2(sdd+dsd+dds),\\
sdd+dsd+dds&\to&0,\\
ssu+sus+uss&\to&ssd+sds+dss,\\
ssd+sds+dss&\to&0,\\
sss&\to&0.
\end{eqnarray*}
Since we have not produced any new vector, the irreducible representation lives in the space generated by $10$ weight vectors.
$$
\begin{array}{ccccccc}
\mbox{weight}&\mbox{vector}&\mbox{quark content}&\mbox{isospin}&\mbox{hypercharge}&\mbox{charge}&\mbox{particle}\\
\hline
(3,0)&uuu&uuu&3/2&1&2&\D^{++}\\
(1,1)&duu+udu+uud&uud&1/2&1&1&\D^+\\
(-1,2)&ddu+dud+udd&udd&-1/2&1&0&\D^0\\
(-3,3)&ddd&ddd&-3/2&1&-1&\D^-\\
(2,-1)&suu+usu+uus&uus&1&0&1&\Sigma^{*+}\\
(0,0)&sdu+dsu+sud+dus+usd+uds&uds&0&0&0&\Sigma^{*0}\\
(1,-2)&ssu+sus+uss&uss&1/2&-1&0&\Xi^{*0}\\
(-2,1)&sdd+dsd+dds&dds&-1&0&-1&\Sigma^{*-}\\
(-1,-1)&ssd+sds+dss&dss&-1/2&-1&-1&\Xi^{*-}\\
(0,-3)&sss&sss&0&-2&-1&\Omega^-
\end{array}
$$
Particles made of three quarks are called baryons.
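The same bookkeeping can be automated. The sketch below (pure Python, an illustration added here) implements $\pi(Y_1)$ and $\pi(Y_2)$ as derivations on three-letter quark words, $Y_1:u\to d$ and $Y_2:d\to s$ acting on one factor at a time, and counts the distinct weight vectors reached from $uuu$:

```python
from fractions import Fraction as F
from itertools import product

def lower(state, src, dst):  # derivation: replace src by dst in one factor
    out = {}
    for word, c in state.items():
        for i, q in enumerate(word):
            if q == src:
                new = word[:i] + dst + word[i+1:]
                out[new] = out.get(new, 0) + c
    return {w: c for w, c in out.items() if c}

def normal(state):  # scale away the overall factor picked up along a path
    c0 = state[min(state)]
    return frozenset((w, F(c, c0)) for w, c in state.items())

span = {normal({"uuu": 1})}
for path in product("12", repeat=6):  # all orders of up to six lowerings
    state = {"uuu": 1}
    for step in path:
        state = lower(state, "u", "d") if step == "1" else lower(state, "d", "s")
        if not state:
            break
        span.add(normal(state))
print(len(span))  # the 10 weight vectors of the 10-representation
```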
Let ${\cal P}_n$ be the homogeneous polynomials on $\C^3$ of degree $n\in\N_0$ and $\G:\SU(3)\rar\Hom({\cal P}_n)$ the representation $\G(U)f\colon=f\circ U^{-1}$. Let us first compute $\g(X)\colon=D\G(1)X$: For any holomorphic function $f:\C^3\rar\C$ we have (cf. section)
$$
\g(X)f(z)=\ftdl t0f(e^{-tX}z)=-(\pa_1f(z),\pa_2f(z),\pa_3f(z))Xz
$$
and thus
$$
\begin{array}{ccc}
\g(X_1)f(z)=-\pa_1f(z)z_2,&\g(X_2)f(z)=-\pa_2f(z)z_3,&\g(X_3)f(z)=-\pa_1f(z)z_3,\\
\g(Y_1)f(z)=-\pa_2f(z)z_1,&\g(Y_2)f(z)=-\pa_3f(z)z_2,&\g(Y_3)f(z)=-\pa_3f(z)z_1~.
\end{array}
$$
Next we will verify that any non trivial invariant subspace contains the polynomial $z_3^n$: $\g(X_2)$ and $\g(X_3)$ act on a polynomial by decreasing the degree in $z_2$ and $z_1$, respectively, by one and increasing the degree of $z_3$ by one. $z_1^{n_1}z_2^{n_2}z_3^{n_3}$ is mapped by $\g(X_3)^{n_1}$ to a non zero multiple of $z_2^{n_2}z_3^{n_1+n_3}$ and this in turn is mapped by $\g(X_2)^{n_2}$ to a non zero multiple of $z_3^n$; all monomials $z_1^{l_1}z_2^{l_2}z_3^{l_3}$ with $l_1 < n_1$, or with $l_1=n_1$ and $l_2 < n_2$, are mapped to $0$ by $\g(X_2)^{n_2}\g(X_3)^{n_1}$. Now if $f$ is a non zero polynomial in an invariant subspace, it is a linear combination of monomials; pick the monomial $z_1^{n_1}z_2^{n_2}z_3^{n_3}$ occurring in $f$ for which $n_1$ is maximal and, among those, $n_2$ is maximal. Then $\g(X_2)^{n_2}\g(X_3)^{n_1}f$ is just a non zero multiple of $z_3^n$. The space generated by the polynomials $\g(Y_2)^{l_2}\g(Y_3)^{l_1}z_3^n$ is all of ${\cal P}_n$, i.e. $\g$ and thus $\G$ is irreducible. Eventually $\g(X_1)z_3^n=\g(X_2)z_3^n=0$, i.e. $z_3^n$ is a highest weight cyclic vector with weight $(0,n)$, because $\g(H_1)z_3^n=0$ and $\g(H_2)z_3^n=-(z_2\pa_2z_3^n-z_3\pa_3z_3^n)=nz_3^n$.
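These differential operators act on monomials by simple exponent shifts. The following sketch (pure Python, an illustration added here) encodes the action of the matrix unit $E^{ab}$ as $f\mapsto-z_b\pa_a f$ on polynomials stored as exponent-to-coefficient dictionaries, and confirms that $z_3^n$ is a highest weight vector of weight $(0,n)$:

```python
def op(a, b, f):  # -z_b d/dz_a on f = {exponent triple: coefficient}, 0-based a, b
    out = {}
    for expo, c in f.items():
        if expo[a] > 0:
            n = list(expo)
            coef = -c * n[a]
            n[a] -= 1; n[b] += 1
            out[tuple(n)] = out.get(tuple(n), 0) + coef
    return {k: c for k, c in out.items() if c}

def add(f, g, s=1):  # f + s*g
    out = dict(f)
    for k, c in g.items():
        out[k] = out.get(k, 0) + s * c
    return {k: c for k, c in out.items() if c}

X1 = lambda f: op(0, 1, f)                        # gamma(X_1) = -z_2 d/dz_1
X2 = lambda f: op(1, 2, f)                        # gamma(X_2) = -z_3 d/dz_2
H1 = lambda f: add(op(0, 0, f), op(1, 1, f), -1)  # -(z_1 d_1 - z_2 d_2)
H2 = lambda f: add(op(1, 1, f), op(2, 2, f), -1)  # -(z_2 d_2 - z_3 d_3)

n = 5
z3n = {(0, 0, n): 1}                              # the polynomial z_3^n
assert X1(z3n) == {} and X2(z3n) == {}            # killed by the raising operators
assert H1(z3n) == {} and H2(z3n) == {(0, 0, n): n}  # weight (0, n)
print("z_3^n is a highest weight vector of weight (0, n)")
```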
A weight $\l$ was defined as a pair of complex numbers: the eigen-values of $\psi(H_1)$ and $\psi(H_2)$ for some common eigen-vector $x$, but in fact for every $H\in{\cal H}$ we have $H=a_1H_1+a_2H_2$ and $\l(H)\colon=a_1\l(H_1)+a_2\l(H_2)$ is the eigen-value of $H$ with eigen-vector $x$. Thus we may conveniently think of a weight as a linear functional $\l$ on the space ${\cal H}$. Moreover, endowing ${\cal H}$ with the euclidean product:
$$
\la H,G\ra\colon=\tr(HG^*)
$$
we know that any linear functional $\l$ can be written as $\l(H)=\la H,G\ra$ for a unique $G\in{\cal H}$. Thus we've got our new definition: a weight of a representation $\psi$ of $\sla(3,\C)$ in some finite dimensional vector-space $E$ is a vector $\l\in{\cal H}$ for which there exists some $x\in E\sm\{0\}$ such that
$$
\forall H\in {\cal H}:\quad
\psi(H)x=\la H,\l\ra x
$$
Hence weights are particular vectors in the Cartan algebra ${\cal H}$ of $\sla(3,\C)$!
1. If $\l_1=diag\{x,y,z\}$, then $x+y+z=0$, $x-y=1$ and $y-z=0$, i.e. $y=z=-1/3$ and $x=2/3$; hence $\l_1$ is the electric charge operator $Q$. Analogously we get $\l_2=diag\{1/3,1/3,-2/3\}$, which is the hypercharge operator $Y$. Moreover: $\la H_1+H_2,\l_1-\l_2\ra=1-0+0-1=0$
3. By definition of the hypercharge and the charge we have:
$$
\l_1=Q\colon=\tfrac23H_1+\tfrac13H_2\perp H_2,\quad
\l_2=Y\colon=\tfrac13H_1+\tfrac23H_2\perp H_1,\quad
\l_3=\tfrac13H_1-\tfrac13H_2\perp H_3
$$
$\norm{H_j}=\sqrt2$ and $\norm{\l_j}=\sqrt{2/3}$.
$$
H_1=2\l_1-\l_2=2Q-Y,\quad
H_2=2\l_2-\l_1=2Y-Q,\quad
H_3=-\l_1-\l_2=-Q-Y~.
$$
Suppose we are given two weights $\l$ and $\mu$ in ${\cal H}$. What does it mean in our new notation that $\l$ is higher than $\mu$? In our old notation it means that $\l-\mu=a_1(2,-1)+a_2(-1,2)$ for some $a_j\geq0$, which now says that $\l-\mu\in{\cal H}$ can be written as
$$
\l-\mu=a_1H_1+a_2H_2
$$
for some real non-negative numbers $a_1,a_2$. Since $\l_1,\l_2\in{\cal H}$ form the dual basis to $H_1,H_2$, this is equivalent to
\begin{equation}\label{weyeq1}\tag{WEY1}
\la\l-\mu,\l_1\ra\geq0
\quad\mbox{and}\quad
\la\l-\mu,\l_2\ra\geq0~.
\end{equation}
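This criterion is easy to test in the old pair notation: solve $\l-\mu=a_1r_1+a_2r_2$ with $r_1=(2,-1)$, $r_2=(-1,2)$ and check $a_1,a_2\geq0$. A sketch (pure Python, added here, not part of the text):

```python
from fractions import Fraction as F

def higher(lam, mu):
    """Is lam higher than mu?  Solve (d1, d2) = a1*(2,-1) + a2*(-1,2)."""
    d1, d2 = lam[0] - mu[0], lam[1] - mu[1]
    a1 = F(2 * d1 + d2, 3)
    a2 = F(d1 + 2 * d2, 3)
    return a1 >= 0 and a2 >= 0

assert higher((1, 0), (0, 0))   # (1,0)-(0,0) = (2/3) r1 + (1/3) r2
assert higher((1, 0), (-1, 1))  # difference is r1
assert higher((1, 0), (0, -1))  # difference is r1 + r2
assert not higher((0, 0), (1, 0))
print("ordering checks passed")
```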
$N({\cal H})$ acting on ${\cal H}$ via $\Ad$
Since $\SU(3)\sbe\UU(3)$ we infer from example that $\Ad:\SU(3)\rar\Gl(\su(3))$ is unitary. Hence for all $g\in N({\cal H})$ the mapping $\Ad(g)$ acts isometrically on $({\cal H},\la.,.\ra)$. Also, roots are weights of the adjoint representation; hence, in our new interpretation, a root is an element $r\in{\cal H}$ such that for some root vector $X\in\sla(3,\C)$:
$$
\forall H\in{\cal H}:\quad
\ad(H)X=\la H,r\ra X~.
$$
The root of $X_1$ is $(2,-1)$, which is in our new interpretation: $2\l_1-\l_2=H_1$; similarly the root of $X_2$ is $(-1,2)$, which now becomes $-\l_1+2\l_2=H_2$. The root of $X_3$ is $(1,1)$, which now becomes $H_1+H_2=-H_3$. Finally the roots of the root vectors $Y_j$ are just the negatives of the roots of $X_j$.
$\proof$
First we show, that for all $g\in N({\cal H})$ the vector $\Psi(g)x$ is a weight vector with weight $\Ad(g)\l$: for all $H\in{\cal H}$ we have $g^{-1}Hg\in{\cal H}$ and $\psi(g^{-1}Hg)=\Psi(g)^{-1}\psi(H)\Psi(g)$ (cf. example) and thus
\begin{eqnarray*}
\psi(H)\Psi(g)x
&=&\Psi(g)\Psi(g)^{-1}\psi(H)\Psi(g)x\\
&=&\Psi(g)\psi(g^{-1}Hg)x
=\Psi(g)(\la g^{-1}Hg,\l\ra x)\\
&=&\la g^{-1}Hg,\l\ra \Psi(g)x
=\la\Ad(g^{-1})H,\l\ra \Psi(g)x
=\la H,\Ad(g)\l\ra \Psi(g)x,
\end{eqnarray*}
where the last equality follows from the fact that $\Ad$ acts isometrically on $({\cal H},\la.,.\ra)$. Therefore $\Psi(g)$ maps the weight space with weight $\l$ into the weight space with weight $\Ad(g)\l$ and since $\Ad$ and $\Psi$ are representations, $\Psi(g^{-1})$ is the inverse of this map, i.e. the dimensions of the weight spaces with weights $\l$ and $\Ad(g)\l$ coincide.
$\eofproof$
The permutations $w\in\{(213),(132),(321),(231),(312)\}$ act on the pair of vectors $(H_1,H_2)$, yielding pairs $(w\cdot H_1,w\cdot H_2)$, which are given by
$$
(-H_1,-H_3),
(-H_3,-H_2),
(-H_2,-H_1),
(H_3,H_1),
(H_2,H_3)~.
$$
In other words: if $H_1$ is a weight of the representation $\psi$ of multiplicity $m_1$, then so are $-H_1,-H_3,-H_2,H_3,H_2$ and if $H_2$ is a weight of the representation $\psi$ of multiplicity $m_2$, then so are $-H_3,-H_2,-H_1,H_1,H_3$.
We are now going to describe the action of the Weyl group on the two dimensional space ${\cal H}$ geometrically: The action of the first permutation $(213)$ is a reflection about the line $\R\l_1=[\la.,H_2\ra=0]$; the action of the second $(132)$ is a reflection about the line $\R\l_2=[\la.,H_1\ra=0]$ and the action of the third $(321)$ is a reflection about the line $\R(\l_1-\l_2)=[\la.,H_3\ra=0]$. The action of the fourth $(231)$ is a rotation by $4\pi/3$ and the action of the fifth $(312)$ is a rotation by $2\pi/3$. Hence the Weyl group is isomorphic to the symmetry group $C_{3v}$ of the regular triangle formed by $\l_3\colon=\l_1-\l_2,\l_2,-\l_1$, and the reflections about the three lines orthogonal to $H_j$, $j=1,2,3$, generate this group of symmetries. That's the way we look at the Weyl group: a subgroup of the group of isometries of the two dimensional space ${\cal H}$. The picture below illustrates the lattice points $\Z\l_1+\Z\l_2$ (which contains all possible weights), the regular triangle (of which the Weyl group is the symmetry group) and the positive cone
$$
P\colon=\R_0^+\l_1+\R_0^+\l_2
=[\la.,H_1\ra\geq0]\cap[\la.,H_2\ra\geq0]
$$
of non negative weights bounded by half-lines through $\l_1$ and $\l_2$. In addition, the cone indicates all weights lower than $-H_3$.
Some conclusions can be drawn from this interpretation immediately:
If $\l\neq0$ is any weight, then the set $\{w\cdot\l:w\in W(\su(3))\}$ contains either three or six points, depending on whether $\l$ lies on one of the lines generated by $\l_1,\l_2$ or $\l_1-\l_2$ or not.
The vectors $\pm H_j$, $j=1,2,3$, form a regular hexagon and for every lattice point in $\l\in\Z\l_1+\Z\l_2$ there is some $w$ in the Weyl group $W(\su(3))$ such that $w\cdot\l$ lies in the positive cone $P$ generated by $\l_1$ and $\l_2$, i.e. both $\la w\cdot\l,H_1\ra$ and $\la w\cdot\l,H_2\ra$ are non negative.
The $\mathbf{3}$-representation has the highest weight $(1,0)$, i.e. $\l_1$; hence it must also have the weights $-\l_1+\l_2$ and $-\l_2$, which in old notation are $(-1,1)$ and $(0,-1)$.
The $\mathbf{8}$-representation is the highest weight $(1,1)$ cyclic representation, i.e. the highest weight in ${\cal H}$ is $\l_1+\l_2$ and its multiplicity is $1$. Applying the Weyl group yields the new weights
$$
-\l_1+2\l_2,\quad
-\l_1-\l_2,\quad
2\l_1-\l_2,\quad
-2\l_1+\l_2,\quad
\l_1-2\l_2,
$$
all of which have multiplicity $1$.
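These weight computations can be verified mechanically. The following sketch (Python with numpy, not part of the text) recomputes the six nonzero weights of the adjoint representation in the coordinates $(\la\cdot,H_1\ra,\la\cdot,H_2\ra)$, reproducing exactly $\l_1+\l_2=(1,1)$, $2\l_1-\l_2=(2,-1)$, $-\l_1+2\l_2=(-1,2)$ and their negatives.

```python
import numpy as np

def E(i, j):
    """Matrix unit E^{ij} (1-based indices) in gl(3)."""
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

H1, H2 = E(1, 1) - E(2, 2), E(2, 2) - E(3, 3)
X = {1: E(1, 2), 2: E(2, 3), 3: E(1, 3)}
Y = {1: E(2, 1), 2: E(3, 2), 3: E(3, 1)}

def bracket(a, b):
    return a @ b - b @ a

def weight(Z):
    """Coordinates (m1, m2) of the adjoint weight of Z, i.e. [H_j, Z] = m_j Z."""
    m = []
    for Hj in (H1, H2):
        B = bracket(Hj, Z)
        idx = np.unravel_index(np.argmax(np.abs(Z)), Z.shape)  # a nonzero entry of Z
        m.append(B[idx] / Z[idx])
    return tuple(int(round(v)) for v in m)

roots = sorted(weight(Z) for Z in [X[1], X[2], X[3], Y[1], Y[2], Y[3]])
print(roots)   # [(-2, 1), (-1, -1), (-1, 2), (1, -2), (1, 1), (2, -1)]
```

Together with the two zero weights of $H_1,H_2$ this reproduces the full weight diagram of the $\mathbf{8}$.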
Assume a representation $\psi=D\Psi(1)$ has the weights $(0,3)$ and $(1,1)$. What additional weights must $\psi$ have? In our new approach these weights are given by $3\l_2$ and $\l_1+\l_2=H_3$, thus $\psi$ must have the additional weights $3(\l_1-\l_2)$, $-3\l_1$ and $-H_3,\pm H_1,\pm H_2$.
The reflections about $\R\l_1$ and $\R\l_2$ are given by $w_1\cdot H=H-\la H,H_2\ra H_2$ and $w_2\cdot H=H-\la H,H_1\ra H_1$, respectively; in particular:
\begin{eqnarray*}
w_1\cdot(n_1H_1+n_2H_2)
&=&n_1H_1+n_2H_2-(-n_1+2n_2)H_2
=n_1H_1+(n_1-n_2)H_2
=n_1H_3-n_2H_2\\
w_2\cdot(n_1H_1+n_2H_2)
&=&n_1H_1+n_2H_2-(2n_1-n_2)H_1
=(n_2-n_1)H_1+n_2H_2
=n_2H_3-n_1H_1~.
\end{eqnarray*}
Similarly we get for the reflection $w_3$ about $\R\l_3$:
$$
w_3\cdot(n_1H_1+n_2H_2)
=n_1H_1+n_2H_2-\la n_1H_1+n_2H_2,H_3\ra H_3
=n_1H_1+n_2H_2-(n_1+n_2)(H_1+H_2)
=-n_2H_1-n_1H_2~.
$$
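The three reflection formulas just derived can be confirmed numerically. The sketch below (Python with numpy, an addition to the text, assuming the normalization $\la H_j,H_j\ra=2$ used in the formulas above) checks them on a sample coordinate vector.

```python
import numpy as np

# Coordinates n = (n1, n2) represent the vector n1*H1 + n2*H2; the Gram
# matrix of the inner product <H_i,H_j> is G.
G = np.array([[2, -1], [-1, 2]])
H = {1: np.array([1, 0]), 2: np.array([0, 1]), 3: np.array([1, 1])}  # H3 = H1+H2

def s(j, n):
    """Reflection in the line orthogonal to H_j: v -> v - <v,H_j> H_j
    (valid because <H_j,H_j> = 2 for j = 1, 2, 3)."""
    return n - (H[j] @ G @ n) * H[j]

n = np.array([5, 3])   # n1 = 5, n2 = 3
# w1 (reflection about R*lambda_1) reflects with H2, w2 with H1, w3 with H3:
print(s(2, n))         # w1: (n1, n1-n2)  -> [5 2]
print(s(1, n))         # w2: (n2-n1, n2)  -> [-2 3]
print(s(3, n))         # w3: (-n2, -n1)   -> [-3 -5]
```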
Denote by $\mu$ the lowest weight; as the representation is irreducible, it must be of the form $\mu=\l-n_1H_1-n_2H_2$. Assume $w_3\cdot\l=\l-\la\l,H_3\ra H_3$ is strictly greater than $\mu$, i.e. $n_1H_1+n_2H_2>\la\l,H_3\ra H_3$. Hence we have:
\begin{eqnarray*}
w_3\cdot\mu&=&\mu-\la\mu,H_3\ra H_3
=\l-n_1H_1-n_2H_2-\la\l-n_1H_1-n_2H_2,H_3\ra H_3\\
&=&\l-n_1H_1-n_2H_2-\la\l,H_3\ra H_3+\la n_1H_1+n_2H_2,H_3\ra H_3\\
&>&\l-2(n_1H_1+n_2H_2)+(n_1+n_2)(H_1+H_2)
=\l+(n_2-n_1)H_1+(n_1-n_2)H_2~.
\end{eqnarray*}
On the other hand $w_3\cdot\mu$ must be smaller than $\l$.
2. The weights of the dual representation are the negatives of the weights of the original representation!
Starting with the highest weight of an irreducible representation, can we determine all its weights? The following examples give a hint about finding the dimensions of the weight spaces.
$X_j-Y_j,iX_j+iY_j\in\su(3)$ and thus
\begin{eqnarray*}
\psi(X_j)^*-\psi(Y_j)^*&=&-\psi(X_j)+\psi(Y_j)\quad\mbox{and}\\
-i\psi(X_j)^*-i\psi(Y_j)^*&=&-i\psi(X_j)-i\psi(Y_j)
\end{eqnarray*}
i.e. $\psi(X_j)^*=\psi(Y_j)$. Moreover, if $x$ is a normed weight vector with $\psi(H_j)x=m_jx$ and $\psi(X_j)x=0$, then:
$$
\norm{\psi(Y_j)x}^2
=\la x,\psi(X_j)\psi(Y_j)x\ra
=\la x,\psi(H_j)x+\psi(Y_j)\psi(X_j)x\ra
=m_j~.
$$
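As a quick illustration (an addition to the text) one can take the defining representation of $\sla(3,\C)$ on $\C^3$: for the highest weight vector $x=e_1$ one has $m_1=1$, $m_2=0$, and indeed $\norm{\psi(Y_j)x}^2=m_j$.

```python
import numpy as np

# Defining representation of sl(3,C) on C^3: psi(Z) = Z.
H1 = np.diag([1., -1., 0.])
H2 = np.diag([0., 1., -1.])
Y1 = np.zeros((3, 3)); Y1[1, 0] = 1.0   # E^{21}
Y2 = np.zeros((3, 3)); Y2[2, 1] = 1.0   # E^{32}

x = np.array([1., 0., 0.])   # normed weight vector with psi(X_j)x = 0
m1 = (H1 @ x) @ x            # <x, H1 x> = 1
m2 = (H2 @ x) @ x            # <x, H2 x> = 0

print(np.linalg.norm(Y1 @ x) ** 2, m1)   # 1.0 1.0
print(np.linalg.norm(Y2 @ x) ** 2, m2)   # 0.0 0.0
```

Note that $\norm{\psi(Y_2)x}^2=0$ correctly reflects that $\psi(Y_2)$ annihilates the highest weight vector when $m_2=0$.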
For clarity we drop $\psi$ in the subsequent calculations, i.e. we write $X_1$ for $\psi(X_1)$, etc. As $[X_1,Y_2]=[X_2,Y_1]=0$ and $X_jx=0$ we get $X_1Y_2x=0$ and therefore by the commutation relations:
\begin{eqnarray*}
\la x_1,x_1\ra
&=&\la Y_1Y_2x,Y_1Y_2x\ra
=\la x,X_2X_1Y_1Y_2x\ra\\
&=&\la x,X_2([X_1,Y_1]+Y_1X_1)Y_2x\ra
=\la x,X_2H_1Y_2x\ra\\
&=&\la x,([X_2,H_1]+H_1X_2)Y_2x\ra
=\la x,(X_2Y_2+H_1([X_2,Y_2]+Y_2X_2))x\ra\\
&=&\la x,([X_2,Y_2]+Y_2X_2+H_1H_2)x\ra
=m_2+m_1m_2
\end{eqnarray*}
The verification of the other relations is quite similar.
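For a concrete instance (an addition to the text) one can take the adjoint representation with the Hilbert-Schmidt inner product $\la a,b\ra=\tr(a^*b)$ -- an assumed normalization under which $x=X_3$ is a normed highest weight vector with $m_1=m_2=1$; the identity $\la x_1,x_1\ra=m_2+m_1m_2$ then evaluates to $2$:

```python
import numpy as np

def E(i, j):
    """Matrix unit E^{ij} (1-based)."""
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

def br(a, b):   # Lie bracket [a, b]
    return a @ b - b @ a

def hs(a, b):   # Hilbert-Schmidt inner product tr(a* b)
    return np.trace(a.conj().T @ b).real

H1, H2 = E(1, 1) - E(2, 2), E(2, 2) - E(3, 3)
Y1, Y2 = E(2, 1), E(3, 2)

x = E(1, 3)     # X_3: normed highest weight vector of the adjoint representation
m1, m2 = hs(x, br(H1, x)), hs(x, br(H2, x))   # m1 = m2 = 1

x1 = br(Y1, br(Y2, x))            # psi(Y1) psi(Y2) x in the adjoint representation
print(hs(x1, x1), m2 + m1 * m2)   # 2.0 2.0
```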
3.,4. We compute
$$
a^2\colon=\norm{x_1}^2\norm{x_2}^2-|\la x_1,x_2\ra|^2
=m_1m_2(m_1+1)(m_2+1)-m_1^2m_2^2
=m_1m_2(m_1+m_2+1)~.
$$
If $m_1,m_2\geq1$, then $a^2>0$, and thus, by the equality case in the Cauchy-Schwarz inequality, $x_1$ and $x_2$ are not collinear. On the other hand, if $m_1=0$ or $m_2=0$, then $x_1$ and $x_2$ are collinear.
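The algebraic identity for $a^2$ can be spot-checked numerically (using $\norm{x_1}^2=m_2(m_1+1)$, $\la x_1,x_2\ra=m_1m_2$ and, by the analogous computation, $\norm{x_2}^2=m_1(m_2+1)$):

```python
def a_squared(m1, m2):
    """||x1||^2 ||x2||^2 - |<x1,x2>|^2 with ||x1||^2 = m2(m1+1),
    ||x2||^2 = m1(m2+1) and <x1,x2> = m1*m2."""
    return m2 * (m1 + 1) * m1 * (m2 + 1) - (m1 * m2) ** 2

# check a^2 = m1 m2 (m1 + m2 + 1) on a small grid of integer weights
ok = all(a_squared(m1, m2) == m1 * m2 * (m1 + m2 + 1)
         for m1 in range(10) for m2 in range(10))
print(ok)   # True
```

In particular $a^2$ vanishes exactly when $m_1=0$ or $m_2=0$, matching the collinearity dichotomy above.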
$\proof$ In the previous subsection we found that for any weight $\mu$ of $\psi$ there is some $w\in W(\su(3))$ such that $w\cdot\mu\in P$. Reflecting the weight $\l$ about the lines $\R\l_1=[\la.,H_2\ra=0]$ and $\R\l_2=[\la.,H_1\ra=0]$ gives us two weights $w_1\cdot\l$ and $w_2\cdot\l$ in the set
$$
D_0\colon=[\la.,\l_1\ra\leq\la\l,\l_1\ra]\cap
[\la.,\l_2\ra\leq\la\l,\l_2\ra]~.
$$
Since a reflection about $\R\l_j$ doesn't alter the inner product with $\l_j$, these two weights lie on the boundary of $D_0$.
We claim that $w\cdot\mu$ is in the intersection
$$
D=P\cap D_0,
$$
which follows from the fact that $w\cdot\mu$ is lower than $\l$, i.e. $\la w\cdot\mu,\l_j\ra\leq\la\l,\l_j\ra$. Now put $K\colon=\convex{0,w_1\cdot\l,w_2\cdot\l,\l}$; as $0=\tfrac16\sum_w w\cdot\l\in C$, we have $K\sbe C$. Finally we see from the above picture that $D\sbe K$. Hence $D\sbe C$ and
$$
\mu
\in\bigcup_{u\in W(\su(3))} u\cdot D
\sbe\bigcup u\cdot C
=C~.
$$
$\eofproof$
$\proof$
The sub-algebras $A_j\colon=\lhull{H_j,X_j,Y_j}$, $j=1,2,3$, are all isomorphic to $\sla(2,\C)$. W.l.o.g. we may assume $j=1$. So let $F$ be the subspace of $E$ generated by all weight vectors in $E$ whose weights lie on the line $\mu+\R H_1$. Under $\psi(X_1)$ and $\psi(Y_1)$ these weights are shifted by $\pm H_1$, and therefore the restriction $\psi|A_1$ gives a representation $\vp$ of $\sla(2,\C)$ in $F$. Since $w\cdot H_1=-H_1$ and $w^*=w$, we have:
$$
\la H_1,w\cdot\mu\ra=\la w\cdot H_1,\mu\ra=-\la H_1,\mu\ra,
$$
i.e. $\vp$ has at least the weights $\la H_1,\mu\ra$ and $-\la H_1,\mu\ra$. By proposition: $\la H_1,\mu\ra\in\Z$ and $\vp(H_1)\in\Hom(F)$ must have the eigen-values
$$
-|\la H_1,\mu\ra|,-|\la H_1,\mu\ra|+2,\ldots,|\la H_1,\mu\ra|-2,|\la H_1,\mu\ra|,
$$
which coincides with the set $\{\la H_1,\nu\ra\}$, where $\nu$ ranges over the points $\nu=\mu+nH_1$, $n\in\Z$, on the line segment joining $\mu$ to $w\cdot\mu$. Thus for any such $\nu$ there must be an eigen-vector $x\in F$ of $\psi(H_1)$ such that $\psi(H_1)x=\la H_1,\nu\ra x$, and this eigen-vector can be obtained by starting with a normed weight vector $x_0$ for the weight $\mu$ (or $w\cdot\mu$) and applying $\psi(Y_1)$ $n$ times, i.e. $x=\psi(Y_1)^nx_0$; as $x\neq0$, it's a weight vector for $\psi$.
For the moreover part we remark that we just proved $\la H_1,\mu\ra,\la H_2,\mu\ra\in\Z$. Since $\l_1,\l_2$ is the dual basis of $H_1,H_2$, this means: $\mu\in\Z\l_1+\Z\l_2$.
$\eofproof$
$\proof$
The necessity is clear from lemma, lemma and the fact that every weight smaller than $\l$ must be of the form: $\l-n_1H_1-n_2H_2$, $n_1,n_2\in\N_0$. Thus we only need to prove that these conditions are sufficient. By lemma each point in $\l-\N_0H_1-\N_0H_2$ on the boundary of $C$ is a weight. So assume $\mu=\l-n_1H_1-n_2H_2$ is in the interior of $C$; w.l.o.g. we may also assume: $n_2\leq n_1$, then
$$
\mu=\l-mH_1-n_2H_3,
\quad\mbox{with}\quad
m=n_1-n_2\geq0~.
$$
Starting at $\mu$ and moving in the direction of $H_3$ we end up at $\nu\colon=\l-(n_1-n_2)H_1$ after $n_2$ steps. We claim that $\nu$ lies on the boundary of the convex set $C$. Firstly it cannot lie on a boundary part of $C$ parallel to $H_3$. Secondly the intersection with the line passing through $\l$ and parallel to $H_2$ is given by the equation $\l-mH_1+x_3H_3=\l-x_2H_2$, i.e. $mH_1=x_2H_2+x_3H_3$, i.e. $x_3=m$ and $x_2=-m$. Hence the intersection point $\l-x_2H_2=\l+mH_2$ is not in $C$ unless $m=0$. Thus $\nu$ is a weight and so is its reflection $w\cdot\nu$ about the line orthogonal to $H_3$; by lemma $\mu$ must also be a weight.
$\eofproof$
The weight diagram for the irreducible representation $\psi$ with highest weight $\l_1+2\l_2$: the blue points represent the points in the lattice $\Z\l_1+\Z\l_2$ and the orange encircled points represent the weights of $\psi$. The light blue area is the convex hull of the set $\{w\cdot\l:w\in C_{3v}\}$, where $C_{3v}$ denotes the symmetry group of the dashed triangle in the center, i.e. the Weyl group of $\su(3)$. For all weights on the boundary of the convex hull the dimension of the associated weight space equals $1$, and for the remaining three weights in the interior the dimension equals $2$; adding up, the dimension of $\psi$ is $15$.
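The count of $15$ agrees with the Weyl dimension formula for $\su(3)$, $\dim=(m_1+1)(m_2+1)(m_1+m_2+2)/2$ for the highest weight $m_1\l_1+m_2\l_2$ -- a standard fact not derived in this excerpt, added here as a cross-check:

```python
def dim_su3(m1, m2):
    """Weyl dimension formula for the su(3)-irrep with highest weight
    m1*lambda_1 + m2*lambda_2 (standard fact, not derived in this excerpt)."""
    return (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) // 2

print(dim_su3(1, 2))   # 15 = 9 boundary weights + 3 interior weights of dimension 2
print(dim_su3(1, 1))   # 8, the adjoint representation
print(dim_su3(1, 0))   # 3, the defining representation
```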