What should you be acquainted with? 1. Linear Algebra, in particular inner product spaces over both the real and the complex numbers. 2. The very basics of Group Theory and Complex Analysis. This chapter is essentially taken from Brian Hall, Lie Groups, Lie Algebras, and Representations, Chapter 6.
For $n=3$ we choose the following eight matrices as a basis for $\sla(3,\C)$:
$$
H_1\colon=\left(\begin{array}{ccc}
1&0&0\\
0&-1&0\\
0&0&0
\end{array}\right),
X_1\colon=\left(\begin{array}{ccc}
0&1&0\\
0&0&0\\
0&0&0
\end{array}\right)=E^{12},
Y_1\colon=\left(\begin{array}{ccc}
0&0&0\\
1&0&0\\
0&0&0
\end{array}\right)=E^{21}
$$
$$
H_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&1&0\\
0&0&-1
\end{array}\right),
X_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&1\\
0&0&0
\end{array}\right)=E^{23},
Y_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&0\\
0&1&0
\end{array}\right)=E^{32}
$$
$$
X_3\colon=\left(\begin{array}{ccc}
0&0&1\\
0&0&0\\
0&0&0
\end{array}\right)=E^{13},
Y_3\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&0\\
1&0&0
\end{array}\right)=E^{31}
$$
Notice: $X_j,Y_j\notin\su(3)$! The spaces $\lhull{H_1,X_1,Y_1}$ and $\lhull{H_2,X_2,Y_2}$ are obviously sub-algebras isomorphic to $\sla(2,\C)$, i.e.
$$
\forall j\in\{1,2\}\quad
[H_j,X_j]=2X_j,
[H_j,Y_j]=-2Y_j,
[X_j,Y_j]=H_j
$$
Also, the subspace $\lhull{H_1,H_2}$ is the (complexified) Lie-algebra of the torus $\TT^2$, i.e. the traceless diagonal matrices; this sub-algebra is commutative and thus $[H_1,H_2]=0$. Moreover we have the following commutation relations:
\begin{eqnarray*}
&&[H_1,X_2]=-X_2,
[H_1,Y_2]=Y_2,
[H_1,X_3]=X_3,
[H_1,Y_3]=-Y_3\\
&&[H_2,X_1]=-X_1,
[H_2,Y_1]=Y_1,
[H_2,X_3]=X_3,
[H_2,Y_3]=-Y_3
\end{eqnarray*}
The classification of the irreducible representations of $\SU(2)$ heavily relied on the fact that both $X_1$ and $Y_1$ are ladder operators for $H_1$. This is still true for $\SU(3)$, but there are several additional commutation relations, one of which is $[H_1,H_2]=0$; this implies that given any representation $\psi:\sla(n,\C)\rar\Hom(E)$ in a finite dimensional complex vector-space $E$, the linear mappings $\psi(H_1)$ and $\psi(H_2)$ have at least one joint eigen-vector - if both $\psi(H_1)$ and $\psi(H_2)$ are diagonalizable, then $E$ has a basis of joint eigen-vectors. Indeed, we can always assume that these operators are jointly diagonalizable: $\su(n)$ is the Lie-algebra of the compact group $\SU(n)$ and hence for any representation $\psi:\sla(n,\C)\rar\Hom(E)$ there is a euclidean product on $E$ such that for all $X\in\su(n)$: $\psi(X)^*=-\psi(X)$ (cf. section); since $iH_1,iH_2\in\su(n)$, the commuting operators $\psi(H_1)$ and $\psi(H_2)$ are self-adjoint and therefore jointly diagonalizable. Moreover, any invariant subspace $F$ has an invariant orthogonal complement, because for all $y\in F^\perp$ and all $x\in F$:
$$
\la\psi(X)y,x\ra
=\la y,\psi(X)^*x\ra
=-\la y,\psi(X)x\ra=0~.
$$
From this it follows (cf. proposition) that $\psi$ is the direct sum of irreducible representations.
Let ${\cal A}$ be the space generated by $\psi(H_1)$ and $\psi(H_2)$, then by subsection a weight of ${\cal A}$ can be identified with a pair $\l\colon=(\l(H_1),\l(H_2))\in\C^2$, such that there exists a common eigen-vector $x\in E\sm\{0\}$:
$$
\psi(H_1)x=\l(H_1)x
\quad\mbox{and}\quad
\psi(H_2)x=\l(H_2)x~.
$$
$\l$ will be called a weight of the representation $\psi$ and $x$ is called a weight vector corresponding to the weight $\l$. The space of all weight vectors for the weight $\l$ is called the weight space of $\l$ and the dimension of the weight space of $\l$ is called the multiplicity of the weight $\l$.
By restricting $\psi$ to the sub-algebras generated by $H_1,X_1,Y_1$ and $H_2,X_2,Y_2$ respectively, we know from proposition that both components of a weight must be integers.
Let us calculate the weights of the standard representation of $\sla(3,\C)$ and the weights of its dual representation: obviously the canonical basis vectors $e_1,e_2,e_3$ of $\C^3$ are weight vectors with weights $(1,0)$, $(-1,1)$ and $(0,-1)$. Since the dual representation is the negative of the transpose, the dual basis vectors $e_1^*,e_2^*$ and $e_3^*$ are weight vectors of the dual with weights $(-1,0)$, $(1,-1)$, $(0,1)$. Thus the weights of the dual of the standard representation are the negatives of the weights of the standard representation. This is actually true for any representation if we assume that $\psi(H_1)$ and $\psi(H_2)$ are jointly diagonalizable, which is no loss of generality. Under this condition the weights of the dual representation $\psi^d$ of $\psi:\sla(3,\C)\rar\Hom(E)$ are the negatives of the weights of $\psi$: choose a basis $e_1,\ldots,e_n$ of $E$ such that both $\psi(H_1)$ and $\psi(H_2)$ are diagonal with respect to this basis and $\psi(H_j)e_k=m_{jk}e_k$. Then the matrix of $\psi^d(H_j)$ with respect to the dual basis $e_1^*,\ldots,e_n^*$ is the negative of the transpose of the matrix of $\psi(H_j)$ with respect to the basis $e_1,\ldots,e_n$ (cf. section). Hence $\psi^d(H_j)e_k^*=-m_{jk}e_k^*$.
The above commutation relations give us the following roots
$$
\begin{array}{r|c}
\mbox{root}&\mbox{root vector}\\
\hline
r_1=(2,-1)&X_1\\
r_2=(-1,2)&X_2\\
r_3=r_1+r_2=(1,1)&X_3\\
-r_1=(-2,1)&Y_1\\
-r_2=(1,-2)&Y_2\\
-r_3=(-1,-1)&Y_3
\end{array}
$$
i.e. all vectors $X_j,Y_j$, $j=1,2,3$ are root vectors.
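All these commutation relations and the root table are readily verified numerically; here is a minimal numpy sketch (all identifiers are ad hoc, not part of any library):
\begin{verbatim}
import numpy as np

def E(j, k):
    # elementary matrix E^{jk}: a single 1 in row j, column k (1-indexed)
    M = np.zeros((3, 3)); M[j-1, k-1] = 1; return M

H1 = np.diag([1., -1, 0]); X1, Y1 = E(1, 2), E(2, 1)
H2 = np.diag([0., 1, -1]); X2, Y2 = E(2, 3), E(3, 2)
X3, Y3 = E(1, 3), E(3, 1)
com = lambda A, B: A @ B - B @ A

# the sl(2,C)-triples and [H1,H2]=0
for H, X, Y in [(H1, X1, Y1), (H2, X2, Y2)]:
    assert np.array_equal(com(H, X), 2 * X)
    assert np.array_equal(com(H, Y), -2 * Y)
    assert np.array_equal(com(X, Y), H)
assert np.array_equal(com(H1, H2), np.zeros((3, 3)))

# recover the root r = (r(H1), r(H2)) of each root vector Z from [H,Z] = r(H)Z
for name, Z in [('X1', X1), ('X2', X2), ('X3', X3),
                ('Y1', Y1), ('Y2', Y2), ('Y3', Y3)]:
    print(name, [np.sum(com(H, Z) * Z) / np.sum(Z * Z) for H in (H1, H2)])
# prints r_1=(2,-1), r_2=(-1,2), r_3=(1,1) and their negatives
\end{verbatim}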
If $x$ is a weight vector of $\psi$ with weight $\l$ and $Z\in\sla(3,\C)$ a root vector with root $r$, then $\psi(Z)x$ is either $0$ or a weight vector with weight $\l+r$:
$\proof$
Since $\psi$ is an algebra homomorphism and $[H_j,Z]=r(H_j)Z$, it follows that
\begin{eqnarray*}
\psi(H_j)\psi(Z)x
&=&[\psi(H_j),\psi(Z)]x+\psi(Z)\psi(H_j)x\\
&=&r(H_j)\psi(Z)x+\l(H_j)\psi(Z)x
=(\l(H_j)+r(H_j))\psi(Z)x
\end{eqnarray*}
$\eofproof$
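To illustrate the lemma in the standard representation: $e_2$ is a weight vector with weight $\l=(-1,1)$ and $X_1$ is a root vector with root $r_1=(2,-1)$; indeed
$$
\psi(X_1)e_2=X_1e_2=e_1
\quad\mbox{and}\quad
\l+r_1=(-1,1)+(2,-1)=(1,0),
$$
which is precisely the weight of $e_1$.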
Just like eigen-vectors for pairwise distinct eigen-values, weight vectors for pairwise distinct weights are linearly independent:
$\proof$
1. Suppose each $x_k$ is either $0$ or a weight vector with weight $\l_k$, the weights $\l_1,\ldots,\l_n$ being pairwise distinct, and $x_1+\cdots+x_n=0$. We prove by induction on $n$ that all vectors $x_1,\ldots,x_n$ are $0$. For $n=1$ the assertion is obvious. So assume $n\geq2$; since $\l_1\neq\l_2$, there is some $j$ such that $\l_1(H_j)\neq\l_2(H_j)$ and
$$
0=(\psi(H_j)-\l_1(H_j))\sum_k x_k
=\sum_{k\geq2}(\l_k(H_j)-\l_1(H_j))x_k~.
$$
By induction hypothesis all terms of the last sum vanish, in particular $x_2=0$ and thus: $x_1+x_3+\cdots+x_n=0$ which again by induction hypothesis implies that all vectors $x_k$ vanish.
2. If $x\colon=x_1+\cdots+x_n$ is a non zero weight vector with weight $\l$, then for all $j$:
$$
0=(\psi(H_j)-\l(H_j))x=\sum_k(\l_k(H_j)-\l(H_j))x_k
$$
and by 1. it follows that for all $k$: $(\l_k(H_j)-\l(H_j))x_k=0$. Since $x\neq0$, there is some $l$ such that $x_l\neq0$ and thus for all $j$: $\l_l(H_j)=\l(H_j)$, i.e. $\l=\l_l$. If $k\neq l$ then there is some $j$ such that $\l_k(H_j)\neq\l_l(H_j)$ and therefore $x_k=0$.
$\eofproof$
Every irreducible representation $\psi:\sla(3,\C)\rar\Hom(E)$ in a finite dimensional complex vector-space decomposes $E$ into the direct sum of its weight spaces:
$\proof$
This is obvious if the operators $\psi(H_j)$ are diagonalizable, but we actually don't need this assumption. We already observed that $\psi(H_1)$ and $\psi(H_2)$ have at least one common complex eigen-vector. Now let $W\sbe E$ be the subspace spanned by all weight vectors; then $W\neq\{0\}$ and since $X_j$ and $Y_j$, $j=1,2,3$, are root vectors, the operators $\psi(X_j)$ and $\psi(Y_j)$ map $W$ into itself by lemma. Obviously, $W$ is invariant under $\psi(H_j)$ and thus it's invariant under all operators $\psi(X)$, $X\in\sla(3,\C)$. By irreducibility $W=E$, i.e. $E$ is the sum of its weight spaces, and by lemma the sum is direct.
$\eofproof$
A weight $\mu$ is called lower than a weight $\l$ ($\mu\preceq\l$) if $\l-\mu$ is a linear combination of the roots $r_1$ and $r_2$ with non negative coefficients, and $\l$ is called a highest weight of a representation if every weight $\mu$ of it satisfies $\mu\preceq\l$. In the picture below the weight $(0,0)$ is higher than any weight in the blue sector!
The weight $(1,0)$ for the standard representation of $\sla(3,\C)$ is higher than the weight $(0,0)$, because $(1,0)-(0,0)=\tfrac23r_1+\tfrac13r_2$. Also, since $(1,0)-(-1,1)=(2,-1)=r_1$ and $(1,0)-(0,-1)=(1,1)=r_1+r_2$, $(1,0)$ is a highest weight for the standard representation. Similarly $(0,1)$ is a highest weight for the dual representation.
A highest weight is of course a maximal element (with respect to $\preceq$), but in general it's not the other way round; even if the set of weights is finite - which is the case if the representation is finite dimensional - a highest weight may not exist! So let us start with a maximal weight $\l$ of a representation $\psi:\sla(3,\C)\rar\Hom(E)$ with corresponding weight vector $x$; then $\psi(X_1)x=\psi(X_2)x=0$, for otherwise $\l$ wouldn't be maximal. Moreover, since the root $(1,1)$ for the root vector $X_3$ can be written as $(1,1)=r_1+r_2$, we must also have $\psi(X_3)x=0$. We say $\psi:\sla(3,\C)\rar\Hom(E)$ is a highest weight cyclic representation with weight $\l$ if there is a cyclic vector $x$ for $\psi$ which is a weight vector with weight $\l$ satisfying $\psi(X_1)x=\psi(X_2)x=\psi(X_3)x=0$.
Restricting a highest weight cyclic representation to the sub-algebras generated by $H_1,X_1,Y_1$ and $H_2,X_2,Y_2$ respectively, we get representations of $\sla(2,\C)$ such that $\psi(X_1)x=\psi(X_2)x=0$, and thus by proposition we infer that both $\l(H_1)$ and $\l(H_2)$ must be non negative integers. Also, if a weight vector for a maximal weight is cyclic, then $\psi$ is a highest weight cyclic representation and both components of the weight are non negative integers. But more is true, and this also explains why a highest weight cyclic representation is called a highest weight representation:
In a highest weight cyclic representation with weight $\l$ every other weight is strictly lower than $\l$ and the weight space of $\l$ is one dimensional:
$\proof$
First we prove that the space $W$ generated by the vectors $\psi(Y_{j_1})\cdots\psi(Y_{j_m})x$, $m\in\N_0$, is invariant: by the reordering lemma it suffices to prove that all elements of the form
$$
\psi(Y_{j_1})\cdots\psi(Y_{j_m})
\psi(H_1)^{k_1}\psi(H_2)^{k_2}
\psi(X_1)^{l_1}\psi(X_2)^{l_2}\psi(X_3)^{l_3}x
$$
lie in $W$; but $\psi(X_j)x=0$ and $\psi(H_j)x=\l(H_j)x$, and thus $W$ is $\psi$-invariant by its very definition. Since $x$ is cyclic, we conclude: $W=E$. Now suppose $y$ is any vector in $E=W$ of the form $y=\psi(Y_{j_1})\cdots\psi(Y_{j_m})x$; since $Y_1,Y_2,Y_3$ are root vectors with roots $-r_1,-r_2,-r_1-r_2$, $y$ is a weight vector and by lemma its weight is strictly lower than $\l$ unless $y=x$. Therefore $W$ has a basis $x,y_1,\ldots,y_m$ of weight vectors and each weight of $y_j$ is strictly lower than $\l$. By lemma we are done.
$\eofproof$
Now assume moreover that $\psi$ is irreducible; then any vector in $E\sm\{0\}$ is cyclic and thus any non zero weight vector $x$ for a maximal weight is cyclic and satisfies $\psi(X_j)x=0$, i.e. every irreducible representation is a highest weight cyclic representation. The converse also holds:
$\proof$
As $E$ is finite dimensional it decomposes into sub-spaces $E=\bigoplus E_j$, such that $\psi:\sla(3,\C)\rar\Hom(E_j)$ is irreducible. Each of these spaces $E_j$ in turn decomposes by proposition into its weight spaces and by lemma the weight vector $x$ must lie in one of these spaces and consequently in one of the spaces $E_j$. Since $x$ is cyclic and $E_j$ is invariant, we must have $E=E_j$.
$\eofproof$
So far we have established that an irreducible representation is the same as a highest weight cyclic representation; moreover, the weight space of the highest weight of such a representation is one dimensional and the components of the highest weight are non negative integers. Our next goal is to verify the following: two irreducible representations with the same highest weight are equivalent.
$\proof$
Let $\psi:\sla(3,\C)\rar\Hom(E)$ and $\vp:\sla(3,\C)\rar\Hom(F)$ be irreducible representations with the highest weight $\l$ and let $u\in E$, $v\in F$ be weight vectors with this weight. Then $(u,v)$ is a weight vector with weight $\l$ of the representation $\pi:\sla(3,\C)\rar\Hom(E\times F)$, $\pi(X)(x,y)\colon=(\psi(X)x,\vp(X)y)$, and $\pi(X_j)(u,v)=0$ for $j=1,2,3$. Let $W$ be the smallest invariant subspace containing $(u,v)$; then $\pi:\sla(3,\C)\rar\Hom(W)$ is a highest weight cyclic representation and by proposition it's irreducible. Further, the projections $P:W\rar E$, $(x,y)\mapsto x$, and $Q:W\rar F$, $(x,y)\mapsto y$, are intertwining operators, i.e. $P\pi(X)(x,y)=\psi(X)x=\psi(X)P(x,y)$ and $Q\pi(X)(x,y)=\vp(X)y=\vp(X)Q(x,y)$, and since $P(u,v)=u\neq0$ and $Q(u,v)=v\neq0$, both $P:W\rar E$ and $Q:W\rar F$ must be isomorphisms by Schur's lemma for Lie algebras, proving that both irreducible representations $\psi:\sla(3,\C)\rar\Hom(E)$ and $\vp:\sla(3,\C)\rar\Hom(F)$ are equivalent to the irreducible representation $\pi:\sla(3,\C)\rar\Hom(W)$.
$\eofproof$
Our last step is the construction of irreducible representations: for every pair $(m_1,m_2)$ of non negative integers there is an irreducible representation with highest weight $(m_1,m_2)$.
$\proof$
The trivial representation $X\mapsto0$ has highest weight $(0,0)$, the standard representation $\psi$ has highest weight $(1,0)$ and its dual $\vp$ has highest weight $(0,1)$. Put $E=F=\C^3$ and let $u\in E$, $v\in F$ be weight vectors for these two representations with the highest weights $(1,0)$ and $(0,1)$ respectively. Let $\pi$ be the following representation
$$
\pi:\sla(3,\C)\rar\Hom(E\otimes\cdots\otimes E\otimes F\otimes\cdots\otimes F)
$$
where we take $m_1$-fold tensor products of $E$ and $m_2$-fold tensor products of $F$. $\pi$ is defined by the sum of $m_1+m_2$ terms:
\begin{eqnarray*}
\pi(X)\colon
&=&\psi(X)\otimes1\otimes\cdots\otimes1
+\cdots
+1\otimes\cdots\otimes1\otimes\psi(X)\otimes1\otimes\cdots\otimes1\\
&&+1\otimes\cdots\otimes1\otimes\vp(X)\otimes1\otimes\cdots\otimes1
+\cdots
+1\otimes\cdots\otimes1\otimes\vp(X)
\end{eqnarray*}
Finally put $w\colon=u\otimes\cdots\otimes u\otimes v\otimes\cdots\otimes v$. Then it follows that
$$
\pi(H_j)w=m_jw\quad\mbox{and}\quad
\pi(X_j)w=0~.
$$
Thus $\pi$ restricted to the smallest subspace $W$ containing $w$ and invariant under the operators $\pi(Y_j)$, $j=1,2,3$, is an irreducible representation with highest weight $(m_1,m_2)$. In order to get $W$ we observe that $Y_3=[Y_2,Y_1]$ and thus we only need to apply the operators $\pi(Y_j)$, $j=1,2$, in every order to $w$.
$\eofproof$
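The construction is easy to test numerically: the following sketch (ad hoc names, nothing canonical) builds $\pi$ out of Kronecker products, the standard factors acting by $X$ and the dual factors by $-X^t$, and checks the two displayed identities for the vector $w$:
\begin{verbatim}
import numpy as np
from functools import reduce

def slot(A, i, n):
    # 1 (x) ... (x) A (x) ... (x) 1 with A in the i-th of n slots
    facs = [np.eye(3)] * n; facs[i] = A
    return reduce(np.kron, facs)

def pi(X, m1, m2):
    n = m1 + m2
    return (sum(slot(X, i, n) for i in range(m1)) +
            sum(slot(-X.T, i, n) for i in range(m1, n)))

H1 = np.diag([1., -1, 0]); H2 = np.diag([0., 1, -1])
X = {j: np.zeros((3, 3)) for j in (1, 2, 3)}
X[1][0, 1] = X[2][1, 2] = X[3][0, 2] = 1

m1, m2 = 2, 1                      # any non negative integers
e = np.eye(3)
w = reduce(np.kron, [e[0]] * m1 + [e[2]] * m2)   # u (x) u (x) e3-bar

assert np.allclose(pi(H1, m1, m2) @ w, m1 * w)   # pi(H1)w = m1 w
assert np.allclose(pi(H2, m1, m2) @ w, m2 * w)   # pi(H2)w = m2 w
assert all(np.allclose(pi(X[j], m1, m2) @ w, 0) for j in (1, 2, 3))
\end{verbatim}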
Highest Weight $(2,0)$ - the $\mathbf{6}$-representation
If $m_1=2$ and $m_2=0$, then $\pi(X)=X\otimes1+1\otimes X$ and the highest weight vector is $e_1\otimes e_1$. We have to find the smallest space $W$ containing $w\colon=e_1\otimes e_1$ and invariant under the action of the operators
$$
\pi(Y_1)=Y_1\otimes1+1\otimes Y_1
\quad\mbox{and}\quad
\pi(Y_2)=Y_2\otimes1+1\otimes Y_2
$$
applied in any order.
\begin{eqnarray*}
\pi(Y_1)(e_1\otimes e_1)&=&e_2\otimes e_1+e_1\otimes e_2\\
\pi(Y_2)(e_1\otimes e_1)&=&0\\
\pi(Y_1)(e_2\otimes e_1+e_1\otimes e_2)&=&2e_2\otimes e_2\\
\pi(Y_2)(e_2\otimes e_1+e_1\otimes e_2)&=&e_3\otimes e_1+e_1\otimes e_3\\
\pi(Y_1)(e_2\otimes e_2)&=&0\\
\pi(Y_2)(e_2\otimes e_2)&=&e_3\otimes e_2+e_2\otimes e_3\\
\pi(Y_1)(e_3\otimes e_2+e_2\otimes e_3)&=&0\\
\pi(Y_2)(e_3\otimes e_2+e_2\otimes e_3)&=&2e_3\otimes e_3\\
\pi(Y_1)(e_3\otimes e_3)&=&0\\
\pi(Y_2)(e_3\otimes e_3)&=&0
\end{eqnarray*}
Thus the irreducible representation with highest weight $(2,0)$ is of dimension $6$ - hence it's called the $\mathbf{6}$-representation - with basis:
$$
e_1\otimes e_1,
e_2\otimes e_2,
e_3\otimes e_3,
e_1\otimes e_2+e_2\otimes e_1,
e_2\otimes e_3+e_3\otimes e_2,
e_3\otimes e_1+e_1\otimes e_3.
$$
In particle physics the vectors $e_1\otimes e_1,\ldots,e_3\otimes e_3$ are designated as $uu,\ldots,ss$, labeling particles composed of two quarks. Hence we have the following orthonormal basis in "quark"-notation:
$$
uu,dd,ss,
\frac{ud+du}{\sqrt2},
\frac{ds+sd}{\sqrt2},
\frac{us+su}{\sqrt2}~.
$$
Since $u,d,s$ have weights $(1,0),(-1,1),(0,-1)$, these vectors have the weights:
$$
(2,0),\quad(-2,2),\quad(0,-2),\quad(0,1),\quad(-1,0),\quad(1,-1)~.
$$
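Alternatively one can let a computer grow the space $W$: starting from $e_1\otimes e_1$ we keep applying $\pi(Y_1)$ and $\pi(Y_2)$ and record every vector that enlarges the span. A minimal sketch (ad hoc names):
\begin{verbatim}
import numpy as np

Y1 = np.zeros((3, 3)); Y1[1, 0] = 1      # Y1: e1 -> e2
Y2 = np.zeros((3, 3)); Y2[2, 1] = 1      # Y2: e2 -> e3
I = np.eye(3)
piY = [np.kron(Y, I) + np.kron(I, Y) for Y in (Y1, Y2)]

basis = [np.kron(I[0], I[0])]            # e1 (x) e1
queue = list(basis)
while queue:
    v = queue.pop()
    for L in piY:
        u = L @ v
        # keep u only if it enlarges the span
        if np.linalg.matrix_rank(np.vstack(basis + [u])) > len(basis):
            basis.append(u); queue.append(u)

print(len(basis))                        # 6
\end{verbatim}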
The following diagram depicts the $\mathbf{6}$-representation $\pi$: the mappings $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$ map each basis vector, represented by a circle enclosing its weight, to a multiple of another basis vector. This multiple is attached to the correspondingly colored arrow, i.e. the blue, cyan, red and green arrows indicate the action of $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$ respectively. If there is e.g. no outgoing blue arrow from one of the circles, then the corresponding basis vector is mapped by $\pi(Y_1)$ to the null vector!
Highest Weight $(1,1)$ - the $\mathbf{8}$-representation
This time we have $\pi(X)=X\otimes1+1\otimes\bar X$, where $\bar X\colon=-X^t$ denotes the dual action; explicitly, the matrices $\bar H_j$, $\bar X_j$, $\bar Y_j$ are given by:
$$
\bar H_1\colon=\left(\begin{array}{ccc}
-1&0&0\\
0&1&0\\
0&0&0
\end{array}\right),
\bar X_1\colon=\left(\begin{array}{ccc}
0&0&0\\
-1&0&0\\
0&0&0
\end{array}\right),
\bar Y_1\colon=\left(\begin{array}{ccc}
0&-1&0\\
0&0&0\\
0&0&0
\end{array}\right)
$$
$$
\bar H_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&-1&0\\
0&0&1
\end{array}\right),
\bar X_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&0\\
0&-1&0
\end{array}\right),
\bar Y_2\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&-1\\
0&0&0
\end{array}\right)
$$
$$
\bar X_3\colon=\left(\begin{array}{ccc}
0&0&0\\
0&0&0\\
-1&0&0
\end{array}\right),
\bar Y_3\colon=\left(\begin{array}{ccc}
0&0&-1\\
0&0&0\\
0&0&0
\end{array}\right)
$$
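As a quick sanity check one may verify numerically that $X\mapsto\bar X=-X^t$ preserves brackets, i.e. that the dual is again a representation (a two-line sketch with random matrices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(2, 3, 3))
com = lambda P, Q: P @ Q - Q @ P

# X -> -X^T is a Lie-algebra homomorphism: bar[A,B] = [bar A, bar B]
assert np.allclose(-com(A, B).T, com(-A.T, -B.T))
\end{verbatim}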
Again, let $e_1,e_2,e_3$ be the standard basis of $\C^3$; then $e_1$ is the weight vector of the standard representation with the highest weight $(1,0)$ and $\bar e_3$ is the weight vector of the dual representation with the highest weight $(0,1)$. It remains to identify the smallest space $W$ containing $w\colon=e_1\otimes \bar e_3$ and invariant under
$$
\pi(Y_1)=Y_1\otimes1+1\otimes\bar Y_1
\quad\mbox{and}\quad
\pi(Y_2)=Y_2\otimes1+1\otimes\bar Y_2~.
$$
We start out with the vector $e_1\otimes \bar e_3$:
\begin{eqnarray*}
Y_1e_1\otimes \bar e_3+e_1\otimes\bar Y_1\bar e_3&=&e_2\otimes \bar e_3\\
Y_2e_1\otimes \bar e_3+e_1\otimes\bar Y_2\bar e_3&=&-e_1\otimes \bar e_2
\end{eqnarray*}
with weights $(-1,2)$ and $(2,-1)$ respectively. Thus we have to apply the operators to $e_2\otimes \bar e_3$ and $e_1\otimes \bar e_2$:
\begin{eqnarray*}
Y_1e_2\otimes \bar e_3+e_2\otimes\bar Y_1\bar e_3&=&0\\
Y_2e_2\otimes \bar e_3+e_2\otimes\bar Y_2\bar e_3&=&e_3\otimes \bar e_3-e_2\otimes \bar e_2\\
Y_1e_1\otimes \bar e_2+e_1\otimes\bar Y_1\bar e_2&=&e_2\otimes \bar e_2-e_1\otimes \bar e_1\\
Y_2e_1\otimes \bar e_2+e_1\otimes\bar Y_2\bar e_2&=&0
\end{eqnarray*}
This gives another two vectors: $e_3\otimes \bar e_3-e_2\otimes \bar e_2$ and $e_2\otimes \bar e_2-e_1\otimes \bar e_1$:
\begin{eqnarray*}
\pi(Y_1)(e_3\otimes \bar e_3-e_2\otimes \bar e_2)&=&e_2\otimes \bar e_1\\
\pi(Y_2)(e_3\otimes \bar e_3-e_2\otimes \bar e_2)&=&-2e_3\otimes \bar e_2\\
\pi(Y_1)(e_2\otimes \bar e_2-e_1\otimes \bar e_1)&=&-2e_2\otimes \bar e_1\\
\pi(Y_2)(e_2\otimes \bar e_2-e_1\otimes \bar e_1)&=&e_3\otimes \bar e_2
\end{eqnarray*}
which gives the two vectors: $e_2\otimes \bar e_1$ and $e_3\otimes \bar e_2$:
\begin{eqnarray*}
\pi(Y_1)(e_2\otimes \bar e_1)&=&0\\
\pi(Y_2)(e_2\otimes \bar e_1)&=&e_3\otimes \bar e_1\\
\pi(Y_1)(e_3\otimes \bar e_2)&=&-e_3\otimes \bar e_1\\
\pi(Y_2)(e_3\otimes \bar e_2)&=&0
\end{eqnarray*}
Finally:
\begin{eqnarray*}
\pi(Y_1)(e_3\otimes \bar e_1)&=&0\\
\pi(Y_2)(e_3\otimes \bar e_1)&=&0
\end{eqnarray*}
The space $W$ is generated by the following eight vectors, and therefore it's called the $\mathbf{8}$-representation:
$$
e_1\otimes \bar e_2,
e_1\otimes \bar e_3,
e_2\otimes \bar e_1,
e_2\otimes \bar e_3,
e_3\otimes \bar e_1,
e_3\otimes \bar e_2
$$
and
$$
e_3\otimes \bar e_3-e_2\otimes \bar e_2,
e_2\otimes \bar e_2-e_1\otimes \bar e_1~.
$$
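The same span-growing sketch as for the $\mathbf{6}$-representation confirms the dimension $8$ and the multiplicity $2$ of the weight $(0,0)$ (ad hoc names; the second factor carries the dual action $-X^t$):
\begin{verbatim}
import numpy as np

Y1 = np.zeros((3, 3)); Y1[1, 0] = 1
Y2 = np.zeros((3, 3)); Y2[2, 1] = 1
H1 = np.diag([1., -1, 0]); H2 = np.diag([0., 1, -1])
I = np.eye(3)
op = lambda X: np.kron(X, I) + np.kron(I, -X.T)

basis = [np.kron(I[0], I[2])]            # e1 (x) e3-bar
queue = list(basis)
while queue:
    v = queue.pop()
    for L in (op(Y1), op(Y2)):
        u = L @ v
        if np.linalg.matrix_rank(np.vstack(basis + [u])) > len(basis):
            basis.append(u); queue.append(u)
print(len(basis))                        # 8

# multiplicity of the weight (0,0): joint kernel of pi(H1), pi(H2) on W
B = np.array(basis).T                    # columns span W
K = np.vstack([op(H) @ B for H in (H1, H2)])
print(B.shape[1] - np.linalg.matrix_rank(K))   # 2
\end{verbatim}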
Finally we compute the weights of these vectors: the weights of $e_1,e_2,e_3$ are $(1,0),(-1,1),(0,-1)$ and the weights of $\bar e_1,\bar e_2,\bar e_3$ are $(-1,0),(1,-1),(0,1)$. Since the weight of $e_j\otimes\bar e_k$ is the sum of the weights of $e_j$ and $\bar e_k$, we get the weights:
\begin{eqnarray*}
&&(1,0)+(1,-1)=(2,-1),\quad
(1,0)+(0,1)=(1,1),\quad
(-1,1)+(-1,0)=(-2,1),\\
&&(-1,1)+(0,1)=(-1,2),\quad
(0,-1)+(-1,0)=(-1,-1),\quad
(0,-1)+(1,-1)=(1,-2),
\end{eqnarray*}
and the last two vectors have the weight $(0,0)$. Here comes a diagrammatic visualization: $\pi(Y_1)$, $\pi(Y_2)$, $\pi(X_1)$ and $\pi(X_2)$ are again depicted as blue, cyan, red and green arrows. But this time the weight space with weight $(0,0)$ is two dimensional, thus it has two basis vectors, and the numbers attached to an outgoing arrow from this space denote the multiples of the target basis vector onto which these two basis vectors are mapped. Also, an arrow pointing to the weight space with weight $(0,0)$ carries an additional number in parentheses, indicating whether its target is the first or the second basis vector of this weight space.
In particle physics the vectors $e_1\otimes\bar e_1,\ldots,e_3\otimes\bar e_3$ are designated as $u\bar u,\ldots,s\bar s$; these label particles composed of a quark and an anti-quark, usually called mesons. This notation saves you a lot of writing!
We have to compute $\pi(Q)u\bar d=(Qu)\bar d+u(\bar Q\bar d)$, which comes down to $(2/3+1/3)u\bar d=u\bar d$; so the charge of $u\bar d$ is $1$.
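In matrix form this is a one-line check; a minimal numpy sketch with $Q=diag\{2/3,-1/3,-1/3\}$ (cf. the exercise below) acting on the second factor by $\bar Q=-Q^t$:
\begin{verbatim}
import numpy as np

Q = np.diag([2/3, -1/3, -1/3])          # electric charge operator
I = np.eye(3)
piQ = np.kron(Q, I) + np.kron(I, -Q.T)  # Q on the quark, -Q^T on the anti-quark
ud_bar = np.kron(I[0], I[1])            # u (x) d-bar
assert np.allclose(piQ @ ud_bar, 1 * ud_bar)   # charge +1
\end{verbatim}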
Highest Weight $(3,0)$ - the $\mathbf{10}$-representation
This time we use the "quark" notation; it lays out the calculations much more clearly. The highest weight vector is $uuu$; $Y_1$ sends $u$ to $d$ and $d,s$ to $0$, $Y_2$ sends $d$ to $s$ and $u,s$ to $0$. Applying
$$
\pi(Y_j)=Y_j\otimes1\otimes1+1\otimes Y_j\otimes1+1\otimes1\otimes Y_j
$$
for $j=1$ to $uuu$ recursively gives:
\begin{eqnarray*}
uuu&\to&duu+udu+uud,\\
duu+udu+uud&\to&2(ddu+dud+udd),\\
ddu+dud+udd&\to&3ddd,\\
ddd&\to&0
\end{eqnarray*}
with weights $(3,0),(1,1),(-1,2),(-3,3)$. Applying $\pi(Y_2)$ recursively gives:
\begin{eqnarray*}
uuu&\to&0\\
duu+udu+uud&\to&suu+usu+uus,\\
ddu+dud+udd&\to&sdu+dsu+sud+dus+usd+uds,\\
ddd&\to&sdd+dsd+dds,\\
suu+usu+uus&\to&0,\\
sdu+dsu+sud+dus+usd+uds&\to&2(ssu+sus+uss),\\
sdd+dsd+dds&\to&2(ssd+sds+dss),\\
ssu+sus+uss&\to&0,\\
ssd+sds+dss&\to&3sss,\\
sss&\to&0.
\end{eqnarray*}
Finally apply $\pi(Y_1)$ to the newly obtained vectors:
\begin{eqnarray*}
suu+usu+uus&\to&sdu+sud+dsu+usd+dus+uds,\\
sdu+dsu+sud+dus+usd+uds&\to&2(sdd+dsd+dds),\\
sdd+dsd+dds&\to&0,\\
ssu+sus+uss&\to&ssd+sds+dss,\\
ssd+sds+dss&\to&0,\\
sss&\to&0.
\end{eqnarray*}
Since we have not produced any new vector, the irreducible representation lives in the space generated by $10$ weight vectors:
$$
\begin{array}{cc}
\mbox{weight}&\mbox{vector}\\
\hline
(3,0)&uuu\\
(1,1)&duu+udu+uud\\
(-1,2)&ddu+dud+udd\\
(-3,3)&ddd\\
(2,-1)&suu+usu+uus\\
(0,0)&sdu+dsu+sud+dus+usd+uds\\
(1,-2)&ssu+sus+uss\\
(-2,1)&sdd+dsd+dds\\
(-1,-1)&ssd+sds+dss\\
(0,-3)&sss
\end{array}
$$
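The whole computation can be automated in quark notation: represent a state as a dictionary mapping three-letter words to coefficients and apply $\pi(Y_1)$ ($u\to d$) and $\pi(Y_2)$ ($d\to s$) letter by letter. In the sketch below (ad hoc names) proportional states are identified by their set of words, which is legitimate here since all ten weight spaces turn out to be one dimensional:
\begin{verbatim}
from collections import defaultdict

def lower(state, src, dst):
    # apply Y (src -> dst) to each tensor factor and sum up (Leibniz rule)
    out = defaultdict(int)
    for word, c in state.items():
        for i, q in enumerate(word):
            if q == src:
                out[word[:i] + dst + word[i+1:]] += c
    return {w: c for w, c in out.items() if c}

states, queue, seen = [], [{'uuu': 1}], set()
while queue:
    st = queue.pop()
    key = frozenset(st)               # the set of words involved
    if st and key not in seen:
        seen.add(key); states.append(st)
        queue += [lower(st, 'u', 'd'), lower(st, 'd', 's')]

print(len(states))                    # 10
for st in states:
    print(st)
\end{verbatim}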
Let ${\cal P}_n$ be the homogeneous polynomials on $\C^3$ of degree $n\in\N_0$ and $\G:\SU(3)\rar\Hom({\cal P}_n)$ the representation $\G(U)f\colon=f\circ U^{-1}$. Let us first compute $\g(X)\colon=D\G(1)X$: For any holomorphic function $f:\C^3\rar\C$ we have (cf. section)
$$
\g(X)f(z)=\ftdl t0f(e^{-tX}z)=-(\pa_1f(z),\pa_2f(z),\pa_3f(z))Xz
$$
and thus
$$
\begin{array}{ccc}
\g(X_1)f(z)=-\pa_1f(z)z_2,&\g(X_2)f(z)=-\pa_2f(z)z_3,&\g(X_3)f(z)=-\pa_1f(z)z_3,\\
\g(Y_1)f(z)=-\pa_2f(z)z_1,&\g(Y_2)f(z)=-\pa_3f(z)z_2,&\g(Y_3)f(z)=-\pa_3f(z)z_1~.
\end{array}
$$
Next we will verify that any non trivial invariant subspace contains the polynomial $z_3^n$: $\g(X_2)$ and $\g(X_3)$ act on a polynomial by decreasing the degree in $z_2$ and $z_1$, respectively, by one and increasing the degree in $z_3$ by one. $z_1^{n_1}z_2^{n_2}z_3^{n_3}$ will be mapped by $\g(X_3)^{n_1}$ to a non zero multiple of $z_2^{n_2}z_3^{n_1+n_3}$ and this gets mapped by $\g(X_2)^{n_2}$ to a non zero multiple of $z_3^n$; all monomials $z_1^{l_1}z_2^{l_2}z_3^{l_3}$ with $l_1<n_1$ or $l_2<n_2$ are mapped to $0$ by $\g(X_2)^{n_2}\g(X_3)^{n_1}$. Now if $f\neq0$ is a polynomial in an invariant subspace, it must be a linear combination of monomials; we pick a monomial $p=z_1^{n_1}z_2^{n_2}z_3^{n_3}$ of $f$ with the lowest degree $n_3$ in $z_3$. Any other monomial $z_1^{l_1}z_2^{l_2}z_3^{l_3}$ of $f$ satisfies $l_3\geq n_3$ and $l_1+l_2+l_3=n$, hence $l_1<n_1$ or $l_2<n_2$; thus $\g(X_2)^{n_2}\g(X_3)^{n_1}f$ is just a non zero multiple of $z_3^n$. The space generated by the polynomials $\g(Y_2)^{l_2}\g(Y_3)^{l_1}z_3^n$ is just ${\cal P}_n$, i.e. $\g$ and thus $\G$ is irreducible. Finally $\g(X_1)z_3^n=\g(X_2)z_3^n=0$, i.e. $z_3^n$ is a highest weight cyclic vector with weight $(0,n)$, because $\g(H_1)z_3^n=0$ and $\g(H_2)z_3^n=-(z_2\pa_2z_3^n-z_3\pa_3z_3^n)=nz_3^n$.
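The formulas for $\g$ and the highest weight property of $z_3^n$ are easily double-checked symbolically, e.g. with sympy (a minimal sketch, ad hoc names):
\begin{verbatim}
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
zs = (z1, z2, z3)

def gamma(X, f):
    # gamma(X)f(z) = -(grad f)(z) . (Xz)
    Xz = sp.Matrix(X) * sp.Matrix(zs)
    return sp.expand(-sum(sp.diff(f, zs[i]) * Xz[i] for i in range(3)))

H1 = [[1,0,0],[0,-1,0],[0,0,0]]; H2 = [[0,0,0],[0,1,0],[0,0,-1]]
X1 = [[0,1,0],[0,0,0],[0,0,0]];  X2 = [[0,0,0],[0,0,1],[0,0,0]]

n = 4; f = z3**n
print(gamma(H1, f), gamma(H2, f))   # 0 and n*z3**n: weight (0, n)
print(gamma(X1, f), gamma(X2, f))   # 0, 0: z3^n is a highest weight vector
\end{verbatim}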
A weight $\l$ was defined as a pair of complex numbers: the eigen-values of $\psi(H_1)$ and $\psi(H_2)$ for some common eigen-vector $x$, but in fact for every $H\in{\cal H}$ we have $H=a_1H_1+a_2H_2$ and $\l(H)\colon=a_1\l(H_1)+a_2\l(H_2)$ is the eigen-value of $H$ with eigen-vector $x$. Thus we may conveniently think of a weight as a linear functional $\l$ on the space ${\cal H}$. Moreover, endowing ${\cal H}$ with the euclidean product:
$$
\la H,G\ra\colon=\tr(HG^*)
$$
we know that any linear functional $\l$ can be written as $\l(H)=\la H,G\ra$ for a unique $G\in{\cal H}$. Thus we've got our new definition: a weight of a representation $\psi$ of $\sla(3,\C)$ in some finite dimensional vector-space $E$ is a vector $\l\in{\cal H}$ for which there exists some $x\in E\sm\{0\}$ such that
$$
\forall H\in {\cal H}:\quad
\psi(H)x=\la H,\l\ra x
$$
Hence weights are particular vectors in the Cartan algebra ${\cal H}$ of $\sla(3,\C)$!
1. If $\l_1=diag\{x,y,z\}$, then $x+y+z=0$, $x-y=1$ and $y-z=0$, i.e. $y=-1/3$, $z=-1/3$ and $x=2/3$, i.e. $\l_1$ is the electric charge operator $Q$; analogously we get: $\l_2=diag\{1/3,1/3,-2/3\}$ which is the hypercharge operator $Y$.
3. $\la H_1+H_2,\l_1-\l_2\ra=1-0+0-1=0$.
$N({\cal H})$ acting on ${\cal H}$ via $\Ad$
For $g\in N({\cal H})$ and $H,G\in{\cal H}$ we have:
$$
\la\Ad(g)H,\Ad(g)G\ra
=\tr(gHg^{-1}(gGg^{-1})^*)
=\tr(gHG^*g^{-1})
=\tr(HG^*)
=\la H,G\ra
$$
and thus $\Ad(g)$, $g\in N({\cal H})$, acts isometrically on $({\cal H},\la.,.\ra)$. Also, roots are weights of the adjoint representation; hence, in our new interpretation, a root is an element $r\in{\cal H}$ such that for some root vector $X\in\sla(3,\C)$:
$$
\forall H\in{\cal H}:\quad
\ad(H)X=\la r,H\ra X~.
$$
The root of $X_1$ is $(2,-1)$, which is in our new interpretation: $2\l_1-\l_2=H_1$; similarly the root of $X_2$ is $(-1,2)$, which now becomes $-\l_1+2\l_2=H_2$; the root of $Y_1$ is $(-2,1)$, which now becomes $-2\l_1+\l_2=-H_1$ and finally the root of $Y_2$ is $(1,-2)$, which now becomes $\l_1-2\l_2=-H_2$.
If $\l$ is a weight of the representation $\psi=D\Psi(1)$ and $g\in N({\cal H})$, then $\Ad(g)\l$ is a weight as well and both weights have the same multiplicity:
$\proof$
First we show that for all $g\in N({\cal H})$ the vector $\Psi(g)x$ is a weight vector with weight $\Ad(g)\l$: for all $H\in{\cal H}$ we have $g^{-1}Hg\in{\cal H}$ and $\psi(g^{-1}Hg)=\Psi(g)^{-1}\psi(H)\Psi(g)$ (cf. example) and thus
\begin{eqnarray*}
\psi(H)\Psi(g)x
&=&\Psi(g)\Psi(g)^{-1}\psi(H)\Psi(g)x\\
&=&\Psi(g)\psi(g^{-1}Hg)x
=\Psi(g)(\la g^{-1}Hg,\l\ra x)
=\la g^{-1}Hg,\l\ra \Psi(g)x\\
&=&\la\Ad(g^{-1})H,\l\ra \Psi(g)x
=\la H,\Ad(g)\l\ra \Psi(g)x,
\end{eqnarray*}
where the last equality follows from the fact that $\Ad(g)$ acts isometrically on $({\cal H},\la.,.\ra)$. Therefore $\Psi(g)$ maps the weight space with weight $\l$ into the weight space with weight $\Ad(g)\l$, and since $\Ad$ and $\Psi$ are representations, $\Psi(g^{-1})$ is the inverse of this map, i.e. the dimensions of the weight spaces with weight $\l$ and $\Ad(g)\l$ coincide.
$\eofproof$
The permutations $w\in\{(213),(132),(321),(231),(312)\}$ (in one-line notation) act on the pair of vectors $(H_1,H_2)$ by permuting diagonal entries, yielding pairs $(w\cdot H_1,w\cdot H_2)$, which, with $H_3\colon=-(H_1+H_2)$, are given by
$$
(-H_1,-H_3),
(-H_3,-H_2),
(-H_2,-H_1),
(H_3,H_1),
(H_2,H_3)~.
$$
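These five pairs are quickly reproduced numerically: in one-line notation $w$ permutes the diagonal entries, $(w\cdot H)_{ii}=H_{w(i)w(i)}$. A short sketch:
\begin{verbatim}
import numpy as np

H1 = np.array([1, -1, 0]); H2 = np.array([0, 1, -1])
H3 = -(H1 + H2)                      # diag(-1, 0, 1)

for w in [(2,1,3), (1,3,2), (3,2,1), (2,3,1), (3,1,2)]:
    idx = np.array(w) - 1
    print(w, H1[idx], H2[idx])
# (2,1,3) -> (-H1, -H3),  (1,3,2) -> (-H3, -H2),  (3,2,1) -> (-H2, -H1),
# (2,3,1) -> ( H3,  H1),  (3,1,2) -> ( H2,  H3)
\end{verbatim}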
The action of the first permutation $(213)$ is a reflection about the line $\R\l_2=[\la.,H_1\ra=0]$, the action of the second is a reflection about the line $\R\l_1=[\la.,H_2\ra=0]$ and the action of the third is a reflection about the line $\R(\l_1-\l_2)=[\la.,H_3\ra=0]$. The fourth action is a rotation by $4\pi/3$ and the fifth is a rotation by $2\pi/3$. Hence the Weyl group is just the symmetry group of the regular triangle formed by $\l_3\colon=\l_1-\l_2,\l_2,-\l_1$, and the reflections about the three lines orthogonal to $H_j$, $j=1,2,3$, generate this group of symmetries. That's the way we look at the Weyl group: a subgroup of the group of isometries of the two dimensional space ${\cal H}$. The picture below illustrates the lattice points $\Z\l_1+\Z\l_2$ (which contains all possible weights), the regular triangle (of which the Weyl group is the symmetry group) and the positive sector
$$
P\colon=\R_0^+\l_1+\R_0^+\l_2
=[\la.,H_1\ra\geq0]\cap[\la.,H_2\ra\geq0]
$$
of non negative weights bounded by the half-lines through $\l_1$ and $\l_2$. In addition, a second sector indicates all weights lower than $-H_3$.
Some conclusions can be drawn from this interpretation immediately: 1. If $\l\neq0$ is any weight, then the set $\{w\cdot\l:w\in N({\cal H})/Z\}$ contains either three or six points, depending on whether $\l$ lies on one of the lines generated by $\l_1,\l_2$ or $\l_1-\l_2$ or not.
2. The vectors $\pm H_j$, $j=1,2,3$, form a regular hexagon and for every lattice point $\l\in\Z\l_1+\Z\l_2$ there is some $w\in N({\cal H})/Z$ such that $w\cdot\l$ lies in the positive sector $P$ generated by $\l_1$ and $\l_2$, i.e. both $\la w\cdot\l,H_1\ra$ and $\la w\cdot\l,H_2\ra$ are non negative.
$\mathbf{3}$-representation: Suppose we have a representation $\psi=D\Psi(1)$ which has the weight $(1,0)$, which now becomes $\l_1=diag\{2,-1,-1\}/3$; then it must also have the weights $diag\{-1,2,-1\}/3$ and $diag\{-1,-1,2\}/3$, which in old notation are $(-1,1)$ and $(0,-1)$.
$\mathbf{8}$-representation: This representation has the weight $(1,1)$, i.e. $\l_1+\l_2=-H_3=diag\{1,0,-1\}$, then it must have another five weights: $H_3,\pm H_1,\pm H_2$, which can be written as
$$
-\l_1+2\l_2,\quad
-\l_1-\l_2,\quad
2\l_1-\l_2,\quad
-2\l_1+\l_2,\quad
\l_1-2\l_2,
$$
or in old notation: $(-1,2),(-1,-1),(2,-1),(-2,1)$ and $(1,-2)$.
Assume a representation $\psi=D\Psi(1)$ has weights $(0,3)$ and $(1,1)$. Which additional weights must $\psi$ have? In our new approach these weights are given by $3\l_2$ and $-H_3$, thus it must have the additional weights $3(\l_1-\l_2)$, $-3\l_1$ and $H_3,\pm H_1,\pm H_2$.
Given a euclidean product such that $\pi(X)^*=-\pi(X)$ for all $X\in\su(3)$ (cf. section), we note that $X_j-Y_j,i(X_j+Y_j)\in\su(3)$ and thus
\begin{eqnarray*}
\pi(X_j)^*-\pi(Y_j)^*&=&-\pi(X_j)+\pi(Y_j)\quad\mbox{and}\\
-i\pi(X_j)^*-i\pi(Y_j)^*&=&-i\pi(X_j)-i\pi(Y_j),
\end{eqnarray*}
i.e. $\pi(X_j)^*=\pi(Y_j)$; in the standard representation this is just the familiar relation $X_j^*=Y_j$.
Convex hulls of weights of irreducible representations
Suppose $\l$ is the highest weight of an irreducible representation $\psi$. Then the convex hull $C$ of $\{w\cdot\l:w\in N({\cal H})/Z\}$ contains all weights of $\psi$. Indeed, we have just seen that there is some $w\in N({\cal H})/Z$ such that for any weight $\mu$ of $\psi$: $w\cdot\mu\in P$. Reflecting $\l$ about the lines $\R\l_1=[\la.,H_2\ra=0]$ and $\R\l_2=[\la.,H_1\ra=0]$ gives us two points $w_1\cdot\l$ and $w_2\cdot\l$ on the boundary of
$$
[\la.,\l_1\ra\leq\la\l,\l_1\ra]\cap
[\la.,\l_2\ra\leq\la\l,\l_2\ra]~.
$$
We claim that $w\cdot\mu$ is in the intersection $D$ of $P$ with the convex hull of $0,w_1\cdot\l,w_2\cdot\l,\l$, i.e.
$$
D=P\cap[\la.,\l_1\ra\leq\la\l,\l_1\ra]\cap[\la.,\l_2\ra\leq\la\l,\l_2\ra],
$$
which follows from the fact that $w\cdot\mu$ is lower than $\l$, i.e. $\la w\cdot\mu,\l_j\ra\leq\la\l,\l_j\ra$. Finally $0=\tfrac16\sum_w w\cdot\l\in C$, hence $D\sbe C$ and $\mu\in\bigcup w\cdot D\sbe\bigcup w\cdot C=C$.
If $\mu$ is a weight of $\psi$ and $w$ the reflection about the line $[\la.,H_j\ra=0]$, then every point $\nu\in\mu+\Z H_j$ on the segment joining $\mu$ and $w\cdot\mu$ is a weight:
$\proof$
The sub-algebras $A_j\colon=\lhull{H_j,X_j,Y_j}$, $j=1,2,3$, are all isomorphic to $\sla(2,\C)$. W.l.o.g. we may assume $j=1$. So let $F$ be the subspace of $E$ generated by all weight vectors in $E$ whose weights lie on the line $\mu+\R H_1$. These weights are shifted by $\psi(X_1)$ and $\psi(Y_1)$ by $\pm H_1$ and therefore the restriction $\psi|A_1$ gives a representation $\vp$ of $\sla(2,\C)$ in $F$. Since $w\cdot H_1=-H_1$ and $w^*=w$, we have:
$$
\la H_1,w\cdot\mu\ra=\la w\cdot H_1,\mu\ra=-\la H_1,\mu\ra~.
$$
By proposition $\vp(H_1)$ has the eigen-values
$$
-|\la H_1,\mu\ra|,-|\la H_1,\mu\ra|+2,\ldots,|\la H_1,\mu\ra|-2,|\la H_1,\mu\ra|,
$$
which coincides with the set $\{\la H_1,\nu\ra\}$, where $\nu$ ranges over the points $\nu=\mu+\Z H_1$ on the line segment joining $\mu$ to $w\cdot\mu$. Thus for any such $\nu$ there must be an eigen-vector $x\in F$ of $\vp(H_1)$ with $\vp(H_1)x=\la H_1,\nu\ra x$, i.e. $\nu$ is a weight.
$\eofproof$
$\proof$
We only need to prove that these conditions are sufficient. By lemma each point in $\l-\N_0H_1-\N_0H_2$ on the boundary of $C$ is a weight. So assume $\mu=\l-n_1H_1-n_2H_2$ is in the interior of $C$; w.l.o.g. we may also assume: $n_2\leq n_1$, then
$$
\mu=\l-mH_1+n_2H_3,
\quad\mbox{with}\quad
m=n_1-n_2~.
$$
Starting at $\mu$ in the direction of $-H_3$ we end up at $\nu\colon=\l-(n_1-n_2)H_1=\l-mH_1$ after $n_2$ steps. $\nu$ lies on the boundary of the convex hull $C$ of $\{w\cdot\l:w\in N({\cal H})/Z\}$, for it cannot lie on a boundary part of $C$ parallel to $H_3$: the intersection of the line $\l-mH_1+\R H_3$ with the line passing through $\l$ and parallel to $H_2$ is given by the equation $\l-mH_1+x_3H_3=\l-x_2H_2$; putting $a=x_2/m$ and $b=x_3/m$ gives $H_1=aH_2+bH_3$, i.e. $a=b=-1$ and thus $\l-x_2H_2=\l+mH_2$, which is not in $C$. Thus $\nu$ is a weight and so is its reflection $w\cdot\nu$ about the line orthogonal to $H_3$; by lemma $\mu$ must also be a weight.
$\eofproof$