← Root Systems → Properties of Representations of Semi-simple Lie-Algebras
What should you be acquainted with? 1. Linear algebra, in particular inner product spaces over both the real and the complex numbers. This chapter is essentially taken from Brian Hall, Lie Groups, Lie Algebras, and Representations, Chapter 9.

Representations of Semi-simple Lie-Algebras

Weights of Representations

Suppose $\psi:A\rar\Hom(E)$ is a representation of a semi-simple Lie-algebra $A$. A vector $\l$ in the Cartan algebra $H$ of $A$ is said to be a weight if there is a vector $x\in E\sm\{0\}$ such that $$ \forall h\in H:\quad \psi(h)x=\la h,\l\ra x~. $$ $x$ is called a weight vector and the sub-space of all weight vectors with weight $\l$ is called the weight space of $\psi$ for the weight $\l$. The dimension of the weight space is said to be the multiplicity of the weight $\l$.
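The definition can be illustrated numerically; the following sketch (Python with numpy, an illustration and not part of the original text) uses the defining representation of $\sla(3,\C)$, where the Cartan algebra consists of the traceless diagonal matrices and the standard basis vectors of $\C^3$ are weight vectors:

```python
import numpy as np

# Defining representation psi(h)x = hx of sl(3,C); the Cartan algebra H is
# taken to be the traceless diagonal matrices.  Each standard basis vector
# e_i of C^3 is then a weight vector: psi(h)e_i = h_{ii} e_i, i.e. the
# weight evaluates h at its i-th diagonal entry.
h = np.diag([1.0, 2.0, -3.0])      # a sample traceless element of H
for i in range(3):
    x = np.zeros(3)
    x[i] = 1.0                     # the weight vector e_{i+1}
    assert np.allclose(h @ x, h[i, i] * x)
```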
We choose a positive root system $R^+$ and for any $r\in R^+\sbe H$ we consider the sub-algebra $\lhull{r,x_r,y_r}$ of subsection isomorphic to $\sla(2,\C)$ - the isomorphism takes the co-root $r^\vee$ to the matrix $H\in\sla(2,\C)$, $x_r$ to $X$ and $y_r$ to $Y$! Then $\psi|\lhull{r,x_r,y_r}$ is a representation of $\sla(2,\C)$ and for any weight vector $x$ we have: $\psi(r^\vee)x=\la r^\vee,\l\ra x$. By proposition this implies that $\la r^\vee,\l\ra\in\Z$, i.e. every weight is integral!
Next we prove the generalization of proposition to semi-simple Lie-algebras:
Let $\psi$ be a finite dimensional representation of a semi-simple Lie-algebra $A$. If $\l$ is a weight for $\psi$, then for all $r\in R$ the vector $R_r(\l)\in H$ is also a weight for $\psi$ with the same multiplicity. Moreover $$ S_r^{-1}\colon=e^{-\frac12\pi\psi(x_r-y_r)} $$ maps the weight space for the weight $\l$ onto the weight space for the weight $R_r(\l)$.
$\proof$ We stick to the notation of theorem, but this time we put $S_r\colon=e^{\psi(v_r)}$ with $v_r\colon=\frac12\pi(x_r-y_r)$. The primary property of $v_r$ (cf. subsection): $e^{\ad(v_r)}=R_r$. By lemma this implies: $$ \forall h\in H:\quad S_r\psi(h)S_r^{-1} =\psi(e^{\ad(v_r)}h) =\psi(R_r(h))~. $$ Now assume that $x\in E$ is a weight vector for the weight $\l\in H$; taking into account the fact that $R_r^*=R_r$ we infer that $$ \forall h\in H:\quad \psi(h)S_r^{-1}x =S_r^{-1}\psi(R_r(h))x =S_r^{-1}\la R_r(h),\l\ra x =\la h,R_r(\l)\ra S_r^{-1}x, $$ i.e. $S_r^{-1}x$ is a weight vector for the weight $R_r(\l)$ and vice versa. As $S_r$ is an isomorphism both $\l$ and $R_r(\l)$ have the same multiplicity. $\eofproof$
Lemma generalizes easily: Suppose $\psi:A\rar\Hom(E)$ is a representation, $\l$ a weight for $\psi$ and $v\in E$ a weight vector. For any root $r$ with corresponding root vector $x$ the vector $\psi(x)v\in E$ is a weight vector for the weight $\l+r$: Since $\psi$ is a Lie-algebra homomorphism and for all $h\in H$: $[h,x]=\la h,r\ra x$, it follows that \begin{eqnarray*} \psi(h)\psi(x)v &=&[\psi(h),\psi(x)]v+\psi(x)\psi(h)v\\ &=&\psi([h,x])v+\psi(x)(\la h,\l\ra v) =\la h,r+\l\ra\psi(x)v \end{eqnarray*}
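This computation can be checked concretely for $\sla(2,\C)$ in its adjoint representation; a minimal numpy sketch (an illustration, not part of the original argument): $Y$ is a weight vector of weight $-2$ for the Cartan element $H$, $X$ is a root vector for the root $2$, and $\ad(X)Y$ has weight $-2+2=0$.

```python
import numpy as np

# The standard basis of sl(2,C) as 2x2 matrices.
H = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [0, 0]], dtype=complex)
Y = np.array([[0, 0], [1, 0]], dtype=complex)

def bracket(a, b):
    """Commutator bracket [a, b] = ab - ba."""
    return a @ b - b @ a

# In the adjoint representation psi = ad, Y is a weight vector for H with
# weight -2, and X is a root vector for the root +2:
assert np.allclose(bracket(H, Y), -2 * Y)
assert np.allclose(bracket(H, X), 2 * X)

# Applying psi(X) = ad(X) to Y yields ad(X)Y = [X, Y] = H, a weight vector
# of weight -2 + 2 = 0, as the lemma predicts:
v = bracket(X, Y)
assert np.allclose(v, H)
assert np.allclose(bracket(H, v), 0 * v)
```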
Let $\psi:A\rar\Hom(E)$ be an irreducible finite dimensional representation of a semi-simple Lie-algebra $A$.
  1. $\psi$ has a highest weight $\l$ and its multiplicity is $1$.
  2. If $\vp:A\rar\Hom(F)$ is another irreducible finite dimensional representation of $A$ with the same highest weight $\l$, then $\vp$ and $\psi$ are equivalent.
  3. The highest weight $\l$ is integral and dominant.
$\proof$ 1. We just copy the steps in the proof of proposition: As $E$ is finite dimensional there are only finitely many weights and we may choose a maximal weight $\l$ (with respect to the ordering $\preceq$) and a corresponding weight vector $v\in E$, i.e. for all $h\in H$: $\psi(h)v=\la h,\l\ra v$. Then for all $r\in R^+$: $\psi(x_r)v=0$ - otherwise it would be a weight vector with weight $\l+r$, which is strictly higher than $\l$. Moreover, as $\psi$ is irreducible, $v$ must be cyclic. Again, the space $W$ generated by $\psi(y_{r_1})\cdots\psi(y_{r_m})v$, $m\in\N$, is $\psi$-invariant and each $\psi(y_{r_1})\cdots\psi(y_{r_m})v$ is a weight vector with weight strictly lower than $\l$. Hence $W$ has a basis $v,v_1,\ldots,v_m$ of weight vectors and the weight of each $v_j$ is strictly lower than $\l$. By lemma $\l$ is the highest weight of $\psi$ and the only weight vector with weight $\l$ is a multiple of $v$. Indeed $\psi$ is a highest weight $\l$ cyclic representation, i.e.
  1. There exists a weight vector $v\in E\sm\{0\}$ with weight $\l$.
  2. For all $r\in R^+$ and all $x\in A_r\colon=\bigcap_{h\in H}\ker(\ad(h)-\la h,r\ra)$ - cf. subsection: $\psi(x)v=0$.
  3. $v$ is a cyclic vector, i.e. the space generated by $\psi(x)v$, $x\in A$ is all of $E$.
2. We reproduce the proof of proposition almost literally: Let $\vp:A\rar\Hom(F)$ be another irreducible representation with the highest weight $\l$ and let $u\in E$, $v\in F$ be weight vectors with this weight. Then $(u,v)$ is a weight vector of the representation $\pi:A\rar\Hom(E\times F)$, $$ \pi(x)(z,w)=(\psi(x)z,\vp(x)w). $$ Thus if $W$ denotes the sub-space generated by $\{\pi(x)(u,v):\,x\in A\}$, then $\pi:A\rar\Hom(W)$ is a highest weight cyclic representation, hence it's irreducible. Further, the projections $P:W\rar E$, $(z,w)\mapsto z$ and $Q:W\rar F$, $(z,w)\mapsto w$ are both intertwining operators, i.e. $P\pi(x)(z,w)=\psi(x)z=\psi(x)P(z,w)$ and $Q\pi(x)(z,w)=\vp(x)w=\vp(x)Q(z,w)$, and since $P(u,v)=u\neq0$ and $Q(u,v)=v\neq0$, both $P:W\rar E$ and $Q:W\rar F$ must be isomorphisms by Schur's lemma for Lie algebras, proving that both the representation $\psi:A\rar\Hom(E)$ and $\vp:A\rar\Hom(F)$ are equivalent to the irreducible representation $\pi:A\rar\Hom(W)$.
3. We already know that any weight is integral. Thus we are left to verify that the highest weight $\l$ is dominant, i.e. for all $r\in R^+$: $\la\l,r\ra\geq0$. If for some $r\in R^+$: $\la\l,r\ra < 0$, then by proposition $R_r(\l)=\l-2\la\l,r\ra r/\Vert r\Vert^2$ is a weight strictly higher than $\l$, contradicting the maximality of $\l$. $\eofproof$
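The reflection $R_r(\l)=\l-2\la\l,r\ra r/\Vert r\Vert^2$ used here is easily checked to be a self-adjoint involution; a short numerical sketch (Python with numpy, real coordinates chosen for simplicity):

```python
import numpy as np

def reflect(lam, r):
    """Root reflection R_r(lam) = lam - 2<lam,r> r / ||r||^2."""
    return lam - 2 * np.dot(lam, r) / np.dot(r, r) * r

rng = np.random.default_rng(0)
r = rng.standard_normal(4)
lam = rng.standard_normal(4)
mu = rng.standard_normal(4)

# R_r is an involution: applying it twice gives the identity.
assert np.allclose(reflect(reflect(lam, r), r), lam)
# R_r is self-adjoint: <R_r(lam), mu> = <lam, R_r(mu)>.
assert np.allclose(np.dot(reflect(lam, r), mu), np.dot(lam, reflect(mu, r)))
# R_r sends r to -r.
assert np.allclose(reflect(r, r), -r)
```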
The remainder of this chapter is devoted to the proof of
For every integral and dominant element $\l\in H$ there is an irreducible finite dimensional representation of a semi-simple Lie-algebra $A$ with highest weight $\l$.
Our first task will be the construction of a sort of universal associative algebra, which carries all representations. By an associative algebra $(A,+,.)$ we will mean a vector space over some field $\bK$ together with an associative, bi-linear multiplication $(x,y)\mapsto xy$ such that $(A,.)$ has a unit $1$. Hence every associative algebra $A$ is a ring with unit.
Suppose $A,B$ are associative algebras, then $A\otimes B$ with multiplication $(a_1\otimes b_1)(a_2\otimes b_2)\colon=a_1a_2\otimes b_1b_2$ is an associative algebra.
For each pair $(a,b)\in A\times B$ the mapping $(x,y)\mapsto ax\otimes by$ is bi-linear and thus there is a linear mapping $M_{a,b}\in\Hom(A\otimes B)$ satisfying $M_{a,b}(x\otimes y)=ax\otimes by$. Now the mapping $(a,b)\mapsto M_{a,b}$ is again a bi-linear mapping from $A\times B$ into $\Hom(A\otimes B)$. Hence there is a linear mapping $M:A\otimes B\rar\Hom(A\otimes B)$ satisfying $M(a\otimes b)(x\otimes y)=ax\otimes by$. This verifies that there is a bi-linear multiplication. Associativity is checked similarly.
Let $A,B,C$ be associative algebras and let $u:A\times B\rar C$ be a bi-linear mapping, such that $u(ax,by)=u(a,b)u(x,y)$. Then the unique linear mapping $\wt u:A\otimes B\rar C$ satisfying $\wt u(a\otimes b)=u(a,b)$ is an associative algebra homomorphism.
2. Suppose $I$ is a two sided ideal in $A$, i.e. $I$ is a sub-space of the vector space $A$ and for all $x,y\in A$: $xIy\sbe I$. Then $A/I$ is an associative algebra.
1. Since $\wt u(a\otimes b)=u(a,b)$ we get: \begin{eqnarray*} \wt u((a\otimes b)(x\otimes y)) &=&\wt u(ax\otimes by)\\ &=&u(ax,by)=u(a,b)u(x,y)=\wt u(a\otimes b)\wt u(x\otimes y) \end{eqnarray*} 2. For all $x,y\in A$ and all $j,k\in I$ we have: $(x+j)(y+k)=xy+jy+xk+jk\in xy+I$.
Suppose $A$ is an associative algebra and $A_1,A_2$ are sub-spaces of $A$, such that $A_1A_1,A_2A_2\sbe A_1$ and $A_1A_2,A_2A_1\sbe A_2$. Then on the direct sum $A_1\oplus A_2$ we may define an associative multiplication by: $$ (x_1\oplus x_2)(y_1\oplus y_2) \colon=(x_1y_1+x_2y_2)\oplus(x_1y_2+x_2y_1)~. $$ If $A_1$ has a unit $1$, then $1\oplus0$ is a unit of $A_1\oplus A_2$.
Let $A,B$ be associative algebras over the field $\bK$. Then the following holds:
  1. The algebra $A\otimes B\oplus A\otimes B$ is isomorphic to $(A\oplus A)\otimes B$.
  2. The algebras $\Ma(n,\bK)\otimes A$ and $\Ma(n,A)$ are isomorphic.
1. We essentially repeat the proof of proposition: Define a bi-linear map $F:(A\oplus A)\times B\rar A\otimes B\oplus A\otimes B$ by $(x\oplus y,z)\mapsto x\otimes z\oplus y\otimes z$. Then we have $$ F((x_1\oplus y_1)(x_2\oplus y_2),z_1z_2) =F(x_1\oplus y_1,z_1)F(x_2\oplus y_2,z_2) $$ and thus the linear mapping $\wh F:(A\oplus A)\otimes B\rar(A\otimes B)\oplus(A\otimes B)$ is an associative algebra homomorphism by exam. Also $J_1:A\times B\rar(A\oplus A)\otimes B$, $J_1(x,y)=(x\oplus0)\otimes y$ and $J_2:A\times B\rar(A\oplus A)\otimes B$, $J_2(x,y)=(0\oplus x)\otimes y$ induce associative algebra homomorphisms $\wh J_1,\wh J_2:A\otimes B\rar(A\oplus A)\otimes B$. We define $\wh J:A\otimes B\oplus A\otimes B\rar(A\oplus A)\otimes B$ by $\wh J(X\oplus Y)\colon=\wh J_1(X)+\wh J_2(Y)$ and verify as in proposition that $\wh F\circ\wh J=id$ and $\wh J\circ\wh F=id$.
2. Define $F:\Ma(n,\bK)\times A\rar\Ma(n,A)$ by $((u_{jk}),x)\mapsto(u_{jk}x)_{j,k=1}^n$, then: $$ F(uv,xy) =((uv)_{jk}xy)_{j,k=1}^n =\Big(\sum_l u_{jl}v_{lk}xy\Big)_{j,k=1}^n =\Big(\sum_l u_{jl}xv_{lk}y\Big)_{j,k=1}^n =F(u,x)F(v,y)~. $$ Hence $\wt F$ is an associative algebra homomorphism. As the matrices $E^{jk}$ form a basis for $\Ma(n,\bK)$, every $X\in\Ma(n,\bK)\otimes A$ has a unique expansion - cf. corollary: $$ X=\sum_{j,k}E^{jk}\otimes x_{jk}~. $$ Hence, given $(x_{jk})\in\Ma(n,A)$, then $$ (x_{jk}) =\sum_{j,k}E^{jk}x_{jk} =\wt F\Big(\sum_{j,k}E^{jk}\otimes x_{jk}\Big) $$ and thus $\wt F$ is an isomorphism.
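For $A$ itself a matrix algebra the isomorphism $\Ma(n,\bK)\otimes A\cong\Ma(n,A)$ is realized by the Kronecker product, and the multiplicativity $F(uv,xy)=F(u,x)F(v,y)$ becomes the mixed-product property of `np.kron`; a numerical sketch (an illustration only, with $A=\Ma(m,\R)$):

```python
import numpy as np

# For A = M(m,R) the map u (x) x -> (block matrix with u_{jk} x in slot
# (j,k)) is exactly the Kronecker product np.kron(u, x).
rng = np.random.default_rng(1)
n, m = 3, 2
u, v = rng.standard_normal((n, n)), rng.standard_normal((n, n))
x, y = rng.standard_normal((m, m)), rng.standard_normal((m, m))

# Multiplicativity F(uv, xy) = F(u,x) F(v,y), i.e. the mixed-product
# property kron(uv, xy) = kron(u,x) kron(v,y):
assert np.allclose(np.kron(u @ v, x @ y), np.kron(u, x) @ np.kron(v, y))
```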
The matrix algebra $\Ma(n,A)\oplus\Ma(n,A)$ is isomorphic to the sub-algebra of $\Ma(2n,A)$, consisting of matrices of the form $$ \left(\begin{array}{cc} X&Y\\ Y&X \end{array}\right) \quad\mbox{where $X,Y\in\Ma(n,A)$} $$
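The embedding $X\oplus Y\mapsto\left(\begin{smallmatrix}X&Y\\ Y&X\end{smallmatrix}\right)$ can be checked to be multiplicative for the direct-sum multiplication defined above; a small numpy sketch (scalar matrix entries chosen for simplicity):

```python
import numpy as np

def blk(X, Y):
    """The block matrix ((X, Y), (Y, X)) in M(2n, A)."""
    return np.block([[X, Y], [Y, X]])

rng = np.random.default_rng(3)
n = 2
X1, Y1, X2, Y2 = (rng.standard_normal((n, n)) for _ in range(4))

# Multiplication in M(n,A) + M(n,A) given by the direct-sum rule
# (X1 + Y1)(X2 + Y2) = (X1 X2 + Y1 Y2) + (X1 Y2 + Y1 X2) ...
prod = blk(X1 @ X2 + Y1 @ Y2, X1 @ Y2 + Y1 @ X2)
# ... agrees with ordinary matrix multiplication of the block matrices:
assert np.allclose(blk(X1, Y1) @ blk(X2, Y2), prod)
```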
If $A$ is an associative algebra over $\R$, then its complexification $A\otimes\C$ is an associative algebra over $\C$, i.e. multiplication is $\C$-bi-linear $$ (x\otimes a)(\l(y\otimes b)) =xy\otimes a\l b =\l(xy\otimes ab) =(\l(x\otimes a))(y\otimes b) $$

Universal Enveloping Algebra

Uniqueness of the universal enveloping algebra

Given a Lie-algebra $A$, an enveloping algebra
of $A$ is a pair $(U,j)$, where $(U,+,.)$ is an associative algebra with unit $1$ and $j:A\rar U$ a linear mapping such that $$ \forall x,y\in A:\quad j([x,y])=j(x)j(y)-j(y)j(x), $$ i.e. $j:A\rar(U,[.,.])$ is a Lie-algebra homomorphism, where the bracket on $U$ is given by $[u,v]\colon=uv-vu$.
If $\psi:A\rar\Hom(E)$ is a representation of a Lie-algebra $A$, then $(\Hom(E),\psi)$ is an enveloping algebra.
The universal enveloping algebra $({\cal U}(A),j)$ of $A$ is an enveloping algebra of $A$ with the following universal property: for any enveloping algebra $(U,\pi)$ of $A$ there is a unique associative algebra homomorphism $\Pi:{\cal U}(A)\rar U$ such that $\pi=\Pi\circ j$.
If $\psi:A\rar\Hom(E)$ is a representation of a Lie-algebra $A$, then there is a unique associative algebra homomorphism $\Psi:{\cal U}(A)\rar\Hom(E)$ such that $\psi=\Psi\circ j$, in particular: $\Psi(j(x)j(y))=\psi(x)\psi(y)$.
The universal property ensures uniqueness up to isomorphism: Suppose $({\cal V}(A),k)$ is another universal enveloping algebra, then there are unique associative algebra homomorphisms $\Pi:{\cal U}(A)\rar{\cal V}(A)$ and $\Phi:{\cal V}(A)\rar{\cal U}(A)$ such that $k=\Pi\circ j$ and $j=\Phi\circ k$. Thus $\Phi\circ\Pi\circ j=j$ and $\Pi\circ\Phi\circ k=k$, but the universal property says that the identity mappings are the only associative algebra homomorphisms with these properties, i.e. $\Phi\circ\Pi=id$ and $\Pi\circ\Phi=id$. Hence ${\cal V}(A)$ and ${\cal U}(A)$ are isomorphic as associative algebras.

Constructing the universal enveloping algebra

The construction of ${\cal U}(A)$ is similar to the construction of the exterior algebra or, more generally, the Clifford-algebra of a vector space $E$ with symmetric bi-linear form $B:E\times E\rar\C$: in the latter case we have instead of $j([x,y])=j(x)j(y)-j(y)j(x)$ the condition: $j(x)j(y)+j(y)j(x)=-2B(x,y)1$, where $1$ denotes the unit - the choice $B=0$ produces the exterior algebra. We start off with the construction of ${\cal U}(A)$ by repeating the definition of the tensor algebra $({\cal T}(A),+,\otimes)$ of the vector space $A$: $$ {\cal T}(A)\colon=\C\oplus A\oplus A\otimes A\oplus A\otimes A\otimes A\oplus\cdots =\C\oplus A\oplus\bigoplus_{p=2}^\infty{\cal T}_p(A) =\bigoplus_{p=0}^\infty{\cal T}_p(A) $$ with unit $1\oplus0\oplus0\cdots$. ${\cal T}(A)$ is infinite dimensional and elements in ${\cal T}_p(A)$ are called $p$-tensors or tensors of degree $p$. The importance of the tensor algebra is based on its universal property: for any linear map $\pi:A\rar B$ into an associative algebra $B$ there is a unique associative algebra homomorphism $\Pi:{\cal T}(A)\rar B$ such that $\Pi|A=\pi$: as $\Pi$ is a homomorphism we must have $$ \Pi(x_1\otimes\cdots\otimes x_p)\colon=\pi(x_1)\cdots\pi(x_p)~. $$ Next we define a two sided ideal $I$ in ${\cal T}(A)$ generated by all elements of the form $[x,y]-x\otimes y+y\otimes x$: $$ I(A)\colon=\Big\{\sum_{j=1}^nA_j\otimes([x_j,y_j]-x_j\otimes y_j+y_j\otimes x_j)\otimes B_j: n\in\N,A_j,B_j\in{\cal T}(A)\Big\} $$ and put ${\cal U}(A)\colon={\cal T}(A)/I(A)$. As $I(A)$ is a two sided ideal ${\cal U}(A)$ is again an associative algebra. It remains to check the universal property: So let $(U,\pi)$ be any enveloping algebra, then $\pi:A\rar U$ is linear and by the universal property of the tensor algebra there is a unique associative algebra homomorphism $\Pi:{\cal T}(A)\rar U$ such that $\Pi|A=\pi$. 
Moreover, for all $x,y\in A$ we have $$ \Pi([x,y]-x\otimes y+y\otimes x) =\pi([x,y])-\pi(x)\pi(y)+\pi(y)\pi(x), $$ which vanishes because $\pi:A\rar(U,[.,.])$ is by assumption a Lie-algebra homomorphism. As $\Pi$ is an associative algebra homomorphism we conclude that $I(A)\sbe\ker\Pi$ and thus there is a unique associative algebra homomorphism $\wh\Pi:{\cal T}(A)/I(A)\rar U$ such that $\Pi=\wh\Pi\circ Q$, where $Q:{\cal T}(A)\rar{\cal U}(A)$ denotes the quotient map. Finally we put $j\colon=Q|A$ and get for all $x\in A$: $\pi(x)=\Pi(x)=\wh\Pi(j(x))$. Though $\wh\Pi$ is the unique homomorphism satisfying $\Pi=\wh\Pi\circ Q$, this doesn't immediately mean that there is only one homomorphism ${\cal T}(A)/I(A)\rar U$ extending $\pi$. So let $\vp$ be another such homomorphism satisfying for all $x\in A$: $\vp(j(x))=\pi(x)$, then $\vp\circ Q:{\cal T}(A)\rar U$ is a homomorphism such that $(\vp\circ Q)|A=\pi$, i.e. $\vp\circ Q=\Pi=\wh\Pi\circ Q$; since $Q$ is onto we must have: $\vp=\wh\Pi$. This proves that $({\cal U}(A),j)$ is the universal enveloping algebra.
In the sequel we will use the symbols $+$ and $.$ to denote addition and multiplication in the quotient algebra ${\cal U}(A)$, i.e. for $\wh A,\wh B\in {\cal U}(A)$ and $Q(A)=\wh A$, $Q(B)=\wh B$: $$ \wh A+\wh B=Q(A+B),\quad \wh A\wh B=Q(AB),\quad Q^{-1}(\wh A)=A+I(A)~. $$ For the time being we don't know anything about the mapping $j$; it could be zero. If we further assume that there is a faithful representation $\pi_0:A\rar\Hom(E)$ - by
exam such a representation exists if $A$ is a matrix algebra - then $j$ is injective, for $\pi_0(x)=\wh\Pi_0(j(x))$. The Poincaré-Birkhoff-Witt Theorem will show that $j$ is injective for any finite dimensional Lie-algebra $A$. This implies that $j$ maps the subspace $A$ of ${\cal T}(A)$ isomorphically onto its image $j(A)$, so we may identify $A$ and $j(A)$!
Suppose $b_1,\ldots,b_d$ is a basis for the Lie-algebra $A$ over the field $\bK$. Then the vectors $$ j(b_1)^{k_1}\ldots j(b_d)^{k_d},\quad k_1,\ldots,k_d\in\N_0 $$ form a basis of the vector space ${\cal U}(A)$.
$\proof$ The proof of the reordering lemma shows that any expression of the form $j(b_{i_1})\cdots j(b_{i_m})$ can be expressed as a linear combination of terms of the form $j(b_1)^{k_1}\cdots j(b_d)^{k_d}$; if $l > i$ we just use the commutation relation $j(b_l)j(b_i)=j(b_i)j(b_l)+j([b_l,b_i])$ and the structure constants: $$ j([b_l,b_i])=\sum_mc_{li}^m\,j(b_m)~. $$ The proof that these vectors are linearly independent is much more involved: We will find it convenient to write the element $j(b_1)^{n_1}\ldots j(b_d)^{n_d}$ as $$ j(b_{n_1})\ldots j(b_{n_N}),\quad 1\leq n_1\leq\cdots\leq n_N\leq d,\quad N\in\N_0 $$ and declare an index set ${\cal I}\colon=\{(n_1,\ldots,n_N): 1\leq n_1\leq\cdots\leq n_N\leq d,N\in\N_0\}$. We are going to construct a linear map $\d:{\cal T}(A)\rar V$ into the vector space $V$ generated by a basis $v_{(n_1,\ldots,n_N)}$, $(n_1,\ldots,n_N)\in{\cal I}$. To keep the formulas more readable we omit the symbol for the multiplication $\otimes$ in the tensor algebra ${\cal T}(A)$. $\d$ should have the following two properties: \begin{equation}\label{ueaeq1}\tag{UEA1} \d(b_{n_1}\cdots b_{n_N})=v_{(n_1,\ldots,n_N)}~. \end{equation} The second property will be that for all $N\in\N$ and all $N$-tuples $(n_1,\ldots,n_N)\in\{1,\ldots,d\}^N$: \begin{equation}\label{ueaeq2}\tag{UEA2} \d(b_{n_1}\cdots b_{n_k} b_{n_{k+1}}\cdots b_{n_N}-b_{n_1}\cdots b_{n_{k+1}} b_{n_k}\cdots b_{n_N}-b_{n_1}\cdots [b_{n_k},b_{n_{k+1}}]\cdots b_{n_N})=0~. \end{equation} As any element in the ideal $I(A)$ is a linear combination of elements of the form $$ b_{n_1}\cdots b_{n_k} b_{n_{k+1}}\cdots b_{n_N}-b_{n_1}\cdots b_{n_{k+1}} b_{n_k}\cdots b_{n_N}-b_{n_1}\cdots [b_{n_k},b_{n_{k+1}}]\cdots b_{n_N} $$ the linear map $\d$ vanishes on $I(A)$ and thus induces a linear map $\wh\d:{\cal T}(A)/I(A)\rar V$ satisfying $\d=\wh\d\circ Q$. As $\d(b_I)=v_I$ we infer that $Q(b_I)$, $I\in{\cal I}$, must be linearly independent.
Before embarking on the general case a few pivotal cases will be elucidated; it will turn out that the existence of the map $\d$ relies basically on one relation: the Jacobi identity!
1. Suppose we have two basis vectors $b_j$ and $b_k$; for $j\leq k$ \eqref{ueaeq1} dictates that: $\d(b_jb_k)=v_{(j,k)}$ and in case $j > k$ by \eqref{ueaeq2} we need to put: $$ \d(b_jb_k)\colon=\d(b_kb_j)+\d([b_j,b_k]), $$ which we simply write as $jk\colon=kj+[j,k]$, so in particular for \eqref{ueaeq2} we now write: \begin{equation}\label{ueaeq3}\tag{UEA3} n_1\cdots n_kn_{k+1}\cdots n_N-n_1\cdots n_{k+1}n_k\cdots n_N-n_1\cdots [n_k,n_{k+1}]\cdots n_N=0~. \end{equation} 2. Next look at the case of a triple: $(i,j,k)\in\{1,\ldots,d\}^3$ - we assume that $\d$ is already defined on all pairs. By the first property \eqref{ueaeq1} $ijk$ is defined only for $i\leq j\leq k$.
i. If $i\leq k < j$ then define $$ ijk\colon=ikj+i[j,k] $$ ii. If $k < i \leq j$, then: $$ ijk\colon=ikj+i[j,k],\quad ikj\colon=kij+[i,k]j,\quad\mbox{i.e.}\quad ijk=kij+[i,k]j+i[j,k] $$ iii. If $k < j < i$ then there are two ways: $$ ijk=ikj+i[j,k],\quad ikj=kij+[i,k]j,\quad kij=kji+k[i,j],\quad\mbox{i.e.}\quad ijk=kji+i[j,k]+[i,k]j+k[i,j] $$ Another way: $$ ijk=jik+[i,j]k,\quad jik=jki+j[i,k],\quad jki=kji+[j,k]i,\quad\mbox{i.e.}\quad ijk=kji+[i,j]k+j[i,k]+[j,k]i $$ As $\d$ must be well defined, we must have $$ 0=i[j,k]-[j,k]i+[i,k]j-j[i,k]+k[i,j]-[i,j]k =[i,[j,k]]+[[i,k],j]+[k,[i,j]], $$ which is Jacobi's identity.
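For matrix Lie-algebras the consistency condition just derived, which is Jacobi's identity in the grouping $[i,[j,k]]+[[i,k],j]+[k,[i,j]]=0$, can be verified numerically; a minimal sketch (random matrices under the commutator bracket):

```python
import numpy as np

def br(a, b):
    """Commutator bracket [a, b] = ab - ba."""
    return a @ b - b @ a

rng = np.random.default_rng(2)
i, j, k = (rng.standard_normal((3, 3)) for _ in range(3))

# Jacobi's identity in the grouping appearing in the text:
jacobi = br(i, br(j, k)) + br(br(i, k), j) + br(k, br(i, j))
assert np.allclose(jacobi, np.zeros((3, 3)))
```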
3. Finally let's look at a quadruple $ijkl$. We just delineate the case $j < i\leq l < k$: $$ ijkl=ijlk+ij[k,l]=jilk+[i,j]lk+ji[k,l]+[i,j][k,l] $$ or the other way: $$ ijkl=jikl+[i,j]kl=jilk+ji[k,l]+[i,j]lk+[i,j][k,l] $$ but both expressions coincide.
4. The general case will be verified by double induction: induction on the length $N$ and on the `index' of $n_1\cdots n_N$, where the `index' $p$ is defined to be the number of pairs $j < k$ for which $n_j > n_k$. So assume $\d$ is well defined up to length $N$ and up to `index' $p$. Assume $n_1\ldots n_{N}$ has `index' $p+1$. Then there is at least one $j\in\{1,\ldots,N-1\}$ such that $n_j > n_{j+1}$ and we can put $$ n_1\ldots n_jn_{j+1}\ldots n_{N} \colon=n_1\ldots n_{j+1}n_j\ldots n_{N}+n_1\ldots [n_j,n_{j+1}]\ldots n_{N}~. $$ As $j$ is not unique, we need to check that we get the same result if we'd chosen another element $k\in\{1,\ldots,N-1\}$ satisfying $n_k > n_{k+1}$. W.l.o.g. we may assume $j < k$ and thus there are two cases: $j+1 < k$, which is essentially covered by 3., and $j+1=k$, which is essentially discussed under 2. Hence the result does not depend on the choice of $j$ and since $n_1\ldots n_{j+1}n_j\ldots n_{N}$ has `index' $p$ and $n_1\ldots [n_j,n_{j+1}]\ldots n_{N}$ length $N-1$, our inductive step on the `index' $p$ is complete.
Finally we define $\d(b_{n_1}\ldots b_{n_{N+1}})$ for $n_1\leq\cdots\leq n_{N+1}$ by \eqref{ueaeq1} and proceed by induction on the `index` $p$ as before. $\eofproof$
What is the dimension of $Q({\cal T}_p(A))\sbe{\cal U}(A)$?
That's the number of basis vectors $j(b_1)^{n_1}\cdots j(b_d)^{n_d}$ of degree $k=\sum_{j=1}^dn_j$, i.e. the number of ways to distribute $k$ indistinguishable presents among $d$ children, which is $$ \binom{k+d-1}{k}~. $$
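The stars-and-bars count can be confirmed by brute force; a small Python sketch (an illustration, not part of the text):

```python
from itertools import product
from math import comb

def count_monomials(d, k):
    """Count exponent tuples (n_1,...,n_d) in N_0^d with n_1+...+n_d = k."""
    return sum(1 for n in product(range(k + 1), repeat=d) if sum(n) == k)

# Compare the brute-force count with the stars-and-bars formula C(k+d-1, k):
for d, k in [(2, 3), (3, 4), (4, 5)]:
    assert count_monomials(d, k) == comb(k + d - 1, k)
```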
If $A$ is commutative i.e. $[.,.]=0$ and $\dim A=n$, then ${\cal U}(A)$ is isomorphic to the associative algebra $\bK[X_1,\ldots,X_n]$ of polynomials in $n$ indeterminates over the field $\bK$.
For $A=\sla(2,\C)$ a basis for ${\cal U}(A)$ is given by $H^lX^mY^n$, $l,m,n\in\N_0$, where multiplication denotes tensorization, i.e. $H^l$ stands for the $l$-fold tensor product $H\otimes\cdots\otimes H$, $XY$ for $X\otimes Y$, etc.
Suppose $B$ is a sub-algebra of the Lie-algebra $A$ and $i:B\rar A\rar{\cal U}(A)$ the canonical inclusion. Then the unique associative algebra homomorphism $I:{\cal U}(B)\rar{\cal U}(A)$ extending $i$ is one-one.
Let $b_1,\ldots,b_d$ be a basis for the vector space $E$. Then the Clifford-algebra $\Cl(E,B)$ is spanned by: $$ e_{n_1}\cdots e_{n_N},\quad 1\leq n_1 < \cdots < n_N\leq d,\quad N\in\{0,\ldots,d\}~. $$ The dimension of $\Cl(E,B)$ is $2^d$.
2. If $A$ is another associative algebra and $\pi:E\rar A$ a linear map such that for all $x\in E$: $\pi(x)^2+B(x,x)1=0$, then there is exactly one associative algebra homomorphism $\wh\Pi:\Cl(E,B)\rar A$ such that $\wh\Pi\circ j=\pi$, where $j=Q|E$ is the restriction of the quotient map $Q:{\cal T}(E)\rar\Cl(E,B)$.
Define $\d$ on the presumed basis as in the proof of theorem but instead of \eqref{ueaeq3} declare for any $(n_1,\ldots,n_N)\in\{1,\ldots,d\}^N$: $$ n_1\cdots n_kn_{k+1}\cdots n_N+n_1\cdots n_{k+1}n_k\cdots n_N+2B(n_k,n_{k+1})n_1\cdots n_{k-1}n_{k+2}\cdots n_N=0~. $$ We only handle the case $k < j < i$: $$ ijk=-ikj-2B(j,k)i,\quad -ikj=kij+2B(i,k)j,\quad kij=-kji-2B(i,j)k $$ and therefore $ijk=-kji-2B(j,k)i+2B(i,k)j-2B(i,j)k$. Following the other way: $$ ijk=-jik-2B(i,j)k,\quad -jik=jki+2B(i,k)j,\quad jki=-kji-2B(j,k)i $$ and therefore $ijk=-kji-2B(i,j)k+2B(i,k)j-2B(j,k)i$, which is exactly the same expression. Thus apart from symmetry we do not need any restriction on $B$!
2. If $\pi(x)^2+B(x,x)1=0$ for all $x\in E$, then also $\pi(x+y)^2+B(x+y,x+y)1=0$, i.e. $$ 0=\pi(x)^2+\pi(y)^2+\pi(x)\pi(y)+\pi(y)\pi(x)+B(x,x)+B(y,y)+2B(x,y) =\pi(x)\pi(y)+\pi(y)\pi(x)+2B(x,y)~. $$ Hence the homomorphism $\Pi:{\cal T}(E)\rar A$ factorizes through $\Cl(E,B)$, i.e. there is a homomorphism $\wh\Pi:\Cl(E,B)\rar A$ such that $\Pi=\wh\Pi\circ Q$. As for uniqueness suppose $\vp:\Cl(E,B)\rar A$ is another homomorphism satisfying $\vp(j(x))=\pi(x)$. Then $\vp\circ Q:{\cal T}(E)\rar A$ is a homomorphism and $\vp\circ Q(x)=\pi(x)$ for all $x\in E$, i.e. $\vp\circ Q=\Pi=\wh\Pi\circ Q$. Since $Q$ is onto we conclude that $\vp=\wh\Pi$.
For $B=0$ the algebra $\Cl(E,B)$ is denoted by $\O(E)$ or $\L(E)$ and is called the exterior algebra. Multiplication is commonly denoted by $\wedge$, so a basis for $\O(E)$ is given by $$ e_{n_1}\wedge\cdots\wedge e_{n_N},\quad 1\leq n_1 < \cdots < n_N\leq d,\quad N\in\{0,\ldots,d\} $$ and $x\wedge y=-y\wedge x$. If $A$ is another associative algebra and $\pi:E\rar A$ a linear map such that for all $x\in E$: $\pi(x)^2=0$, then there is exactly one associative algebra homomorphism $\wh\Pi:\O(E)\rar A$ such that $\wh\Pi\circ j=\pi$.
Let $\Cl(d)$ be the Clifford-algebra of $\R^d$ with $B$ the canonical euclidean product. Show that $\Cl(1)$ is isomorphic to the complex numbers $\C$ and $\Cl(2)$ is isomorphic to the quaternions $\H$.
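For $\Cl(2)$ one possible line of attack is to realize the generators as $2\times2$ complex matrices; the following sketch (one particular choice of matrices, offered as a hint rather than the intended solution) verifies the Clifford relations and that $1,e_1,e_2,e_1e_2$ multiply like the quaternions $1,i,j,k$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
e1 = np.array([[1j, 0], [0, -1j]])               # plays the role of i
e2 = np.array([[0, 1], [-1, 0]], dtype=complex)  # plays the role of j

# Clifford relations for the euclidean form B on R^2:
# e_a e_b + e_b e_a = -2 B(e_a, e_b) = -2 delta_{ab}.
assert np.allclose(e1 @ e1, -I2)
assert np.allclose(e2 @ e2, -I2)
assert np.allclose(e1 @ e2 + e2 @ e1, 0 * I2)

# The four basis elements 1, e1, e2, e1e2 multiply like 1, i, j, k:
k = e1 @ e2
assert np.allclose(k @ k, -I2)
assert np.allclose(e1 @ e2, -e2 @ e1)
```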

Verma Modules

Construction of Verma modules

An associative algebra homomorphism $\psi:{\cal U}(A)\rar\Hom(E)$ will also be called a representation of ${\cal U}(A)$. We have a sort of left-regular representation (cf.
subsection) $\g:{\cal U}(A)\rar\Hom({\cal U}(A))$, $\g(X)Y\colon=XY$; it's indeed a representation: $$ \g(X_1X_2)Y =(X_1X_2)Y =X_1(X_2Y) =\g(X_1)(\g(X_2)Y) =(\g(X_1)\circ\g(X_2))Y, $$ i.e. $\g$ is an associative algebra homomorphism. Now suppose $I$ is a left ideal in ${\cal U}(A)$, i.e. $I$ is invariant under $\g$, then $\g(X)(Y+I)=XY+XI\sbe XY+I$, which shows that we get a representation $\wh\g_I:{\cal U}(A)\rar\Hom({\cal U}(A)/I)$ satisfying \begin{equation}\label{vemeq1}\tag{VEM1} \wh\g_I(X)q_I(Y)=q_I(\g(X)Y), \end{equation} where $q_I:{\cal U}(A)\rar{\cal U}(A)/I$ denotes the quotient map. Restricting $\wh\g_I$ to $j(A)\sbe{\cal U}(A)$ we get a representation of the Lie-algebra $(A,[.,.])$: for all $x,y\in A$ we indeed have $$ \wh\g_I(j([x,y])) =\wh\g_I(j(x)j(y)-j(y)j(x)) =\wh\g_I(j(x))\wh\g_I(j(y))-\wh\g_I(j(y))\wh\g_I(j(x)) =[\wh\g_I(j(x)),\wh\g_I(j(y))]~. $$ Writing $\g_I:A\rar\Hom({\cal U}(A)/I)$ for $\wh\g_I\circ j$ we see that $$ \forall x,y\in A:\quad\g_I([x,y])=[\g_I(x),\g_I(y)]~. $$ A subspace $W$ of ${\cal U}(A)/I$ is $\g_I$-invariant iff for all $x\in A$: $\g_I(x)W\sbe W$, i.e. iff for all $x\in A$: $\g(x)(q_I^{-1}(W))\sbe q_I^{-1}(W)$, which by theorem holds iff $E\colon=q_I^{-1}(W)$ is invariant under all $X\in{\cal U}(A)$, and this means that $E$ is a left ideal in ${\cal U}(A)$. In particular $\g_I$ is irreducible iff $I$ and ${\cal U}(A)$ are the only left ideals containing $I$, i.e. $I$ must be a maximal left ideal.
Ideals in ${\cal U}(A)$ can be found via representations $\psi:A\rar\Hom(E)$: By exam any representation $\psi:A\rar\Hom(E)$ of the Lie-algebra $A$ gives rise to a unique representation $\Psi:{\cal U}(A)\rar\Hom(E)$ such that $\psi=\Psi\circ j$ and $\ker\psi\colon=\{X\in{\cal U}(A):\psi(X)=0\}$ is a two sided ideal in ${\cal U}(A)$. In proving the following lemma we will define for each $\l\in H$ a one dimensional representation $\s_\l:B\rar\C$: let $R^+$ be a system of positive roots in the Cartan algebra $H$ of $A$; put $$ A^+\colon=\lhull{\{A_r:\,r\in R^+\}},\quad A^-\colon=\lhull{\{A_r:\,r\in -R^+\}},\quad\mbox{and}\quad B\colon=H\oplus A^+~. $$ By proposition all these sub-spaces $A^+,A^-$ and $B$ are sub-algebras of $A$.
Let $J_\l$ be the left ideal in ${\cal U}(B)$ generated by $A^+$ and all elements of the form $h-\la h,\l\ra1$, $h\in H$. Then $1\notin J_\l$.
$\proof$ We define the one dimensional representation $\s_\l:B\rar\C$ by $$ \forall h\in H\,\forall x\in A^+:\quad \s_\l(h+x)\colon=\la h,\l\ra, $$ which is indeed a representation, because for $h,k\in H=A_0$, $x\in A_r$ and $y\in A_s$ we have by proposition: $$ [h+x,k+y] =[h,y]+[x,k]+[x,y] \in A_s+A_r+A_{r+s} \sbe A^+ $$ and therefore for all $x,y\in A^+$: $[h+x,k+y]\in A^+$. Hence $\s_\l([h+x,k+y])=0$ and as $\C$ is commutative, $\s_\l$ is a representation. Let $\Sigma_\l:{\cal U}(B)\rar\C$ be the unique representation extending $\s_\l$, then $\ker\Sigma_\l$ is an ideal containing $A^+$; moreover we have for all $h\in H$: $$ \Sigma_\l(h-\la h,\l\ra1) =\s_\l(h)-\la h,\l\ra\Sigma_\l(1) =\la h,\l\ra-\la h,\l\ra =0 $$ Hence $J_\l\sbe\ker\Sigma_\l$. On the other hand we get by the definition of $\Sigma_\l:{\cal U}(B)\rar\C$: $\Sigma_\l(1)=1$, i.e. $1\notin J_\l$. $\eofproof$
For any vector $\l\in H$ of $A$ let $I_\l$ be the left ideal in ${\cal U}(A)$ generated by the set of all root vectors $x\in A^+$ and the set of all vectors $h-\la h,\l\ra1$, $h\in H$. The quotient $W_\l\colon={\cal U}(A)/I_\l$ is called a Verma module with weight $\l$ and quotient map $q_\l:{\cal U}(A)\rar W_\l$.
The vector $v_0\colon=q_\l(1)$ is non zero and $\g_\l:A\rar\Hom(W_\l)$, $\g_\l\colon=\g_{I_\l}$, is a highest weight $\l$ cyclic representation with highest weight vector $v_0$. Moreover, if $y_1,\ldots,y_k$ is a basis for $A^-$, then the elements $$ \g_\l(y_1)^{n_1}\cdots\g_\l(y_k)^{n_k}v_0,\quad n_1,\ldots,n_k\in\N_0, $$ form a basis for $W_\l$. Thus $\g_\l:A\rar\Hom(W_\l)$ is an infinite dimensional representation.
$\proof$ 1. $v_0\neq 0$: We choose a basis $y_1,\ldots,y_k$ for $A^-$. By theorem - applied to the decomposition $A=A^-\oplus B$ - any element $X\in{\cal U}(A)$ can be written as $$ X=\sum y_1^{n_1}\cdots y_k^{n_k}b(n_1,\ldots,n_k) $$ with uniquely determined coefficients $b(n_1,\ldots,n_k)\in{\cal U}(B)$. Now assume that $X\in I_\l$, then $X$ is a linear combination of elements of the form $$ y_1^{n_1}\cdots y_k^{n_k}b_1(n_1,\ldots,n_k)(h-\la h,\l\ra1)\quad\mbox{and}\quad y_1^{n_1}\cdots y_k^{n_k}b_2(n_1,\ldots,n_k)x_r, $$ with $b_1(n_1,\ldots,n_k),b_2(n_1,\ldots,n_k)\in{\cal U}(B)$ and $r\in R^+$; note that $b_1(n_1,\ldots,n_k)(h-\la h,\l\ra1)$ and $b_2(n_1,\ldots,n_k)x_r$ lie in the left ideal $J_\l$. As the expansion of $X$ in terms of $y_1^{n_1}\cdots y_k^{n_k}$ with coefficients in ${\cal U}(B)$ is unique, we must have: $b(n_1,\ldots,n_k)\in J_\l\sbe{\cal U}(B)$. For $X=1$ we therefore conclude that the only non zero coefficient is $b(0,\ldots,0)$ and this coefficient must be $1$ - the unit in ${\cal U}(B)$. However, by lemma this is impossible.
2. $v_0$ is a weight vector for $\g_\l$ with weight $\l$: For all $x\in A$ and all $Y\in{\cal U}(A)$ we have by \eqref{vemeq1}: $\g_\l(x)q_\l(Y)=q_\l(\g(x)Y)$; this implies that for all $h\in H$: $$ \g_\l(h-\la h,\l\ra1)v_0 =\g_\l(h-\la h,\l\ra1)q_\l(1) =q_\l(\g(h-\la h,\l\ra1)1) =q_\l(h-\la h,\l\ra1) =0 $$ because $h-\la h,\l\ra1\in I_\l$. Hence for all $h\in H$: $\g_\l(h)v_0=\la h,\l\ra v_0$, which means that $v_0$ is a weight vector for $\g_\l$ with weight $\l$. 3. For all $x\in A^+$: $\g_\l(x)v_0=0$: It suffices to verify this for all $x\in A_r$, $r\in R^+$: $$ \g_\l(x)v_0 =q_\l(\g(x)1) =q_\l(x) =0~. $$ 4. $v_0$ is cyclic: Suppose that $V_0$ is a $\g_\l$-invariant sub-space of $W_\l$ containing $v_0$, then $V_0$ is $\wh\g_\l$-invariant because all elements in ${\cal U}(A)$ are sums of products of elements in $A$. Hence for all $X\in{\cal U}(A)$: $$ \wh\g_\l(X)v_0 =q_\l(\g(X)1) =q_\l(X), $$ which shows that $V_0=W_\l$.
5. Assume to the contrary that these elements are linearly dependent, i.e. for some coefficients $b(n_1,\ldots,n_k)\in\C$, not all zero: $$ \sum b(n_1,\ldots,n_k)\g_\l(y_1)^{n_1}\cdots\g_\l(y_k)^{n_k}v_0=0~. $$ By \eqref{vemeq1} this means that in ${\cal U}(A)$: $$ \sum y_1^{n_1}\cdots y_k^{n_k}b(n_1,\ldots,n_k)\in I_\l~. $$ By 1. this implies that all coefficients $b(n_1,\ldots,n_k)$ must lie in $J_\l$, but by lemma the only constant in $J_\l$ is zero. $\eofproof$
If $y_1,\ldots,y_k$ is a basis of root vectors for $A^-$ corresponding to the positive roots $R^+=\{r_1,\ldots,r_k\}$, then $\g_\l(y_1)^{n_1}\cdots\g_\l(y_k)^{n_k}v_0$ is a weight vector for $\g_\l$ with weight $\mu\colon=\l-n_1r_1-\cdots-n_kr_k$. Moreover, the multiplicity of $\mu$ is the number of $k$-tuples $(m_1,\ldots,m_k)\in\N_0^k$ satisfying the equation: $\sum m_jr_j=\l-\mu$. In particular $W_\l$ is the direct sum of its weight spaces, all of which are finite dimensional.
We know that if $v$ is a weight vector with weight $\mu$, then $\g_\l(y_r)v$ is a weight vector with weight $\mu-r$. Hence $\g_\l(y_1)^{n_1}\cdots\g_\l(y_k)^{n_k}v_0$ is a weight vector for $\g_\l$ with weight $\mu\colon=\l-n_1r_1-\cdots-n_kr_k$. By theorem the set of these vectors $\g_\l(y_1)^{n_1}\cdots\g_\l(y_k)^{n_k}v_0$ form a basis for $W_\l$, i.e. we've found all weights of $\g_\l:A\rar\Hom(W_\l)$ and $W_\l$ is the direct sum of its weight spaces.
For $A=\sla(3,\C)$ we have $R^+=\{H_1,H_2,H_1+H_2\}$. Find all solutions $(m_1,m_2,m_3)\in\N_0^3$ of the equation $m_1H_1+m_2H_2+m_3(H_1+H_2)=3H_1+4H_2$.
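The exercise can be checked by brute force; a small Python sketch (the enumeration bounds are chosen by hand, using $m_3\leq3$, $m_2\leq4$):

```python
from itertools import product

# Solve m1*H1 + m2*H2 + m3*(H1+H2) = 3*H1 + 4*H2 in N_0^3; comparing the
# coefficients of H1 and H2 gives m1 + m3 = 3 and m2 + m3 = 4.
solutions = [(m1, m2, m3)
             for m1, m2, m3 in product(range(5), repeat=3)
             if m1 + m3 == 3 and m2 + m3 == 4]
print(solutions)   # [(0, 1, 3), (1, 2, 2), (2, 3, 1), (3, 4, 0)]
```

So the weight $3H_1+4H_2$ below the highest weight has multiplicity $4$ in the Verma module.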

Verma modules of $\sla(2,\C)$

For the Lie-algebra $A=\sla(2,\C)$ we have: $R^+=\{H\}$ and $I_\l$ is the ideal generated by $H-\la H,\l\ra1$ and $X$. A basis for the Verma module $W_\l$ is $v_m\colon=\g_\l(Y)^mv_0$, $m\in\N_0$. As $\la H,H\ra=2$ it follows by induction on $m$: \begin{eqnarray*} \g_\l(H)v_m&=&(\la H,\l\ra-2m)v_m\\ \g_\l(X)v_m&=&m(\la H,\l\ra-m+1)v_{m-1}\\ \g_\l(Y)v_m&=&v_{m+1}~. \end{eqnarray*} We just verify the inductive step for $\g_\l(X)v_m$: \begin{eqnarray*} \g_\l(X)v_m &=&[\g_\l(X),\g_\l(Y)]v_{m-1}+\g_\l(Y)\g_\l(X)v_{m-1}\\ &=&\g_\l(H)v_{m-1}+\g_\l(Y)(m-1)(\la H,\l\ra-m+2)v_{m-2}\\ &=&(\la H,\l\ra-2(m-1))v_{m-1}+(m-1)(\la H,\l\ra-m+2)v_{m-1} =m(\la H,\l\ra-m+1)v_{m-1} \end{eqnarray*} In section we defined for any $l\in\N_0$ a representation $\psi_l:\sla(2,\C)\rar\Hom(\lhull{u_{-l},u_{-l+2},\ldots,u_l})$ by: \begin{eqnarray*} \psi_l(H)u_{l-2m}&=&(l-2m)u_{l-2m},\quad m=0,\ldots,l\\ \psi_l(X)u_{l-2m}&=&m(l-m+1)u_{l-2(m-1)},\\ \psi_l(Y)u_{l-2m}&=&u_{l-2m-2}~. \end{eqnarray*} Put $v_m=u_{l-2m}$, then with $v_m=0$ for $m < 0$: \begin{eqnarray*} \psi_l(H)v_m&=&(l-2m)v_m,\\ \psi_l(X)v_m&=&m(l-m+1)v_{m-1},\\ \psi_l(Y)v_m&=&v_{m+1}~. \end{eqnarray*} Hence, provided $l=\la H,\l\ra\in\N_0$, the formulas for $\g_\l$ coincide with those for $\psi_l$. For this value of $\la H,\l\ra$ we have $\g_\l(X)v_{l+1}=0$ and thus the subspace $V_l$ generated by $v_{l+1},v_{l+2},\ldots$ is $\g_\l$-invariant. Hence we get a representation of $\sla(2,\C)$ in the quotient space $W_l/V_l$, which is just the irreducible representation we studied in section.
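That the three formulas above indeed define a representation of $\sla(2,\C)$ on $W_\l$ can be verified by checking the commutation relations $[X,Y]=H$, $[H,X]=2X$, $[H,Y]=-2Y$ on each basis vector; a small Python sketch (operators encoded as maps on finitely supported coefficient dictionaries, $c$ standing for $\la H,\l\ra$):

```python
from fractions import Fraction

# Verma-module operators for sl(2,C) on the basis v_m, m in N_0, with a
# vector encoded as a dict {m: coefficient}; c stands for <H,lambda>.
def op_H(v, c): return {m: (c - 2 * m) * a for m, a in v.items()}
def op_X(v, c): return {m - 1: m * (c - m + 1) * a for m, a in v.items() if m > 0}
def op_Y(v, c): return {m + 1: a for m, a in v.items()}

def norm(v):
    """Drop zero coefficients, so vectors compare by value."""
    return {m: a for m, a in v.items() if a != 0}

def comm(A, B, v, c):
    """[A, B]v = A(Bv) - B(Av)."""
    x, y = A(B(v, c), c), B(A(v, c), c)
    return norm({m: x.get(m, 0) - y.get(m, 0) for m in set(x) | set(y)})

c = Fraction(7, 2)                # a sample (non-integral) value of <H,lambda>
for m in range(8):
    v = {m: Fraction(1)}          # the basis vector v_m
    assert comm(op_X, op_Y, v, c) == norm(op_H(v, c))                        # [X,Y] = H
    assert comm(op_H, op_X, v, c) == norm({k: 2 * a for k, a in op_X(v, c).items()})
    assert comm(op_H, op_Y, v, c) == norm({k: -2 * a for k, a in op_Y(v, c).items()})
```

Exact rational arithmetic via `Fraction` avoids any floating-point tolerance in the checks.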

Irreducible Quotients of Verma Modules

We are looking for a maximal subspace $V_\l$ of the Verma module $W_\l$ invariant under $\g_\l$. By exam any vector $v$ in $W_\l$ can be represented as a unique linear combination of weight vectors. As one of these weight vectors is $v_0$ we can speak of the $v_0$-component of $v$ - which is $v_0^*(v)$, where $v_0^*$ is the element of the dual basis corresponding to $v_0$.
Let $V_\l$ be the sub-space of all vectors $v\in W_\l$ such that for all $n\in\N_0$ and all $x_1,\ldots,x_n\in A^+$ the $v_0$-component of $\g_\l(x_1)\cdots\g_\l(x_n)v$ vanishes. Then $V_\l$ is $\g_\l$-invariant.
$\proof$ We have to show that for all $v\in V_\l$ and all $x\in A$ the element $\g_\l(x)v$ is again in $V_\l$, i.e. the $v_0$-component of any product $$ \g_\l(x_1)\cdots\g_\l(x_n)\g_\l(x)v,\quad x_1,\ldots,x_n\in A^+ $$ equals $0$. By the reordering lemma this element is a linear combination of elements of the form \begin{equation}\label{iqveq1}\tag{IQV1} \g_\l(y_1)\cdots\g_\l(y_m)\g_\l(h_1)\cdots\g_\l(h_l)\g_\l(z_1)\cdots\g_\l(z_k)v \end{equation} where $y_j\in A^-$, $h_j\in H$ and $z_j\in A^+$. Since $v\in V_\l$ the $v_0$-component of $v_1\colon=\g_\l(z_1)\cdots\g_\l(z_k)v$ vanishes and thus $v_1$ is a finite sum $\sum w_\mu$ of weight vectors $w_\mu$ for weights $\mu$ strictly lower than $\l$. Now for all $h\in H$ the vector $\g_\l(h)w_\mu$ is again a weight vector with weight $\mu$ and for all $y\in A^-$ the vector $\g_\l(y)w_\mu$ is a weight vector with weight strictly lower than $\mu$. Hence the $v_0$-component of any element in \eqref{iqveq1} vanishes. $\eofproof$
Knowing that $V_\l$ is $\g_\l$-invariant (and thus $\wh\g_\l$-invariant), we can define the induced representation $\g_\l:A\rar\Hom(W_\l/V_\l)$ (or $\wh\g_\l:{\cal U}(A)\rar\Hom(W_\l/V_\l)$) with quotient map $\pi:W_\l\rar W_\l/V_\l$: we define for $x\in A$ and $v\in W_\l$ the expression $\g_\l(x)(\pi(v))$ by $\pi(\g_\l(x)v)$, i.e. we have the following commutative diagram:
$$ \g_\l(x)\circ\pi=\pi\circ\g_\l(x)~. $$
Since $\g_\l:A\rar\Hom(W_\l)$ is a highest weight $\l$ representation and since its highest weight vector $v_0$ doesn't lie in $V_\l$, the induced representation is again a highest weight $\l$ representation. The weights of the induced representation are in general a subset of the weights of the original representation and the weight vectors of the induced representation are the images of the original weight vectors under the quotient map $\pi$.
The induced representation $\g_\l:A\rar\Hom(W_\l/V_\l)$ is an irreducible highest weight $\l$ representation.
$\proof$ Assume there is some $v\notin V_\l$ and a $\g_\l$-invariant sub-space $V$ containing $V_\l$ and $v$. We will show that $V$ contains $v_0$ and thus $V=W_\l$. As $v\notin V_\l$ there are $x_1,\ldots,x_n\in A^+$ such that the $v_0$-component of $$ v_1\colon=\g_\l(x_1)\cdots\g_\l(x_n)v $$ is different from $0$. Again we write $v_1=\sum w_\mu$ with $w_\l\neq0$. For $\mu\neq\l$ we choose $h\in H$ such that $\la h,\mu-\l\ra\neq0$ and apply $\wh\g_\l(h-\la h,\mu\ra1)$ to $v_1$. As this operator leaves $V$ invariant and $$ \wh\g_\l(h-\la h,\mu\ra1)w_\mu =\g_\l(h)w_\mu-\la h,\mu\ra w_\mu =0 \quad\mbox{and}\quad \wh\g_\l(h-\la h,\mu\ra1)w_\l =\la h,\l-\mu\ra w_\l \neq0 $$ the vector $\wh\g_\l(h-\la h,\mu\ra1)v_1$ has component $0$ in the weight space with weight $\mu$ and non-zero $v_0$-component. Repeating this step for all weights different from $\l$ we eventually get a multiple of $v_0$, i.e. $v_0\in V$. $\eofproof$
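The component-killing step in this proof can be made concrete on a toy model (a sketch with hypothetical data, in which $h$ acts diagonally with pairwise distinct scalar weight values): applying $h-\mu$ annihilates the $\mu$-component and merely rescales every other component by a non-zero factor.

```python
def kill_component(v, mu):
    """Apply h - mu*id to v = {weight value: coefficient}, h acting
    diagonally: the mu-component dies, every other component picks up
    the non-zero factor (weight value - mu)."""
    return {w: (w - mu) * c for w, c in v.items() if w != mu}

# A vector with non-zero top component (weight value 5) plus two lower ones.
v = {5: 3, 2: 1, -1: 4}
for mu in [2, -1]:          # repeat the step for every weight value != 5
    v = kill_component(v, mu)
# Only a non-zero multiple of the top component survives: v == {5: 54}.
```

After both steps the result is $3\cdot(5-2)\cdot(5-(-1))=54$ times the top basis vector, exactly the "multiple of $v_0$" produced in the proof.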
Let $v_0$ be the highest weight vector with weight $\l$. Restricting $\g_\l:A\rar\Hom(W_\l)$ to the sub-algebra $\lhull{r,x_r,y_r}$ of subsection we get a representation of this sub-algebra. Since $r^\vee\to H$, $x_r\to X$, $y_r\to Y$ establishes an isomorphism of $\lhull{r,x_r,y_r}$ onto $\sla(2,\C)$, we must have for all $m\in\N_0$ (cf. subsection): \begin{eqnarray}\label{iqveq2}\tag{IQV2} \g_\l(r^\vee)v_m&=&(\la r^\vee,\l\ra-2m)v_m\\ \g_\l(x_r)v_m&=&m(\la r^\vee,\l\ra-m+1)v_{m-1}\\ \g_\l(y_r)v_m&=&v_{m+1}~. \end{eqnarray} Now suppose $B$ is a base for the root system $R$ and for some $r\in B$: $\la r^\vee,\l\ra=l\in\N_0$; then $\g_\l(x_r)v_{l+1}=0$. Moreover, for any $s\in R^+$, $s\neq r$, the vector $\g_\l(x_s)v_{l+1}$ is either $0$ or a weight vector with weight $\l-(l+1)r+s$, which is not lower than $\l$: otherwise there would be $n(b)\in\N_0$ such that $$ s=(1+l)r-\sum_{b\in B}n(b)b =(1+l-n(r))r-\sum_{b\in B,b\neq r}n(b)b~; $$ since $s\in R^+$ is a combination of the base roots with non-negative coefficients, it follows that for all $b\neq r$: $n(b)=0$ and thus $s$ is a multiple of $r$, which is only possible if $s=0$ or $s=r$. Hence $\l-(l+1)r+s$ is not lower than $\l$. However, by theorem all weights of $\g_\l$ are lower than $\l$ and therefore we conclude that \begin{equation}\label{iqveq3}\tag{IQV3} \forall s\in R^+:\quad \g_\l(x_s)v_{l+1}=0, \end{equation} which readily implies that $v_{l+1}\in V_\l$ and therefore the space $\lhull{\pi(v_0),\ldots,\pi(v_l)}$ is a sub-space of $W_\l/V_\l$ invariant under $\lhull{r,x_r,y_r}$.
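For $A=\sla(3,\C)$ the coefficient argument can be checked directly. In the sketch below (the $A_2$ root data are hard-coded as coefficient vectors over the base $B=\{a_1,a_2\}$; the variable names are ad hoc) $(l+1)r-s$ never has non-negative coefficients when $s\in R^+\setminus\{r\}$:

```python
# A2 root system over the base B = {a1, a2}: roots as coefficient vectors.
positive_roots = [(1, 0), (0, 1), (1, 1)]   # a1, a2, a1 + a2

def lower_than_lambda(diff):
    """lambda - (l+1)r + s is lower than lambda iff (l+1)r - s is a
    non-negative integer combination of the base roots; over a base the
    coordinates ARE the coefficients, so just check non-negativity."""
    return all(c >= 0 for c in diff)

violations = []
for r in [(1, 0), (0, 1)]:                  # r runs over the base B
    for l in range(20):
        for s in positive_roots:
            if s == r:
                continue
            diff = tuple((l + 1) * r[i] - s[i] for i in range(2))
            if lower_than_lambda(diff):
                violations.append((r, l, s))
# violations stays empty: lambda - (l+1)r + s is never lower than lambda.
```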

Finite Dimensional Quotients of Verma Modules

In order to prove that for dominant integral $\l\in H$ the space $W_\l/V_\l$ is of finite dimension, we will verify that the irreducible representation $\g_\l:A\rar\Hom(W_\l/V_\l)$ has only finitely many weights; since by exam the multiplicity of each weight is finite, this implies that $\dim(W_\l/V_\l)$ is finite.
For finite dimensional representations we know from proposition that the weights are invariant under the Weyl group. As we don't know yet that the space $W_\l/V_\l$ is finite dimensional, we cannot refer to this result. Let us set aside this difficulty for a moment and assume that the weights of $\g_\l:A\rar\Hom(W_\l/V_\l)$ are invariant under the Weyl group $W_R$. For any weight $\mu$ we must have $\mu\in\l+\sum_{r\in R}\Z r$. On the other hand $\l$ is dominant and thus for all $w\in W_R$: $w(\mu)\preceq\l$; by theorem this implies that $\mu\in\convex{W_R(\l)}$. However, there is only a finite number of $\mu$ satisfying $$ \mu\in\l+\sum_{r\in R}\Z r\quad\mbox{and}\quad \mu\in\convex{W_R(\l)}~. $$ The main obstacle to carrying proposition over to infinite dimensions is the lack of a definition of the exponential: How should we define for an infinite dimensional space $E$ the operator $e^v$ for $v\in\Hom(E)$? We can obviously do it for diagonal operators. But we can also do it for operators $v$ which are local in the following sense: for every $x\in E$ there is some finite dimensional sub-space $V$ such that: $$ x\in V \quad\mbox{and}\quad v(V)\sbe V~. $$ Then we may put $$ e^vx\colon=\sum_{k=0}^\infty\frac1{k!}v^kx\in V~. $$ The following lemma will verify that all operators $\g_\l(z)$, $z\in\lhull{r,x_r,y_r}$, are local. Thus we may define $\exp(\frac\pi2\g_\l(x_r-y_r))$ and establish proposition for the representation $\g_\l:A\rar\Hom(W_\l/V_\l)$. This in turn will eventually finish the proof of theorem.
For all $v\in W_\l/V_\l$ and all $r\in R^+$ there is a finite dimensional sub-space $V$ of $W_\l/V_\l$ such that: $$ v\in V \quad\mbox{and}\quad \g_\l(\lhull{r,x_r,y_r})V\sbe V~. $$
$\proof$ Let $U_r$ be the space of all vectors $v\in W_\l/V_\l$ which are contained in a finite dimensional $\lhull{r,x_r,y_r}$-invariant sub-space of $W_\l/V_\l$ - we just proved that $v_0\in U_r$, cf. \eqref{iqveq3}!
We claim that $U_r$ is invariant under all $\g_\l(x)$, $x\in A$: For any $u\in U_r$ there is by definition a finite dimensional $\lhull{r,x_r,y_r}$-invariant sub-space $F$ of $W_\l/V_\l$ containing $u$. Now let $\wt F$ be the sub-space generated by $\g_\l(A)F$. Then $\dim(\wt F)\leq\dim(A)\dim(F)$ and by construction of the universal enveloping algebra: $$ \forall x,z\in A:\quad \g_\l(z)\g_\l(x)=\g_\l(x)\g_\l(z)+\g_\l([z,x])~. $$ For $z\in\lhull{r,x_r,y_r}$ we have $\g_\l(z)F\sbe F$ and thus by the previous identity: $$ \g_\l(z)\g_\l(A)F \sbe\g_\l(A)\g_\l(z)F+\g_\l(A)F \sbe\g_\l(A)F+\g_\l(A)F \sbe\wt F~. $$ This shows that $\wt F$ is $\lhull{r,x_r,y_r}$-invariant, finite dimensional and it contains $\g_\l(A)u$, i.e. for all $x\in A$: $\g_\l(x)u\in U_r$. Hence $U_r$ is a $\g_\l$-invariant sub-space of $W_\l/V_\l$. As the latter is irreducible and the former different from $\{0\}$ we infer that $U_r=W_\l/V_\l$. $\eofproof$
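The notion of a local operator used above has a classical infinite dimensional illustration (an example of our own choosing, independent of $\g_\l$): differentiation on the space of all polynomials. Each polynomial lies in the finite dimensional invariant subspace of polynomials of degree at most its own, so the exponential series terminates on every vector and $e^{d/dx}$ is the shift $p(x)\mapsto p(x+1)$:

```python
from fractions import Fraction

def deriv(p):
    """Formal derivative of p = [p0, p1, ...] representing p0 + p1*x + ..."""
    return [Fraction(k) * p[k] for k in range(1, len(p))]

def add(u, v):
    n = max(len(u), len(v))
    u = u + [Fraction(0)] * (n - len(u))
    v = v + [Fraction(0)] * (n - len(v))
    return [a + b for a, b in zip(u, v)]

def exp_local(op, x):
    """e^op x = sum_k op^k x / k!  -- well defined because op is local:
    here op strictly lowers the degree, so the series terminates."""
    result = [Fraction(c) for c in x]
    term = list(result)
    k = 0
    while True:
        k += 1
        term = [c / k for c in op(term)]
        if not any(term):
            break
        result = add(result, term)
    return result

# e^{d/dx} applied to x^2 gives (x+1)^2 = 1 + 2x + x^2.
shifted = exp_local(deriv, [0, 0, 1])
```

For $p(x)=x^2$ the series gives $1+2x+x^2=(x+1)^2$, as expected.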

Last modified: Wed Jun 16 13:31:42 CEST 2021