Prékopa-Leindler Type Inequalities
Definition and examples
Let $\mu$ be a measure on some measure space $X$. For $s\in(0,1)$ and $x,y\in X$ let $A_s(x,y)$ be a measurable subset of $X$ and let $v_s:X\times X\rar\R$ be a function. We will say that $(X,\mu)$ admits a Prékopa-Leindler type inequality with distortion function $v_s$ and set $A_s$ if for all non-negative functions $f,g$ and $h$ such that
\begin{equation}\label{1}\tag{1}
\inf_{z\in A_s(x,y)}h(z)
\geq e^{-v_s(x,y)}f(x)^{1-s}g(y)^s
\end{equation}
we have
$$
\int h\,d\mu
\geq\Big(\int f\,d\mu\Big)^{1-s}\Big(\int g\,d\mu\Big)^s~.
$$
Property $(\t)$
If for all $x,y\in X$: $A_s(x,y)=X$, this is property $(\t)$, considered by B. Maurey (GAFA 91). Just to get a glimpse of what this might be good for, assume that $\mu$ is a probability measure on the metric space $(X,d)$. Put for simplicity $s=1/2$ and $v_{1/2}(x,y)=\psi(d(x,y))$ for some increasing function $\psi$. Then for $h=1$, $g=I_A$ and
$$
f(x)\colon=\inf\Big\{e^{2\psi(d(x,y))}:y\in A\Big\}=e^{2\psi(d_A(x))}~,
\qquad d_A(x)\colon=\inf_{y\in A}d(x,y)~,
$$
we get
$$
\int e^{2\psi(d_A)}\,d\mu\leq\frac1{\mu(A)},
$$
which by Chebyshev's inequality implies: $\mu(d_A > t)\leq e^{-2\psi(t)}/\mu(A)$. Therefore the above inequality is called an inequality of isoperimetric type or a concentration inequality.
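Indeed, hypothesis \eqref{1} is satisfied by these choices: for $y\in A$ we have $d_A(x)\leq d(x,y)$ and $\psi$ is increasing, so
$$
e^{-\psi(d(x,y))}f(x)^{1/2}g(y)^{1/2}
=e^{\psi(d_A(x))-\psi(d(x,y))}I_A(y)^{1/2}\leq1=h~,
$$
and for $y\notin A$ the right hand side of \eqref{1} vanishes; the PL inequality then gives $\big(\int e^{2\psi(d_A)}\,d\mu\big)^{1/2}\mu(A)^{1/2}\leq1$.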
The classical PL inequality
This is the statement that the Lebesgue measure $\l$ on $\R^n$ admits a PL type inequality for $A_s(x,y)=\{(1-s)x+sy\}$ and $v_s(x,y)=0$. Moreover, replacing $f,g,h$ with $fe^{-V},ge^{-V},he^{-V}$ and assuming $\Hess V\geq c$, we get a PL type inequality for the measure $\mu$ with density $e^{-V}$ for
$$
A_s(x,y)=\{(1-s)x+sy\}
\quad\mbox{and}\quad
v_s(x,y)=\frac12cs(1-s)\norm{x-y}^2
$$
i.e. if
$$
h((1-s)x+sy)\geq e^{-\frac12cs(1-s)\norm{x-y}^2}f(x)^{1-s}g(y)^s
$$
then
$$
\int h\,d\mu
\geq\Big(\int f\,d\mu\Big)^{1-s}\Big(\int g\,d\mu\Big)^s~.
$$
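As a quick numerical illustration, one can discretize this statement and verify the conclusion; the following is a minimal sketch (assuming numpy; the grid, the constant $c=1$ and the test functions $f,g$ are arbitrary choices):
\begin{verbatim}
import numpy as np

# Discretized check of the PL type inequality for mu(dx) = exp(-x^2/2) dx,
# where Hess V = c = 1; grid and test functions are arbitrary choices.
s, c = 0.3, 1.0
x = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]
mu = np.exp(-x**2 / 2)
f = np.exp(-(x - 1.0)**2)
g = np.exp(-(x + 2.0)**2 / 3)

X, Y = np.meshgrid(x, x, indexing="ij")
Z = (1 - s) * X + s * Y                      # stays inside [-6, 6]
vals = (np.exp(-0.5 * c * s * (1 - s) * (X - Y)**2)
        * f[:, None]**(1 - s) * g[None, :]**s)
h = np.zeros_like(x)
idx = np.round((Z - x[0]) / dx).astype(int)  # nearest grid point to z
np.maximum.at(h, idx, vals)                  # smallest admissible h

lhs = (h * mu).sum() * dx
rhs = ((f * mu).sum() * dx)**(1 - s) * ((g * mu).sum() * dx)**s
print(lhs >= rhs, lhs, rhs)                  # True, up to discretization
\end{verbatim}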
For an arbitrary log-concave probability measure on $\R^n$ we can only infer that $c\geq0$ and thus $v_s=0$. We are going to show that in many cases one can do much better than this!
Manifolds
Suppose that $(M,\la.,.\ra)$ is a complete $n$-dimensional Riemannian manifold such that $\Ric\geq R$. Then the Riemannian volume $v$ on $M$ admits a PL type inequality with distortion function $v_s(x,y)=\frac12Rs(1-s)d(x,y)^2$. The set $A_s(x,y)$ comprises all points $z$ such that $d(x,z)=sd(x,y)$ and $d(z,y)=(1-s)d(x,y)$.
Tangent spaces
Let $(M,\la.,.\ra)$ be a complete Riemannian manifold. On the tangent space $TM$ define a Riemannian metric as follows: for $U,V\in T_{X_m}TM$ let $X$ and $Y$ be any curves such that $X(0)=Y(0)=X_m$ and $X^\prime(0)=U$, $Y^\prime(0)=V$. Then put
$$
\la U,V\ra\colon=\la\pi_*(U),\pi_*(V)\ra
+\la\bnabla X(0),\bnabla Y(0)\ra
$$
where $\pi:TM\rar M$ is the canonical projection and $\bnabla X$ denotes the covariant derivative of the vector field $X(s)$ along the curve $s\mapsto\pi(X(s))$ in $M$. A curve $G:[0,1]\rar TM$ is a geodesic with respect to this metric joining $X_m$ and $X_n$, if and only if $\g\colon=\pi\circ G$ is a geodesic in $M$ and
$$
G(s)=(1-s)P_s(X_m)+sP_sP_1^{-1}(X_n)
$$
where $P_s:T_mM\rar T_{\g(s)}M$ is parallel translation along $\g$. It follows that
$$
d(X_m,X_n)^2=d(m,n)^2+\tnorm{X_m-P_1^{-1}(X_n)}^2~.
$$
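Indeed, along such a curve $\bnabla G(s)=P_s\big(P_1^{-1}(X_n)-X_m\big)$, so that
$$
\norm{G^\prime(s)}^2
=\norm{\g^\prime(s)}^2+\tnorm{\bnabla G(s)}^2
=d(m,n)^2+\tnorm{X_m-P_1^{-1}(X_n)}^2
$$
is constant in $s$; integrating its square root over $[0,1]$ gives the distance formula.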
Assume $(M,v)$ admits a PL inequality with distortion function $\frac12Rs(1-s)d(x,y)^2$, $R>0$. On $TM$ we define a measure $\mu$ as follows:
1. Conditioned on $\pi=m$ it is the gaussian measure with density
$$
(2\pi r)^{-n/2}\exp\Big(-\tfrac1{2r}\norm{X_m}^2\Big)
\qquad r>0~.
$$
2. For all open subsets $U$ of $M$: $\mu(\pi^{-1}(U))=v(U)$.
These conditions determine $\mu$ uniquely. This measure is probably of some importance in statistical mechanics, since it is invariant under the geodesic flow, i.e. the flow of the geodesic vector field on $TM$. Then, by tensorization, $(TM,\mu)$ admits a PL inequality with distortion function
$$
v_s(x,y)=\tfrac12(R\wedge r^{-1})s(1-s)d(x,y)^2~,
$$
since the gaussian factor has density $e^{-V}$ with $\Hess V=r^{-1}$.
Similar constructions work on $S_rM\colon=\{X\in TM:\norm{X}=r\}$ or on the bundle of orthonormal frames $OM$.
Inverse Hölder
Now let us again assume that $\mu$ is a probability measure which admits a PL type inequality. Choose $f$ to be the constant function $1$ and define $B_s(z,y)\colon=\{x:z\in A_s(x,y)\}$. If
$$
h(z)\geq\sup\Big\{e^{-v_s(x,y)}g(y)^s:\,x\in B_s(z,y),y\in X\Big\}
$$
then \eqref{1} holds: for $z\in A_s(x,y)$ we have $x\in B_s(z,y)$ and thus $h(z)\geq e^{-v_s(x,y)}g(y)^s$; hence
$$
\int h\,d\mu\geq\Big(\int g\,d\mu\Big)^s~.
$$
For $0 < p < q$ we put $s=p/q$ and replace $g$ with $g^q$ and $h$ with $h^p$. It follows that the condition
$$
h(z)\geq\sup\Big\{e^{-v_{p/q}(x,y)/p}g(y):\,x\in B_{p/q}(z,y)\Big\}
$$
implies the inequality:
$$
\Big(\int h^p\,d\mu\Big)^{1/p}\geq\Big(\int g^q\,d\mu\Big)^{1/q}~.
$$
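Indeed, with $s=p/q$ the previous condition applied to $g^q$ and $h^p$ reads
$$
h(z)^p\geq\sup\Big\{e^{-v_{p/q}(x,y)}\big(g(y)^q\big)^{p/q}:\,x\in B_{p/q}(z,y),y\in X\Big\}~,
$$
and taking $p$-th roots yields the displayed condition, while the conclusion becomes $\int h^p\,d\mu\geq\big(\int g^q\,d\mu\big)^{p/q}$.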
In the following section we discuss the limit $p\to q$ of this inequality.
PL Type implies Log-Sobolev
Let $M$ be a complete Riemannian manifold and $\mu$ a probability measure which admits a PL type inequality with $v_s(x,y)=(1-s)s\psi(d(x,y))$, where $\psi:\R_0^+\rar\R_0^+$ is increasing. We will also assume that, for $s>1/2$ say, $z\in A_s(x,y)$ implies $d(y,z)<c(1-s)d(y,x)$ for some constant $c$. Put $t=1-s$ and replace $g$ with $e^{g/s}$, $f$ with $1$ and $h$ with $e^{g_s}$; then for all $x,y\in M$ we must have:
$$
\inf_{z\in A_s(x,y)} g_s(z)\geq g(y)-st\psi(d(x,y))
$$
We claim that the smallest admissible choice of $g_s$ satisfies, for $t\to0$:
$$
g_s(z)\leq g(z)+t\psi^*(c\norm{\nabla_zg})+\Oh(t^2)~.
$$
We just reproduce the argument of S. Bobkov and M. Ledoux! Write $y=\exp_z(ctrY)$, $\norm Y=d(x,y)$, $0< r< 1$, then:
$$
g(y)=g(z)+ctr\la\nabla_zg,Y\ra+\Oh(t^2)
$$
Thus we have to prove that
$$
\psi^*(c\norm{\nabla_zg})
\geq rc\la\nabla_zg,Y\ra-\psi(\norm Y)
$$
which holds by definition of $\psi^*$. Since
$(\int e^{g/s}\,d\mu)^s=\int e^{g}\,d\mu+t\,\Ent_\mu(e^g)+\Oh(t^2)$, where $\Ent_\mu(f)\colon=\int f\log f\,d\mu-\int f\,d\mu\,\log\int f\,d\mu$, it follows that
$$
\Ent_\mu(e^g)
\leq\int\psi^*(c\norm{\nabla g})e^g\,d\mu~.
$$
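The expansion $(\int e^{g/s}\,d\mu)^s=\int e^g\,d\mu+t\,\Ent_\mu(e^g)+\Oh(t^2)$ can also be checked numerically; here is a minimal sketch (assuming numpy; the discretized gaussian and the function $g$ below are arbitrary choices):
\begin{verbatim}
import numpy as np

# Check (int e^{g/s} dmu)^s = int e^g dmu + t*Ent(e^g) + O(t^2), s = 1-t,
# for a discretized gaussian probability measure; g is an arbitrary choice.
x = np.linspace(-8, 8, 4001)
mu = np.exp(-x**2 / 2); mu /= mu.sum()
g = np.sin(x) + 0.3 * x
F = np.exp(g)
I = (F * mu).sum()
ent = (F * g * mu).sum() - I * np.log(I)     # Ent_mu(e^g)
for t in (1e-2, 1e-3, 1e-4):
    lhs = ((np.exp(g / (1 - t)) * mu).sum())**(1 - t)
    print(t, (lhs - I) / t, ent)             # difference quotients -> ent
\end{verbatim}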
If $M=\R^n$ and $d(x,y)=\norm{x-y}$ for some norm $\Vert.\Vert$ on $\R^n$, then, by the same argument:
\begin{equation}\tag{2}\label{2}
\Ent_\mu(e^g)
\leq\int\psi^*(c\norm{\nabla g}_*)e^g\,d\mu
\end{equation}
where $\norm{.}_*$ denotes the dual norm. As is well known, replacing $g$ with $\epsilon f$, we get for $\epsilon\to0$ the Poincaré inequality:
$$
\int\Big(f-\int f\,d\mu\Big)^2\,d\mu
\leq\psi^{*\dprime}(0)c^2\int\norm{\nabla f}_*^2\,d\mu~.
$$
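More precisely, using $\psi^*(0)=\psi^{*\prime}(0)=0$ one may expand both sides of \eqref{2} for $g=\epsilon f$:
$$
\Ent_\mu(e^{\epsilon f})=\frac{\epsilon^2}2\int\Big(f-\int f\,d\mu\Big)^2\,d\mu+\Oh(\epsilon^3)
\quad\mbox{and}\quad
\int\psi^*(c\epsilon\norm{\nabla f}_*)e^{\epsilon f}\,d\mu
=\frac{\epsilon^2}2\psi^{*\dprime}(0)c^2\int\norm{\nabla f}_*^2\,d\mu+\Oh(\epsilon^3)~;
$$
comparing the terms of order $\epsilon^2$ gives the Poincaré inequality.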
Also, by Herbst's argument (cf. [BL]) \eqref{2} implies the concentration inequality:
$$
\mu\Big(\Big|f-\int f\,d\mu\Big| > t\Big)
\leq e^{-\vp^*(t)},
$$
where $\vp(t)\colon=t\int_0^t\psi^*(u)/u^2\,du$ and $\vp^*$ denotes its Legendre transform.
The exponential measure
Following B. Maurey (GAFA 91) we (jointly with D. Cordero-Erausquin) will prove a PL type inequality for the measure $\mu(dx)=e^{-x}\,dx$ on the real line: For $s\in(0,1)$ let $v$, $w$ be functions on $\R$ such that $|w^\prime|<(1-s)\wedge s$ and
\begin{equation}\tag{3}\label{3}
\Big(1-\tfrac{w^\prime}{1-s}\Big)^{1-s}
\Big(1+\tfrac{w^\prime}{s}\Big)^{s}
e^{w-v}\geq1
\end{equation}
Suppose $f,g$ and $h$ are non-negative functions on $\R$ such that for all $x,y\in\R$:
$$
h((1-s)x+sy-w(x-y))\geq e^{-v(x-y)}f(x)^{1-s}g(y)^s
$$
Then for $\mu(dx)=e^{-x}\,dx$:
$$
\int h\,d\mu
\geq\Big(\int f\,d\mu\Big)^{1-s}
\Big(\int g\,d\mu\Big)^{s}
$$
Put $I_0=\int f\,d\mu$, $I_1=\int g\,d\mu$ and define for $t\in[0,1]$ functions $x(t)$ and $y(t)$ by
$$
\int_{-\infty}^{x(t)}f(u)e^{-u}\,du=tI_0
\quad\mbox{and}\quad
\int_{-\infty}^{y(t)}g(u)e^{-u}\,du=tI_1~,
$$
i.e. the measures with density $f(u)e^{-u}/I_0$ and $g(u)e^{-u}/I_1$ respectively are the image measures of the uniform distribution on $(0,1)$ under the mappings $t\mapsto x(t)$ and $t\mapsto y(t)$ respectively.
Then $x^\prime=I_0e^x/f(x)$ and $y^\prime=I_1e^y/g(y)$. Put
$$
z(t)\colon=(1-s)x(t)+sy(t)-w(x(t)-y(t)),
$$
then we have
$$
z^\prime=((1-s)-w^\prime(x-y))x^\prime
+(s+w^\prime(x-y))y^\prime
$$
Since $-s < w^\prime < 1-s$, we get by the AM-GM inequality:
\begin{eqnarray*}
z^\prime
&=&((1-s)-w^\prime(x-y))I_0e^x/f(x)
+(s+w^\prime(x-y))I_1e^y/g(y)\\
&=&(1-s)\Big(1-\tfrac{w^\prime(x-y)}{1-s}\Big)I_0e^x/f(x)
+s\Big(1+\tfrac{w^\prime(x-y)}{s}\Big)I_1e^y/g(y)\\
&\geq&I_0^{1-s}I_1^s
\Big(1-\tfrac{w^\prime(x-y)}{1-s}\Big)^{1-s}
\Big(1+\tfrac{w^\prime(x-y)}{s}\Big)^{s}
e^{(1-s)x+sy}/(f(x)^{1-s}g(y)^s)\\
&\geq&I_0^{1-s}I_1^s
\Big(1-\tfrac{w^\prime(x-y)}{1-s}\Big)^{1-s}
\Big(1+\tfrac{w^\prime(x-y)}{s}\Big)^{s}
e^{z+w(x-y)-v(x-y)}/h(z)
\end{eqnarray*}
It follows that
$$
h(z)e^{-z}z^\prime
\geq I_0^{1-s}I_1^s
\Big(1-\tfrac{w^\prime(x-y)}{1-s}\Big)^{1-s}
\Big(1+\tfrac{w^\prime(x-y)}{s}\Big)^{s}
e^{w(x-y)-v(x-y)}
\geq I_0^{1-s}I_1^s
$$
and therefore
$$
I_0^{1-s}I_1^s
\leq\int_0^1 h(z(t))e^{-z(t)}z^\prime(t)\,dt
=\int_\R h(z)e^{-z}\,dz~.
$$
For $s=1/2$ the functions $w(x)=2\log\cosh(x/4)$ and $v(x)=w(x)/2$ satisfy \eqref{3} with equality. For arbitrary $s\in(0,1)$ we put for some $c>0$: $w(x)=4cs(1-s)\log\cosh(x/4)$ and $v=w/2$. Then \eqref{3} is equivalent to
$$
(1-cs\tanh(x/4))^{1-s}(1+c(1-s)\tanh(x/4))^s\cosh(x/4)^{2cs(1-s)}
\geq1~.
$$
Since $\cosh^2=(1-\tanh^2)^{-1}$, we get by putting $t=\tanh(x/4)\in(-1,1)$ and taking logarithms:
$$
(1-s)\log(1-cst)+s\log(1+c(1-s)t)-cs(1-s)\log(1-t^2)\geq0~.
$$
We consider the left hand side as a function $\vp(t)$ on the interval $(-1,1)$. Since $\vp(0)=0$ it suffices to prove that $\vp$ is decreasing on $(-1,0)$ and increasing on $(0,1)$:
$$
\vp^\prime(t)
=-\frac{cs(1-s)}{1-cst}
+\frac{cs(1-s)}{1+c(1-s)t}
+\frac{2cs(1-s)t}{1-t^2}
=cs(1-s)t\left(
-\frac{c}{(1-cst)(1+c(1-s)t)}
+\frac{2}{1-t^2}
\right)
$$
Hence we have to choose $c$ such that the last factor is non-negative, which boils down to
$$
\forall t\in(-1,1)\qquad
-c(1-t^2)+2(1-cst)(1+c(1-s)t)\geq0~.
$$
We choose $c$ in such a way that the quadratic polynomial attains its minimum at $t=\pm1$. This is the case for $c\colon=s^{-1}\wedge(1-s)^{-1}$: for $s\geq1/2$, say, this choice gives the polynomial $\frac{2s-1}s(t-1)^2$, whose minimum is $0$. Hence
\begin{equation}\tag{4}\label{4}
w(x)=4(s\wedge(1-s))\log\cosh(x/4)
\quad\mbox{and}\quad
v(x)=2(s\wedge(1-s))\log\cosh(x/4)
\end{equation}
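A quick numerical confirmation of \eqref{3} for the choice \eqref{4} (a sketch assuming numpy; the grids below are arbitrary):
\begin{verbatim}
import numpy as np

# Verify (1 - w'/(1-s))^(1-s) * (1 + w'/s)^s * exp(w - v) >= 1 on a grid
# for w, v from (4); here w'(x) = (s /\ (1-s)) * tanh(x/4) and v = w/2.
for s in np.linspace(0.05, 0.95, 19):
    k = min(s, 1 - s)
    x = np.linspace(-40, 40, 2001)
    w = 4 * k * np.log(np.cosh(x / 4))
    wp = k * np.tanh(x / 4)
    lhs = (1 - wp / (1 - s))**(1 - s) * (1 + wp / s)**s * np.exp(w / 2)
    assert (lhs >= 1 - 1e-12).all()          # equality at s = 1/2
print("inequality (3) holds on the grid")
\end{verbatim}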
Thus the exponential measure $\mu(dx)=e^{-x}\,dx$ admits a PL type inequality with distortion function $v_s$ and set $A_s$ made up of a single point $z_s(x,y)$ given by
\begin{eqnarray*}
v_s(x,y)&=&2((1-s)\wedge s)\log\cosh\Big(\frac{x-y}4\Big)
\quad\mbox{and}\\
z_s(x,y)&=&(1-s)x+sy-4((1-s)\wedge s)\log\cosh\Big(\frac{x-y}4\Big)~.
\end{eqnarray*}
Since $4\log\cosh(x/4)\leq|x|$, the point $z_s$ always lies in the segment joining $x$ and $y$, and it is never larger than $(1-s)x+sy$. Moreover, for $s\geq1/2$ we have
$$
|y-z_s|
\leq(1-s)|y-x|\Big(1+\frac{4\log\cosh\frac{y-x}4}{|y-x|}\Big)
\leq2(1-s)|y-x|~.
$$
For small values of $x$ the function $x\mapsto2\log\cosh(x/4)$ is quadratic: $\sim(x/4)^2$, and for large values of $x$ it's linear: $\sim x/2$. The Legendre transform $v^*$ of $v(x)=2\log\cosh(x/4)$ is defined for $|y| < 1/2$ and is given by $v^*(y)=xy-v(x)$, where $y=v^\prime(x)=\tanh(x/4)/2$, i.e. $x=4\atanh(2y)=2\log\frac{1+2y}{1-2y}$, $\cosh(\atanh(2y))=1/\sqrt{1-4y^2}$:
\begin{eqnarray*}
v^*(y)
&=&4y\,\atanh(2y)-v(4\atanh(2y))
=2y\log\frac{1+2y}{1-2y}+\log(1-4y^2)\\
&=&(1+2y)\log(1+2y)+(1-2y)\log(1-2y)~.
\end{eqnarray*}
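The closed form of $v^*$ can be confirmed numerically (a sketch assuming numpy):
\begin{verbatim}
import numpy as np

# Compare sup_x (x*y - 2*log cosh(x/4)) with the closed form
# (1+2y)log(1+2y) + (1-2y)log(1-2y) for |y| < 1/2.
x = np.linspace(-200, 200, 400001)
v = 2 * np.log(np.cosh(x / 4))
for y in np.linspace(-0.49, 0.49, 9):
    num = (x * y - v).max()
    closed = (1 + 2*y) * np.log1p(2*y) + (1 - 2*y) * np.log1p(-2*y)
    assert abs(num - closed) < 1e-4
print("Legendre transform formula confirmed on the grid")
\end{verbatim}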
Stability results
Tensorization
Let $\mu_1$ and $\mu_2$ be measures on $X_1$ and $X_2$ respectively. Suppose that, for some $s\in(0,1)$, $(X_i,\mu_i)$, $i\in\{1,2\}$, admits a PL type inequality with distortion function $v_i$ and set $A_i$. Then $(X_1\times X_2,\mu_1\otimes\mu_2)$ admits a PL type inequality with distortion function
$$
v(x,y)=v_1(x_1,y_1)+v_2(x_2,y_2)
\quad\mbox{and set}\quad
A(x,y)=A_1(x_1,y_1)\times A_2(x_2,y_2)
$$
where $x=(x_1,x_2)$ and $y=(y_1,y_2)$:
Define $h_{x_2}(x_1)\colon=h(x_1,x_2)$, and similarly $f_{x_2}$, $g_{x_2}$; put
$H(x_2)=\int h_{x_2}\,d\mu_1$, $F(x_2)=\int f_{x_2}\,d\mu_1$,
$G(x_2)=\int g_{x_2}\,d\mu_1$. If
\begin{equation}\tag{5}\label{5}
\inf_{z_2\in A_2(x_2,y_2)}
H(z_2)
\geq e^{-v_2(x_2,y_2)}F(x_2)^{1-s}G(y_2)^{s}
\end{equation}
then, by the PL inequality for $(X_2,\mu_2)$ applied to $F,G,H$:
$$
\int h\,d\mu\geq
\Big(\int f\,d\mu\Big)^{1-s}
\Big(\int g\,d\mu\Big)^{s}
$$
Now \eqref{5} is equivalent to
$$
\inf_{z_2\in A_2(x_2,y_2)}
\int h_{z_2}\,d\mu_1
\geq e^{-v_2(x_2,y_2)}
\Big(\int f_{x_2}\,d\mu_1\Big)^{1-s}
\Big(\int g_{y_2}\,d\mu_1\Big)^{s}
$$
which, by the PL inequality for $(X_1,\mu_1)$, holds if for all $x_2,y_2\in X_2$, all $z_2\in A_2(x_2,y_2)$ and all $x_1,y_1\in X_1$:
$$
\inf_{z_1\in A_1(x_1,y_1)}h_{z_2}(z_1)
\geq e^{-v_2(x_2,y_2)-v_1(x_1,y_1)}
f(x_1,x_2)^{1-s}g(y_1,y_2)^s~.
$$
Averaging measures
Suppose we are given a family $\mu_t$, $t\in\R$, of measures on $X$ and a measure $\nu$ on $\R$. Then
$$
\mu(A)\colon=\int\mu_t(A)\,\nu(dt)~.
$$
is another measure on $X$, and for all measurable $h:X\rar[0,\infty]$ we have
$$
\int h\,d\mu=\int\int h(x)\,\mu_t(dx)\,\nu(dt)
$$
Suppose that there is a measure $\l$ on $X$ such that for all $t$: $\mu_t(dx)=\r(t,x)\,\l(dx)$, and that $(\R,\nu)$ and $(X,\l)$ admit PL type inequalities with distortion functions $v_1$, $v_2$ and sets $A$, $B$ respectively. If for all $t,u\in\R$ and all $x,y\in X$:
$$
\inf_{(w,z)\in A(t,u)\times B(x,y)}h(z)\r(w,z)
\geq e^{-v_1(t,u)-v_2(x,y)}f(x)^{1-s}g(y)^s\r(t,x)^{1-s}\r(u,y)^s
$$
then:
$$
\int h\,d\mu\geq\Big(\int f\,d\mu\Big)^{1-s}\Big(\int g\,d\mu\Big)^s~.
$$
This is just the tensorization result applied to $(\R\times X,\nu\otimes\l)$ and the functions $\tilde h(w,z)\colon=h(z)\r(w,z)$, $\tilde f(t,x)\colon=f(x)\r(t,x)$ and $\tilde g(u,y)\colon=g(y)\r(u,y)$.
Image measures
Let $F:X\rar Y$ be a map such that
$u(F(x_1),F(x_2))\leq v(x_1,x_2)$ (in case of a bijection we may take $u(y_1,y_2)=v(F^{-1}(y_1),F^{-1}(y_2))$) and let $\nu=\mu_F$ be the image measure, i.e.
$\nu(B)=\mu(F^{-1}(B))$; then $\int f\,d\nu=\int f(F)\,d\mu$. If
\begin{equation}\tag{6}\label{6}
\inf_{z\in A(x_1,x_2)}h(F(z))
\geq e^{-v(x_1,x_2)}f(F(x_1))^{1-s}g(F(x_2))^s,
\end{equation}
then $\int h\,d\nu\geq(\int f\,d\nu)^{1-s}(\int g\,d\nu)^s$. By
assumption \eqref{6} is implied by
$$
\inf_{w\in F(A(x_1,x_2))}h(w)
\geq e^{-u(F(x_1),F(x_2))}f(F(x_1))^{1-s}g(F(x_2))^s~.
$$
Putting $B(y_1,y_2)=\bigcup\{F(A(x_1,x_2)):x_i\in F^{-1}(y_i)\}$
we get:
$$
\inf_{w\in B(y_1,y_2)}h(w)
\geq e^{-u(y_1,y_2)}f(y_1)^{1-s}g(y_2)^s~.
$$
Thus: $(Y,\nu)$ admits a PL type inequality with distortion function $u$ and set $B$.
Convolution of measures on a group $G$
Let $\mu_i$, $i=1,2$, be (finite) measures on $G$ admitting PL type inequalities with distortion functions $v_i$ and sets $A_i$. For non-negative functions $f,g$ and $h$ put $h_{x_2}(x_1)\colon=h(x_1x_2)$ (and similarly $f_{x_2}$, $g_{x_2}$) and $H(x_2)=\int h_{x_2}\,d\mu_1$, $F(x_2)=\int f_{x_2}\,d\mu_1$, $G(x_2)=\int g_{x_2}\,d\mu_1$. If
\begin{equation}\tag{7}\label{7}
\inf_{z_2\in A_2(x_2,y_2)}H(z_2)
\geq e^{-v_2(x_2,y_2)}F(x_2)^{1-s}G(y_2)^{s}
\end{equation}
then for $\mu\colon=\mu_1*\mu_2$: $\int h\,d\mu=\int\int h(x_1x_2)\,\mu_1(dx_1)\,\mu_2(dx_2)$ and thus
$$
\int h\,d\mu\geq
\Big(\int f\,d\mu\Big)^{1-s}
\Big(\int g\,d\mu\Big)^{s}
$$
\eqref{7} is equivalent to
$$
\inf_{z_2\in A_2(x_2,y_2)}\int h_{z_2}\,d\mu_1
\geq e^{-v_2(x_2,y_2)}
\Big(\int f_{x_2}\,d\mu_1\Big)^{1-s}
\Big(\int g_{y_2}\,d\mu_1\Big)^{s}
$$
which, by the PL inequality for $(G,\mu_1)$, holds if
$$
\inf_{z_1\in A_1(x_1,y_1)}\inf_{z_2\in A_2(x_2,y_2)}
h_{z_2}(z_1)
\geq e^{-v_1(x_1,y_1)-v_2(x_2,y_2)}
f_{x_2}(x_1)^{1-s}g_{y_2}(y_1)^s
$$
Put $x=x_1x_2$, $y=y_1y_2$, then $x_2=x_1^{-1}x$, $y_2=y_1^{-1}y$ and
\begin{eqnarray*}
v(x,y)&=&\inf\{v_1(x_1,y_1)+v_2(x_1^{-1}x,y_1^{-1}y):\,x_1,y_1\in G\}
\quad\mbox{and}\\
A(x,y)&=&\bigcup\{A_1(x_1,y_1)A_2(x_1^{-1}x,y_1^{-1}y):\,
x_1,y_1\in G\}~.
\end{eqnarray*}
Then the last condition, and hence \eqref{7}, holds if
$$
\inf_{z\in A(x,y)}h(z)\geq e^{-v(x,y)}f(x)^{1-s}g(y)^s,
$$
i.e. $(G,\mu_1*\mu_2)$ admits a PL type inequality with distortion function $v$ and set $A$.
Further Examples
The one sided exponential measure
The one sided exponential measure $\mu_+(dx)=e^{-x}I_{\R^+}\,dx$ admits a PL type inequality with distortion function
\begin{equation}\tag{8}\label{8}
v_s(x,y)=2(s\wedge(1-s))\log\cosh(|x-y|/4)
\quad\mbox{and set}\quad
A_s(x,y)=\{(1-s)x+sy-2v_s(x,y)\}~.
\end{equation}
This follows from the result for $e^{-x}\,dx$ by restriction, since the point in $A_s(x,y)$ always lies in the segment joining $x$ and $y$. Moreover, any probability measure on $\R_0^+$ whose density $f$ is decreasing and satisfies $f^\prime\leq-f$ (i.e. $f=e^{-V}$ with $V^\prime\geq1$) is the image measure of $\mu_+$ under an increasing mapping with Lipschitz constant at most $1$, and therefore admits the same distortion function:
$\proof$
Let $F:\R_0^+\rar\R_0^+$ be an increasing function such that $F(0)=0$ and for all $x>0$:
$$
\int_0^x e^{-t}\,dt=\int_0^{F(x)} f(t)\,dt
$$
Taking the derivative we get: $e^{-x}=F^\prime f(F)$. Thus $F^\prime\leq1$ if and only if $e^{-x}\leq f(F)$. Since $f$ is decreasing this holds if and only if $F(x)\leq g(e^{-x})=\colon h(x)$, where $g$ denotes the inverse of $f$. Putting
$$
G(x)\colon=\int_0^x e^{-t}\,dt-\int_0^{h(x)} f(t)\,dt
$$
we have to prove that $G(x)\leq0$. Now $G(0)\leq0$, $G(\infty)=0$ and
$$
\int_0^{h(x)}f(t)\,dt
=-G(0)+\int_0^x f(h(t))h^\prime(t)\,dt
=-G(0)+\int_0^x e^{-t}(-e^{-t}g^\prime(e^{-t}))\,dt
$$
and therefore:
$$
G(x)=
G(0)+\int_0^x e^{-t}(1+e^{-t}g^\prime(e^{-t}))\,dt
=G(0)+\int_{e^{-x}}^1\big(1+ug^\prime(u)\big)\,du
$$
Since $G(\infty)=0$, it suffices to prove that for all $u\in(0,1)$: $1+ug^\prime(u)\geq0$. Putting $u=f(x)$, we have $g^\prime(u)=1/f^\prime(x)$ and hence by assumption:
$$
1+ug^\prime(u)
=1+\frac{f(x)}{f^\prime(x)}\geq0~.
$$
$\eofproof$
Typical examples are $\g_\a$-distributions with density $\G(\a)^{-1}x^{\a-1}e^{-x}I_{\R^+}(x)$ for $\a\leq1$.
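For these $\g_\a$-distributions the contraction property can also be checked numerically (a sketch assuming scipy; the values of $\a$ are arbitrary choices):
\begin{verbatim}
import numpy as np
from scipy.stats import expon, gamma

# The increasing transport map F from the exponential law to the gamma(a)
# law, a <= 1, should satisfy F' <= 1, i.e. be a contraction.
x = np.linspace(0.01, 20, 4000)
for a in (0.3, 0.7, 1.0):
    F = gamma.ppf(expon.cdf(x), a)       # F = (gamma cdf)^{-1} o (exp cdf)
    assert (np.diff(F) <= np.diff(x) + 1e-9).all()
print("transport maps are contractions on the grid")
\end{verbatim}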
The uniform measure on $(0,a)^n$
The uniform distribution on $(0,a)$, $a>0$, is the image measure of $\mu_+$ under the mapping $T(x)=ae^{-x}$; thus a distortion function is
$$
v_s(x,y)=2(s\wedge(1-s))\log\cosh\Big(\frac14\log\frac xy\Big)~.
$$
We could have taken equally well $T(x)=a(1-e^{-x})$, in which case the distortion function is given by
$$
v_s(x,y)=2(s\wedge(1-s))\log\cosh\Big(\frac14\log\frac{a-x}{a-y}\Big)~.
$$
Since $t\mapsto1/t$ is convex, we get by the Hermite-Hadamard inequality
$|\log x-\log y|=\big|\int_y^x t^{-1}\,dt\big|\geq2|x-y|/(x+y)$:
$$
v_s(x,y)\geq2(s\wedge(1-s))\log\cosh\Big(\frac{|x-y|}{2(x+y)}\Big)
\geq2(s\wedge(1-s))\log\cosh\Big(\frac{|x-y|}{4a}\Big)~.
$$
The set $A_s(x,y)$ contains a single point:
$$
ae^{(1-s)\log(x/a)+s\log(y/a)+2v_s(-\log(x/a),-\log(y/a))}
=x^{1-s}y^s\cosh\Big(\frac14|\log x-\log y|\Big)^{4(s\wedge(1-s))}~.
$$
Thus one may wonder whether a PL type inequality on e.g. $(0,1)$ can be proved directly, i.e. if
$$
h\big(x^{1-s}y^s\,w_s(x,y)\big)\geq e^{-v_s(x,y)}f(x)^{1-s}g(y)^s
$$
then
$$
\int_0^1 h\,dx
\geq\Big(\int_0^1 f\,dx\Big)^{1-s}\Big(\int_0^1 g\,dx\Big)^s
$$
On $(0,a)^n$ a suitable distortion function is
$$
2(s\wedge(1-s))\sum_j\log\cosh\Big(\frac{|x_j-y_j|}{2(x_j+y_j)}\Big)~.
$$
The curve $s\mapsto(x_1^{1-s}y_1^s,\ldots,x_n^{1-s}y_n^s)$ is the geodesic joining $(x_1,\ldots,x_n)$ and $(y_1,\ldots,y_n)$ with respect to the pull-back metric of the canonical metric on $(\R^+)^n$ under the mapping $T^{-1}:(0,a)^n\rar(\R^+)^n$, $T^{-1}(x_1,\ldots,x_n)=(-\log(x_1/a),\ldots,-\log(x_n/a))$; it is given by
$$
\sum_j x_j^{-2}\,dx_j\otimes dx_j~.
$$
The Haar measure on $S^1$
Of course, this is a particular case of the uniform distribution. However, we prefer to choose a gaussian measure for reference; so let $\mu_2$ be the measure $\mu_2(dx)\colon=\pi^{-1/2}e^{-x^2}\,dx$ on $\R$. For this measure we have $A(x,y)=\{(1-s)x+sy\}$ and $v(x,y)=s(1-s)(x-y)^2$. Define $T:\R\rar S^1$ by $T(-x)=-T(x)$ and for $x>0$
$$
T(x)=\left(\sin\Big(2\sqrt\pi\int_0^x e^{-t^2}\,dt\Big),
\cos\Big(2\sqrt\pi\int_0^x e^{-t^2}\,dt\Big)\right)~.
$$
Then the Lipschitz constant of $T$ is $2\sqrt\pi$ and the image measure of $\mu_2$ under $T$ is the normalized Haar measure $m$ on $S^1$. Thus for the measure $m$ on $S^1$ we have
\begin{equation}\tag{9}\label{9}
v_s(x,y)=\frac{s(1-s)}{4\pi}d(x,y)^2
\quad\mbox{and set}\quad
A_s(x,y)
=\left\{T\left((1-s)T^{-1}(x)+sT^{-1}(y)\right)\right\}~.
\end{equation}
The curve $t\mapsto T((1-t)T^{-1}(x)+tT^{-1}(y))$ is the geodesic joining $x,y\in S^1$ with respect to the pull-back metric of the canonical euclidean metric of $\R$ under $T^{-1}$. Similarly the normalized volume on $S^n$ can be constructed as the image measure of the gaussian measure on $\R^n$ under a mapping $T:\R^n\rar S^n$ which depends only on the distance to the origin.
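Numerically, with $2\sqrt\pi\int_0^xe^{-t^2}\,dt=\pi\,\mathrm{erf}(x)$, the push-forward property is easy to test (a sketch assuming numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.special import erf

# T(x) = (sin(pi*erf(x)), cos(pi*erf(x))) pushes pi^{-1/2} exp(-x^2) dx
# to the uniform measure on S^1, since erf(X) is uniform on (-1,1).
rng = np.random.default_rng(0)
X = rng.normal(0.0, np.sqrt(0.5), 100000)   # density pi^{-1/2} e^{-x^2}
theta = np.pi * erf(X)                      # angle of T(X)
hist, _ = np.histogram(theta, bins=20, range=(-np.pi, np.pi))
print(hist / len(theta))                    # roughly 1/20 in every bin
\end{verbatim}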
The Laplace distribution
The Laplace measure $\mu(dx)=\frac12e^{-|x|}\,dx$ is the
convolution of $e^{-x}I_{\R^+}$ and $e^{x}I_{\R^-}$. The corresponding distortion functions and sets are
$v_1(x)=v_2(x)=2(s\wedge(1-s))\log\cosh(x/4)$ and
$$
A_1(x,y)=\{(1-s)x+sy-2v_1(x-y)\},
A_2(x,y)=\{(1-s)x+sy+2v_1(x-y)\}~.
$$
It follows that $v(x,y)=2v_1((x-y)/2)$ and $A(x,y)$ is the interval with center $(1-s)x+sy$ and length $2(s\wedge(1-s))|x-y|$ (see the sketch following the list below). Thus $A(x,y)$ is always a subinterval of the segment joining $x$ and $y$ and both coincide if $s=1/2$. By tensorization we obtain for the $n$-dimensional Laplace measure
$\mu(dx)=2^{-n}e^{-\norm{x}_1}\,dx$:
\begin{equation}\tag{10}\label{10}
v_s(x,y)=4(s\wedge(1-s))\sum_{j=1}^n\log\cosh((x_j-y_j)/8)
\end{equation}
and $A_s(x,y)$ is the parallelepiped with the following properties:
- The center of $A_s(x,y)$ is the point $(1-s)x+sy$.
- The edges of $A_s(x,y)$ are parallel to the coordinate axes and the diagonal is contained in the segment joining $x$ and $y$.
- The length of the diagonal of $A_s(x,y)$ is $2(s\wedge(1-s))\norm{x-y}$.
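The formula $v(x,y)=2v_1((x-y)/2)$, i.e. that the infimum over the decompositions sits at the midpoint, is easily checked (a sketch assuming numpy; the value of $s\wedge(1-s)$ is arbitrary):
\begin{verbatim}
import numpy as np

# v(x,y) = inf_{d1} [w(d1) + w(d-d1)] with w(t) = 2k*log cosh(t/4); by
# convexity and symmetry the infimum is attained at d1 = d/2.
k = 0.25                                    # k = s /\ (1-s), arbitrary
w = lambda t: 2 * k * np.log(np.cosh(t / 4))
for d in (0.5, 2.0, 10.0):
    d1 = np.linspace(-50, 50, 200001)
    assert abs((w(d1) + w(d - d1)).min() - 2 * w(d / 2)) < 1e-8
print("infimum attained at the midpoint")
\end{verbatim}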
By the lemma above, any symmetric probability measure $\mu(dx)=e^{-V}\,dx$ s.t. for all $x>0$: $V^\prime(x)\geq1$ is the image measure of the Laplace distribution under a Lipschitz mapping with constant at most $1$. If the random variable $X$ is $\g_\a$-distributed for some $\a\in(0,1)$, then the distribution of its symmetrization has this property.
Next we include a result by M. Ledoux, which indicates that actually any log-concave probability on $\R$ is the image measure of the Laplace distribution under a Lipschitz mapping. We need the following definition: Let $f$ be the density of a probability measure $\mu$ on $\R$ such that for all $x\in\R$: $f(x)> 0$; put $F(x)\colon=\int_{-\infty}^x f(y)\,dy$, then the function $I:[0,1]\rar\R$, $x\mapsto f(F^{-1}(x))$, is said to be the isoperimetric function of $\mu$.
Lemma: Suppose $\mu$ and $\nu$ are probability measures on $\R$ with everywhere positive densities $f$ and $g$ and isoperimetric functions $I$ and $J$ respectively, and let $T$ be the increasing mapping transporting $\mu$ onto $\nu$. Then $T^\prime\circ F^{-1}=I/J$; in particular the Lipschitz constant of $T$ is $\sup_{0<t<1}I(t)/J(t)$.
$\proof$
Putting $F(x)\colon=\int_{-\infty}^xf(t)\,dt$ and $G(x)\colon=\int_{-\infty}^xg(t)\,dt$, we have $F=G\circ T$ or $T=G^{-1}\circ F$. Thus
$$
T^\prime
=\big((G^{-1})^\prime\circ F\big)\,f
=\frac{f}{g\circ G^{-1}\circ F}
=\frac{f}{J\circ F}
\quad\mbox{i.e.}\quad
T^\prime\circ F^{-1}=I/J~.
$$
$\eofproof$
The isoperimetric function of the Laplace distribution is given by $I(t)=t\wedge(1-t)$. Since the isoperimetric function $J$ of any log-concave probability measure $\mu$ is a concave function on $(0,1)$, it follows that in this case the Lipschitz constant of $T$ is bounded by $I(1/2)/J(1/2)=1/(2g(m))$, where $m$ is the median of $\mu$. Since it's usually not so easy to calculate $g(m)$, the following result, which was communicated to me by M. Fradelizi, will be of interest:
$$
1/(2g(m))\leq\inf\{1/g(x):x\in\R\}~.
$$
Moreover in this case the function $J/I$ is decreasing on $(0,1/2)$ and increasing on $(1/2,1)$, which implies that $T$ is convex on $(-\infty,0)$ and concave on $(0,\infty)$. Typical examples are $\g_\a$-distributions for $\a\geq1$, where the Lipschitz constant is of magnitude $\sqrt\a$.
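For instance, for the standard gaussian as target measure the ratio $I/J$ can be computed explicitly (a sketch assuming scipy):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# For a gaussian target, J(t) = phi(Phi^{-1}(t)); with I(t) = t /\ (1-t)
# the ratio I/J peaks at t = 1/2, giving Lipschitz constant sqrt(2*pi)/2.
t = np.linspace(1e-6, 1 - 1e-6, 100001)
I = np.minimum(t, 1 - t)
J = norm.pdf(norm.ppf(t))
print((I / J).max(), np.sqrt(2 * np.pi) / 2)   # both ~ 1.2533
\end{verbatim}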
In order to compute the distortion function of image measures on
the real line we need the following
Lemma: Let $\mu$ be a measure on $\R$ admitting a PL type inequality with distortion function $v(x-y)$, where $v$ is even, and let $T$ be an odd increasing bijection of $\R$ such that $S\colon=T^{-1}$ has a derivative which is strictly monotone on $\R^+$ and such that for $z\neq0$ the infimum of $x\mapsto v(S(x)-S(x-z))$ is attained. Then the image measure $\mu_T$ admits the distortion function $u(x,y)\colon=v(2S((x-y)/2))$.
$\proof$
Putting $S=T^{-1}$, we have to prove that
$$
u(z)\leq\inf\{v(S(x)-S(x-z)):\,x\in\R\}~.
$$
By assumption, for $z\neq0$ the infimum of the function $x\mapsto v(S(x)-S(x-z))$ is attained and hence is a local minimum. Thus
$$
v^\prime(S(x)-S(x-z))(S^\prime(x)-S^\prime(x-z))=0
$$
at the minimum; since $S$ is odd and $S^\prime$ is strictly monotone on $\R^+$, this holds if and only if $x=x-z$ or $x=-(x-z)$,
i.e. if and only if $x=z/2$. There the value is $v(2S(z/2))=u(z)$.
$\eofproof$
For $p > 1$ let $\mu_p$ be the probability measure with density $\frac12c_pe^{-|t|^p}$, $c_p=1/\G(1+1/p)$. Then $\mu_p$ is the image measure of $\mu_1$ under a mapping $T$ s.t. $|T^\prime|\leq1/c_p$ and for $x > 0$: $T(x)\leq x^{1/p}$: Since $\mu_p$ is symmetric, $m=0$ is a median of $\mu_p$ and by the lemma above the Lipschitz constant is at most $1/c_p$. As for the second assertion, we have to show that
$$
F(x)\colon=\int_0^{x^p} e^{-t}\,dt-\int_0^{x}c_pe^{-t^p}\,dt\leq0~.
$$
Obviously: $F(0)=F(\infty)=0$. Moreover, the equation $F^\prime(x)=0$ is equivalent to $px^{p-1}-c_p=0$, hence there is a single solution $x_p$ to this equation; since $F$ decreases on $[0,x_p]$ and increases on $[x_p,\infty)$ to $F(\infty)=0$, it follows that $F(x)\leq0$.
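Both assertions can be tested numerically; scipy's gennorm$(p)$ has exactly the density $\frac12c_pe^{-|t|^p}$ (a sketch; $p=3$ is an arbitrary choice):
\begin{verbatim}
import numpy as np
from scipy.special import gamma as Gamma
from scipy.stats import gennorm, laplace

# T = G_p^{-1} o (Laplace cdf); check |T'| <= 1/c_p = Gamma(1+1/p)
# and T(x) <= x^(1/p) for x > 0.
p = 3.0
x = np.linspace(0.001, 15, 3000)
T = gennorm.ppf(laplace.cdf(x), p)
assert (np.diff(T) / np.diff(x) <= Gamma(1 + 1/p) + 1e-6).all()
assert (T <= x**(1/p) + 1e-9).all()
print("both bounds hold on the grid")
\end{verbatim}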
Let $F:\R\rar\TT$ be the covering map $x\mapsto e^{ix}$ and let $\mu$ be the image measure of $\mu_t(dx)=(4\pi t)^{-1/2}e^{-x^2/4t}\,dx$ under $F$; then $u(z_1,z_2)\colon=s(1-s)d(z_1,z_2)^2/4t$ is a distortion function and $A(z_1,z_2)\colon=\{z_se^{2\pi isn}:n\in\Z\}$ is a suitable set, where $z_s\colon=F((1-s)x_1+sx_2)$ for any choice of $x_j\in F^{-1}(z_j)$. If $s=p/q$ s.t. $p,q\in\N$, $(p,q)=1$, then $A(z_1,z_2)=\{z:z^q=z_1^{q-p}z_2^p\}$, in particular $A(z_1,z_2)$ contains exactly $q$ points.