In probability theory, the multidimensional Chebyshev's inequality[1] is a generalization of Chebyshev's inequality, which bounds the probability that a random variable differs from its expected value by more than a specified amount; the multidimensional version gives the analogous bound for a random vector.
Let $X$ be an $N$-dimensional random vector with expected value $\mu = \operatorname{E}[X]$ and covariance matrix

$$V = \operatorname{E}\!\left[(X-\mu)(X-\mu)^{T}\right].$$

If $V$ is a positive-definite matrix, for any real number $t > 0$:

$$\Pr\left(\sqrt{(X-\mu)^{T}V^{-1}(X-\mu)} > t\right) \leq \frac{N}{t^{2}}.$$
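As an illustrative sanity check (not part of the cited statement), the bound can be probed by Monte Carlo simulation. In the sketch below, the dimension, mean, covariance matrix, and threshold are arbitrary choices made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (arbitrary choices, not from the source).
N, t, n_samples = 3, 2.5, 200_000
mu = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(N, N))
V = A @ A.T + N * np.eye(N)          # a positive-definite covariance matrix

# Draw samples of X with mean mu and covariance V (a Gaussian is used here,
# but the inequality holds for any distribution with this mean and covariance).
X = rng.multivariate_normal(mu, V, size=n_samples)

# Quadratic form (X - mu)^T V^{-1} (X - mu) for each sample.
diff = X - mu
y = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(V), diff)

empirical = np.mean(np.sqrt(y) > t)   # Pr( sqrt((X-mu)^T V^{-1} (X-mu)) > t )
bound = N / t**2                      # multidimensional Chebyshev bound

print(f"empirical probability ~ {empirical:.4f}, Chebyshev bound = {bound:.4f}")
```

The empirical frequency should fall at or below $N/t^{2}$; for light-tailed distributions such as the Gaussian used here, the bound is typically far from tight.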
Proof
Since $V$ is positive-definite, so is $V^{-1}$. Define the random variable

$$y = (X-\mu)^{T}V^{-1}(X-\mu).$$

Since $y$ is nonnegative, Markov's inequality applies:

$$\Pr\left(\sqrt{(X-\mu)^{T}V^{-1}(X-\mu)} > t\right) = \Pr\left(\sqrt{y} > t\right) = \Pr\left(y > t^{2}\right) \leq \frac{\operatorname{E}[y]}{t^{2}}.$$
Finally, by the cyclic property of the trace,

$$\operatorname{E}[y] = \operatorname{E}\!\left[(X-\mu)^{T}V^{-1}(X-\mu)\right] = \operatorname{E}\!\left[\operatorname{tr}\!\left(V^{-1}(X-\mu)(X-\mu)^{T}\right)\right] = \operatorname{tr}\!\left(V^{-1}V\right) = N,$$

which yields the stated bound.[1][2]
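The identity $\operatorname{E}[y] = N$ used in this last step can also be checked numerically. The following sketch is an illustrative Monte Carlo check with arbitrary choices of dimension and covariance, not part of the cited proof.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (assumed, not from the source): E[y] should equal the dimension N.
N, n_samples = 4, 500_000
mu = rng.normal(size=N)
A = rng.normal(size=(N, N))
V = A @ A.T + np.eye(N)               # a positive-definite covariance matrix

X = rng.multivariate_normal(mu, V, size=n_samples)
diff = X - mu
y = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(V), diff)

# Trace identity: E[(X-mu)^T V^{-1} (X-mu)] = tr(V^{-1} V) = N.
print(f"estimated E[y] = {y.mean():.3f}, dimension N = {N}")
```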
Infinite dimensions
There is a straightforward extension of the vector version of Chebyshev's inequality to infinite-dimensional settings.[3] Let $X$ be a random variable which takes values in a Fréchet space $\mathcal{X}$ (equipped with seminorms $\|\cdot\|_{\alpha}$). This includes most common settings of vector-valued random variables, e.g., when $\mathcal{X}$ is a Banach space (equipped with a single norm), a Hilbert space, or the finite-dimensional setting described above.
Suppose that $X$ is of "strong order two", meaning that

$$\operatorname{E}\left(\|X\|_{\alpha}^{2}\right) < \infty$$

for every seminorm $\|\cdot\|_{\alpha}$. This is a generalization of the requirement that $X$ have finite variance, and is necessary for this strong form of Chebyshev's inequality in infinite dimensions. The terminology "strong order two" is due to Vakhania.[4]
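As a simple illustration (an assumed example, not taken from the cited sources), let $X = (X_{1}, X_{2}, \ldots)$ be a sequence of independent Gaussian random variables with $X_{n} \sim N(0, 1/n^{2})$, viewed as a random element of the Hilbert space $\ell^{2}$. Then

$$\operatorname{E}\|X\|_{\ell^{2}}^{2} = \sum_{n=1}^{\infty} \operatorname{E}\left[X_{n}^{2}\right] = \sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6} < \infty,$$

so $X$ is of strong order two with respect to the $\ell^{2}$ norm.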
Let $\mu \in \mathcal{X}$ be the Pettis integral of $X$ (i.e., the vector generalization of the mean), and let

$$\sigma_{\alpha} := \sqrt{\operatorname{E}\|X-\mu\|_{\alpha}^{2}}$$

be the standard deviation with respect to the seminorm $\|\cdot\|_{\alpha}$. In this setting we can state the following:
- General version of Chebyshev's inequality. For every $k > 0$:
  $$\Pr\left(\|X-\mu\|_{\alpha} \geq k\sigma_{\alpha}\right) \leq \frac{1}{k^{2}}.$$
Proof. The proof is straightforward, and essentially the same as the finitary version. If $\sigma_{\alpha} = 0$, then $X$ is constant (and equal to $\mu$) almost surely, so the inequality is trivial.

If $\sigma_{\alpha} > 0$, then on the event $\{\|X-\mu\|_{\alpha} \geq k\sigma_{\alpha}\}$ we have $\|X-\mu\|_{\alpha} > 0$, so we may safely divide by $\|X-\mu\|_{\alpha}$ there. The crucial trick in Chebyshev's inequality is to recognize that

$$\frac{\|X-\mu\|_{\alpha}^{2}}{\|X-\mu\|_{\alpha}^{2}} = 1$$

on this event.
The following calculations complete the proof:
$$\begin{aligned}
\Pr\left(\|X-\mu\|_{\alpha}\geq k\sigma_{\alpha}\right)
&=\int_{\Omega}\mathbf{1}_{\|X-\mu\|_{\alpha}\geq k\sigma_{\alpha}}\,\mathrm{d}\Pr\\
&=\int_{\Omega}\left(\frac{\|X-\mu\|_{\alpha}^{2}}{\|X-\mu\|_{\alpha}^{2}}\right)\cdot\mathbf{1}_{\|X-\mu\|_{\alpha}\geq k\sigma_{\alpha}}\,\mathrm{d}\Pr\\
&\leq\int_{\Omega}\left(\frac{\|X-\mu\|_{\alpha}^{2}}{(k\sigma_{\alpha})^{2}}\right)\cdot\mathbf{1}_{\|X-\mu\|_{\alpha}\geq k\sigma_{\alpha}}\,\mathrm{d}\Pr\\
&\leq\frac{1}{k^{2}\sigma_{\alpha}^{2}}\int_{\Omega}\|X-\mu\|_{\alpha}^{2}\,\mathrm{d}\Pr &&\text{since }\mathbf{1}_{\|X-\mu\|_{\alpha}\geq k\sigma_{\alpha}}\leq 1\\
&=\frac{1}{k^{2}\sigma_{\alpha}^{2}}\left(\operatorname{E}\|X-\mu\|_{\alpha}^{2}\right)\\
&=\frac{1}{k^{2}\sigma_{\alpha}^{2}}\left(\sigma_{\alpha}^{2}\right)\\
&=\frac{1}{k^{2}}
\end{aligned}$$
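As in the finite-dimensional case, the seminorm version can be probed numerically. The sketch below is an illustrative Monte Carlo check under assumed choices: the truncated Gaussian sequence from the $\ell^{2}$ example above stands in for an infinite-dimensional random variable, with $\mu = 0$ by construction and the single $\ell^{2}$ norm playing the role of $\|\cdot\|_{\alpha}$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative check of Pr(||X - mu|| >= k*sigma) <= 1/k^2 for a truncated
# l^2-valued Gaussian sequence with X_n ~ N(0, 1/n^2); here mu = 0.
M, k, n_samples = 500, 2.0, 20_000
scales = 1.0 / np.arange(1, M + 1)                 # standard deviations 1/n

X = rng.normal(size=(n_samples, M)) * scales       # samples of the truncated sequence
norms = np.linalg.norm(X, axis=1)                  # l^2 norm ||X|| of each sample

sigma = np.sqrt(np.sum(scales**2))                 # sigma = sqrt(E ||X||^2)
empirical = np.mean(norms >= k * sigma)

print(f"empirical Pr(||X|| >= k*sigma) ~ {empirical:.4f}, bound 1/k^2 = {1 / k**2:.4f}")
```

The truncation level $M$ only approximates the infinite-dimensional variable, but since the tail variances $1/n^{2}$ are summable, the approximation error in $\sigma$ is negligible for this illustration.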
References