Given a sequence $X_k$ of independent, identically distributed random variables, the book I am reading says I can form "truncated" random variables, i.e.

$X_k(n) = X_k \cdot \mathbf{1}_{\{\omega :\, |X_k(\omega)| \leq n\}}$

where $\omega$ denotes an outcome from the sample space. Define $S_n$ and $\hat{S}_n$ as:

$$
S_n = X_1 + X_2 + \dots + X_n
$$

$$
\hat{S}_n = X_1(n) + X_2(n) + \dots + X_n(n)
$$

So $\hat{S}_n$ is the sum of the $n$ truncated random variables. We also define $m_n$ as:

$$
m_n = \mathbb{E}(X_1(n))
$$

Since the $X_k$ all have the same distribution, we have $m_n = \mathbb{E}(X_k(n))$ for all $k \geq 1$.
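To make $m_n$ concrete, here is a small numerical sketch I put together (my own example, not from the book): with $X_k \sim \mathrm{Exp}(1)$, the truncated mean $m_n = \mathbb{E}(X_1(n))$ has the closed form $1 - (n+1)e^{-n}$ and should approach $\mathbb{E}(X_1) = 1$ as the truncation level $n$ grows.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# i.i.d. Exp(1) samples, so E(X_1) = 1 and the X_k are nonnegative
samples = rng.exponential(1.0, size=1_000_000)

estimates = []
for n in (1, 2, 5, 10):
    # X_1(n) = X_1 * 1{|X_1| <= n}: set the variable to 0 above level n
    truncated = np.where(samples <= n, samples, 0.0)
    m_n = truncated.mean()  # Monte Carlo estimate of E(X_1(n))
    exact = 1 - (n + 1) * math.exp(-n)  # closed form for Exp(1)
    print(f"n={n}: m_n ~= {m_n:.4f} (exact {exact:.4f})")
    estimates.append(m_n)
```

The estimates increase with $n$ and converge to $1$, which matches my understanding that truncation only matters for the (rare) large values of $X_k$.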

The book states that the following inequality is "evident": for any $\epsilon > 0$,

$$
P\bigg(\Big|\frac{S_n}{n} - m_n\Big| \geq \epsilon\bigg) \leq P\bigg(\Big|\frac{\hat{S}_n}{n} - m_n\Big| \geq \epsilon\bigg) + P\Big(\hat{S}_n \neq S_n\Big)
$$

But this is not clear to me at all, and I got lost. How do we obtain this inequality? (It seems that some of the $X_k$ in $S_n$ could be very large, making the left-hand probability larger than the first, truncated term on the right.)
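For what it's worth, I tried checking the inequality numerically (again my own experiment, with parameters I chose myself), using standard Cauchy variables so that truncation actually kicks in; by symmetry $m_n = 0$ in this case:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50          # number of summands (also the truncation level)
eps = 0.5       # the epsilon in the inequality
trials = 20_000  # Monte Carlo repetitions

# Heavy-tailed i.i.d. samples (standard Cauchy), shape (trials, n)
X = rng.standard_cauchy((trials, n))

# Truncated variables X_k(n) = X_k * 1{|X_k| <= n}
Xt = np.where(np.abs(X) <= n, X, 0.0)

S = X.sum(axis=1)    # S_n for each trial
St = Xt.sum(axis=1)  # hat{S}_n for each trial

m_n = 0.0  # E(X_1(n)) = 0 by symmetry of the Cauchy distribution

lhs = np.mean(np.abs(S / n - m_n) >= eps)
rhs = np.mean(np.abs(St / n - m_n) >= eps) + np.mean(St != S)

print(f"LHS ~= {lhs:.4f}, RHS ~= {rhs:.4f}, LHS <= RHS: {lhs <= rhs}")
```

In every run I tried, the estimated left-hand side was below the estimated right-hand side, even though $\hat{S}_n \neq S_n$ happens in a large fraction of trials, but I still don't see why the inequality must hold in general.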