Bounding an integral by splitting the interval of integration

My question concerns a method that was used to prove the following statement
$$
\lim_{A\to\infty} \frac{1}{A}\int_1^A A^{\frac{1}{x}}\, dx = 1.
$$

The method is as follows. First, we obtain the easy lower bound (immediate, since $A^{\frac{1}{x}} > 1$ on the interval of integration)
$$
\frac{1}{A}\int_1^A A^{\frac{1}{x}}\, dx > 1 - \frac{1}{A}, \qquad A > 1,
$$
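Before asking about the method itself, it may help to see the statement numerically. Below is a quick sanity check (a sketch only; the helper name `average` and the log-spaced trapezoidal rule are my choices, not part of the question):

```python
import numpy as np

def average(A, n=400_000):
    """Approximate (1/A) * integral_1^A A**(1/x) dx by the trapezoidal
    rule on log-spaced nodes (the integrand is very steep near x = 1)."""
    x = np.geomspace(1.0, A, n)          # grid dense near x = 1
    y = A ** (1.0 / x)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return integral / A

for A in [1e2, 1e4, 1e6, 1e8]:
    avg = average(A)
    assert avg > 1 - 1 / A               # the easy lower bound above
    print(f"A = {A:.0e}   average ~ {avg:.4f}")
```

The printed averages decrease toward $1$, but only slowly (the error seems to behave roughly like $1/\log A$), which hints at why a careful argument is needed for the upper bound.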

and then, to get a tight upper bound, we show that for every $\delta > 0$ and every $K > 0$ there exists $A_0(\delta, K) > 1$ such that for all $A > A_0$ we have $1 + \delta < K\log A < A$ and
$$
\frac{1}{A}\int_1^A A^{\frac{1}{x}}\, dx < \delta + A^{-\frac{\delta}{1+\delta}}\log A + e^{\frac{1}{K}}.
$$

We prove the last statement by dividing the interval of integration into $3$ pieces, $(1, 1+\delta)$, $(1+\delta, K\log A)$, and $(K\log A, A)$, and estimating the integrand on each piece. Sending first $A\to\infty$ and then $\delta \to 0$ and $K \to \infty$, we obtain
$$
1 \le \liminf_{A\to\infty} \frac{1}{A}\int_1^A A^{\frac{1}{x}}\, dx \le \limsup_{A\to\infty} \frac{1}{A}\int_1^A A^{\frac{1}{x}}\, dx \le 1.
$$
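For concreteness, here is how the three pieces can be estimated (my reconstruction of the standard argument, with the constant $K$ kept explicit in the middle term): on $(1, 1+\delta)$ use the trivial bound $A^{\frac{1}{x}} \le A$; on $(1+\delta, K\log A)$ use monotonicity in $x$, so $A^{\frac{1}{x}} \le A^{\frac{1}{1+\delta}}$; and on $(K\log A, A)$ use $A^{\frac{1}{x}} = e^{\frac{\log A}{x}} \le e^{\frac{1}{K}}$. This gives
$$
\frac{1}{A}\int_1^{1+\delta} A^{\frac{1}{x}}\, dx \le \delta, \qquad
\frac{1}{A}\int_{1+\delta}^{K\log A} A^{\frac{1}{x}}\, dx \le K A^{-\frac{\delta}{1+\delta}}\log A, \qquad
\frac{1}{A}\int_{K\log A}^{A} A^{\frac{1}{x}}\, dx \le e^{\frac{1}{K}},
$$
and as $A\to\infty$ with $\delta$ and $K$ fixed, the middle term vanishes, leaving $\limsup \le \delta + e^{\frac{1}{K}}$.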

My questions are: what motivates the division of the region of integration into $3$ pieces? Is there an intuitive explanation as to why separate bounds on the different regions are effective? Is this a general method that works in other situations as well? If so, I would love to see a sketch of an example and/or a general principle on splitting up regions of integration to obtain desired bounds.