How is the gradient of this Bayesian expression computed?

From Bayes' theorem:

\begin{equation}
p(x|y) = \frac{p(x)\, p(y|x)}{p(y)} = \frac{p(x)\, p(y|x)}{\int p(x)\, p(y|x)\, dx}
\end{equation}

If we take the gradient of the logarithm with respect to $x$ on both sides of the equation, we obtain:
\begin{equation}
\nabla_x \log p(x|y) = \nabla_x \log p(x) + \nabla_x \log p(y|x)
\end{equation}
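If I write out the logarithm of the first equation term by term, I think it reads:

\begin{equation}
\log p(x|y) = \log p(x) + \log p(y|x) - \log p(y)
\end{equation}

so the part I am unsure about is what happens to the $\log p(y)$ term when differentiating with respect to $x$.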

Could someone explain how this result was reached? My level of math is not very high, but as far as I understand, applying the chain rule gives the following expression:

\begin{equation}
\nabla_x \log p(x|y) = \frac{p(y|x)}{p(y)} \nabla_x \log p(x) + \frac{p(x)}{p(y)} \nabla_x \log p(y|x)
\end{equation}
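To see which of the two expressions holds, here is a small numerical sanity check I put together. The Gaussian prior and likelihood are my own arbitrary choices (not from any particular source), picked only so that the evidence and posterior have closed forms:

```python
import math

# Hypothetical Gaussian model, chosen only so everything is computable:
#   prior      p(x)   = N(x; 0, 1)
#   likelihood p(y|x) = N(y; x, 1)
# For these choices the evidence is p(y) = N(y; 0, 2), so the exact
# log-posterior is log p(x) + log p(y|x) - log p(y).

def log_normal(z, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (z - mu) ** 2 / (2 * var)

def log_prior(x):        return log_normal(x, 0.0, 1.0)
def log_lik(x, y):       return log_normal(y, x, 1.0)
def log_evidence(y):     return log_normal(y, 0.0, 2.0)
def log_posterior(x, y): return log_prior(x) + log_lik(x, y) - log_evidence(y)

def grad(f, x, h=1e-6):
    # central finite-difference approximation of df/dx
    return (f(x + h) - f(x - h)) / (2 * h)

x, y = 0.3, 1.2
lhs = grad(lambda t: log_posterior(t, y), x)
rhs = grad(log_prior, x) + grad(lambda t: log_lik(t, y), x)
# lhs and rhs agree: p(y) does not depend on x, so its gradient drops out
print(lhs, rhs)
```

With these choices the two gradients come out equal (here both are $-x + (y - x) = 0.6$), consistent with the sum-of-gradients expression and without the $p(y|x)/p(y)$ and $p(x)/p(y)$ factors.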