Does a distributional error bound for deterministic algorithms imply the same error bound for randomized algorithms?


Consider some problem $P$, and assume the problem instance is sampled uniformly at random (u.a.r.) from some set $I$ of instances.
Let $p$ be a lower bound on the distributional error of deterministic algorithms on $I$, i.e., every deterministic algorithm fails on at least a $p$-fraction of the instances in $I$.
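In other words, writing $A(x)$ for the output of a deterministic algorithm $A$ on instance $x$ and $P(x)$ for the correct answer, the assumption is that
$$\Pr_{x \sim \mathcal{U}(I)}\bigl[A(x) \ne P(x)\bigr] \ge p \quad \text{for every deterministic algorithm } A.$$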

Does this also imply that every randomized algorithm $\mathcal{R}$ must fail with probability at least $p$ when, again, the input is sampled u.a.r. from $I$?

My reasoning is as follows:
Let $R$ be the random variable representing the random bits used by the algorithm.
\begin{align}
\Pr[\text{$\mathcal{R}$ fails}]
&= \sum_\rho \Pr[\text{$\mathcal{R}$ fails and $R=\rho$}] \\
&= \sum_\rho \Pr[\text{$\mathcal{R}$ fails} \mid R=\rho] \Pr[R=\rho] \\
&\ge p \sum_\rho \Pr[R=\rho] = p.
\end{align}

For the inequality, I used the fact that once we fix $R = \rho$, we effectively have a deterministic algorithm, which by assumption fails on at least a $p$-fraction of $I$; hence $\Pr[\text{$\mathcal{R}$ fails} \mid R=\rho] \ge p$ for every $\rho$.
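As a sanity check of the averaging step, here is a small numerical sketch. The setup is entirely made up (the correct answer is the XOR of two bits but the algorithm only sees the first one, so every deterministic rule fails on exactly half of $I$), and `alg`, `answer`, and the single coin flip are just placeholders for $\mathcal{A}_\rho$ and $R$:

```python
import itertools
from fractions import Fraction

# Toy sanity check of the averaging argument above.  Instances are
# pairs (x0, x1) drawn u.a.r., the correct answer is x0 XOR x1, but
# the algorithm only sees x0.  Every deterministic function of x0
# alone fails on exactly half of I, so p = 1/2, and the identity
#   Pr[R fails] = sum_rho Pr[R = rho] * Pr[A_rho fails on u.a.r. input]
# should force the randomized algorithm's failure probability >= p.

instances = list(itertools.product([0, 1], repeat=2))
coin_flips = [0, 1]  # R is a single uniform random bit here

def alg(rho, x):
    # A_rho: once rho is fixed, a deterministic rule of the visible bit x[0].
    return x[0] ^ rho

def answer(x):
    return x[0] ^ x[1]

# Failure fraction of each fixed-coin algorithm A_rho over uniform I.
fail = {rho: Fraction(sum(alg(rho, x) != answer(x) for x in instances),
                      len(instances))
        for rho in coin_flips}
p = min(fail.values())  # bound holding for every deterministic choice

# Overall failure probability: average over both rho and the instance,
# exactly the double sum in the derivation above.
overall = sum(Fraction(1, len(coin_flips)) * fail[rho] for rho in coin_flips)

assert overall >= p
print(f"p = {p}, Pr[R fails] = {overall}")  # both are 1/2 in this toy
```

Of course this only checks the identity on one example; it doesn't rule out a flaw in the general argument.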

I can’t find the flaw in my reasoning, but I would be quite surprised if this implication were indeed true.