check my answer – Average time complexity of linear search

It is usually assumed that the average time complexity of linear search, i.e., deciding whether an item $i$ is present in an unordered list $L$ of length $n$, is $O(n)$ (linear).
I have read several proofs, usually assuming either:

  • that $i$ and the elements of $L$ take values in an infinite domain, in which case the probability of $i$ being in $L$ tends towards 0 and the algorithm will iterate over all elements.
  • that $i$ is present exactly once in $L$, at a uniformly random position, in which case the expected number of iterations is $(1 + 2 + \dots + n)/n \in O(n)$ (spelled out just below).
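
Spelled out, the second assumption gives
$$E \;=\; \frac{1}{n}\sum_{k=1}^{n} k \;=\; \frac{n+1}{2} \;\in\; O(n).$$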

However, I would like to consider the case where $i$ and the elements of $L$ take values in a finite domain of size $d$. Is this case studied somewhere? I have failed to find a proof online, but I have come up with one showing that the complexity is $O(d)$, and I would like to know whether it is correct.
I assume that each element of $L$ is a random value drawn uniformly from $(1..d)$. I want the expected number of iterations $E$ of the linear search algorithm. I also assume, without loss of generality, that $L$ has infinite size; I do not want to question the fact that linear search is $O(n)$ on average, which it clearly is. My point is that linear search is both $O(n)$ and $O(d)$, and I am interested in the case where $d \ll n$.

At the first iteration, there is a $1/d$ probability that the sought element is found and the algorithm stops. Otherwise, i.e. with probability $(d-1)/d$, the algorithm keeps iterating. Recursively, I have $E = 1 + E \cdot (d-1)/d$, which boils down to $E = d$.
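
Unrolled, the number of iterations follows a geometric distribution with success probability $1/d$, so the same value comes out of the series directly:
$$E \;=\; \sum_{k=1}^{\infty} k \cdot \frac{1}{d}\left(\frac{d-1}{d}\right)^{k-1} \;=\; \frac{1}{d}\cdot\frac{1}{\left(1-\frac{d-1}{d}\right)^{2}} \;=\; d,$$
using $\sum_{k \geq 1} k x^{k-1} = \frac{1}{(1-x)^2}$ with $x = \frac{d-1}{d}$.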

Also, if the recursion is unrolled, this seems related to the fact that $\sum_{n=0}^{\infty} \frac{n}{e^n} \in O(1)$. My proof, however, seems almost too easy. Is there a flaw somewhere?
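
For what it's worth, a quick Monte Carlo sketch of the model above seems to agree with $E \approx d$ (the helper name `search_length` and the parameter values are mine, just for illustration; I also draw the sought item uniformly from $1..d$, which should not change the expectation):

```python
import random

def search_length(d, rng):
    """Simulate one linear search over an (effectively unbounded) list whose
    elements are drawn uniformly from 1..d; return the number of iterations."""
    target = rng.randint(1, d)           # the sought item i
    steps = 0
    while True:
        steps += 1
        if rng.randint(1, d) == target:  # next element of L equals i?
            return steps

rng = random.Random(0)
d, trials = 10, 100_000
avg = sum(search_length(d, rng) for _ in range(trials)) / trials
print(f"d = {d}, empirical average iterations = {avg:.2f}")  # should be close to d
```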