Can one assume that the constants/weights of a neural network with $n$ inputs are bounded by $2^{poly(n)}$?

For Boolean threshold gates on $n$ inputs, it is known that the real-valued weights of the gate can be assumed to be bounded by $2^{poly(n)}$ (see Proposition 1 in https://users-cs.au.dk/~arnsfelt/Papers/exactcircuits.pdf).
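To make the threshold-gate statement concrete, here is a minimal sketch (my own illustration, not taken from the cited paper): a gate $\mathrm{sign}(\langle w, x\rangle - t)$ with arbitrary real weights on $n=3$ Boolean inputs is replaced by a gate with small integer weights and threshold, found by brute force and verified on all $2^n$ inputs. The specific weights and the search bound `B` are illustrative choices.

```python
import itertools
import math

n = 3
w_real = [math.pi, -math.sqrt(2), 2.7]   # arbitrary real weights (illustrative)
t_real = 1.1                             # arbitrary real threshold

def gate(w, t, x):
    """Threshold gate: outputs 1 iff <w, x> >= t."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= t else 0

inputs = list(itertools.product([0, 1], repeat=n))
target = [gate(w_real, t_real, x) for x in inputs]

def find_integer_gate(target, inputs, B):
    """Brute-force an integer-weight gate computing the same Boolean function."""
    for w_int in itertools.product(range(-B, B + 1), repeat=len(inputs[0])):
        for t_int in range(-B, B + 1):
            if all(gate(w_int, t_int, x) == y for x, y in zip(inputs, target)):
                return w_int, t_int
    return None

print(find_integer_gate(target, inputs, B=4))
```

Of course this only checks one tiny gate; the proposition gives the general bound $2^{poly(n)}$ for every threshold gate on $n$ inputs.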

Is this known for arbitrary neural networks? In particular, I'm interested in the following question: let $N$ be a neural network that uses ReLU activations to compute a continuous function from $(0,1)^n \to (0,1)$. Can $N$ be approximated by a neural network (also using ReLU activations) of comparable size, but with constants bounded by $2^{poly(n)}$?
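To illustrate what I mean (a sketch only, with an arbitrary architecture, a stand-in bound $W = 2^{n^2}$, and the codomain constraint ignored): the naive attempt is to clamp every constant of $N$ to $[-W, W]$ and check how far the outputs drift on $(0,1)^n$. Naive clamping clearly need not work; the question is whether *some* ReLU network of comparable size with constants bounded by $2^{poly(n)}$ achieves a good approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W_bound = 2.0 ** (n ** 2)          # stand-in for a 2^{poly(n)} magnitude bound

def relu(z):
    return np.maximum(z, 0.0)

def forward(params, x):
    """Two-hidden-layer ReLU network; the last layer is affine."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = relu(W1 @ x + b1)
    h2 = relu(W2 @ h1 + b2)
    return W3 @ h2 + b3

def clamp_params(params, bound):
    """Naive candidate approximator: every constant clamped to [-bound, bound]."""
    return [(np.clip(W, -bound, bound), np.clip(b, -bound, bound))
            for W, b in params]

# An arbitrary network with one deliberately huge constant.
params = [(rng.normal(size=(8, n)), rng.normal(size=8)),
          (rng.normal(size=(8, 8)), rng.normal(size=8)),
          (rng.normal(size=(1, 8)), rng.normal(size=1))]
params[1][0][0, 0] = 1e40           # far above 2^{poly(n)} for this small n

clamped = clamp_params(params, W_bound)

# Empirical sup-norm gap on random points of (0,1)^n (a heuristic check only).
xs = rng.uniform(0.0, 1.0, size=(2000, n))
gap = max(abs(forward(params, x)[0] - forward(clamped, x)[0]) for x in xs)
print("empirical sup gap after clamping:", gap)
```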