Is there a connection between the Undecidability Theorem and “software complexity”?

I was reading Complexity: The Emerging Science at the Edge of Order and Chaos, and a certain passage really intrigued me. When discussing Chris Langton’s explorations of artificial-life algorithms, the author paraphrases Langton as saying the following about a certain class of evolutionary algorithms:

“This is the undecidability theorem, one of the deepest results of computer science: unless a computer program is utterly trivial, the fastest way to find out what it will do is to run it and see. There is no general purpose procedure that can scan the code and the input and give you the answer any faster than that. That’s why the old saw about computers only doing what their programmers tell them to do is both perfectly true and virtually irrelevant; any piece of code that’s complex enough to be interesting will always surprise its programmers. That’s why any decent software package has to be endlessly tested and debugged before it is released and that’s why the users always discover very quickly that the debugging was never quite perfect.”

But I can’t easily see how the undecidability theorem allows one to conclude that “unless a computer program is utterly trivial, the fastest way to find out what it will do is to run it and see”. Can such a connection be made formally, or was it more of an empirical observation?
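For reference, the “undecidability theorem” I assume the book is alluding to is the halting problem. My understanding of the classic diagonalization argument, sketched in Python (the `halts` function is hypothetical; the whole point is that no such total decider can exist):

```python
def halts(program, input_data):
    """Hypothetical general-purpose decider: would return True iff
    program(input_data) eventually halts. No such total, always-correct
    procedure can exist, which is what the stub below signals."""
    raise NotImplementedError("no such decider exists")


def paradox(program):
    """Do the opposite of whatever `halts` predicts about
    program run on its own source."""
    if halts(program, program):
        while True:  # halts() said it halts -> loop forever
            pass
    else:
        return  # halts() said it loops -> halt immediately


# Feeding `paradox` its own code yields a contradiction:
# if paradox(paradox) halts, then halts() said it loops, and vice versa.
# Hence `halts` cannot be implemented correctly for all inputs.
```

This shows no general procedure can decide halting for *all* programs, but I don’t see how it yields the stronger, quantitative-sounding claim that running the program is the *fastest* way to learn its behavior.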