95Mar27

Just an intellectual burp as I finish digesting this particular idea:

If we accept "No Silver Bullet II", then verifying a program is vastly harder than merely running it, in general: Most programs run in more or less linear time, or at worst polynomial time, while verification is asserted to be of exponential complexity in general. Does this make sense?

I think it does. Put most baldly, verification requires running the program on all possible sets of inputs, and there are exponentially many of them. It -should- be harder to do this than to run the program on a specific dataset.
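To make the cost gap concrete, here is a minimal Python sketch -- the function names are mine, purely for illustration. One run of a program is cheap, but exhaustive verification means one run per possible input, and the input space doubles with every bit of input width:

    def program(x):
        # Stand-in for "running the program on a specific dataset":
        # one execution, trivial work.
        return 2 * x + 1

    def verify_exhaustively(bits):
        # Stand-in for "running the program on all possible inputs":
        # 2**bits executions -- exponential in the input width.
        return all(program(x) % 2 == 1 for x in range(2 ** bits))

    print(verify_exhaustively(8))    # 256 runs: instant
    # verify_exhaustively(64)        # 2**64 runs: hopeless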

At first blush, the above is ridiculous: To verify a program, we don't literally need to consider all input datasets. We can often dispose of all possible integer values for a given input just by considering the zero, negative and positive cases, say.
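Here is a sketch of that three-case trick in Python, assuming a toy sign abstraction (the names are mine, not any standard library): three abstract runs stand in for infinitely many concrete ones.

    NEG, ZERO, POS = "neg", "zero", "pos"

    def abstract_negate(sign):
        # Negation maps each sign case to exactly one sign case,
        # so three abstract runs cover every concrete integer.
        return {NEG: POS, ZERO: ZERO, POS: NEG}[sign]

    # One check per abstract value verifies "negation flips the
    # sign" for all integers at once:
    assert abstract_negate(POS) == NEG
    assert abstract_negate(NEG) == POS
    assert abstract_negate(ZERO) == ZERO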

But I think that on deeper reflection, the argument is in fact essentially correct. When doing verification, we need in some sense to symbolically execute the program, retaining not the actual state of the program, but rather an abstraction of the >set< of states the program can reach, in a process somewhat akin to the LALR(1) state construction algorithm. Conditional statements will tend to split each pre-existing state, doubling the size of the set of potential states which need to be considered, making total verification work exponential in the number of conditional statements in the program.

We can be clever and avoid this effect much of the time, of course. But if we can only do this some fixed percentage of the time, such as 99%, say, then the verification cost remains exponential in program size, and all we have done is shrink the base of the exponential a bit: We're still going to hit a soft but very strong wall at a certain program size, and never get beyond it.
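A crude Python model of the doubling effect, under my own deliberately simple assumptions -- every conditional independently splits every abstract state the verifier is carrying:

    def split_on_conditional(states):
        # Each pre-existing abstract state forks into a "branch
        # taken" and a "branch not taken" successor.
        return {(s, taken) for s in states for taken in (True, False)}

    states = {()}                  # one initial abstract state
    for _ in range(16):            # sixteen conditionals in sequence
        states = split_on_conditional(states)
    print(len(states))             # 2**16 = 65536 states to examine

    # Pruning 99% of the splits turns 2**n into roughly
    # (2**0.01)**n: a much smaller base, but still exponential.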

The above is informal, jumps from one perspective to another without apology, and takes a different view of the verification process than is customary today, operational semantics being rather frowned upon at present, but I think it correctly captures the nature of the problem, and that correctly puzzling it out from other perspectives will therefore eventually yield the same conclusion, perhaps after a greater or lesser amount of analysis.

Thus, for the long term, we likely really do want to focus on systems which are not guaranteed to return the correct result, merely guaranteed either to return a correct result or else to drop into an exception handler with an appropriate error message -- something which I think can be done within practical efficiency limits.
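A final Python sketch of that discipline -- the names are mine, and sorting is just a stand-in problem. Checking one answer is cheap at run time, even when verifying the implementation for all possible inputs is intractable:

    from collections import Counter

    def fast_sort(xs):
        # Stand-in for an unverified, possibly-buggy implementation.
        return sorted(xs)

    def checked_sort(xs):
        result = fast_sort(xs)
        # The check is linear: adjacent pairs in order, and the
        # output is a permutation of the input.
        if not all(a <= b for a, b in zip(result, result[1:])):
            raise AssertionError("fast_sort: output out of order")
        if Counter(result) != Counter(xs):
            raise AssertionError("fast_sort: output not a permutation")
        return result

    print(checked_sort([3, 1, 2]))   # [1, 2, 3], or an exception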