"Fundamental Theorem" of Memory

This isn't particularly deep, but I'd never focused on it crisply before: the memory store in a general-purpose computing system must be an information amplifier -- you must be able to get more information back as a value than you present as a key.

It is this property that lets one construct complex data structures: a single retrieval can yield both a complete key for a subsequent fetch and some additional information.
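A minimal sketch of this amplification (the dictionary, keys, and payloads here are all invented for illustration): each value holds a payload *and* a complete key for the next fetch, which is exactly what a linked structure needs.

```python
# Memory as an information amplifier: presenting one key returns
# a (payload, next_key) pair -- more information than went in.
memory = {
    100: ("a", 200),   # payload plus a complete key for the next fetch
    200: ("b", 300),
    300: ("c", None),  # None marks the end of the chain
}

def walk(mem, key):
    """Follow the chain from `key`, collecting payloads."""
    out = []
    while key is not None:
        payload, key = mem[key]   # one fetch yields data AND the next key
        out.append(payload)
    return out

print(walk(memory, 100))   # -> ['a', 'b', 'c']
```

If each value carried only a next key and no payload, the memory would amplify nothing and the chain could represent nothing beyond its own shape.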

In principle, any amount of information in excess of the key size is sufficient -- even a microbit. In practice, it is convenient and symmetric if the memory produces two keys' worth of information in response to the presentation of a single key: this gives us the Lisp "cons cell", a simple data structure capable of emulating any other data structure to within (I think) a log(N) factor of efficiency.
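To make the log(N) claim concrete, here is a hypothetical sketch (all names mine) of cons cells emulating a vector: elements are stored at the leaves of a balanced binary tree of cells, so each indexed lookup follows O(log N) cells instead of the one step an array would take.

```python
# A cons cell: one key (the cell itself) yields two keys' worth
# of information (car and cdr). Plain tuples stand in for cells.
def cons(a, d): return (a, d)
def car(c): return c[0]
def cdr(c): return c[1]

def build(items):
    """Pack a non-empty list into a balanced tree of cons cells."""
    if len(items) == 1:
        return items[0]            # a leaf holds the element itself
    mid = len(items) // 2
    return cons(build(items[:mid]), build(items[mid:]))

def fetch(tree, i, n):
    """Index element i of an n-element tree: O(log n) cell follows."""
    if n == 1:
        return tree
    mid = n // 2
    if i < mid:
        return fetch(car(tree), i, mid)
    return fetch(cdr(tree), i - mid, n - mid)

v = build([10, 20, 30, 40, 50])
print(fetch(v, 3, 5))   # -> 40
```

The same trick works in the other direction too: arrays can emulate cons cells with no slowdown, which is one way of seeing why the cons cell is the more primitive of the two.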

The centrality of this "Theorem" is obscured in the special case of the von Neumann machine by its obsession with small integers: main memory is densely indexed by sequential keys, so we can trivially generate a "next" key from any key, and hence fetch an unlimited amount of information starting from a single key, using the convention of storing related data in consecutive cells.
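The dense-indexing trick can be sketched in a few lines (the list-as-RAM and the sentinel convention are illustrative assumptions, in the spirit of a C string): a single base key reaches arbitrarily many cells because "next" is just addition.

```python
# Von Neumann style: memory densely indexed by small integers.
# One base key reaches any number of consecutive cells, because
# the "next" key is generated implicitly as key + 1.
ram = ["h", "e", "l", "l", "o", 0]   # 0 is an end-of-data sentinel

def read_string(mem, base):
    out, k = [], base
    while mem[k] != 0:
        out.append(mem[k])
        k += 1               # the trivially generated "next" key
    return "".join(out)

print(read_string(ram, 0))   # -> 'hello'
```

Note that no cell here spends any of its information budget on a successor key; the addressing convention supplies it for free.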

However, in a more general memory in which keys are distributed randomly through a large, structured space, this trick of implicitly generating a "next" key from the current key may not be easy, or even possible.
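One concrete case of such a memory (a content-addressed store, used here purely as an illustration) makes the contrast visible: keys are cryptographic hashes scattered through a huge space, `key + 1` is meaningless, and every record must spend part of its value explicitly naming its successor.

```python
# Content-addressed memory: a record's key is the hash of its
# payload, so keys land "randomly" in a vast space and there is
# no arithmetic "next". Successor keys must be stored explicitly.
import hashlib

store = {}

def put(payload, next_key=None):
    """Store (payload, next_key) under the payload's hash; return the key."""
    k = hashlib.sha256(payload.encode()).hexdigest()
    store[k] = (payload, next_key)
    return k

# Build a chain back-to-front, threading each key into its predecessor.
k3 = put("gamma")
k2 = put("beta", k3)
k1 = put("alpha", k2)

def follow(k):
    """Traverse the explicit chain of successor keys."""
    out = []
    while k is not None:
        payload, k = store[k]
        out.append(payload)
    return out

print(follow(k1))   # -> ['alpha', 'beta', 'gamma']
```

Here the "Theorem" is doing real work: because each fetch returns more than a key's worth of information, linked structures remain constructible even though the implicit-next trick is gone.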