There seems to be a tacit assumption in the Computer Science field that scalability is good. I've never seen it argued; I think that in fact it is very probably just plain wrong.

Conceptually, the simplest way to build a big system is to take a small system and monotonically enlarge it. Whee! If you want a mammal, just zoom up an amoeba. If you want a molecule, just zoom up an atom. If you want a city, just zoom up a building. If you want a solar system, just zoom up a planet. If you want a galaxy, just zoom up a solar system. Simple. Problem solved. Next!

Except mammals aren't big amoebas, molecules aren't big atoms, cities aren't big buildings, solar systems aren't big planets, and galaxies aren't big solar systems.

                 *     *     * 
    * * * Big Systems Don't Scale * * *
                 *     *     * 

Big systems have never scaled.
Big systems don't scale.
Big systems never will scale.

Constructing a city will never be just constructing a building writ large, writing an application will never be just writing a subroutine writ large, and constructing a computer network will never be just constructing a computer writ large.

Dad has observed that the mammalian brain is not a scalable design -- and it's the best there is at what it does, so far.

I'm reading Christian Huitema's book on the Internet, and the Internet doesn't scale, in either direction: The routing algorithms used for single LANs don't work at the Autonomous System scale, nor vice versa, and the algorithms used at the Internet-wide level don't work at the Autonomous System scale, nor vice versa. And it's not for lack of trying or foresight: The split between levels has in each case been forced by bitter experience, when scaling a design up from one level to the next Just Didn't Work and urgent necessity forced the introduction of a new solution.
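
For concreteness, here is a minimal sketch of the kind of failure that forces those splits. It is entirely my own construction, not an example from Huitema: naive distance-vector routing, perfectly serviceable on a small network, falls into the classic count-to-infinity trap after a link failure. The three-node topology, the synchronous update rounds, and the hop-count ceiling of 16 (RIP's "infinity") are illustrative assumptions.

    # Count-to-infinity in naive distance-vector routing (toy model).
    # Topology A--B--C; each node tracks its hop count to A.
    INF = 16                                 # RIP's hop-count "infinity"
    dist = {"A": 0, "B": 1, "C": 2}          # converged state
    neighbors = {"B": ["A", "C"], "C": ["B"]}

    neighbors["B"].remove("A")               # the A--B link fails

    for rnd in range(1, 40):
        # Synchronous Bellman-Ford round: each node believes its
        # neighbors' (now stale) advertised distances to A.
        new = {n: min((dist[m] + 1 for m in neighbors[n]), default=INF)
               for n in ("B", "C")}
        dist.update({n: min(d, INF) for n, d in new.items()})
        print(f"round {rnd:2d}: B={dist['B']:2d}  C={dist['C']:2d}")
        if dist["B"] == INF and dist["C"] == INF:
            break

B and C each claim a route to A through the other, counting upward one hop per exchange until both hit the ceiling. On a network of a few nodes a ceiling of 16 is a tolerable patch; on an Internet-sized graph there is no acceptable ceiling, which is part of why inter-domain routing had to become a qualitatively different, path-vector design.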

I think the take-home lesson, not particularly new but perhaps underestimated in importance, is:

                  *         *          *
  * * *  Every order-of-magnitude change of scale   * * *
  * * *  produces  a qualitatively new problem, and * * *
  * * *  calls for a qualitatively new solution.    * * *
                  *         *          *

Maybe the quantitative cutoff is two orders of magnitude, or even three, but the basic principle remains: You can't build jet aircraft the way you build dragonflies, nor vice versa, and trying to do so will only produce valuable experience, not valuable results.
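
The dragonfly point isn't just rhetoric; it falls straight out of square-cube arithmetic. A back-of-the-envelope sketch (the scale factor of 300 is my own illustrative pick, nothing more):

    # Square-cube arithmetic: scale every linear dimension by k and
    # mass (volume) grows as k^3 while lifting area grows only as k^2,
    # so wing loading -- weight per unit of wing area -- grows as k.
    k = 300                          # ~10 cm dragonfly -> ~30 m wingspan
    mass_factor = k ** 3             # 2.7e7 x the weight
    area_factor = k ** 2             # 9.0e4 x the lifting surface
    loading_factor = mass_factor / area_factor   # = k = 300

    print(f"mass    grows {mass_factor:.1e}x")
    print(f"area    grows {area_factor:.1e}x")
    print(f"loading grows {loading_factor:.0f}x")

Flapping flight tuned to dragonfly wing loading is hopeless at three hundred times that loading, so the larger scale forces a qualitatively different design: rigid wings and jet engines.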

Systems behave radically differently at different scales, hence the optimal design is radically different at different scales, hence whenever cost or performance matters at all, you'll see different designs being used at the different scales of a system.

Which means that Teracomputer is basically just wrong to adopt as a design principle the notion that a computer should be able to scale indefinitely: The result will be a hybrid design which is markedly suboptimal at every scale. Since there is time, money, and motivation enough to design appropriate machines for each scale, Teracomputer will lose out in every market to machines designed to be efficient at that scale of operation.

You don't coordinate ten processors the way you coordinate a million processors, not unless your money comes from the Pentagon, and probably not even then.

You don't coordinate a billion processors the way you coordinate a million processors: You grab a fresh sheet of paper and work out an appropriate design for that scale from scratch.

... and you don't build nanoscale machines the way you build a Babbage engine, either, with sliding links, cams and joints! Sheesh! :)
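
To put toy numbers on those coordination claims, here is a little cost model. The 100 ns per operation and the two barrier schemes are my assumptions, not anyone's real machine: at ten processors, having everyone bang on one shared counter is the simple, correct design; at a million, that same design serializes the whole machine.

    import math

    T_OP = 1e-7   # assumed: 100 ns per atomic update or message hop

    def central_barrier(n):
        # Every processor increments one shared counter: n serialized ops.
        return n * T_OP

    def tree_barrier(n, fanout=2):
        # Combine arrivals up a tree: log_fanout(n) rounds of latency.
        return math.ceil(math.log(n, fanout)) * T_OP

    for n in (10, 10**6, 10**9):
        print(f"n={n:>13,}:  central {central_barrier(n):12.6f} s"
              f"   tree {tree_barrier(n):.7f} s")

At n = 10 the central counter costs a microsecond and the tree buys nothing worth its complexity; at n = 10^6 the counter costs a tenth of a second per synchronization while the tree still finishes in about two microseconds; and by n = 10^9 even the tree's fanout, topology, and failure handling deserve that fresh sheet of paper. Same problem, three scales, three designs.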

--- later ---

If my earlier argument is correct, then it would appear to follow that:

You cannot solve the problem of intelligence by working on toy problems "and then just scaling up by 10^10": Working on toy problems will inevitably produce toy solutions. Scaling up by 10^10 will result in a completely different problem, and your toy solution will probably be less an asset than a hindrance in solving the new problem.

That's yet another line of reasoning in support of my general assertion that one of the biggest stumbling blocks to solving the AI problem is that traditional divide-and-conquer attacks don't work on it...