Small Integers Considered Harmful

One hack is to apply the universal tuple
representation to these 2D fields.

From a set-of-tuples point of view, a 2D
vector field is a set of tuples in which

1) two tuple entries are implicit instead of explicit: (x,y)

2) there are special constraints (continuity or such) on
   those two special tuple slots

3) the hardware implementation makes nearest-neighbor
   operations cheaper on (x,y) than on the other slots (perhaps).

Analytically:

 -> Storing some of the information in
    a different format from the rest often adds pointless
    complexity;

 -> Storing stuff implicitly rather than explicitly usually
    obfuscates the analysis;

 -> Hardware issues really should be suppressed.

All of which suggest that from an analytical point
of view, vector spaces are perhaps (at least
sometimes) better understood as (quite possibly
infinite) sets of tuples with certain properties
assumed/asserted/given over certain tuple slots.
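
By way of a toy sketch (Python here purely for
concreteness; the field and its values are made up),
here is the same little 2D field stored implicitly,
with (x,y) living only in the indexing structure,
and explicitly, as a flat set of tuples in which the
coordinates are ordinary slots:

    # Implicit: a 4x4 grid of 2D vectors; x and y exist
    # only as list indices.
    implicit_field = [[(0.0, 0.0) for _ in range(4)]
                      for _ in range(4)]
    implicit_field[1][2] = (0.5, -0.25)

    # Explicit: a flat set of 4-tuples (x, y, vx, vy);
    # whatever constraints hold on the coordinate slots
    # (grid spacing, continuity) are stated separately
    # rather than baked into the storage layout.
    explicit_field = {(x, y, *implicit_field[x][y])
                      for x in range(4)
                      for y in range(4)}

    assert (1, 2, 0.5, -0.25) in explicit_field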

In particular, when normal spatial intuition gives
out or is misleading, the explicit representation
may be more perspicuous than the implicit
representation.  I am thinking especially of the
situation in quantum mechanics when spacetime
topology goes foamy-weird down near the Planck
size domain.  Thinking in terms of twisted sheets
may overwhelm, but thinking in terms of
constraints on certain tuple slots in a large
tuple set may allow productive consideration even
when "space" stops having a topologically simple,
continuous distance metric?

---

In the biological context at hand, the brain's 2D
maps often have wildly nonlinear coordinate
mappings (e.g., the retina coordinate system; the
"homuncular" sensory and motor maps on the central
sulcus...), so treating the coordinates as
explicit instead of implicit may make practical
sense anyhow on a von Neumann emulation.  If your
field vectors are of dimension 10-100 anyhow
(reasonable guesses for actual brain maps?),
adding two more to explicate the coordinate
mapping isn't a major added overhead.
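
A back-of-the-envelope sketch of that overhead claim
(the warp function and the numbers below are invented,
standing in for some nonlinear retinotopic or
homuncular coordinate mapping):

    import math

    def explicit_tuple(i, j, features,
                       warp=lambda i, j: (math.log1p(i),
                                          math.sqrt(j))):
        """Prepend warped coordinates (a stand-in for a
        nonlinear map coordinate system) to the feature
        vector as two extra explicit slots."""
        u, v = warp(i, j)
        return (u, v, *features)

    features = [0.1] * 50        # a 50-dimensional field vector
    row = explicit_tuple(3, 7, features)
    print(len(row), "slots; coordinate overhead:",
          2 / len(row))          # 52 slots, about 4%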

Perhaps the coordinate axes of a 1D or 2D brain
map should be understood somewhat in the spirit of
sorting a table in normal programming?  It lets
you process the table contents in a more
interesting fashion, and locate neighboring values
along that axis more easily, but doesn't
essentially alter the table contents or meaning:
More of an implementation efficiency issue than a
fundamental analytical issue?
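
A tiny illustration of the sorting analogy (rows made
up): ordering by one slot makes neighbors along that
axis cheap to find, while the set of rows, and hence
the table's meaning, is unchanged:

    import bisect

    rows = [(0.9, "c"), (0.1, "a"), (0.5, "b")]  # (coordinate, payload)
    by_coord = sorted(rows)     # an implementation convenience only

    def neighbors(coord):
        """Rows just below and just above a coordinate value."""
        i = bisect.bisect_left(by_coord, (coord,))
        return (by_coord[max(i - 1, 0)],
                by_coord[min(i, len(by_coord) - 1)])

    print(neighbors(0.6))       # ((0.5, 'b'), (0.9, 'c'))
    assert set(rows) == set(by_coord)   # same contents, different order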

-

An opposing point of view would be that one is
assuming the 2D field to be sufficiently sampled
so as to approximate a continuous scalar field:
The other tuple values may not possess this
property.

I think the answer is more to set up a formalism
capable of representing/remembering such slot
properties, than to restrict oneself to the
implicit formulation.  The definition or schema
or whatever for a given tupleset needs to note
when the values in a given slot possess the
property of density (to use arithmetic-theoretical
terminology) or when a given set of tuple slots
forms a key (to use relational database
terminology).
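
A minimal sketch of what such a schema might record
(SlotSpec, dense, and part_of_key are names invented
for illustration, not an existing formalism):

    from dataclasses import dataclass, field

    @dataclass
    class SlotSpec:
        name: str
        dense: bool = False        # values assumed to approximate a continuum
        part_of_key: bool = False  # slot participates in the key

    @dataclass
    class TuplesetSchema:
        slots: list
        tuples: set = field(default_factory=set)

        def key_slots(self):
            return [s.name for s in self.slots if s.part_of_key]

    field_2d = TuplesetSchema(slots=[
        SlotSpec("x", dense=True, part_of_key=True),
        SlotSpec("y", dense=True, part_of_key=True),
        SlotSpec("activation"),    # no density assumed for this slot
    ])
    print(field_2d.key_slots())    # ['x', 'y']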

-

     Small Integers Considered Harmful       :)

Computer science (or computer programming, at
least) has never really dealt extensively and
routinely with infinite sets, but perhaps it is
time to do so.  If one defines a tupleset as
infinite along a given slot (dimension), but
either described analytically or declared to be
bandwidth-limited by a given value and hence
satisfactorily approximable by sampling at or above
the Nyquist rate (twice that bandwidth), one can and
perhaps should work conceptually in terms of
datasets/spaces which are infinite and perhaps
continuous along some dimensions while being
discrete and perhaps finite along other
dimensions.
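
A sketch of what declaring such a slot might look
like (the class and its bookkeeping are hypothetical;
only the Nyquist arithmetic, sample rate equal to
twice the bandwidth, is standard sampling theory):

    import math

    class BandLimitedAxis:
        """A conceptually continuous, infinite slot, declared
        band-limited so a finite sampling at or above the
        Nyquist rate can stand in for it."""

        def __init__(self, bandwidth_hz):
            self.bandwidth = bandwidth_hz
            self.sample_rate = 2.0 * bandwidth_hz   # Nyquist rate

        def sample(self, f, duration):
            """Tabulate f over [0, duration) at the Nyquist rate."""
            dt = 1.0 / self.sample_rate
            n = int(duration * self.sample_rate)
            return [(k * dt, f(k * dt)) for k in range(n)]

    axis = BandLimitedAxis(bandwidth_hz=10.0)
    signal = lambda t: math.sin(2 * math.pi * 3.0 * t)  # 3 Hz, within band
    samples = axis.sample(signal, duration=1.0)
    print(len(samples), "samples stand in for the axis")  # 20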

Conventional Algolic style programming has always
limited itself to small integers and smaller sets
of them, and many of the nastiest limitations of
the resulting programs seem to derive directly
from the poverty of this dataspace.

"Scientific" programming has been dealing with
large matrices and floating-point datasets for
some time: Perhaps this should be seen not as a
specialized aberration, but as an indication that
it is almost impossible to construct computations
relating in interesting ways to external reality
without leaving behind the prison of small
integers and smaller sets of them?

Can we learn to construct analogues in higher
dataspaces of the sorts of computations we now
perform on small discrete enumerated finite
functions (i.e., tables of small integers)?  Can
we find a general paradigm in which we replace
discrete if-then-else style control structures
with continuous operations like minimization of an
energy function and iteration of a continuous map
to a fixpoint?
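
Two toy instances of that continuous style (the
particular map and energy below are arbitrary):
iterating a contraction to its fixpoint in place of
branching logic, and minimizing a simple energy by
gradient descent:

    import math

    def iterate_to_fixpoint(f, x, tol=1e-10, max_iter=10_000):
        """Apply f repeatedly until successive values agree to tol."""
        for _ in range(max_iter):
            nxt = f(x)
            if abs(nxt - x) < tol:
                return nxt
            x = nxt
        return x

    print(iterate_to_fixpoint(math.cos, 1.0))   # ~0.739085 (Dottie number)

    def minimize(grad, x, rate=0.1, steps=200):
        """Plain gradient descent on an energy with gradient grad."""
        for _ in range(steps):
            x -= rate * grad(x)
        return x

    # Energy E(x) = (x - 2)**2 has gradient 2*(x - 2); minimum at x = 2.
    print(minimize(lambda x: 2 * (x - 2), x=0.0))   # ~2.0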

The CommonLisp appendix on "Series" claims to be
able to replace explicit looping operations by
their general series operators in virtually all
cases: I'm going to take a look at them and see
whether they provide not just a nice abstraction of
what we do with loops, but a model which
generalizes naturally to continuous spaces.  (I
also want to look again at what APL's Ken Iverson
is doing with his "J" variant...)
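
Not the Series operators themselves, nor J, but a
rough Python analogue of the idea: an explicit
accumulation loop rewritten as one whole-sequence
expression, the style that might generalize to
continuous spaces.

    data = [3, 1, 4, 1, 5, 9, 2, 6]

    # Explicit loop: element-at-a-time control flow.
    total = 0
    for x in data:
        if x % 2 == 0:
            total += x * x

    # Series-ish pipeline: map, filter, and reduce over
    # the sequence as a whole.
    total_series = sum(x * x for x in data if x % 2 == 0)

    assert total == total_series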