Einstein established mass-energy equivalence: E = mc**2

It is clear from the relationships between entropy, energy,
and information that something similar can be said
there, but what is it?

Consider Heisenberg's uncertainty principle, conventionally
written (ignoring factors of 2 and pi)

   DxDp > h

where Dx is uncertainty in x position and Dp is uncertainty
in momentum.

Now, momentum is mass*velocity, so this can equally validly
be written

   DxDvm > h

where m is mass.  Solving for m gives

        h
  m >  ----
       DxDv
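
Just to put rough numbers on that bound -- purely a sketch, with the Dx
and Dv values picked arbitrarily (SI units throughout):

# Sketch: evaluate the lower bound m > h / (Dx * Dv) for some
# arbitrarily chosen uncertainties, in SI units.
h  = 6.626e-34   # Planck's constant, J*s
Dx = 1e-10       # assumed position uncertainty, m (roughly an atomic radius)
Dv = 1e6         # assumed velocity uncertainty, m/s

m_bound = h / (Dx * Dv)   # kg
print(m_bound)            # ~6.6e-30 kg, within an order of magnitude of an electron mass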

But uncertainty is just the inverse of information:  As
our information about the x position grows, our uncertainty
about it shrinks in inverse proportion:  They are merely
two different ways of expressing the same thing.  Writing
Ix = 1/Dx and Iv = 1/Dv, we can rewrite the bound as

  m > hIxIv

where Ix is information on x position and Iv is
information on x velocity.
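
A trivial symbolic check, just to keep the substitution honest; it
assumes nothing beyond the definitions Ix = 1/Dx and Iv = 1/Dv used
above:

# Sketch: confirm that h/(Dx*Dv) and h*Ix*Iv are the same expression
# under the assumed definitions Ix = 1/Dx, Iv = 1/Dv.
from sympy import symbols, simplify

h, Dx, Dv = symbols('h Dx Dv', positive=True)
Ix, Iv = 1/Dx, 1/Dv

print(simplify(h/(Dx*Dv) - h*Ix*Iv) == 0)   # True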

Next, note that the inequality only came from consideration
of experimental inadequacies:  We do not always extract as
much information about a particle's x and v as we potentially
could.  The particle, however, contains the same amount of
information whether or not we happen to measure it well: If
our interest is fundamental physics rather than the current
state of experimental technique, we may validly rewrite as

  m = hIxIv

Finally, the above expresses the fact that the finite
information content of a particle may slosh back and
forth between x and v representations.  We may legitimately
choose to ignore this detail and instead write

  m = hI

which is in fact the information/mass equivalency counterpart to
Einstein's mass/energy equivalence equation: Mass and information are
the same thing, with Planck's constant h as the constant of
proportionality.
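
Taken at face value, and purely as a sketch of what this speculative
m = hI would assign to a familiar particle, the electron works out to:

# Sketch: the "information content" m = h*I would assign to an electron,
# i.e. I = m/h (which comes out in s/m^2; see the units discussion below).
h   = 6.626e-34   # Planck's constant, J*s
m_e = 9.109e-31   # electron rest mass, kg

I_e = m_e / h     # under this speculative identification
print(I_e)        # ~1.4e3, in units of s/m^2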


Returning for the moment to Einstein's formula, if

          2
    E = mc

then
  
     E
    ---  = m
      2
     c

but

    m = hI

so


     E
    ---  = hI
      2
     c


or

    E      2
    -  = hc
    I

or
         2
   E = hc I
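
The same chain of substitutions, restated symbolically just to keep the
algebra honest (a sketch using sympy):

# Sketch: substitute the proposed m = h*I into E = m*c**2.
from sympy import symbols, Eq, solve

E, m, c, h, I = symbols('E m c h I', positive=True)

einstein = Eq(E, m*c**2)                  # E = mc^2
proposed = Eq(m, h*I)                     # the speculative mass/information relation

combined = einstein.subs(m, proposed.rhs) # E = h*c**2*I
print(combined)                           # Eq(E, c**2*h*I)
print(solve(combined, I)[0])              # E/(c**2*h), i.e. I = E/(h*c**2)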

Now if (as I am told) h has units of "action" == energy*time,
then hc**2 has units of energy*area/sec.  This seems to imply
I has units sec/area.
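
A little dimensional bookkeeping to check that claim, tracking SI
base-unit exponents rather than trusting my arithmetic (a sketch; 'mul'
and 'inv' are just throwaway helpers):

# Sketch: track SI base-unit exponents (kg, m, s) to check that
# I = E / (h * c**2) comes out in s/m^2, i.e. sec/area.
from collections import Counter

def mul(*dims):
    """Multiply dimensions by adding their unit exponents."""
    total = Counter()
    for d in dims:
        total.update(d)
    return {k: v for k, v in total.items() if v != 0}

def inv(d):
    """Invert a dimension by negating its unit exponents."""
    return {k: -v for k, v in d.items()}

E  = {'kg': 1, 'm': 2, 's': -2}   # energy (joule)
h  = mul(E, {'s': 1})             # action = energy * time
c2 = {'m': 2, 's': -2}            # c squared

print(mul(E, inv(mul(h, c2))))    # I = E/(h*c**2) -> {'m': -2, 's': 1}, i.e. sec per area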

So bits==sec/area?  I find that totally baffling, which presumably
means it is either ridiculous or profound. :)

Going back to the inverse representation, does that mean
the units of uncertainty are area/second?  Something/second
is at least a more intuitive notion, a rate.

Is there any intuition to be had here?

Volume/second is something a moving 2-D wavefront sweeps
out:  area/second is something a moving line sweeps out.

Is this by any extreme chance a link to string theory??

Suppose we imagine a closed loop of string-theory style
string somehow sweeping its way along.

To start out simple, we might suppose it to be a photon, constrained
to travel at lightspeed.

If the wavelength of the photon is proportional to the length of the
string, then as the wavelength grows, the area/sec swept out grows
in direct proportion -- and of course the uncertainty also grows
in direct proportion.
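
A very rough numeric cartoon of that proportionality.  Everything here
is an assumption of the toy picture: the loop length is taken equal to
the wavelength, the swept area per second is taken as loop length times
c, and Dx is taken to be about one wavelength.

# Sketch of the toy picture: a loop whose length equals the photon
# wavelength, moving at c, sweeps out (length * c) of area per second.
c = 3.0e8  # lightspeed, m/s

for wavelength in (1e-7, 1e-6, 1e-5):   # assumed loop lengths, m
    sweep_rate  = wavelength * c        # m^2 swept out per second
    uncertainty = wavelength            # Dx ~ wavelength in this cartoon
    print(wavelength, sweep_rate, uncertainty)
# Doubling the wavelength doubles both the area/sec and the uncertainty.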

Is this how we make sense of uncertainty ~~ area/sec?

If so, is there more intuition to be gotten from information ~~ sec/area
than just deriving it as the inverse of the previous?

As the hypothetical string gets shorter, its information increases,
and it takes longer to sweep out a given area:  That gives a more
direct visualization of sec/area being proportional to information,
but not a very satisfyingly direct relationship.

What if the Universe is in some interesting sense one-dimensional?
And what if our string-theory string is now considered to be open
instead of a closed loop, and to be occupying some subset of the
full universal length?

Now we can clearly play something closely akin to the standard
information-theoretic bitcounting game (I think it counter-
productive to think -too- carefully about the details of this
argument at this point...):

*  If a string occupies the entire universal interval, it takes
   zero bits of information to specify it.

*  If a string occupies half the universal interval, it takes
   one bit to specify it:  Which half of the universal string?

*  If a string occupies one quarter of this universe, it
   requires two bits to specify it.

And so forth:  As a string gets shorter, its position carries
more information.
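
The same counting, written out as a throwaway sketch ('fraction' here is
the portion of the one-dimensional universe the string occupies):

# Sketch: bits needed to say where a string sits, as a function of the
# fraction of the one-dimensional universe it occupies.
from math import log2

for fraction in (1.0, 0.5, 0.25, 0.125):
    bits = log2(1 / fraction)   # 0, 1, 2, 3 ...
    print(fraction, bits)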

We can also turn the relation back the other way:  If we can
measure the information content of a string, and independently
its length, we can infer the length of the universe.
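
Turned around, the sketch looks like this (the numbers are made up;
'string_length' and 'bits' are the two quantities assumed to be
independently measurable):

# Sketch: given a measured string length and its measured positional
# information, infer the length of the whole one-dimensional universe.
string_length = 2.0e-6    # assumed measured length, m
bits          = 40.0      # assumed measured positional information, bits

universe_length = string_length * 2**bits
print(universe_length)    # ~2.2e6 m for these made-up inputs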

                                      2
Does this by any chance mean that E=mc  and m=hI plus standard
lab measurements are in principle sufficient to establish the
size of the universe?

This isn't clear to me. :)

(What about particles that travel at less than lightspeed?  I've
thought for years that all particles travel at lightspeed -- it is
just that some do so in a more or less straight line, and some are
doing something like travelling partly or wholly around one of the
rolled-up dimensions of string theory (say) instead of directly along
one of our familiar three unrolled dimensions.  You may take this, if
you like, as an explanation for why we "need" those extra rolled-up
dimensions: They provide the only way for matter to be (effectively)
slow-moving enough to form molecules, and hence intelligences -- the
Anthropic Principle strikes again *wrygrin*.)



---



The above has some interesting consequences to think about, actually.

If E = hc**2 I holds, and energy is conserved, then obviously information
is also conserved, at least in the intended sense: that of total information
content.

(A particular datum, such as the angle of some particular leaf at some
particular time in the Jurassic, may of course not be conserved.  True
quantum randomness and a fixed total information content together imply
that specific data are constantly being lost.)

Or consider a hydrogen electron dropping from a "high" to a "low"
orbital.

The bigger the energy difference, the more energetic will be the
photon given off: That's the conventional interpretation.  What is the
information interpretation?

Perhaps it is that when an electron drops from a very
large, extended orbital down to a small, confined orbital, its
position carries less information relative to the nucleus than before:
Only information sufficient to locate it to the same "absolute"
precision within a much smaller volume is now needed.

Information being conserved, the excess capacity must go somewhere, so
it is carried away in the form of increased precision in the position
of the emitted photon, which is to say, a shorter photon wavelength.

If this is correct, we should be able to compute the energy level of
an orbital as a fairly simple integral of the volume times the
electron probability density at each point in the volume, no?

I wonder if this is significantly different from, or simpler than, the
standard way of computing orbital energy levels, whatever it is.  (I
don't recall ever seeing such a computation done, or even described.)
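
For what it's worth, the standard elementary way is the Bohr/Rydberg
formula; a quick sketch of it follows (the helper names are mine, purely
illustrative):

# Sketch: textbook Bohr/Rydberg energies for hydrogen levels and the
# wavelength of the photon emitted when the electron drops between them.
E1_eV   = 13.6057    # hydrogen ground-state binding energy, eV
hc_eVnm = 1239.84    # h*c, in eV*nm

def level_energy(n):
    """Energy of the n-th hydrogen level, in eV (negative = bound)."""
    return -E1_eV / n**2

def emitted_wavelength(n_high, n_low):
    """Wavelength (nm) of the photon emitted in an n_high -> n_low drop."""
    dE = level_energy(n_high) - level_energy(n_low)   # positive, eV
    return hc_eVnm / dE

print(emitted_wavelength(2, 1))   # ~122 nm (Lyman-alpha)
print(emitted_wavelength(3, 2))   # ~656 nm (H-alpha)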