manifest density

I have known uncertainty: a state unknown to the Greeks.

Gömböc

by James Buckland

A Gömböc is a wild little thing: a weeble with constant density. Weebles, also known as roly-poly toys in some countries, are children’s toys shaped like eggs, with a weight at the bottom so that, when pushed over, the toy rights itself. This works from virtually any position: the density gradient and the smooth shape ensure that the weeble will return to the upright position from any possible placement.

Mathematically, such a shape is called monostatic: it has exactly one stable resting position, so it rights itself from every other placement. This is a terribly hard field of study, and in 2006 a Hungarian team of scientists, Gábor Domokos and Péter Várkonyi, found the Gömböc, a three-dimensional mono-monostatic body: a convex shape with constant density (instead of being weighted, like a weeble) and exactly one stable and one unstable point of equilibrium, which will roll, wobble, and right itself from any position whatsoever. It works by the same principles: the relation between the object’s center of mass and the surface it rests on leads it to trace a path of contact with the ground which will, eventually, bring it upright.

Perhaps not coincidentally, the Gömböc (meaning, roughly, little sphere in Hungarian) is not too dissimilar from another shape known for righting itself: the turtle shell. In fact, as part of their research into the properties of the then-theoretical Gömböc, the duo spent time in Budapest measuring turtle shells. They visited pet shops, the Budapest Zoo, and the Hungarian Museum of Natural History, taking careful geometric measurements of the shells. Nature did get there first.

Nor is this the only case of nature putting geometry to work. The traditional elongated-oval shape of an egg, for instance, evolved in such a way that a disturbed egg traces only small, tight circles, instead of rolling far away from the nest. Red blood cells in mammals have a similar property: their biconcave-disc shape eases their flow through narrow vessels and gives them more surface area for gas exchange.


Shamir’s Secret Sharing

by James Buckland

Shamir’s Secret Sharing is a method of protecting information by splitting it into n parts, such that the number of parts k necessary to entirely reconstruct the information is less than n, often considerably, while any fewer than k parts reveal nothing at all. It has applications in cryptography, coding theory, and error-correcting codes.

Infinitely many parabolas pass through any two given points.

It works on a very simple geometric principle. Take two points on a plane. Through these two points pass any number of polynomials; we’ll stick with parabolas, so that the degree k stays low. We could select, say, three distinct parabolas passing through these two points, and nothing about the two points alone would tell us which of the three was intended: it takes three points to pin down a parabola, and in general k + 1 points to pin down a polynomial of degree k.

This is the basis for the entire algorithm. To share a secret among n people so that any k of them can recover it, hide the secret as the constant term of a random polynomial of degree k − 1, and give each person one point on its curve. Any k shareholders can interpolate the polynomial and read off the secret; any fewer are left staring at infinitely many candidate curves. The mathematics is not terribly complicated, but it works on this incredibly simple geometric principle.
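To make this concrete, here is a minimal sketch in Python, working over a prime field as the scheme requires; the prime P, the threshold k = 3, and the share count n = 5 are illustrative choices, not part of the description above.

import random

P = 2**127 - 1  # a prime large enough for small secrets

def split(secret, k, n):
    # Hide the secret as the constant term of a random degree-(k-1)
    # polynomial, and hand out n points on its curve as shares.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation, evaluated at x = 0 to recover the constant term.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(42, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 42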

South-Pointing Chariot

by James Buckland

The South-Pointing Chariot was an ancient Chinese design for a mechanical compass: a chariot with a pointing statue on it which would face south at all times, achieved via the use of precision differential gears connected to the wheels. That is to say, if the chariot were facing south, and it were turned 90˚ to face east, the statue on it would turn 90˚ in the other direction to compensate, so that it would always remain facing south. There are many arguments over the engineering possibility of such a construction: whether it would work, whether it would remain precise, how it was calibrated. For the purposes of this essay, we’ll assume the machine existed to within a certain level of accuracy.

An interesting side effect of the South-Pointing Chariot is as follows: even assuming it works as precisely as possible, it will still tend to drift over time. This is due to the curvature of the Earth, and a property called holonomy. This property is best described with a thought experiment.

You are an intrepid explorer, able to cross great distances on foot. You are also a salesman, holding one of those large plastic arrows used to advertise the location of a shop. You are at the equator, perhaps somewhere in South America, maybe Brazil. You are holding the arrow due north. Now walk a quarter of the Earth’s circumference north, until you’re at the north pole. For this whole time, the arrow has been facing north, until the very instant you hit the pole. Now it’s technically facing south (at the pole, every direction is south), but really it’s just tangent to the surface of the Earth, pointing along the meridian you walked up.

That’s fine. Turn right, ninety degrees, but compensate with the arrow, so that it’s still facing the same direction it was. That is: if you turn clockwise by 90˚, you rotate the arrow counterclockwise within your hands by 90˚, to compensate. Now, take a step off the pole. The arrow is facing east.

Don’t believe me? Walk south, to the equator again. You’re no longer Brazil-bound: you’ll end up somewhere in Africa, a quarter of the Earth’s circumference around the equator from Brazil. Your arrow is facing east. You walk back along the equator to Brazil (a long but convenient chain of archipelagos provides a path) and the arrow is still facing east.

Complete the entire cycle again, and the arrow will now be facing south. Every time you trace out this strange triangle, where every turn is a right angle, the arrow will rotate another 90˚ clockwise. If your triangle had a left turn at the north pole, it would have rotated 90˚ counterclockwise instead.

This is called holonomy: loosely, it is the rotation an arrow picks up during parallel transport around a loop on a curved surface, such as the Earth. Our particular path traces out a spherical triangle, but holonomy holds for a path of any shape and of any size; the smaller the loop, the smaller the rotation, in direct proportion to the area it encloses.
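In fact, for a loop on a sphere, the rotation equals the spherical excess of the enclosed region, a consequence of the Gauss–Bonnet theorem. Our triangle, with its three right angles, checks out exactly:

\theta = \alpha + \beta + \gamma - \pi = \frac{\pi}{2} + \frac{\pi}{2} + \frac{\pi}{2} - \pi = \frac{\pi}{2} = 90^\circ.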

Back to the South-Pointing Chariot. The more of mainland China this chariot might have traced, the more it would have drifted from its original southern bearing. A few experimental journeys could have made this drift very powerful information: based on the knowledge that a full trip from equator to pole to equator would turn the machine a full right angle, and that the drift of any closed circuit is proportional to the area it encloses, they might have calculated the radius of the Earth from a smaller trip, by proportion, as sketched below. Incredibly powerful stuff.
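Here is that proportion argument as a rough Python sketch; the surveyed loop area and the measured drift are hypothetical numbers, invented for illustration.

import math

# Holonomy on a sphere: drift angle (in radians) = enclosed area / R^2.
# Suppose a chariot circuit enclosing a surveyed 500,000 km^2 of terrain
# comes home pointing 0.70 degrees away from its original bearing.
area_km2 = 500_000
drift_rad = math.radians(0.70)

R = math.sqrt(area_km2 / drift_rad)
print(f"inferred radius of the Earth: {R:.0f} km")  # ~6,400 km; true value ~6,371 km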

Rollin Film

by James Buckland

Rollin Film is a helium-specific flavor of the Onnes Effect, a physical phenomenon in which superfluids climb the walls of their containers, their attraction to the walls and their zero viscosity together overcoming the effects of gravity.

Fluids are typically confined to the bottom of their container by the force of gravity pulling them down. In particularly narrow spaces, such as the roots and capillaries of trees, fluids can often exhibit capillary action, which relies on adhesive forces between the fluid and the walls of the container, as well as cohesive forces within the fluid itself. This cohesive force, known as surface tension, often forms a meniscus, a curve in the fluid’s surface caused by its attraction to or repulsion from the walls of the container. Of course, this capillary action is limited: it can’t climb too high, because of gravity.
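For a sense of scale, the equilibrium rise height is set by Jurin’s law, h = 2\gamma\cos\theta / (\rho g r); the quick Python check below uses textbook values for water in a hypothetical 0.1 mm glass capillary.

import math

gamma = 0.0728   # surface tension of water at 20 C, N/m
theta = 0.0      # contact angle on clean glass, radians (near-perfect wetting)
rho   = 1000.0   # density of water, kg/m^3
g     = 9.81     # gravitational acceleration, m/s^2
r     = 1e-4     # capillary radius: 0.1 mm, in meters

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"capillary rise: {h * 100:.1f} cm")  # ~14.8 cm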

But this capillary action is also limited by the viscosity of the fluid: how resistant it is to flowing, in a sense. Higher viscosity means more resistance to deformation; peanut butter, tar, and glass are incredibly viscous, whereas water, milk, and blood have a very low viscosity. The viscosity of a fluid is another important factor in determining how far, and how fast, it can creep.

However, when a fluid is cooled below a specific temperature (for helium-4 this is called the lambda point, about 2.17 kelvins), it becomes a superfluid: it conducts heat almost perfectly, and many of the atoms condense into the lowest possible energy state. This state of superfluidity has some unusual side effects: very low liquid density, zero entropy, and zero viscosity, among other things.

Because viscosity is one of the things limiting further capillary action, a superfluid demonstrates a notable increase in capillary action: a virtually unbounded creep. Thus, a Rollin film is a thin film of superfluid liquid helium which will gradually climb the walls of whatever container it resides in, unhindered by gravity.

Baily’s Beads

by James Buckland

Baily’s Beads are the tiny points of light that flicker around the edge of the lunar disk during a solar eclipse, when the Moon passes directly in front of the Sun as viewed from Earth. Because the orbits involved are ellipses, the apparent sizes of the Sun and Moon (that is, how large they appear in the sky) vary. At worst, they’re within 11% of the same radius; at best, they can be exactly the same size. During eclipses, this produces an interesting effect: when the Moon passes in front of the Sun, it is a tiny bit too small, and hints of sunlight trickle in around the edges, filtered through the mountain ranges and craters on the surface of the Moon. This filtering of light, obscured where there is, say, a mountain, and let through where there is a valley, produces a beading effect around the rim of the lunar disk, called Baily’s Beads. Fantastically, this effect was observed and recorded as far back as the 18th century, in a painting by Cosmas Damian Asam, a German painter and architect. This painting, located at Weltenburg Abbey, depicts a Baily’s bead streaming down from the blacked-out sun.

Temperature

by baumsm2a

Temperature exists. While this may seem like an incredibly bold assertion, it’s really very easy to prove.

Like all proofs, this requires a bit of setup: in our case, a few definitions and an axiom. Let’s start with the definitions.

A microstate of a physical system is a complete specification of all the parameters of the system: in a classical gas, this would be the positions and momenta of every particle; in a quantum system, it would be the wavefunction or quantum state; etc.

A macrostate of a physical system is a complete specification of everything we can measure about it: pressure, volume, surface tension, etc.

The fundamental postulate of statistical mechanics says that a system is found with equal probability in each of its accessible microstates, where “accessible” means “consistent with the given macrostate.” In classical systems the state space is continuous and we thus have to turn the probability into a probability density, but this can be easily worked around by dividing the space up into units of h.

On to the proof. Imagine a system with two subsystems, which we will imaginatively call 1 and 2, and which collectively share some energy E. Let us go further, and say that we cannot measure the energy of 1 and 2 directly, only their total, so that E completely specifies a macrostate. 1 and 2 are allowed to exchange thermal energy, but not particles, and we allow them to do so until they are in equilibrium and nothing is changing anymore.  The system has to be in some microstate, so 1 has to have some energy E_1 and 2 has to have some energy E - E_1. Each subsystem also has to have some function that specifies how many microstates are available to it with its energy, which we will call \Omega_1 and \Omega_2. The total number of microstates available to the whole system is thus \int_0^E\Omega_1(E_1)\Omega_2(E-E_1)\, dE_1.

The fundamental postulate of statistical mechanics says that we will find 1+2 in each of these microstates with equal probability. It is thus very probable that we will find the energies of 1 and 2 to be the E_1 and E_2 := E-E_1 that maximize \Omega_1(E_1)\Omega_2(E_2). (In practice, this function has such a sharp peak that it’s basically certain you’ll find those energies, but proving that is considerably trickier.) This is the energy partition where the derivative of this expression with respect to E_1 is 0, i.e.

\frac{d\Omega_1}{dE_1}\Omega_2 - \Omega_1\frac{d\Omega_2}{dE_2} = 0.

Rearranging gives us

\frac{1}{\Omega_1}\frac{d\Omega_1}{dE_1} = \frac{1}{\Omega_2}\frac{d\Omega_2}{dE_2}.

Astute readers will recognize these expressions as derivatives of logarithms:

\frac{d\log\Omega_1}{dE_1} = \frac{d\log\Omega_2}{dE_2}.

We have, then, identified a function of a system, depending only on the state of that system, and having the happy advantage of being measurable, which is the same across any two (and thus any N) systems in thermal equilibrium.  It does not seem too unreasonable to call such a function “temperature.”
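As a numeric illustration (not part of the argument above), take 1 and 2 to be Einstein solids, whose microstate count has the closed form \Omega(N, q) = \binom{q+N-1}{q} for N oscillators holding q energy quanta. The product \Omega_1\Omega_2 peaks exactly where the discrete slopes of \log\Omega match:

from math import comb, log

N1, N2, Q = 300, 200, 100   # oscillators in each solid, total energy quanta

def omega(N, q):
    # Microstate count of an Einstein solid: ways to put q quanta in N oscillators.
    return comb(q + N - 1, q)

products = [omega(N1, q1) * omega(N2, Q - q1) for q1 in range(Q + 1)]
q1_star = max(range(Q + 1), key=lambda q1: products[q1])

# At the peak, the discrete slope of log(omega) is the same for both solids:
slope1 = log(omega(N1, q1_star + 1) / omega(N1, q1_star))
slope2 = log(omega(N2, Q - q1_star + 1) / omega(N2, Q - q1_star))
print(q1_star, slope1, slope2)  # peak at q1 = 60; both slopes nearly log 6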

As a bonus, we’ve also defined entropy along the way, as the logarithm of the number of microstates of a system. (Oh, sure, Boltzmann’s constant is in there too, but it’s just a conversion factor.) Temperature can then be defined via the derivative of entropy with respect to energy.

Enterprising readers with a bit of leisure time may want to show for themselves, as a diverting and illuminating exercise, that this definition of entropy makes sense, i.e., that it is extensive and obeys the 2nd law of thermodynamics.

Edit: a reader pointed out that this doesn’t define temperature, it defines “thermodynamic beta,” which is \frac{1}{kT}. And they are right. But beta is a more fundamental quantity, and hey, what’s a coordinate transformation between friends?

Error Correction and Numerical Masorah

by James Buckland

Error-correcting codes are systems of encoding, compression, or communication that contain failsafes allowing the complete or partial reconstruction of data that has been modified or distorted in transmission. Error-detecting codes merely contain a mechanism for detecting this distortion. The problem of authentication has been solved many times over the years: in analog transmission, by seals, and in digital transmission, by checksums. However, authentication is a subset of a larger set of problems, known as coding theory.

The first error-detecting code dates to around 135 CE, devised by Jewish scribes working to produce copies of the Torah. As the Spanish-Jewish scholar Maimonides (better known as Rambam) wrote in the 12th century,

A Torah scroll missing even one letter is invalid.

Thus it was of critical importance that all new copies of the Torah and its associated writings be identical to the old copies. A system of numerical masorah was developed, containing statistics (the Masorah parva) and applying gematria to entire pages, in order to provide, in effect, a hash function. In this way, the accuracy of a page at a time could easily be checked, and thus of chapters, and, eventually, the entire Torah.

The modern equivalent is the checksum, a hash function which applies essentially the same concept to digital documents: a method of compressing a large document beyond recognition in a repeatable manner, such that two identical documents have identical checksums, but even a tiny error produces an avalanche effect, alerting the reader to an error in transcription.
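The same experiment takes a few lines of Python; SHA-256 here is just one checksum among many, and the verse is an arbitrary test string.

import hashlib

a = "In the beginning God created the heaven and the earth."
b = "In the beginning God created the heaven and the earts."  # one letter off

print(hashlib.sha256(a.encode()).hexdigest())
print(hashlib.sha256(b.encode()).hexdigest())
# The two digests share almost nothing: a single-letter change avalanches
# through the entire checksum, just as a single wrong letter would throw
# off the scribes' recorded counts for the page.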

Some coding systems, such as DNA and RNA, use a redundancy method: for many codons, only the first two bases of the three-base codon need to be accurate in order for it to be translated properly, as the toy example below shows. The massive redundancy of the DNA/RNA system eliminates most of these errors, and those that slip past become either genetic disorders or mutations, the latter with the added benefit of contributing to evolution.
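A toy illustration, using a few fourfold-degenerate entries from the standard genetic code (only a small fragment of the full table):

# For these amino acids, the third base never changes the result,
# so a copying error in that position is silent.
CODONS = {
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "CCU": "Pro", "CCC": "Pro", "CCA": "Pro", "CCG": "Pro",
}

original, mutated = "GGU", "GGA"  # a third-base copying error
print(CODONS[original], CODONS[mutated])  # Gly Gly: the protein is unchanged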

Golomb Rulers

by James Buckland

Golomb rulers are mathematical structures (rulers, really) which almost perfectly demonstrate my earlier article on the oft-delayed values of pure mathematics. A Golomb ruler is a set of marks at integer positions such that no two pairs of marks are the same distance apart; a perfect Golomb ruler measures every integer distance from 1 up to its total length. For example: the Golomb ruler of order four (four marks, at 0, 1, 4, and 6) and length six (in total, six units long) contains, within it, the measurements:

1 (from 0 to 1), 2 (from 4 to 6), 3 (from 1 to 4), 4 (from 0 to 4), 5 (from 1 to 6), and 6 (from 0 to 6),

which are consecutive, optimal, and non-repeating. This is a fascinating mathematical structure — particularly the proof that a perfect (non-repeating) Golomb ruler with more than four marks on it cannot exist.

However, it is the applications, as duly noted in Bill Rankin’s 1993 thesis on the topic (section 1.2), that are of particular interest: radio telecommunications, x-ray crystallography, radio antenna arrays, and anything else which requires the deliberate and efficient placement of integer-valued nodes along a continuum, with every pairwise spacing distinct. In addition, Golomb rulers can be used in information theory to produce efficient error-correcting codes (transmissions with additional information protecting their own integrity): a self-orthogonal structure which reacts predictably to any errors in communication.
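Checking a ruler takes a few lines of Python; the order-5 optimal ruler {0, 1, 4, 9, 11} below illustrates the theorem above, being Golomb but not perfect.

from itertools import combinations

def differences(marks):
    # Every distance the ruler can measure, one entry per pair of marks.
    return sorted(b - a for a, b in combinations(sorted(marks), 2))

def is_golomb(marks):
    # Golomb: no two pairs of marks measure the same distance.
    d = differences(marks)
    return len(d) == len(set(d))

def is_perfect(marks):
    # Perfect: the distances are exactly 1, 2, ..., length, with no gaps.
    return differences(marks) == list(range(1, max(marks) + 1))

print(is_golomb([0, 1, 4, 6]), is_perfect([0, 1, 4, 6]))          # True True
print(is_golomb([0, 1, 4, 9, 11]), is_perfect([0, 1, 4, 9, 11]))  # True False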

In conclusion — Golomb rulers are wicked cool.

Olbers’ Paradox

by James Buckland

If the universe were ageless, eternal, unbounded in all directions, and uniformly filled with stars, every line of sight in the night sky would terminate at a star, and thus the sky would be bright rather than dark; hot rather than cold. Edgar Allan Poe said it best in his 1848 essay Eureka: A Prose Poem:

No astronomical fallacy is more untenable, and none has been more pertinaciously adhered to, than that of the absolute illimitation of the Universe of Stars. The reasons for limitation, as I have already assigned them, a priori, seem to me unanswerable; but, not to speak of these, observation assures us that there is, in numerous directions around us, certainly, if not in all, a positive limit — or, at the very least, affords us no basis whatever for thinking otherwise. Were the succession of stars endless, then the background of the sky would present us an uniform luminosity, like that displayed by the Galaxy — since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all. That this may be so, who shall venture to deny? I maintain, simply, that we have not even the shadow of a reason for believing that it is so.

This is called Olbers’ paradox, and it is a thought experiment to explain the observed darkness of the night sky. If the conditions set above were true, we would see light in all directions; we don’t, so they must not all hold. Olbers’ paradox is often used to support theories of a finite universe, in which a finite number of stars radiate a finite amount of light in all directions; most notably the Big Bang theory, in which the universe has a set date on which it came into existence, and all effects can be traced backwards to it. However, there are a number of reasonable resolutions to Olbers’ paradox: that is, conditions under which the universe might be infinite, unbounded, and eternal, without producing a brilliant night sky.
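The paradox can be put in one integral. With a uniform density n of stars of luminosity L, a thin shell of sky at radius r contributes flux independent of r (the 1/r^2 dimming exactly cancels the r^2 growth in the shell’s volume), so an infinite, eternal universe delivers an infinite total:

F = \int_0^\infty \frac{L}{4\pi r^2}\, n\, 4\pi r^2\, dr = nL \int_0^\infty dr \to \infty.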

The number of stars in the universe may not be infinite. This is a common response: just because the universe is infinite, the number of stars (and, accordingly, the amount of matter) need not be infinite. However, this would imply that the average density of the universe is zero, a finite amount of matter spread over an infinite volume, which is clearly an unphysical result, given the very real non-zero density of the objects around us.

Another objection is that there may be dust particles obscuring the view of the stars. This is another common response — it would make sense that the universe could be infinite but obscured, and that massive dust clouds around each star, or in the path of its light, could absorb and therefore dim the night sky. However, these dust clouds would also absorb heat from their stars, and thus would radiate energy proportional to their absorbed light. Even if they radiate a tiny fraction of their absorbed energy, a tiny fraction of an infinity of stars is still an infinity of light.

The most interesting objection is the notion that the structure of the universe may not be isotropic, that is, uniform in all directions. This would be a consequence of fractal cosmology, the theory that the structure of the universe varies wildly across scales: the density and complexity of the universe, high at small scales, would decrease dramatically at larger scales. However, this argument, a mostly mathematical one, also violates the cosmological principle, a commonly accepted assumption stating that the universe is, generally, the same everywhere, in scale, structure, physical law, etc.; that it is, in the words of William Keel, “playing fair with scientists.” This doesn’t rule out the model of a fractal universe, but it does reduce the support for it.

In the end, we generally accept the size of the universe to be more or less a moot point: modern developments in physics, such as the redshift, have put limits on the size of the observable universe. Thus, the universe may be infinite and infinitely populated with stars, but its physical nature would make it impossible to observe any of them past a certain radius. This radius, a hard limit on the matter in the universe that we can see, is roughly the speed of light times the age of the universe; in other words, any light from matter further away has not had time to reach us yet.

 

Archimedes’ Estimation

by James Buckland


Archimedes, the famed Greek scientist and philosopher, once wrote a research paper, aptly titled The Sand Reckoner, in which he estimated the maximum number of grains of sand that could fit in the universe. This was, of course, totally ridiculous and inaccurate, but, as with most science of antiquity, it was the process, not the result, that produced the most fruitful work, such as hitherto unknown methods of naming and counting extremely large numbers.

The motivation for this work lies in a contemporary philosophy: the assumption that sand was infinite, uncountable. Early counting systems often reached an upper bound at a few thousand (the largest named Greek number was the myriad, 10,000), anything greater being of little use in daily life and therefore easily described as uncountable. This instinct, incredibly, hasn’t died out since Ancient Greece. It is an easily acceptable proposition for the unscientific mind, but apparently Archimedes had other intentions.

The Sand Reckoner, in totality, calculates the number of grains of sand that could fit inside the universe, which was at the time defined roughly as a sphere whose center was the center of the Earth and whose radius was the distance between the centers of the Earth and Sun. Ultimately, he derives an approximation of 10^{63} grains of sand.

Let’s check his work. The number of grains of sand N would be equal to \dfrac{M}{m}, where M is the mass of a sand-filled universe, and m is the mass of a single grain of sand. Thus,

N = \dfrac{M}{m} = \dfrac{V\rho}{m} = \dfrac{\frac{4}{3}\pi r^3 \rho}{m}.

Given the mass of a grain of sand m = 6.48\times10^{-5} \text{ kg}, the radius of the given sphere r = 1\text{ AU}\approx 1.5\times10^8\text{ km}, and the density of sand \rho = 1442\text{ kg/m}^3, we find N \approx 3.15 \times 10^{41} grains of sand, which is not so bad, for an ancient Greek. Much of the disparity between our calculations has to do with our varying definitions of the size of the universe: his calculation also accounted for the sphere enclosing the paths of the distant stars, which he thought were not so much further from Earth than the Sun’s orbit.
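The arithmetic is quick to verify; the grain mass and sand density are the assumed inputs from above.

import math

m   = 6.48e-5  # mass of one grain of sand, kg
r   = 1.5e11   # 1 AU, in meters
rho = 1442     # bulk density of sand, kg/m^3

N = (4 / 3) * math.pi * r**3 * rho / m
print(f"{N:.2e}")  # ~3.15e+41 grains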

There is another oddity: this calculation’s relevance to the Eddington number, a calculation for the number of protons in the universe, which is currently pegged at around 10^{80} protons. Given the number of protons in a grain of sand, we find that Archimedes’ 10^{63} grains of sand in the universe, times an approximate 10^{22} protons in a grain of sand, gives us a number not so unlike the Eddington number, off by a factor of only 10^{5}. This is… a coincidence, but an extraordinary one.