This is the first of a multi-part entry, and while its focus is on basic isoperimetric problems, my hope is that all pieces together shed some light on a simple but interesting idea. At this stage it's more of an observation, really, but at least it's easily stated: *solutions to isoperimetric problems and confluence of various means appear to go hand in hand*.

Work with the 2012 St. Mary’s REU resulted in these thoughts and observations (and thought experiments), and that’s meant as an assignment of credit, not blame. It's not surprising, then, that these posts will refer to some previously defined concepts and results regarding our group’s “LEGO Isoperimetric Problems,” but with this first post I’ll more-or-less be starting from the ground up, appealing to some basic calculus (vector and single variable) and not much else.

Roughly speaking, this first post covers the first half of this idea, namely some classical isoperimetric problems and a little bit of the math surrounding them. Enjoy!

**Isoperimetric Polytopes**

Every triangle encloses some finite amount of area, as does every rectangle and every square, and they all use a certain amount of perimeter to do this. In fact, every *polygon* in the plane encloses some finite amount of area, using some particular amount of perimeter to do so. Similarly, every polyhedron in space encloses some finite amount of volume, and it does so using some particular amount of surface area. Of course, polyhedra use up a certain amount of edge length, too. For instance, a unit cube has 12 edges, each of length 1, utilizing a total edge length of 12 to enclose one unit of volume. (Whereas it uses 6 units of surface area to enclose this same single unit of volume.)

There’s nothing special about edge lengths, surface area, and volume in three dimensions (except for the fact that these concepts can be visualized). In fact, $n$-dimensional polyhedra, heretofore referred to as *polytopes*, enclose their fair share of lengths, areas, volumes and various “hypervolumes,” too. Every $n$-dimensional polytope encloses some fixed amount of $n$-dimensional volume, using various amounts of $k$-dimensional volumes along the way, where $1 \leq k \leq n-1$.

Let’s build our intuition by first considering a tetrahedron, as pictured to the right. Let $\ell$ denote the length of any of its edges, making the total edge length (or 1-dimensional volume) $6\ell$. Moreover, since each “face” is an equilateral triangle with side length $\ell$, the total surface area (or 2-dimensional volume) is $4\cdot\frac{\sqrt{3}}{4}\ell^2 = \sqrt{3}\,\ell^2$. Lastly, the (3-dimensional) volume enclosed by such a tetrahedron is found to be $\frac{\ell^3}{6\sqrt{2}}$. With these formulae at hand, we compute the tetrahedron’s various $k$-dimensional volumes under the assumption that it encloses 1 unit of (3-dimensional) volume. This implies $\ell = (6\sqrt{2})^{1/3} \approx 2.04$. After the dust settles we learn that such a tetrahedron uses 12.24 units of length and 7.21 units of surface area. As noted above, a cube that encloses one unit of volume uses less edge length and less surface area, making it a more efficient shape.
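As a sanity check, here’s a quick numerical sketch of these figures, assuming the standard regular-tetrahedron formulas (total edge length $6\ell$, surface area $\sqrt{3}\,\ell^2$, volume $\ell^3/(6\sqrt{2})$):

```python
import math

# Regular tetrahedron with edge length l:
#   total edge length = 6*l               (6 edges)
#   surface area      = sqrt(3) * l**2    (4 equilateral triangles)
#   volume            = l**3 / (6*sqrt(2))

# Choose l so that the enclosed volume is exactly 1:
l = (6 * math.sqrt(2)) ** (1 / 3)

V1 = 6 * l
V2 = math.sqrt(3) * l ** 2
V3 = l ** 3 / (6 * math.sqrt(2))

print(round(V1, 2))  # 12.24 units of edge length
print(round(V2, 2))  # 7.21 units of surface area
print(round(V3, 2))  # 1.0, the unit of volume we fixed
```

Both numbers indeed exceed the unit cube’s 12 units of edge length and 6 units of surface area.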

Not pictured to the left or right or anywhere (without the use of top-notch hallucinogens) is a 4-dimensional tetrahedron, often called a hypertetrahedron. Even so, formulae for its various volumes exist, too. For instance, this 4-dimensional polytope has 10 edges of equal length $\ell$, making the total edge length $10\ell$. It also contains 10 equilateral triangles and 5 tetrahedra, yielding values of $\frac{5\sqrt{3}}{2}\ell^2$ and $\frac{5\ell^3}{6\sqrt{2}}$ for the 2- and 3-dimensional volumes. Its 4-dimensional volume is found to be $\frac{\sqrt{5}}{96}\ell^4$. (More details on even higher-dimensional tetrahedra and other convex polytopes can be found here and here.) A 4-dimensional cube contains 32 edges, 24 squares, and 8 cubes. Is the 4-cube more efficient than the hypertetrahedron?
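A back-of-the-envelope computation can at least compare edge lengths. The sketch below assumes the standard 4-volume formula $\frac{\sqrt{5}}{96}\ell^4$ for the regular 4-simplex and rescales each shape to enclose one unit of 4-dimensional volume:

```python
import math

# Regular hypertetrahedron (4-simplex) with edge length l:
#   total edge length = 10 * l
#   4-volume          = (sqrt(5) / 96) * l**4   (standard formula, assumed here)
# Solve (sqrt(5)/96) * l**4 = 1 for l, then total up the edges.
l_simplex = (96 / math.sqrt(5)) ** (1 / 4)
edge_simplex = 10 * l_simplex

# A 4-cube with side s has 32 edges and 4-volume s**4; unit 4-volume gives s = 1.
edge_cube = 32.0

print(round(edge_simplex, 2), edge_cube)  # about 25.6 versus 32.0
```

Under these formulas the hypertetrahedron uses noticeably less total edge length than the 4-cube to enclose the same 4-volume.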

These considerations give rise to a natural question: which polytope (if any) most efficiently encloses a given amount of volume? Such questions are called **isoperimetric problems** (for reasons to be explained later), and, as stated, they ambiguously use the words “volume” and “efficiently.” After all, there are different kinds of volumes at play here, and just as many ways to be “efficient.” One way to measure “efficiency,” for instance, is via total edge length, which, when paired with the top-dimensional volume (which I’ll just refer to as “volume” from here on out), gives rise to the following isoperimetric problem:

*Amongst all polytopes enclosing a fixed volume, which one (if any) minimizes total edge length?*

Of course, one can measure efficiency via any of the other lower-dimensional volumes, too, leading to isoperimetric problems for $n$-dimensional polytopes.

Before our words get too clunky and awkward we should settle some notation. Given an arbitrary $n$-dimensional polytope, we’ll label its $k$-dimensional volume as $V_k$, so that $V_1$ denotes its total edge length and $V_n$ its volume. Constraining $V_n$ to be constant and minimizing any of the other $V_k$ (for $1 \leq k \leq n-1$) more or less begs us to compare the unconstrained quantities $V_k$ and $V_n$. A natural way to do this, of course, is to naively search for the polytope that minimizes the fraction $V_k/V_n$. After all, this fraction literally records the amount of $k$-volume per $n$-volume.

Unfortunately, this does not work very well. And by “very well” I actually mean “at all.” Minimizing such a ratio is impossible, it turns out, since expanding your polytope necessarily decreases this fraction. The key observation is that when you rescale a polytope by a factor of $t > 0$, the various volumes rescale differently. Each new $k$-dimensional volume is given by $t^k V_k$. If we start with a polytope and then rescale it by $t$, the ratio for the new polytope is given by $\frac{t^k V_k}{t^n V_n} = t^{k-n}\frac{V_k}{V_n}$. Letting $t \to \infty$, the factor $t^{k-n}$ tends to zero (since $k < n$), and the net result is this: given any polytope, we can make $V_k/V_n$ as small (close to zero) as we like. The problem this points to is quite serious: there is no minimum. In terms of the picture above, the larger we make Domo, the smaller we make his (surface area)-to-volume ratio, and this means that even though both Domo’s skin and brain are expanding, the volume of his gray matter is expanding too fast to be contained.
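The rescaling cheat is easy to see numerically. A minimal sketch, using a cube scaled by a factor $t$ (so edge length is $12t$ and volume is $t^3$) as the test polytope:

```python
# Rescaling a unit cube by t multiplies V1 by t and V3 by t**3, so the
# naive ratio V1/V3 picks up a factor t**(1-3) = 1/t**2 and shrinks to 0.
ratios = []
for t in [1, 10, 100]:
    V1 = 12 * t   # total edge length of the scaled cube
    V3 = t ** 3   # volume of the scaled cube
    ratios.append(V1 / V3)

print(ratios)  # [12.0, 0.12, 0.0012]
```

Each tenfold expansion divides the naive ratio by one hundred, so no minimizer can exist.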

This issue is easily resolved. By taking appropriate powers of our various volumes we form a new fraction that is *scale invariant*. Minimizing

$$\frac{V_k^{1/k}}{V_n^{1/n}},$$

where $1 \leq k \leq n-1$, eliminates this expanding cheat (and keeps Domo safe(r)). Rescaling by $t$ multiplies the numerator and denominator by the same amount, $t$. Moreover, it is precisely the scale-invariance of these ratios that connects them with our aforementioned isoperimetric problems. Let’s say we want to minimize the above expression. Since this quantity is scale-invariant, we can agree to rescale any polytope under consideration, making its $n$-dimensional volume constant (most people choose this constant to be 1). Under this generality-preserving constraint, our problem then reduces to minimizing the quantity $V_k^{1/k}$. Of course, minimizing this expression is the same as minimizing $V_k$, and so our task is the same as minimizing $V_k$ subject to the constraint that $V_n$ is constant. This is precisely our isoperimetric problem. Moreover, all the connecting steps can be reversed, establishing the equivalence of the two questions.
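The scale invariance itself is a one-line check. A sketch in three dimensions with $k=1$, where the normalized ratio takes the form $V_1/V_3^{1/3}$, again using a scaled cube:

```python
# The normalized ratio V1 / V3**(1/3) is unchanged by rescaling: the
# numerator and denominator each pick up the same factor t.
def normalized_ratio(t):
    V1 = 12 * t   # edge length of a cube scaled by t
    V3 = t ** 3   # volume of that cube
    return V1 / V3 ** (1 / 3)

print(normalized_ratio(1), normalized_ratio(50))  # both 12 (up to rounding)
```

No amount of expanding (or shrinking) changes the value, so minimizing this quantity is a fair contest between shapes, not sizes.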

One is obligated to further mention that our polytopes can be rescaled, without loss of generality, so that the *numerator* is constant. The quantity we are left to minimize is then $\frac{1}{V_n^{1/n}}$, which is the same as *maximizing* $V_n$. In other words, our question is also equivalent to this one: amongst all polytopes with fixed $V_k$, which ones maximize $V_n$? Interpreting the lower-dimensional volume $V_k$ as a “perimeter,” one sees why such questions are labeled as “isoperimetric problems.”

**Know Your Limits**

Even though we’ve taken great care in formulating these problems, solutions can *still* fail to exist. When $k = 1$ and $n = 2$, it turns out that, again, there is no minimizer. To be more precise, there is no *polytope* that minimizes the desired ratio.

Using appropriate powers fixed the rescaling cheat, but to understand this new subtlety we ought to take a gander at the two-dimensional setting. Here our volume is area, $V_2$, and we are searching for a polygon that minimizes

$$\frac{V_1}{\sqrt{V_2}}.$$

One can make rigorous the claim that regular polygons are more efficient at enclosing area than irregular ones, but I’ll just take that as a given. For a regular polygon with $m$ sides, the ratio is found to be $2\sqrt{m\tan(\pi/m)}$. A little bit of calculus confirms that this function of $m$ is strictly decreasing, and so we can make our fraction smaller simply by increasing the number of sides. (Exactly how small you can make it depends on how well one recalls limits from calculus: as $m \to \infty$ the ratio decreases to $2\sqrt{\pi}$.) Of course, if you increase the number of sides “forever,” you end up with a circle. The good news is that this produces the answer to the classical isoperimetric problem in the plane (more on this later), but the bad news is that it does *not* produce a polygon. Similar arguments hold in higher dimensions; we can always improve these ratios by increasing the number of $(n-1)$-dimensional faces. The limit of such a process produces an $(n-1)$-sphere, *not* a polytope.
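Numerically, the regular polygons march down toward the circle’s value. A sketch assuming the regular $m$-gon formula $2\sqrt{m\tan(\pi/m)}$ for the ratio (with $m$ an assumed label for the number of sides):

```python
import math

# Isoperimetric ratio perimeter / sqrt(area) for a regular m-gon, which
# works out to 2*sqrt(m * tan(pi/m)); it strictly decreases toward the
# circle's value 2*sqrt(pi) as m grows.
def polygon_ratio(m):
    return 2 * math.sqrt(m * math.tan(math.pi / m))

for m in [3, 4, 6, 12, 1000]:
    print(m, round(polygon_ratio(m), 4))

print(round(2 * math.sqrt(math.pi), 4))  # 3.5449, the circle's ratio
```

The square’s ratio is exactly 4 (perimeter $4s$, area $s^2$), and each extra side edges the value closer to $2\sqrt{\pi} \approx 3.545$ without ever reaching it.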

This problem is so serious, in fact, that resolving it gave rise to an entirely new field of math, which I coarsely summarize as follows:

*Limits of objects often fail to be objects.*

Swiss mathematician Jakob Steiner ran headfirst into this phenomenon when trying to solve 2- and 3-dimensional (classical) isoperimetric problems. He argued that, amongst all closed curves with a fixed perimeter, a circle encloses the most amount of area. First he assumed that there is a solution curve, and then he demonstrated how increasing the curve’s symmetry improves its isoperimetric ratio (thus leading to the most symmetric of all closed curves, the circle). What this proves is that *if* there is a solution curve, it must be the circle. The issue Steiner overlooked, of course, is that *there may not be a solution curve at all*, and so his first step is invalid.

It may seem counterintuitive, but it is possible. Some might argue it's even likely. Given any closed curve, it's conceivable that its isoperimetric ratio can be improved via some well-defined adjustment (à la our now-outlawed rescaling), and this leads directly to the necessity of limits. Continued adjustments force us to consider sequences of closed curves, and, in accordance with the heuristic above, a limit of such a sequence may result in something that is *not* a curve.

This sounds a bit crazy at first, but examples are abundant. For instance, fractals are not curves, but they commonly pop up as limits of them. In a previous post I discussed Weierstrass functions, interesting limits of differentiable functions that are continuous but themselves nowhere differentiable. And here’s another example, one that’s particularly simple: let $R_n$ denote a rectangle of width $\frac{1}{n}$ and height $n$. As $n \to \infty$ the limiting object is a vertical line, not a rectangle (which is intimately connected with an insanely useful device called the Dirac delta function, itself a “generalized function”). Perhaps the most basic example is the familiar limit

$$\lim_{n\to\infty} \frac{1}{n} = 0,$$

which tells us that a limit of positive numbers can be a *non*-positive number.
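The rectangle example can be checked numerically, too. A sketch assuming rectangles of width $1/n$ and height $n$ (so that each encloses area exactly 1):

```python
# Tall, thin rectangles R_n of width 1/n and height n: every one of them
# has area exactly 1, yet the widths shrink to 0, so the "limit" is a
# vertical segment carrying unit area -- not a rectangle at all.
ns = (1, 10, 100, 1000)
widths = [1 / n for n in ns]
areas = [(1 / n) * n for n in ns]
print(widths)  # [1.0, 0.1, 0.01, 0.001]
print(areas)   # [1.0, 1.0, 1.0, 1.0]
```

This is exactly the picture behind the Dirac delta: a sequence of honest functions whose graphs bound unit area, limiting to something that is no longer a function in the classical sense.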

As far as I can tell there are two more-or-less complementary ways to overcome this obstacle. One approach decries the universe as “too big,” while the other argues it's “too small.” For our isoperimetric problems we see that limits sometimes force us to leave the “universe” of polytopes; that is, limits take us to and beyond the limits of our problem. One way to fix this is to use yet a smaller universe, where taking limits is safe(r). Another option is to use a larger universe, one that includes all the possible new objects (like circles and spheres) we didn’t realize we’d need. In terms of my basic example $\lim_{n\to\infty} \frac{1}{n} = 0$, the first option might correspond to saying “we’ll only take limits of positive numbers that are bigger than 1,” whereas the second one suggests a plan akin to “consider not just positive but all non-negative numbers.” In both cases limits are safe.

Both approaches have produced noteworthy results, including ones about polytope isoperimetric problems. In 1965 Zdzislaw Melzak (who, incidentally, also earns points for having the coolest name ever) used a smaller universe when searching for polyhedra that minimize the ratio

$$\frac{V_1}{V_3^{1/3}}.$$

He restricted his search to four-sided polyhedra, and he proved that, amongst these, our friend the tetrahedron uses the least amount of total edge length to enclose a fixed volume.

Many aspects of this problem remain open if one uses the somewhat more generous sub-universe of convex polyhedra, and in 2002 Scott Berger did just that. He expanded his universe to include “generalized convex polyhedra,” allowing him to prove that $\frac{V_1}{V_3^{1/3}}$ can be minimized. More recently, in 2011, friend of the program Ryan Scott proved a more general theorem, showing that $\frac{V_1}{V_n^{1/n}}$ can be minimized in every dimension $n$. One similarly needs to allow for “generalized convex polytopes” to accomplish this, and the results have a different flavor to them than Melzak’s. This is to be expected, of course, since the answers in these cases are new kinds of objects, merely limits of familiar shapes. Berger and Scott are able to make precise how “different” their minimizers are from standard polytopes, but more precision is always desired.

As far as my interests in these problems go, I will decry the universe as too big, restricting our space of general polytopes to the much more manageable world of $n$-dimensional boxes. At least I will do this for much of the remaining entries in this multi-part blog post, and, as I will soon discuss, this reduces our questions to ones of multivariable calculus. That being said, when all is said and done I hope to share a few thoughts on this important concept of “universe enlargement” and its many, Earth-shattering consequences, such as how it redefines volume, shape, and function. For now, though, consider this: as abstract as they might be, isoperimetric problems are very useful. Solving them (even in this “smaller box universe”) can help us smuggle cocaine onto an airplane. Who said math was useless?
