Fractals and Epicycles

There is no bilaterally-symmetrical, nor eccentrically-periodic curve used in any branch of astrophysics or observational astronomy which could not be smoothly plotted as the resultant motion of a point turning within a constellation of epicycles, finite in number, revolving around a fixed deferent.

Norwood Russell Hanson, “The Mathematical Power of Epicyclical Astronomy”

 

A friend recently showed me this image…

[Animation: a Hilbert curve being traced out by a chain of epicycles]

…and thus I was drawn into the world of epicycles and fractals.

Epicycles were first used by the Greeks to reconcile observational data of the motions of the planets with the theory that all bodies orbit the Earth in perfect circles. It was found that epicycles allowed astronomers to retain their belief in perfectly circular orbits, as well as the centrality of Earth. The cost of this, however, was a system with many adjustable parameters (as many parameters as there were epicycles).

There’s a somewhat common trope about adding on endless epicycles to a theory, the idea being that by being overly flexible and accommodating of data you lose epistemic credibility. This happens to fit perfectly with my most recent posts on model selection and overfitting! The epicycle view of the solar system is one that is able to explain virtually any observational data. (There’s a pretty cool reason for this that has to do with the properties of Fourier series, but I won’t go into it.) The cost of this is a massive model with many parameters. The heliocentric model of the solar system, coupled with the Newtonian theory of gravity, turns out to be able to match all the same data with far fewer adjustable parameters. So by all of the model selection criteria we went over, it makes sense to switch over from one to the other.

Of course, it is not the case that we should have been able to tell a priori that an epicycle model of the planets’ motions was a bad idea. “Every planet orbits Earth on at most one epicycle”, for instance, is a perfectly reasonable scientific hypothesis… it just so happened that it didn’t fit the data. And adding epicycles to improve the fit to data is also not bad scientific practice, so long as you aren’t ignoring other equally good models with fewer parameters.

Okay, enough blabbing. On to the pretty pictures! I was fascinated by the Hilbert curve drawn above, so I decided to write up a program of my own that generates custom fractal images from epicycles. Here are some gifs I created for your enjoyment:

Negative doubling of angular velocity

(Each arm rotates in the opposite direction of the previous arm, and at twice its angular velocity. The length of each arm is half that of the previous.)

[Animation: negative doubling]

Trebling of angular velocity

[Animation: trebling]

Negative trebling

[Animation: negative trebling]

Here’s a still frame of the final product for N = 20 epicycles:

[Image: still frame of the negative trebling curve at N = 20]

Quadrupling

[Animation: quadrupling]

ω_n ~ (n+1)·2^n

(or, the Fractal Frog)

[Animation: the Fractal Frog]

ω_n ~ n, r_n ~ 1/n

[Animation: ω_n ~ n, r_n ~ 1/n]

ω_n ~ n, constant r_n

[Animation: “singularity”]

ω_n ~ 2^n, r_n ~ 1/n²

[Animation: “pincers”]

And here’s a still frame of N = 20:

[Image: still frame of the pincers curve at N = 20]

(All animations were built using Processing.py, which I highly recommend for quick and easy construction of visualizations.)
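In case you want to play with these yourself, the core computation is tiny. Here’s a rough sketch of the idea in plain Python (not my actual Processing.py sketch, just an illustration of the negative-doubling rule described above):

```python
import cmath

def epicycle_tip(t, num_arms=20):
    """Tip of the arm chain at time t under the "negative doubling" rule:
    arm k has length (1/2)^k and angular velocity (-2)^k, i.e. it spins
    twice as fast as the previous arm and in the opposite direction."""
    z = 0 + 0j
    for k in range(num_arms):
        z += (0.5 ** k) * cmath.exp(1j * ((-2) ** k) * t)
    return z.real, z.imag

# Sample one full period to get the curve that the animation traces out.
curve = [epicycle_tip(2 * cmath.pi * i / 10000) for i in range(10000)]
```

Swapping in different rules for the arm lengths and angular velocities gives the other curves above.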

The Match Problem

[Image: the matchstick puzzle]

This puzzle has been popping up on my Facebook feed recently. Try to see how high you can get before reading on!

 

 

(…)

 

 

Now, would you be surprised if I told you that with a little interpretive freedom and creativity, you can get numbers larger than the number of atoms in the observable universe? How about if I told you that you can get to numbers large enough that they break set theory? Let me demonstrate for you.

First, we’ll start with the boring solutions.

You can move the bottom match in the 5 up to make it a 9. Then you can rotate the bottom left vertical match in the 0 to make it a nine. This gives 998.

[Image: 998]

Can we do better? Sure! Take the top and bottom matches off the central zero (leaving two 1s) and use them to squeeze in one more 1, giving you 51,118.

[Image: 51,118]

So far we’ve been working purely in base 10. Let’s try to do better by moving to a more exotic number system.

Hexadecimal!

Hexadecimal is a base-16 number system, the digits of which are written as 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F. For instance, we can construct F88:

[Image: F88]

F88 is only 3976, which is not better than our best so far (51,118). If we’re allowed to interpret a square of matches as a D instead of a zero, we can do slightly better (though to do so we have to be allowed to throw out a match, or place it directly on top of another one):

[Image: FD8]

FD8 is 4056, which is still not better than our current best. To get better, we need to get more clever.
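As a quick sanity check on the arithmetic, Python will happily parse hexadecimal for us:

```python
print(int("F88", 16))            # 3976
print(int("FD8", 16))            # 4056
print(int("FD8", 16) > 51118)    # False: still worse than our decimal best
```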

Exponentiation!

Our first strategy is just to shift two matches in the first digit down so as to make it into a 9:

[Image: 9^8]

We can interpret this as 9^8 = 43,046,721. This is our best so far! But we can do better by applying Knuth’s up-arrow notation.

[Image: 5↑18]

This is 5↑18, that is, 5^18, which is almost 3.8 trillion and nearly 90,000 times bigger than 9^8! But we’re not done yet! If we use a caret (^) to represent exponentiation, we can get even higher!

[Image: 51^18]

51^18 is about 5.4 nonillion, a number 31 decimal digits long. We could try to do better by squeezing the caret in between the 5 and the 1 instead, making 5^118 (a number with 83 decimal digits), but this becomes pretty messy and gross.
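If you’re skeptical of these claims, Python’s big integers make them easy to check:

```python
print(9 ** 8)               # 43046721
print(5 ** 18)              # 3814697265625, about 3.8 trillion
print(len(str(51 ** 18)))   # 31 digits (about 5.4 nonillion)
print(len(str(5 ** 118)))   # 83 digits
```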

Alright, so we got up to 83-digit numbers. Can we get any better? Yep!

Tetration

Tetration is the level above exponentiation. The nth tetration of a, written ⁿa, is a tower of exponentials a^(a^(⋯^a)) that is n levels tall.

Just as multiplication is repeated addition and exponentiation is repeated multiplication, tetration is repeated exponentiation! Now we get into numbers whose size is truly impossible to grasp. Let’s shift the top and bottom matches on the middle 0:

[Image: the matches rearranged to read ¹¹5118, the 11th tetration of 5118]

We can write this equivalently as a power tower of exponentials: 5118^(5118^(⋯^5118)), with eleven 5118s in the tower.
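To get a feel for how violently tetration grows, here’s a toy implementation in Python. (Don’t even think about running it on ¹¹5118; the third tetration of 5 already has over two thousand digits.)

```python
def tetrate(a, n):
    """The nth tetration of a: a tower of exponentials a^a^...^a, n levels tall."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

print(tetrate(2, 3))            # 2^(2^2) = 16
print(tetrate(2, 4))            # 2^(2^(2^2)) = 65536
print(tetrate(3, 3))            # 3^27 = 7625597484987
print(len(str(tetrate(5, 3))))  # 5^3125 has 2185 digits
```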

How big is this number? There’s really no meaningful way to describe it. We’ve gone far beyond any quantities that can be made physical sense of, like the number of cubic nanometers in the universe, which is a measly 10^107. But we’re not done yet!

Busy Beavers!

The busy beaver numbers are a sequence of numbers that arise from the properties of Turing machines. Here’s a brief description: the Nth busy beaver number B(N) is the greatest number of ones that an N-state Turing machine that eventually halts can print on a tape that starts out all zeros.

Did you think tetration was big? Well, busy beaver numbers are unimaginably larger. In fact, the busy beaver sequence grows faster than any computable function! There’s a neat proof of this that involves the uncomputability of the halting problem. To skim over it: if we had a computable upper bound on the busy beaver sequence, we could use it to build a deterministic algorithm that solves the halting problem in a finite amount of time, which we know is impossible. And if some computable function F(N) grew faster than B(N), then F would be exactly such a computable upper bound. Thus no computable function can grow as fast as B(N)!
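To make the definition concrete, here’s a tiny Turing machine simulator running the well-known 2-state champion, which prints four 1s before halting, so B(2) = 4. (It also happens to halt after six steps, which will matter in a moment.)

```python
def run_turing_machine(rules, max_steps=10_000):
    """Simulate a 2-symbol Turing machine on an all-zero tape.
    rules maps (state, symbol) -> (write, move, next_state); state "H" halts.
    Returns (number of 1s left on the tape, number of steps taken)."""
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return sum(tape.values()), steps

# The 2-state, 2-symbol busy beaver champion.
champion = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run_turing_machine(champion))   # (4, 6): four 1s printed, six steps taken
```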

We can exploit the absurd growth rate of the busy beaver sequence if we are allowed the interpretive freedom of assuming parentheses, so that B_n = B(n).

[Image: B(118)]

Let’s think for a minute about how large B(118) must be. So far, the only values of n for which B(n) is known are B(1), B(2), B(3), and B(4). After this the values grow out of control. B(6) is known to be at least 10^36534, and the best known lower bounds on B(7) are towers of exponentials far beyond that. B(8) almost certainly beats our current record. And B(118) is unthinkably large.

We can get even higher than the busy beaver numbers with the maximum shifts function S(N), defined as the greatest number of steps that an N-state Turing machine that eventually halts can take before halting. This function is guaranteed to be at least as large as B(N) for all N. Using S(N), and taking the same liberties as above with respect to parentheses, we can get an insanely high value:

[Image: S(1118)]

This is S(1118), and while it’s undoubtedly larger than B(118), there’s no way to get any intuitive grasp on how much larger. But wait, there’s more! We can get even larger by recursively nesting a busy beaver function within a maximum shifts function:

[Image: S(B(9))]

We interpret this as S(B(9)). Why is this larger than S(1118)? Well, B(9) is some enormous number, certainly larger than 1118, and S is an increasing function (an N-state champion can always be padded with an extra, unused state), so S(B(9)) is certainly greater than S(1118).

Now, are we finally done? Have we reached the peak yet? No! It’s time for the largest solution of them all.

The reason that the Busy Beaver numbers and the maximum shifts function are so big comes down to the uncomputability of the halting problem. But if we consider Turing machines that have an oracle for the halting problem (call these meta Turing machines), we get a new meta-halting problem: when do these meta Turing machines halt? From the meta-halting problem comes an associated new sequence of Busy Beaver numbers, which grows uncomputably faster than the original Busy Beaver sequence. Then we can equip Turing machines with an oracle for the meta-halting problem, generating a meta-meta-Busy Beaver sequence.

Thus we get a hierarchy of Busy Beaver functions, which, following the notation used by Scott Aaronson here, can be described with B_n(x). Each B_n grows uncomputably faster than the previous B_{n-1}. There’s a similar hierarchy for the maximum shifts function, and each S_n is an upper bound on the previous S_{n-1}.

So we can exploit this hierarchy to create an unimaginably large number (whose actual value is almost certainly independent of the axioms of set theory): Move around the top and bottom matches on the 0 to give S a subscript of 11. Then we get the 11th-up-in-the-hierarchy maximum shifts function S_11 applied to 118: S_11(118).

[Image: S_11(118)]

It’s a little gross-looking, but I think it works! I challenge anybody to try to come up with a better solution. 🙂

Gödel’s Second Incompleteness Theorem: Explained in Words of Only One Syllable

Somebody recently referred me to a 1994 paper by George Boolos in which he writes out a description of Gödel’s Second Incompleteness Theorem, using only words of one syllable. I love it so much that I’m going to copy the whole thing here in this post. Enjoy!

First of all, when I say “proved”, what I will mean is “proved with the aid of the whole of math”. Now then: two plus two is four, as you well know. And, of course, it can be proved that two plus two is four (proved, that is, with the aid of the whole of math, as I said, though in the case of two plus two, of course we do not need the whole of math to prove that it is four). And, as may not be quite so clear, it can be proved that it can be proved that two plus two is four, as well. And it can be proved that it can be proved that it can be proved that two plus two is four. And so on. In fact, if a claim can be proved, then it can be proved that the claim can be proved. And that too can be proved.

Now, two plus two is not five. And it can be proved that two plus two is not five. And it can be proved that it can be proved that two plus two is not five, and so on.

Thus: it can be proved that two plus two is not five. Can it be proved as well that two plus two is five? It would be a real blow to math, to say the least, if it could. If it could be proved that two plus two is five, then it could be proved that five is not five, and then there would be no claim that could not be proved, and math would be a lot of bunk.

So, we now want to ask, can it be proved that it can’t be proved that two plus two is five? Here’s the shock: no, it can’t. Or, to hedge a bit: if it can be proved that it can’t be proved that two plus two is five, then it can be proved as well that two plus two is five, and math is a lot of bunk. In fact, if math is not a lot of bunk, then no claim of the form “claim X can’t be proved” can be proved.

So, if math is not a lot of bunk, then, though it can’t be proved that two plus two is five, it can’t be proved that it can’t be proved that two plus two is five.

By the way, in case you’d like to know: yes, it can be proved that if it can be proved that it can’t be proved that two plus two is five, then it can be proved that two plus two is five.

George Boolos, Mind, Vol. 103, January 1994, pp. 1 – 3

A simple probability puzzle

In front of you is an urn containing some unknown quantity of balls. These balls are labeled 1, 2, 3, etc. They’ve been jumbled about so as to be in no particular order within the urn. You initially consider it equally likely that the urn contains 1 ball as that it contains 2 balls, 3 balls, and so on, up to 100 balls, which is the maximum capacity of the urn.

Now you reach in to draw out a ball and read the number on it: 34. What is the most likely theory for how many balls the urn contains?

 

 

(…)

 

(Think of an answer before reading on.)

 

(…)

 

 

The answer turns out to be 34!

Hopefully this seems a little unintuitive. Specifically, what seems wrong is that you draw out a ball and then conclude that this is the ball with the largest value on it. Shouldn’t extreme results be unlikely? But remember, the balls were randomly jumbled about inside the urn. So whether the number on the ball you drew is near the beginning, middle, or end of the set of numbers is pretty much irrelevant.

What is relevant is the likelihood: Pr(I drew a ball numbered 34 | There are N balls). And for N ≥ 34, the value of this is simply 1/N.

In general, comparing the theory that there are N balls to the theory that there are M balls, we look at the likelihood ratio: Pr(I drew a ball numbered 34 | There are N balls) / Pr(I drew a ball numbered 34 | There are M balls). This is simply (1/N) / (1/M) = M/N.

Thus we see that our prior odds get updated by a factor that favors smaller values of N, as long as N ≥ 34. The likelihood is zero for N up to 33, is maximized at N = 34, and then decreases steadily as N goes to infinity. Since our prior was spread evenly over N = 1 to 100 and zero everywhere else, our posterior is peaked at 34 and declines up to 100, after which it drops to zero.

One way to make this result seem more intuitive is to realize that while strictly speaking the most probable number of balls in the urn is 34, it’s not that much more probable than 35 or 36. The actual probability of 34 is still quite small; it just happens to be a little bit more probable than its larger neighbors. And indeed, for larger values of the maximum capacity of the urn, the posterior probability of 34 shrinks further, and the absolute gap between it and 35 shrinks along with it (their ratio stays fixed at 35/34).
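Here’s a quick numerical check of all of this, using the setup from the puzzle (a uniform prior over 1 to 100 balls, and ball number 34 drawn):

```python
drawn, capacity = 34, 100

prior = {n: 1 / capacity for n in range(1, capacity + 1)}
likelihood = {n: (1 / n if n >= drawn else 0) for n in prior}
unnormalized = {n: prior[n] * likelihood[n] for n in prior}
total = sum(unnormalized.values())
posterior = {n: p / total for n, p in unnormalized.items()}

print(max(posterior, key=posterior.get))   # 34: the most probable theory
print(posterior[34], posterior[35])        # about 0.027 vs 0.026: barely different
print(posterior[34] / posterior[35])       # 35/34, about 1.03, whatever the capacity
```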

Ramanujan’s infinite radicals

Here’s one of my favorite puzzles.

The Problem

Solve the following equation:

√(1 + 2√(1 + 3√(1 + 4√(1 + ⋯)))) = ?

Background

This problem was initially posed by the most brilliant mathematician of the past century, an eccentric prodigy named Srinivasa Ramanujan. He submitted it to the Journal of the Indian Mathematical Society without a solution, challenging his fellow mathematicians to solve it. Eventually, when nobody could do it, he gave in and revealed the answer, along with a simple proof that probably put them all to shame.

Solution

Ok, so seriously, try this for yourself before reading on. Most people haven’t ever done anything with infinite nested square roots, so this is a good opportunity to be creative and come up with weird ways of solving the problem. Play around with it, give random things names, and see what you can do!

You’ll notice (as I did when I first tried working it out) that solving nested radicals can be really really hard. Things quickly get really ugly. Luckily, the answer is beautifully simple. It’s just 3!

Let’s see why. We start with a very simple identity:

(x + 1)² = 1 + x(x + 2)

Taking the square root, we get

x + 1 = √(1 + x(x + 2))

If we just plug in a few integer values, we see the following:

3 = √(1 + 2·4)
4 = √(1 + 3·5)
5 = √(1 + 4·6)
6 = √(1 + 5·7)
…

What happens if we plug in the value of 4 from the second line to the 4 in the first? We get

3 = √(1 + 2√(1 + 3·5))

Hm, already looking familiar! We’re not done yet – we now plug in the value of 5 from the third line to this new equation, obtaining:

3 = √(1 + 2√(1 + 3√(1 + 4·6)))

And once more:

3 = √(1 + 2√(1 + 3√(1 + 4√(1 + 5·7))))

And so on, off to infinity.

3 = √(1 + 2√(1 + 3√(1 + 4√(1 + ⋯))))

And that’s the proof!
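If you don’t trust the algebra, you can also check numerically that truncations of the radical creep up toward 3. A quick sketch:

```python
import math

def truncated_radical(depth):
    """The nested radical sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ...))) with
    everything below level `depth` replaced by zero."""
    value = 0.0
    for k in range(depth, 1, -1):   # work from the innermost radical outward
        value = math.sqrt(1 + k * value)
    return value

for depth in (3, 5, 10, 20, 40):
    print(depth, truncated_radical(depth))   # 1.73..., 2.56..., 2.98..., and onward toward 3
```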

Generalizing

Now, Ramanujan actually proved a much more general result than this. I’ll prove a less general result than his, but still more general than what we just saw.

The generalization comes from noticing that our initial identity didn’t have to involve adding 1 in (x + 1)². We could have added any number to x, squared it, and then gotten a brand new general theorem about a whole class of continued root problems.

Thus we start with the identity

(n + m)² = m² + n(n + 2m)

or…

n + m = √(m² + n(n + 2m))

Now we perform substitutions exactly as we did above:

n + m = √(m² + n√(m² + (n + m)(n + 3m)))
      = √(m² + n√(m² + (n + m)√(m² + (n + 2m)(n + 4m))))
      = ⋯

In the end, what we get is a solution for a whole class of initially very difficult seeming problems:

n + m = √(m² + n√(m² + (n + m)√(m² + (n + 2m)√(m² + ⋯))))

And if we plug in m = 1 and n = 2, we get our initial problem!
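And here’s the same kind of numerical sanity check for the general formula (again just truncating the infinite radical at a finite depth; the output should come out to n + m):

```python
import math

def general_radical(m, n, depth):
    """sqrt(m^2 + n*sqrt(m^2 + (n+m)*sqrt(m^2 + (n+2m)*sqrt(...)))),
    truncated after `depth` levels."""
    value = 0.0
    for k in range(depth - 1, -1, -1):   # coefficients n + k*m, innermost first
        value = math.sqrt(m * m + (n + k * m) * value)
    return value

print(general_radical(1, 2, 60))   # about 3: the original problem
print(general_radical(3, 5, 60))   # about 8, i.e. n + m
```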

Quantum Computing in a Nutshell

You can think of classical bits as little switches. They have two possible states, ON and OFF, and are always in one or the other.

[Image: classical bits as switches that are either ON or OFF]

Then we have logic gates, which are like operations we perform on the switches. If we have one switch, we might flip it from whatever state it’s in to the opposite state. If we have two, we might flip the second switch conditional on the first switch being ON, and otherwise do nothing. And so on.

[Image: logic gates acting on switches]

Put together a bunch of these bits and gates and you get an algorithm. If you arrange them cleverly enough, you end up getting interesting computations, like adding binary numbers, factoring numbers into primes, and playing Skyrim.

Quantum computing is a fundamentally different beast. To start with, the fundamental units of quantum computation are non-deterministic. We call them qubits, for quantum bits. While a qubit can, like a classical bit, be in the state ON and the state OFF, it can also be in other more exotic states:

[Image: a qubit in a more exotic state, partly ON and partly OFF]

Important! Just like a classical bit, a qubit will only ever be observed in two possible states, ON and OFF. We never observe a qubit being in the state “36% ON and 64% OFF.” But if we prepare a qubit in that particular state, and then measure it over and over again, we could find that 36% of the time it is ON and 64% of the time it is OFF. That is, the state of a qubit specifies the probability distribution over possible observed outcomes.
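Here’s a toy simulation of that idea. (This is just classical sampling from the distribution that the state specifies, not a simulation of any quantum dynamics.)

```python
import random

def measure_many(p_on, shots=10_000):
    """Measure a qubit whose state says 'probability p_on of ON', many times over."""
    on_count = sum(random.random() < p_on for _ in range(shots))
    return on_count / shots

print(measure_many(0.36))   # roughly 0.36: ON about 36% of the time, OFF the other 64%
```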

Let’s compare the complexity of bits to the complexity of qubits.

To specify the state of a classical bit, we only need to answer a single yes or no question: Is it ON?

To specify the state of a quantum bit, we need to answer a more complicated question: What is the probability that it is ON?

Since probabilities are real numbers, answering the second question requires infinitely more information than answering the first. (In general, specifying a real number to perfect precision requires answering an unbounded number of yes or no questions.) The implication of this is that in some sense, the state of a qubit contains infinitely more information than the state of a classical bit. The trick of quantum computing is to exploit this enhanced information capacity in order to build quantum algorithms that beat out the best classical algorithms.

Moving on! The set of all states that a qubit can be in is the set of all probability distributions over the outcomes ON and OFF.

[Image: the set of probability distributions over ON and OFF]

Now, we could describe a simple probabilistic extension to classical computation, where logic gates transform probability distributions into different probability distributions, and algorithms neatly combine these gates to get useful computations, and be done with it. But it turns out that things are a slight bit more complicated than this. Having described this probabilistic extension, we would still not have quantum computing.

In fact, the states of qubits are specified not by probability distributions over the observations ON and OFF, but by amplitude distributions over these observations.

What are these amplitudes? An amplitude is simply a thing that you square to get the probability of the observation. Amplitudes have a wider range of possible values than probabilities. While a probability has to be a real number between 0 and 1, an amplitude can be negative (since the square of a negative number is still positive). In fact, amplitudes can even be complex numbers, in which case the probability is just the square of the amplitude’s distance from zero in the complex plane.

[Image: amplitudes as points in the complex plane]
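In code, “squaring the amplitude” just means taking the squared magnitude, and it works the same whether the amplitude is positive, negative, or complex:

```python
amplitudes = [0.6, -0.6, 0.6j, 0.6 + 0.8j]
for a in amplitudes:
    print(a, abs(a) ** 2)   # 0.36, 0.36, 0.36, 1.0
```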

Now, why does it matter that the set of quantum states be described by amplitudes rather than probabilities? After all, the amplitudes will always just be converted back to probabilities when we observe the qubits, so what difference does it make if the probability came from a negative amplitude or a positive one? The answer to this question is difficult, but it comes down to this:

For some reason, it appears that on the quantum level, the universe calculates the states of particles in terms of these complex numbers we call amplitudes, not probabilities. This was the discovery of experimentalists in the 1900s that looked at things like the double slit experiment, the Stern-Gerlach experiment, and so on. If you try to just analyze everything in terms of probabilities instead of amplitudes, you will get the wrong answer.

We’ll write the state of a qubit that has an amplitude alpha of being ON and an amplitude beta of being OFF as follows:

[Image: the state written as α|ON⟩ + β|OFF⟩]

It’s useful to have a few different ways of visualizing the state of a qubit.

[Image: a few ways of visualizing the state of a qubit]

Now, quantum gates are transformations that take amplitude distributions to different amplitude distributions.

[Image: a quantum gate taking one amplitude distribution to another]

The set of physically realizable quantum gates is the set of all transformations that take normalized states (|α|² + |β|² = 1) to other normalized states. Some of these gates are given special names like the Hadamard gate or the CNOT gate and crop up all the time. And just like with classical states, you can put together these gates in clever ways to construct algorithms that do interesting computations for you.
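Here’s a minimal sketch of all of this in Python: a qubit state is just a pair of complex amplitudes, a gate is a transformation that preserves the normalization, and measurement probabilities are squared magnitudes. (This is a plain illustrative simulation, not a real quantum computing library.)

```python
import math

# A qubit state: (amplitude of OFF, amplitude of ON).
off = (1, 0)

def hadamard(state):
    """Apply the Hadamard gate, which sends OFF to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return abs(a) ** 2, abs(b) ** 2   # (P(OFF), P(ON)); these always sum to 1

plus = hadamard(off)
print(probabilities(plus))             # roughly (0.5, 0.5): a fifty-fifty qubit
print(probabilities(hadamard(plus)))   # roughly (1.0, 0.0): two Hadamards undo each other
```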

So, that’s quantum computing!

Now, the first thing to notice is that the set of all things that quantum computers can do must contain the set of all things that classical computers can do. Why? Because classical computers are just a special case of quantum computers where states are only allowed to be either ON or OFF. Every classical logic gate has a parallel quantum logic gate, and so every classical algorithm is a quantum algorithm in disguise.

But quantum computers can also do more than classical computers. In the next posts, I’ll give two examples of quantum algorithms we’ve discovered that solve problems at speeds that are classically impossible: the Deutsch-Jozsa algorithm and Grover’s algorithm. Stay tuned!

Short and sweet proof of the f(xy) = f(x) + f(y) logarithmic property

If you want a continuous function f(x) from the reals to the reals that has the property that for all real x and y, f(xy) = f(x) + f(y), then this function must take the form f(x) = k log(x) for some real k.

A proof of this just popped into my head in the shower. (As always with shower-proofs, it was slightly wrong, but I worked it out and got it right after coming out).

I haven’t seen it anywhere before, and it’s a lot simpler than previous proofs that I’ve encountered.

Here goes:

f(xy) = f(x) + f(y)

differentiate w.r.t. x…
f'(xy) y = f'(x)

differentiate w.r.t. y…
f''(xy) xy + f'(xy) = 0

rearrange, and rename xy to z…
f''(z) = -f'(z)/z

solve for f'(z) with standard 1st order DE techniques…
df'/f' = -dz/z
log(f') = -log(z) + constant
f' = constant/z

integrate to get f…
f(z) = k log(z) + C

finally, plug x = y = 1 into the original equation: f(1) = f(1) + f(1), so f(1) = 0 and the constant C must vanish…
f(z) = k log(z) for some constant k

And that’s the whole proof! (One caveat: this argument assumes f is differentiable rather than just continuous; bridging that gap takes a bit more work, but the differentiable case captures the main idea.)
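As a quick sanity check, it’s easy to verify numerically that k·log satisfies the functional equation for any constant k:

```python
import math, random

k = 2.7   # any constant works

def f(x):
    return k * math.log(x)

for _ in range(5):
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    assert math.isclose(f(x * y), f(x) + f(y))
print("f(xy) = f(x) + f(y) checks out")
```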

As for why this is interesting to me… the equation f(xy) = f(x) + f(y) is very easy to arrive at in constructing functions with desirable features. In words, it means that you want the function’s outputs to be additive when the inputs are multiplicative.

One example of this, which I’ve written about before, is formally quantifying our intuitive notion of surprise. We formalize surprise by asking the question: How surprised should you be if you observe an event that you thought had a probability P? In other words, we treat surprise as a function that takes in a probability and returns a scalar value.

We can lay down a few intuitive desiderata for our formalization of surprise, and one such desideratum is that for independent events E and F, our surprise at them both happening should just be the sum of the surprise at each one individually. In other words, we want surprise to be additive for independent events E and F.

But if E and F are independent, then the joint probability P(E, F) is just the product of the individual probabilities: P(E, F) = P(E) P(F). In other words, we want our outputs to be additive, when our inputs are multiplicative!

This automatically gives us that the form of our surprise function must be k log(z). To spell it out explicitly…

Desideratum: Surprise(P(E, F)) = Surprise(P(E)) + Surprise(P(F))

But P(E,F) = P(E) P(F), so…
Surprise(P(E) P(F)) = Surprise(P(E)) + Surprise(P(F))

Renaming P(E) to x and P(F) to y…
Surprise(xy) = Surprise(x) + Surprise(y)

Thus, by the above proof…
Surprise(x) = k log(x) for some constant k
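For surprise we want k to be negative, so that less probable events are more surprising; taking k = -1/log(2) measures surprise in bits. Here’s a quick check that it really is additive for independent events:

```python
import math

def surprise(p):
    return -math.log2(p)   # k*log(p) with k = -1/log(2), i.e. surprise in bits

p_e, p_f = 0.25, 0.1   # probabilities of two independent events
print(surprise(p_e * p_f))             # 5.32...
print(surprise(p_e) + surprise(p_f))   # the same 5.32... (up to floating point)
```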

That’s a pretty strong constraint for some fairly weak inputs!

That’s basically why I find this interesting: it’s a strong constraint that comes out of an intuitively weak condition.