Fun with Akaike

The Akaike information criterion is a metric for model quality that arises naturally from the notion of relative entropy. It balances predictive accuracy against model complexity, encoding a formal version of Occam’s razor and guarding against overfitting. I’m just now learning how to use this metric, and will present a simple example that shows off a lot of the features of this framework.

Suppose that we have two coins. For each coin, we can ask for the probability that it lands heads. Call these probabilities p and q.

[Figure: Akaike.png]

Two classes of models are (1) those that say that p = q and (2) those that say that p ≠ q. The first class of models is simpler in an important sense – it can be characterized by a single parameter p. The second class, on the other hand, requires two separate parameters, one for each coin.

The number of parameters (k) used by our model is one way to measure model complexity. But of course, we can also test our models by getting experimental data. That is, we can flip each coin a bunch of times, record the results, and see how they favor one model over another.

One common way of quantifying the empirical success of a given model is to look at the maximum value of its likelihood function L. This is the function that tells you how likely your data was, given a particular model. If Model 2, at its best, does a better job of predicting the data than Model 1 at its best, then this should count in favor of Model 2.

So how do we combine these pieces of information – k (the number of parameters in a model) and L (the predictive success of the model)? Akaike’s criterion says to look at the following metric:

AIC = 2k – 2 ln(L)

The smaller the value of this quantity, the better the model.
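To make the bookkeeping concrete, here is a minimal sketch in Python; the function name and the example log-likelihoods are mine, purely for illustration:

```python
def aic(k, max_log_likelihood):
    """Akaike information criterion: 2k - 2 ln(L_max). Lower is better."""
    return 2 * k - 2 * max_log_likelihood

# Hypothetical comparison: a 1-parameter model whose best fit has ln(L) = -12.3,
# versus a 2-parameter model whose best fit has ln(L) = -11.9.
print(aic(1, -12.3))  # 26.6
print(aic(2, -11.9))  # 27.8 -> the simpler model wins despite the slightly worse fit
```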

So let’s apply this on an arbitrary data set:

n1 = number of times coin 1 landed heads
n2 = number of times coin 1 landed tails
m1 = number of times coin 2 landed heads
m2 = number of times coin 2 landed tails

For convenience, we’ll also write N = n1 + n2 for the total number of flips of coin 1, and M = m1 + m2 for the total number of flips of coin 2.

First, let’s look at how Model 1 (the one that says that the two coins have an equal chance of landing heads) does on this data. On this model, every toss of either coin lands heads with probability p and tails with probability 1 – p.

L1 = C(N, n1) C(M, m1) p^(n1+m1) (1 – p)^(n2+m2)

The two choose functions C(N, n1) and C(M, m1) are there to ensure that this function is properly normalized. Intuitively, they arise from the fact that any given number of heads could have come about in many possible orderings of the tosses, and all of these orderings must be summed over.
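As a quick sanity check on this formula, here it is written out in Python, with math.comb playing the role of the choose functions; the counts in the example are made up for illustration:

```python
import math

def likelihood_model1(p, n1, n2, m1, m2):
    """P(data | both coins share the same heads probability p)."""
    N, M = n1 + n2, m1 + m2
    return (math.comb(N, n1) * math.comb(M, m1)
            * p ** (n1 + m1) * (1 - p) ** (n2 + m2))

# Made-up counts: coin 1 lands 7 heads / 3 tails, coin 2 lands 4 heads / 6 tails.
print(likelihood_model1(0.55, 7, 3, 4, 6))
```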

This likelihood reaches its peak at the following value of p:

p = (n1 + m1) / (N + M)
L1,max = C(N, n1) C(M, m1) (n1 + m1)^(n1+m1) (n2 + m2)^(n2+m2) / (N + M)^(N+M)
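For completeness, here is the short calculus behind that maximizing value of p; the choose functions don’t depend on p, so they drop out when we differentiate the log-likelihood:

```latex
\frac{d}{dp}\ln L_1
  = \frac{d}{dp}\left[(n_1+m_1)\ln p + (n_2+m_2)\ln(1-p)\right]
  = \frac{n_1+m_1}{p} - \frac{n_2+m_2}{1-p} = 0
  \quad\Longrightarrow\quad p = \frac{n_1+m_1}{N+M}
```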

Applying Stirling’s approximation to L1,max, this becomes:

ln(L1,max) ~ -ln(F) – ½ ln(G)
where F = (N + M)^(N+M) / (N^N M^M) · n1^n1 m1^m1 / (n1 + m1)^(n1+m1) · n2^n2 m2^m2 / (n2 + m2)^(n2+m2)
and G = n1 n2 m1 m2 / (N M)
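The Stirling step itself isn’t spelled out here, so here is a sketch of it for one of the choose functions (constant factors of 2π are dropped, which is why none appear in G):

```latex
\ln C(N, n_1) = \ln N! - \ln n_1! - \ln n_2!
  \approx N\ln N - n_1\ln n_1 - n_2\ln n_2 - \tfrac{1}{2}\ln\frac{n_1 n_2}{N}
```

Adding the analogous expression for C(M, m1) to the logs of the remaining factors of L1,max and regrouping the terms gives the –ln(F) – ½ ln(G) form above.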

With this, our Akaike information criterion for Model 1 tells us:

AIC1 = 2 + 2 ln(F) + ln(G)

Moving on to Model 2, we now have two different parameters p and q to vary. The likelihood of our data given these two parameters is:

L2 = C(N, n1) C(M, m1) p^n1 (1 – p)^n2 q^m1 (1 – q)^m2

The values of p and q that make this data most likely are:

p = n1 / N
q = m1 / M
So L2,max = C(N, n1) C(M, m1) n1^n1 n2^n2 m1^m1 m2^m2 / (N^N M^M)
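Since we have closed-form maximum-likelihood fits for both models, we can also compute the two AIC values exactly, without Stirling’s approximation. A small Python sketch (the function names and example counts are mine):

```python
import math

def log_lmax_model1(n1, n2, m1, m2):
    """ln of the maximized Model 1 likelihood, using the shared bias p = (n1+m1)/(N+M)."""
    N, M = n1 + n2, m1 + m2
    p = (n1 + m1) / (N + M)
    return (math.log(math.comb(N, n1)) + math.log(math.comb(M, m1))
            + (n1 + m1) * math.log(p) + (n2 + m2) * math.log(1 - p))

def log_lmax_model2(n1, n2, m1, m2):
    """ln of the maximized Model 2 likelihood, using separate biases n1/N and m1/M."""
    N, M = n1 + n2, m1 + m2
    p, q = n1 / N, m1 / M
    return (math.log(math.comb(N, n1)) + math.log(math.comb(M, m1))
            + n1 * math.log(p) + n2 * math.log(1 - p)
            + m1 * math.log(q) + m2 * math.log(1 - q))

# Made-up counts again: 7 heads / 3 tails for coin 1, 4 heads / 6 tails for coin 2.
counts = (7, 3, 4, 6)
print(2 * 1 - 2 * log_lmax_model1(*counts))  # AIC for Model 1
print(2 * 2 - 2 * log_lmax_model2(*counts))  # AIC for Model 2
```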

And again, using Stirling’s approximation, we get:

ln(L2,max) ~ –½ ln(G)
So AIC2 = 4 + ln(G)

We now just need to compare these two AICs to see which model is preferable for a given set of data:

AIC2 = 4 + ln(G)
AIC1 = 2 + 2 ln(F) + ln(G)

AIC2 – AIC1 = 2 – 2 ln(F)
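One nice feature of this difference: the choose functions and the ln(G) term cancel, so 2 – 2 ln(F) is the exact AIC gap, not just a large-N approximation. Here is a sketch that evaluates ln(F) directly from its definition (with arbitrary made-up counts):

```python
import math

def ln_F(n1, n2, m1, m2):
    """ln(F) as defined above, written out term by term."""
    N, M = n1 + n2, m1 + m2
    T = N + M
    return (T * math.log(T) - N * math.log(N) - M * math.log(M)
            + n1 * math.log(n1) + m1 * math.log(m1) - (n1 + m1) * math.log(n1 + m1)
            + n2 * math.log(n2) + m2 * math.log(m2) - (n2 + m2) * math.log(n2 + m2))

# AIC2 - AIC1 for the same made-up counts as before (7/3 and 4/6):
print(2 - 2 * ln_F(7, 3, 4, 6))  # positive -> Model 1 preferred, negative -> Model 2
```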

Let’s look at two telling cases. In the first, coin 1 and coin 2 land heads in the same proportion of their tosses, and, for simplicity, both coins are tossed the same number of times. Formally:

Case 1: N = M, n1 = m1, n2 = m2

In this case, F becomes 1, so lnF becomes 0.

AIC2 – AIC1 = 2 > 0
So Model 1 is preferable.

This makes sense! After all, if the two coins are equally likely to land heads, then our two models do equally well at predicting the data. But Model 1 is simpler, involving only a single parameter, so it is preferable. AIC gives us a precise numerical criterion by which to judge how preferable Model 1 is!

Okay, now let’s consider a case where coin 1 lands heads much more often than coin 2.

Case 2: N = M, n1 = 2m1, 2n2 = m2

Together with N = M, these constraints imply n1 = m2 = 2N/3 and n2 = m1 = N/3: coin 1 lands heads two-thirds of the time, while coin 2 lands heads one-third of the time. Going through the algebra, we find:

F = 4^N (4/27)^(m1+n2) ~ 1.12^N
So ln(F) ~ N ln(1.12) ~ 0.11 N
So AIC2 – AIC1 ~ 2 – 0.22 N

This quantity is greater than zero when N is less than about 9, and becomes negative for all larger values.

Which means that for Case 2, small enough data sets still allow us to go with Model 1, but as we get more data, the predictive accuracy of Model 2 eventually wins out!
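Here is a quick numerical check of that crossover. It scans over N with the Case 2 frequencies baked in (coin 1 landing heads 2/3 of the time, coin 2 landing heads 1/3 of the time), treating the counts as real-valued so that every N is allowed:

```python
import math

def delta_aic(n1, n2, m1, m2):
    """AIC2 - AIC1 = 2 - 2 ln(F); positive favors Model 1 (shared bias)."""
    N, M = n1 + n2, m1 + m2
    T = N + M
    ln_f = (T * math.log(T) - N * math.log(N) - M * math.log(M)
            + n1 * math.log(n1) + m1 * math.log(m1) - (n1 + m1) * math.log(n1 + m1)
            + n2 * math.log(n2) + m2 * math.log(m2) - (n2 + m2) * math.log(n2 + m2))
    return 2 - 2 * ln_f

for N in (3, 6, 9, 12):
    # Case 2 proportions: n1 = 2N/3, n2 = N/3, m1 = N/3, m2 = 2N/3
    print(N, round(delta_aic(2 * N / 3, N / 3, N / 3, 2 * N / 3), 2))
# The sign flips between N = 6 and N = 9; the exact crossover is at N ≈ 8.8.
```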

It’s worth pointing out here that the Akaike information criterion is an approximation to the technique of minimizing relative entropy, and this approximation assumes large data sets. Given this, it’s not clear how reliable our cutoff of roughly N = 9 is, since it sits squarely in the small-data regime.

Let’s do one last thing with our simple models.

As our two coins become more and more similar in how often they land heads, we expect Model 1 to hold out longer before Model 2 ultimately wins out. Let’s calculate a general relationship between how similar the two coins’ heads frequencies are and how much data it takes for Model 1 to lose out.

Case 3: N = M, n1 = r·m1, r·n2 = m2

r here is the ratio p/q – the frequency of heads for coin 1 over the frequency of heads for coin 2. Working through the algebra (sketched below), we find:

ln(F) = 2N ln( 2 r^(r/(r+1)) / (r + 1) )
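Here is the algebra that got skipped. With N = M, the constraints n1 = r·m1 and r·n2 = m2 force n2 = m1 = N/(r + 1) and n1 = m2 = rN/(r + 1). Substituting these into F:

```latex
F = \frac{(2N)^{2N}}{N^{N} N^{N}}
    \left[\frac{\left(\tfrac{rN}{r+1}\right)^{\tfrac{rN}{r+1}}
                \left(\tfrac{N}{r+1}\right)^{\tfrac{N}{r+1}}}{N^{N}}\right]^{2}
  = 4^{N}\left[\frac{r^{\,r/(r+1)}}{r+1}\right]^{2N}
  = \left[\frac{2\,r^{\,r/(r+1)}}{r+1}\right]^{2N}
```

Setting r = 2 recovers the Case 2 result F ≈ 1.12^N, which is a useful consistency check.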

Model 2 becomes preferable to Model 1 when AIC2 becomes smaller than AIC1, so we can find the critical point by setting ∆AIC = 2 – 2 ln(F) = 0:

ln(F) = 2N ln( 2 r^(r/(r+1)) / (r + 1) ) = 1
N = 1 / ( 2 ln( 2 r^(r/(r+1)) / (r + 1) ) )

We can see that as r goes to 1, N goes to ∞. We can see how quickly this happens by doing some asymptotics. Writing r = 1 + ε and expanding for small ε:

ln( 2 r^(r/(r+1)) / (r + 1) ) ≈ ε²/8
So N ≈ 4/ε², or equivalently ε ≈ 2/√N

N grows like the inverse square of ε as r approaches 1. This gives us a very rough idea of how similar our coins must be for us to treat them as essentially identical. Evaluating the formula for N at a few values of r gives a table:

r        N
2        ≈ 9
1.1      ≈ 440
1.01     ≈ 40,000
1.001    ≈ 4,000,000
1.0001   ≈ 400,000,000
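The table can be reproduced with a couple of lines of Python; critical_N just evaluates the formula N = 1 / (2 ln(2 r^(r/(r+1)) / (r + 1))):

```python
import math

def critical_N(r):
    """Flips per coin at which AIC2 - AIC1 = 2 - 2 ln(F) crosses zero."""
    return 1 / (2 * math.log(2 * r ** (r / (r + 1)) / (r + 1)))

for r in (2, 1.1, 1.01, 1.001, 1.0001):
    print(r, round(critical_N(r)))
# N grows roughly 100-fold each time r - 1 shrinks by a factor of 10,
# matching the N ≈ 4/ε² scaling.
```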
