On complexity and information geometry

AIC and BIC, two of the most important model selection criteria, both penalize overfitting by looking at the number of parameters in a model. While this is a good first approximation to quantifying overfitting potential, it is overly simplistic in a few ways.

Here’s a simple example:

ℳ₁ = { y(x) = ax | a ∈ [0, 1] }
ℳ₂ = { y(x) = ax | a ∈ [0, 10] }

ℳ₁ is contained within ℳ₂, so we expect that it should be strictly less complex, with less overfitting potential, than ℳ₂. But both have the same number of parameters! So the difference between the two will be invisible to AIC and BIC (as well as to all other model selection techniques that only reference the number of parameters in the model).
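To make this concrete, here is a minimal sketch (the data, noise level, and helper names are invented for illustration): whenever the best-fit slope lands inside the smaller range, both models reach the same maximum likelihood, so AIC and BIC assign them identical scores.

```python
import numpy as np

# Data generated from y = 0.5 x + noise; the true slope lies inside both
# parameter ranges, so both models reach the same maximum likelihood.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 0.5 * x + rng.normal(0, 0.1, size=x.size)

def aic_bic(a_range, sigma=0.1):
    # MLE of the slope for y = a x (noise level assumed known), clipped to the allowed range
    a_hat = np.clip(np.sum(x * y) / np.sum(x * x), *a_range)
    log_L = np.sum(-0.5 * ((y - a_hat * x) / sigma) ** 2
                   - np.log(sigma * np.sqrt(2 * np.pi)))
    k, N = 1, x.size
    return 2 * k - 2 * log_L, k * np.log(N) - 2 * log_L  # standard AIC, BIC

print(aic_bic((0, 1)))    # a ∈ [0, 1]
print(aic_bic((0, 10)))   # a ∈ [0, 10]: identical AIC and BIC scores
```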

A more subtle approach to quantifying complexity and overfitting potential is given by the Fisher information metric. The idea is to define a geometric space over all possible values of the parameter, where distances in this space correspond to information gaps between different distributions.

Imagine a simple two-parameter model:

ℳ = { P(x | a, b) | a, b ∈ ℝ }

We can talk about the information distance between any particular distribution in this model and the true distribution by referencing the Kullback-Leibler divergence:

DKL = ∫ Ptrue(x) log( Ptrue(x) / P(x | a, b)) dx
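As a concrete, purely illustrative stand-in for the abstract model above, suppose P(x | a, b) is a Normal(a, b) density and the true distribution is Normal(1, 2); the divergence can then be evaluated numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

p_true  = lambda x: norm.pdf(x, loc=1.0, scale=2.0)     # assumed "true" distribution
p_model = lambda x, a, b: norm.pdf(x, loc=a, scale=b)   # the parameterized family

def kl_divergence(a, b):
    # D_KL(P_true || P(. | a, b)), integrated over a wide finite window
    integrand = lambda x: p_true(x) * np.log(p_true(x) / p_model(x, a, b))
    return quad(integrand, -20, 20)[0]

print(kl_divergence(0.0, 1.0))   # positive information gap
print(kl_divergence(1.0, 2.0))   # ~0: this member of the family is the true distribution
```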

The optimal distribution in the space of parameters is the distribution for which this quantity is minimized. We can find this by taking the derivative with respect to the parameters and setting it equal to zero:

∂a [DKL] = ∂a [ ∫ Ptrue(x) log( Ptrue(x) / P(x | a, b)) dx ]
= ∂a [ – ∫ Ptrue(x) log(P(x | a, b)) dx ]
= – ∫ Ptrue(x) ∂a log(P(x | a, b)) dx
= E[ – ∂a log(P(x | a, b)) ]

and likewise for the other parameter:

∂b [DKL] = E[ – ∂b log(P(x | a, b)) ]

We can form a geometric space out of DKL by looking at its second derivatives:

∂aa [DKL] = E[ – ∂aa log(P(x | a, b)) ] = gaa
∂ab [DKL] = E[ – ∂ab log(P(x | a, b)) ] = gab
∂ba [DKL] = E[ – ∂ba log(P(x | a, b)) ] = gba
∂bb [DKL] = E[ – ∂bb log(P(x | a, b)) ] = gbb

These four values make up what is called the Fisher information metric. Now, the quantity

ds² = gaa da² + 2 gab da db + gbb db²

defines the information distance between two infinitesimally close distributions. We now have a geometric space, where each point corresponds to a particular probability distribution, and distances correspond to information gaps. All of the nice geometric properties of this space can be discovered just by studying the metric ds². For instance, the volume of any region of this space is given by:

dV = √(det(g)) da db
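To see this machinery in action, here is a rough numerical sketch, reusing the Normal(a, b) stand-in from the earlier snippet (again an invented example): the metric entries are estimated as finite-difference second derivatives of DKL, and √(det(g)) gives the volume element.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def kl(a, b, a_true=1.0, b_true=2.0):
    # D_KL between the assumed true Normal(1, 2) and the model Normal(a, b)
    f = lambda x: norm.pdf(x, a_true, b_true) * (
        norm.logpdf(x, a_true, b_true) - norm.logpdf(x, a, b))
    return quad(f, -20, 20)[0]

def fisher_metric(a, b, h=1e-2):
    # g_ij = second derivatives of D_KL, estimated by central differences
    d = lambda da, db: kl(a + da, b + db)
    gaa = (d(h, 0) - 2 * d(0, 0) + d(-h, 0)) / h**2
    gbb = (d(0, h) - 2 * d(0, 0) + d(0, -h)) / h**2
    gab = (d(h, h) - d(h, -h) - d(-h, h) + d(-h, -h)) / (4 * h**2)
    return np.array([[gaa, gab], [gab, gbb]])

g = fisher_metric(1.0, 2.0)           # metric evaluated at the true distribution
print(g)                              # ≈ [[0.25, 0], [0, 0.5]]
print(np.sqrt(np.linalg.det(g)))      # the √(det(g)) factor in dV
```

At the true distribution this Hessian coincides with the familiar Fisher information matrix of the Normal(a, b) family, diag(1/b², 2/b²), which is why the printed values come out near 0.25 and 0.5.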

Now, we are able to see the relevance of all of this to the question of model complexity and overfitting potential. Any model corresponds to some region in this space of distributions, and the complexity of the model can be measured by the volume it occupies in the space defined by the Fisher information metric.

This solves the problem that arose with the simple example that we started with. If one model is a subset of another, then the smaller model will be literally enclosed in the parameter space by the larger one. Clearly, then, the volume of the larger model will be greater, so it will be penalized with a higher complexity.

In other words, volumes in the geometric space defined by the Fisher information metric give us a good way to talk about model complexity, in terms of the total information content of the model.

Here’s a quick example:

ℳ₁ = { y(x) = ax + b + U | a ∈ [0, 1], b ∈ [0, 10], U a Gaussian error term }
ℳ₂ = { y(x) = ax + b + U | a ∈ [-1, 1], b ∈ [0, 100], U a Gaussian error term }

Our two models are represented by sets of Gaussians centered around the line ax + b. Both models have the same information geometry, since they differ only in the domains of their parameters:

gaa = ∂aa [DKL] = ⅓
gab = ∂ab [DKL] = ½
gba = ∂ba [DKL] = ½
gbb = ∂bb [DKL] = 1

From this, we can define lengths and volumes in the space:

ds² = ⅓ da² + da db + db²
dV = √(det(g)) da db = da db / 2√3
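Here is a quick numerical check of these entries, assuming x is uniform on [0, 1] and the error term U has unit variance (conventions the post does not pin down, so treat them as assumptions); under them the metric entries reduce to moments of x:

```python
import numpy as np
from scipy.integrate import quad

gaa = quad(lambda x: x * x, 0, 1)[0]   # ∫ x² dx = 1/3
gab = quad(lambda x: x, 0, 1)[0]       # ∫ x dx  = 1/2
gbb = quad(lambda x: 1.0, 0, 1)[0]     # ∫ 1 dx  = 1
g = np.array([[gaa, gab], [gab, gbb]])
print(np.sqrt(np.linalg.det(g)))       # 1/(2√3) ≈ 0.2887, the factor in dV
```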

Now we can explicitly compare the complexities of ℳ₁ and ℳ₂:

C(ℳ₁) = 5/√3 ≈ 2.9
C(ℳ₂) = 100/√3 ≈ 57.7

In the end, we find that C(ℳ₂) > C(ℳ₁) by a factor of 20. This is to be expected; Model 2 has a 20 times larger range of parameters to search, and is thus 20 times more permissive than Model 1.
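Since the volume element is constant here, each complexity is just the area of the parameter box times 1/(2√3); a two-line check:

```python
import numpy as np

dv = 1 / (2 * np.sqrt(3))              # constant volume element factor
C1 = (1 - 0) * (10 - 0) * dv           # a ∈ [0, 1],  b ∈ [0, 10]
C2 = (1 - (-1)) * (100 - 0) * dv       # a ∈ [-1, 1], b ∈ [0, 100]
print(C1, C2, C2 / C1)                 # ≈ 2.89, ≈ 57.7, ratio 20
```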

While the conclusion is fairly obvious here, using information geometry allows you to answer questions that are far from obvious. For example, how would you compare the following two models? (For simplicity, let’s suppose that the data is generated according to the line y(x) = 1, with x ∈ [0, 1].)

ℳ₃ = { y(x) = ax + b | a ∈ [2, 10], b ∈ [0, 2] }
ℳ₄ = { y(x) = aeᵇˣ | a ∈ [2, 10], b ∈ [0, 2] }

They both have two parameters, but express very different hypotheses about the underlying data. Intuitively, ℳ₄ feels more complex, but how do we quantify this? It turns out that ℳ₄ has the following Fisher information metric:

gaa = ∂aa [DKL] = (2b + 1)⁻¹
gab = ∂ab [DKL] = – (2b + 1)⁻²
gba = ∂ba [DKL] = – (2b + 1)⁻²
gbb = ∂bb [DKL] = 4a (2b + 1)⁻³ – 2 (b + 1)⁻³

Thus,

dV = (2b + 1)⁻² (4a + 1 – (2b + 1)³/(b + 1)³)^½ da db

Combining this with the volume element found previously for the linear model (which applies equally to ℳ₃), we find the following:

C(ℳ₃) ≈ 4.62
C(ℳ₄) ≈ 14.92

This tells us that ℳ₄ contains about 3 times as much information as ℳ₃, precisely quantifying our intuition about the relative complexity of these models.
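As a sanity check, both complexities can be recovered by integrating the stated volume elements over their parameter boxes (taking the dV expressions above as given):

```python
import numpy as np
from scipy.integrate import dblquad

# Linear model: constant volume element, so the complexity is just box area / (2√3)
C3 = (10 - 2) * (2 - 0) / (2 * np.sqrt(3))

def dv4(b, a):                        # dblquad passes arguments as (inner, outer) = (b, a)
    return (2*b + 1)**-2 * np.sqrt(4*a + 1 - (2*b + 1)**3 / (b + 1)**3)

C4 = dblquad(dv4, 2, 10, 0, 2)[0]     # a ∈ [2, 10] outer, b ∈ [0, 2] inner
print(C3, C4, C4 / C3)                # ≈ 4.62, ≈ 14.9, a ratio of roughly 3
```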

Formalizing this as a model selection procedure, we get the Fisher information approximation (FIA):

FIA  = – log L + k/2 log(N/2π) + log(Volume in Fisher information space)
BIC  = – log L + k/2 log(N/2π)
AIC  = – log L + k
AICc = – log L + k + k ∙ (k+1)/(N – k – 1)

In each of these criteria, the first term measures goodness of fit, the terms involving k penalize dimensionality, and the final term of FIA penalizes complexity via the volume in Fisher information space.
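Here are the four criteria as small functions, following the formulas exactly as written above (so BIC carries the 2π term used in this post); the log-likelihood, k, and N in the example calls are placeholders, and only the Fisher volumes of the two linear models from the first worked example differ:

```python
import numpy as np

def fia(log_L, k, N, fisher_volume):
    return -log_L + 0.5 * k * np.log(N / (2 * np.pi)) + np.log(fisher_volume)

def bic(log_L, k, N):
    return -log_L + 0.5 * k * np.log(N / (2 * np.pi))

def aic(log_L, k):
    return -log_L + k

def aicc(log_L, k, N):
    return aic(log_L, k) + k * (k + 1) / (N - k - 1)

# Placeholder values: log_L = -10, k = 2, N = 50
print(fia(-10.0, 2, 50, 5 / np.sqrt(3)))     # smaller model, volume 5/√3
print(fia(-10.0, 2, 50, 100 / np.sqrt(3)))   # larger model gets a bigger complexity penalty
```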

