Some half-baked thoughts on moral arbitrariness

I find that there’s often a crucial assumption implicit in discussions of abortion ethics. It comes up whenever the conversation turns to when personhood arises in the development from zygote to fetus to baby. One person claims that some particular moment is the threshold at which personhood arises. The other points out that if we zoom in to that moment and look at extremely nearby moments, we see no particular reason to privilege one over the others. This arbitrariness is taken to be a fatal blow for the account of personhood.

This raises an interesting question. Could the fundamental moral laws be arbitrary? By analogy, think about the laws of physics. The laws of physics contain certain parameters, like the gravitational constant G and mₑ/mₚ, the ratio of the mass of an electron to the mass of a proton, whose values are likely arbitrary to some degree. Even taking into account fine-tuning for life, it’s probable that the fine-tuning isn’t infinitely precise, and that there’ll be some level of arbitrariness in, say, the 1000th decimal place of G.

If the laws of physics can be arbitrary, why not the laws of morality? Perhaps there’s just one arbitrary point at which moral personhood emerges, and there’s not much motivation for that point over any other. How strange you consider this to be will likely depend on your meta-ethical theory. Trivially, if you don’t think there are moral facts at all, then this puzzle never even arises for you. If you think there are moral facts, but that they’re somehow socially or biologically determined, then it’s not so puzzling that there would be arbitrariness in the moral facts. But if you’re a moral realist who believes in an objectively true set of laws governing morality, then this view starts to look strange.

Among moral objectivists, it seems to me that anti-Humeans would not be okay with arbitrariness in the laws of morality. In meta-ethics, anti-Humeans are those who believe that moral facts are intrinsically motivating. This doesn’t mesh well with arbitrariness. If the moral laws are arbitrary, then why should I follow them rather than a neighboring set of laws that would work just as well? Almost by definition, arbitrariness in the moral laws implies a lack of motivation, both for the precise letter of the laws and for living by the laws at all. On the other hand, if one takes a Humean stance on meta-ethics, perhaps arbitrariness is not so puzzling.

Moral arbitrariness might also be troubling to divine command theorists, who believe that the moral rules are set by God. There’s something that seems quite strange about saying that God’s commands are arbitrary to some extent (though to be fair, I say this from a very atheistic perspective, so perhaps my intuitions differ from theists’ here). But if this feels strange, then why shouldn’t it feel just as strange to say that the laws of the physical universe are arbitrary? Presumably God also decided on the precise values of all the physical parameters, and there seems to be arbitrariness there. Is there something particularly troubling about the idea that God’s choice of moral laws is arbitrary?

Moral arbitrariness seems like an inevitable consequence of most, maybe all, moral systems. A rights-based approach has to deal with tradeoffs between different rights: how severe does a breach of bodily autonomy have to be before it’s better to violate a person’s right to life? Any binary account of personhood seems bound to end up drawing the line at some arbitrary point. And gradualist accounts of personhood come with their own type of arbitrariness: why should the curve of increasing personhood over time look like precisely this, rather than some other very similar curve? Virtue-theoretic approaches talk about virtues arising as a happy medium between two vices (e.g. bravery lying between cowardice and foolhardiness), but where is the precise middle point? If one were to completely codify virtue ethics, one would have to say precisely what level of riskiness counts as bravery, and when it tips over into foolhardiness. But there will always be thought experiments that place you just barely on either side of this threshold and reveal no apparent moral difference between one side and the other.

Perhaps the framework that has the least trouble with moral arbitrariness is consequentialism. Something like utilitarianism says that the threshold for when you should choose Act 1 over Act 2 is exactly when the expected net utility produced by Act 1 exceeds the expected net utility produced by Act 2 (where utility is something like “happiness minus sadness”). Unfortunately, I think that this approach runs into problems as well. Happiness is not one-dimensional, and neither is suffering. How do you make different types of happiness commensurable? How many sips of hot chocolate are equivalent to a roller-coaster ride? How many minutes in front of a fire on a cold night are equivalent to the moment of insight when you solve a tough mathematical problem? I find it hard to imagine that non-arbitrary answers to these types of questions exist.

If it’s true that nearly all moral frameworks contain fundamental arbitrariness, as I believe it is, then this becomes a powerful argument against many types of moral realism. If you’re an anti-Humean, then you have to either deny the arbitrariness or explain why arbitrary moral laws would be intrinsically motivating to us. If you think that God created the moral laws, then you have to reckon with the apparent arbitrariness of those laws. Presumably God always makes the optimal choice when one exists; but what does God do when faced with a choice where there is no optimum?

Asymmetry between the infinite and finite

This is well-trod territory for this blog, but I want to give a little reminder of some of the subtleties involving infinity in logic. In particular, there’s a fun asymmetry between the finite and the infinite in terms of the expressive power of first order logic.

Consider the set of all first order sentences that are true in every infinite model. This includes any tautology (like P ∨ ¬ P), as well as some other sentences like ∃x ∃y (x ≠ y) (which says that there are at least two distinct things). Are there any finite models that satisfy this entire set of sentences?

The answer is no: for any finite n, you can form a sentence that says “there are at least n things”. E.g. for n = 4 we say ∃x ∃y ∃z ∃w (x≠y ∧ x≠z ∧ x≠w ∧ y≠z ∧ y≠w ∧ z≠w). This says “there exist an x, y, z, and w that are all distinct”, i.e. there are at least four things, and it generalizes easily to any finite n. Sentences of this form all appear in our set, because each is true in every infinite model. But of course, no finite model satisfies all of them: a finite model contains only so many elements, say N, and the sentence that says “there are at least N+1 things” is false in it.
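
To emphasize how mechanical the construction is, here’s a quick sketch in code (the at_least helper and its output format are mine, purely for illustration):

```python
from itertools import combinations

def at_least(n):
    """Return a first-order sentence saying 'there are at least n things' (assumes n >= 2)."""
    variables = [f"x{i}" for i in range(1, n + 1)]
    # One existential quantifier per variable...
    prefix = " ".join(f"∃{v}" for v in variables)
    # ...followed by pairwise distinctness of all the variables.
    body = " ∧ ".join(f"{a}≠{b}" for a, b in combinations(variables, 2))
    return f"{prefix} ({body})"

print(at_least(4))
# ∃x1 ∃x2 ∃x3 ∃x4 (x1≠x2 ∧ x1≠x3 ∧ x1≠x4 ∧ x2≠x3 ∧ x2≠x4 ∧ x3≠x4)
```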

What about the reverse question? If you consider the set of all first order sentences that are true in every finite model, are there any infinite models that satisfy this set?

The answer is yes! If there weren’t, then you’d have a collection of sentences that suffices to rule out all infinite models while keeping all finite models. And ruling out all and only the infinite models turns out to be a superpower that no sound, complete logic with finitary proofs, like first-order logic, can have. The reasoning, in ultra-condensed form: finitary proofs + soundness + completeness imply compactness, and compactness gives you a quick proof that any theory with arbitrarily large finite models has infinite models (add infinitely many constant symbols to your theory and declare them all pairwise unequal; since every finite subset of these declarations is satisfied by some model, compactness says that the whole lot is satisfied by some model, which is an infinite model of the starting theory).
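
To unpack that parenthetical a little, here’s the standard compactness argument step by step (writing the new constants as c₁, c₂, c₃, …):

1. Start with a theory T that has arbitrarily large finite models, and let T′ be T plus the new constant symbols c₁, c₂, c₃, … and the axioms cᵢ ≠ cⱼ for every i ≠ j.
2. Any finite subset of T′ mentions only finitely many of the cᵢ, so a large enough finite model of T satisfies it (just interpret the mentioned constants as distinct elements).
3. By compactness, T′ as a whole has a model M.
4. M must assign pairwise distinct elements to the infinitely many cᵢ, so M is infinite; ignoring the new constants, M is an infinite model of T.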

This is an interesting asymmetry between the finite and the infinite. If you assert every sentence that’s true in all infinite models, you’ve automatically excluded all finite models. But if you assert every sentence that’s true in all finite models, you haven’t excluded all infinite models. A listener who patiently waited for you to finish saying all those infinitely many sentences might still take you to be talking about some infinite model. (And in fact, by the Löwenheim–Skolem theorem, they could understand you to be talking about a model of any infinite cardinality, which is pretty dramatic.)

TL;DR: There are no finite models that satisfy the set of all first-order sentences that are true in every infinite model. On the other hand, there are infinite models that satisfy the set of all first-order sentences that are true in every finite model.

PA doesn’t trust its provability predicate

Here’s a fun thing: PA has a predicate Provable(n) expressing “the sentence coded by the number n is provable from PA”, and what PA proves about this predicate tracks provability exactly. In other words:

PA ⊢ X iff PA ⊢ Provable(⟦X⟧)

Here we’re using two notational conveniences: ⟦X⟧ means “the number that is the Gödel code of the sentence X”, and PA ⊢ X means “PA proves X”.

Now, if PA can prove something, then it’s true in every model of PA. So perhaps we should expect that for every X, Provable(⟦X⟧) implies that X is true; or more precisely, that Provable(⟦X⟧) → X. But PA can’t prove this! Löb’s theorem says that “Provable(⟦X⟧) → X” is provable only when X is provable. So if PA could prove it for every X, then it could prove every X and would thus be inconsistent!
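
In symbols, Löb’s theorem says:

PA ⊢ (Provable(⟦X⟧) → X) if and only if PA ⊢ X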

In other words, if PA is consistent then the following two things are both true:

(1) PA ⊢ X iff PA ⊢ Provable(⟦X⟧)
(2) PA ⊬ Provable(⟦X⟧) → X (for any X that PA doesn’t already prove)

By the deduction theorem for first order logic, this also implies that PA conjoined with the assumption Provable(⟦X⟧) is not in general sufficient to prove X. To put the weirdness in non-technical terms: (2) says that PA doesn’t trust its provability predicate to ensure truth, while (1) says that what PA can prove about its provability predicate is exactly the true statements about provability.
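
A concrete instance: take X to be 0=1. Since PA proves ¬(0=1), the sentence Provable(⟦0=1⟧) → 0=1 is equivalent over PA to ¬Provable(⟦0=1⟧), which is exactly the consistency statement Con(PA). So (2) applied to X = 0=1 says that PA ⊬ Con(PA): Gödel’s second incompleteness theorem falls right out of Löb’s theorem.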

What’s up with this? The explanation is kinda subtle: PA proves X if and only if PA proves Provable(⟦X⟧). But nonetheless “PA proves X” is not equivalent to Provable(⟦X⟧). “PA proves X” implies that Provable(⟦X⟧) is true in every model, but it’s possible for Provable(⟦X⟧) to be true in some model (just not provable!) despite PA not actually proving X.

To put it another way:

PA ⊢ X
iff
PA ⊢ Provable(⟦X⟧)
iff
Provable(⟦X⟧) is true in every model of PA

But it’s possible for Provable(⟦X⟧) to be true in some models of PA and false in others! “True in every model of PA” is much stronger than “true in at least one model of PA.”

The best way to think about this is probably in terms of nonstandard models. The statement Provable(⟦X⟧) is an existential statement: it says “there exists a number n that encodes a proof of X from PA”. In nonstandard models there are a bunch of nonstandard numbers, and some of these numbers trick our Gödel-encoding scheme, convincing the model that they are actually proofs. So it’s possible in a nonstandard model for Provable(⟦X⟧) to be true and X false. But for the sentences X for which this happens, PA doesn’t actually prove that they’re provable! Why? Because PA can only prove things that are true in every model, and for such X, Provable(⟦X⟧) is false in the standard model.
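
In symbols (just spelling out the standard encoding): Provable(⟦X⟧) abbreviates ∃n Proof(n, ⟦X⟧), where Proof(n, m) is a decidable predicate expressing “n codes a PA-proof of the sentence coded by m”. Nonstandard models contain nonstandard n’s, and nothing stops one of those from satisfying Proof(n, ⟦X⟧) even though no standard proof of X exists.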

This holds quite generally. Suppose you have any theory T that can express arithmetic. If T is consistent, then no provability predicate can have all four of the following properties:

  1. If T ⊢ X then T ⊢ Prov(⟦X⟧)
  2. T ⊢ Prov(⟦X⟧) → Prov(⟦Prov(⟦X⟧)⟧)
  3. T ⊢ Prov(⟦A→B⟧) → (Prov(⟦A⟧) → Prov(⟦B⟧))
  4. T ⊢ Prov(⟦X⟧) → X

Now, (1) through (3) jointly entail Löb’s theorem (the derivation is sketched below). This means that by adding (4) we allow T to prove every X, contradicting its consistency. So no theory of arithmetic can have a provability predicate that it trusts to ensure truth, so long as that provability predicate satisfies the first three desiderata!
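
For the curious, here’s the standard derivation of Löb’s theorem from (1)–(3), in condensed form. The diagonal lemma supplies a sentence L with T ⊢ L ↔ (Prov(⟦L⟧) → X). Then:

(a) T ⊢ Prov(⟦L⟧) → Prov(⟦Prov(⟦L⟧) → X⟧) — apply (1) to the left-to-right half of the diagonal equivalence, then (3)
(b) T ⊢ Prov(⟦L⟧) → (Prov(⟦Prov(⟦L⟧)⟧) → Prov(⟦X⟧)) — apply (3) to the consequent of (a)
(c) T ⊢ Prov(⟦L⟧) → Prov(⟦X⟧) — combine (b) with (2)
(d) If also T ⊢ Prov(⟦X⟧) → X, then T ⊢ Prov(⟦L⟧) → X — chain (c) with (4)
(e) T ⊢ L — (d) is the right-hand side of the diagonal equivalence
(f) T ⊢ Prov(⟦L⟧) — apply (1) to (e)
(g) T ⊢ X — combine (d) and (f)

So once (4) is on the table, every X becomes provable.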

No more than half of anti-sets are big

Background on anti-set theory here.

Suppose that somebody tells you that they have in their hands a model of anti-set theory containing 50 anti-sets. The axiom of anti-infinity gives us some nice immediate restrictions on how large these anti-sets can be. As a quick reminder, the axiom of anti-infinity tells us that every anti-set X must contain another anti-set Y that has a successor that’s not in X. The successor of X is defined to be X ⋃ {X} (an anti-set containing everything in X, plus X itself). One big difference between successors in ZFC and successors in anti-set theory is that anti-sets aren’t always guaranteed to have successors. Another is that anti-sets can be their own successors (this happens exactly when the anti-set contains itself).

Ok, so anti-infinity immediately tells us that there can be no universal set (i.e. a set containing the entire universe). Since every set X must contain a set that has a successor outside X, no set can contain everything, or else it would contain all the successors!

So there are no anti-sets of size 50. How about anti-sets of size 49? It’s recently been proved that there can be at most 25 sets of size 49. And in general, in a universe of size N, no more than half of the anti-sets can be of size N-1. Let’s prove it!

First, a naming convention: in a universe of size N, let’s call any sets of size N-1 big sets, and any other sets small sets. Big sets are as big as an anti-set can be. To satisfy anti-infinity, any set X must contain an element Y that has a successor that’s not in X. We’ll call Y the anti-infinity-satisfier for X.

Now, some observations. Suppose that a big set X has a successor. This successor is either of size N-1 (if X contains itself) or of size N (if not). But no set is of size N! So any big set that has a successor must be its own successor. But this means that big sets cannot be anti-infinity-satisfiers: if a big set X were an anti-infinity-satisfier for some other set Y, then X’s successor could not be in Y. But X’s successor is X itself, and X is in Y!

What this means is that every set must contain a small set as its anti-infinity-satisfier. Now, consider any big set X. X must contain some small set z that serves as an anti-infinity-satisfier for it. But for z to serve as an anti-infinity-satisfier, z must have a successor that’s not in X. What this means is that X must be missing z’s successor.

In other words, every big set must be missing at least one small set’s successor. But remember, big sets are size N-1. This means that they contain every set in the universe besides one. So now we know that the “missing set” that characterizes each big set is always a successor of a small set!

Let’s say that our universe contains k small sets. Then there are at most k successors of small sets. Any big set must be the set of all sets besides one of these successors. Thus by extensionality, there are at most k big sets. Therefore there can never be more big sets than small sets!
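
Spelling out the final count: if there are B big sets and k small sets, then B + k = N and B ≤ k, so 2B ≤ N, i.e. B ≤ N/2.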

And that’s the result: In any finite universe, at most half of the anti-sets are big.

This resolves a conjecture that I posted earlier as an open question: in an N-element model, are there always at least three small sets? The answer is yes. If we had fewer than three small sets, then we’d also have fewer than three big sets (since we just showed there are never more big sets than small sets). This means that our universe would be of size at most four (two small sets and two big sets). But no four-element models exist! QED.