Sam Harris puzzles me sometimes.
I recently saw a great video explaining Hume’s is-ought distinction and its relation to the orthogonality thesis in artificial intelligence. Even if you are familiar with these two, the video is a nice and clear exposition of the basic points, and I recommend it highly.
Quick summary: Statements about what one ought to do are logically distinct from ‘is’ statements – those that simply describe the world – and ‘ought’ statements cannot be derived from ‘is’ statements, nor vice versa. Artificial intelligence researchers recognize this in the form of the orthogonality thesis: an AI can have more or less any combination of intelligence level and terminal values, and learning more about the world or becoming more intelligent will not cause its values to converge on anything in particular.
This is more than just pure theory. When you’re building an AI, you actually have to design its goals separately from its capacity to describe and model the world. If you don’t, you’ll have failed at the alignment task – ensuring that the goals of the AI are friendly to humanity (and most of the space of all possible goals looks fairly unfriendly).
Said more succinctly: if you think that a sufficiently intelligent being, given enough time to observe the world and work out all of its “is” statements, would naturally converge on some set of “ought” statements, you’re wrong. However intuitively compelling that may seem, it just isn’t the case. Values and beliefs are orthogonal, and if you think otherwise, you’d make a bad AI designer.
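To make that design point concrete, here’s a toy sketch in Python (everything in it – the names, the actions, the numbers – is made up purely for illustration; it’s not any real AI system). The world model encodes only “is” facts about what actions do; the utility function supplies the “ought”. Two agents can share the exact same world model and still want completely different things:

```python
# Toy sketch: the world model ("is" machinery) and the utility function
# ("ought"/terminal values) are separate, independently chosen components.
# All names and numbers here are hypothetical illustrations.
from typing import Callable, Dict, List

State = Dict[str, float]
Action = str

def world_model(state: State, action: Action) -> State:
    """Predicts the next state. Pure 'is' knowledge: says nothing about what to want."""
    new_state = dict(state)
    if action == "make_paperclips":
        new_state["paperclips"] += 1
        new_state["human_welfare"] -= 0.1
    elif action == "help_humans":
        new_state["human_welfare"] += 1
    return new_state

def plan(state: State, actions: List[Action],
         utility: Callable[[State], float]) -> Action:
    """Picks whichever action the utility function rates highest, using the
    same world model no matter which utility function gets plugged in."""
    return max(actions, key=lambda a: utility(world_model(state, a)))

# Identical world models, different terminal values:
paperclip_utility = lambda s: s["paperclips"]     # cares only about paperclips
friendly_utility = lambda s: s["human_welfare"]   # cares only about people

start = {"paperclips": 0.0, "human_welfare": 0.0}
actions = ["make_paperclips", "help_humans"]

print(plan(start, actions, paperclip_utility))  # -> make_paperclips
print(plan(start, actions, friendly_utility))   # -> help_humans
```

Making `world_model` arbitrarily more accurate changes nothing about which utility function gets handed to `plan` – that’s the orthogonality point in miniature.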
I am very confused by the existence of apparently reasonable people who have spent any significant amount of time thinking about this and still concluded that “ought”s can be derived from “is”s, or that “ought”s are really just some special type of “is”s.
Case in point: Sam Harris’s “argument” for getting an ought from an is.
Getting from “Is” to “Ought”
1/ Let’s assume that there are no ought’s or should’s in this universe. There is only what *is*—the totality of actual (and possible) facts.
2/ Among the myriad things that exist are conscious minds, susceptible to a vast range of actual (and possible) experiences.
3/ Unfortunately, many experiences suck. And they don’t just suck as a matter of cultural convention or personal bias—they really and truly suck. (If you doubt this, place your hand on a hot stove and report back.)
4/ Conscious minds are natural phenomena. Consequently, if we were to learn everything there is to know about physics, chemistry, biology, psychology, economics, etc., we would know everything there is to know about making our corner of the universe suck less.
5/ If we *should* do anything in this life, we should avoid what really and truly sucks. (If you consider this question-begging, consult your stove, as above.)
6/ Of course, we can be confused or mistaken about experience. Something can suck for a while, only to reveal new experiences which don’t suck at all. On these occasions we say, “At first that sucked, but it was worth it!”
7/ We can also be selfish and shortsighted. Many solutions to our problems are zero-sum (my gain will be your loss). But *better* solutions aren’t. (By what measure of “better”? Fewer things suck.)
8/ So what is morality? What *ought* sentient beings like ourselves do? Understand how the world works (facts), so that we can avoid what sucks (values).
This is clearly a bad argument. Depending on what he intended it to be, step 5 is either a non sequitur or a restatement of his conclusion – either way, it simply asserts the very “ought” (“we should avoid what really and truly sucks”) that the “is” premises in steps 1–4 were supposed to deliver. It’s especially surprising because the mistake is so clearly visible.
I don’t know what to make of this, and really have no charitable interpretation. My least uncharitable interpretation is that maybe the root of the problem is an inability to let go of the longing to ground your moral convictions in objectivity. I’ve certainly had times when I felt so completely confident about a moral conviction that I convinced myself that it just had to be objectively true, although I always eventually came down from those highs.
I’m not sure, but this is something that I am very confused about and want to understand better.
I started out trying to write a charitable interpretation of his argument, but I might have just made up something completely different haha. It’s hardddd to try to argue for moral truths being real in virtually any sense of the word. 😐 Anyway…
“Should” statements are just ideas created by brains; they don’t say anything about stuff that exists independently of brains. Specifically, “should” statements refer to certain unique qualitative feelings/ideas that accompany a strong and/or particular type of preference. “I should not hurt people” = “I have a strong/deep-seated/other-specific-type-of-preference which results in a unique qualitative response against hurting people.” We call that unique qualitative response a moral feeling.
There are true facts about preferences, often closely linked to true facts about moral feelings. For example, brains hate being pressed against hot stoves. This is near-universal and would be silly to debate. From this we can reasonably (and correctly) guess that “you should not press me against a hot stove” is a near-universal moral feeling. A more general near-universal moral feeling is, “In general, you should try to make people happy rather than sad.”
Now say someone asks you, “What does it mean to act morally?” You could say, “Well, it means acting in accordance with a specific type of unique qualitative response that people often have to certain types of preferences, which pushes them in one direction or the other, often in competition with other feelings that arise from those same preferences but which are qualitatively different.” Or you could say, “Basically, you should try to make people happy rather than sad.” The first statement describes what “acting morally” literally is, and the second gives you a representative example of how someone would likely go about it.
The second statement might sound like it’s claiming something “objective” about the external world, but it’s not necessarily – we’re just moving from a high-level description of morality to a low-level answer within the space of moral feelings. And it’s a fact that moral feelings in brains generally say, “To be moral, you should make people happy rather than sad.” This approaches the argument that when someone asks a question *within the space of moral feelings*, there is sometimes a genuinely correct answer.
Here’s a moral question with an incredibly consistent answer across all individuals: “If I were able to, should I cause every conscious creature in the entire universe to suffer as much as possible for all eternity?” Everyone’s moral feelings say, “No.”
Therefore, it makes sense to say there is one right answer to the above question, when we ask it *within the space of moral feelings*. To put it another way, *if the word “should” is to be used at all,* at the very least you have to agree that you should not cause infinite suffering to all creatures. Anything else just doesn’t make sense.
Someone (i.e. Sam Harris) might then think that moral feelings are the kind of thing you should systematize, and that from the above admission you can derive the rest of a “true” moral framework (“true” meaning that there are correct answers to a bunch of questions within the space of moral feelings, not that moral feelings actually correspond to some type of moral truth that exists in the “real world” outside human brains).
This is really good and clear. One of the best defenses of a realist-ish stance I’ve seen. Lots of food for thought. Thanks! 😀