Earning to give or waiting to give

Merry Christmas! In the spirit of the season, let’s talk about altruism.

Anybody familiar with the effective altruism movement knows about the concept of earning to give. The idea is that for some people, the ideal altruistic career path might involve them making lots of money in a non-charitable role and then donating a significant fraction of that to effective charities. Estimates from GiveWell for the current lowest cost to save a life put it around $2,300, which indicates that choosing a soulless corporate job that allows you to donate $150,000 a year could actually be better than choosing an altruistic career in which you are on average saving 65 lives per year, or more than a life per week!

What I’m curious about lately is the concept of earning to save up to give. At the time of writing, the rate of return on US treasury bills is 1.53%. US treasury bills are considered to be practically riskless – you get a return on your investment no matter what happens to the economy. Assuming that this rate of return and the $2,300 figure both stay constant, what this means is that by holding off on donating your $150,000 for five years, you expect to be able to donate about $11,830 more, which corresponds to saving 5 extra lives. And if you hold off for twenty years, you will be able to save 23 more lives!

If you choose to take on bigger risk, you can do even better than this. Instead of investing in treasury bills, you could put your money into a diversified portfolio of stocks and expect to get a rate of return of around 7% (the average annualized total return of the S&P 500 for the past 90 years, adjusted for inflation). Now you run the risk of the stock market being in a slump at the end of five years, but even if that happens you can just wait it out longer until the market rises again. In general, as your time horizon for when you’re willing to withdraw your investment expands, your risk drops. If you invest your $150,000 in stocks and withdraw after five years, you can expect to save 26 more lives than you would by donating right away!
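The arithmetic behind these figures is simple to check. Here is a minimal sketch, assuming (as above) a constant rate of return and a constant $2,300 cost per life saved:

```python
# Extra lives saved by investing a donation before giving, assuming a
# constant annual rate of return and a constant cost per life saved.
# These are the simplifying assumptions made in the text.

def extra_lives(donation, rate, years, cost_per_life):
    """Lives saved by the investment growth alone, beyond donating now."""
    growth = donation * ((1 + rate) ** years - 1)
    return growth / cost_per_life

# Treasury bills at 1.53%:
print(round(extra_lives(150_000, 0.0153, 5, 2300)))   # ~5 extra lives
print(round(extra_lives(150_000, 0.0153, 20, 2300)))  # ~23 extra lives

# Stocks at 7%:
print(round(extra_lives(150_000, 0.07, 5, 2300)))     # ~26 extra lives
```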

In general, you can choose your desired level of risk and get the best available rate of return by investing in an appropriate linear combination of treasury bills and stocks. And if you want a higher rate of return than 7%, you can take on more risk by leveraging the market (shorting the risk-free asset – that is, borrowing at the risk-free rate – and using the money to invest more in stocks). In this model, the plan for maximizing charitable giving would be to continuously invest your income at your chosen level of risk, donating nothing until some later date at which time you make an enormous contribution to your choice of top effective charities. In an extreme case, you could hold off on donations your entire life and then finally make the donation in your will!
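The expected return of such a mix is just the weight-averaged return of the two assets; a weight above 1 on stocks corresponds to leverage. A quick sketch, using the 7% and 1.53% figures from above:

```python
# Expected return of a two-asset portfolio: weight w in stocks and
# (1 - w) in the risk-free asset. w > 1 corresponds to leverage
# (shorting the risk-free asset to buy more stock).

def portfolio_return(w, r_stocks=0.07, r_riskfree=0.0153):
    return w * r_stocks + (1 - w) * r_riskfree

print(portfolio_return(0.5))  # a half-and-half mix
print(portfolio_return(1.5))  # leveraged 1.5x into stocks
```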

Alright, so let’s discuss some factors in favor and against this plan.

Factors in favor
Compounding interest
Decreasing moral and factual uncertainty

Factors against
Shrinking frontier of low hanging fruit
Personal moral regression
Bad PR
Future discounting
Infinite procrastination

Compounding interest and vanishing low hanging fruits

First, and most obviously, waiting to give means more money, and more money means more lives. And since your investment grows exponentially, being sufficiently patient can mean large increases in amount donated and lives saved. There’s a wrinkle in this argument though. While your money grows with time, it might be that the effective cost of improving the world grows even quicker. This could result from the steady improvement of the world: problems are getting solved and low hanging altruistic fruits are being taken one at a time, leaving us with a set of problems that take more money to solve, making them less effective in terms of impact per dollar donated. If the benefit of waiting is a 5% annual return on your investment, but the cost is a 6% decrease in the effective number of lives saved per dollar donated, then it is best to donate as soon as possible, so as to ensure that your dollars have the greatest impact.
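This race between the two rates can be made concrete: lives saved after n years scale as ((1 + r)(1 − f))^n, so waiting only helps when (1 + r)(1 − f) > 1. A sketch with the 5% and 6% figures from the paragraph above:

```python
# Lives saved by donating after n years, when the investment grows at
# rate r but charity cost-effectiveness shrinks at rate f. With r = 5%
# and f = 6%, waiting is a net loss, since 1.05 * 0.94 < 1.

def lives_after(n, donation=150_000, cost_per_life=2300, r=0.05, f=0.06):
    money = donation * (1 + r) ** n
    lives_per_dollar = (1 / cost_per_life) * (1 - f) ** n
    return money * lives_per_dollar

print(round(lives_after(0), 1))   # donate now: ~65.2 lives
print(round(lives_after(10), 1))  # after 10 years: fewer lives
```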

Estimates of the trends in effectiveness of top charities are hard to come by, but crucially important for deciding when to give and when to wait.

Moral and factual uncertainty

Say that today you donate 50,000 dollars to Charity X, and tomorrow an exposé comes out revealing that this charity is actually significantly less effective than previously estimated. That money is gone now, and you can’t redirect it to a better charity in response to this new information. But if you had waited to donate, you would be able to respond and more accurately target your money to the most effective charities. The longer you wait to donate, the more time there is to gather the relevant data, analyze it, and come to accurate conclusions about the effectiveness of charities. This is a huge benefit of waiting to give.

That was an example of factual uncertainty. You might also be concerned about moral uncertainty, and think that in the future you will have better values than today. For instance, you may think that your future values, after having had more time to reflect and consider new arguments, will be more rational and coherent than your current values. This jumps straight into tricky meta-ethical territory; if you are a moral realist then it makes sense to talk about improving your values, but for a non-realist this is harder to make sense of. Regardless, there certainly is a very common intuition that individuals can make moral progress, and that we can “improve our values” by thinking deeply about ethics.

William MacAskill has talked about a related concept, applied on a broader civilizational level. Here’s a quote from his 80,000 Hours interview:

Different people have different sets of values. They might have very different views for what an optimal future looks like. What we really want ideally is a convergent goal between different sorts of values so that we can all say, “Look, this is the thing that we’re all getting behind that we’re trying to ensure that humanity…” Kind of like this is the purpose of civilization. The issue, if you think about purpose of civilization, is just so much disagreement. Maybe there’s something we can aim for that all sorts of different value systems will agree is good. Then, that means we can really get coordination in aiming for that.

I think there is an answer. I call it the long reflection, which is you get to a state where existential risks or extinction risks have been reduced to basically zero. It’s also a position of far greater technological power than we have now, such that we have basically vast intelligence compared to what we have now, amazing empirical understanding of the world, and secondly tens of thousands of years to not really do anything with respect to moving to the stars or really trying to actually build civilization in one particular way, but instead just to engage in this research project of what actually is a value. What actually is the meaning of life? And have, maybe it’s 10 billion people, debating and working on these issues for 10,000 years because the importance is just so great. Humanity, or post-humanity, may be around for billions of years. In which case spending a mere 10,000 is actually absolutely nothing.

Personal moral regression

On the other side of the issue of changing values over time, we have the problem of personal moral regression. If I’m planning to save up for an eventual donation decades down the line, I might need to seriously worry about the possibility that when the time comes to donate I have become a more selfish person or have lost interest in effective altruism. Plausibly, as you age you might become more attached to your money or expect a higher standard of living than when you were younger. This is another of these factors that is hard to estimate, and depends a lot on the individual.

Bad PR

Earning to give is already a concept that draws criticism in some quarters, and I think that waiting to give may look worse in some ways. I could easily see the mainstream revile the idea of a community of self-proclaimed altruists that mostly sit around and build up wealth with the promise to donate it some time into the future. Tied in with this is the concern that by removing the signaling value of your form of altruism, some of the motivation to actually be altruistic in the first place is lost.

Future discounting

Economists commonly talk about temporal discounting, the idea of weighting current value higher than future value. Give somebody a choice between an ice cream today and two ice creams in a month, and they will likely choose the ice cream today. This indicates that there is some discount rate on future value to make somebody indifferent to a nominally identical current value. This discount rate is often thought of purely descriptively, as a way to model a particular aspect of human psychology, but it is also sometimes factored into recommendations for policy.

For instance, some economists talk about a social discount rate, which represents the idea of valuing future generations less than the current generation. This discount rate actually also factors importantly into the calculation of the appropriate value of a carbon tax. Most major calculations of the carbon tax explicitly use a non-zero social discount rate, meaning that they assume that future generations matter less than current generations. For instance, William Nordhaus’s hugely influential work on carbon pricing used a “3 percent social discount rate that slowly declines to 1 percent in 300 years.”

I don’t think that this makes much moral sense. If you apply a constant discount rate to the future, you can end up saying things like “I’d rather get a nickel today than save the entire planet a thousand years from now.” It seems to me that this form of discounting is simply prejudice against those people who are most remote from us: those that are in our future and as such do not exist yet. This paper by Tyler Cowen and Derek Parfit argues against a social discount rate. From the paper:

Remoteness in time roughly correlates with a whole range of morally important facts. So does remoteness in space. Those to whom we have the greatest obligation, our own family, often live with us in the same building. We often live close to those to whom we have other special obligations, such as our clients, pupils, or patients. Most of our fellow citizens live closer to us than most aliens. But no one suggests that, because there are such correlations, we should adopt a spatial discount rate. No one thinks that we would be morally justified if we cared less about the long-range effects of our acts, at some rate of n percent per yard. The temporal discount rate is, we believe, as little justified.
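The nickel-versus-planet claim is easy to verify numerically. In this toy calculation, the $10^19 “value of saving the planet” is an arbitrary stand-in (far exceeding total world wealth), not a figure from any source:

```python
# Present value of a future benefit under a constant annual discount rate.
# The numbers here are illustrative assumptions, not estimates.

def present_value(future_value, rate, years):
    return future_value / (1 + rate) ** years

# A benefit worth $10^19, a thousand years out, at a 5% discount rate:
pv = present_value(1e19, 0.05, 1000)
print(pv < 0.05)  # True: worth less than a nickel today
```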

Infinite procrastination

There’s one final consideration I want to bring up, which is more abstract than the previous ones. Earlier I imagined somebody who decides to save up their entire life and make their donation in their will. But we can naturally ask: why not set up your will to wait another twenty years and then donate to the top-rated charities from your estimate of the most reliable charity evaluator? And then why stop at just twenty years? Why not keep on investing, letting your money build up more and more, all to the end of making some huge future donation that confers an absolutely enormous benefit on some future generation? The concern is that this line of reasoning might never terminate.

While this is a fun thought experiment, I think there are a few fairly easy holes that can be poked in it. In reality, you will eventually run up against decreasing marginal value of your money. At some point, the extra money you get by waiting another year actually doesn’t go far enough to make up for the human suffering you could have prevented. Additionally, the issue of vanishing low hanging fruits will become more and more pressing, pushing you towards donating sooner rather than later.

Summing up

We can neatly sum up the previous considerations with the following formula.

V = Q − (1 − p)(1 − d)(1 + R)(Q + I − F)
Here Q is the value of donating a dollar now, while the second term is the value of investing it and donating in a year: the dollar grows to (1 + R), you donate only (1 − p) of it, each dollar then saves Q + I − F lives, and those lives are discounted by a factor of (1 − d). If V > 0, then giving now is preferable to giving in a year. If V < 0, then you should wait for at least a year.

Q = Current lives saved per dollar
R = Rate of return on investment
I = Reflection factor: yearly increase in lives saved per dollar from better information and longer reflection
p = Regression factor: expected percentage less you are willing to give in a year
F = Low hanging fruit factor: yearly decrease in lives saved per dollar
d = Temporal discount factor: percentage less that lives are valued each year

The ideal setting for waiting to give is where F is near zero (the world’s issues are only very slowly being sorted out), I is large (time for reflection is sorely needed to bring empirical data and moral clarity), p is near zero (no moral regression), R is large (you get a high return on your investment), and d is near zero (you have little to no moral preference for helping current people over future people).
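This comparison can be computed directly from the variable definitions above. The parameter values plugged in below are illustrative assumptions, not estimates:

```python
# V > 0 means give now; V < 0 means wait at least a year.
# Q: current lives saved per dollar; R: rate of return; I: yearly gain in
# lives/dollar from better information; p: moral-regression factor;
# F: yearly loss in lives/dollar from vanishing low hanging fruit;
# d: temporal discount factor. All values below are illustrative only.

def value_of_giving_now(Q, R, I, p, F, d):
    give_now = Q
    wait_a_year = (1 - p) * (1 - d) * (1 + R) * (Q + I - F)
    return give_now - wait_a_year

Q = 1 / 2300  # lives per dollar at GiveWell's $2,300 figure

# Patient investor, no regression or discounting: waiting wins (V < 0).
print(value_of_giving_now(Q, R=0.07, I=0, p=0, F=0, d=0) < 0)   # True

# Same returns, but heavy moral regression: giving now wins (V > 0).
print(value_of_giving_now(Q, R=0.07, I=0, p=0.2, F=0, d=0) > 0)  # True
```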
