MichaelDickens

Comments

Replaceability Concerns and Possible Responses

I believe this only applies to certain causes, mainly global poverty. If you want to work on existential risk, movement building, or cause prioritization, basically no organizations are working on these except EA or EA-adjacent orgs. Many non-EA orgs do cause prioritization, but they generally consider a much more limited range of causes. Animal advocacy is more of a middle ground; I believe EAs make up somewhere between 10% and 50% of all factory-farming-focused animal advocates.

(This is just my impression, not backed up by any data.)

New Top EA Causes for 2020?

I'm a bit late to the party on this one, but I figured out recently that determining the correct discount rate could be the top EA cause.

The case for investing to give later

What you basically seem to be calculating is the optimal degree of free riding that you can get away with to maximize the impact of your own dollars.

If other people spend too much now and not enough later, then by investing, you do more good for the world than you would by spending now. This maximizes the impact of your own dollars without reducing the impact of anyone else's, so it increases the total well-being of the world. And it's the optimal strategy if your goal is to maximize total well-being.

Can High-Yield Investing Be the Most Good You Can Do?

However, I have seen surprisingly little engagement from the EA community on this particular topic, possibly due to the only recent publication of Trammell’s 80,000 Hours podcast, his patient philanthropy paper, and the two blog posts by Dickens and Hoeijmakers I reference above. None of those sources directly address the question of optimizing high-yielding investments.

I did previously write about optimizing investments: https://forum.effectivealtruism.org/posts/g4oGNGwAoDwyMAJSB/how-much-leverage-should-altruists-use

The post mainly talks about using leverage, but I do talk about specific investment choices in this section. See also this followup post.

Right now, a large percentage of EA money is in Facebook stock (at least according to Forbes). Holding a single stock carries about 2x the risk of the S&P 500 and ~4x the risk of the global market portfolio, but with no additional expected return. Diversifying this money seems to me the most important improvement to the EA investment portfolio, although I don't know how tractable it is. I don't have any personal connection to Cari Tuna or Dustin Moskovitz, so I can't speak to their reasons for continuing to hold most of their net worth in Facebook stock. Based on a quick back-of-the-envelope calculation, moving their money from Facebook to the global market portfolio would be worth an expected >$1 billion per year.
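To make the back-of-the-envelope calculation concrete, here is one hedged way to run it, using the standard second-order certainty-equivalent approximation for a CRRA investor. All of the numbers (wealth, expected return, volatilities, risk aversion) are my illustrative assumptions, not the actual inputs behind the figure above.

```python
def certainty_equivalent_return(mu, sigma, rra):
    """Second-order approximation of the certainty-equivalent return
    for an investor with constant relative risk aversion rra:
    CE ~= mu - rra * sigma**2 / 2."""
    return mu - rra * sigma ** 2 / 2

# Illustrative assumptions (not the author's actual inputs):
wealth = 10e9          # assumed ~$10B held in a single stock
mu = 0.05              # assumed expected return, same for both portfolios
sigma_single = 0.40    # assumed stdev of the single-stock position
sigma_global = 0.10    # assumed stdev of the global market portfolio (~4x less)
rra = 1.0              # log utility

# Same expected return, lower variance => higher certainty equivalent.
gain = (certainty_equivalent_return(mu, sigma_global, rra)
        - certainty_equivalent_return(mu, sigma_single, rra))
print(f"CE gain ~ {gain:.1%}/yr, ~${gain * wealth / 1e9:.2f}B/yr")
```

With these made-up but plausible inputs, the certainty-equivalent gain from diversifying comes out in the high hundreds of millions of dollars per year, which is the right order of magnitude for the >$1 billion figure.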

In the long run, you cannot earn greater-than-market returns, because eventually you will have so much money that you'll run out of above-market investing opportunities (and that's assuming you can identify above-market opportunities in the first place). So it's not obvious that changing investment returns affects whether to give now or later unless you're only talking about the next few decades. I do think it's plausible that the best time to give could be a few decades from now, and it's still good to earn as high an investment return as possible even if you believe you should spend your budget relatively quickly.

The Importance of Unknown Existential Risks

Related to this, I find anthropic reasoning pretty suspect, and I don't think we have a good enough grasp on how to reason about anthropics to draw any strong conclusions from it. The same could be said about choices of priors, e.g., MacAskill vs. Ord, where the answer to "are we living at the most influential time in history?" completely hinges on the choice of prior, but we don't really know the best way to pick one. This seems related to anthropic reasoning in that the Doomsday Argument depends on using a certain type of prior distribution over the number of humans who will ever live. My general impression is that we as a society don't know enough about this kind of thing (and I personally know hardly anything about it). However, it's possible that some people have correctly figured out the "philosophy of priors" and that knowledge just hasn't fully propagated yet.

The Importance of Unknown Existential Risks

Thanks for this perspective! I've heard of the Doomsday Argument but I haven't read the literature. My understanding was that the majority view is that the Doomsday Argument is wrong, but we just haven't figured out why it's wrong. I didn't realize there was substantial literature on the problem, so I will need to do some reading!

I think it is still accurate to claim that very few sources have considered the probability of unknown risks relative to known risks. I'm mainly basing this on the Rowe & Beard literature review, which is pretty comprehensive AFAIK. Leslie and Bostrom discuss unknown risks, but without addressing their relative probabilities (at least Bostrom doesn't; I don't have access to Leslie's book right now). If you know of any sources that address this that Rowe & Beard didn't cover, I'd be happy to hear about them.

Movement building and investing to give later

I'm glad you wrote this! Movement-building is an important complement to financial investing, and can benefit the future in many of the same ways.

Maximizing the number of longtermists at time t may require periods of spending alternated with periods of investment.

I believe your model gives this result because of the constraint that you have to either spend or invest all of your salary in each period. If you allow spending some fraction of your salary strictly between 0% and 100%, I believe you will get the result that you maximize the number of longtermists by spending some fixed proportion of your salary in each period. Alternating between all-spending and all-investing periods is a way of approximating this.

I added related functionality to your script here: https://github.com/michaeldickens/public-scripts/blob/master/movement-building-model.py

Also, there is a bug in the invest function: money += (money + salary) * market_rate should be money = (money + salary) * market_rate.
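To illustrate the fixed-proportion idea, here is a minimal toy model (my own sketch, not the linked script) where each period a fixed fraction of income is spent on recruiting and the rest is invested, with any accumulated money spent on recruiting at the end. The parameter values and the recruits_per_dollar mechanism are illustrative assumptions.

```python
def longtermists_after(periods, spend_frac, salary=1.0,
                       market_rate=0.05, recruits_per_dollar=0.1):
    """Toy sketch (not the original script): each period every member
    earns a salary; a fixed fraction of income is spent recruiting new
    longtermists and the remainder is invested at the market rate.
    Accumulated money is spent on recruiting in the final period."""
    money, members = 0.0, 1.0
    for _ in range(periods):
        income = members * salary
        spent = spend_frac * income
        money = (money + income - spent) * (1 + market_rate)  # note: =, not +=
        members += spent * recruits_per_dollar
    return members + money * recruits_per_dollar  # spend the remainder at time t

# Compare a few fixed spending fractions over 20 periods.
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(frac, longtermists_after(20, frac))
```

Under these assumptions, which fraction wins depends on how the recruitment yield compares to the market rate, which is the same tradeoff the alternating schedule is navigating.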

Estimating the Philanthropic Discount Rate

Future utility is not less valuable, but the possibility of extinction means there is a chance that future utility will not actualize, so we should discount the future based on this chance.

That's pretty much right. I would add that another reason complete loss of capital is "special" is that it's possible to recover from any non-complete loss via sufficiently high investment returns. But if you have $0, no matter how good a return you get, you'll still have $0.

Estimating the Philanthropic Discount Rate

Naively I would have thought that a double chance of getting half your assets expropriated would be approximately as bad as losing all of them.

Diminishing marginal utility means these two events are pretty different. Under the standard assumption of constant relative risk aversion, losing all your assets produces negative-infinity utility. I don't think this is a realistic assumption, but it's required to make the optimal consumption problem have an analytic solution. I've done some rough numeric analysis where the utility function is bounded below instead of going to negative infinity at zero, and based on what I've seen, it generally recommends about the same consumption schedule. (I only did a super preliminary analysis, so I'm not confident about this.)
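A quick sketch of why the two events differ under CRRA utility (illustrative, with relative risk aversion gamma = 2):

```python
import math

def crra_utility(c, gamma):
    """Constant-relative-risk-aversion utility: c**(1-gamma)/(1-gamma),
    or log(c) when gamma == 1. For gamma >= 1, utility goes to negative
    infinity as consumption approaches zero, which is what makes a
    total loss 'infinitely bad' under the standard assumption."""
    if gamma == 1:
        return math.log(c)
    return c ** (1 - gamma) / (1 - gamma)

# With gamma = 2 and starting wealth normalized to 1:
u_full = crra_utility(1.0, 2)        # -1.0
u_half = crra_utility(0.5, 2)        # -2.0: losing half costs 1 util
u_near_zero = crra_utility(1e-9, 2)  # about -1e9: near-total loss is
                                     # astronomically worse
```

So two independent chances of losing half your assets cost a few utils in expectation, while one chance of losing (nearly) everything is unboundedly bad, which is why the optimal policy treats them very differently.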

Similarly, organizations that avoid value drift will tend to gain power over time relative to those that don't.

Perhaps it would be more accurate to say that an organization that avoids value drift and also consumes its resources slowly (more slowly than r - g) will gain resources over time.
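The r - g condition can be sketched numerically. This toy model (my construction, with illustrative rates) tracks an organization's resources as a share of total world wealth: the share grows if and only if the spending rate is below r - g.

```python
def relative_share(years, r=0.05, g=0.03, spend_rate=0.01, share0=1.0):
    """Toy sketch: an organization's resources as a share of world
    wealth. The org's resources compound at (r - spend_rate) while
    the world compounds at g, so the share grows iff spend_rate < r - g.
    All rates are illustrative assumptions."""
    share = share0
    for _ in range(years):
        share *= (1 + r - spend_rate) / (1 + g)
    return share

# spend_rate = 0.01 < r - g = 0.02: the org's share of wealth grows.
# spend_rate = 0.03 > r - g = 0.02: the share shrinks.
```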
