
There have been several posts recently about investing financially to give later. I am overall uncertain about whether the marginal donor should invest, but I worry that existing analyses are missing some key movement-building effects that might be important. In particular, it seems plausible to me that:

  • Maximizing the fraction of the world’s population that’s aligned with longtermist values is comparably important to maximizing the fraction of the world’s wealth controlled by longtermists.
  • A substantial fraction of the world population can become susceptible to longtermism only via slow diffusion from other longtermists, and cannot be converted through money.

If the above are true, we may want to invest only if we think our future money can be efficiently spent creating new longtermists. If we believe that spending can produce longtermists now, but won't be able to in the future, then we should instead be spending to produce more longtermists now.

Illustrative mathematical model

[Disclaimer: I am not an economist. Phil Trammell looked at this model and said that it does demonstrate my overall point, but also that the better way to do this would probably be to use control theory.]

[Update: Changes made after super helpful comment from Michael Dickens below.]

I created an extremely simplified model to try to illustrate the effects of spending on movement building vs. investing (a code sketch follows the list below). In this model:

  • Our goal is to maximize the number of longtermists at some critical time t (perhaps the hinge of history).
  • We start with some number of longtermists and some number of people susceptible to longtermism.
  • Each year, the number of susceptible people grows by some amount proportional to the number of longtermists that year.
  • Each year we are paid a fixed salary. With our accumulated money, we can do some combination of:
    • Invest and receive some market rate of return.
    • Spend some portion of our money to convert susceptible people to longtermists at a constant cost per person. Critically, we can only create as many longtermists as there are susceptible people.
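
For concreteness, here is a minimal Python sketch of these dynamics. It is not the actual script linked below; the parameter values and the fixed spend fraction are illustrative assumptions.

```python
# Minimal sketch of the model above. All parameter values are
# illustrative assumptions, not calibrated estimates.

def simulate(spend_fraction, years=30, longtermists=100.0,
             susceptible=50.0, money=0.0, salary=10_000.0,
             market_rate=1.05, growth_rate=0.1, cost_per_convert=1_000.0):
    """Return the number of longtermists at the critical time t."""
    for _ in range(years):
        # The susceptible pool grows in proportion to the movement's size.
        susceptible += growth_rate * longtermists
        money += salary
        # Spend a fixed fraction of available money on conversion,
        # capped by the number of susceptible people.
        converts = min(spend_fraction * money / cost_per_convert, susceptible)
        longtermists += converts
        susceptible -= converts
        money -= converts * cost_per_convert
        # Invest whatever remains at the market rate of return.
        money *= market_rate
    return longtermists

# Sweep spend fractions to find the (approximately) optimal split.
best = max(range(101), key=lambda p: simulate(p / 100))
print(best, simulate(best / 100))
```

Varying cost_per_convert and growth_rate in a sketch like this shows both regimes noted in the observations below: when conversion is cheap, the susceptible pool is the binding constraint; when it is expensive, money is.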

I’m not going to go into depth analyzing this model, but you can play with it here. The key observations are:

  • We can be constrained either by the amount of money available or by the extent to which we can effectively spend it on creating longtermists.
  • Maximizing the number of longtermists at time t may require some amount of spending on movement building early on.

What should we take away from this?

The model above could be unrepresentative in many ways: it's not clear that movements can be modeled well as growing in proportion to their current size, we don't necessarily have to spend money to convert people, etc. But it does gesture at some real aspects of the world:

  • Generally, people are more susceptible to new ideas when more of the people around them actively endorse those ideas.
  • There are important actors whom we cannot affect through money alone, but could if the movement were bigger. For example, I don’t know of a good way to spend money to cause current key political figures to pivot to longtermism, but in a world where longtermism becomes a large social movement, younger people or the children of longtermists could come to hold key positions.

As such, I think we should consider treating movement growth as a compounding resource that is useful in and of itself and is not fungible with money.

This doesn’t necessarily imply that the marginal dollar put towards movement building now is better than investing (and even in the simplified illustrative model a large fraction of total money should often go to investing). But I think we should take it into consideration when thinking about the effects of our donations.

Comments (5)



I'm glad you wrote this! Movement-building is an important complement to financial investing, and can benefit the future in many of the same ways.

“Maximizing the number of longtermists at time t may require periods of spending alternated with periods of investment.”

I believe your model gives this result because of the constraint that you have to either spend or invest all of your salary in each period. If you instead allow spending any fraction of your salary between 0% and 100%, I believe you will get the result that you maximize the number of longtermists by spending some fixed proportion of your salary in each period. Alternating between periods is a way of approximating this.

I added related functionality to your script here: https://github.com/michaeldickens/public-scripts/blob/master/movement-building-model.py

Also, there is a bug in the invest function: money += (money + salary) * market_rate should be money = (money + salary) * market_rate.
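
For concreteness, a hypothetical reconstruction of that step (the real function lives in the linked script): the buggy line adds the grown balance on top of the existing balance, double-counting the principal each year.

```python
# Hypothetical reconstruction; the actual invest function is in the script.
def invest(money, salary, market_rate=1.05):
    # Buggy:   money += (money + salary) * market_rate  (double-counts money)
    # Correct: replace the balance rather than adding to it.
    return (money + salary) * market_rate
```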

This is awesome; you're completely right, and I'm totally updating my post with your model.

It definitely strikes me that a lot of continued longtermist research and field-building is needed to put us in a good position to deploy a large amount of capital at a critical time. It's not easy to deploy capital overnight.

Sorry if this is a strawman of the "invest to give later" argument or something already addressed, but I think it's important to put out there if it hasn't been already.

I emphatically agree that we could be in a much better position to productively deploy capital, and that there may be significant room for improvement here. 

I'm not sure how relevant this is to "give now vs. later". At first glance, it seems like a key question might be what timeframe we're thinking about when considering "giving later". 

If it's on the order of a few decades, then I agree it would be somewhat farcical to say we should just wait. E.g., if we thought there'd be some pivotal AI-related moment in 2050, then "let's invest all capital and not do anything until 2040" would strike me as an obviously bad strategy. 

If, on the other hand, the idea is to use the capital in hundreds of years, then I'm less sure (but then it also becomes harder to see whether we can successfully invest over such long time horizons).

Thank you for raising some additional considerations against giving later. I think this is really valuable for the ongoing discussion that seems to be strongly tilted in favor of investing and giving later.

Even beyond your argument for movement growth, there seem to be many other intuitive considerations where similar arguments could be made. For instance, you note that "converting" longtermists is an activity tied not only to money but also to time and room for growth.

You need time to convert dollars into results, given that room for more funding is generally strongly limited by the current allocation of resources in the world. One could model this as a game where, at each time point t, you can effectively invest an amount x into cause y, where x is a function of the cumulative money already spent on cause y. It could be plausible to model this as a Gaussian function (i.e., a bell curve): money invested early leads to strong growth in room for more funding in the next round, which then declines again as full saturation (i.e., all money that could reasonably be spent is spent) is approached. Interestingly, this is an argument both for giving now and for giving later, as there is limited room where money can be spent effectively at any given time.
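
A minimal sketch of that bell-curve idea, with made-up numbers (the peak, width, and cap are not estimates for any real cause):

```python
import math

def absorbable_this_round(cumulative_spent, peak=1e6, width=5e5, cap=1e5):
    """Money that cause y can effectively absorb this round, modeled as a
    Gaussian function of the cumulative money already spent on it."""
    return cap * math.exp(-(((cumulative_spent - peak) / width) ** 2))
```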

Going beyond this "simple" view, it would also be interesting to model how problems grow over time when they are not addressed. The most obvious example is climate change: if a US president in the 80s could somehow have been convinced to shift policy towards renewables, the problem would likely have required far fewer resources overall. This suggests that the money required to address a problem is a function of when it is discovered and how many resources are directed at it over time.
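
As a toy illustration of this point (the growth rate and costs are made up): if an unaddressed problem compounds, the cost of solving it rises steeply with delay.

```python
def cost_to_solve(years_of_delay, initial_cost=1.0, annual_growth=0.05):
    """Cost of solving a problem that compounds while unaddressed."""
    return initial_cost * (1 + annual_growth) ** years_of_delay

# Acting immediately vs. waiting 40 years (e.g., the 1980s climate case):
print(cost_to_solve(0), round(cost_to_solve(40), 2))  # 1.0 vs ~7.04
```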

I am not a mathematician, but if any of this is remotely plausible, I am not sure that the thinking so far has considered such complications (at least, I haven't seen models of these dynamics, though I also haven't searched in depth). My intuition is that integrating such considerations could radically tip the balance toward a strong preference for giving as early as reasonably possible, and provide a good argument for investing in infrastructure that helps us identify and address problems effectively as they emerge.

This could be an interesting topic for a PhD student with simulation chops. Or even a benchmarking platform where different agent strategies can compete against each other.[1]


    1. See Ketter, W., Peters, M., Collins, J., and Gupta, A. 2016. “Competitive Benchmarking: An IS Research Approach to Address Wicked Problems with Big Data and Analytics,” MIS Quarterly (40:4), p. 34. ↩︎
