This is a special post for quick takes by alex lawsen. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I'm fairly disappointed with how much discussion I've seen recently that either doesn't bother to engage with ways in which the poster might be wrong, or only engages with weak versions. It's possible that the "debate" format of the last week has made this worse, though not all of the things I've seen were directly part of that.

I think that not engaging at all, and openly presenting only one side while saying that's what you're doing, is better than presenting and responding only to weak counterarguments, which in turn is still better than strawmanning arguments that someone else has actually made.

Now posted as a top-level post here.

I like your framing of PhDs "as more like an entry-level graduate researcher job than ‘n more years of school’". Many people outside of academia don't understand this, and think of graduate school as just an extension of undergrad, when it is really a completely different environment. The main reason to get a PhD is if you want to be a professional researcher (either within or outside of academia), so from this perspective you'll have to be a junior researcher somewhere for a few years anyway.

In the context of short timelines: if you can do direct work on high impact problems during your PhD, the opportunity cost of a 5-7 year program is substantially lower. 

However, in my experience, academia makes it very hard to focus on questions of highest impact; instead people are funneled into projects that are publishable by academic journals. It is really hard to escape this, though having a supportive supervisor (e.g., somebody who already deeply cares about x-risks, or an already tenured professor who is happy to have students study whatever they want) gives you a better shot at studying something actually useful. Just something to consider even if you've already decided you're a good personal fit for doing a PhD!

When Roodman's awesome piece on modelling the human trajectory came out, I felt that far too little attention was paid to the catastrophic effects of including finite resources in the model.

I wonder if part of this is an (understandable) reaction to the various fairly unsophisticated anti-growth arguments which float around in environmentalist and/or anticapitalist circles. It would be a mistake to dismiss this as a concern simply because some related arguments are bad. To sustain increasing growth, our productive output per unit resource has to become arbitrarily large (barring space colonisation). It seems not only possible but somewhat likely that this "efficiency" measure will reach a cap some time before space travel meaningfully increases our available resources.
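A toy calculation makes the point concrete (the 2% growth rate and the numbers are my own illustrative assumptions, not from the original discussion): against any fixed resource cap, sustained exponential growth in output forces output-per-resource to grow exponentially too, with no plateau.

```python
# Toy illustration: 2% annual output growth against a fixed resource cap R.
# The required "efficiency" (output per unit resource) grows without bound.
R = 1.0  # finite resource cap, arbitrary units
for years in (0, 100, 200, 500):
    output = 1.02 ** years
    efficiency = output / R
    print(f"year {years}: required efficiency ~{efficiency:,.0f}x")
```

After a few centuries the required efficiency multiplier is in the tens of thousands, which is the sense in which the measure must become "arbitrarily large".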

I'd like to see more sophisticated thought on this. As a (very brief) sketch of one failure mode:

- Sub-AGI but still powerful AI ends up mostly automating the decision-making of several large companies, which, with their competitive advantage, then obtain and use huge amounts of resources.

- They notice each other, and compete to grab those remaining resources as quickly as possible.

- Resources gone, very bad.

(This is along the same lines as "AGI acquires paperclips"; it's not meant to be a fully fleshed-out example, merely an illustrative story.)

Just flagging that space doesn't solve anything - it just pushes back resource constraints a bit. Given speed-of-light constraints, we can only increase resources via space travel ~quadratically with time, which won't keep up with either exponential or hyperbolic growth.

~quadratically

Why not  cubically? Because the Milky Way is flat-ish?

Volume of a sphere with radius increasing at constant rate has a quadratic rate of change.
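A quick numerical check of this (my own sketch, not from the thread): the reachable volume V(t) = (4/3)π(ct)³ grows cubically, so its rate of change dV/dt = 4πc³t² is quadratic; doubling t quadruples the rate.

```python
import math

def reachable_volume(t, c=1.0):
    """Volume of a sphere whose radius grows at constant speed c."""
    return (4 / 3) * math.pi * (c * t) ** 3

def growth_rate(t, c=1.0, dt=1e-6):
    """Numerical derivative dV/dt via central difference."""
    return (reachable_volume(t + dt, c) - reachable_volume(t - dt, c)) / (2 * dt)

# Quadratic scaling of the rate: growth_rate(2) is ~4x growth_rate(1).
ratio = growth_rate(2.0) / growth_rate(1.0)
```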

Ah yeah. Damn, I could have sworn I did the math before on this (for this exact question) but somehow forgot the result.😅

This is why you should have done physics ;)

Thanks, this is useful to flag. As it happens I think the "hard cap" will probably be an issue first, but it's definitely noteworthy that even if we avoid this there's still a softer cap which has the same effect on efficiency in the long run.

And yes, in my view wasting or misusing resources due to competitive pressure is one of the key failure modes to be mindful of in the context of AI alignment and AI strategy. FWIW, my sense is that this belief is held by many people in the field, and that a fair amount of thought has been going into it. (Though as with most issues in this space I think we don't have a "definite solution" yet.)

Yes, I think it is very likely that growth eventually needs to become polynomial rather than exponential or hyperbolic. The only two defeaters I can think of are (i) we are fundamentally wrong about physics or (ii) some weird theory of value that assigns exponentially growing value to sub-exponential growth of resources. 

This post contains some relevant links (though note I disagree with the post in several places, including its bottom line/emphasis).

I'm considering taking the very +EV betting opportunities available around the US election, using the money I plan to donate over the next 6 months, then donating the winnings (or not donating if I lose).

Some more discussion on my twitter here but I'm interested in thoughts from EAF members too. It's not a huge amount of money either way.

I ended up doing this.

This went well :) Congrats EAF meta, Rethink, and GFI on your winnings.

Together with a few EA friends, I ended up betting a substantial amount of money on Biden. It went well for me, too, as well as for some of my friends. I think presidential elections present unusually good opportunities for both betting and arbitrage, so it may be worth coordinating some joint effort next time.

(As a note of historical interest, during the 2012 US election a small group of early EAs made some money arbitraging Intrade.)
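As an illustration of the arbitrage mechanic mentioned here (all odds and numbers invented for the example): if two platforms offer decimal odds on complementary outcomes whose implied probabilities sum to less than 1, splitting a stake across both sides locks in a profit regardless of the result.

```python
odds_a = 2.2  # decimal odds for outcome A on platform 1 (implied p ≈ 0.455)
odds_b = 2.2  # decimal odds for outcome B on platform 2 (implied p ≈ 0.455)
# Implied probabilities sum to 1/2.2 + 1/2.2 ≈ 0.91 < 1, so arbitrage exists.

total_stake = 100.0
# Split stakes so the payout is identical whichever outcome occurs.
stake_a = total_stake * (1 / odds_a) / (1 / odds_a + 1 / odds_b)
stake_b = total_stake - stake_a

payout_if_a = stake_a * odds_a  # 110.0 back on 100.0 staked
payout_if_b = stake_b * odds_b  # also 110.0: a guaranteed ~10% return
```

In practice fees, withdrawal limits, and counterparty risk eat into this, which is part of why coordination helps.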

EA fellowships and summer programmes should have (possibly more competitive) "early entry" cohorts with deadlines in September/October: if you apply by then, you get a guaranteed place, funding, and maybe some extra perk to encourage it, which could literally just be a Slack with the other participants.

Consulting, finance, etc. have really early application processes, where people feel pressure to accept offers in case they don't get anything else, and then don't want to back out.

Given the probable existence of several catastrophic "tipping points" in climate change, as well as feedback loops more generally (such as melting ice reducing solar reflectivity), it seems likely that averting CO2 emissions in the future is less valuable than doing so today.


To do: Figure out an appropriate discount rate to account for this.

I like the idea:word ratio in this post.

Discounting the future consequences of welfare producing actions:

  • there's almost unanimous agreement among moral philosophers that future welfare itself should not be discounted.
  • however, many systems in the world are chaotic, and it's very uncontroversial that in consequentialist theories the value of an action should depend on the expected utility it produces.
  • is it possible that the rational conclusion is to exponentially discount future welfare as a way of accounting for the exponential sensitivity to initial conditions exhibited by the long-term consequences of one's actions?
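A minimal sketch of what exponential discounting does in practice (the rates and numbers here are my own illustrative assumptions, not from the post):

```python
def discounted_value(value, annual_rate, years):
    """Present value of a future benefit under exponential discounting."""
    return value / (1 + annual_rate) ** years

# A benefit worth 100 units realised 30 years out is worth roughly 55 units
# today at a 2% annual rate, and roughly 23 units at a 5% rate — small
# differences in the rate compound into large differences in present value.
pv_2pct = discounted_value(100, 0.02, 30)
pv_5pct = discounted_value(100, 0.05, 30)
```

This is why the choice of rate (if any) matters so much for long-termist comparisons.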

Lots of GiveWell's modelling assumes that the health burden of a disease or deficiency is roughly linear in its severity. This is a defensible default assumption, but it seems important enough to the analysis that it would be worth investigating whether there's a more sensible prior.
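To illustrate why the linearity assumption can matter (the severity scores and the convex alternative are invented for illustration): under a linear harm function total burden depends only on the sum of severities, while a convex function concentrates the burden in the worst cases, which can change prioritisation.

```python
severities = [0.1, 0.2, 0.2, 0.9]  # hypothetical severity scores in [0, 1]

harm_linear = sum(severities)                  # 1.4: every unit counts equally
harm_convex = sum(s ** 2 for s in severities)  # 0.9: dominated by the worst case

# Under the convex model the single 0.9-severity case contributes
# 0.81 / 0.9 ≈ 90% of total harm, vs 0.9 / 1.4 ≈ 64% under the linear one.
```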

I started donating regularly by following this thought process:

Some amount of money exists which is small enough that I wouldn't notice not having it.

This is clearly a lower bound on how much I am morally obligated to donate, because not having it costs me 0 utility, while giving it away generates positive utility for someone else.

I ended up donating £1/month, but committing never to cancel this and periodically review it. I now donate much, much more.

To do:

Compare the benefits of encouraging other people to take a similar approach with the potential harm of this approach going wrong: specifically, moral licensing kicking in at relatively small donation amounts.
