This is a special post for quick takes by jackva. Only they can create top-level comments.

I really liked several of the past debate weeks, but I find it quite strange and plausibly counterproductive to spend a week in a public forum discussing these questions.

There is no clear upside to reducing the uncertainty on this question, because there are few interventions that are predictably differentiated along those lines.

And there is a lot of communicative downside risk in publicly discussing trade-offs between extinction and other risks / foregone benefits, quite apart from appearing out of touch with > 95% of people trying to do good in the world ("academic" in the bad sense of the word).

I have the impression we have not learned from the communicative mistakes of 2022 in that we are again pushing arguments of limited practical import that alienate people and limit our coalitional options.

Is this question really worth discussing and publicly highlighting, when getting more buy-in for existential risk prevention work, broadly construed, would be extremely desirable and would naturally, in the main, both reduce extinction risk and increase the quality of futures in which we survive?

I disagree that we should avoid discussing topics so as to avoid putting people off this community.[1] 

  • I think some of EA's greatest contributions come from being willing to voice, discuss and seriously tackle questions that seemed weird or out of touch at the time (e.g. AI safety). If we couldn't do that, and instead remained within the Overton window, I think we would lose a lot of the value of taking EA principles seriously.
  • If someone finds the discussion of extinction or incredibly good/bad futures off-putting, this community likely isn't for them. That happens a lot!
  1. ^

    Perhaps for some distasteful-to-almost-everyone topics, but this topic doesn't seem like that at all.

This is not what I am saying; my point is about attentional highlighting.

I am all for discussing everything on the Forum, but I do think that when we set attentional priorities -- as these weeks do -- we could reflect on whether we are targeting things that are high value to discuss; how they land with, and how they affect, the broader world could be a consideration here.

I think messaging to the broader world that we are focusing our attention on a question that will only have effects for the small set of funders who are hardcore EA-aligned makes us small.

By crude analogy, it's like having a whole Forum week on welfare weights at the opportunity cost of a week focused on how to improve animal funding generally.

We could have discussion weeks right now on key EA priorities in the news, from the future of effective development aid, to great power war and nuclear risk, to how to manage AI risk under new political realities -- all of which would seem to affect a much larger pool of resources and, crucially, would also signal to the wider world that we are a community engaging with some of the most salient issues of the day.

I think setting a debate week on a topic that has essentially no chance of affecting non-EA funders is a lost opportunity, and I don't think it would come out on top in a prioritization of debate-week topics in the spirit of "how can we do the most good?"

On a more personal level, but I think this is useful to report here because I don't think I am the only one with this reaction: I've been part of this community for a decade and have built my professional life around it -- and I do find it quite alienating that, at a time when we are close to a constitutional crisis in the US, when USAID is in shambles and the post-WW2 order is in question, we are not highlighting how to take better action in those circumstances but are instead discussing a cause prioritization question that seems very unlikely to affect major funding. It feeds the critique of EA that I've previously seen as bad faith -- that we are too much the armchair philosophers.

It seems like you're making a few slightly different points:

  1. There are much more pressing things to discuss than this question.
  2. This question will alienate people and harm the EA brand because it's too philosophical/weird.
  3. The fact that the EA Forum team chose this question given the circumstances will alienate people (kind of a mix between 1 and 2).

I'm sympathetic to 1, but disagree with 2 and 3 for the reasons I outlined in my first comment.

I think that's fine -- we just have different views on what the desirable potential size of the movement would be.

To clarify -- my point is not so much that this discussion is outside the Overton window, but that it is deeply inward-looking / insular. It was good to be early on AI risk and shrimp welfare and all of the other things we have been early on as a community, but I do think those issues had higher tractability for mobilizing larger movements / having an impact outside our community than this debate week does.

On a more personal level, but I think this is useful to report here because I don't think I am the only one with this reaction: I've been part of this community for a decade and have built my professional life around it -- and I do find it quite alienating that, at a time when we are close to a constitutional crisis in the US, when USAID is in shambles and the post-WW2 order is in question, we are not highlighting how to take better action in those circumstances but are instead discussing a cause prioritization question that seems very unlikely to affect major funding. It feeds the critique of EA that I've previously seen as bad faith -- that we are too much the armchair philosophers.

I do think it's a good chance to show that the EA brand is not about short-term interventions but about first-principles thinking, being open to weird topics, and inviting people to think outside of the media bubble. At the same time, I would like to see more stories out there (very generally speaking) about people who have used EA principles to address current issues (at EA Germany, we have been doing this every month for 2 years and were happy to have you as one of the people in our portraits). It's great that Founders Pledge and TLYCS are acting on the crisis, and that Effektiv Spenden is raising funds for it. But I'm glad they are doing this under their own brands, leaving EA to focus on the narrow target group of impartially altruistic and truth-seeking people who might, in the future, build the next generation of organizations addressing these or other problems.

I have the impression we have not learned from the communicative mistakes of 2022 in that we are again pushing arguments of limited practical import that alienate people and limit our coalitional options.

In my view, the mistakes of 2022 involved not being professional in running organizations and in doing outreach strategically. Compared with the broad communication under the EA brand back then, I'm much more positive about how GWWC, 80k, or The School for Moral Ambition are spreading ideas that originated in EA. I hope we can get better at defining our niche target group for the EA brand and working to appeal to them instead of to the broad public.

Thanks for laying out your view in such detail, Patrick!

I find it hard to grasp how the EA Forum can be so narrow -- given that there are no Fora / equivalents for the other brands you mention.

E.g. I still expect the EA Forum is widely perceived as the main place where community discussion happens, beyond the narrow mandate you outline, so the attentional priorities set here will be seen as a broader reflection of the movement than I think you intend.

I think the main issue is that I was interpreting your point about the public forum's perception as a fear that people outside could see EA as weird (in a broad sense). I would be fine with this.

But at the same time, I hope that people already interested in EA don't get the impression from the forum that the topics are limited. On the contrary, I would love to have many discussions here, not restricted by fear of outside perception.

This comment really makes me appreciate the nuanced way the Forum lets us give feedback, with disagreement and karma votes separated -- I think the fact that the two can be, and are, distinguished is quite useful for incentivizing critical feedback.

I think this is a fair point - but it's not the frame I've been using to consider debate week topics.

My aim has been to generate useful discussion within the effective altruism community. I'd like to choose topics which nudge people to examine assumptions they've been making, and which might lead to them changing their minds, and perhaps their priorities, or the focus of their work. I haven't been thinking about debate weeks as a piece of communications work / as a way of reaching out to a broader audience. This question in particular was chosen because the Forum audience wouldn't necessarily have cached takes on it -- an audience outside the Forum would need a lot of context to get what we are talking about.

Perhaps I'm missing something though -- do you think this is more public-facing than I'm assuming? To be clear, I know that it is public, but it's not directed at an outside audience in the way a book or podcast or op-ed might be.

Edit: I'm also uncertain about the claim that "there are few interventions that are predictably differentiated along those lines" -- I think Forethought would disagree, and though I'm not sure I agree with them, they've thought about it more than I have.

Thanks for engaging and for giving me the chance to outline more clearly and with more nuance what my take is.

I covered some of this in my reply to Ollie, but basically: (a) I do think that Forum weeks are significant attentional devices signaling what we see as priorities; (b) the Forum has appeared in detail in many EA-critical pieces; and (c) there are many Forum weeks we could be running right now that would be much better, both in terms of guiding action and in terms of perception in the wider world.

I take as given -- I am not the right person to evaluate this -- that there are some interventions that some EA funders might decide between based on those considerations.

But I am pretty confident it won't matter to the wider philanthropic world: almost no one is thinking about philanthropic interventions in terms of "does this make the world where we survive better, or does this mostly affect the probability of extinction?"

If EA were ascendant and we were a significant share of philanthropy, maybe that would be a good question to ask.

But in a world where our key longtermist priorities are not well funded, and where most of the things we could be doing to broadly reduce risks are not clearly alignable to either side of the crux here, making this a key attentional priority seems to carry, at least, significant opportunity cost.

EDIT: I am mostly trying to give a consistent and clearly articulated perspective here, I am surely overlooking things and you have information on this that I do not have. I hope this is useful to you, but I don't want to imply I am able to have an all-things-considered view.

Thanks for engaging on this as well! I do feel the responsibility involved in setting event topics, and it's great to get constructive criticism like this. 

To respond to the points a bit (and this is just my view -- quite quickly written because I've got a busy day today, and I'm happy to come back and clarify/change my mind in another reply):

(a) - maybe, but I think the actual content of the events almost always contains some scepticism of the question itself, discussion of adjacent debates, etc. The actual topic of the event doesn't seem like a useful place to look for evidence on the community's priorities. Also, I generally run events about topics I think people aren't prioritising. However, I think this is the point I disagree with the least -- I can see that if you are looking at the Forum in a pretty low-res way, or hearing about the event from a friend, you might get the impression that 'EA cares about X now'.

(b) - The Forum does appear in EA-critical pieces, but I personally don't think those pieces distinguish much between what one post on the Forum says and what the Forum team puts in a banner (and I don't think readers who lack context would distinguish between those things either). So, I don't worry too much about what I'm saying in the eyes of a very adversarial journalist (there are enough words on the forum that they can probably find whatever they'd like to find anyway). 

To clarify - for readers and adversarial journalists - I still have the rule of "I don't post anything I wouldn't want to see my name attached to in public" (and think others should too), but that's a more general rule, not just for the Forum. 

(c) - I'm sure that it isn't the optimal Forum week. However, (1) I do think this topic is important and potentially action-relevant -- there is increasing focus on 'AI Safety', but AI Safety is a possibly vast field with a range of challenges that a career or funding could address, and the topic of this debate is potentially an important distinction to have a take on when you are making those decisions. And (2) I'm pretty bullish on Forum events, and I'd like to run more and get the community involved more, so any suggestions for future events are always welcome.

Thanks for clarifying this!

I think ultimately we just have quite different intuitions on the trade-offs, and that seems unresolvable. Most of my intuitions here come from advising non-EA HNWs (and from spending time around advisors who specialize in advising them), so this is quite different from mostly advising EAs.

Thank you for sharing your disagreements about this! :)

I would love for there to be more discussion on the Forum about how current events affect key EA priorities. I agree that those discussions can be quite valuable, and I strongly encourage people who have relevant knowledge to post about this.

I’ll re-up my ask from my Forum update post: we are a small team (Toby is our only content manager, and he doesn’t spend his full 1 FTE on the Forum) and we would love community support to make this space better:

  1. We don’t currently have the capacity to maintain expertise and situational awareness in all the relevant cause areas. We’re considering deputizing others to actively support the Forum community — if you’re interested in volunteering some time, please let us know (feel free to DM myself or Toby).
  2. In general, we are happy to provide support for people who may want to discuss or post something on the Forum but are unsure how to, or are unsure if that’s a good fit. For example, if you want to run an AMA, or something like a Symposium for a specific topic, you can ask us for help! :) Please have a low bar for reaching out to myself or Toby to ask for support.

Historically, the EA Forum has strongly leaned in the direction of community-run space (rather than CEA-run space). Recently we’ve done a bit more proactively organizing content (like Giving Season and debate weeks), but I really don’t want to discourage the rest of the community from making conversations happen on the Forum that you think are important. We have such little capacity and expertise on our team, relative to the entirety of the community, so we won’t always have the right answers!

To address your specific concerns: I’ll just say that I’m not confident about what the right decision would have been, though I currently lean towards “this was fine, and led to some interesting posts and valuable discussions”. I broadly agree with other commenters so I’ll try not to repeat their points. Here are some additional considerations:

  1. Debate weeks take a long time to plan (around a month, though it depends on the topic), since they require a bunch of coordination, which makes it particularly hard to organize them around current events (for example, at some point I thought that the USAID cut was going to be reversed; if that had happened after we decided on the debate week topic, we would have needed to pivot our plans, possibly making posts that people wrote in advance pretty useless).
  2. USAID in particular was discussed at various points on the Forum previously and those posts got a lot of karma/attention, so it’s not clear to me if a debate week on that topic would have been clearly more valuable.
  3. Traditional news sources, and even some relevant academic communities, are likely much better at reaching non-EA funders than the Forum could do right now, even on our best days. So if our goal were around influencing non-EA funders, I don’t think we would do any interventions that utilize the EA Forum.
  4. RE: “I do think that Forum weeks are significant attentional devices signaling what we see as priorities” — I would be surprised if anyone who doesn’t actively use the Forum thought this, partially because there’s not really a way to access Forum events after they are done, so they are quite hard to find. The biggest Forum event that we run is Giving Season (it spans ~2 months), which I think you’d agree is much more action-relevant and palatable to people who don’t associate with EA, and I would be somewhat surprised to learn that that event influenced any non-EA funders (at least I haven’t heard any stories about this happening), so I would be significantly more surprised if any non-EA funders were influenced by a debate week. (I think these rarely get any outside coverage, and I even know of people who work at EA orgs who don’t know about our debate week events.)