Quick takes

Martin Gould's Five insights from farm animal economics, over at Open Phil's FAW newsletter, points out that "blocking local factory farms can mean animals are farmed in worse conditions elsewhere":

Consider the UK: Local groups celebrate blocking new chicken farms. But because UK chicken demand keeps growing — it rose 24% from 2012-2022 — the result of fewer new UK chicken farms is just that the UK imports more chicken: it almost doubled its chicken imports over the same time period. While most chicken imported into the UK comes from the EU, wh

... (read more)

Zooming out to other examples of altruistic mistakes we might be making: I think there are a lot of scenarios in which banning something, or making it less appealing in one location, is intended to reduce the bad thing but actually just shifts it elsewhere, where there are even fewer regulations.

  • One critique of the United States' drug policy is that it doesn't halt the production or trade of dangerous drugs but simply pushes them elsewhere (the balloon effect).
  • When a jurisdiction bans chicken farmers from using small ca
... (read more)

The recent pivot by 80,000 Hours to focus on AI seems (potentially) justified, but the lack of transparency and input makes me feel wary.

https://forum.effectivealtruism.org/posts/4ZE3pfwDKqRRNRggL/80-000-hours-is-shifting-its-strategic-approach-to-focus

 

TL;DR:

80,000 Hours, once a cause-agnostic, broad-scope introductory resource (with career guides, career coaching, online blogs, podcasts), has decided to focus on upskilling and producing content about AGI risk, AI alignment, and an AI-transformed world.


According to their post, they will still host t... (read more)

My blog might be of interest to people 

I really liked several of the past debate weeks, but I find it quite strange and plausibly counterproductive to spend a week in a public forum discussing these questions.

There is no clear upside to reducing the uncertainty on this question, because there are few interventions that are predictably differentiated along those lines.

And there is a lot of communicative downside risk when publicly discussing trading off extinction versus other risks / foregone benefits, apart from appearing out of touch with > 95% of people trying to do good in the world ("a... (read more)

Sarah Cheng
Thank you for sharing your disagreements about this! :) I would love for there to be more discussion on the Forum about how current events affect key EA priorities. I agree that those discussions can be quite valuable, and I strongly encourage people who have relevant knowledge to post about this.

I'll re-up my ask from my Forum update post: we are a small team (Toby is our only content manager, and he doesn't spend his full 1 FTE on the Forum) and we would love community support to make this space better:

1. We don't currently have the capacity to maintain expertise and situational awareness in all the relevant cause areas. We're considering deputizing others to actively support the Forum community — if you're interested in volunteering some time, please let us know (feel free to DM myself or Toby).
2. In general, we are happy to provide support for people who may want to discuss or post something on the Forum but are unsure how to, or are unsure if that's a good fit. For example, if you want to run an AMA, or something like a Symposium for a specific topic, you can ask us for help! :) Please have a low bar for reaching out to myself or Toby to ask for support.

Historically, the EA Forum has strongly leaned in the direction of community-run space (rather than CEA-run space). Recently we've done a bit more proactive organizing of content (like Giving Season and debate weeks), but I really don't want to discourage the rest of the community from making conversations happen on the Forum that you think are important. We have so little capacity and expertise on our team, relative to the entirety of the community, that we won't always have the right answers!

To address your specific concerns: I'll just say that I'm not confident about what the right decision would have been, though I currently lean towards "this was fine, and led to some interesting posts and valuable discussions". I broadly agree with other commenters so I'll try not to repeat their points. Here ar
Toby Tremlett🔹
Thanks for engaging on this as well! I do feel the responsibility involved in setting event topics, and it's great to get constructive criticism like this. To respond to the points a bit (this is just my view, quite quickly written because I've got a busy day today, and I'm happy to come back and clarify/change my mind in another reply):

(a) Maybe, but I think the actual content of the events almost always contains some scepticism of the question itself, discussion of adjacent debates, etc. The actual topic of the event doesn't seem like a useful place to look for evidence on the community's priorities. Also, I generally run events about topics I think people aren't prioritising. However, this is the point I disagree with the least - I can see that if you are looking at the Forum in a pretty low-res way, or hearing about the event from a friend, you might get the impression that 'EA cares about X now'.

(b) The Forum does appear in EA-critical pieces, but I personally don't think those pieces distinguish much between what one post on the Forum says and what the Forum team puts in a banner (and I don't think readers who lack context would distinguish between those things either). So I don't worry too much about what I'm saying in the eyes of a very adversarial journalist (there are enough words on the Forum that they can probably find whatever they'd like to find anyway). To clarify - for readers and adversarial journalists - I still have the rule of "I don't post anything I wouldn't want to see my name attached to in public" (and I think others should too), but that's a more general rule, not just for the Forum.

(c) I'm sure that it isn't the optimal Forum week. However, (1) I do think this topic is important and potentially action-relevant - there is increasing focus on 'AI safety', but AI safety is a possibly vast field with a range of challenges that a career or funding could address, and the topic of this debate is potentially an important

Thanks for clarifying this!

I think ultimately we have quite different intuitions on the trade-offs, and that seems unresolvable. Most of my intuitions there come from advising non-EA HNWs (high-net-worth individuals), and from spending time around advisors who specialize in advising them, so this is quite different from mostly advising EAs.

Learnings from a day of walking conversations 

Yesterday, I did 7 one-hour walks with Munich EA community members. Here's what I learned and why I would recommend it to similarly extroverted community members:

Format

  • Created an info document and 7 one-hour Calendly slots and promoted them via our WhatsApp group
  • One hour worked well as a default timeframe - 2 conversations could have been shorter while others could have gone longer
  • Scheduling more than an hour with someone unfamiliar can feel intimidating, so I'll keep the 1-hour format
  • Walked approxima
... (read more)

This is a small appreciation post for the deep and prompt community engagement from the 80k team after their announcement of their new strategic direction.

No organization is under any obligation to respond to comments and criticisms about their strategy, and I've been impressed by the willingness of so many 80k staff members to engage in both debate and reassurance - at least 5 people from the organization have weighed in. 

It has both helped me understand their decision better and made the organization feel more caring and kind than if they had just d... (read more)

Ozzie Gooen
I broadly agree with this! At the same time, I'd flag that I'm not quite sure how to frame this. If I were a donor to 80k, I'd see this action as less "80k did something nice for the EA community that they themselves didn't benefit from" and more "80k did something that was a good bet in terms of expected value." In some ways, this latter thing can be viewed as more noble, even though it might be seen as less warm.

Basically, I think that traditional understandings of "being thankful" sort of break down when organizations are making intentional investments that optimize for expected value. I'm not at all saying that this means that these posts are less valuable or noble or whatever. Just that I'd hope we could argue that they make sense strictly through the lens of EV optimization, and thus don't need to rely as much on the language of appreciation. (I've been thinking about this with other discussions.)

There's truth there, and I would agree it's better EV to engage too. There could be many different motives: higher EV, damage-control reaction, kindness, community building, nostalgia for those old days when they were global health people too ;).

Regardless, I like to frame things in more human and interpersonal terms and will continue to do so :)

NickLaing
https://forum.effectivealtruism.org/posts/4ZE3pfwDKqRRNRggL/80-000-hours-is-shifting-its-strategic-approach-to-focus yep

I'm visiting Mexico City. Anyone I should meet, or anyone who'd like to meet up?

About me: ex-President of LSE EA, doing work in global health, prediction markets, AIS. https://eshcherbinin.notion.site/me

Counting people is hard. Here are some readings I've come across recently on this, collected in one place for my own edification: 

  1. Oliver Kim's How Much Should We Trust Developing Country GDP? is full of sobering quotes. Here's one: "Hollowed out by years of state neglect, African statistical agencies are now often unable to conduct basic survey and sampling work... [e.g.] population figures [are] extrapolated from censuses that are decades-old". The GDP anecdotes are even more heartbreaking
  2. Have we vastly underestimated the total number of people on E
... (read more)
jablevine
Good links. My favorite example is Papua New Guinea, which doubled their population estimate after a UN Population Fund review. Chapter 1 of Fernand Braudel's The Structures of Everyday Life is a good overview of the problem in historical perspective. 

Wow, that's nuts. Thanks for the pointer.

I think it would be really useful for there to be more public clarification on the relationship between effective altruism and Open Philanthropy. 

My impression is that:
1. OP is the large majority funder of most EA activity. 
2. Many EAs assume that OP is a highly EA organization, including at the top.
3. OP really tries to explicitly not take responsibility for EA and does not claim to themselves be highly EA.
4. EAs somewhat assume that OP leaders are partially accountable to the EA community, but OP leaders would mostly disagree. 
5. From the poi... (read more)


Thanks for writing this, Ozzie! :) I think lots of things about the EA community are confusing for people, especially relationships between organizations. As we are currently redesigning EA.org, it might be helpful for us to add some explanation on that site. (I would be interested to hear if anyone has specific suggestions!)

From my own limited perspective (I work at CEA but don’t personally interact much with OP directly), your impression sounds about right. I guess my own view of OP is that it’s better to think of them as a funder rather than a collab... (read more)

Matrice Jacobine
Presumably https://reflectivealtruism.com/category/billionaire-philanthropy/?
AnonymousTurtle
Many people claim that Elon Musk is an EA person; @Cole Killian has an EA Forum account and mentioned effective altruism on his (now deleted) website; Luke Farritor won the Vesuvius Challenge mentioned in this post (he also allegedly wrote or reposted a tweet mentioning effective altruism, but I can't find any proof and people are skeptical).

I want to see a bargain solver for aligning AI to groups: a technical solution that would allow AI systems to solve the pie-cutting problem for groups and get them the most of what they want, for AI alignment. The best solutions I've seen for maximizing long-run value involve using a bargain solver to decide what ASI does, which preserves the richness and cardinality of people's value functions and gives everyone as much of what they want as possible, weighted by importance. (See WWOTF Afterwards, the small literature on bargaining-theoretic approaches to... (read more)
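To make the idea concrete, here is a toy sketch of the kind of thing a bargain solver computes. It is a hypothetical illustration (not from WWOTF or the cited literature): the Nash bargaining solution for a two-party pie split, found by brute-force grid search over the split, maximizing the product of the parties' utility gains over a disagreement point of zero. The utility functions are made up.

```python
# Toy sketch of the Nash bargaining solution for a two-party pie split.
# Utility functions and the disagreement point (0 for both) are made up.

def u_a(x):
    """Party A's utility: linear in its share x of the pie."""
    return x

def u_b(x):
    """Party B's utility: concave in its share (1 - x) of the pie."""
    return (1 - x) ** 0.5

def nash_split(steps=100_000):
    """Grid-search the split x that maximizes the Nash product u_a * u_b."""
    best_x, best_prod = 0.0, -1.0
    for i in range(steps + 1):
        x = i / steps
        prod = u_a(x) * u_b(x)  # disagreement utilities are 0
        if prod > best_prod:
            best_x, best_prod = x, prod
    return best_x

print(f"A gets {nash_split():.3f} of the pie")  # analytic optimum is 2/3
```

The same product-of-gains objective generalizes to many parties with weights, which is roughly what a bargain solver over richer value functions would need to do at scale (with something smarter than grid search).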

Parker_Whitfill
Is the alignment motivation distinct from just using AI to solve general bargaining problems? 

I don't know! It's possible that you can just solve a bargain and then align AI to that, like you can align AI to citizens assemblies. I want to be pitched.

I want to see more discussion on how EA can better diversify and have strategically-chosen distance from OP/GV.

One reason is that it seems like multiple people at OP/GV have basically said that they want this (or at least, many of the key aspects of this). 

A big challenge is that it seems very awkward for someone to talk and work on this issue, if one is employed under the OP/GV umbrella. This is a pretty clear conflict of interest. CEA is currently the main organization for "EA", but I believe CEA is majority funded by OP, with several other clear st... (read more)


Yeah I agree that funding diversification is a big challenge for EA, and I agree that OP/GV also want more funders in this space. In the last MCF, which is run by CEA, the two main themes were brand and funding, which are two of CEA’s internal priorities. (Though note that in the past year we were more focused on hiring to set strong foundations for ops/systems within CEA.) Not to say that CEA has this covered though — I'd be happy to see more work in this space overall!

Personally, I worry that funding diversification is a bit downstream of impro... (read more)

sapphire
I think it's better to start something new. Reform is hard, but no one is going to stop you from making a new charity. The EA brand isn't in the best shape. IMO the "new thing" can take money from individual EAs but shouldn't accept anything connected to OpenPhil/CEA/Dustin/etc. If you start new, you can start with a better culture.
huw
AIM seems to be doing this quite well in the GHW/AW spaces, but lacks the literal openness of the EA community-as-idea (for better or worse)

Over the years I've written some posts that are relevant to this week's debate topic. I collected and summarized some of them below:

"Disappointing Futures" Might Be As Important As Existential Risks

The best possible future is much better than a "normal" future. Even if we avert extinction, we might still miss out on >99% of the potential of the future.

Is Preventing Human Extinction Good?

A list of reasons why a human-controlled future might be net positive or negative. Overall I expect it to be net positive.

On Values Spreading

Hard to summarize but this p... (read more)

It seems like "what can we actually do to make the future better (if we have a future)?" is a question that keeps on coming up for people in the debate week.

I've thought about some things related to this, and thought it might be worth pulling some of those threads together (with apologies for leaving it kind of abstract). Roughly speaking, I think that:

... (read more)
huw

The World Happiness Report 2025 is out!

Finland leads the world in happiness for the eighth year in a row, with Finns reporting an average score of 7.736 (out of 10) when asked to evaluate their lives.

Costa Rica (6th) and Mexico (10th) both enter the top 10 for the first time, while continued upward trends for countries such as Lithuania (16th), Slovenia (19th) and Czechia (20th) underline the convergence of happiness levels between Eastern, Central and Western Europe.

The United States (24th) falls to its lowest-ever position, with the United Kingdom (23rd)

... (read more)

Clueless, although there are bound to be outliers and exceptions even if we don't understand why.

huw
Here is their plot over time, from the Chapter 2 Appendix. I think these are the raw per-year scores, not the averages. I find this really baffling.

It's probably not political; the Modi government took power in 2014 and only lost its absolute majority in late 2024. The effects of COVID seem to be varied; India did relatively well in 2020 but got obliterated by the Delta variant in 2021. Equally, GDP per capita steadily increased over this time, barring a dip in 2020. Population has steadily increased, and growth has steadily decreased.

India has long had a larger residual value than others in the WHR's happiness model; they're much less happy than the model might predict. Without access to the raw data, it's hard to say if Gallup's methodology has changed over this time; India is a huge and varied country, and it's hard to tell if Gallup maintained a similar sample over time.
Mo Putera
Thanks for digging up that plot; I'd been looking for annual data instead of 3-year rolling averages. Here's what the WHR says about its methodology, which seems relevant. That Gallup website doesn't say if they've changed their methodology over time; that said, they seem to try their best to maintain a similar sample over time, e.g. ... I remain as baffled as you are.

Sharing https://earec.net, semantic search for the EA + rationality ecosystem. Not fully up to date, sadly (doesn't have the last month or so of content). The current version is basically a minimal viable product! 

On the results page there is also an option to see EA Forum-only results, which allows you to sort by a weighted combination of karma and semantic similarity, thanks to the API!

Final feature to note is that there's an option to have gpt-4o-mini "manually" read through the summary of each article on the current screen of results, which will give... (read more)
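For the curious, a weighted karma-plus-similarity ranking like the one described can be sketched as follows. This is a hypothetical illustration, not earec.net's actual code; the weight, the karma normalization, and the function names are all made up:

```python
# Hypothetical sketch: rank search results by a weighted combination of
# (normalized) karma and cosine similarity to the query embedding.
import math

def cosine(a, b):
    """Cosine similarity between two vectors (assumes nonzero norms)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank(results, query_vec, karma_weight=0.3):
    """results: list of (title, karma, embedding); returns titles, best first."""
    max_karma = max(k for _, k, _ in results) or 1  # avoid divide-by-zero
    scored = []
    for title, karma, vec in results:
        score = (karma_weight * (karma / max_karma)
                 + (1 - karma_weight) * cosine(query_vec, vec))
        scored.append((score, title))
    return [t for _, t in sorted(scored, reverse=True)]
```

With `karma_weight=0.3`, a highly relevant low-karma post can still outrank a popular but off-topic one; tuning that weight is the interesting design choice.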

If you could set a hiring manager a work task for an hour or two, what would you ask them to do? (In this scenario, you're the one applying for a job with them.)

Have the past two months seen more harm done (edit: by the new administration) than the totality of good EA has done over its existence?

JWS 🔸
I don't really get the framing of this question. I suspect that, for any increment of time during EA's existence, there would have been more 'harm' done in the rest of the world during that time. EA simply isn't big enough to counteract the moral actions of the rest of the world. Wild animals suffer horribly, people constantly die of preventable diseases, and formal wars and violent struggles affect the lives of millions. The sheer scale of the world outweighs EA many, many times over. So I suspect you're making a more direct comparison to Musk/DOGE/PEPFAR? But again, I feel like anyone wielding the awesome executive power of the United States Government should expect to have a larger impact on the world than EA.

True, I was vague and should have specified the new administration. 

If people do think that this is true, as it seems obvious to you, then perhaps EA should have allocated resources differently?

I realized that the concept of utility as a uniform, singular value is pretty off-putting to me. I consider myself someone who is inherently aesthetic and needs to place myself in the broader context of society, style, and so on. I require a lot of experiences — in some way, I need more than just happiness to reach a state of fulfillment. I need the aesthetic experience of beauty, the experience of calmness, the anxiety of looking for answers, the joy of building and designing.

The richness of everyday experience might be reducible to two dimensions, positive and negative feelings, but this really doesn't capture what a fulfilling human life is.

You might appreciate Ozy Brennan's writeup on capabilitarianism. Contrasting with most flavors of utilitarianism: 

Utilitarians maximize “utility,” which is pleasure or happiness or preference satisfaction or some more complicated thing. But all our ways of measuring utility are really quite bad. Some people use self-reported life satisfaction or happiness, but these metrics often fail to match up with common-sense notions about what makes people better off. GiveWell tends to use lives saved and increased consumption, which are fine as far as they go,

... (read more)

A good friend turned me onto The Telepathy Tapes. It presents some pretty compelling evidence that people who are neurodivergent can more easily tap into an underlying universal Consciousness. I suspect Buddhists and other Enlightened folks who spend the time and effort quieting their mind and letting go of ego and dualism can also. I'm curious what others in EA (self-identified rationalists for example) make of this…
