I agree with the first half of your comment. Do you think that the EA community (or the EA Forum) should solely focus on cause prioritisation though?
I feel excited about scope sensitive decision-making and evidence-based prioritisation being used at various levels of abstraction/concreteness, e.g. cause prioritisation, intervention prioritisation, organisation prioritisation.
I welcome/encourage this being done carefully and well (and discussed on the Forum) even if I disagree with someone's prioritisation at another level of abstraction, and I don't ...
Thank you for sharing Allegra! Welcome to the Forum, and congrats on writing and sharing this.
I think this is well written and engaging! I agree it seems a real shame for these people and for the world that the existing services have been cut. And I do think that your bullet point list suggests it's worth considering/evaluating.
I think a stronger case would delve into more detail on these claims, which aren't currently substantiated: "Sudan ranks extraordinarily high on scale, neglect, and tractability", and "Emergency networks, women's coalitions, and ind...
Jamie, thank you so much for this thoughtful and constructive feedback! I really appreciate you taking the time to engage with this so carefully.
You're absolutely right that these claims need more substantiation. I made a deliberate choice to keep the initial post relatively brief to give people baseline knowledge and invite engagement rather than overwhelming readers with data upfront. But I'm glad you're pushing me to go deeper.
Let me provide more detail on each dimension, while being honest about where the evidence is strong and where it's limited:
On Sc...
I built an interactive chicken welfare experience - try it and let me know what you think
Ever wondered what "cage-free" actually means versus "free-range"? I just launched A Chicken's World - a 5-minute interactive game where you experience four different farming systems from an egg-laying hen's perspective, then guess which one you just lived through and how common that system is.
Reading "67 square inches per hen" is one thing, but actually trying to move around in that space is another. My hope is that the interactive format makes welfare conditions visc...
Very cool! No reply needed/expected, just sharing a few misc reflections:
I didn't follow the reason for excluding Dwarkesh; you already have quality-adjustment multipliers, so you could just include him and apply the adjustments. (I'd be interested to see this, since I think it's relevant: he has an influential audience and solid revenue, which I think will lead to high cost-effectiveness in your model.)
In the other direction: I'm not sure if your goal was to compare most cost-effective opportunities among current/established YouTubers, but if you're tryin...
The brand new episode with Kyle Fish from Anthropic (released since you wrote this comment) discusses some reasons why AI Safety and AI welfare efforts might conflict or be mutually beneficial, if you're interested!
Make your high-impact career pivot: online bootcamp (apply by Sept 14)
Many accomplished professionals want to make a bigger difference with their career, but don’t always know how to turn their skills into real-world impact.
We (the Centre for Effective Altruism) have just launched a new, free, 4-day online career bootcamp designed to help with that.
How it works:
Thanks Karen! Interested if you have specific things in mind for implications of the economic angle? I can certainly see it playing into some of the "Predict how AI will change things, and try to make that go well for animals" predictions, or leading to more of an emphasis on "Shift towards all-inclusive AI safety".
Cool post!
Misc thought that this seems analogous to some of the points/ideas/arguments in https://www.forethought.org/research/ai-tools-for-existential-security (albeit for different tech and to primarily address different problems)
I think this is a great explanation of an important dynamic and opportunity. I feel confident that doing the sorts of things explained in this post has benefited my career a lot.
Appreciated the footnote about informal hiring having tradeoffs; it's not clearly good that hiring often operates this way. But the good news is that "just start really trying to do useful/impactful things" is not only helpful for the world, but helpful for people's high-impact job search. A win-win!
This is cool! Cost-effectiveness estimates would be great, but given that they're likely quite cheap per individual, my guess is that they work out as pretty cost-effective, as long as we think there is a real (average) long-term reduction in animal product consumption, and we don't see the small animal replacement problem rear its head?
(E.g. IIRC one problem is just that we often have to rely on self-report and it's hard to rigorously assess what changes people really make, if any.)
On that note, I'd be interested if you have an impression of the quality of the studies, and whether you indeed expect this kind of effect?
(Also, could you explain what you mean by "retention rate"? Seems pretty important.)
Exciting! Great if you can connect and support impact-focused freelancers to achieve their goals.
As someone who may be looking for freelance support in the next few weeks/months, I'm wondering what I gain by posting or using the directory here, rather than other (not-altruism-focused) platforms like Fiverr, Upwork, etc?
Productive conference meetup format for 5-15 people in 30-60 minutes
I ran an impromptu meetup at a conference this weekend, where 2 of the ~8 attendees told me that they found this an unusually useful/productive format and encouraged me to share it as an EA Forum shortform. So here I am, obliging them:
Good news if true! Thanks for sharing.
Some other guesses/hypotheses of things that may contribute to positive mental health:
(Ironically, I suppose the title -- "We don't have evidence that the best charities are over 1000x more cost effective than the average" -- is also an overly confident claim, where a question might have been better, unless the original poster had carried out an exhaustive search for relevant evidence)
I agree with other comments that the 80k article is the place to go.
But I also want to specifically praise and thank the original poster for (1) noticing an important seeming empirical claim being bandied around (2) noticing that the evidence being used seemed insufficient (3) sharing that potentially important discovery.
(For what it's worth, before the 80k article, I also worried that people in the EA community were excessively confident in similar claims.)
Also, even if charities differ significantly on a specific, narrow metric, they may differ less subs...
I don't think our capacity has been as stretched as LTFF. We get fewer applications.
I'd guess the median application wait time is around 4 weeks.
It feels somewhat uninformative to share a mean, because sometimes there are substantial delays due to:
I haven't looked these things up though; let me know if you're keen for a more precise answer.
As for applicant questions: likewise, I pers...
Sounds exciting!
It’s really about exploring what kinds of hurdles might come up in this 80/20 approach — for example, getting a clearer picture of where enough high-quality videos already exist and where important content is still missing. But also more generally: what else might turn out to be more complicated than expected? The other key question is: do people like and actually use the platform?
Makes sense to me. But this one...
...And ideally, does it help move people from ambition to action — for example, by inspiring them to donate, explore new career pat
Sounds very cool! I think video courses is a great idea, since I expect that a lot of people (myself included) at least sometimes find it a lot easier and more fun to watch videos than to read things.
Quite intrigued which videos you intend to use; when I've created EA-relevant online courses in the past, a dearth of high-quality, relevant videos has been a bottleneck. I ended up creating my own content/videos. (There are sometimes things like EA Global talks, but they often aren't sufficiently introductory and broad, e.g. they'll be about a specific intervention, org, or argument rather than about a cause area or topic.)
What uncertainties are you testing with the pilot? Is it mainly about demand/sign ups/views?
This was very cool. Extremely creative! And emotive. 168k views is impressive. Thanks for putting the work into this in your spare time!
I'd be so curious to know if (m)any people donated to ACE as a result! (You could maybe ask ACE if they had recent donations citing your channel name or 'YouTube video' or some such as how they heard.) Also wondering if you got (m)any new Patreon subscribers as a result.
Core concepts: Shared identity formation, in-group solidarity, boundary maintenance
Key findings:
Evidence strength: Strong. Multiple longitudinal studies across diverse movements consistently show correlation between ident...
Seems important, thanks for raising! Your first suggestion seems very plausible to me, your second seems somewhat plausible but less likely/important.
My first reaction is that animal advocacy orgs should consider optimising for community building and mobilisation (as an interim goal). My impression (which may be wrong) from my involvement with the movement was roughly that orgs were usually optimising for mobilisation around specific objectives rather than actually trying to set up a long-term community and strong activist base. I expect a simple mindset s...
Since you requested responses: I agree with something like: 'conditional upon AI killing us all and then going on to do things that have zero moral (dis)value, it then matters little who was most responsible for that having happened'. But this seems like an odd framing to me:
This is an important question. Thank you for raising it, and highlighting some interesting considerations in your original post!
Rather than attempt to answer comprehensively, I want to highlight a particular aspect that I've been thinking about recently: risks from ideological fanaticism. My colleague David Althaus is leading on an extensive post/report on this topic which we hope to post soon, but to summarise a few of the risks we're worried about (which are not solely reducible to authoritarianism):
I think a really important question in addressing this is something like: does the USA remain 'unfanatical' if the shackles are taken off powerful people? This is where I think the analysis of the USA goes a little bit wrong - we need to think about what the scenario looks like if it is possible for power to be much, much more extremely concentrated than it is now. Certainly, in such a scenario, it's not obvious that it will be true post-AGI that "even a polarizing leader cannot enforce a singular ideology or eliminate opposition".
There are some related resources and discussion in my quick take here
(Apologies if this is too late!)
From a quick glance these seem like some really cool and promising outcomes! I'd have been interested to know more detail about the "Intended Actions of Respondents" (e.g. specific promising things people are doing as a result) and what the costs were after accounting for organiser remuneration as well.
I was pretty surprised by how many accepted attendees you had for such low online advertising costs. That suggests there's some real low-hanging fruit of potentially interested people. I'd also be interested in whether (m)any of the people who applied and attended through this method ended up being strong participants in the event and/or taking follow-up actions.
Apologies, missed this comment!
EA outreach is still in-scope, it just wasn't an area we highlighted in this post. That's partly because we tend to get quite a few applications of this sort anyway. (I'm not sure but my vague impression is that the average quality of such applications is lower, too.)
Hey! Does Canopy Retreats still exist in any form? I see the website is down but not sure if that's because it migrated, got absorbed into a larger org, or everyone just moved onto other things. Thanks!
(In the meantime, for anyone else coming back across this post, I stumbled across "Skylark": "We plan and facilitate transformative events. We help you shape a bold vision for your community, manage every operational detail, and lead workshops". Seemingly run by EAs with testimonials all by EAs.)
I really appreciate you reasoning independently, working through to try to overcome scope insensitivity (and communicate clearly/graphically to others!), and make important prioritisation decisions that affect how you can best help others. Interesting to see your thought process; thanks for sharing!
Great idea! Seems good to try out and I imagine that a bunch of the infrastructure and expertise CEA has already built up will easily transfer over.
I'm intrigued about the summary costs (total, per attendee average); $, CEA staff time, local organiser staff time etc. I think the linked posts at the top contained some but not all of this. Intrigued to hear how it goes going forward.
In case you and @David Michelson haven't seen them, I and some colleagues did a bunch of research into social movement case studies a few years ago.
https://forum.effectivealtruism.org/posts/ATpxEPwCQWQAFf4XX/key-lessons-from-social-movement-history
Not to suggest that more wouldn't be useful, just an FYI in case you didn't know and would find them helpful!
Thank you for this post—it looks very interesting. I've given it a quick skim but wanted to check in on a concern/critique I have before engaging more closely with the recommendations.
Most of the post seems dedicated to explaining why the Fabians were so successful.
However, I’m not yet convinced that they actually caused meaningful change. You begin by listing some of their goals and then highlight how many of those goals came to fruition, but that doesn’t establish their causal role in making those changes happen.
It looks like you provide two main for...
Random idea on the random idea: such an event (or indeed similar social opportunities for ETGers) could charge for participation and aim to fully cover costs, or even make a profit that gets donated.
EtGers have money they want to give away, and this is clearly a service that addresses a need they have --> they should be willing to pay for it.
Also, if the service just focused on providing EtGers with fun, social connections, and a great community rather than 'overfitting' to what seems directly relevant to impact, I think it mi...
Thanks for the useful post Marcus!
If people reading might be a good fit for running a project helping to improve funding diversification, I encourage them to apply to the EA Infrastructure Fund. We are keen to receive applications that help with this (and aren't currently very funding constrained ourselves).
As for ideas for projects; Marcus lists some above, I list some on my post, and you might have ideas of your own.
I didn't write that wording originally (I just copied it over from this post), so I can't speak exactly to their original thinking.
But I think the phrasing includes the EA community, it just uses the plural to avoid excluding others.
Some examples that jump to mind:
...I would like to more clearly understand what the canonical "stewards of the EA brand" in CEA and the E
Hi Daniel! I don't have a lot to elaborate on here; I haven't really thought much about the practicalities, I was just flagging that proposals and ideas relating to regranting seem like a plausible way to help with funding diversification.
Also, just FYI, on the specific intervention idea, which could be promising, that would fall in the remit of EA Funds' Animal Welfare Fund (which I do not work at), not the Infrastructure Fund (which I work at). I didn't check with fund managers there if they endorse things I've written here or not.
Based on this information alone, EAIF would likely prefer an application later (e.g. if there is some event affecting the uncertainty that would pass) to avoid us wasting our time.
But I don't think this would particularly affect your chances of application success. And maybe there are good reasons to want to apply sooner?
And I wouldn't leave it too long anyway, since sometimes apps take e.g. 2 months to be approved. Usually less, and very occasionally more.
I think fairly standard EA retreats / fellowships are quite good at this
Maybe. To take cause prio as an example, my impression is that the framing is often a bit more like: 'here are lots of cause areas EAs think are high impact! Also, cause prioritisation might be v important.' (That's basically how I interpret the vibe and emphasis of the EA Handbook / EAVP.) Not so much 'cause prio is really important. Let's actually try and do that and think carefully about how to do this well, without just deferring to existing people's views.'
So there's a direct ^ ve...
Mm they don't necessarily need to be small! (Ofc, big projects often start small, and our funding is more likely to look like early/seed funding in these instances.) E.g. I'm thinking of LessWrong or something like that. A concrete example of a smaller project would be ESPR/SPARC, which have a substantial (albeit not sole) focus on epistemics and rationality, that have had some good evidence of positive effects, e.g. on Open Phil's longtermism survey.
But I do think the impacts might be more diffuse than other grants. E.g. we won't necessarily be able to co...
Thanks! Sorry to hear the epistemics stuff was so frustrating for you and caused you to leave EA.
Yes, very plausible that the example interventions don't really get to the core of the issue -- I didn't spend long creating those and they're more meant to be examples to help spark ideas rather than confident recommendations on the best interventions or some such. Perhaps I should have flagged this in the post.
Re "centralized control and disbursion of funds": I agree that my example ideas in the epistemics section wouldn't help with this much. Would the "fund...
I’ve been working a few hours per week at the Effective Altruism Infrastructure Fund as a Fund Manager since Summer this year.
EA’s reputation is at a bit of a low point. I’ve even heard EA described as the ‘boogeyman’ in certain well-meaning circles. So why do I feel inclined to double down on effective altruism rather than move onto other endeavours? Some shower thoughts:
You highlight a couple of downsides. Far from all of the downsides of course, but none of the advantages either.
I feel a bit sad to read this since I've worked on something related[1] to what you post about for years myself. And a bit confused why you posted this; do you think that you think EAs are underrating these two downsides? (If not, it just feels a bit unnecessarily disparaging to people trying their best to do good in the world.)
Appreciate you highlighting your personal experience though; that's a useful anecdote.
"Targeting of really y
Another consideration I just encountered in a grantmaking decision:
Other decision-makers in EA might be those whose views we are most inclined to defer to or cooperate with. So upon noticing that an opportunity is underfunded in EA specifically but not the world at large, arguably I should update away from wanting to fund it upon considering Open Phil and EA donations specifically, as opposed to donations in the world more broadly. Whereas I think the thrust of your post implies the opposite.
(@Ariel Simnegar 🔸, although again no need to reply. Possibly I'm getti...
Thanks so much to everyone who took the time to play through this and provide such thoughtful feedback! I really appreciate it, and apologies for the delay in implementing these changes.
Here's what I've updated based on your suggestions:
Bug Fixes:
- URL (@BrianTan): Thanks for flagging! I think this should be fixed globally now.
- Perching bug (@Ben Stewart): You can now press P again to exit perching - no more getting stuck!
- Arrow keys (@Sanjay): Fixed - they now work
- Battery cage crowding (Ben): Adjusted the spacing to show the realistic density - you should n