Hi! I'm a long-time effective altruist (14+ years) and utilitarian/utilitarian-adjacent. This is a sweet and earnest post - you're clearly bright and I admire your dedication at such a young age. You're right to recognize that burnout isn't utilitarian, but I worry that your "donate everything / camper van" framing is premature and probably wrong.
At age 14/15, the highest-EV move is almost always building optionality, not making binding commitments to extreme frugality. You're right to maximize income, but I think you're thinking too much about this in ter...
Hi. Thanks for writing this. I find electoral reform to be a genuinely interesting cause area, and I appreciate the effort to apply EA frameworks to it. I have a few concerns with the framing and some factual details:
On neglectedness: The claim that this is "the most neglected intervention in EA" doesn't match the track record. The Center for Election Science has received over $2.4M from Open Philanthropy, $100K from EA Funds, and $40K+ from SFF. 80,000 Hours has a problem profile on voting reform calling it a "potential highest priority area" and did a fu...
See also "A Model Estimating the Value of Research Influencing Funders", which comes to a similar conclusion.
You might like "A Model Estimating the Value of Research Influencing Funders", which makes a similar point, but quantitatively.
Hi David - I work a lot on semiconductor/chip export policy, so I think it's very important to get the strategy right here.
My biggest issue is that "short vs. long" timelines is not a binary. I agree that under longer timelines, say post-2035, China likely can significantly catch up on chip manufacturing. (Seems much less likely pre-2035.) But I think the logic for controls matters a great deal for 2025-2035 timelines and might still create a larger strategic advantage post-2035.
Who has the chips still matters, since it determines whether the country has enough comput...
Congrats! I also thought it was great.
Sorry for the slightly off-topic question but I noticed EAG London 2025 talks are uploaded to YouTube but I didn't see any EAG Bay Area 2025 talks. Do you know when those will go up?
If you're considering a career in AI policy, now is an especially good time to start applying widely, as there's a lot of hiring going on right now. In my Substack I documented over a dozen different opportunities that I think are very promising.
Thank you for sharing your perspective and I'm sorry this has been frustrating for you and people you know. I deeply appreciate your commitment and perseverance.
I hope to share a bit of perspective from my side of things as a hiring manager:
Why aren’t orgs leaning harder on shared talent pools (e.g. HIP’s database) to bypass public rounds? HIP is currently running an open search.
It's very difficult to run an open search for all conceivable jobs and have the best fit for all of them. And even if you do have a list of the top candidates ...
At least in my own normative thought, I don't just wonder about what meets my standards. [...] I think the most important disagreement of all is over which standards are really warranted.
Really warranted by what? I think I'm an illusionist about this in particular as I don't even know what we could be reasonably disagreeing over.
For a disagreement about facts (is this blue?), we can argue about actual blueness (measurable) or we can argue about epistemics (which strategies most reliably predict the world?) and meta-epistemics (which strategies most reli...
You're right that I need to bite the bullet on epistemic norms too and I do think that's a highly effective reply. But at the end of the day, yes, I think "reasonable" in epistemology is also implicitly goal-relative in a meta-ethical sense - it means "in order to have beliefs that accurately track reality." The difference is that this goal is so widely shared across so many different value systems, and so deeply embedded in the concept of belief itself, that it feels categorical.
You say I've "replaced all the important moral questions wi...
You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I'd argue this actually supports my view rather than undermining it.
The key difference: epistemic norms have a built-in goal - accurate representation of reality. When we ask "should I expect emeralds to be green or grue?" we're implicitly asking "in order to have beliefs that accurately track reality, what should I expect?" The standard is baked into the enterprise of belief formation itself.
But moral norms lack thi...
Why couldn't someone disagree with you about the purpose of belief-formation: "sure, truth-seeking feels obviously correct to you, but that's just because [some story]... not because we've discovered some goal-independent truth."
Further, part of my point with induction is that merely aiming at truth doesn't settle the hard questions of epistemology (any more than aiming at the good settles the hard questions of axiology).
To see this: suppose that, oddly enough, the grue-speakers turn out to be right that all new emeralds discovered after 2030 are observed ...
Thanks!
I think all reasons are hypothetical, but some hypotheticals (like "if you want to avoid unnecessary suffering...") are so deeply embedded in human psychology that they feel categorical. This explains our moral intuitions without mysterious metaphysical facts.
The concentration camp guard example actually supports my view - we think the guard shouldn't follow professional norms precisely because we're applying a different value system (human welfare over rule-following). There's no view from nowhere; there's just the fact that (luckily) most of us share similar core values.
Do you think there's an epistemic fact of the matter as to what beliefs about the future are most reasonable and likely to be true given the past? (E.g., whether we should expect future emeralds to be green or grue?) Is probability end-relational too? Objective norms for inductive reasoning don't seem any less metaphysically mysterious than objective norms for practical reasoning.
One could just debunk all philosophical beliefs as mere "deeply embedded... intuitions" so as to avoid "mysterious metaphysical facts". But that then leaves you committed to think...
You were negative toward the idea of hypothetical imperatives elsewhere but I don't see how you get around the need for them.
You say epistemic and moral obligations work "in the same way," but they don't. Yes, we have epistemic obligations to believe true things... in order to have accurate beliefs about reality. That's a specific goal. But you can't just assert "some things are good and worth desiring" without specifying... good according to what standard? The existence of epistemic standards doesn't prove there's One True Moral Standard any more than the...
"Nihilism" sounds bad but I think it's smuggling in connotations I don't endorse.
I'm far from a professional philosopher but I don't see how you could possibly make substantive claims about desirability from a pure meta-ethical perspective. But you definitely can make substantive claims about desirability from a social perspective and personal perspective. The reason we don't debate racist normative advice is because we're not racists. I don't see any other way to determine this.
Distinguish how we determine something from what we are determining.
There's a trivial sense in which all thought is "subjective". Even science ultimately comes down to personal perspectives on what you perceive as the result of an experiment, and how you think the data should be interpreted (as supporting some or another more general theory). But it would be odd to conclude from this that our scientific verdicts are just claims about how the world appears to us, or what's reasonable to conclude relative to certain stipulated ancillary assumptions. Commonse...
Morality is Objective
People keep forgetting that meta-ethics was solved back in 2013.
fwiw, I think the view you discuss there is really just a terminological variant on nihilism:
...The key thing to understand about hypothetical imperatives, thus understood, is that they describe relations of normative inheritance. “If you want X, you should do Y,” conveys that given that X is worth pursuing, Y will be too. But is X worth pursuing? That crucial question is left unanswered. A view on which there are only hypothetical imperatives is thus a form of normative nihilism—no more productive than an irrigation system without any liquid to flow through
I recently made a forecast based on the METR paper with median 2030 timelines and much less probability on 2027 (<10%). I think this forecast of mine is vulnerable to far fewer of titotal's critiques, but still vulnerable to some (especially not having sufficient uncertainty around the type of curve to fit).
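To make the curve-fitting point concrete, here's a minimal sketch with made-up numbers (not my actual model, and not the real METR data) showing how much the choice of curve family alone can move the extrapolation:

```python
import numpy as np

# Hypothetical data only: years since 2019 vs. task horizon in human-minutes.
t = np.array([0.0, 1.5, 3.0, 4.5, 5.5])
log_horizon = np.log(np.array([0.1, 1.0, 8.0, 30.0, 120.0]))

# Exponential growth is linear in log space; a super-exponential trend
# (shrinking doubling times) can be modeled as quadratic in log space.
for name, degree in [("exponential", 1), ("super-exponential", 2)]:
    coeffs = np.polyfit(t, log_horizon, degree)
    # Extrapolate to 2030 (t = 11); the two fits diverge wildly this far out.
    horizon_2030 = np.exp(np.polyval(coeffs, 11.0))
    print(f"{name}: implied 2030 horizon ≈ {horizon_2030:,.0f} minutes")
```

Both fits pass close to the same historical points, but they imply very different 2030 horizons, which is why a forecast that conditions on a single curve shape understates the true uncertainty.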
...What do the superforecasters say? Well, the most comprehensive effort to ascertain and influence superforecaster opinions on AI risk was the Forecasting Research Institute’s Roots of Disagreement Study.[2] In this study, they found that nearly all of the superforecasters fell into the “AI skeptic” category, with an average P(doom) of just 0.12%. If you’re tempted to say that their number is only so low because they’re ignorant or haven’t taken the time to fully understand the arguments for AI risk, then you’d be wrong; the 0.12% figure was obtained after
I just saw that Season 3, Episode 9 of Leverage: Redemption ("The Poltergeist Job"), which came out on May 29, 2025, has an unfortunately very unflattering portrayal of "effective altruism".
Matt claims he's all about effective altruism. That it's actually helpful for Futurilogic to rake in billions so that there's more money to give back to the world. They're about to launch Galactica. That's free global Internet.
[...] But about 50% of the investments in Galactica are from anonymous crypto, so we all know what that means.
The main antagonist and CEO of Fu...
Here's my summary of the recommendations:
If you've liked my writing in the past, I wanted to share that I've started a Substack: https://peterwildeford.substack.com/
Ever wanted a top forecaster to help you navigate the news? Want to know the latest in AI? I'm doing all that in my Substack -- forecast-driven analysis about AI, national security, innovation, and emerging technology!
Something that I personally would find super valuable is to see you work through a forecasting problem "live" (in text). Take an AI question that you would like to forecast, and then describe how you actually go about making that forecast. The information you seek out, how you analyze it, and especially how you make it quantitative. That would
I'm very uncertain about whether AI really is >10x neglected animals and I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue and I could definitely imagine changing my mind on this over the next year. This is why I framed my comment the way I did, hopefully making it clear that donating to neglected animal work is very much an answer I endorse.
I also agree it's very hard to know whether AI organizations will have an overall positive or negative (or neutral) impact. I think there are higher-level strategic issu...
Since it looks like you're looking for an opinion, here's mine:
To start, while I deeply respect GiveWell's work, in my personal opinion I still find it hard to believe that any GiveWell top charity is worth donating to if you're planning to do the typical EA project of maximizing the value of your donations in a scope-sensitive and impartial way. ...Additionally, I don't think other x-risks matter nearly as much as AI risk work (though admittedly a lot of biorisk stuff is now focused on AI-bio intersections).
Instead, I think the main difficult judgement ca...
Thanks for the comment, I think this is very astute.
~
Recently it seems like the community on the EA Forum has shifted a bit to favor animal welfare. Or maybe it's just that the AI safety people have migrated to other blogs and organizations.
I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.
I don't think that all AI safety orgs are actually fully funded...
I basically endorse this post, as well as the use of the tools created by Rethink Priorities that collectively point to quite strong but not overwhelming confidence in the marginal value of farmed animal welfare.
I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.
I get a general vibe that in EA (and probably the world at large), being a "deep thinking researcher"-type is way higher status than being an "operations/management/doer"-type. Yet the latter is also very high impact work, often higher impact than research (especially on the margin).
I see many EAs erroneously try to go into research and stick to research despite having very ...
For operations roles, and focusing on impact (rather than status), I notice that your view contrasts markedly with @abrahamrowe’s in his recent ‘Reflections on a decade of trying to have an impact’ post:
...Impact Through Operations
- I don’t really think my ops work is particularly impactful, because I think ops staff are relatively easy to hire for compared to other roles. However I have spent a lot of my time in EA doing ops work.
- I was RP’s COO for 4 years, overseeing its non-research work (fiscal sponsorship, finance, HR, communications, fundraising, etc
Do you have any ideas or suggestions (even rough thoughts) regarding how to make this change, or for interventions that would nudge peoples' behavior?
Off the top of my head: A subsidized bootcamp on core operations skills? Getting more EAG speakers/sessions focused on operations-type topics? Various respected and well-known EAs publicly stating that Operations is important and valuable? A syllabus (readings, MOOCs, tutorials) that people can work their way through independently?
One question I often grapple with is the true benefit of having EAs fill certain roles, particularly compared to non-EAs. It would be valuable to see an analysis—perhaps there’s something like this on 80,000 Hours—of the types of roles where having an EA as opposed to a non-EA would significantly increase counterfactual impact. If an EA doesn’t outperform the counterfactual non-EA hire, their impact is neutralized. This is why I believe that earning to give should be a strong default for many EAs. If they choose a different path, they should consider wheth...
I'm not sure I understand well enough what these questions are looking for to be able to answer them.
Firstly, I don't think "the movement" is centralized enough to explicitly acknowledge things as a whole - that may be a bad expectation. I think some individual people and organizations have done some reflection (see here and here for prominent examples), though I would agree that there likely should be more.
Secondly, it definitely seems very wrong to me to say that EA has had no new ideas in the past two years. Back in 2022 the main answer to "how...
It's very difficult to overstate how much EA has changed over the past two years.
For context, two years ago was July 30, 2022. It was 17 days prior to the "What We Owe the Future" book launch. It was also about three months before the FTX fraud was discovered (though at that time it was already massively underway in secret) and the ensuing bankruptcy. We were still at the height of the Big Money Big Longtermism era.
It was also about eight months before the FLI Pause Letter, which I think coincided with roughly when the US and UK governments took very serious and inten...
This looks pretty much right, as a description of how EA has responded tactically to important events and vibe shifts. Nevertheless it doesn't answer OP's questions, which I'll repeat:
Your reply is not about new ideas, or the movement acknowledging it was wrong (except about Bankman-Fried personally, which doesn't seem like what OP is asking about), or new organizatio...
I agree with all this advice. I also want to emphasize that I think researchers ought to spend more time talking to people relevant to their work.
Once you’ve identified your target audience, spend a bunch of time talking to them at the beginning, middle, and end of the project. At the beginning, learn and take into account their constraints; in the middle, refine your ideas; and at the end, actually try to get your research into action.
I think it’s not crazy to spend half of your time on the research project talking.
Thanks Linch. I appreciate the chance to step back here. So I want to apologize to @Austin and @Rachel Weinberg and @Saul Munn if I stressed them out with my comments. (Tagging means they'll see it, right?)
I want to be very clear that while I disagree with some of the choices made, I have absolutely no ill will towards them or any other Manifest organizer, I very much want Manifold and Manifest to succeed, and I very much respect their rights to have their conference the way they want. If I see any of them I will be very warm and friendly and there's reall...
BTW I want to add -- to all those who champion Hanania because they think free speech should mean that anyone should be able to be platformed without criticism or condemnation, Hanania is no ally to those principles:
Here's Hanania:
...I don’t feel particularly oppressed by leftists. They give me a lot more free speech than I would give them if the tables were turned. If I owned Twitter, I wouldn’t let feminists, trans activists, or socialists post. Why should I? They’re wrong about everything and bad for society. Twitter [pre-Musk] is a company that is overw
Has anyone said he should be platformed without criticism? The point of contention seems to be that many people think he shouldn't have been a speaker at all and that everyone who interacts with him is tainted. That is not a subtle difference.
As HL Mencken famously said, “The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.”
If principles only apply to the people that uphold them, then they're not principles: they're just another word for tribalism. Lovely conflict theory you've got there.
Yeah, because there's such a geographically clustered dichotomy in views between the London set and the SF set, it seems pretty important to me to give it 24 hours.
Also, just a general caution: we should know that this poll will mainly be seen by the most active and most engaged people, who may not be representative enough to generalize from.
I reached out to Hanania and this is what he said:
"“These people” as in criminals and those who are apologists for crimes. A coalition of bad people who together destroy cities. Yes, I know how it looks. The Penny arrest made me emotional, and so it was an unthinking tweet in the moment."
He also says it's quoted in the Blocked and Reported podcast episode, but it's behind a paywall and I can't for the life of me get Substack to accept my card, so I can't double-check. Would appreciate it if anybody figured out how to do that and could verify.
I think gene...
To be clear, I haven't cut ties with anyone other than Manifold (and Hanania). Manifold is a very voluntary use of my non-professional time and I found the community to be exhausting. I have a right to decline to participate there, just as much as you have a right to participate there. There's nothing controlling about this.
Hi - thanks for this comment. As someone working on export control policy, let me give you my perspective.
Firstly, an important precondition for a cooperative pause is leverage. You don't get China to agree to a mutual pause by first giving away your main strategic advantage. You get them to agree by making the alternative "a race they're losing", which is worse than cooperation. Export controls are thus part of what creates the conditions for being able to pause. If you equalize compute access first, China has no reason to agree to a pause because t...