I'm confused by the strong negative reaction to this comment. I guess it's about the CoGi funding, on which it does sound like I was wrong. But it seems to be true that there's no option to directly apply for funding for a new project (NickLaing mentions the GH funding circle, but they completed one round last year and their website doesn't currently imply there will be any more).
I think this helps explain the decline of GHD in the OP - AIM's charity list notwithstanding, no-one in the movement is incentivised to come up with practical ideas in the field.
Last I heard it was something like 10% of their GCR budget.
It's also basically impossible to apply for GHD funding. I recently decided to put my money where my mouth is and get involved in an early-stage GHD project, but there's essentially no EA-aligned funder who's willing to let you approach them.
SFF are exclusively longtermist, the EA GHD fund has (as mentioned) basically shut down, and GiveWell and CoGi don't accept unsolicited applications. So as far as I can see, if you think you have an idea in the GHD space and need funding for it, you basically have to look outside the EA world (someone tell me if I missed something!)
Hey Arepo!
> Last I heard it was something like 10% of their GCR budget.
I don't think that's right — CG gave $400m to GHW in 2025, and to get a sense of what % that might be, Alexander Berger (CEO of CG) shared that overall "Coefficient Giving directed over $1 billion in 2025" in his recent letter.
> It seems like, considering how intelligent and creative our species is, we should expect that, even in very dire conditions, we would be able to re-build civilization.
That shouldn't necessarily be the primary concern. Though it also seems that people who've studied our ability to rebuild civilisation are substantially more pessimistic.
I think the simple answer is that it's become less prioritised by the central orgs (the EA GHD fund is on indefinite hiatus, GHD is a diminishing part of CoGi's budget, 80k moved away from it almost entirely, Rethink seem to have shifted towards animal welfare, CEA seem to have an increasingly longtermist/AI focus, etc). This gives a top-down cultural impetus away from the subject, and just means there's less money in it.
It's also, for better or worse, as an evidence-oriented field, a subject that's harder to have amateur conversations about. I've been con...
I agree that the depth of the evidence conversations doesn't lend itself to amateur discussion on the forum and I also feel like there's not much I have to add to the GHD discussions here because of that.
Don't think it's fair to say it's not prioritised among the orgs. My understanding is that Coefficient Giving still gives huge amounts to GiveWell charities and grants.
> I agree that the average college student encountering EA today should focus on issues related to AI safety
I broadly nodded along to your OP, but strong disagree here. There are tonnes of people working in AI safety, to the extent that it's already hypercompetitive and the marginal value of one more person getting in the long queue for such a job seems very low.
Meanwhile I continue to find the case for AI safety, at least as envisioned by EA doomers, highly speculative. That's not to say it shouldn't get any attention, but there's a far better e...
I'm still highly sceptical of neglectedness as anything but a first-pass heuristic for how prioritisation organisations might spend their early research - firstly because there are so many ways a field can be 'not neglected' and still highly leveraged (e.g. GiveWell and Giving What We Can were only able to have the comparative impact they did because the global health field had been vigorously researched but no-one had systematically done the individual-level prioritisation they did with the research results); secondly because it encourages EA to reject esta...
Sure, so we agree?
Ah, sorry, I misunderstood that as criticism.
Do you think that forecasting like this will hurt the information landscape on average?
I'm a big fan of the development e.g. QRI's process of making tools that make it increasingly easy to translate natural thoughts into more usable forms. In my dream world, if you told me your beliefs it would be in the form of a set of distributions that I could run a monte carlo sim on, having potentially substituted my own opinions if I felt differently confident than you (and maybe beyond that there'...
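As a sketch of what that dream world could look like in practice (all of the distributions, parameter values, and function names below are hypothetical, invented purely for illustration - nothing here comes from QRI or anyone in the thread):

```python
import random

# Hypothetical example: someone's beliefs expressed as distributions
# rather than point estimates. Every parameter here is invented.

def cost_per_attempt():
    # Belief: cost per attempt is lognormal, with a median around $100
    return random.lognormvariate(4.6, 0.5)

def success_probability():
    # Belief: the chance any attempt succeeds is 20-60%, uniformly
    return random.uniform(0.2, 0.6)

def expected_cost_per_success(n=100_000):
    # Monte Carlo: sample both beliefs and combine them, rather than
    # multiplying two point estimates together
    samples = [cost_per_attempt() / success_probability() for _ in range(n)]
    return sum(samples) / n

print(f"expected cost per success: ${expected_cost_per_success():,.0f}")
```

The appeal of the format is that a reader who feels differently confident about one input can swap out that single distribution and rerun the simulation, rather than re-arguing a headline number.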
> I weakly disagree here. I am very much in the "make up statistics and be clear about that" camp.
I'm sympathetic to that camp, but I think it has major epistemic issues that largely go unaddressed:
Thanks for the shout-out :) If you mentally replace the 'multiplanetary' state with 'post-AGI' in this calculator, I do think it models the set of concerns Will's talking about here pretty well.
Thanks Ozzie! I'll definitely try this out if I ever finish my current WIP :)
Questions that come to mind:
> A 10% chance of transformative AI this decade justifies current EA efforts to make AI go well.
Not necessarily. It depends on
a) your credence distribution of TAI after this decade,
b) your estimate of annual risk per year of other catastrophes, and
c) your estimate of the comparative longterm cost of other catastrophes.
I don't think it's unreasonable to think, for example, that
> But the expected value of existential risk reduction is—if not infinite, which I think it clearly is in expectation—extremely massive.
I commented something similar on your blog, but as soon as you allow that one decision is infinite in expectation, you have to allow that all outcomes are, since whatever possibility of infinite value you assign given that action must still be present without it.
> ...If you think the Bostrom number of 10^52 happy people has a .01% chance of being right, then you’ll get 10^48 expected future people if we don’t go extinct, meaning red
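The arithmetic in that quoted passage is easy to verify directly (the population figure and the credence are the quote's illustrative numbers, not mine):

```python
bostrom_people = 10**52   # Bostrom's illustrative count of possible happy future people
credence = 0.0001         # the quoted 0.01% chance of that figure being right

# 10^52 x 10^-4 = 10^48, matching the quote's "10^48 expected future people"
expected_people = bostrom_people * credence
print(f"{expected_people:.0e}")
```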
I agree with Yarrow's anti-'truth-seeking' sentiment here. That phrase seems to primarily serve as an epistemic deflection device indicating 'someone whose views I don't want to take seriously and don't want to justify not taking seriously'.
I agree we shouldn't defer to the CEO of PETA, but CEOs aren't - often by their own admission - subject matter experts so much as people who can move stuff forwards. In my book the set of actual experts is certainly murky, but includes academics, researchers, sometimes forecasters, sometimes technical workers - sometime...
I agree that the OP is too confident/strongly worded, but IMO this
> which is more than enough to justify EA efforts here.
could be dangerously wrong. As long as AI safety consumes resources that might have counterfactually gone to e.g. nuclear disarmament or stronger international relations, it might well be harmful in expectation.
This is doubly true for warlike AI 'safety' strategies like Aschenbrenner's call to intentionally arms-race China, Hendrycks, Schmidt and Wang's call to 'sabotage' countries that cross some ill-defined threshold, and Yudkowsky c...
That makes some sense, but leaves me with questions like
IMO it would help to see a concrete list of MIRI's outputs and budget for the last several years. My understanding is that MIRI has intentionally withheld most of its work from the public eye for fear of infohazards, which might be reasonable for soliciting funding from large private donors but seems like a poor strategy for raising substantial public money, both prudentially and epistemically.
If there are particular projects you think are too dangerous to describe, it would still help to give a sense of what the others were, a cost breakdown for tho...
(Speaking in my capacity as someone who currently works for MIRI)
I think the degree to which we withheld work from the public for fear of accelerating progress toward ASI might be a little overrepresented in the above. We adopted a stance of closed-by-default research years ago for that reason, but that's not why e.g. we don't publish concrete and exhaustive lists of outputs and budget.
We do publish some lists of some outputs, and we do publish some degree of budgetary breakdowns, in some years.
But mainly, we think of ourselves as asking for money fr...
You might want to consider EA Serbia, which I was told in answer to a similar question has a good community, at least big enough to have their own office. I didn't end up going there, so can't comment personally, but it's on the same latitude as northern Italy, so likely to average pretty warm - though it's inland, so 'average' is likely to contain cold winters and very hot summers.
(but in the same thread @Dušan D. Nešić (Dushan) mentioned that air conditioning is ubiquitous)
> Should our EA residential program prioritize structured programming or open-ended residencies?
You can always host structured programs, perhaps on a regular cycle, but doing so to the exclusion of open-ended residencies seems to be giving up much of the counterfactual value the hotel provided. It seems like a strong overcommitment to a concern about AI doom in the next low-single-digit years, which remains (rightly IMO) a niche belief even in the EA world, despite heavy selection within the community for it.
Having said that, to some degree it sounds l...
Thanks for the extensive reply! Thoughts in order:
I would also note that #3 could be much worse than #2 if #3 entails spreading wild animal suffering.
I think this is fair, though if we're not fixing that issue then it seems problematic for any pro-longtermism view, since it implies the ideal outcome is probably destroying the biosphere. Fwiw I also find it hard to imagine humans populating the universe with anything resembling 'wild animals', given the level of control we'd have in such scenarios, and our incentives to exert it. That's not to say we could...
Very helpful, thanks! A couple of thoughts:
It looks like this is driven entirely by the reduction in GiveWell/global health and development funding, and that actually the other fields have been stable or even expanding.
Also, in an ideal world we'd see funding from Longview and Founders Pledge. I also gather there's a new influx of money into the effective animal welfare space from some other funder, though I don't know their name.
Most of these aren't so much well-formed questions, as research/methodological issues I would like to see more focus on:
Thanks! The online courses page describes itself as a collection of 'some of the best courses'. Could you say more about what made you pick these? There are dozens or hundreds of online courses these days (esp on general subjects like data analysis), so the challenge in pursuing them is often a matter of filtering convincingly.
Good luck with this! One minor irritation with the structure of the post is I had to read halfway down to find out which 'nation' it referred to. Suggest editing the title to 'US-wide', so people can see at a glance if it's relevant to them.
I remember him discussing person-affecting views in Reasons and Persons, but IIRC (though it's been a very long time since I read it) he doesn't particularly advocate for them. I use the phrase mainly because of the quoted passage, which appears (again IIRC) in both The Precipice and What We Owe the Future, as well as possibly some of Bostrom's earlier writing.
I think you could equally give Bostrom the title though, for writing to my knowledge the first whole paper on the subject.
Cool to see someone trying to think objectively about this. Inspired by this post, I had a quick look at the scores on the World Happiness Report to compare China to its ethnic cousins, and while there are many reasons to take this with a grain of salt, China does... ok. On 'life evaluation', which appears to be the all-things-considered metric (I didn't read the methodology, correct me if I'm wrong), some key scores:
Taiwan: 6.669
Philippines: 6.107
South Korea: 6.038
Malaysia: 5.955
China: 5.921
Mongolia: 5.833
Indonesia: 5.617
Overall it's ranked 68th of 147 li...
Thanks :)
I don't think I covered any specific relationship between factors in that essay (except those that were formally modelled), where I was mainly trying to lay out a framework that would even allow you to ask the question. This essay is the first time I've spent meaningful effort on trying to answer it.
I think it's probably ok to treat the factors as a priori independent, since ultimately you have to run with your own priors. And for the sake of informing prioritisation decisions, you can decide case by case how much you imagine your counterfactual action changing each factor.
> You don't need a very high credence in e.g. AI x risk for it to be the most likely reason you and your family die
I think this is misleading, especially if you agree with the classic notion of x-risk as excluding events from which recovery is possible. My credence distribution over event fatality rates is heavily weighted towards lower fatality rates, so I would expect far more deaths under the curve between 10% and 99% fatality than between 99% and 100%, and probably more area to the left even under a substantially more even partition of outcomes.
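A minimal sketch of the point, using Beta(1, 4) as a stand-in for a credence distribution concentrated at lower fatality rates (the parameters are my own arbitrary choice, picked only to illustrate the shape of the argument, not anything anyone above proposed):

```python
import random

# Stand-in credence distribution over an event's fatality rate (0 to 1),
# concentrated at lower rates. Beta(1, 4) is an arbitrary illustrative choice.
N = 200_000
samples = [random.betavariate(1, 4) for _ in range(N)]

# Probability mass in the "survivable catastrophe" band vs the
# "near-total extinction" band
mass_10_to_99 = sum(0.10 <= x < 0.99 for x in samples) / N
mass_99_plus = sum(x >= 0.99 for x in samples) / N

# Analytically, P(X >= 0.99) = 0.01^4 = 1e-8 for Beta(1, 4),
# so the sample estimate of the extinction band is ~0
print(f"P(10% <= fatality < 99%) ~ {mass_10_to_99:.3f}")
print(f"P(fatality >= 99%)       ~ {mass_99_plus:.8f}")
```

Under a distribution shaped like this, the most likely way you and your family die from a catastrophe sits overwhelmingly in the survivable-catastrophe band, not the extinction band.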
> I fear we have yet to truly refute Robin Hanson’s claim that EA is primarily a youth movement.
FWIW my impression is that CEA have spent significantly more effort on recruiting people from universities than any other comparable subset of the population.
Somehow, despite 'Goodharting' now being a standard phrase, 'Badharting' is completely unrecognised by Google.
I suggest the following intuitive meaning: failing to reward a desired achievement, because the proxy measure you used to represent it wasn't satisfied:
'No bonus for the staff this year: we didn't reach our 10% sales units growth target.'
'But we raised profits by 30% by selling more expensive products, you Badharting assholes!'
I guess in general any decision binds all future people in your lightcone to some counterfactual set of consequences. But it still seems practically useful in interpersonal interactions to distinguish a) between those that deliberately restrict their action set and those that just provide them in expectation with a different action set of ~the same size, and b) between those motivated by indifference and those motivated specifically by an authoritarian desire to make their values more consistent with ours.
Muchos hugs for this one. I'm selfishly glad you were in London long enough for us to meet, fwiw :)
I feel like this is a specific case of a general attitude in EA that we want to lock in our future selves to some path in case our values change. The more I think about this the worse it feels to me, since
a) your future values might in fact be better than your current ones, or if you completely reject any betterness relation between values then it doesn't matter either way
b) your future self is a separate person. If we imagine the argument targeting any...
I'm not doing the course, but I'm pretty much always on the EA Gather, and usually on for coworking and accountabilitying compatible with my timezone (UTC+8). Feel free to hop on there and ping me - there's a good chance I'll be able to reply at least by text immediately, and if not pretty much always within 12 hours.
> To be strictly accurate, perhaps I should have said 'the more you know about AI risks and AI safety, the higher your p(doom)'. I do think that's an empirically defensible claim. Especially insofar as most of the billions of people who know nothing about AI risks have a p(doom) of zero.
That makes it sound like a continuous function when it isn't really. Sure, people who've never or barely thought about it and then proceed to do so are likely to become more concerned - since they have a base of ~0 concern. That doesn't mean the effect will have the same shap...
I'm really glad you gave this talk, even as something of a sceptic of AI x-risk. As you say, this shouldn't be a partisan issue. I would contest one claim though:
> Generally, the more you know about AI, the higher your p(doom), or estimated probability that ASI would doom humanity to imminent extinction.
I don't see the evidence for this claim, and I keep seeing people in the doomer community uncritically repeat it. AI wouldn't be progressing if everyone who understood it became convinced it would kill us all.
Ok, one might explain this progress via...
I resonate strongly with this, to the extent that I worry most of the main EA 'community-building' events don't actually do anything to build community. I can count on the fingers of 0 hands the number of meaningful lasting relationships I've subsequently developed with anyone I met at an EAG(x), after having been to maybe 10 over the course of 8ish years.
That's not to say there's not value in the shorter term relationships that emerge from them - but by comparison I still think fondly of everyone I met at a single low-cost EA weekend retreat over a ...
These were all (with the exception of 3) written in the six months following FTX, when it was still being actively discussed, and at least three other EA-related controversies had come out in succession. So I probably misremembered FTX specifically (thinking again, IIRC there was an FTX tag that was the original 'autodeprioritised' tag, whose logic maybe got reverted when the community separation logic was introduced). But I think it's fair to say that these were a de facto controversy-deprioritisation strategy (as the fourth post suggests) rather than a measured strategy that could be expected to stay equally relevant in calmer times.
I feel like people are treating the counterfactual as 'no way to filter out community posts'; whereas the forum software currently allows you to filter for any given tag, and could easily be tweaked to (or possibly already allows you to) filter out a particular tag.
So the primary counterfactual isn't 'no separation' it's 'greater transparency and/or community involvement in what gets tagged "community"'.
You may well be right that CEA is biased (it's hard not to be) and the criteria could be made clearer.
My suggestion if the current separation is kept would be to reallow community tagging of posts, but require it to go above a certain threshold and/or have a delay, so that posts don't bounce back and forth between the two feeds.
I'm also not sure community posts get less attention (forum team can tell me).
FWIW I suspect both that being tagged community causes reduced attention, and that community posts get less attention overall, since many low-karma posts slide ...
Increase relative to what counterfactual? I think it might be true both that the annual risk of a bad event goes up from AI, and that all-time risk decreases (on the assumption that we're basically reaching a hurdle we have to pass anyway, and that I'm highly sceptical that we gain much in practice by implementing forceful procedures that slow us down from getting there).
Weakly in favour of moderate regulation, though I think within the EA movement the extinction case is much overstated, the potential benefits are understated, and I can imagine regulatory efforts backfiring in any number of ways.
Having said that, after I voted I realised there are a number of conditional actions lower down in the poll that I would be sympathetic to given the conditions (e.g. pause if mass unemployment or major disaster).
> I think it's historically pretty incorrect that the grounding in cost-effectiveness is what made EA good
FWIW the reasons you're giving here are closely related to the reasons why I'm sceptical that modern AI-focused EA is in fact as good. I don't think it's unreasonable to support AI safety work, but I think it's throwing away most of the epistemics that could make EA a long-term robustly positive influence. EA's original tagline used to be 'using evidence and reason', but the extreme AI safety focus seems to drop the 'evidence' part.
To believe you sho...