All of Arepo's Comments + Replies

>I think it's historically pretty incorrect that the grounding in cost-effectiveness is what made EA good

FWIW the reasons you're giving here are closely related to the reasons why I'm sceptical that modern AI-focused EA is in fact as good. I don't think it's unreasonable to support AI safety work, but I think it's throwing away most of the epistemics that could make EA a long-term robustly positive influence. EA's original tagline used to be 'using evidence and reason', but the extreme AI safety focus seems to drop the 'evidence' part.

To believe you sho... (read more)

6
richard_ngo
In hindsight I shouldn't have used the phrase "what made EA good", since by this point I'm skeptical about both the AI safety version and the "original spirit" of EA. I guess that makes me one of the people you're describing who joined a long time ago (in my case, over a decade) and have now disengaged. I do think that int/a is less likely than EA to be significantly harmful, and I'm excited about that. Whether or not it has a decent chance of actually doing something meaningful will depend a lot on the vision of the founders (and Euan in particular). Right now I'm not seeing what will prevent it from dissolving into the background of general hippie-adjacent things (kinda like a lot of the Game B and metacrisis stuff seems to have done). But we'll see.

I'm confused by the strong negative reaction to this comment. I guess it's about the CoGi funding, which does sound like I was wrong. But it seems to be true that there's no option to directly apply for funding for a new project (NickLaing mentions the GH funding circle, but they completed one round last year and their website doesn't currently imply there would be any more). 

I think this helps explain the decline of GHD in the OP - AIM's charity list notwithstanding, no-one in the movement is incentivised to come up with practical ideas in the field.

Last I heard it was something like 10% of their GCR budget.

It's also basically impossible to apply for GHD funding. I recently decided to put my money where my mouth is and get involved in an early stage GHD project, but there's basically no EA-aligned funder who's willing to let you approach them. 

SFF are exclusively longtermist, EA GHD as mentioned basically shut down, and Givewell and CoGi don't accept unsolicited applications. So as far as I can see if you think you have an idea in the GHD space and need funding for it you basically have to look outside the EA world (someone tell me if I missed something!)


Hey Arepo!

>Last I heard it was something like 10% of their GCR budget.

I don't think that's right — CG gave $400m to GHW in 2025, and to get a sense of what % that might be, Alexander Berger (CEO of CG) shared that overall "Coefficient Giving directed over $1 billion in 2025" in his recent letter.

3
NickLaing
Yep this is a legitimate concern, it's hard for new projects that aren't being incubated through CE for sure. I think there are decent arguments for bigger funders not funding new initiatives though. I think it's not the worst thing for friends/family/non-EA funds to help start new initiatives before official funders get involved. Also (I could be wrong) if you made a very strong argument here on the forum there might be people willing to help. The Global Health Funding Circle is another EA avenue for newer ventures :). Also Scott Alexander's yearly giveaway is open to new ideas and they fund a bunch of GHD stuff

>It seems like, considering how intelligent and creative our species is, we should expect that, even in very dire conditions, we would be able to re-build civilization.

That shouldn't necessarily be the primary concern. Though it also seems that people who've studied our ability to rebuild civilisation are substantially more pessimistic.

I think the simple answer is that it's become less prioritised by the central orgs (the EA GHD fund is on indefinite hiatus, GHD is a diminishing part of CoGi's budget, 80k moved away from it almost entirely, Rethink seem to have shifted towards animal welfare, CEA seem to have an increasingly longtermist/AI focus, etc). This gives a top-down cultural impetus away from the subject, and just means there's less money in it.

It's also, for better or worse, as an evidence-oriented field, a subject that's harder to have amateur conversations about. I've been con... (read more)

9
Tom Vargas
A late comment to say that I don't think RP takes the view that any given cause area is more important than another, either philosophically or in practice. Our GHD team produces a steady stream of (I think) interesting and helpful reports. Perhaps this perception stems from the fact that a lot of our GHD work is not public (for various reasons), or simply that people don't engage with it as much as they might have in the past.
5
NickLaing
Love this @Arepo and I largely agree. I think there's plenty of uncertainty and space for amateur-ish discussions about GHD stuff. Yes, even talking about specific interventions it helps to have specific knowledge, but mostly it's figure-out-able for a switched-on person. I would say a lot of technical AI discussion is harder - I struggle to understand some of the threads on LessWrong!

I agree that the depth of the evidence conversations doesn't lend itself to amateur discussion on the forum and I also feel like there's not much I have to add to the GHD discussions here because of that.

Don't think it's fair to say it's not prioritised among the orgs. My understanding is that Coefficient Giving still gives huge amounts to GiveWell charities and grants.

>I agree that the average college student encountering EA today should focus on issues related to AI safety

 

I broadly nodded along to your OP, but strong disagree here. There are tonnes of people working in AI safety, to the extent that it's already hypercompetitive and the marginal value of one more person getting in the long queue for such a job seems very low. 

Meanwhile I continue to find the case for AI safety, at least as envisioned by EA doomers, highly speculative. That's not to say it shouldn't get any attention, but there's a far better e... (read more)

5
lilly
I didn’t mean this to be that deep; I meant (1) the average college student EA (i.e., many EAs should still pursue other kinds of careers) and (2) AI safety broadly construed (to include issues related to biorisk, policy, and many issues unrelated to x-risk). I don’t know much about how competitive jobs are throughout this space, but at least in some spheres (eg, academic philosophy) there is growing interest in AI, so much so that it’d be prudent for a philosophy PhD student to work on issues related to AI solely to get a job (i.e., bracketing any interest in EA/having a socially valuable career). I assume that’s true in at least some other spheres as well (policy?), and while I could see that changing in the next few years, it feels like the entire job market will change a lot in the next few years, such that I doubt the advice “don’t go into AI safety because it’s oversaturated; do X instead” will be reliable advice for most X.

I'm still highly sceptical of neglectedness as anything but a first-pass heuristic for how prioritisation organisations might spend their early research - firstly because there are so many ways a field can be 'not neglected' and still highly leveraged (e.g. Givewell and Giving What We Can were only able to have the comparative impact they did because the global health field had been vigorously researched but no-one had systemically done the individual-level prioritisation they did with the research results); secondly because it encourages EA to reject esta... (read more)

>Sure, so we agree?

Ah, sorry, I misunderstood that as criticism.

>Do you think that forecasting like this will hurt the information landscape on average?

I'm a big fan of the development e.g. QRI's process of making tools that make it increasingly easy to translate natural thoughts into more usable forms. In my dream world, if you told me your beliefs it would be in the form of a set of distributions that I could run a monte carlo sim on, having potentially substituted my own opinions if I felt differently confident than you (and maybe beyond that there'... (read more)
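The "beliefs as distributions you can run a Monte Carlo sim on" idea can be sketched very simply. Everything below is hypothetical and just for illustration: the quantity, the lognormal parameters, and the `outcome` model are all made-up stand-ins, not anything from QRI or the comment above.

```python
import random

# Hypothetical example: two people express a belief about the same quantity
# as a probability distribution (here, a sampler function). A Monte Carlo
# run turns either belief into a distribution over the modelled outcome,
# letting a reader substitute their own credences into someone else's model.

def your_belief():
    # You think the quantity is lognormal-ish, centred around ~100
    return random.lognormvariate(4.6, 0.5)

def my_belief():
    # I feel less confident than you, so I substitute a wider distribution
    return random.lognormvariate(4.6, 1.0)

def outcome(quantity, multiplier):
    # Toy downstream model of how the quantity feeds into the result
    return quantity * multiplier

def monte_carlo(sampler, n=100_000):
    samples = sorted(
        outcome(sampler(), random.uniform(0.8, 1.2)) for _ in range(n)
    )
    return {"median": samples[n // 2], "p90": samples[int(n * 0.9)]}

print(monte_carlo(your_belief))   # your credences, run through the model
print(monte_carlo(my_belief))     # same model, my substituted credences
```

The point of the sketch is only that once beliefs are expressed as samplers rather than point estimates, swapping one person's credence for another's is a one-line change.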

2
Nathan Young
I dunno, I think that sounds galaxy-brained to me. I think that giving numbers is better than not giving them and that thinking carefully about the numbers is better than that. I don't really buy your second order concerns (or think they could easily go in the opposite direction)

>I weakly disagree here. I am very much in the "make up statistics and be clear about that" camp.

 

I'm sympathetic to that camp, but I think it has major epistemic issues that largely go unaddressed:

  • It systematically biases away from extreme probabilities (it's hard to assert a probability lower than some very small value, but many real-world probabilities are that small, and post-hoc credences often look like they should have been below it)
  • By focusing on very specific pathways towards some outcome, it diverts attention towards easily definable issues, and hence away from the prospe
... (read more)
2
Nathan Young
Yeah, I think you make good points. I think that forecasts are useful on balance, and then people should investigate them.

Do you think that forecasting like this will hurt the information landscape on average? Personally, to me, people engaged in this forecasting generally seem more capable of changing their minds. I think the AI2027 folks would probably be pretty capable of acknowledging they were wrong, which seems like a healthy thing. Probably more so than the media and academics?

Sure, so we agree? (Maybe you think I'm being derogatory, but no, I'm just allowing people who scroll down to the comments to see that I think this article contains a lot of specific, quite technical criticisms. If in doubt, I say things I think are true.)

I would strongly push back on the idea that a world where it's unlikely and we can't change that is uninteresting. In that world, all the other possible global catastrophic risks become far more salient as potential flourishing-defeaters.

Thanks for the shout-out :) If you mentally replace the 'multiplanetary' state with 'post-AGI' in this calculator, I do think it models the set of concerns Will's talking about here pretty well.

Thanks Ozzie! I'll definitely try this out if I ever finish my current WIP :)

Questions that come to mind:

  • Will it automatically improve as new versions of the underlying model families are released?
  • Will you be actively developing it?
  • Feature suggestion: could/would you add a check for obviously relevant literature and 'has anyone made basically this argument before'?
2
Ozzie Gooen
Sure thing!

1. I plan to update it with new model releases. Some of this should be pretty easy - I plan to keep Sonnet up to date, and will keep an eye on other new models.
2. I plan to at least maintain it. This year I expect to spend maybe 1/3rd of the year on it or so. I'm looking forward to seeing what the use and response is like, and will gauge things accordingly. I think it can be pretty useful as a tool, even without a full-time-equivalent improving it. (That said, if anyone wants to help fund us, that would make this much easier!)
3. I've definitely thought about this, and can prioritize it. There's a very high ceiling for how good background research can be, for either a post or for all claims/ideas in a post (much harder!). A simple version could be straightforward, though it wouldn't be much better than just asking Claude to do a straightforward search.

>A 10% chance of transformative AI this decade justifies current EA efforts to make AI go well.

Not necessarily. It depends on 

a) your credence distribution of TAI after this decade, 

b) your estimate of annual risk per year of other catastrophes, and 

c) your estimate of the comparative longterm cost of other catastrophes.

I don't think it's unreasonable to think, for example, that

  • there's a very long tail to when TAI might arrive, given that its prospects of arriving in 2-3 decades are substantially related to its prospects of arriving this d
... (read more)
6
David T
I would add that it's not just extreme proposals to make "AI go well" like Yudkowsky's airstrike that potentially have negative consequences beyond the counterfactual costs of not spending the money on other causes. Even 'pausing AI' through democratically elected legislation enacted as a result of smart and well-reasoned lobbying might be significantly negative in its direct impact, if the sort of 'AI' restricted would have failed to become a malign superintelligence but would have been very helpful to economic growth generally and perhaps medical researchers specifically. This applies if the imminent AGI hypothesis is false, and probably to an even greater extent if it is true.

(The simplest argument for why it's hard to justify all EA efforts to make AI go well based purely on its neglectedness as a cause is that some EA theories about what is needed for AI to go well directly conflict with others; to justify the course of action one needs to have some confidence not only that AGI is possibly a threat but that the proposed approach to it at least doesn't increase the threat. It is possible that donations to a "charity" that became a commercial AI accelerationist and donations to lobbyists attempting to pause AI altogether were both mistakes, but it seems implausible that they were both good causes)

>But the expected value of existential risk reduction is—if not infinite, which I think it clearly is in expectation—extremely massive.

I commented something similar on your blog, but as soon as you allow that one decision is infinite in expectation you have to allow that all outcomes are, since whatever possibility of infinite value you have given that action must still be present without it.

If you think the Bostrom number of 10^52 happy people has a .01% chance of being right, then you’ll get 10^48 expected future people if we don’t go extinct, meaning red

... (read more)
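The arithmetic in that (truncated) claim is easy to sanity-check. A minimal check, taking the quoted figures at face value:

```python
# Sanity-checking the quoted arithmetic: a 0.01% (i.e. 1e-4) credence in
# Bostrom's figure of 10^52 happy future people still implies an expected
# 10^48 future people, conditional on avoiding extinction.
bostrom_people = 10 ** 52
credence = 1e-4            # 0.01%
expected_people = bostrom_people * credence
print(f"{expected_people:.0e}")
```

This is just the multiplication stated in the comment, not an endorsement of either the 10^52 figure or the 0.01% credence.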

I agree with Yarrow's anti-'truth-seeking' sentiment here. That phrase seems to primarily serve as an epistemic deflection device indicating 'someone whose views I don't want to take seriously and don't want to justify not taking seriously'.

I agree we shouldn't defer to the CEO of PETA, but CEOs aren't - often by their own admission - subject matter experts so much as people who can move stuff forwards. In my book the set of actual experts is certainly murky, but includes academics, researchers, sometimes forecasters, sometimes technical workers - sometime... (read more)

6
Thomas Kwa🔹
Yeah, while I think truth-seeking is a real thing, I agree it's often hard to judge in practice and vulnerable to being a weasel word.

Basically I have two concerns with deferring to experts. First, when the world lacks people with true subject matter expertise, whoever has the most prestige (maybe not CEOs, but certainly mainstream researchers on slightly related questions) will be seen as experts, and we will need to worry about deferring to them. Second, because EA topics are selected for being too weird/unpopular to attract mainstream attention/funding, I think a common pattern is that of the best interventions, some are already funded, some are recommended by mainstream experts and remain underfunded, and some are too weird for the mainstream. It's not really possible to find the "too weird" kind without forming an inside view.

We can start out deferring to experts, but by the time we've spent enough resources investigating the question that you're at all confident in what to do, the deferral to experts is partially replaced with understanding the research yourself, as well as the load-bearing assumptions and biases of the experts. The mainstream experts will always get some weight, but it diminishes as your views start to incorporate their models rather than their views (an example that comes to mind is economists on whether AGI will create explosive growth, and how recently good economic models have been developed by EA sources, now including some economists who vary assumptions and justify differences from the mainstream economists' assumptions). Wish I could give more concrete examples, but I'm a bit swamped at work right now.

I agree that the OP is too confident/strongly worded, but IMO this

>which is more than enough to justify EA efforts here.

could be dangerously wrong. As long as AI safety consumes resources that might have counterfactually gone to e.g. nuclear disarmament or stronger international relations, it might well be harmful in expectation.

This is doubly true for warlike AI 'safety' strategies like Aschenbrenner's call to intentionally arms race China, Hendrycks, Schmidt and Wang's call to 'sabotage' countries that cross some ill-defined threshold, and Yudkowsky c... (read more)

6
Mjreard
A 10% chance of transformative AI this decade justifies current EA efforts to make AI go well. That includes the opportunity costs of that money not going to other things in the 90% of worlds. Spending money on e.g. nuclear disarmament instead of AI also implies harm in the 10% of worlds where TAI was coming. Just calculating the expected value of each accounts for both of these costs.

It's also important to understand that Hendrycks and Yudkowsky were simply describing/predicting the geopolitical equilibrium that follows from their strategies, not independently advocating for the airstrikes or sabotage. Leopold is a more ambiguous case, but even he says that the race is already the reality, not something he prefers independently. I also think very few "EA" dollars are going to any of these groups/individuals.

Well handled, Peter! I'm curious how much of that conversation was organic, how much scripted or at least telegraphed in advance?

Ronnie had a bit of a script but also improvised a lot. I had no script, no knowledge of Ronnie's script, and was very blind to the whole thing. But we did many takes, so I was able to adapt accordingly.

2
NickLaing
yeah maybe they can change their name to just that

That makes some sense, but leaves me with questions like

  • Which projects were home runs, and how did you tell that a) they were successful at achieving their goals and b) that their goals were valuable?
  • Which projects were failures that you feel were justifiable given your knowledge state at the time?
  • What do these past projects demonstrate about the team's competence to work on future projects?
  • What and how was the budget allocated to these projects, and do you expect future projects to have structurally similar budgets?
  • Are there any other analogies y
... (read more)
6
Duncan Sabien
Noting that this is more "opinion of an employee" than "the position of MIRI overall" - I've held a variety of positions within the org and can't speak for e.g. Nate or Eliezer or Malo:

  • The Agent Foundations team feels, to me, like it was a slam dunk at the time; the team produced a ton of good research and many of their ideas have become foundational to discussions of agency in the broader AI sphere
  • The book feels like a slam dunk
  • The research push of 2020/2021 (that didn't pan out) feels to me like it was absolutely the right bet, but resulted in (essentially) nothing; it was an ambitious, many-person project for a speculative idea that had a shot at being amazing.

I think it's hard to generalize lessons, because various projects are championed by various people and groups within the org ("MIRI" is nearly a ship of Theseus). But some very basic lessons include:

  • Things pretty much only have a shot at all when there are people with a clear and ambitious vision/when there's an owner
  • When we say to ourselves "this has an X% chance of working out" we seem to be actually pretty calibrated
  • As one would expect, smaller projects and clearer projects work out more frequently than larger or vaguer ones

(Sorry, that feels sort of useless, but.) From my limited perspective/to the best of my ability to see and describe, budget is essentially allocated in a "Is this worth doing? If so, how do we find the resources to make it work?" sense. MIRI's funding situation has always been pretty odd; we don't usually have a pie that must be divided up carefully so much as a core administrative apparatus that needs to be continually funded + a preexisting pool of resources that can be more or less freely allocated + a sense that there are allies out there who are willing to fund specific projects if we fall short and want to make a compelling pitch.

Unfortunately, I can't really draw analogies that help an outsider evaluate future projects. We're intending to try

IMO it would help to see a concrete list of MIRI's outputs and budget for the last several years. My understanding is that MIRI has intentionally withheld most of its work from the public eye for fear of infohazards, which might be reasonable for soliciting funding from large private donors but seems like a poor strategy for raising substantial public money, both prudentially and epistemically. 

If there are particular projects you think are too dangerous to describe, it would still help to give a sense of what the others were, a cost breakdown for tho... (read more)

(Speaking in my capacity as someone who currently works for MIRI)

I think the degree to which we withheld work from the public for fear of accelerating progress toward ASI might be a little overrepresented in the above.  We adopted a stance of closed-by-default research years ago for that reason, but that's not why e.g. we don't publish concrete and exhaustive lists of outputs and budget.

We do publish some lists of some outputs, and we do publish some degree of budgetary breakdowns, in some years.

But mainly, we think of ourselves as asking for money fr... (read more)

Answer by Arepo2
0
0

You might want to consider EA Serbia, which I was told in answer to a similar question has a good community, at least big enough to have their own office. I didn't end up going there, so can't comment personally, but it's at the same latitude as northern Italy, so likely to average pretty warm - though it's inland, so 'average' is likely to contain cold winters and very hot summers.

(but in the same thread @Dušan D. Nešić (Dushan) mentioned that air conditioning is ubiquitous)

1
James Brobin
That's an interesting suggestion! I would not have thought of that!
Arepo
5
2
0
70% agree

>Should our EA residential program prioritize structured programming or open-ended residencies?

 

You can always host structured programs, perhaps on a regular cycle, but doing so to the exclusion of open-ended residencies seems to be giving up much of the counterfactual value the hotel provided. It seems like a strong overcommitment to a concern about AI doom in the next low-single-digit years, which remains (rightly IMO) a niche belief even in the EA world, despite heavy selection within the community for it.

Having said that, to some degree it sounds l... (read more)

4
Attila Ujvari
Thanks for the thoughtful comments, Arepo. A few clarifications:

We're definitely not abandoning open-ended residencies. We're trying to find the right balance between open residency and structured programming. That's exactly why we're polling: we don't have the answer yet and want community input as one data point among many.

On AI safety focus: I think there's a misread here. We're not narrowing to short-timelines AI doom. Our scope is AI safety, reducing global catastrophic risks, and remaining roughly cause-neutral but EA-aligned. We're following where both talent and high-impact opportunities are concentrated, not locking ourselves into a single timeline view.

You're right that we need to align with funding realities to keep operations running, but we're actively working to avoid being locked into any single cause area. The goal is to remain responsive to what the ecosystem needs as things evolve. That is why we're doing very rudimentary market research that directly reaches the end user above, using the poll.
2
Chris Leong
Oh, I think AI safety is very important; short-term AI safety too though not quite 2027 😂. Knock-off MATS could produce a good amount of value, I just want the EA hotel to be even more ambitious.

Thanks for the extensive reply! Thoughts in order:

>I would also note that #3 could be much worse than #2 if #3 entails spreading wild animal suffering.

I think this is fair, though if we're not fixing that issue then it seems problematic for any pro-longtermism view, since it implies the ideal outcome is probably destroying the biosphere. Fwiw I also find it hard to imagine humans populating the universe with anything resembling 'wild animals', given the level of control we'd have in such scenarios, and our incentives to exert it. That's not to say we could... (read more)

Very helpful, thanks! A couple of thoughts:

  • EA grantmaking appears on a steady downward trend since 2022 / FTX.


It looks like this is driven entirely by Givewell/global health and development reduction, and that actually the other fields have been stable or even expanding.

Also, in an ideal world we'd see funding from Longview and Founders Pledge. I also gather there's a new influx of money into the effective animal welfare space from some other funder, though I don't know their name.

5
Lorenzo Buonanno🔸
This seems the opposite of what the data says up to 2024. Comparing 2024 to 2022, GH decreased by 9%, LTXR decreased by 13%, AW decreased by 23%, Meta decreased by 21%, and "Other" increased by 23%. I think the data for 2025 is too noisy and mostly sensitive to reporting timing (whether an org publishes their grant reports early in the year or later in the year) to inform an opinion

Kudos to whoever wrote these summaries. They give a great sense of the contents and at least with mine capture the essence of it much more succinctly than I could!

6
Toby Tremlett🔹
Cheers! That was me + GPT
Answer by Arepo4
1
0

Most of these aren't so much well-formed questions, as research/methodological issues I would like to see more focus on:

  • Operationalisations of AI safety that don't exacerbate geopolitical tensions with China - or ideally that actively seek ways to collaborate with China on reducing the major risks.
  • Ways to materially incentivise good work and disincentivise bad work within nonprofit organisations, especially effectiveness-minded organisations
  • Looking for ways to do data-driven analyses on political work especially advocacy; correct me if wrong, but the recom
... (read more)

Thanks! The online courses page describes itself as a collection of 'some of the best courses'. Could you say more about what made you pick these? There are dozens or hundreds of online courses these days (esp on general subjects like data analysis), so the challenge for pursuing them is often a matter of filtering convincingly.

5
Probably Good
Thanks for the question! We've sourced the courses based on a mix of:

  • Soliciting recommendations from domain experts, mainly when asking for resource recommendations during research for other articles (like our career profiles).
  • Doing our own research to find courses that seemed to be particularly high-quality, reputable, coming from orgs that we think are likely to provide strong overviews, and/or especially relevant to impact-focused careers.

That said, we want to emphasize that these are the best courses we know of; we're certain there are many great courses we don't know about that should also be included on this list (please let us know if you know of any that you think could be relevant). We'll see if we can make all this clearer in the article!

Good luck with this! One minor irritation with the structure of the post is I had to read halfway down to find out which 'nation' it referred to. Suggest editing the title to 'US-wide', so people can see at a glance if it's relevant to them.

1
Elaine Perlman
You are right. Fixed the issue. Thanks, Arepo! 

I remember him discussing person-affecting views in Reasons and Persons, but IIRC (though it's been a very long time since I read it) he doesn't particularly advocate for them. I use the phrase mainly because of the quoted passage, which appears (again IIRC) in both The Precipice and What We Owe the Future, as well as possibly some of Bostrom's earlier writing.

I think you could equally give Bostrom the title though, for writing to my knowledge the first whole paper on the subject.

1
River
Ah. I mistakenly thought that Parfit coined the term "person affecting view", which is such an obviously biased term I thought he must have been against longtermism, but I can't actually find confirmation of that so maybe I'm just wrong about the origin of the term. I would be curious if anyone knows who did coin it.

Cool to see someone trying to think objectively about this. Inspired by this post, I had a quick look at the scores on the world happiness report to compare China to its ethnic cousins, and while there are many reasons to take this with a grain of salt, China does... ok. On 'life evaluation', which appears to be the all things considered metric (I didn't read the methodology, correct me if I'm wrong), some key scores:

Taiwan: 6.669

Philippines: 6.107

South Korea: 6.038

Malaysia: 5.955

China: 5.921

Mongolia: 5.833

Indonesia: 5.617

Overall it's ranked 68th of 147 li... (read more)

3
Joseph_Chu
As I mentioned in another comment, while China ranks in the middle on the World Happiness Report, it actually ranked highest on the IPSOS Global Happiness Report from 2023, which was the last year that China was included in the survey.
2
OscarD🔸
Interesting, yes perhaps liberalising/democratising China may be desirable but not worth the geopolitical cost to try to make happen.

Thanks :)

I don't think I covered any specific relationship between factors in that essay (except those that were formally modelled in), where I was mainly trying to lay out a framework that would even allow you to ask a question. This essay is the first time I've spent meaningful effort on trying to answer it.

I think it's probably ok to treat the factors as a priori independent, since ultimately you have to run with your own priors. And for the sake of informing prioritisation decisions, you can decide case by case how much you imagine your counterfactual action changing each factor.

>You don't need a very high credence in e.g. AI x risk for it to be the most likely reason you and your family die

 

I think this is misleading, especially if you agree with the classic notion of x-risk as excluding events from which recovery is possible. My distribution of credence over event fatality rates is heavily left-skewed, so I would expect far more deaths under the curve between 10% and 99% fatality than between 99% and 100%, and probably more area to the left even under a substantially more even partition of outcomes. 
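The "area under the curve" comparison above can be illustrated with a toy simulation. The Beta(0.5, 2) distribution below is purely my own stand-in for "a credence distribution with most mass at lower fatality fractions", not the actual distribution the comment describes.

```python
import random

# Illustrative only: an assumed credence distribution over the fatality
# fraction of a catastrophic event, with most mass at lower fractions.
# Beta(0.5, 2) concentrates probability well below 1.0 (100% fatality).

def sample_fatality_fraction():
    return random.betavariate(0.5, 2.0)

n = 200_000
samples = [sample_fatality_fraction() for _ in range(n)]

mass_10_to_99 = sum(0.10 <= s < 0.99 for s in samples) / n
mass_99_plus = sum(s >= 0.99 for s in samples) / n

print(mass_10_to_99, mass_99_plus)
# Under this assumed distribution, far more credence sits in the
# survivable-but-catastrophic band (10-99% fatality) than in the
# near-total band (99-100%), matching the comment's point.
```

Any distribution with its mass concentrated at lower fatality fractions gives the same qualitative result; the specific Beta parameters only affect the magnitudes.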

>I fear we have yet to truly refute Robin Hanson’s claim that EA is primarily a youth movement.

FWIW my impression is that CEA have spent significantly more effort on recruiting people from universities than any other comparable subset of the population.

Somehow despite 'Goodharting' being a now standard phrase, 'Badharting' is completely unrecognised by Google. 

I suggest the following intuitive meaning: failing to reward a desired achievement, because the proxy measure you used to represent it wasn't satisfied:

'No bonus for the staff this year: we didn't reach our 10% sales units growth target.'

'But we raised profits by 30% by selling more expensive products, you Badharting assholes!'

I guess in general any decision binds all future people in your lightcone to some counterfactual set of consequences. But it still seems practically useful in interpersonal interactions to distinguish a) between those that deliberately restrict their action set and those that just provide them, in expectation, with a different action set of ~the same size, and b) between those motivated by indifference and those motivated specifically by an authoritarian desire to make their values more consistent with ours.

2
Henry Stanley 🔸
Seems like a reasonable distinction - but also not sure how many people move to an EA hub expressly because it binds their future self to do EA work/be in said hub long-term?

Muchos hugs for this one. I'm selfishly glad you were in London long enough for us to meet, fwiw :)

I feel like this is a specific case of a general attitude in EA that we want to lock in our future selves to some path in case our values change. The more I think about this the worse it feels to me, since 

a) your future values might in fact be better than your current ones, or if you completely reject any betterness relation between values then it doesn't matter either way

b) your future self is a separate person. If we imagine the argument targeting any... (read more)

4
Henry Stanley 🔸
Surely this proves too much? Any decision with long-term consequences is going to bind your future self. Having kids forces your future self onto a specific path (parenthood) just as much as relocating to an EA hub does.

I'm not doing the course, but I'm pretty much always on the EA Gather, and usually on for coworking and accountabilitying compatible with my timezone (UTC+8). Feel free to hop on there and ping me - there's a good chance I'll be able to reply at least by text immediately, and if not pretty much always within 12 hours.

To be strictly accurate, perhaps I should have said 'the more you know about AI risks and AI safety, the higher your p(doom)'. I do think that's an empirically defensible claim. Especially insofar as most of the billions of people who know nothing about AI risks have a p(doom) of zero.

That makes it sound like a continuous function when it isn't really. Sure, people who've never or barely thought about it and then proceed to do so are likely to become more concerned - since they have a base of ~0 concern. That doesn't mean the effect will have the same shap... (read more)

I'm really glad you gave this talk, even as something of a sceptic of AI x-risk. As you say, this shouldn't be a partisan issue. I would contest one claim though:

> Generally, the more you know about AI, the higher your p(doom), or estimated probability that ASI would doom humanity to imminent extinction.

I don't see the evidence for this claim, and I keep seeing people in the doomer community uncritically repeat it. AI wouldn't be progressing if everyone who understood it became convinced it would kill us all. 

Ok, one might explain this progress via... (read more)

4
Geoffrey Miller
Arepo - thanks for your comment. To be strictly accurate, perhaps I should have said 'the more you know about AI risks and AI safety, the higher your p(doom)'. I do think that's an empirically defensible claim. Especially insofar as most of the billions of people who know nothing about AI risks have a p(doom) of zero. And I might have added that thousands of AI devs employed by AI companies to build AGI/ASI have very strong incentives not to learn too much about AI risks and AI safety of the sort that EAs have talked about for years, because such knowledge would cause massive cognitive dissonance, ethical self-doubt, regret (as in the case of Geoff Hinton), and/or would handicap their careers and threaten their salaries and equity stakes.

I resonate strongly with this, to the extent that I worry most of the main EA 'community-building' events don't actually do anything to build community. I can count on the fingers of 0 hands the number of meaningful lasting relationships I've subsequently developed with anyone I met at an EAG(x), after having been to maybe 10 over the course of 8ish years. 

That's not to say there's not value in the shorter term relationships that emerge from them - but by comparison I still think fondly of everyone I met at a single low-cost EA weekend retreat over a ... (read more)

3
Sam Smith 🔸
Yeah, I totally get that and feel similar - I think being in a community is very different to trying to artificially engineer one (e.g. you have to be in a community to build it). Oh cool, I've used the Gather Town a little and have been building a smaller discord coworking community for some EA Uni Societies which seems like it might be serving a similar goal :)

Good luck with this - I've always wished for more global EA events!

To clarify, is this explicitly a rebranding of EAGx Virtual, or is it meant to be something qualitatively different? If the latter, could you say more about what the differences will be?

4
Kiran Sargent 🔹
Hi @Arepo! Thank you! EA Connect 2025 will have a very similar format to EAGxVirtual (talks, workshops, networking, mentorship program, etc.), but this year it’s being organized directly by the CEA team rather than as an EAGx event. We wanted to establish EA Connect as its own event series under CEA. 

These were all (with the exception of 3) written in the six months following FTX, when it was still being actively discussed, and at least three other EA-related controversies had come out in succession. So I probably misremembered FTX specifically (thinking again, IIRC there was an FTX tag that was the original 'autodeprioritised' tag, whose logic maybe got reverted when the community separation logic was introduced). But I think it's fair to say that these were a de facto controversy-deprioritisation strategy (as the fourth post suggests) rather than a measured strategy that could be expected to stay equally relevant in calmer times.

I feel like people are treating the counterfactual as 'no way to filter out community posts'; whereas the forum software currently allows you to filter for any given tag, and could easily be tweaked to (or possibly already allows you to) filter out a particular tag.

So the primary counterfactual isn't 'no separation'; it's 'greater transparency and/or community involvement in what gets tagged "community"'.

You may well be right that CEA is biased (it's hard not to be) and the criteria could be made clearer.

My suggestion if the current separation is kept would be to reallow community tagging of posts, but require it to go above a certain threshold and/or have a delay, so that posts don't bounce back and forth between the two feeds.

I'm also not sure community posts get less attention (forum team can tell me).

FWIW I suspect both that being tagged community causes reduced attention, but that they get less attention overall since many low-karma posts slide ... (read more)

2
Jason
If the separation is going to continue, I'd prefer it be entrusted to (elected? appointed but independent-of-CEA?) stewards. My concern is that community tagging might end up being voting by a different name (users will be less likely to tag things they like).

I would love to see more events like this in the community. Honestly, those handful of low-budget attendee-run retreats I've been to have been worth substantially more than EAG(x)s 

5
Kestrel🔸
I'm collating feedback right now, but I'll be doing a post about how EA in the Lakes went and what that means for future things I run. Practically, it worked, it particularly engaged and inspired effective givers (to the point where I estimate it "paid" for itself many times over in increased effective giving, notwithstanding that it made a small surplus on the entry fee), and it was good for making fairly deep social connections though less so professional connections. Workers (both EA workers and effective givers) reported gains in mental health and productivity. People's average reported personal value from attendance (minus what they paid for travel) was more than twice the per-person running cost.

Increase relative to what counterfactual? I think it might be true both that the annual risk of a bad event goes up from AI, and that all-time risk decreases (on the assumption that we're basically reaching a hurdle we have to pass anyway; I'm also highly sceptical that we gain much in practice by implementing forceful procedures that slow us down from getting there).
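A toy calculation (with entirely made-up numbers) of how both claims can hold at once: a transition period with higher annual risk can still leave all-time survival odds higher, if it shortens the window of exposure to background risks.

```python
# Toy sketch: compare all-time survival probability under two hypothetical
# trajectories. Annual risk is HIGHER during the AI transition, yet
# all-time risk is LOWER because the risky period is much shorter.
def survival(annual_risks):
    """Probability of surviving every year in the sequence."""
    p = 1.0
    for r in annual_risks:
        p *= 1 - r
    return p

# Baseline: 0.1% background annual risk for 500 years before the
# hurdle gets passed some other way.
baseline = survival([0.001] * 500)

# Transition: 2% annual risk for 10 years, then ~0 residual risk.
transition = survival([0.02] * 10)

print(transition > baseline)  # all-time survival higher despite 20x the annual risk
```

Nothing here argues the numbers are right; the point is only that 'annual risk up' and 'all-time risk down' aren't contradictory.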

Arepo
2
0
0
20% disagree

Weakly in favour of moderate regulation, though I think within the EA movement the extinction case is much overstated, the potential benefits are understated, and I can imagine regulatory efforts backfiring in any number of ways.

Having said that, after I voted I realised there are a number of conditional actions lower down in the poll that I would be sympathetic to given the conditions (e.g. pause if mass unemployment or major disaster)
