lexande
425 karma · Joined Nov 2018 · Posts: 1

Comments (29)

Is there a link to what OpenPhil considers their existing cause areas? The Open Prompt asks for new cause areas, so things that you already fund or intend to fund are presumably ineligible, but while the Cause Exploration Prizes page gives some examples it doesn't link to a clear list of what all of these are. In a few minutes looking around the openphilanthropy.org site, the lists I could find were either much more general than what you're looking for here (lists of thematic areas like "Science for Global Health") or more specific (lists of individual grants awarded), but I may be missing something.

Maybe, though given the unilateralist's curse and other issues of the sort discussed by 80k here, I think it might not be good for many people currently on the fence about whether to found EA orgs/megaprojects to do so. There might be a shortage of "good" orgs, but that's not necessarily a problem you can solve by throwing founders at it.

It also often seems to me that orgs with the right focus already exist (and founding additional ones with the same focus would just duplicate effort) but are unable to scale up well, and so I suspect "management capacity" is a significant bottleneck for EA. But scaling up organizations is a fundamentally hard problem, and it's entirely normal for companies doing so to see huge decreases in efficiency (which if they're lucky are compensated for by economies of scale elsewhere).

> the primary constraint has shifted from money to people

This seems like an incorrect or at best misleading description of the situation. EA plausibly now has more money than it knows what to do with (at least if you want to do better than GiveDirectly) but it also has more people than it knows what to do with. Exactly what the primary constraint is now is hard to know confidently or summarise succinctly, but it's pretty clearly neither of those. (80k discusses some of the issues with a "people-constrained" framing here.) In general large-scale problems that can be solved by just throwing money or throwing people at them are the exception and not the rule.

For some cause areas the constraint is plausibly direct workers with some particular set of capabilities. But even most people who want to dedicate their careers to EA could not become effective AI safety researchers (for example) no matter how hard they tried. Indeed, merely trying may have negative impact in the typical case, due to the opportunity cost of interviewers' time etc. (even if it's EV-positive given the information the applicant has). One of the nice things about money is that it basically can't hurt, and indeed arguments about the overhead of managing volunteer/unspecialised labour were part of how we wound up with the donation focus in the first place.

I think there is a large fraction of the population for whom donating remains the most good they can do, focusing on whatever problems are still constrained by money (GiveDirectly if nothing else), because the other problems are constrained by capabilities or resources which they don't personally have or control. The shift from a donation focus to a direct work focus isn't just increasing demandingness for these people, it's telling them they can't meaningfully contribute at all.

Of course, inasmuch as it's true that a particular direct work job is more impactful than a very large amount of donations, it's important to be open and honest about this so those who actually do have the required capabilities can make the right decisions and tradeoffs. But this is fundamentally in tension with building a functioning and supportive community, because people need to feel like their community won't abandon them if they turn out to be unable to get a direct work job (and this is especially true when a lot of the direct work in question is "hits-based" longshots where failure is the norm). I worry that even people who could potentially have extraordinarily high impact as direct workers might be put off by a community that doesn't seem like it would continue to value them if their direct work plans didn't pan out.

I really enjoyed this post, but have a few issues that make me less concerned about the problem than the conclusion would suggest:

- Your dismissal in section X of the "weight by simplicity" approach seems weak/wrong to me. You treat it as a point against such an approach that one would pay to "rearrange" people from more complex to simpler worlds, but that seems fine actually, since in that frame it's moving people from less likely/common worlds to more likely/common ones.

- I lean towards conceptions of what makes a morally relevant agent (or experience) under which there are only countably many of them. It seems like two people with the exact same full life-experience history are the same person, and the same seems plausible for two people whose full life-experience histories can't be distinguished by any finite process, in which case each person can be specified by finitely much information and so there are at most countably many of them. I think if you're willing to put 100% credence on some pretty plausible physics you can maybe even get down to finitely many possible morally distinct people, since entropy and the speed of light may bound how large a person can be.

- My actual current preferred ethics is essentially "what would I prefer if I were going to be assigned at random to one of the morally-relevant lives ever eventually lived" (biting the resulting "sadistic conclusion"-flavoured bullets). For infinite populations this requires that I have some measure on the population, and if I have to choose the measure arbitrarily then I'm subject to most of the criticisms in this post. However, I believe the infinite cosmology hypotheses referenced generally come along with fundamental measures? Indeed a measure over all the people one might be seems like it might be necessary for a hypothesis that purports to describe the universe in which we in fact find ourselves. If I have to dismiss hypotheticals that don't provide me with a measure on the population as ill-formed, and assign zero credence to universes without a fundamental measure, that's a point against my approach, but I think not a fatal one.
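To spell out the decision rule I mean (my own formalization, assuming each cosmological hypothesis supplies a normalized measure over lives, which is not something the post itself commits to):

```latex
% Minimal sketch of the "random life" decision rule, in my own notation:
% \mu_w is the normalized measure over morally-relevant lives in world w,
% and u(p) is the lifetime welfare of life p. Prefer the world with the
% highest expected welfare for a randomly assigned life:
\[
  V(w) \;=\; \int u(p)\,\mathrm{d}\mu_w(p)
\]
% If a hypothesis supplies no such \mu_w, then V(w) is undefined, which is
% the "treat as ill-formed / assign zero credence" move described above.
```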

It seems like this issue is basically moot now? Back in 2016-2018 when those OpenPhil and Karnofsky posts were written there was a pretty strong case that monetary policymakers overweighted the risks of inflation relative to the suffering and lost output caused by unemployment. Subsequently there was a political campaign to shift this (which OpenPhil played a part in). As a result, when the pandemic happened the monetary policy response was unprecedentedly accommodative. This was good and made the pandemic much less harmful than it would have been otherwise, at the cost of elevated but very far from catastrophic inflation this year (which seems well worth it given the likely alternative). And indeed Berger in that 80k interview brings the issue up primarily as a past "big win", mission accomplished, and says it's unclear whether they will take much further action in this space.

A major case where this is relevant is funding community-building, fundraising, and other "meta" projects. I agree that "just imagine there was a (crude) market in impact certificates, and take the actions you guess you'd take there" is a good strategy, but in that world where are organizations like CEA (or perhaps even GiveWell) getting impact certificates to sell? Perhaps whenever someone starts a project they grant some of the impact equity to their local EA group (which in turn grants some of it to CEA), but if so the fraction granted would probably be small, whereas people arguing for meta often seem to be acting like it would be a majority stake.
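As a toy illustration of why the stake might be small (the percentages here are purely hypothetical, not taken from anywhere), the meta org's share shrinks multiplicatively at each step of the chain:

```python
# Hypothetical impact-equity cascade: a project grants a slice of its impact
# equity to the local EA group, which grants a slice of that to CEA.
# Both percentages are illustrative assumptions, not real figures.

project_to_local_group = 0.10  # 10% of the project's impact equity
local_group_to_cea = 0.10      # 10% of the local group's stake

cea_share = project_to_local_group * local_group_to_cea
print(f"CEA's effective stake in the project: {cea_share:.1%}")  # 1.0%
# Even with fairly generous 10% grants at each step, the meta org ends up
# holding ~1% of the impact equity, nowhere near a majority stake.
```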

Unfortunately this competes with the importance of interventions failing fast. If it's going to take several years before the expected benefits of an intervention are clearly distinguishable from noise, there is a high risk that you'll waste a lot of time on it before finding out it didn't actually help, you won't be able to experiment with different variants of the intervention to find out which work best, and even if you're confident it will help, you might find it infeasible to maintain motivation when the reward feedback loop is so long.

This request is extremely unreasonable and I am downvoting and replying (despite agreeing with your core claim) specifically to make a point of not giving in to such unreasonable requests, or allowing a culture of making them with impunity to grow. I hope in the future to read posts about your ideas that make your points without such attempts to manipulate readers.

It seems unlikely that the distribution of 100x-1000x impact people is *exactly* the same between your "network" and "community" groups, and if it's even a little bit biased towards one or the other the groups would wind up very far from having equal average impact per person. I agree it's not obvious which way such a bias would go. (I do expect the community helps its members have higher impact compared to their personal counterfactuals, but perhaps e.g. people are more likely to join the community if they are disappointed with their current impact levels? Alternatively, maybe everyone else's impact is swamped by the question of which group you put Moskovitz in?) However, assuming the multiplier is close to 1 rather than much higher or lower seems unwarranted, and this seems to be a key question on which the rest of your conclusions more or less depend.
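To make that sensitivity concrete, here's a toy calculation (the outlier shares are hypothetical numbers of my own, not anything from your post): if a small minority of people have 1000x typical impact, even modest differences in how that minority is split between the groups push the per-person multiplier well away from 1.

```python
# Toy illustration with made-up numbers: average per-person impact in two
# groups when a small minority of members have 1000x the typical impact.

def average_impact(outlier_share, outlier_impact=1000, typical_impact=1):
    """Average impact per person if `outlier_share` of the group are
    1000x outliers and everyone else has typical impact."""
    return outlier_share * outlier_impact + (1 - outlier_share) * typical_impact

scenarios = [
    ("community 1.0% outliers vs network 0.5%", 0.010, 0.005),
    ("community 2.0% outliers vs network 0.2%", 0.020, 0.002),
]

for label, community_share, network_share in scenarios:
    ratio = average_impact(community_share) / average_impact(network_share)
    print(f"{label}: average-impact ratio ~{ratio:.1f}x")
# Prints roughly 1.8x and 7.0x: small compositional differences already
# move the community-vs-network multiplier well away from 1.
```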

> I’ve made the diagram assuming equal average impact whether someone is in the ‘community’ or ‘network’ but even if you doubled or tripled the average amount of impact you think someone in the community has there would still be more overall impact in the network.

People in EA regularly talk about the most effective community members having 100x or 1000x the impact of a typical EA-adjacent person, with impact following a power law distribution. For example, 80k attempts to measure "impact-adjusted significant plan changes" as a result of their work, where a "1" is a median GWWC pledger (already more commitment than a lot of EA-adjacent people, who are curious students or give 1% or something, so they might rate more like 0.1). 80k claims credit for dozens of "rated 10" plan changes per year, a handful of "rated 100" per year, and at least one "rated 1000" (see p15 of their 2018 annual report here).

I'm personally skeptical of some of the assumptions about future expected impact 80k rely on when making these estimates, and some of their "plan changes" are presumably by people who would fall under "network" and not "community" in your taxonomy. (Indeed on my own career coaching call with them they said they thought their coaching was most likely to be helpful to people new to the EA community, though they think it can provide some value to people more familiar with EA ideas as well.) But it seems very strange for you to anchor on a 1-3x community vs network impact multiplier, without engaging with core EA orgs' belief that 100x-10000x differences between EA-adjacent people are plausible.
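Taking 80k's ratings at face value (and using the rough per-year counts and the 0.1 guess above; the exact numbers are illustrative), the spread looks like this:

```python
# Rough arithmetic on 80k's impact-rating scale, using ballpark counts from
# their 2018 report as cited above; the figures are illustrative, not exact.

typical_adjacent = 0.1  # guessed rating for a typical EA-adjacent person
rated_changes = {10: 36, 100: 5, 1000: 1}  # "dozens", "a handful", "at least one"

top_total = sum(rating * count for rating, count in rated_changes.items())
print(f"total impact-adjusted plan changes from {sum(rated_changes.values())} "
      f"high-rated people: {top_total}")                         # 1860
print(f"equivalent number of typical EA-adjacent people: "
      f"{top_total / typical_adjacent:.0f}")                     # 18600
# A single rated-1000 plan change alone is worth 10000 typical EA-adjacent
# people on this scale: a 10000x spread, not a 1-3x one.
```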
