
Summary

  1. Returns to community building are higher in some cause areas than others
    1. For example: a cause-general university EA group is more useful for AI safety than for global health and development.
  2. This presents a trilemma: community building projects must either:
    1. Support all cause areas equally at a high level of investment, which leads to overinvestment in some cause areas
    2. Support all cause areas equally at a low level of investment, which leads to underinvestment in some cause areas, or
    3. Break cause-generality
  3. This trilemma feels fundamental to EA community building work, but I’ve seen relatively little discussion of it, and therefore would like to raise awareness of it as a consideration
  4. This post presents the trilemma, but does not argue for a solution

Background

  1. A lot of community building projects have a theory of change which aims to generate labor
  2. Labor is more valuable in some cause areas than others
    1. It’s slightly hard to make this statement precise, but it’s something like: the output elasticity of labor (OEL) depends on cause area
    2. E.g. the amount by which animal welfare advances as a result of getting one additional undergraduate working on it is different than the amount by which global health and development advances as a result of getting one additional undergraduate working on it[1]
    3. Note: this is not a claim that some causes are more valuable than others; I am assuming for the sake of this post that all causes are equally valuable
  3. I will take as given that this difference exists now and is going to exist into the future (although I would be interested to hear arguments that it doesn’t/won’t)
  4. Given this, what should we do?
  5. My goal with this post is mostly to point out that we probably should do something weird, and less about suggesting a specific weird thing to do
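
For readers who want a more precise handle on this, the standard textbook definition of output elasticity of labor is sketched below (a standard formalization, not one the post commits to):

```latex
% Output elasticity of labor: the percentage change in a cause's output
% produced by a one percent change in the labor devoted to it.
\mathrm{OEL} = \frac{\partial \ln Y}{\partial \ln L}
             = \frac{\partial Y}{\partial L}\cdot\frac{L}{Y}
% Example: for a Cobb-Douglas production function Y = A K^{\alpha} L^{\beta},
% the output elasticity of labor is the constant \beta.
```

The claim above is then that this quantity (or, more loosely, the marginal value of one additional worker) differs across cause areas.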

What concretely does it mean to have lower or higher OEL?

I’m using CEA teams as examples since that’s what I know best, though I think similar considerations apply to other programs. (Also, realistically, we might decide that some of these are just too expensive if OEL goes down or redirect all resources to some projects with high starting cost if OEL goes up.)

Program: how it looks with high investment[2] vs. low investment

Events
  High investment: Catered; Coffee/drinks/snacks; Recorded talks; Convenient venues
  Low investment: Bring your own food; Venues in inconvenient locations; Unconference/self-organized picnic vibes

Groups
  High investment: Paid organizers; One-on-one advice/career coaching
  Low investment: Volunteer-organized meet ups; Maybe some free pizza

Online
  High investment: Actively organized Forum events (e.g. debates); Curated newsletter, highlights; Paid Forum moderators; Engineers and product people who develop the Forum
  Low investment: A place for people to post things when they feel like it, no active solicitation; Volunteer-based moderation; Limited feature development

Communications
  High investment: Pitching op-eds/stories to major publications; Create resources like lists of experts that journalists can contact; Fund publications (e.g. Future Perfect)
  Low investment: People post stuff on Twitter, maybe occasionally a journalist will pick it up

What are Community Builders’ options?

I see a few possibilities:

  1. Don’t change our offering based on the participant’s[3] cause area preference
    1. …through high OEL cause areas subsidizing the lower OEL cause areas
      1. This has historically kind of been how things have worked (roughly: AI safety subsidized cause-general work while others free-rode)
      2. This results in spending more on the low OEL cause areas than is optimal
      3. And also I’m not sure if this can practically continue to exist, given funder preferences
    2. …through everyone operating at the level low OEL cause areas choose
      1. This results in spending less on high OEL cause areas than is optimal
      2. I’m also not sure how sustainable this is – e.g. if EA events are a lot less nice than AI safety events, AI safety people might just stop going to EA events[4]
    3. …through choosing some middle ground between what the low and high OEL cause areas want
      1. This results in inefficiencies on both sides
  2. Change our offering based on the participant’s cause area
    1. I explore this below

Can this be mitigated by moral trade?

  1. It seems to me like there are some opportunities for moral trade. E.g. if you have a university group, then maybe the Econ students go to GH&D, psychology students to digital sentience, etc. since these are the cause areas in which they have the strongest comparative advantage.
  2. Jonas suggests that working in cause areas other than your main one can sharpen skills and reduce insularity.
  3. Historically, more speculative causes have benefited from being attached to less speculative ones by being able to point to the latter's achievements as examples of actually doing something useful
    1. (Though this also has bad effects)
  4. There is potentially opportunity for moral trade on the individual level (e.g. I am a fit for biosecurity but want to work on animal welfare, I trade with someone who has the opposite skill set), which makes the value of individuals' labor less dependent on their cause area preferences.[5]
  5. I think the above mitigates some of the cause area differences, but I think we are still inevitably going to end up with substantial differences between cause areas. Some reasons why this seems inevitable:
    1. Different cause areas will have different existing levels of capital and labor
    2. Different cause areas will require different balances of capital versus labor (e.g. biology research might require expensive lab equipment, whereas global priorities research mostly just requires labor)
    3. Different cause areas will require different types of labor (notably, some cause areas might not value a randomly chosen undergraduate very much at all)
  6. It would be surprising if all of these factors perfectly canceled out

What if we could have our cake and eat it too?

  1. EA seems to be a very memetically fit set of ideas, perhaps more so than any individual cause area
  2. For example: I have heard from some AI safety university group organizers that, even though the vast majority of their group members have no interest in EA, amongst the ones who actually go on to have a career in AI safety a large fraction are EA-involved
  3. It would be extremely convenient if the best way to generate labor for a specific cause area was a cause-neutral presentation of EA ideas
  4. My guess is that cause-neutral activities are 30-90% as effective as cause-specific ones (in terms of generating labor for that specific cause), which is remarkably high, but still less than 100%
    1. e.g. spending $100 on an EA group will get you 30-90% as much labor for animal welfare as spending $100 on an animal welfare group would
  5. So I think the trade-offs here are less severe than one might expect, but still enough to mean that we have to prioritize

What would it look like to change our offering based on the participant’s cause area?

Note: cause area here is not solely a self-reported preference; it also reflects the participant's likely impact in that cause area. Some people might strongly prioritize a cause area but be unlikely to contribute to it (or vice versa), and this would presumably be taken into account.

What changing the offering based on cause area could look like, by program:

Events: Having cause area-specific events; having a different admissions bar based on the applicant's cause area; having a different ticket price that the attendee needs to pay, based on the applicant's cause area

Groups: Having cause area-specific groups; giving differential support to group members based on cause area (e.g. group organizers are paid to organize some types of events but not others, or group organizers give one-on-one advising only to people interested in certain cause areas)

Online: Proactively generate and curate content from some cause areas, while others are just driven by whatever people want to upload

Communications: Push stories and journalist resources for some cause areas, but not others

Some possible flow-through effects of changing our offering based on the participant's cause area

Negative effects:

  1. (More) people lying about their cause area preferences in order to receive more favorable treatment
  2. People working on lower OEL cause areas become more elite (e.g. only the top 5% of animal rights advocates get into EAG, but the top 30% of AI safety workers get in, meaning that AR attendees are more elite than AI safety ones), leading to weird social dynamics
  3. Lower OEL cause area aficionados being bitter about having a worse experience despite being equally (or more) dedicated, talented, etc.
    1. Also general exacerbation of the complaints we already hear about elitism
  4. (Not a complete list)

Positive effects:

  1. People rationally adjusting their career plans in response to “price signals”
    1. Importantly including people switching to earning to give because they realize their cause area has more labor than capital
  2. Less “bait and switch” vibe/complaints about intro materials – we are up front that some career paths are more valuable than others
  3. (Maybe) more efficient allocation of capital and labor across cause areas
  4. (Not a complete list)

Do we actually have to solve this now?

  1. Explicitly choosing any branch of this trilemma is going to upset a lot of people
  2. There is therefore a strong temptation to ignore the problem
  3. But, of course, ignoring the problem just means implicitly choosing one branch of the trilemma
  4. My guess is that explicitly choosing a branch will result in a better outcome
  5. I am therefore interested in discussion on this topic. Note that CEA is one logical entity that can make this choice, but approximately everyone involved in cause-general EA community building faces this trilemma, and I expect that e.g. different group organizers will choose different solutions

Thanks to Chana Messinger for suggesting this memo, and to Chana, Jake McKinnon, Gina Stuessy, Saul Munn, Charles He, Campbell Jordan, and Lizka Vaintrob for helpful feedback.

  1. ^

     In some ways, asking about OEL for randomly chosen undergrads is assuming the answer to the question. E.g. we would get different answers if the question was about the value of a randomly chosen development economist. Nonetheless, I think there is still some useful sense in which some cause areas generally get more value from labor than other cause areas.

  2. ^

     For simplicity, I’m assuming that the optimal level of investment correlates perfectly with the output elasticity of labor, but obviously this isn’t true. Notably, labor supply may be more or less responsive to changes in investment.

  3. ^

     Participant = attendee for events, group member for groups, etc.

  4. ^

     It’s not clear to me that this is true, and I would be interested in evidence in either direction. There are certainly many anecdotal examples of high net worth EA’s being perfectly willing to attend a conference in a rundown hotel, for example. But I do have a fairly strong prior that you can usually accomplish things by spending money, so if you spend less money you will be less able to accomplish things like “attract people from XYZ group”.

  5. ^

     Even if this theoretically works though, I expect it to be difficult in practice. E.g. it’s hard for people to maintain a motivation to work on something they don’t care about but are doing just for moral trade reasons, and it’s hard for each side of a match like this to actually find each other.


Comments

My guess is that cause-neutral activities are 30-90% as effective as cause-specific ones (in terms of generating labor for that specific cause), which is remarkably high, but still less than 100%

This isn't obvious to me. If you want to generate generic workers for your animal welfare org, sure, you might prefer to fund a vegan group. But if you want people who are good at making explicit tradeoffs, focusing on scope sensitivity, and being exceptionally truth-seeking, I would bet that an EA group is more likely to get you those people. And so it seems plausible that a donor who only prioritized animal welfare would still fund EA groups if they otherwise wouldn't exist.

On a related point, I would have been nervous (before GPT-4 made this concern much less prominent) about whether funding an AI safety group that mostly just talked about AI got more safety workers, or just more people interested in explicitly working on AGI.

Relatedly: I expect that the margins change with differing levels of investment. Even if you only cared about AI safety, I suspect that the correct amount of investment in cause-general stuff is significantly non-zero, because you first get the low-hanging fruit of the people who were especially receptive to cause-general material, and so forth.

So it actually feels weird to talk about estimating these relative effectiveness numbers without talking about which margins we're considering them at. (However, I might be overestimating the extent to which these different buckets are best modelled as having distinct diminishing returns curves.)
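
To make the diminishing-returns point above concrete, here is a minimal toy model (all functional forms and numbers are illustrative assumptions, not estimates from the post or the comment): a funder who cares about a single cause splits a fixed budget between cause-specific and cause-general community building, with a separate concave returns curve for each channel.

```python
# Toy model: a single-cause funder splits a fixed budget between cause-specific
# and cause-general community building. Both channels have diminishing returns
# (square-root curves, chosen purely for illustration), and cause-general spending
# is assumed to be only 60% as effective for this cause at a given spend level
# (an arbitrary value inside the post's 30-90% range).

import numpy as np

BUDGET = 100.0          # total dollars to allocate (arbitrary units)
RELATIVE_EFFECT = 0.6   # assumed effectiveness of cause-general relative to cause-specific

def labor_for_cause(specific_spend: float) -> float:
    """Labor generated for the funder's cause, given how much goes to the specific channel."""
    general_spend = BUDGET - specific_spend
    # Both terms are concave, so marginal returns fall as spending on a channel rises.
    return np.sqrt(specific_spend) + RELATIVE_EFFECT * np.sqrt(general_spend)

splits = np.linspace(0.0, BUDGET, 1001)
best = splits[np.argmax([labor_for_cause(s) for s in splits])]

print(f"Optimal cause-specific spend: ${best:.0f} of ${BUDGET:.0f}")
print(f"Optimal cause-general spend:  ${BUDGET - best:.0f}")
# With these assumptions the optimum is interior (roughly $74 specific / $26 general):
# even a purely single-cause funder buys some cause-general capacity, because the
# first dollars there pick up the low-hanging fruit.
```

The qualitative conclusion (an interior optimum) holds for any strictly concave pair of curves; the exact split of course depends entirely on the assumed numbers.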

I agree. A funder interested in career changes in one cause area will probably reach only a subset of potential talent if they target only people who are already interested in that cause area, rather than generally capable individuals who could choose among directions.

In the business context, you could imagine a recruiter having the option to buy a booth at a university specialising in the area the company is working in vs. buying one at a broad career fair at a top university. While the specialised university may bring more people who have trained in and are specialised in your area, you might still go for the top university, as the talent there might have greater overall potential, can more easily pivot, or can contribute in more general areas like leadership, entrepreneurship, communications or similar.

One aspect here is also the timeframe you're looking at. If we think of EA community building as talent development, and we're working with people who might have many years until they peak in their careers, then focussing on a specific cause area might be limiting. A funder who is interested in job changes in one cause area now can still see the value of a pipeline of generally capable people skilling up in different areas of expertise before being a good fit for a new role. The Open Phil EA/LT Survey touches on this, and similarly, Holden's post on career choices for longtermists also covers broader skills independent of cause area.

In the business context, you could imagine a recruiter having the option to buy a booth at a university specialising in the area the company is working in vs. buying one at a broad career fair at a top university. While the specialised university may bring more people who have trained in and are specialised in your area, you might still go for the top university, as the talent there might have greater overall potential, can more easily pivot, or can contribute in more general areas like leadership, entrepreneurship, communications or similar.

 

I think this is a spot-on analogy, and something we've discussed in our group a lot.

One additional cost of cause specific groups is that once you brand yourself inside a movement, you get drawn into the politics of that movement. Other existing groups perceive you as a competitor for influence and activists. Hence they become much less tolerant of differences in your approach. 

For example, an animal advocacy group advocating for cultivated meat in my country would frequently be bad-mouthed by other activists for not being a vegan group (because cultivated meat production uses some animal cells taken without consent).

My observation is that animal activists are much more lenient when an organisation doesn't brand itself as an "animal" organisation.

Yep, I think this is a good point, thanks!

And so it seems plausible that a donor who only prioritized animal welfare would still fund EA groups if they otherwise wouldn't exist.

I think this is possibly correct but unfortunately not at a level to be cruxy - animal welfare groups just don't get that much funding to begin with, so even if animal welfare advocates valued EA groups a bit above animal welfare groups, it's still pretty low in absolute terms.

Do people object to there being AI safety-specific or X-risk-specific groups and events? Animal advocates have their own events and groups, like the AVA summit (although not exclusively effective animal advocacy). I think this + relatively cause-general EA groups and events is a good solution if

  1. those who prioritize AI safety don't want to pay disproportionately for general EA community building without it emphasizing AI safety roughly in proportion to their contributions, and
  2. a decent share of EA community members don't want AI safety to take over EA.

Maybe cause-general EA groups and events would end up underfunded if most members strongly prefer one or two causes over the rest, but this coordination problem seems solvable, and I think a lot of members of the community support EA in general or support multiple causes.

This post was interesting - sorry I'm replying so late. As a community builder on the side of my day job as an accountant, I see your table above a bit differently and think it would be better to split the costs into:

* Capital expenditure (i.e. an investment in infrastructure that can be used over the long term but is unlikely to see a large return in the short term, e.g. assets like content that can be referenced at any point in time, or infrastructure that people can use to productively contribute)
* Operating expenditure (i.e. recurring costs that do not create an asset that can be used at a later date, e.g. marketing, event expenditure, salaries)

As standard, you want to keep your operating expenses as low as you can while still growing, and invest as much as you can in infrastructure that enables long-term growth.

Curious about your thoughts @Ben_West🔸 
 

Program: capital / infrastructure investment vs. operating expenses

Events
  Capital / infrastructure investment: Recorded talks
  Operating expenses: Bring your own food; Venues in inconvenient locations; Unconference/self-organized picnic vibes; Convenient venues; Catered; Coffee/drinks/snacks

Groups
  Capital / infrastructure investment: Paid organizers (I think you could argue this is an investment in a future EA leader, but it would depend on whether there's investment in their growth); One-on-one advice/career coaching
  Operating expenses: Volunteer-organized meet ups; Maybe some free pizza

Online
  Capital / infrastructure investment: A place for people to post things when they feel like it, no active solicitation; Volunteer-based moderation (investment in a community of active contributors to the EA project); Engineers and product people who develop the Forum; Wikis; Events functionality; Groups functionality (no need to maintain a separate mailing list)
  Operating expenses: Curated newsletter, highlights; Paid Forum moderators; Limited feature development; Actively organized Forum events (e.g. debates)

Communications
  Capital / infrastructure investment: Create resources like lists of experts that journalists can contact; Fund publications (e.g. Future Perfect); Maintenance of EA IP (i.e. brand)
  Operating expenses: People post stuff on Twitter, maybe occasionally a journalist will pick it up; Pitching op-eds/stories to major publications


Interesting, I had not thought about things this way before.

It seems uncontroversial that we should value assets which can be used over the long term more highly than those which can't, all else being equal, but I mostly see people modeling this by amortizing them with some constant discount rate. I am vaguely aware that accountants instead classify expenses as opex vs. capex, but honestly couldn't explain to you why that's better.

I guess you are saying that it is useful to do so because, heuristically, you should prefer to cut investment in opex before cutting investment in capex?

Yes that'd be my sense.

Capital expenditure is money spent on an asset which can reasonably be assumed to generate future value for the entity, either by increasing productivity or by reducing costs. Expenses (including both cost of program/outcome and overhead costs) wouldn't really be an investment; they'd be costs for something that doesn't have the ability to generate future returns or reduce the future marginal expenditure needed to generate the outcome again.

High quality assets like content, software and infrastructure can generate passive impact with minimal maintenance. E.g. 80k and Scott Alexander's content are still cited as the most common sources for new GWWC pledges.

Employees aren't usually considered assets for external financial reporting purposes because they are not owned by the shareholders and are free to leave. However, for a movement of people who all share ownership of EA and are not tied to any one cause area or charity, I think they can reasonably be defined as assets. A key insight from 80k is that, for those with a motivation to do good effectively, there is a clear incentive to invest in your own career capital (and it is also positive-sum to invest time multiplying the impact of others).

The highest value assets IMO are EAs that can demonstrate a strong ability to apply the core skills (ie. cause prioritisation, impact evaluation and reasoning about evidence) AND can independently contribute. I've been somewhat concerned about the reduction in opportunities for low investment contribution since I don't think passive consumption of content is as effective for building EA knowledge that can be applied on high impact projects. The purpose of providing these opportunities (like wiki contributing, encouraging running events, volunteering etc) is more about investing in future capacity of EAs than the direct impact.

More detailed accounting answer you can skip 👇🏻

Capex isn't an expense, as it doesn't go through the income statement (AKA profit and loss statement, i.e. the part of the accounts focused on annual financial performance). Capex creates an asset that sits on the statement of financial position (AKA balance sheet, i.e. the part of the accounts focused on the value of the business/charity).

Capex doesn't go through the income statement all at once; rather, the costs go through incrementally as depreciation (where there is a clear useful life of the asset, like machinery or equipment) or as amortization (which is less useful for charities as it's mostly for tax purposes). The accounting problem this solves is that putting the full cost of the asset through the income statement all at once, in the year of purchase, is not a fair representation of the economic reality for the business, since it isn't an expense for that one year.
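
A minimal worked example of straight-line depreciation, with purely hypothetical figures (not drawn from the comment):

```latex
% Straight-line depreciation spreads an asset's cost evenly over its useful life:
\text{annual depreciation} = \frac{\text{cost} - \text{residual value}}{\text{useful life}}
% Hypothetical example: a \$50{,}000 asset, no residual value, 5-year useful life
\text{annual depreciation} = \frac{50{,}000 - 0}{5} = 10{,}000 \text{ per year}
```

Each year $10,000 flows through the income statement as an expense, while the asset's remaining book value (cost minus accumulated depreciation) stays on the balance sheet.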

Thoughtful post.

If you're perceived as prioritising one EA cause over another, you might get pushback (whether for good reason or not). I think that's more true for some of these suggestions than for others. E.g. I think having some cause-specific groups might be seen as less controversial than having varying ticket prices for the same event depending on the cause area. 

[Terminological nitpick:]

You're writing "cause-general" but I keep on thinking you mean "cause-impartial".

Like, I think that providing more resources for some cause areas than others is being partial towards those causes. But it feels kind of orthogonal to the question of cause-generality. You could provide equal resources to all (of a fixed finite number of) causes, but have those resources be things which are useful to those causes rather than transferable across cause areas. That's being cause-impartial but not cause-general. Vice-versa, you could make all of your resources be training on general EA principles, but selectively provide them just to people working around certain cause areas (cause-general but not cause-impartial).

Is this just a terminological issue where you could switch over all the words (at least if you agreed with me on the meanings), or is there something subtle going on where you do mean cause-general in some places but not others?

Actually, hmm, per Schubert's article, "cause-impartial" may not be exactly right either. He uses that to point to ex ante neutrality in epistemic stance towards causes. The thing you mean is more like ex post even-handedness in allocation of resources between causes. 

Of Schubert's terms, the closest match to your meaning is "cause-agnostic", but it sounds wrong, since it's obviously an epistemic property, whereas the thing you're talking about isn't. I think that "impartiality" would be an apt word for this ex post even-handedness, except that it's already taken.

At this point I think you should either coin a new term, use "cause-agnostic", or use the generic "cause-neutral" for the thing you're talking about. (I care about the concept of cause-generality, and would prefer that it's not blurred into this, which is why I'm defending what might be seen as an arcane point of terminology.)

Thanks! I'm not sure that I'm using the terms correctly, but I think this is how Schubert intended them to be used?

Notably: the examples in my article are CEA projects, and he says "The Centre for Effective Altruism is a paradigmatic example of a cause-general organization."

I think you are saying something like: running EA Global is cause-impartial (because the "resource" of attending EAG is impartial to the attendee's cause), but not cause-general because attendance cannot be transferred between causes. Is that correct? If so, I think this is the thing that Schubert was trying to clarify with his comment about "Cause-flexible capacities and broad impact capacities" in the article.

I agree that that comment is highly relevant to this discussion.

I also agree with Schubert that CEA is a paradigm example of a cause-general organization, and I don't think the things you're discussing in your post are really about giving up cause-generality.

I think you are saying something like: running EA Global is cause-impartial (because the "resource" of attending EAG is impartial to the attendee's cause), but not cause-general because attendance cannot be transferred between causes. Is that correct?

No, I'd say it's cause-general (because the "resource" of attendance is not specific to the attendee's cause), but not cause-agnostic if you take into account the cause areas people support in making admissions decisions. (You could hypothetically have a version of EAG where admission decisions were blind to attendees' cause areas; in this case it would be cause-agnostic.) Some content at EAG is also cause-general, while other content is cause-decided.

I don't think the things you're discussing in your post are really about giving up cause-generality

Thanks, could you give a specific example?

Sorry, I wrote that based on overall impressions without checking back on the details, and I was wrong in some cases.

Curating content that's about particular causes probably is giving up on cause generality (but definitely giving up cause agnosticism). Different admission bars for events isn't giving up cause-generality. Cause-specific events could still be cause-general (e.g. if you had an event on applying general EA principles in your work, but aimed just at people interested in GHD), but in practice may not be (if you do a bunch of work that's specifically relevant for that cause area).

Cause-specific events could still be cause-general (e.g. if you had an event on applying general EA principles in your work, but aimed just at people interested in GHD), but in practice may not be (if you do a bunch of work that's specifically relevant for that cause area).

Not sure I understand this. Schubert's definition:

Cause-general investment have a wide scope: they can affect any cause.

A GHD event about general EA principles does not seem like it "can affect any cause."

Or, I guess there is some trivial butterfly-effect sense in which everything can affect everything else, but it seems like a GHD conference has effects which are pretty narrowly targeted at one cause, even if the topics discussed are general EA principles.

My read: an event about general EA principles, considered as a resource, is, as Schubert puts it, cause-flexible: it could easily be adapted to be specialized to a different cause. The fact that it happens to be deployed, in this example, to help GHD, doesn't change the cause-flexibility of the resource (which is a type of cause-generality).

I guess you could say that it was cause-flexible up until the moment you deployed it and then it stopped being cause-flexible. I think it's still useful to be able to distinguish cases where it would have been easy to deploy it to a different cause from cases where it would not; and since we have cause-agnostic vs cause-decided to talk about the distinction at the moment of commitment, I am trying to keep "cause general" to refer to the type of resource rather than the way it's deployed.

(I don't feel I have total clarity on this; I'm more confident that there's a gap between how you're using terms and Schubert's article than I am about what's ideal.)

Different admission bars for events isn't giving up cause-generality.

Why not? I think we agree (?) that EAG in its current form is ~cause-general. If we changed it so that the admission bar depends on the applicant's cause, isn't that making it less cause-general?

Less cause-agnostic (which is a property of what causes you're aiming at), not less cause-general (which is a property of the type of resource which is being deployed; and remains ~constant in this example).

Ahhh ok, I think maybe I'm starting to understand your argument. I think you are saying something like: the "resources being deployed" at an event are things like chairs, the microphone for the speaker, the carpet in the room, etc. and those things could be deployed for any cause, making them cause general.

In my mind the resources being deployed are things like the invites, the agenda, getting speakers lined up, etc. and those are cause-specific.

I would maybe break this down as saying that the Marriott or whoever is hosting the event is a cause general resource, but the labor done by the event organizing team is cause specific. And usually I am speaking to the event organizing team about cause generality, not the marriott, which is why I was implicitly assuming that the relevant resources would become cause specific, though I understand the argument that the Marriott's resources are still cause general.

Thanks, I think this is helpful towards narrowing in on whether there's a disagreement:

I'm saying that the labour done by the organizing team in this case is still cause general. It's an event on EA principles, and although it's been targeted at GHD, it would have been a relatively easy switch early in the planning process to decide to target it at GCR-related work instead.

I think this would no longer be the case if there was a lot of work figuring out how to specialise discussion of EA principles to the particular audience. Especially if you had to bring a GHD expert onboard to do that.

Great, two more clarifying questions:

  1. You say that the labor is cause general because it could counterfactually have been focused on another cause, but would you say that the final event itself is cause general?
  2. Would you say that a donation which is restricted to only be used for one cause is cause general because it could counterfactually have been restricted to go to a different cause?

I think figure 2 in Schubert's article is important for my conception of this.

On question 1: I think that CEA has developed cause-general capacity, some of which (the cause-flexible) is then being deployed in service of different causes. No, I don't think that the final event is cause-general, but I don't think this undercuts CEA's cause-generality (this is just the nature of cause-flexible investments being eventually deployed).

On question 2: I don't think the donation itself is cause-general, but I'd look at the process that produced the donation, and depending on details I might want to claim that was cause-general (e.g. someone going into earning to give).

OK, if we agree that the final event is not cause general, then I'm not sure I understand the concern. Are you suggesting something like: where I say "community building projects must either: ... or break cause-generality", I instead say "... or be targeted towards outputs (e.g. events) that break cause-generality"?

Hmm. I suppose I don't think that "break cause-generality" is a helpful framing? Like there are two types of cause-general capacity: broad impact capacity and cause-flexible capacity. The latter is at some point going to be deployed to a specific cause (I guess unless it's fed back into something general, but obviously you don't always want to do that).

On the other hand your entire post makes (more) sense to me if I substitute in "cause-agnostic" for "cause-general". It seemed to me like that (or a very close relative) was the concept you had in mind. Then it's just obviously the case that all of the things you are talking about maybe doing would break cause-agnosticism, etc.

I'm very interested if "cause-agnostic" doesn't seem to you like it's capturing the important thing.

As you mentioned elsewhere, "cause agnosticism" feels like an epistemic state, rather than a behavior. But even putting that aside: It seems to me that one could be convinced that labor is more useful for one cause than it is for another, while still remaining agnostic as to the impact of those causes in general.

Working through an example, suppose:

  1. I believe there is a 50% chance that alternative proteins are twice as good as bed nets, and a 50% chance that they are half as good. (I will consider this a simplified form of being maximally cause-agnostic.)
  2. I am invited to speak about effective altruism at a meat science department
  3. I believe that the labor of the meat scientists I'm speaking to would be ten times as good for the alternative protein cause if they worked on alternative proteins as it would be for the bed net cause if they worked on bed nets, since their skills are specialized towards working on AP.
  4. So my payoff matrix is (one way to fill in the numbers is sketched just after this list):
    1. Talk about alternative proteins, which will get all of them working on AP
    2. Talk about bed nets, which will get all of them working on bed nets
    3. Talk about EA in general, which I will assume results in a 50% chance that they will work on alternative proteins and a 50% chance that they work on bed nets
  5. I therefore choose to talk about alternative proteins
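
A worked version of the payoff matrix, normalizing the value of the scientists' labor on bed nets to 1 (this normalization and the resulting numbers are a reconstruction of the setup above, not figures taken from the comment):

```latex
% Expected value of each talk, in bed-net-equivalent units, assuming:
%   - the scientists' labor is worth 1 on bed nets and 10 (in AP units) on alternative proteins
%   - AP is worth 2x bed nets with probability 0.5 and 0.5x with probability 0.5
\mathbb{E}[\text{AP talk}]      = 10 \times (0.5 \cdot 2 + 0.5 \cdot 0.5) = 12.5 \\
\mathbb{E}[\text{bed net talk}] = 1 \\
\mathbb{E}[\text{EA talk}]      = 0.5 \cdot 12.5 + 0.5 \cdot 1 = 6.75
```

Under this filling-in, talking about alternative proteins dominates, while the speaker's credence about which cause is better stays at 50/50.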

 It feels like this choice is entirely consistent with me maintaining a maximally agnostic view about which cause is more impactful?

Thanks for the example. I agree that there's something here which comes apart from cause-agnosticism, and I think I now understand why you were using "cause-general".

This particular example is funny because you also switch from a cause-general intervention (talking about EA) to a cause-specific one (talking about AP), but you could modify the example to keep the interventions cause-general in all cases by saying it's a choice between giving a talk on EA to (1) top meat scientists, (2) an array of infectious disease scientists, or (3) random researchers.

This makes me think there's just another distinct concept in play here, and we should name the things apart.

Isn't this true for the provision of any public non-excludable good? A faster road network, public science funding, or clean water benefits some people, firms, and industries more than others. And to the degree community-building resources can be discretized, ordinary market mechanics can distribute them, in which case they cease to be cause-general.

On the other side of the argument, consider that any substantial difference in QALY/$ implies that a QALY maximizer should favor giving $ to some causes over others, and this logic holds in general for [outcome you care about] / [resource you're able to allocate]. If that resource is labor, attention, or event-space hours, you rederive the issue laid out in the original post.
