
A rough untested idea that I'd like to hear others' thoughts about. This is mostly meant as a broader group strategy framing but might also have interesting implications for what university group programming should look like. 

As EA university group organizers, we are often told to "backchain" our way to impact:

What's the point of your university group?

"To create the most impact possible, to do the greatest good we can"

What do you need in order to create that?

"Motivated and competent people working on solving the world's most pressing problems"

And as a university group, how do you make those people?

"Find altruistic people, share EA ideas with them, provide an environment where they can upskill"

What specific things can you do to do that?

"Intro Fellowships to introduce people to EA ideas, career planning and 1-1s for upskilling"

This sort of strategic thinking is useful at times, but I think that it can also be somewhat pernicious, especially when it naively justifies the status quo strategy over other possible strategies.[1] It might instead be better to consider a wide variety of framings and figure out which is best.[2] One strategy framing I want to propose that I would be interested in testing is viewing university groups as "impact driven truth-seeking teams."

What this looks like

An impact-driven truth-seeking team is a group of students trying to figure out what they can do with their lives to have the most impact. Imagine a scrappy research team where everyone is trying to answer this research question: "how can we do the most good?" Nobody has figured out the answer yet, nobody is a purveyor of any sort of dogma, and everyone is in it together to figure out how to make the world as good as possible with the limited resources we have.

What does this look like? I'm not all that sure, but it might have some of these elements:

  • An intro fellowship that serves as an introduction to cause prioritization, philosophy, epistemics, etc.
  • Regular discussions or debates about contenders for "the most pressing problem of our time"
    • More of a focus on getting people to research and present arguments themselves than on having conclusions presented to them to accept
  • Active cause prioritization
    • Live google docs with arguments for and against certain causes
    • Spreadsheets attempting to calculate possible QALYs saved, possible x-risk reduction, etc. (a minimal sketch of this kind of estimate appears after this list)
    • Possibly (maybe) even trying to do novel research on open research questions
  • No doubt some of the elements we identified before in our backchaining are important too: the career planning and the upskilling
  • I'm sure there's much more that could be done along these lines that I'm missing or that hasn't been thought of yet at all
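
To make the spreadsheet idea above a bit more concrete, here is a minimal, purely hypothetical sketch (in Python rather than a spreadsheet) of the kind of back-of-the-envelope expected-value comparison a group might start from. The cause names and all of the numbers are made up for illustration; a real attempt would source its figures and model the (large) uncertainty around them.

```python
# A purely illustrative comparison of hypothetical causes by expected QALYs per dollar.
# Every figure below is a made-up placeholder, not a real cost-effectiveness estimate.

causes = {
    # cause: (hypothetical cost per unit of intervention in $, hypothetical QALYs saved per unit)
    "malaria_nets": (5.0, 0.05),
    "deworming": (1.0, 0.002),
    "biosecurity_advocacy": (250_000.0, 5_000.0),
}

budget = 1_000_000  # hypothetical funding pool in dollars

for cause, (cost_per_unit, qalys_per_unit) in causes.items():
    qalys_per_dollar = qalys_per_unit / cost_per_unit
    total_qalys = budget * qalys_per_dollar
    print(f"{cause}: ~{qalys_per_dollar:.4f} QALYs per $, ~{total_qalys:,.0f} QALYs from a ${budget:,} budget")
```

The point isn't the specific numbers; it's making assumptions explicit enough that other group members can argue with them.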

Another illustrative picture: imagine that instead of university groups being marketing campaigns for Doing Good Better, we could each be a mini 80,000 Hours research team,[3] starting from first principles and building our way up, assisted by the EA movement but not constrained by it.

Cause prio for its own sake, for the sake of EA

Currently, the modus operandi of EA university groups seems to be selling the EA movement to students by convincing them of arguments to prioritize the primary EA causes. It's important to realize that the EA handbook serves as an introduction to the movement called Effective Altruism[4] and to the various causes it has already identified as impactful, not as an introductory course in cause prioritization. It seems to me that this is the root of much of the unhealthy epistemics that can arise in university groups.[5]

I don't think that students in my proposed team should stop engaging with the movement and its ideas. On the contrary, I think that more ideas about doing good better have come from this milieu than any other in history. I don't think it's a crime to defer at times. But before deferring, I think it's important to realize that you're deferring, and to make sure that you understand and trust who or what you're deferring to (and perhaps to first have an independent impression). Many intro fellowship curricula (eg the EA handbook) come across more as manifestos than research agendas—and are often presented as an ideology, not as evidence that we can use to make progress on our research question.

I think it's best for university groups to unabashedly explore beyond the boundaries of the EA forum and consider a wide range of opinions. Some people might see this as too great a risk: a few promising students who think deeply about the world's most pressing problems could finish their cause prioritization unaligned with the EA movement, choosing some "non-EA cause" or preferring not to affiliate with the EA brand. At first glance this looks like a loss for EA, but healthier epistemics among people trying to solve the most pressing problems is a win for the greater good,[6] and if that's what this movement cares about, it is a win for the effective altruists too.

Possible problems with this approach

  • Some people might not be as interested in joining an EA group that looks like what I've proposed above. A cause prioritization introductory course, for example, might require a good amount of effort and might put off students who aren't interested in math/econ/philosophy. I'm not sure if this is a good fence or if we would be losing students who could contribute a lot of good to the world.
  • This proposal might just be untenable: it might be too much to ask already busy students to become a "research team," or maybe only very few students would be interested.
  • Maybe EA groups should be an intro to the EA movement, maybe the whole epistemics thing is overrated. It might be true that the world in which we make lots of EAs is better than the world in which we make lots of good cause prioritizers.

Thoughts and feedback are very much appreciated!

  1. ^

    Some other unhealthy effects this (might) have: 

    a) create an unhealthy mindset of "Theory of Change"-executors (the organizers) and "Theory of Change"-subjects (the group members). 

    b) as I discuss next, ignore less obvious factors like epistemics. Where does having good epistemics fit into this? It doesn't, because the sort of epistemics we're discussing is a property of groups more than of individuals, so it doesn't fit neatly into this chain-of-thought questioning (though I'm open to the idea that sufficiently good backchaining might solve this).

  2. ^

    Is this naive, heretical frontchaining? I think that you can answer those questions above in a hundred different valid ways, leading to many different possible backchained strategies. The backchaining above might help with finding the key ingredients you need to secure to make an impact, but IMO it shouldn't be your group strategy. Instead, group strategy should come from testing different hypotheses about how groups might best work. (We probably don't know of and probably haven't tried the optimal strategies!). In what follows, I propose one such hypothesis. 

  3. ^

    This analogy doesn't work perfectly. I chose 80k because they do work both on cause prioritization and also on testing fit/career planning/upskilling etc, which I might not be fully conveying by the title of this post. I don't mean that we should just do cause prio research and never get around to doing anything. See more here.

  4. ^

    The presence of marketing material like this really makes this clear.

  5. ^

    I think plenty has already been said about this (to the point that I think it's been overstated and overgeneralized) and I won't comment too much on it.

  6. ^

    Why should we care about epistemics? I think that this is an important question to ask ourselves. If we (assuming a moral-realist sort of act utilitarianism) figure out what the most pressing problems are with absolute certainty, then maybe we should start proselytizing by just telling people what the most important causes are and convincing them to work on those causes. This seems especially true if we are perfectly rational reasoners and so are the people we're trying to convince.

    The problem is, none of these assumptions are true. We're just guessing at what the most pressing problems are (especially as undergraduate university group organizers!) and there are all sorts of other moral uncertainties in addition to the factual ones. I think that this should probably be more developed.


Comments

I like this idea. I do wonder to what extent EA is implemented more as a field-building exercise, getting others to act on its previously determined conclusions, rather than as a project promoting the self-determination of individuals through education on rationality, epistemics, and philosophy to guide action.

As someone who has first-hand experience with many of the points mentioned in the post, I can say that the current state of college-level EA groups is largely limited to theoretical engagement rather than action. I can guess that there are multiple reasons, but here are some that I have personally observed:

  • In practice, college studies often align poorly with EA values, especially for students at technical and business institutions.
  • Most students engage with doing good only in concentrated stretches; the rest of their time is reserved for their original streams of study.
  • The common and foremost goal[1] of college EA groups is organizing events to read and discuss resources (posts and blogs). In practice, EA college groups are typically WhatsApp groups that just cross-post events. There may be opportunities locally, but college students typically can't afford to get involved at the student level.
  • College students are often surrounded by large groups of non-EA people.

  1. ^

    First-hand experience. 

I'm not entirely sure whether your point here is in agreement or disagreement with my previous statement. Correct me if I am wrong, but I think you're saying something along the lines of:

"students aren't taking immediate action and are only going over theory, thus the community building efforts focused on theory must be open-minded and don't represent field building"

If that's the case, I don't see the complete logic of how, just because it 'can' work out that way, it 'must'; that ambiguity is what I'm focused on and would like more evidence about. In my experience, success metrics for uni groups particularly focus on participation rather than depth/retention of engagement.

I agree with the proposal of university groups as impact-driven truth-seeking teams, and I mentioned a few of my observations in response to your comment. Of course, it can work out; I tried to think through some of the reasons behind the same ambiguity you mentioned. It's just my two cents. I, too, consider participation to be important above all.

This is a very interesting idea. I'd love to see if someone could make it work.

[Opinion exclusively my own]

I think this framing has a lot of value. Funnily enough, I've heard tales of groups like this from the early days of EA groups, when people were just figuring things out, and this pattern would sometimes pop up.

I do want to push back a little bit on this:

But before deferring, I think it's important to realize that you're deferring, and to make sure that you understand and trust who or what you're deferring to (and perhaps to first have an independent impression). Many intro fellowship curricula (eg the EA handbook) come across more as manifestos than research agendas—and are often presented as an ideology, not as evidence that we can use to make progress on our research question.

The EA handbook (which nowadays is what the vast majority of groups use for their intro fellowships) includes three “principles-first” weeks (weeks 1, 2, and 7), which are meant to help participants develop their own opinions with the help of only the basic EA tools or concepts.

Furthermore, week 7 (“What do you think”) includes a reading of independent impressions, and learning that concept (and discussing where EA might be wrong) is one of the key objectives of the week:

A key concept for this session is the importance of forming independent impressions. In the long run, you’re likely to gain a deeper understanding of important issues if you think through the arguments for yourself. But (since you can’t reason through everything) it can still sometimes make sense to defer to others when you’re making decisions.

In the past, a lot of work has been put into trying to calibrate how “principles-based” or “cause-oriented” intro fellowships should be, and I think the tradeoff can be steep for university groups, since people can get rapidly disenchanted by abstract philosophical discussion about cause prioritization (as you mention). This can also lead to people treating EA as a purely intellectual exercise, instead of thinking of concrete ways in which EA ideas should (for example) change their career plans.

That said, I think there are other ways in which we could push groups further along in this direction, for example:

  • We could create programming (like fellowships or workshops) around cause prioritization, exploring different frameworks and tools in the field. Not just giving a taste of these frameworks (like the handbook does), but also teaching hands-on skills that participants can use for actual cause prioritization research.
  • We could push for more discussion centered around exploratory cause research, for example, by creating socials or events in which participants try to come up with cause candidates and do some preliminary research on how promising they are (i.e. a “cause exploration hackathon”).

I know there has been some previous work in this direction. For example, there's this workshop template, or this fellowship on global priorities research. But I think we don't yet have a killer demo, and I would be excited about groups experimenting with something like this.
