
A Vision for Harvard EA, 2018–19

Cullen O’Keefe, President

Google Doc Version (easier to read, with footnotes)

Preface

This document will hopefully provide a short, comprehensible, and concrete model for how I would like to run the Harvard University Effective Altruism Student Group (“HUEASG”) this year.[1] I’m sharing it here so that other EA leaders can use it to the extent they find it useful.[2]

This third draft synthesizes ideas from the documents below, feedback from readers of earlier drafts, and conversations with other EA leaders (to whom I am immensely grateful).

Foundational Texts

(Read these first if you haven’t)

● Ales Flidr & James Aung, Heuristics from Running Harvard and Oxford EA Groups

● CEA, A Three-Factor Model of Community Building [hereinafter, Three-Factor Model]

● CEA, The Funnel Model [hereinafter, Funnel Model]

● CEA, A Model of an EA Group [hereinafter, Model of a Group]

● CEA, CEA’s Current Thinking [hereinafter, Current Thinking]

● CEA, Effective Altruism Community Building [hereinafter, Community Building]

Guiding Principles

It would be redundant to formulate a set of values for HUEASG from scratch. Instead, I will stress the importance of remaining aligned with the most recent principles and best practices published by EA leaders (e.g., CEA).[3] CEA’s current definition of EA is: “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.”[4] CEA lists the following as EA guiding principles:[5]

  • Commitment to Others
  • Scientific Mindset
  • Openness
  • Integrity
  • Collaborative Spirit[6] 

 

Current thinking within CEA endorses a long-termist approach to EA: “We believe that the most effective opportunities to do good are aimed at helping the long-term future.”[7]  HUEASG should mirror this,[8] without marginalizing or eschewing short-term causes.  Note that this encompasses both prevention of existential risk and “trajectory shifting.”[9] 

 

Some might think that this framing conflicts with our fundamental commitment to cause neutrality.[10] However, as we use it here, “cause-neutral” means roughly “cause-impartial”: “select[ing] causes based on impartial estimates of impact.”[11] Thus, it is perfectly compatible with impartially-reached cause decidedness.[12] Furthermore, long-termism is an epistemic framing rather than a cause:[13] it encourages us to give substantial consideration to interventions’ long-term effects. “However, we recognize that this argument rests on some significant moral and empirical assumptions, and so we remain uncertain about how valuable the long-term future is relative to other problems. We think that there are other important cause areas, particularly animal welfare and effective global health interventions [even when considering only the short-term effects of these].”[14] Thus, the long-termist framing effects a quantitative change of focus, not a wholesale rejection of any particular cause.[15]

Role of HUEASG Within EA

Generalizing a bit, mainstream EA organizations are not primarily funding-constrained.[16]  Meanwhile, important talent gaps persist within major organizations.[17]  A related observation is that “some people have the potential to be disproportionately impactful. It appears as though some people may be many orders of magnitude more impactful than average just by virtue of the resources (money, skills, network) they have available.”[18] 

 

Because of these and other factors, most value from our group will likely come from Core EAs: roughly, people who give ≥80% weight to EA considerations when deciding on a career[19] path.[20] Key routes to value are, therefore: catalyzing,[21] cultivating,[22] and retaining[23] potential Core EAs.

 

Relatedly, Harvard EA likely creates the most value for the EA community by playing to our comparative advantages. Our main comparative advantage is access to young, promising,[24] and well-connected students.

 

Since students broadly lack the resources, skills, and knowledge to be immediately useful to the EA community,[25] this all implies that our priorities should be:[26] 

  • Catalyzing, cultivating, and retaining Core EAs, with a special emphasis on educating Core EAs on the intellectual/academic foundations of effective altruism[27] 
  • Helping Core EAs devise career plans[28] 
  • Directing Core EAs to useful resources and upskilling opportunities

Relation to the Funnel Model

Given the above, a major heuristic for HUEASG activities should be how they fit in with the Funnel Model. That is, we should ask ourselves, “How will this activity move people further down the funnel?”[29]  “Trying to get a few people all the way through the funnel is more important than getting every person to the next stage.”[30]  I explore the implications of this below.

The Top of the Funnel: “Taste of EA”

This idea draws heavily from Flidr & Aung. 

“Introduction to EA”-type events have been the default beginning-of-the-year activity for a while. I think we should move away from this model for a number of reasons. First, I worry that “Intro to EA” events are potentially low fidelity, which can have bad effects on our group’s reputation and on EA generally.[31] Relatedly, since EA comprises a number of nuanced ideas across several different academic disciplines, Intro to EA events that are expansive enough to cover all the important points are not very deep, and are likely overwhelming to newcomers.[32]

Flidr & Aung therefore offer the following advice:

[T]hink about outreach efforts as an ‘offer’ of EA where people can get a taste of what it’s about and take it or leave it. It’s OK if someone’s not interested. A useful heuristic James used for testing whether to run an outreach event is to ask “to what extent would the audience member now know whether effective altruism is an idea they would be interested in?” It turned out that many speaker events that Oxford were running didn’t fit this test, and neither did the fundraising campaign.

Don't "introduce EA". It's fine if people don't come across EA ideas in a particular sequence. First, find entry points that capture a person's interest. If someone finds EA interesting and likes the community, they will absorb the basics pretty soon.

My own model is that Core EAs usually need a few very important traits:

(1) Dedication to doing good[33] 

(2) Human capital (i.e., skills and resources)[34] 

(3) Reliability/dependability

(4) Knowledge of topics related to effective altruism[35]

 

Early outreach efforts should optimize for some combination of (1) and (4).[36] That is, we should aim to attract people predisposed to doing good and people from fields with a strong track record of producing Core EAs (e.g., philosophy, computer science, economics, biology).[37] Optimization for the remaining traits comes at later stages of the funnel.

 

Admittedly, I still don’t have a great idea of what this will look like. Combined with insights from Oxford EAs, my experience at HLS suggests that introductory talks and student org fairs are still effective mass outreach tools. However, going forward I expect to:

  • put more emphasis on introducing the main motivations for EA (e.g., differential cost-effectiveness across interventions, the lack of quantification and evidence in typical philanthropy)
  • put more emphasis on introducing specific, representative projects that EAs do, the value of which is comprehensible to non-EAs
  • put less emphasis on more specific concepts in EA (e.g., scope insensitivity, long-termism)

 

Huw Thomas suggests that focused outreach to students in historically EA-productive disciplines (e.g., computer science) might also be worthwhile. However, to promote group diversity, broad outreach is still highly desirable.

Next Steps: 1-on-1s

1-on-1s (1:1s) are a good next step for several reasons.[38]  First, they begin to screen (albeit very lightly) for the third trait-cluster Core EAs need: reliability/dependability.[39]  Second, 1:1s offer a good way to build friendship with potential Core EAs (more later). Third, they offer a good way to communicate EA ideas in a more high-fidelity manner.[40]  Finally, they offer a good opportunity to “signpost”: point newcomers to existing EA literature[41] and organizations that suit their interests,[42] thus increasing their knowledge of EA.

The Expanding Core

The final step, of course, is to continue the Funnel process by moving committed individuals towards Core involvement. I admit that I don’t have a solid model for what this should look like. If my hypothesis about the traits a Core EA needs is correct, then such a person should be largely self-motivated to continue learning EA content on their own. If this too is right, then perhaps most of the value org leadership can add comes from:

  • Preventing or slowing down attrition
  • Continuing to signpost resources for, and network on behalf[43] of, the newcomer
  • Aiding in career planning[44]
  • Activities aimed at increasing follow-through with EA-informed career plans[45] 

I imagine that formal events will be less important to this process than developing a sense of community with newer Core members.[46]  Getting new and prospective Core EAs to feel at home in and strongly identify with EA is likely one of the best ways to get them to remain in the Core. Social events seem like a good way to achieve this. We should also ask current org members about what programming they would find valuable.[47]

Implications for Group Structure and Function

Specialize Events and Roles

School groups should map their planning to the Funnel Model.[48]  For groups with sufficient personnel, it might make sense to begin to specialize for each stage, too (i.e., have dedicated outreach, 1:1, and Core leads). 

Note that, although these are ordinal steps, there is no reason why a group cannot make parallel efforts at each level (including regular 1:1s).[49] That is, the Funnel does not have to map cleanly onto the entire school year, with outreach happening only at the beginning.

Focus on Skill-Building

While we should encourage EAs to embrace other means of upskilling,[50] we should also grow Core EAs’ human capital when it plays to our comparative advantages.[51]  Along this line, Ales Flidr suggests “[f]ocusing on the memes of rationality, prioritization and exploring unconventional options [for doing good].”

Strong Focus on Structured EA Social Events

This model puts a strong emphasis on community social events, since I think those are quite likely to effectively move people down the Funnel and retain Core EAs. Such social events, however, should be somewhat structured while still allowing ample time for casual socializing.[52]

Implications for Pledge Work

This implies that we should put less emphasis on pledge work. I think there’s still a place for such work now,[53] but it should not be the main activity of our groups except insofar as it is a good mechanism for keeping group organizers committed.[54]

Ideally, an EA organization would have programming cleanly differentiated to engage people in all stages of the Funnel (including those unlikely to move further down). In such a world, pledge work would remain valuable as a way to get non-Core EAs (and others) to enact EA principles. Indeed, I think it’s plausible that a lot of EA’s future value will come from changing norms about philanthropy, if not career choice. But right now, we probably lack the personnel to enact such a tiered approach. In the long run, it might be worthwhile to consider creating two subsidiary EA groups: one focused on traditional pledge-type work and one focused on career choice.[55] 

Other Avenues for Value

Ales Flidr suggests the following as possible means of creating value with HUEASG:

  • Targeted relationship building with key professors and their grad students, particularly those who have a good chance
  • Relatedly, studying the people who went through the law school (professors, students) who had the greatest impact on the world: what they did, how they interacted with groups, etc. Similarly, researching current faculty and students.
  • Studying what, concretely, the most successful groups (by our metrics, i.e., thinking and behavior change) do.
  • Trying to create better working connections with [academic EA institutions like the Research Scholars Programme] and GPI

Endnotes

[1] This encompasses all Harvard schools except for Harvard College.

[2] The following people have been especially influential to my thinking: Frankie Andersen-Wood, James Aung, Chris Bakerlee, Harri Besceli, Ryan Carey, Holly Elmore, Ales Flidr, Eric Gastfriend, Kit Harris, Jacob Lagerros, Ed Lawrence, Darius Meissner, Linh Chi Nguyen, Alex Norman, Huw Thomas, and Hayden Wilkinson. All mistakes are my own.

 

Unless part of a direct quotation, a reference to any of the above does not necessarily imply endorsement; rather, it suggests that the idea is related to (or is potentially a reply to) their comments or input.

[3] Cf. CEA, Current Thinking (“We believe that individuals will have a greater impact if they coordinate with the community, rather than acting alone.”).

[4]  CEA, CEA’s Guiding Principles.

[5] Id.

[6]  See also Stefan Schubert & Owen Cotton-Barratt, Considering Considerateness.

[7]  CEA, Current Thinking.

[8]  H/T Darius Meissner.

[9]  See CEA, Current Thinking.

[10]  H/T James Aung; Holly Elmore; Huw Thomas.

[11]  See Stefan Schubert, Understanding Cause-Neutrality.

[12]  Cf. id.

[13]  H/T Holly Elmore.

[14]  CEA, Current Thinking; H/T Huw Thomas.

[15]  H/T Holly Elmore.

[16]  See 80K, What Are The Most Important Talent Gaps in the Effective Altruism Community? (mean reported funding constraint of 1.3 out of 4); cf. CEA, Current Thinking (“We believe that CEA can currently be most useful by allocating money to the right projects rather than by bringing in more donors.”).

[17]  See 80K, supra note 16. Note that respondents said they would need a $250,000 donation to forgo their most recent junior hire for three years. This suggests a very high tradeoff in value between career-oriented activities and donation-oriented ones. In practice, I imagine that this means our resources will virtually always be best spent when optimized for career changes.

[18]  CEA, Current Thinking. H/T Darius Meissner.

[19]  With earning to give as a plausible EA career. H/T James Aung.

[20]  This definition borrows heavily from Ed Lawrence.

[21]  Defined as counterfactually causing someone to consider dedicating their career to EA.

[22]  Defined as increasing one’s dedication to being or becoming a “core EA.”

[23]  Defined as protecting people completely dedicated to being a “core EA” from becoming less involved (e.g., due to burnout, attrition, and value drift).

[24]  Reasons Harvard students are promising include high career flexibility compared to other students, access to substantial political and social capital due to Harvard affiliation, access to world class academic resources, high expected career earnings, and social attitudes conducive to EA. H/T Darius Meissner.

[25]  See Flidr & Aung.

[26]  A comment from Holly Elmore: “I'm torn between this path and a path of spreading the word through Harvard. My guess is that the latter raises the prestige of EA among influential people, re-anchors people on doing more charity than they previously considered, and increases the chance of finding potential core EAs. The former (the one you have here) seems to make EA into a secret society. There's something uncomfortable about that to me, but I'm open to it. I think HUEA/HCEA were strongest when we had a lot of public-facing things and a big behind-the-scenes focus on core organizers. The organizers had something to do that was more immediately altruistic than planning their careers and it kept them in touch with the basics of EA.”

 

I’m largely in agreement with this. I also agree that long-run changes in cultural attitudes towards philanthropy are potentially an important part of EA’s expected value. However, while there continues to be value in those activities (and so we should not eschew them), the foregoing considerations make them seem less valuable than developing Core EAs, though still valuable in absolute terms. We are agreed that direct work is good for boosting morale and keeping people engaged.

[27]  See CEA, Community Building. H/T Darius Meissner.

[28]  Holly Elmore raised the following questions in a previous draft: “Are we just a feeder for EA orgs or should we consider part of our role to cultivate creative thinking about how to accomplish a lot of good? Or are you just saying we shouldn't encourage earning to give and instead focus on developing people who are well-versed in and dedicated to EA?”

 

Regarding the first question, I see “creative thinking about how to accomplish a lot of good” as perfectly compatible with my definition of Core EA. More specifically, being a “Core EA” need not and should not be limited to pursuing current 80K priority career paths. Indeed, more individualized—and by extension less orthodox—guidance might play to our comparative advantage, since we have the opportunity to develop closer relationships with Core EAs than organizations like 80K can. Also, “career” might be defined more loosely than “what one does to pay the bills.”

 

On the second question, current CEA thinking is that “[w]e think that the community continues to benefit from some people focused on earning-to-give . . . .” CEA, Current Thinking; see also 80K, Career Reviews (recommending or sometimes recommending several careers at least partially for ETG reasons). “Roughly, we think that if an individual is a good fit to work on the most important problems, this should probably be their focus, even if they have a high earning potential. If direct work is not a good fit, individuals can continue to have a significant impact through donations.” CEA, Current Thinking.

[29]  Cf. Flidr & Aung (“Engagement is more important than wide-reach.”).

[30]  CEA, Model of a Group.

[31]  See Flidr & Aung; CEA, Current Thinking; see also Kerry Vaughan, The Fidelity Model of Spreading Ideas.

[32]  Cf. Vaughan, supra note 31.

[33]  See CEA, Community Building. Thanks to Frankie Andersen-Wood for pushing for clarification of this concept. 

[34]  H/T Hayden Wilkinson; Darius Meissner.

[35]  E.g., evolutionary biology, philosophy of mind, economics, moral philosophy, political science. H/T Darius Meissner.

[36]  H/T Darius Meissner.

 [37] Thanks to James Aung, Chris Bakerlee, Ed Lawrence, and Huw Thomas for developing this.

[38]  Cf. Flidr & Aung (“Default to 1:1's. In hindsight, it is somewhat surprising that 1:1 conversations are not the default student group activity. They have a number of benefits: you get to know people on a personal level, you can present information in a nuanced way, you can tailor recommended resources to individual interests etc. Proactively reach out to members in your community and offer to grab a coffee with them or go for a walk. 1:1's also give you a good yardstick to evaluate how valuable longer projects have to be to be worth executing: e.g. a 7-hour project would have to be at least as valuable as 7 1:1's, other things equal. Caveat: we definitely don’t mean to imply that you should cut all group or larger-scale activities. We will share some ideas for such activities in a follow-up post.”).

[39]  The idea being that willingness to sign up for a 1:1 is somewhat indicative of one’s openness to EA and to putting serious thought into doing good generally. This is distinct from screening for moral dedication: Frankie Andersen-Wood and Darius Meissner usefully point out that for many people, moral dedication/conviction develops over time with exposure to, e.g., moral philosophy. 

 

Linh Chi Nguyen and Chris Bakerlee rightly suggest that, while 1:1s are valuable, other modes of outreach and onboarding retain value. One example is allowing people to “nonbindingly sit in a discussion round where they can check out if they like the (people in the) community.” H/T Linh Chi Nguyen.

[40]  See Vaughan, supra note 31 (“An example of a high fidelity method of communicating EA would be a lengthy personal conversation. In this context you could cover a large number of ideas in great detail in an environment (face-to-face conversation) that is particularly well-suited to updating.”). But cf. Flidr & Aung (“Don't teach, signpost. Avoid the temptation to teach EA to people. There’s a lot of great online content, and you won’t be able to explain the same ideas as well or in as much nuance as longform written content, well-prepared talks or podcast episodes.”).

[41]  Darius Meissner suggests that lending out EA books might be a good way to do this.

[42]  Cf. Flidr & Aung (“Instead of viewing yourself as a teacher of EA, think of yourself as a signpost. Be able to point people to interesting and relevant material on all areas of EA, and remove friction for people learning more by proactively recommending them content. For example, after a 1:1 meeting, message over 3 links that are relevant to their current bottleneck/area of interest.”).

[43]  That is, introducing the new Core EA to other Core EAs with relevant interests.

[44]  A major uncertainty for me is how to do this in a way that is aligned with 80K’s work. Perhaps there needs to be a single, trusted, well-read HUEASG career advisor who is responsible for staying up-to-date with 80K’s latest thinking and priorities.

[45]  Frankie Andersen-Wood suggests activities focused on mental health, skill building, rationality, and connection to EA communities outside of school.

[46]  Holly Elmore rightly points out the value of the “onerous work of putting on events that makes the core members 1) actually trust each other and 2) makes the club feel legit.” H/T also Linh Chi Nguyen.

[47]  H/T Darius Meissner.

[48]  Of course, some events may straddle stages of the Funnel.

[49]  H/T James Aung.

[50]  H/T Ed Lawrence.

[51]  H/T Frankie Andersen-Wood.

[52]  H/T Holly Elmore; Linh Chi Nguyen; Chris Bakerlee. I agree with Chris Bakerlee that “events I've been to at Eric [Gastfriend]’s are a pretty good model: nominally focused on a specific topic (e.g., the state of EA animal advocacy; end-of-year donation discussion) but allowing plenty of time and opportunity for people to mill about, munch on things, and talk about whatever. Something that would be advertised via facebook event rather than postering the Science Center.”

[53]  Mainly because this model is novel. Although we should feel free to change trajectories and priorities if we think it’s more effective, I think a complete break from pledge organizations right now would damage our relationships with historically supportive organizations. That is probably a bad move to make without very good reason to think that we have a better plan. Cf. Schubert & Cotton-Barratt, supra note 6. Such work is also valuable because, anecdotally, a number of Core EAs (e.g., me) have become Core EAs by first being interested in global poverty pledge work. Supporting this on-ramp is therefore an important part of the Funnel too.

[54]  H/T Holly Elmore.

[55]  I understand Oxford EA is considering this.


Comments

Not a comment on the content, but on the style of writing: I found it very hard to read a document with so many endnotes - it was about half the scroll length - and gave up: it was too tricky to keep flicking down to the important content and then back up again.

Thanks Michael! I've linked to a Google Doc version with footnotes for ease-of-reading: https://docs.google.com/document/d/1i1-57jRg7vrcTBXAcqIFAYzGoC-bftIMnElqPKFGMVk/edit?usp=sharing

Please feel free to steal the html used for footnotes in EA forum posts like this one.

  • In-page anchor links: <a id="ref1" href="#fn1">&sup1;</a>
  • Linked footnote: <p id="fn1">&sup1; <small>Footnote text.</small></p>
  • Footnote link back to article text: <a href="#ref1">↩</a>
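For instance, here is a minimal sketch putting the three pieces together (the ids ref1/fn1 are arbitrary placeholders; any unique, matching pair works):

  • In the body text: <p>Some claim in the post.<a id="ref1" href="#fn1">&sup1;</a></p>
  • At the end of the post: <p id="fn1">&sup1; <small>Footnote text. <a href="#ref1">↩</a></small></p>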

EDIT Jan 2022: The EA Forum now supports rich footnotes. Use that instead!
