
This is partly based on my experiences working as a Program Officer leading Open Phil’s Longtermist EA Community Growth team, but it’s a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position.

Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently, and it seems relevant to ongoing discussions about how much meta effort done by EAs should focus on engaging more EAs vs. non-EA people. I am publishing it with light-ish editing, and some parts are outdated, though for the most part I now hold the conclusions more strongly than I did when I originally wrote it.

Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won’t apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century’ hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach. 

A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use “EAs” or “longtermists” as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I’m concerned that this is a reason we’re failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people who are doing full-time work on existential risk reduction in AI and biosecurity who have been drawn in by just the “existential risk reduction” frame [this seemed more true in 2022 than 2023].

This is in the vein of Neel Nanda’s "Simplify EA Pitches to "Holy Shit, X-Risk"" and Scott Alexander’s “Long-termism vs. Existential Risk”, but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA. 


EA and longtermism: not a crux for doing the most important work

Right now, my priority in my professional life is helping humanity navigate the imminent creation of potentially transformative technologies, to try to make the future better for sentient beings than it would otherwise be. I think that’s likely the most important thing anyone can do these days. And I don’t think EA or longtermism is a crux for this prioritization anymore.

A lot of us (EAs who currently prioritize x-risk reduction) were “EA-first” — we came to these goals first via broader EA principles and traits, like caring deeply about others; liking rigorous research, scope sensitivity, and expected value-based reasoning; and wanting to meet others with similar traits. Next, we were exposed to a cluster of philosophical and empirical arguments about the importance of the far future and the potential technologies and other changes that could influence it. Some of us were “longtermist-second”: we came to prioritize making the far future as good as possible, regardless of whether we thought we were in an exceptional position to do so, and to see existential risk reduction as one of the core activities for doing it.

For most of the last decade, I think that most of us have emphasized EA ideas when trying to discuss X-risk with people outside our circles. And locally, this worked pretty well; some people (a whole bunch, actually) found these ideas compelling and ended up prioritizing similarly. I think that’s great and means we have a wonderful set of dedicated and altruistic people focused on these priorities. 

But I have concerns. 

I’d summarize the EA frame as, roughly, “use reasoning and math and evidence to figure out how to help sentient beings as much as possible have better subjective experiences, be open to the possibility this mostly involves beings you don’t feel emotionally attached to with problems you aren’t emotionally inspired by” or, a softer “try to do good, especially with money, in a kind of quantitative, cosmopolitan way”. I’d summarize the LT frame as “think about, and indeed care about, the fact that in expectation the vast majority of sentient beings live very far away in the future (and far away in space), who in expectation are very different from you and everything you know, and think about whether you can do good by taking actions that might allow you to positively influence these beings.”

Not everyone is into that stuff. Mainly, I’m worried we (again, EAs who currently prioritize x-risk reduction) are missing a lot of great people who aren’t into the EA and LT “frame” on things; e.g. they find it too utilitarian or philosophical (perhaps subconsciously), and/or there are subtle ways it doesn’t line up with their aesthetics, lifestyle preferences and interests. I sometimes see hints that this is happening. Both frames ask for a lot of thinking and willingness to go against what many people are emotionally driven by. EA has connotations of trying to be a do-gooder, which is often positive but doesn’t resonate with everyone. People usually want to work on things that are close to them in time and space; longtermism asks them to think much further ahead, for reasons that are philosophically sophisticated and abstract. It also connotes sustainability and far-off concerns in a way that’s pretty misleading if we’re worried about imminent transformative tech. 

Things have changed

Now, many EA-first and longtermist-first people are, in practice, primarily concerned about imminent x-risk and transformative technology, have been that way for a while, and (I think) anticipate staying that way.

And I’m skeptical that the story above, if it were an explicit normative claim about how to best recruit people to existential risk reduction causes, passes the reversal test if we were starting anew. I’d guess that if most of us woke up without our memories here in 2022 [now 2023], and the arguments about potentially imminent existential risks were called to our attention, it’s unlikely that we’d re-derive EA and philosophical longtermism as the main and best onramp to getting other people to work on that problem. In fact, I think that idea would sound overly complicated and conjunctive, and by default we wouldn’t expect the optimal strategy to use a frame that’s both quite different from the one we ultimately want people to take, and demanding in some ways that that one isn’t. As a result, I think it would seem more plausible that people who believe it should directly try to convince people existential risks are large and imminent, and that once someone buys those empirical claims, they wouldn’t need to care about EA or longtermism to be motivated to address them.

An alternative frame

By contrast, the core message of an “x-risk first” frame would be “if existential risks are plausible and soon, this is very bad and should be changed; you and your loved ones might literally die, and the things you value and worked on throughout your life might be destroyed, because of a small group of people doing some very reckless things with technology. It’s good and noble to try to make this not happen”. I see this as true, more intuitive, more obviously connected to the problems we’re currently prioritizing, and more consistent with commonsense morality (as evinced by e.g. the fact that many of the most popular fictional stories are about saving the world from GCRs or existential risks). 

I don’t think the status quo evolved randomly. In the past, I think x-risks seemed less likely to arise soon, or at all, so EA + LT views were more likely to be cruxes for prioritizing them. I still think it would have been worth trying the things I’m suggesting ten years ago, but the case would have looked a lot weaker. Specifically, there are some changes that make an x-risk first (or similar) recruiting onramp more likely to succeed, looking forward:

  • AI capabilities have continued to advance. Compared to the status quo a decade ago in 2012, AIs outperform humans in many more areas, AI progress is far more apparent, the pace of change is faster, and all of this is much more widely known. [This seems much more true in 2023 than 2022, when I originally wrote this line, and now seems to me like a stronger consideration than the rest.]
  • The arguments for concern about AI alignment have been made more strongly and persuasively, by a larger number of credible people. 
  • COVID-19 happened and made concern about anthropogenic biorisk seem more credible.
  • COVID-19 happened and a lot of respected institutions handled it less well than a lot of people expected, engendering a greater sense of things not being under control and there not being a deep bench of reasonable, powerful experts one can depend on.
  • [maybe] Brexit, Trump’s presidency, crackdowns in China, Russia’s war on Ukraine, etc., have normalized ideas about big societal changes and dangers that affect a huge number of people happening relatively frequently and suddenly. 

Who cares?

I think there should be a lot more experimentation with recruiting efforts that aren’t “EA-first” or “longtermist-first”, to see if we can engage people who are less into those frames. The people I’d be excited about in this category probably wouldn’t be the kind of people that totally reject EA and LT; they might nod along to the ideas, but wind up doing something else that feels more exciting or compelling to them. More broadly, I think we should be running lots of experiments (communicating a wide range of messages in a wide range of styles) to increase our “surface area”.

Some other reasons to be skeptical of the status quo:

  • It might not be sustainable; if timelines start to seem very short, especially if there are warning shots and more high-profile people attempting to sound various alarms, I think the “EA-first” onramp will look increasingly convoluted and out of place; it won’t just leave value on the table, it might seem actively uncompelling and out of touch.
  • I think leading with EA causes more people to feel surprised and disappointed, because something that seemed to be, and on occasion represents itself as, an accessible way to try to be a good person is in fact sometimes elitist/elite-focused, inaccessible, and mostly pretty alienated from its roots, generating general bad feelings and lower morale. I think existential risk reduction, by virtue of the greater transparency of the label, is less likely to disappoint.
  • Relatedly, I think EA is quite broad and so reliably generates conflicting access needs problems (e.g. between people working on really unusual topics like wild animal suffering who want to freely discuss e.g. insect sentience, and people working on a policy problem in the US government who more highly prioritize respectability) and infighting between people who prioritize different cause areas, and on the current margin more specialization seems good.
  • At least some EAs focused on global health and wellbeing, and on animal welfare, feel that we are making their lives harder, worsening their reputations, and occupying niches they value with LT/x-risk stuff (like making EAG disproportionately x-risk/LT-focused). Insofar as that’s true, I think we should try hard to be cooperative, and more specialization and brand separation might help. 
  • Something about honesty; it feels a bit dicey to me to intro people to EA first, if we want and expect them to end up in a more specific place with relatively high confidence, even though we do it via EA reasoning we think is correct.
  • Much of the value in global health and farm animal welfare, as causes, is produced by people uninterested in EA. On priors, I’d expect that people in that category (“uninterested in EA”) can also contribute a lot of value to x-risk reduction.
  • Claim from Buck Shlegeris: thinking of oneself and one’s work as part of a group that also includes near-term priorities makes it socially awkward and potentially uncooperative to the group to argue aggressively that longtermist priorities are much more important, if you believe it, and having a multi-cause group makes it harder to establish a norm of aggressively “going for the throat” and urging others to do the same on what you think is the most important work.

I suspect it would be a useful psychological exercise for many of us to temporarily personally try out “shaking free” of EA- or LT-centric frames or identities, to a much greater extent than we have so far, for our own clarity of thought about these questions.

I think readers of this post are, in expectation, overvaluing the EA and longtermism frames

Because:

  • They are “incumbent” frames, so they benefit from status quo bias, and a lot of defaults are built around them and people are in the habit of referring to them 
  • We (mostly) took this onramp, so it’s salient to us
  • Typical mind fallacy; I think people tend to assume others have more similar minds to themselves than is the case, so they project out that what is convincing to them will also convince others.
  • They probably attract people similar to us, who we enjoy being around and communicate with more easily. But, damn it, we need to win on these problems, not hang out with the people we admire the most and vibe with the best.
  • Most of us have friends, allies, and employees who are relatively more committed to EA/LT and less committed to the x-risk reduction frame, and so it’s socially costly to move away from EA/LT.
  • Given that we decided to join the EA/LT community, this implies that the EA and LT frames suggested priorities and activities that were a good fit for us and let us achieve status — and this could bias us toward preferring those frames. (For example, if an x-risk frame puts less emphasis on philosophical reasoning, people who’ve thrived in EA through their interest in philosophy may be unconsciously reluctant to use it.) 

Concrete things I think are good

  • Recruiting + pipeline efforts that don’t form natural monopolies in tension with existing EA infrastructure, focused on existential risk reduction, the most important century, AI safety, etc. Like:
    • Organizations and groups
    • Courses, blogs, articles, videos, books
    • Events and retreats
    • 1:1 conversations with these emphases

Concrete things I’m uncertain about

  • Trying to build lots of new community infrastructure of the kind that creates natural monopolies or has strong network effects around an x-risk frame (e.g. an “Existential Risk Forum”)

Counterarguments:

  • In my view, a surprisingly large fraction of people now doing valuable x-risk work originally came in from EA (though also a lot of people have come in via the rationality community), compared to how many I would have expected, even given the historical strong emphasis on EA recruiting. 
  • We’re still highly uncertain about which strategies are best from an EA perspective, which is a big part of why truth-seeking and patience are important. 
    • However, it seems unlikely that we’ll end up shifting our views such that “transformative tech soon” and “the most important century” stop seeming like plausible ideas that justify a strong focus on existential risk.
  • EA offers a lot of likable ideas and more accessible success stories, because of its broad emphasis on positive attributes like altruism and causes like helping the global poor; this makes existential risk reduction seem less weird and connects it to things with a stronger track record
    • However, I think the PR gap between EA and x-risk reduction has closed a lot over the last year, and maybe is totally gone
    • And as noted above, I think there are versions of this that can be uncooperative with people who prioritize causes differently, e.g. worsening their reputations
  • Transformative tech/MIC/x-risk reduction isn’t a very natural frame either; we should be more cause-specific (e.g. recruiting into TAI safety or bio work directly). 
    • I think we should do some of this too, but I suspect a broader label for introducing background concepts like the difference between x-risk and GCRs, and the idea of transformative technology, is still helpful.
  • Some people brought up that they particularly want people with cosmopolitan, altruistic values around transformative tech. 


Anti-claims

(I.e. claims I am not trying to make and actively disagree with) 

  • No one should be doing EA-qua-EA talent pipeline work
    • I think we should try to keep this onramp strong. Even if all the above is pretty correct, I think the EA-first onramp will continue to appeal to lots of great people. However, my guess is that a medium-sized reallocation away from it would be good to try for a few years. 
  • The terms EA and longtermism aren’t useful and we should stop using them
    • I think they are useful for the specific things they refer to and we should keep using them in situations where they are relevant and ~ the best terms to use (many such situations exist). I just think we are over-extending them to a moderate degree.
  • It’s implausible that existential risk reduction will come apart from EA/LT goals 
    • E.g. it might come to seem (I don’t know if it will, but it at least is imaginable) that attending to the wellbeing of digital minds is more important from an EA perspective than reducing misalignment risk, and that those things are indeed in tension with one another. 
    • This seems like a reason that, all else equal, people who aren’t EAs and just prioritize existential risk reduction are less helpful from an EA perspective than people who also share EA values, and like something to watch out for, but I don’t think it outweighs the arguments in favor of more existential risk-centric outreach work.

Thanks to lots of folks who weighed in on this, especially Aaron Gertler, who was a major help in polishing and clarifying this piece.

Comments

This seems basically right to me. That said, I thought I'd share some mild pushback because there are incentives against disagreeing with EA funders (not getting $), and so, when uncertain, it might be worth disagreeing publicly, if only to set some kind of norm and elicit better pushback.

My main uncertainty about all this, beyond what you've already mentioned, is that I'm not sure it would've been good to "lock in our pitch" at any previous point in EA history (building on your counterargument "we’re still highly uncertain about which strategies are best from an EA perspective, which is a big part of why truth-seeking and patience are important."). 

For example, what if EAs in the early 2010s decided to stop explaining the core principles of EA, and instead made an argument like:

  • Effective charities are, or could very plausibly be, very effective.
  • Effective charities are effective enough that donating to them is a clear and enormous opportunity to do good.
  • The above is sufficient to motivate people to take high-priority paths, like earning to give. We don’t need to emphasise more complicated things like rigorous research, scope sensitivity, and expected value-based reasoning.

This argument is probably different in important respects to yours, but illustrates the point. If we started using the above argument instead of explaining the core principles of EA, it might have taken a lot longer for the EA movement to identify x-risks/transformative tech as a top priority. This all seems pretty new in the grand scheme of things, so I guess I expect our priorities to change a lot.

But then again, things haven't changed that much recently, so I'm convinced by:

many EA-first and longtermist-first people are, in practice, primarily concerned about imminent x-risk and transformative technology, have been that way for a while, and (I think) anticipate staying that way.

Disclaimer: Amazing post, thanks @ClaireZabel - I haven't thought about this nearly as hard as you, so this is at best a moderately thought-through hot take.

I feel like the attraction of the deep and rich philosophy that is effective altruism may be underrated.

Effective Altruism is a coherent philosophy that was built from the ground up (and continues to branch and grow), with a rich philosophical underpinning from a bunch of great thinkers growing out of utilitarian roots. This has attracted a relatively stable, committed and slow-growing movement which has branched and expanded, while remaining true to the philosophical tenets from whence it came.

If we look at enduring movements which inspire people and create change, such as the civil rights movement, religions, and political systems like capitalism and socialism, they have rich and well-thought-out philosophical underpinnings - they don't stand without those. I'm not sure just floating X-risk and "We're all going to die" is going to galvanise support and get people on board.

I think pushing X-risk will get a lot of people to nod their heads and say "yeah", and perhaps get instagram likes, but very few of those initially enthusiastic people will passionately devote their lives to fighting against said risk. Unlike so many who, as you say, are fighting x-risk after arriving at that point through a rich journey exploring effective altruism (or similar).

Preaching these clear ideas like "X-risk" or "we're all going to die" without the philosophical "baggage" may seem like an easy evangelical route, but the house built on the sand can more easily collapse, and the seeds planted on hard ground don't usually flourish.

As a final comment I agree that "Longtermism" isn't a great cause to rally around, largely because it is only a branch of the deeper EA system and not really a standalone philosophy in and of itself. Without EA principles, does longtermism exist?

But why not test it though? Someone can try and start an anti-existential-risk group without pulling people from the effective altruism crowd and we can see how it flies. I would love to be proved wrong (and could easily be).

Somewhat sceptical of this, mainly because of the first 2 counterarguments mentioned:

  • In my view, a surprisingly large fraction of people now doing valuable x-risk work originally came in from EA (though also a lot of people have come in via the rationality community), compared to how many I would have expected, even given the historical strong emphasis on EA recruiting. 
  • We’re still highly uncertain about which strategies are best from an EA perspective, which is a big part of why truth-seeking and patience are important.

Focusing on the underlying search for what is most impactful seems a lot more robust than focusing on the main opportunity this search currently nets. An EA/longtermist is likely to take x-risk seriously as long as this is indeed a top priority, but you can't flip this. The ability of the people working on the world's most pressing problems to update on what is most impactful to work on (arguably the core of what makes EA 'work') would decline without any impact-driven meta framework.

An "x-risk first" frame could quickly become more culty/dogmatic and less epistemically rigorous, especially if it's paired with a lower resolution understanding of the arguments and assumptions for taking x-risk reduction (especially) seriously, less comparison with and dialogue between different cause areas, and less of a drive for keeping your eyes and ears open for impactful opportunities outside of the thing you're currently working on, all of which seems hard to avoid.

It definitely makes sense to give x-risk reduction a prominent place in EA/longtermist outreach, and I think it's important to emphasize that you don't need to "buy into EA" to take a cause area seriously and contribute to it. We should probably also build more bridges to communities that form natural allies. But I think this can (and should) be done while maintaining strong reasoning transparency about what we actually care about and how x-risk reduction fits in our chain of reasoning. A fundamental shift in framing seems quite rash.

EDIT: 

More broadly, I think we should be running lots of experiments (communicating a wide range of messages in a wide range of styles) to increase our “surface area”.

Agreed that more experimentation would be welcome though!

I would go further and say that more people are interested in specific areas like AI safety and biosecurity than in the general framing of x-risks, especially senior professionals who have worked in AI/bio careers.

There is value in some people working on x-risk prioritisation, but that would be a much smaller subset than the eventual sizes of the cause-specific fields.

You mention this in your counterarguments but I think that it should be emphasised more. 

Thank you for this. I particularly appreciated the “counterarguments” and “anti-claims” sections, which I felt accurately represented the views of those of us who disagree.

One further counterargument I’d mention, to balance out the point you make about honesty, is that it also feels potentially dicey/dishonest to be gaming out ways of how to get people to do things we want, on grounds we don’t ourselves find to be the most compelling. To be clear, I think there is a good way of going about this, which involves being honest about our views while making it clear that we’re trying to appeal to people of different persuasions.

Thanks for your work Claire. I am really grateful.

I feel frustrated that many people learned a new concept, "longtermism", which many misunderstand and relate to EA, but now even many EAs don't think this concept is that high priority. Feels like an error on our part that could have been predicted beforehand.

I am grateful for all the hard work that went into popularising the concept and I think weak longtermism is correct. But I dunno, seems like an oops moment that it would be helpful for someone to acknowledge.

I'm not sure I agree with: 

now even many EAs don't think this concept is that high priority

Could be true if you mean "high priority to communicate for community growth purposes", but I still think it's fairly fundamental to a lot of thinking about prioritisation (e.g. a large part of Open Phil's spending is classified as longtermist).

I agree that there have probably been costly and avoidable misunderstandings.

I basically agree with the core message. I'll go one step further and say that existential risk has unnecessary baggage - as pointed out by Carl Shulman and Elliot Thornley, the Global Catastrophic Risk and CBA framing rescues most of the implications without resorting to fraught assumptions about the long-term future of humanity.

Strongly agreed.

Personally, I made the mitigation of existential risk from AI my life mission, but I'm not a longtermist and not sure I'm even an "effective altruist". I think that utilitarianism is at best a good tool for collective decision making under some circumstances, not a sound moral philosophy. When you expand it from living people to future people, it's not even that.

My values prioritize me and people around me far above random strangers. I do care about strangers (including animals) and even hypothetical future people more than zero, but I would not make the radical sacrifices demanded by utilitarianism for their sake, without additional incentives. On the other hand, I am strongly committed to following a cooperative strategy, both for reputational reasons and for acausal reasons. And, I am strongly in favor of societal norms that incentivize making the world at large better (because this is in everyone's interest). I'm even open to acausal trade with hypothetical future people, if there's a valid case for it. But, this is not the philosophy of EA as commonly understood, certainly not longtermism.

The main case for preventing AI risk is not longtermism. Rather, it's just that otherwise we are all going to die (and even going by conservative-within-reason timelines, it's at least a threat to our children or grandchildren).

I'm certainly hoping to recruit people to work with me, and I'm not going to focus solely on EAs. I won't necessarily even focus on people who care about AI risk: as long as they are talented, and motivated to work on the problems for one reason or the other (e.g. "it's math and it's interesting"), I would take them in.

I strongly disagree that utilitarianism isn't a sound moral philosophy, and don't understand the black and white distinction between longtermism and us not all dying. I might be missing something, but there is surely at least some overlap between those two reasons for preventing AI risk.

But although I disagree I think you made your points pretty well :).

Out of interest, if you aren't an effective altruist, nor a longtermist, then what do you call yourself?

I strongly disagree that utilitarianism isn't a sound moral philosophy, and don't understand the black and white distinction between longtermism and us not all dying. I might be missing something, but there is surely at least some overlap between those two reasons for preventing AI risk.

I don't know if it's a "black and white distinction", but surely there's a difference between:

  • Existential risk is bad because the future could have a zillion people, so their combined moral weight dominates all other considerations.
  • Existential risk is bad because (i) I personally am going to die (ii) my children are going to die (iii) everyone I love is going to die (iv) everyone I know is going to die, and also (v) humanity is not going to have a future (regardless of the number of people in it).

For example, something that "only" kills 99.99% of the population would be comparably bad by my standards (because i-iv still apply), whereas it would be way less bad by longtermism standards. Even something that "only" kills (say) everyone I know and everyone they know would be comparably bad for me, whereas utilitarianism would judge it a mere blip in comparison to human extinction.

Out of interest, if you aren't an effective altruist, nor a longtermist, then what do you call yourself?

I call myself "Vanessa" :) Keep your identity small and all that. If you mean, do I have a name for my moral philosophy then... not really. We can call it "antirealist contractarianism", I guess? I'm not that good at academic philosophy.

By contrast, the core message of an “x-risk first” frame would be “if existential risks are plausible and soon, this is very bad and should be changed; you and your loved ones might literally die, and the things you value and worked on throughout your life might be destroyed, because of a small group of people doing some very reckless things with technology. It’s good and noble to try to make this not happen”.

I think a very important counterargument you don't mention is that, as with the Nanda and Alexander posts you mention, this paragraph, and hence the post overall, importantly equivocates between 'x-risk' and 'global catastrophic risk'. You mention greater transparency of the label, but it's not particularly transparent to say 'get involved in the existential risk movement because we want to stop you and everyone you love from dying', and then say 'btw, that means only working on AI and occasionally 100% lethal airborne biopandemics, because we don't really care about nuclear war, great power conflict, less lethal pandemics/AI disasters, runaway climate change or other events that only kill 90% of people'.

I think focusing more on concrete ideas than philosophies is reasonable (though, following your second counterargument, I think it's desirable to try doing both in parallel for months or years rather than committing to either). But if we want to rebrand in this direction, I hope we'll either start focusing more on such 'minor' global catastrophes, or be more explicit (as David Nash suggested) about which causes we're actually prioritising, and to what extent. Either way, I don't think 'existential risk' is the appropriate terminology to use (I wrote more about why here).

You have a lot of great ideas. The one trend I see that aligns with some of my thoughts is a general sense that "EA Culture" is not for everyone, and how that informs our outreach. I personally love EA culture, but I'm also not a STEM person, and I clearly see how some people might refer to EA culture as "borg-like", as Jenn reported in her forum piece, https://forum.effectivealtruism.org/posts/5oTr4ExwpvhjrSgFi/things-i-learned-by-spending-five-thousand-hours-in-non-ea just the other day. I also appreciate Vanessa's comments below and she seems to be killing it in her work. Also below, zchuang seems to hit it on the head that there's been a weird confluence of events: FTX, WWOTF dropping as LLMs hit the stage publicly, and Covid-19 changing things... if ever there was a time for re-thinking and pivoting on our direction, it's now.

This broad point is made so much and is so clearly true that I'm more puzzled why it hasn't had more of an impact on EA/LT outreach. I guess inertia is probably a big factor.

It feels like a weird confluence of effects of:

  1. FTX Future Fund spawning at the same time.
  2. What We Owe the Future dropping at the same time as more visible LLMs.
  3. Cultural pivot in people's system 1s about pandemics because of COVID-19.

As I explore EA, I especially love provocative, thoughtful, introspective posts like this. 

There are always opportunities to improve. The EA philosophy supports data-based, science-based decision-making, so absolutely this should be applied to communication as well. 

This is a great example. 

There is a massive amount of research out there, often done cynically by political parties or advertisers, but no less scientifically valid for that. We can learn from it. We could also, at very low-cost or even free, run research ourselves - if it's well-designed, the results would be reliable and reproducible, and could support (or, I suppose, in theory, refute!) what you're saying, and so help us communicate better and attract more people. 

I believe there is one very tangible opportunity, which you, and also @Vanessa in her comments below, are capturing. We in EA tend to be self-selected as rational, logical thinkers - I don't want to overgeneralize, but IMHO if you looked at how each of us would describe ourselves in terms of Myers-Briggs*, most of us would be T's rather than F's.

So part of what you're saying here is that our message may be a great way to attract "people who think like us" to work on X-risk, but may miss out some others, who "don't think like us" who would be more effectively convinced by a different approach. 

For decades, communication experts have been studying how to use different messages to communicate and convince different people. Again, we see the ugly, cynical side of this in politics and FoxNews and so on. But there is also a good side, which is about communicating an idea to someone in a form that makes it clear and appealing to them. 

If you're interested in pursuing this further, I'd be happy to share some of what I've learned about this over the years of doing quantitative and qualitative research on communication. 

*I believe the science behind the Myers-Briggs test is very dubious, BUT the categories themselves can be very helpful for communicating concepts like this. T's tend to think rationally - if they are good people (like us!), they think rationally about how to do the most good for the most people. F's are more driven by emotional reactions - they see someone crying, they see a danger to their loved ones, they decide to do something about it. 

Related:

(I'm linking these because, for me, they don't appear in the "More posts like this" panel,[1] and I rate them highly and think they're closely related to Claire's post here.)

  1. ^

    "More posts like this" is a fairly new feature, I believe, and I'm not sure if everyone sees the same recommendations there. Apologies if I'm linking the same posts in this comment as some people see there.

[This comment is no longer endorsed by its author]

They are explicitly mentioned in the post though, a few paragraphs in.
