It's a common story. Someone who's passionate about EA principles, but has little in the way of resources, tries and fails to do EA things. They write blog posts, and nothing happens. They apply to jobs, and nothing happens. They do research, and don't get that grant. Reading articles no longer feels exciting, but like a chore, or worse: a reminder of their own inadequacy. With anybody who finds themselves in this place, I heartily sympathize, and I encourage them to disentangle themselves from this painful situation any way they can.

Why does this happen? Well, EA has two targets.

  1. Subscribers to EA principles who the movement wants to become big donors or effective workers.
  2. Big donors and effective workers who the movement wants to subscribe to EA principles.

I won't claim what weight this community and its institutions give to (1) vs. (2). But when we set out to catch big fish, we risk turning the little fish into bycatch. The technical term for this is churn.

Part of the issue is the planning fallacy. When we're setting out, we underestimate how long and costly it will be to achieve an impact, and overestimate what we'll accomplish. The higher above average you aim, the more likely you are to fall short.

And another part is expectation-setting. If the expectation right from the get-go is that EA is about quickly achieving big impact, almost everyone will fail, and think they're just not cut out for it. I wish we had a holiday that was the opposite of Petrov Day, where we honored somebody who went a little bit out of their comfort zone to try and be helpful in a small and simple way. Or whose altruistic endeavor was passionate, costly, yet ineffective, and who tried it anyway, changed their mind, and valued it as a learning experience.

EA organizations and writers are doing us a favor by presenting a set of ideas that speak to us. They can't be responsible for addressing all our needs. That's something we need to figure out for ourselves. EA is often criticized for its "think global" approach. But EA is our local, our global local. How do we help each other to help others?

From one little fish in the sEA to another, this is my advice:

  1. Don't aim for instant success. Aim for 20 years of solid growth. Alice wants to maximize her chance of a 1,000% increase in her altruistic output this year. Zahara's trying to maximize her chance of a 10% increase in her altruistic output. They're likely to do very different things to achieve these goals. Don't be like Alice. Be like Zahara. (See the back-of-the-envelope note after this list.)
  2. Start small, temporary, and obvious. Prefer the known, concrete, solvable problem to the quest for perfection. Yes, running an EA book club or, gosh darn it, picking up trash in the park is a fine EA project to cut our teeth on. If you donate 0% of your income, donating 1% of your income is moving in the right direction. Offer an altruistic service to one person. Interview one person to find out what their needs are.
  3. Ask, don't tell. When entrepreneurs do market research, it's a good idea to avoid telling the customer about the idea. Instead, they should ask the customer about their needs and problems. How do they solve their problems right now? Then they can go back to the Batcave and consider whether their proposed solution would be an improvement.
  4. Let yourself become something, just do it a little more gradually. It's good to keep your options open, but keeping options open can mean slowing and reducing the process of commitment while increasing your ability to turn and bend; it doesn't have to mean hard stops and hairpin turns. It's OK to take a long time to make decisions and figure things out.
  5. Build each other up. Do zoom calls. Ask each other questions. Send a message to a stranger whose blog posts you like. Form relationships, and care about those relationships for their own sake. That is literally what EA community development is about; a community of like-minded friends is far stronger than an organization of ideologues.
  6. You don't have to brand everything as EA. If you want to encourage your friends to donate to MIRI or GiveWell, you can just talk about those specific organizations. An argument that's true often doesn't need to be argued. "They work to keep AI technology safe for humanity" and "they give bed nets to prevent malaria" are causes that kind of sell themselves. If people want to know why you recommend them, you'll be able to answer very well.
  7. Be a founder and an instigator, even if the organization is temporary, the activity incomplete. Do a little bit of everything. Have the guts to write for this forum, if you have time. Organize an event with a friend. Buy a domain name and throw together a website.
  8. Stay true to the principles, even if you're not sure how to put them into practice.
  9. Don't be bycatch. It's OK to come in and out of EA, and back in again if you want to. The best thing you can possibly do for the community is make EA work for you, rather than just making yourself work for EA.
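
A back-of-the-envelope note on point 1: a 10% improvement in altruistic output, compounded every year for 20 years, multiplies your output by about 1.1^20 ≈ 6.7, a nearly sevenfold increase reached through twenty small, achievable steps rather than one unlikely leap.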
Comments (19)



I really liked the encouraging tone of this― "from one little fish in the sEA to another" was so sweet― and like the suggestion to instigate small / temporary / obvious projects. Reminds me a bit of the advice in Dive In, which I totally failed to integrate when I first read it, but now feels very spot on; I spent ages agonising over whether my project ideas were Effective Enough and lost months, even years, that could have been spent building imperfect things and nurturing competence and understanding.

tl;dr: I like the post. One thing that I think it gets wrong is that if "picking up trash in the park is a fine EA project to cut our teeth on", then that is a sorry state for EA to find itself in. 

I think that a thing that this post gets wrong is that EA seems to be particularly prone to generating bycatch, and although there are solutions at the individual level, I'd also appreciate having solutions at higher levels of organization. For example, the solution to "you find yourself writing not-so-valuable blogposts" is probably to ask a mentor to recommend you valuable blog posts to write.

One proposal to do that was to build what this post calls a "hierarchical networked structure", in which people have people to ask about which blog posts or research directions would be valuable, Aaron Gertler is there to offer editing, and, further along the way, EA groups have mentors who have an idea of which EA jobs are particularly valuable to apply to and which are particularly likely to generate disillusionment, and EA group mentors themselves have someone to ask for advice, and so on. This to some extent already exists; I imagine that this post is valuable enough to get sent out in the EA newsletter, which means that involved members in their respective countries will read it and maybe propagate its ideas. But there is a way to go.

Another solution in that space would be to have a forecasting-based decentralized system, where essentially the same thing happens (e.g., good blog posts to write or small projects to do get recommended, career hopes get calibrated, etc.), but which I imagine could be particularly scalable.

We can also look at past movements in history. In particular, General Semantics also had this same problem, and a while ago I speculated that this led to its doom. Note also that religions don't have the problem of bycatch at all.

That’s good feedback and a complementary point of view! I wanted to check on this part:

“I think that a thing that this post gets wrong is that EA seems to be particularly prone to generating bycatch, and although there are solutions at the individual level, I'd also appreciate having solutions at higher levels of organization.”

Are you saying that you think EA is not particularly prone to generating bycatch? Or that it is, but it’s a problem that needs higher-level solutions?

Yeah, that's not my proudest sentence. I meant the latter: that it is particularly prone to generating bycatch, and hence it would benefit from higher-level solutions. In your post, you try to solve this at the level of the little fish, but addressing it at the fisherman level strikes me as a better (though complementary) idea.

This really resonated for a young EA like myself. EA totally transformed the way I think about my career decisions, but I quickly realized how competitive jobs at the organizations I looked up to were. (The 80k job board is a tough place for an undergraduate to NOT feel like an imposter). This didn't discourage me from EA in general, but it did leave me with some uncertainty about how much I should let EA dictate my pursuits. This post offered some excellent reassurances and reminders to stay grounded.

Inspired me to leave my first comment on the forum. Thank you for the lovely post :)

Thanks a lot for writing this! I think this is a really common trap to fall into, and I both see this a lot in others, and in myself.

To me, this feels pretty related to the trap of guilt-based motivation - taking the goals that I care about, thinking of them as obligations or things I 'should' do, and feeling bad and guilty when I don't meet them. That combines with unrealistically high standards, based on a warped and perfectionist view of what I 'should' be capable of, on hindsight bias and the planning fallacy, and on what I think the people around me are capable of. Together, these mean that I set myself standards I can never really meet, feel guilty for failing to meet them, and ultimately build up aversions that stop me caring about whatever I'm working on and make me flinch away from it.

This is particularly insidious, because the intention behind it is often pure and important to me. It comes from a place of striving to be better, of caring about things, and of wanting to live in consistency with my values. But in practice, this intention, plus those biases and failure modes, combines to leave me doing far worse than I could.

I find a similar mindset to your first piece of advice useful: I imagine a future version of myself that is doing far better than I am today, and ask how I could have gotten there. I'd be really surprised and confused if I suddenly got way better one day, but it's plausible to me that each day I do a little bit better than before, and that, on average, this compounds over time. Which means it's important to calibrate my standards so that I expect myself to do a bit better than what I've realistically been capable of before.

If you resonate with that, I wrote a blog post called Your Standards Are Too High on how I (try to) deal with this problem. And the Replacing Guilt series by Nate Soares is phenomenally good, and probably one of the most useful things I've ever read re my own mental health.

What I appreciate the most about this post is simply the understanding it shows for people in this situation.

It's not easy. Everyone has their own struggles. Hang in there. Take some breaks. You can learn, you can try something slightly different, or something very different. Make sure you have a balanced life, and somewhere to go. Make sure you have good plan B's (e.g., I myself can always go back to the software industry). In the for-profit and wider world, there are many skills you can learn better than you would working at an EA org.

Ask, don't tell.

This is really good advice, at least for a subset of people.

Whether someone is problem-oriented ("the shortage of widgets in EA") versus solution-oriented ("find a way to use my widget-making skills") often gives me a strong signal of how likely they are to be successful.

I'd add on that sharing the answers people give to your questions (e.g. on this forum) is helpful. The set of things EA needs is vast, and there's no reason for us to all start from scratch in figuring out how to help.

Thank you for this wonderful post!

"Humans need places" came to mind as I read it. Alongside leveling up one's personal robustness & grit, I think there is much we can do at the systemic level to reduce the incidence of bycatch in EA.

Thanks for this really thoughtful post :)

Zahara's trying to maximize her chance of a 10% increase in her altruistic output.

I agree it's a good idea to focus on small gains. Even better advice might be to focus on learning or building skills which might give Zahara a 10% increase in her altruistic output later on (e.g. focus on studying, learn something new, try and find a job which means she'll learn a lot). 

You could apply the same idea to donating 0% or 1% this year. It's also fine to donate 0% for several years and then give when you're more comfortable.

In general, I think EAs (myself included) are too focused on generating impact today and should focus more on building skills to generate impact later on.

Unfortunately this competes with the importance of interventions failing fast. If it's going to take several years before the expected benefits of an intervention are clearly distinguishable from noise, there is a high risk that you'll waste a lot of time on it before finding out it didn't actually help, you won't be able to experiment with different variants of the intervention to find out which work best, and even if you're confident it will help, you might find it infeasible to maintain motivation when the reward feedback loop is so long.

Sorry, perhaps this wasn't clear - I'm not suggesting investing heavily in a single intervention or cause area for a long time, rather building skills that will be useful for solving a variety of problems (e.g. research skills, experience working in teams etc.).

I propose that March 26th (six months from Petrov Day) be converse Petrov Day.

Nemo day, perhaps

Is it true that some tech folks started up a low-impact-angst group for all the Zaharas out there? (Asking question, not telling!)

I have heard rumors about it, but it seems like it might be a little exclusive.

I heard that you might have to provide evidence that you are not already a highly impactful individual, and that you have some amount of angst in your life, so I'm not sure it's for everyone.

Good point! We wouldn't want to encourage people to lower their impact and become angsty to get into an EA Club -- that's what Clubhouse is for...

I'm really sorry I downvoted... I love the tone, I love the intention, but I worry about the message. Yes, less ambition and more love would probably make us suffer less. But I would rather try to encourage ambition by emphasising love for the ambitious failures. I'm trying to be ambitious, and I want to know that I can spiritually fall back on goodwill from the community because we all know we couldn't achieve anything without people willing to risk failing.
