TL;DR: It's not enough to say we should celebrate failures: we need to learn from them. We can prevent a lot of unnecessary failures, and at the very least, we can fail more gracefully. I have some ideas and would appreciate yours too.

Epistemic status: I'm confident in my main claim that it's important for the effective altruism community to have a culture that supports graceful (or 'successful') failures, but that we're at risk of falling short of such a culture. I aim to highlight other claims I'm less sure about. I’m much more sure that the problem exists than about how we can solve it (as those solutions often involve lots of difficult tradeoffs), but my goal here is to start a conversation about preventing unnecessary failure and about what ‘successful’ failure looks like (so we have the kind of failures that are worth celebrating).


In Will MacAskill's recent post on effective altruism and the current funding situation, he mentions that while there is increased funding available for experimentation, if it's too available, it "could mean we lose incentives towards excellence":

If it's too easy to get funding, then a mediocre project could just keep limping on, rather than improving itself; or a substandard project could continue, even though it would be better if it shut down and the people involved worked elsewhere.
...That said, this is something that I think donors are generally keeping in mind; many seed grants won't be renewed, and if a project doesn't seem like a good use of the people running it, then it's not likely to get funded.

He then highlights No Lean Season as a standout example of how we should celebrate our failures — in this case, when an organisation decides to cease operations of a programme they no longer think is effective.

This “celebrating failures” notion is a celebration of both the audacity to try and the humility to learn and change course. It’s a great ideal. I wholeheartedly support it.

However, I fear that without taking meaningful steps towards it we'll fall far short of this ideal, resulting in people burning out, increased reputational risks for the community, and ultimately, significantly reduced impact. 

I think that effective altruism can learn a lot from private sector entrepreneurship, which often takes a high-risk, high-reward approach to achieving great successes. However, we should also learn from its failures, and not just blindly emulate it. For one thing, private sector entrepreneurship can be a very toxic environment (which is partly why I left). For another, we need to be mindful that our projects are likely to be very different from those in the private sector.

But there’s huge upside if we succeed: I’m deeply excited for more ambition and entrepreneurship within the effective altruism community. So in light of this, here are a few things that have been on my mind that can hopefully help us fail more gracefully, fail less grotesquely... or even better, fail less (while still achieving more).

1. Remember, not everyone is an entrepreneur (and we shouldn't expect them to be)

I've spent most of my career working in early-stage startups and co-founded a startup myself. From that experience, I've come to think that entrepreneurship[1] is not a good fit for most people.

It can be incredibly stressful to operate with a shoestring budget on very short funding cycles with not a lot of money in the bank — and not just for the leaders, but for the whole organisation. It can be tough to capture talent. Asking someone to leave their comfortable job to come and work for much less in an organisation that might evaporate is a hard sell. Even if you find people willing to do this, they're not necessarily the best people for the jobs. Those who work in early-stage startups are often expected to wear lots of hats and do jobs they have no relevant experience or expertise in. If you manage to keep up and you’re lucky to get to the scale-up stage, half the time you’re the wrong person for the job and someone gets hired above you. It can be incredibly demoralising for even the most ambitious and resilient people.

The result: I've seen startups eat up and spit out dozens[2] of people, and this isn't something that I'd like to see the effective altruism community do. There’s a lot of failure, and it’s rarely graceful. Furthermore, the few “f*#kup nights” I attended only featured the failures of successful people who went on to do great things — there’s no celebration of the third failed attempt, nor the mental breakdown, relationship woes, and long period of financial struggle that followed.

So how can we avoid these pitfalls? A few thoughts:

  • We can’t expect too many people to be entrepreneurs.
  • We need to be careful that we don't put people in situations where they're set up to fail unnecessarily.
  • We can’t just celebrate the failures of those who eventually succeed.

Instead, we can — and should — celebrate the audacity to be ambitious while remaining honest about mistakes, humble about success, and transparent about what we learned. 

And despite my words of warning: I think we can do better. I think we can create an ecosystem where social entrepreneurship is more successful and accessible — not just for founders but for their teams. I believe that the effective altruism community has the capability to be the best place on Earth to be a social entrepreneur. We can take just the best bits of Silicon Valley and marry them with the scientific method. Hell, we can even couple that with the best practices of bigger institutions like governments, corporations, militaries, universities, and religions. Take the best bits, and leave the undesirable practices behind.

2. Early-stage investors work very closely with startup founders (and maybe we should too)

In the private sector, early-stage investors (seed/angel investors, venture capitalists, etc.) usually work very closely with the startups they invest in. They have a lot of experience with startups and can provide guidance and support. This kind of close involvement is largely lacking in the effective altruism community.

Few people have experience with running effective altruism organisations, and I'd like to see more of them get involved in mentoring and coaching early-stage projects. Additionally, we have many people with public and private sector experience outside effective altruism who could be harnessed as advisors. One of the most important groups we should ensure are deeply involved is funders — and not just for the cash.

In my experience, one of the most valuable things from pitches and meetings with investors is that even if they don’t fund you, they often give constructive feedback — you'll know why they didn’t. Using that feedback, you could change your strategy, reframe your pitch, or get a strong enough signal that you should quit or pivot. For example, I've heard things like, "We love your team and would be excited to fund something else you pitch to us, particularly X" and “Have you checked out startup Y or Z? We’ve already invested in them and think that you should talk.”

Nowadays, venture capital firms (VCs) typically have specialists who work with the startups they invest in (e.g. specialists in pricing, structuring an employee stock ownership plan, or go-to-market strategy in a new location). The VCs I know try not to stretch themselves too thin and often work closely with founders to ensure they’re being developed into leaders. They’ll sit on the board and they’ll read regular reports carefully. Whenever teams I’ve been a part of (in startups and in EA) have received input from a funder, I’ve found it to be particularly helpful and I’ve always welcomed more of it (while also remaining sufficiently independent).

In coming years (just as they have over the past few months), people will do what funders are telling them: they'll be ambitious. They'll pour their heart and soul into coming up with ideas, building teams around them, and developing funding proposals. If all that happens in response is radio silence from the extremely small number of significant funders in this community — and the rest of the community doesn't have a mechanism to help them get feedback — we're going to churn and burn people. Not only is that bad for all the obvious reasons around how we treat people, but it’ll also dry up the project pipeline and we’ll fail to have the impact we seek.

As I’ve said before, I’m extremely aware that giving feedback isn’t easy, and I know it can be costly and time consuming[3]. I’ve experienced the difficulty of trying to give feedback to others (e.g. with hiring), and I’m aware of how much work it takes to do well and of all the downsides that can come from it. I also know, as someone who’s applied for funding, how a lack of feedback can cause you to doubt whether the funder and your organisation are aligned on your mission. But I think we can find a better balance where social entrepreneurs get sufficient feedback from funders (and the broader community) that results in us having a higher impact (while maintaining strong community health).

3. New projects need sufficient support and advice (or they may fail for bad reasons)

There are many reasons for projects to fail. Some might be for good reasons (e.g. the intervention wasn't effective but it was worth trying). However, some might be for bad reasons (e.g. the team didn't have enough support and they burned out).

Starting and scaling an organisation is difficult. Being the first and only person in an organisation doing a particular type of role is hard. Learning how to be a good manager takes a lot of work, and on the flip side, it can be difficult to work under inexperienced managers. If a swathe of new projects are going to start and scale up, we're going to need to make sure that there's enough support and advice available.

I worry that the effective altruism community might not have:

  • Enough support and advice for new projects.
  • Enough people with enough experience to provide the right kind of support and advice.
  • Appropriate systems to match people up with the right support and advice.

I really appreciated how Max Dalton encouraged people to mentor others in this EA Global talk. More of that kind of thing, please! Not just for those doing direct work, but also for mid- and late-career professionals[4] who are a part of the wider community and could be contributing their advice, skills, and knowledge.

4. We need to efficiently allocate advisors (or we might not have enough guidance)

At the moment, finding advisors is generally really ad hoc[5] and highly relational. This anarchy means that many projects have insufficient advice, some people with existing networks get much better advice, and a small number of advisors end up stretched too thin.

I'm certainly not alone in finding myself in a tricky position of juggling my day job with my commitment to the good of the wider project of effective altruism.

My day job at Giving What We Can (GWWC) is very much a full-time job; however, I want to provide as much support and advice to people who are starting new things as I can (I know what it's like to lack support, and it sucks). I sincerely want ambitious new projects to succeed (not to fail unnecessarily). Yet I don't want my involvement with other projects to take away from my GWWC work. (Not to mention I also have my own mental health to consider: running a startup while doing a lot of volunteering on other social impact projects likely contributed to my severe burnout in 2019 — I must not let that happen again.)

It gets even tougher when we're trying to scale fast across many new things simultaneously[6].

I'm not sure what the answer is to this. We need to somehow provide support to new projects without taking too much away from existing ones or burning people out.

One potential solution could involve explicitly funding such public goods. For example, funders could give an organisation additional funding to allow their staff to contribute more to effective altruism public goods, despite competing priorities. I’d happily offer my staff “20% time” to contribute to other projects/EA public goods if I had explicit funder/stakeholder support.

Another suggestion from a reviewer (Michael Noetel) that we as individuals can adopt is Cal Newport’s recommendation of having quotas. That can make it easier to commit to help without accidentally biting off more than you can chew. Cal’s experience is that most people are fine when you say, "I've hit my quota, sorry."

I’m sure there are more potential solutions — please add your ideas in the comments!

5. We need to be clear about what we mean by “failure” (and how to fail gracefully)

In the private sector, "failure" is usually very clear: a startup doesn't get funded, it doesn't get acquired, it doesn't go public, it runs out of cash and has to close down.

In the effective altruism community, "failure" is a lot less clear — and "failing gracefully" (like No Lean Season did) is not an easy thing to do. For example:

  • A project not being funded and not knowing why isn't failing gracefully, because they didn't learn anything and their lessons weren't shared.
  • A project that gets funded for six months and then fails because it didn't ask for enough money[7] isn’t failing gracefully, because its team members left before the project really had a chance to prove itself.
  • An established organisation that doesn't quite fit the funding profile of a small number of major donors but has enough funding to chug along with a small staff isn't failing gracefully, because it's unclear whether it's succeeding or failing.

Without evaluation capacity and sharing what we learn, we’re doomed to keep barking up the wrong tree. Without adequate guidance for new projects, even the most promising ideas may end up as a "failure" before they've gotten underway. Without direct and honest conversations about winding things down or pivoting, organisations might find themselves anaemic but still alive.

No Lean Season was easier to shut down because its staff knew what success looked like, knew what they were measuring, and knew that it fell below the bar. I suspect (but have no direct evidence) it was also easier because it was within the security of a larger organisation (Evidence Action).

This is actually a potential strength of the effective altruism movement: a small number of well-connected funders and advisors can coordinate early and often to help people redirect their talent. In contrast, private sector startups can keep shopping around for funders for quite a while before realising they should stop (there are roughly 1000x more venture capitalists and angel investors than there are large funders and high-net-worth individuals within the effective altruism community).

As well as becoming clear about what we mean by “failure” and “successful failure,” we also need to think about what we mean by success. There are more and more effective altruism projects that are much harder to measure (especially in longtermism), and I worry about a world in which we start to equate "funded" with "success." And even worse, when "funded to the tune of $X" is seen as a measure of the worth of the people involved. Funding shouldn’t be the goal, and how much funding shouldn’t be a measure of value or success.

As effective altruism grows (both in terms of funding and people), we'll need to come up with innovative ways of evaluating both new and continuing projects, and giving feedback at scale.

Additionally, it’d be good to develop a playbook to know when and how to end things or change course.

6. Startups systematically have to oversell to survive and get funded (we shouldn't do that)

Private sector startups almost always need to oversell to survive. They have to sell the vision, they have to sell the team, they have to sell the product. They have to sell, sell, sell — more convincingly and extravagantly than the last person.

This is because startup investors are investing on the expectation of high rates of failure. Startups therefore need to oversell the upside for it to seem worth the risk to the investor.

Walk around in the startup ecosystem for a while and you’ll get the feeling that everyone else is revolutionising everything all the time — and there’s absolutely zero risk of it going wrong!

This is a bit of an arms race though: everyone gets better at selling, and it becomes harder to sort the wheat from the chaff. Even worse, it becomes dangerous and self-defeating to admit to anyone that things aren’t going perfectly. All the entrepreneurs around you are pretending that everything is going amazingly in their startup (lest they be seen as weak and undermine their chances of getting investment).

I worry that we might start to see overselling emerge as the norm in the effective altruism community. People may oversell their ideas, oversell their team, oversell their impact, oversell how much they can spend, and severely downplay the risks. And this could lead to a lot of disappointment and disillusionment, and people will stop taking us seriously. All of this ultimately limits our potential impact.

In fact, this might already be the case to some degree. Zvi's post implied this is the case for one major funder, suggesting that a strong strategy to receive funding involves asking for substantially more than you need, and downplaying risks. At GWWC, we’ve experienced the tension between presenting our most rosy and most pessimistic view of our work. It’s hard to not notice your incentives. But it is much easier to thread the needle when you’re confident you and the funder have a strong relationship, with mutual trust and understanding. We’re generally fortunate in this respect, but I worry it’s not the norm, and even in our case, a lack of feedback can be a real challenge.

7. We need safety nets (or we might not catch people when they fall)

In the private sector, there are very few safety nets. If you fail, you're on your own. That's why a disproportionate number of successful entrepreneurs come from at least moderate privilege, and many people aren’t willing to take the risk. It always struck me as a huge loss that some of the world’s most talented people weren’t in a situation where they could reasonably try their hand at entrepreneurship.

I think we can do better with social entrepreneurship in the effective altruism community. If we set up safety nets, more talented people[8] will be more willing to take risks. We want a situation where if things don't work out, we can help lift them up and carry them off into the next thing (ideally, after some time to reflect and recover). Let’s not expect people to tightrope between two skyscrapers without solid training, a sturdy net below them, and medical staff on hand.

Some examples I've encountered of pretty understandable risk avoidance:

  • Someone worried about doing a summer fellowship because they'd risk losing the health benefits that they usually get from their job.
  • People who can’t switch careers because they can't (in good conscience) sell the risk of working on a short-term grant to their spouse (with kids to support).
  • People who can’t switch to direct work because they fear losing tenure or company shares that are vesting.
  • People who work for years in a job they hate to "build up runway” (while not even contributing to causes they care about) because they think that’s the only way they can take the risk of doing direct work.

One toy idea I've had for a while is paid roles that allow people to contribute their skills to the community without the responsibility of having to drive a project — essentially, large-scale absorption and deployment of talent that's relatively low risk for the talent.

For example, imagine if facilitating fellowships or CEA’s Virtual Programs were like Teach For America or a national service. Alternatively, what if we paid people while we trained them to take the lead on a higher-risk project within a bigger structure? For example, maybe we could expand Charity Entrepreneurship to be closer to the size of CSIRO, make a longtermist incubator, or create an effective altruism ‘civil service’? In these cases, we’d:

  • Hire people.
  • Train them.
  • Give them decent pay and benefits.
  • Second them off to projects for periods (perhaps permanently or for a longer tour of service if the project goes well) — i.e. they work for a mega-meta-organisation, but are moved to specific projects depending on need.

Such approaches could allow people to take risks they wouldn't usually take, giving us a much more diverse range of talent than you typically see in private sector startups. This, in turn, can help us have more impact.

It’s much easier for your project to fail gracefully if you feel equipped, your livelihood isn't on the line, and you're just "experimenting for a while" instead of quitting your job to work on a three-month grant and then finding yourself unemployed with bills to pay.

I'd be interested in exploring the appetite for such ideas and other potential ways of de-risking things for people.

8. We need to be careful how we talk about ambition (or we might overload and disappoint people)

I love both how ambitious our community can be and how generous people are with their time and advice. However, we need to make sure that we're not encouraging people to take on too much. This is something I personally find very hard. There are many things I want to work on, and I want to ensure I’ve done my part to help them succeed. 

But it’s not failing gracefully when you fail because you were scattered, stretched too thin, or burned out.

We also need to make sure that we're not encouraging people to set themselves up for disappointment.

We want people to apply for jobs they don’t think they’ll get and want people to apply for funding for projects they’re not sure about. The carrot of “Hey, you might just get this!” can help get people over the line and keep them from taking the path of least resistance (doing nothing). But we also need to manage expectations, and be careful how we talk about the opportunities[9], the probabilities of outcomes, and the cases when it doesn’t work out.

The following are incredibly discouraging:

  • "It should be easy to get funding!" — and then you don't.
  • “You’re perfect for this job!” — and then you don’t make it past the first round with no indication of why.
  • "I'm surprised you didn't get funding for X" — which some people will interpret as, "X is worth funding. Maybe it's you?"

It's not failing gracefully when you feel like a complete and utter failure just for trying.

This is something that I’m trying to work on with things I can affect. For example, in hiring rounds I’m trying to find the balance between the cost (and sometimes risks) of providing individualised feedback and the value the candidates get[10]. I’m also trying to be careful about how I talk about funding and job opportunities with people I think should apply[11].

I think Will’s recent post can help people better understand the funding situation and how to talk about it. I think that the work that CEA’s community health team does contributes to this too. I think that many of the more established effective altruism organisations are taking things like staff wellbeing more seriously. I applaud all of this.

Other than these efforts, I think it helps to keep the downsides front of mind when we’re talking about ambition, to make sure that we’re not accidentally guilting people into doing too much or setting people’s expectations too high.

9. We need to think about how we're going to deal with the hype cycle

In the private sector, there's a lot of hype around new startups. They get a lot of attention, they get a lot of investment, they get a lot of media coverage. And then, often, they fail.

In the effective altruism community, I worry that we end up with our own "hype cycles." It's suddenly much easier to get funding to do X, and high-status people are praising people who do X, so lots of people drop the things they're doing (which actually might be quite important in the long term) to chase the shiny new thing X. Then suddenly the vibe is that "It's too hard to get jobs in X" or "X is overfunded" — and the next hit piece swarming social media is that "These weird EAs think that everyone should do X."

These hype cycles can also completely kneecap organisations that are already doing demonstrably good work. I've already seen people interpret some recruitment tactics as ‘poaching,’ and that can be unhealthy for the community.

I think it’s good to encourage a certain amount of focus and to celebrate sticking to a plan long enough to test it out properly. I like CEA’s use of the “Tour of Service” model, and Leopold Aschenbrenner made some good comments about effective altruists needing to stick to things a bit longer instead of constantly reassessing.

10. We might be causing some of our own future bottlenecks (and this disrupts our pipelines for people, money, and social capital)

I acknowledge the solutions I've proposed above are often in tension with each other. As one example:

  • On the one hand, people with experience running organisations are often stretched too thin and occasionally unable to provide guidance and mentorship without severe costs to their own mission.
  • On the other hand, new projects struggle to find mentorship, and so don't get their feet off the ground.

To solve this, we need to think about the pipelines and infrastructure available. It isn't failing gracefully when we fail because we lacked a resource that the community itself could have provided (a case of an "own goal").

We need to think beyond the immediate bottlenecks and think about what bottlenecks we'll have in 5, 10, and 20 years’ time (and beyond).

A lot can be done to actively fill up the pipeline for future needs. This is likely to be a whole (ongoing) project that is way outside the scope of this post. However, I do want to draw attention to some examples where I think we could be causing our own bottlenecks.

The “EA is overfunded” meme

Now that we have a couple of billionaires in our community and we don't yet know how best to spend the increased money, I've heard people start saying “EA isn’t funding constrained” or even "EA is overfunded." They seem to imply that there's no point in continuing to fill up the pipeline of funders.

Yet we're still a tiny fraction of philanthropic and global capital, and the world's problems are still immense. As soon as we figure out megaprojects in longtermism, we'll be funding constrained again. If you look across to other causes, we're still an incredibly long way from filling up GiveDirectly's potential room for funding (e.g. $1 trillion to give a single $1,000 payment to the world's poorest billion who are at or near the poverty line). 

If we let this meme spread, we might create future funding bottlenecks. The meme might also lead people to think the funding bar is lower than it truly is, which in turn can exacerbate both evaluation bottlenecks and rejection rates.

Alternative messages to "EA is overfunded" that are more appropriate include:

  • "EA now has the budget for experimentation, so we encourage you to be ambitious."
  • “If you’ve been holding off on longtermist entrepreneurship because you were worried about funding, it’s worth seriously exploring your options here.”
  • "We already know many things that are funding constrained (e.g. cash transfers) and expect to find more after doing some research and experimentation (e.g. longtermist megaprojects)."
  • "We are nowhere close to having enough money to solve the world's problems, and we don't yet know where to start in some areas. But we're trying, and more funding and talent will certainly help."

The narrow focus on “current career bottlenecks”

Career recommendations can become too narrow if we focus too heavily on current bottlenecks and naively apply that advice too broadly. For example, after we realised that AI safety was important[12] and that a few organisations were struggling to hire senior AI safety researchers, it started to feel like everyone and their dog was encouraged to switch their career to study AI safety[13]. Now we find ourselves with a severe dearth of professionals with skills that were devalued for a while (like marketing and communications). We also have people who were recommended to go down a particular path a few years ago wondering if they made the right move[14].

It seems clear that 80,000 Hours doesn’t want this to happen – they actually have a good article about how to read their advice – but it seems this was internalised by many people nonetheless.

It’s always hard to talk about bottlenecks on the margins without causing an overcorrection down the line. A nice contribution to the conversation here was Holden Karnofsky’s post encouraging more people to think about aptitudes instead of career paths.

The narrow focus on “current EA priorities”

When I first came across effective altruism, I was interested in political decision-making (I had just written a thesis on voting systems and deliberative democracy). But I changed my focus in part because I’d too quickly internalised the “current EA priorities” at that point (at the time I was also deeply interested in global poverty, and donating was such an accessible way of helping, so it wasn’t a hard switch). I'm glad that a twentysomething person today with those interests would find a home within effective altruism (even if they later change their views on which causes to prioritise), given the increased interest in this. I hope that as we grow we cast a wider net, attracting people who can filter themselves into places where their comparative advantage is most significant — instead of feeling like we’re asking them to drop everything they're doing for the current bottleneck.

Yes, we want to seriously challenge people to rethink their priorities — there’s a lot of value in that exercise — but we’re bound to have people who know things we don’t, and we don’t want to lose them. We’re also in a fundamentally different situation as we grow and look at problems at a completely different scale (again, not acting as much on the isolated margins as we used to).

Making enemies instead of friends

Sometimes we make enemies (thus losing social and political capital) when we aren't meeting people where they are at or when we are too adversarial. I see this too often in effective altruism social contexts and in places like the EA Forum, where one-upmanship and proving you're smarter than someone else[15] can seem to be valued more than reaching our shared goals. I enjoy intellectual jousting in personal contexts where there are high levels of trust — but on the internet and with new people experiencing the community in-person, we need to work doubly hard to be welcoming.

It can help to make more of an effort to:

  • Find common ground and meet people where they are at (curiosity and Socratic questioning instead of lecturing)
  • Charitably interpret what people say (e.g. by steelmanning)
  • Stay humble and curious (avoid seeming like “the chosen people” with “the answers”)
  • Acknowledge the good faith of others (e.g. “Thanks for saying X; that makes me feel like we're getting closer to understanding each other's viewpoints. I think the core of…”)
  • Recognise that we’re all on a journey to do good and celebrate people moving forward in that journey (e.g. starting with “It’s so great that you’ve started giving to the Clean Air Task Force! What inspired that?” instead of starting with “You realise that climate change isn’t an existential risk and you should really be trying to devote your career to X instead?”)

(Note: I do think that our community has a better quality of conversation than many places on the internet – and this is worth celebrating! But I think we can always do better.)

Burning people out

Attracting talented, value-aligned people to help us solve important problems and then subsequently burning them out will contribute to our own bottlenecks. I mentioned earlier the risk of burning out advisors and teams, but this applies to all community members. I’ve spoken about workload and competing priorities, but there are also psychological factors at play here.

In particular, if we attract people who take seriously and deeply internalise the scale of the world’s problems and the idea that consequences really matter, it can be very easy for them to feel like they’re only valued instrumentally (for the impact they have) and that they’re never going to be able to do enough. Psychologically, this is a recipe for disaster — especially if we double down and focus too narrowly on getting young people into longtermist careers, because they’ll miss out on the support networks that a more robust community has to offer.

I think that focusing on things with high expected value is really important. However, we cannot be so laser-focused that we miss the forest for the trees. I’ve seen too many[16] highly engaged EAs leave the community, become very unproductive, or tell me that they’re on the verge of leaving for psychological reasons. From a purely total consequentialist perspective, I think this undermines our ability to have an impact. From a more human perspective, I think this is tragic: it sucks to see people I care about having such a hard time (and it reduces my motivation[17] to bring more people into the community who might have a lot of impact).

11. We need better awareness of what has worked or failed, and why (or we might keep trying the same things)

I worry that we're experiencing the file drawer effect with funding applications and project ideas. We’re operating blind if most of the community is completely unaware of what projects are funded, what projects aren’t, and why. I've often seen people working on pretty much the exact same thing (and sometimes the idea has already been rejected by funders). 

The private sector is often intentionally secretive because companies are competing to get to market first. But unlike the private sector, we don't need to be: we can cooperate. For example:

  • If people see that certain types of things are consistently not funded because they're considered "too risky," then we might try to figure out whether they can be de-risked, or come to a common understanding that it's probably a bad idea.
  • If people see that certain types of things are consistently not funded just because they're "not a good fit" for a particular funder, then we'd know to apply to different funders, or new funders could enter the ecosystem to fund that type of thing.

(I plan to write more about this and the mix of funding diversity and cooperation in the future. If you have thoughts, please let me know.)

Also, back to point #3, we're wasting precious resources if we're duplicating efforts. It sucks to spend a bunch of time pulled away from our core work to advise on something that's destined not to get funded, especially if we could have figured that out earlier.


Epilogue

These are just some initial thoughts I have finally written down. This was mostly a stream-of-consciousness piece, but hey, that meant I actually posted something to the EA Forum (something I’m still terrified of) that I hope will help. I could surely have expanded on and more carefully caveated a lot of these, but the article was getting pretty long and any of these points could be a post in its own right.

At the end of the day, we need to expect that we are going to fail sometimes. It’ll sometimes be unnecessary, it’ll sometimes be far from graceful, and often we might be lacking any clear lessons. But by speaking up about what we don’t think is working, thinking ahead about what we might try, and treating everyone involved charitably (keep applying Hanlon’s razor), I believe that we can make a significant positive impact — not just on the world, but also on the lives of those participating in this grand project.

I’ve mentioned some potential solutions to the concerns I have, but would really appreciate you sharing your ideas about how we can keep shooting for the stars (with a better targeting system) while managing to float down gracefully when we miss.


Thanks to the following people for quickly providing some input on this post before I posted it (and the mistakes and bad ideas are all my own): Grace Adams, Peter Slattery, Michael Townsend, Michael Noetel, Bradley Tjandra, Nathan Sherban, Katy Moore, Jack Lewars, Julian Hazell, Federico Speziali

  1. ^

    I’m not just talking about founders when I talk about entrepreneurship. Most early employees are in a very similar boat to the founders: they experience a lot of the same risks and uncertainties (to a lesser degree, but also their upside is often much lower).

  2. ^

    I can’t emphasise enough how tragic it is and how little people within the startup ecosystem recognise this negative externality.

  3. ^

    While I was waiting to publish this, Linch published this post about the tradeoffs/difficulties of giving feedback as a grantmaker, which is worth reading for a different perspective. After reading it, I still maintain that we can find a better balance than where we are now, and I’d expect that to result in more impact overall.

  4. ^

    I’d be excited to see more deliberate work in this space and hope that organisations like High Impact Professionals might help bridge the gap here.

  5. ^

    Although there are some good efforts out there in the personal mentoring space, such as Magnify Mentoring, and there’s also Charity Entrepreneurship’s incubation program. A good outcome of EAG and EAGx is often that people find mentorship, but this is still pretty ad hoc and highly relational.

  6. ^

    I was recently reviewing and providing advice on many FTX Future Fund applications during the weekend after I had dental surgery — that’s not great! Obviously the dental surgery was just poor timing, but it was a short timeframe for an enormous number of people to get funding applications in without knowing if/when there’d be another round, and only so many people could provide helpful advice. The stakes felt pretty high, and I ended up feeling pretty bad about grant applicants being knocked back when I’d reviewed their applications while in really poor health and knew I hadn’t given great (or much) feedback.

  7. ^

    A plausible mistake if it's the first time someone has done a budget and they didn’t predict salary inflation or missed a potentially expensive item (like the requirement for expensive public liability insurance to run an event at a venue that they already put a deposit down for).

  8. ^

    Particularly those from less privileged backgrounds.

  9. ^

    I’m aware of one organisation in the community that has directly emailed many candidates for a role with text that seemed to imply each specific recipient had been identified as a particularly good fit. Flattery can increase the rate of applications and help people overcome impostor syndrome, but it’s still important to manage expectations; otherwise you’ll further exacerbate the bottlenecks created by impostor syndrome in the long term (“I thought I wasn’t good enough for X, then Y person told me I was, and I still didn’t get past the front door. I mustn’t be good enough for Z either.”).

  10. ^

    At GWWC and Effective Altruism Australia I’ve started a process where candidates who reach different stages can get access to different feedback (e.g. candidates at the application stage can see their anonymised ratings compared to the rest of the field, and those at the paid work task stage can see anonymised written feedback on their trial tasks).

  11. ^

    For example, instead of “X is a great idea and should totally be funded!” I’m more likely to say “I’m not aware of anyone doing X type of project and think that it could be valuable for reasons 1, 2, 3. B funder might be suitable to apply to. If you draft up a proposal that covers Y and Z, then I'd be happy to spend an hour reviewing it and 15 minutes to chat about it.”

  12. ^

    Which I strongly agree with. I think it is really important that we commit substantial resources to working on this.

  13. ^

    I’ve had several people complain to me that they felt overly pressured and it didn’t make a lot of sense given their aptitudes and career trajectory. Some of these people I think would have been a good fit eventually if they’d had longer to warm up to the idea instead of being given a hard sell and then abandoned when it didn’t seem to convince them right away.

  14. ^

    One of the reviewers made this comment: “I think that the career recommendations change a bit too abruptly in EA and that not enough support is given for past top recommendations, in fact some of top careers from the past are currently almost discouraged. Which is very confusing for those who followed the recommendations. For example a few years back consulting was regarded as a good way to gain career capital, and now people say ‘Wait, are you a consultant? I think you are wasting your time’.”

  15. ^

    One (of many) places to see the negative types of impressions that potentially aligned people have is in the comments of this post by Matthew Yglesias. E.g. “I genuinely believe in lower-case effective altruism – but what I've seen in the real world is either an Ayn Rand level of selfishness justified by deferred responsibility” or “in fact the 'rationalist' community seems to largely be about internal status-seeking around issues like AI risk than thinking through real problems” or “Humility is often a surrogate for trust and the EAs don't exactly have that”. Of course we cannot control everyone’s impressions, but some language and behaviour is quite off-putting and not so hard to avoid. I acknowledge that every social movement can feel off-putting (especially to outsiders), however, I hold us to a higher standard (which I myself fall short of) because I believe we can come close to meeting that standard and that the upside is big.

  16. ^

     Maybe there’s a sampling bias here of people who are around me, or that I feel like a safe person to speak to so I hear it more often. Or maybe this is more widespread and well-known. I have no idea of the scale of this, just some anecdata.

  17. ^

     I’ve been told by people who are experiencing this that they’re worried that if they were to introduce their friends to EA they’d be responsible for making their friends' lives worse. Ouch.

10 comments

Thank you for writing this!

Here are some of my notes / ideas I wrote while reading.

This “celebrating failures” notion is a celebration of both the audacity to try and the humility to learn and change course. It’s a great ideal. I wholeheartedly support it.

Something I remembered when reading this was the idea, which most people here might have been exposed to at one point or another but might have forgotten, that “Adding is favored over subtracting in problem solving” (https://www.nature.com/articles/d41586-021-00592-0).

I believe making it easier for people, organizations, etc… to remember subtractive solutions exist and to implement them would probably be a GOOD thing in EA. I think collectively reinterpreting failures, in some select instances, as subtractive solutions could further this.

However, I fear that without taking meaningful steps towards it we’ll fall far short of this ideal, resulting in people burning out, increased reputational risks for the community, and ultimately, significantly reduced impact.

For one thing, private sector entrepreneurship can be a very toxic environment (which is partly why I left).

While I am not familiar with private sector entrepreneurship and the types of burnout it engenders in people, I am grateful for the occasional (availability heuristic working here, I am only recollecting the past year or so of posts) mental-health post on EAF. These posts upweight the importance of taking breaks and doing other, less taxing activities in fighting the urge to ruthlessly optimize your behavior to get as many things done as possible.

I also find that after a break or walk, I have an easier time with the work I was doing. For me, and probably for many others, long-term effectiveness at thought-work requires some periods of low-intensity activity.

Interesting link https://www.fuckupnights.com/; I didn’t know such a thing existed.

I feel as though this list can be expanded somewhat.

So how can we avoid these pitfalls? A few thoughts:

  • We can’t expect too many people to be entrepreneurs.
  • We need to be careful that we don’t put people in situations where they’re set up to fail unnecessarily.
  • We can’t just celebrate the failures of those who eventually succeed.

To be more clear, are the pitfalls in “high-risk, high-reward approach[es] to achieving great successes” the following?

  • Rapid turnover rate, due to abundant failures and burnout, among other things like stress and that the work is demoralizing
  • If a failure is celebrated, it is most likely a failure that was followed by a noticeable success, which is likely not characteristic of most failures

Here are two additional points that might be helpful in avoiding these pitfalls and that might also be instrumental towards “help[ing] us fail more gracefully, fail less grotesquely… or even better, fail less (while still achieving more).”

  • We shouldn’t expect non-entrepreneurs to maintain stellar performance in entrepreneurial positions or tasks for too long
  • We should keep track records of our failures and performance inadequacies, rather than shying away from them, so that they can be routinely prevented

In my experience, one of the most valuable things from pitches and meetings with investors is that even if they don’t fund you, they often give constructive feedback — you’ll know why they didn’t.

I’ve been coming across the idea of “garnering more feedback” recently within the EAF community:

  • https://forum.effectivealtruism.org/posts/GskGj9wCzLdP8WgmT/an-easy-win-for-hard-decisions
  • https://forum.effectivealtruism.org/posts/iPqHdRYGCNj5bTn52/help-with-the-forum-wiki-editing-giving-feedback-moderation
  • https://forum.effectivealtruism.org/posts/Khon9Bhmad7v4dNKe/the-cost-of-rejection

Overall, I think increasing the amount and quality of feedback on posts, job rejections, grant applications, etc… is something very much worth moving towards.

I agree there are not enough “Appropriate systems to match people up with the right support and advice.”, but I also believe that there are outlets that do exist but which might not be very visible or easily findable (perhaps 80000 Hours 1 on 1 advice calls could fall into this category).

Quotas and “explicitly funding such public goods” sound like decent solutions, especially if they’re combined (a quota would limit your 10-20% community-feedback public-good funded time, ensuring it doesn’t turn into more than you can handle). I wonder how many EA staff would have to offer 10-20% of their time to make a real difference in the current paucity of feedback.

Another solution might be to have more collaborations between organizations or individuals. I often find that when I co-author or work with another person on a project, I’m typically exposed to new and useful resources and sometimes refine my skills in ways I hadn’t considered prior.

Subjectively evaluating two enterprises in terms of their “funded to the tune of $X” measure seems like a good example of Goodhart’s law creeping in. I think more detailed and extensive transparency between funders and fundees would control overselling, but this is easier stated than implemented.

With regard to “Some examples I’ve encountered of pretty understandable risk avoidance:”, I think accumulating the outcomes (again, including the failed ones) of risky scenarios that people have faced in the EA community might enable others to make better decisions. For example, making people’s stories of a career change or transition into EA (a lot of the AMA career posts on EAF fulfill this purpose) more visible and then collecting these stories on a “collection of risky decisions for EAs” page might do well in this regard.

Paid training strikes me as something that is neglected and, if scaled, might really help with the earlier point “We need to be careful that we don’t put people in situations where they’re set up to fail unnecessarily.”

I would be happy to see the “Making enemies instead of friends” section made into its own post.

Again, thank you for writing this, Luke!

Thanks for writing this! I strongly upvoted this comment because I think it contributes a lot to and extends the OP on many different points.

Glad you posted this, Luke. I really like the idea of EA being more ambitious, and I think if the community onboards some of the suggestions in this post, it's more likely to happen. 

One specific comment: I am also concerned by overselling (discussed in section 6), but I think it's worth being aware of how "EA overselling" might look different from regular overselling. 

In my mind, I can imagine grant applications overselling while saying things that sound pretty humble, like:

  • "I think there's a small, but plausible chance that this organisation succeeds in its mission."
  • "We have an extremely competent director, though one risk is that they'd have more impact on a different project."
  • "We understand that the base-rate of success for similar projects is quite low, but our team has a strong inside view that the project will succeed. Our credence in success is 70%."

These kinds of statements genuinely do show self-awareness, and may be of interest to funders, so I think it's a good thing that the community has a norm where this language is rewarded, and not punished. But depending on their context, they might only be a little humble, or even worse, could even have the effect of making the overselling harder to spot. Take the above example, where someone says it's plausible but unlikely they'll succeed in their mission. If their mission involves literally saving humanity from a key extinction risk, claiming  it's plausible they'll succeed might actually be wildly overconfident (or at least misleading). 

I also think EA overselling might come more from omission:

  • Not sharing highly relevant, but negative, information -- e.g., criticisms others have made.
  • Providing a vision for your project that implies more alignment with the funder than there is in reality (e.g., the funder cares about X, but you care about X, Y and Z, yet in the application you de-emphasise Y and Z).
  • Underselling what you and the team would otherwise do if you didn't work on your project.

In any case, I think your suggestion is right. To get around this, it's just really important that funders and fundees have a strong, high-trust relationship. But I agree, that's hard to do if there's little communication between them.

Great post Luke! I just wanted to add another argument to point 8:

8. We need to be careful how we talk about ambition (or we might overload and disappoint people)

I think another related aspect to this (in my experience with High Impact Medicine) is that you also want to be careful because, even though people might be ambitious, their personal and professional situation might preclude them from taking an 'ambitious' leap. Even though on the whole I think it is net positive to encourage people to be ambitious, we should caveat this with an appreciation of different career and life situations. I think a failure to do this adequately can make people feel like they are not doing enough, or are not enough.

Definitely. Thanks for sharing that argument and example!

Fantastic post Luke, thank you for writing it! 

7. We need safety nets (or we might not catch people when they fall)

One example of this kind of thing I admire is Charity Entrepreneurship's support of incubatees after the program. Even if a participant doesn't found a charity or their proposal is rejected,  CE gives them career guidance and connects them to jobs at other charities and organisations in their network, and can possibly extend the provision of a stipend. 

Yeah, that's a fantastic example. I really think that CE are a standout organisation on a lot of fronts.

One potential solution could involve explicitly funding such public goods. For example, funders could give an organisation additional funding to allow their staff to contribute more to effective altruism public goods, despite competing priorities.

I was thinking something similar reading some comments around funds giving (or not giving) feedback. There does seem to be a missed equilibrium:

  • It's in everyone's interests if there is more feedback, support, coordination, etc.
  • It's not in the interests or capability of any one organisation to take this on themselves.

I might not jump to assuming it would all be coming off existing staff's plates though. 

Anyway, great post. 

Thanks Luke. With regards to "better awareness" (#11). Competition is good and so people working on nearly the exact same thing isn't necessarily a bad thing. Ideally, there are identifiable differences so that we can learn from them. I take your concern as being, probably rightfully so, these groups working on things without knowing of similar ongoing/completed attempts.

Would you propose collaborative record-keeping between funders and entrepreneurs of ongoing and completed projects, with their potential points of failure, including why funding was not approved?

I also really like the criticism of the "EA is overfunded" meme.  I think emphasising the good that can arise from the donations by the global 1% remains an important part of EA, and saying that "EA is overfunded" is contradictory to this.

Thanks Punty!

I take your concern as being, probably rightfully so, these groups working on things without knowing of similar ongoing/completed attempts.

Yep! I agree that competition is sometimes great, but it's the lack of awareness/learning/collaboration that can be a problem.

Would you propose collaborative record-keeping between funders and entrepreneurs of ongoing and completed projects, with their potential points of failure, including why funding was not approved?

Yep, something like this. For example, I've heard/read many times both these things:

  • The idea seemed good but the team didn't seem like they could execute (knowing this, another team that could be a better fit should definitely apply, and/or the original team could upskill, find new members, etc.)
  • The idea seems pretty bad (for reasons that might not be immediately obvious) but the team seemed pretty capable (knowing this, other people wouldn't pursue the idea, and the team would move on to something else instead of thinking that "they" are the problem).

I also really like the criticism of the "EA is overfunded" meme.  I think emphasising the good that can arise from the donations by the global 1% remains an important part of EA, and saying that "EA is overfunded" is contradictory to this.

Thanks! It's a meme that I think could be incredibly self-defeating.