We've all failed at times. It seems like the stakes are especially high for us as EAs because we're trying to make a difference in the world, and failure means not having as much impact as we could. At the same time, EA sometimes entails doing high-risk, high-value projects that have a 99% chance of having no impact but a 1% chance of making a huge difference. I'm curious to hear about your experiences with failure, how you've dealt with failure, and your suggestions for how EAs can deal with the possibility of failure.
I like this question :)
One thing I've found pretty helpful in the context of my failures is to try to separate out:

(a) my intuitive emotional disappointment, regret, feelings of mourning, etc.

(b) the question of what lessons, if any, I can take from my failure, now that I've seen the failure take place

(c) the question of whether, ex ante, I should have known the endeavor was doomed, and perhaps something more meta about my decision-making procedure was off and ought to be corrected.
I think all these things are valid and good to process, but I used to conflate them a lot more, which was especially confusing in the context of risky bets that I knew, before I started, had a substantial chance of failure.
I also noticed that I sometimes used to flinch away from the question of whether someone else predicted the failure (or seems like they would have), especially when I was feeling sad and vulnerable because of a recent failure. Now I try to do a careful manual scan for anyone who was especially foresightful/outpredicted me in a way that seemed like the product of skill rather than chance, and reflect on that until my emotions shift more towards admiration for their skill and understanding, and curiosity/a desire to understand what they saw that I missed. I try to get in a mood where I feel almost greedy for their models, and feel a deep visceral desire to hear where they're coming from (which reminds me a bit of this talk). I envision how I will be more competent and able to achieve more for the world if I take the best parts of their models and integrate them into my own.
My biggest mistake was not buying and holding crypto early. This was an extremely costly mistake. If I had bought and held, I would now have hundreds of millions of dollars that could have been given as grants. I doubt I will ever make such a costly mistake again.
Going to graduate school was a very bad decision too. After 2.5 years I had to take my L and get out. It was very painful to admit I had been wrong, but that is life.
I think one aspect of dealing with the possibility of failure is dealing with the possibility of accidental harm / downside risk - i.e., the possibility that an action would make the world worse in some ways (which may or may not outweigh the positive effects). Here is a collection of sources on that topic which people might find useful.
(But this is of course not a complete answer to your question, since the possibility of failure is not always about downside risk. It can also be about actions turning out to be more "expensive" (e.g., time-consuming) than one would like, actions turning out to achieve their intended objectives to a lesser extent than one expected/hoped, or other actions turning out to have probably been better choices than the action one took.)