Thanks for this! Most of what you wrote here matches my experience and what I've seen grantees experience. It often feels weird and frustrating (and counter to econ 101 intuitions) to be like "idk, you just can't exchange money for goods and services the obvious way, sorry, no, you can't just pay more money to get out of having to manage that person and have them still do their work well", and I appreciate this explanation of why.
Riffing off the alliance mindset point: one shift in decision-making settings that I've personally found really helpful (though I could imagine it backfiring for other people) is switching from thinking "my job is to come up with the right proposal or decision" to "my job is to integrate the evidence I've observed (firsthand, secondhand, etc.) and reason about it as clearly and well as I'm able".
The first framing made me feel like I was failing if other people contributed; I was "supposed" to get to the best decision, but instead I came to the wrong one, which then needed to be, humiliatingly, "fixed". That framing is more individualistic, and carries a sense of final responsibility that raises the emotional heat in a way that isn't explained just by Bayesian reasoning.
The latter frame evokes thoughts like "of course, what I'm able to observe and think of is only a small piece of the puzzle; of course others have lots of value to add". It shifts my experience of changing decisions from embarrassing, a sign of failure, to natural and inevitable, and my orientation towards others from defensiveness to curiosity and eagerness to elicit their knowledge. And it shifts my orientation towards myself from a stakesy attempt to squeeze out an excellent product via the sheer force of emotional energy, to something more reflective, internally quiet, and focused on the outer world rather than on what my proposals will say about me.
I could imagine this causing people to go easy on themselves or try less hard, but for me it's been really helpful.
This is a cool idea! It feels so much easier to me to get myself started reading a challenging text if there's a specified time and place with other people doing the same, especially if I know we can discuss right after.
I'm interested in and supportive of people running different experiments with meta-meta efforts, and I think they can be powerful levers for doing good. I'm pretty unsure right now if we're erring too far in the meta and meta-meta direction (potentially because people neglect the meta effects of object-level work) or should go farther, but hope to get more clarity on that down the road.
So to start, that comment was quite specific to my team and situation, and I think historically we've been super cautious about hiring (my sense is, much more so than the average EA org, which in turn is more cautious than the next-most-specific reference class org).
One of the most common and strongest pieces of advice I give grantees with inexperienced executive teams is to be careful about hiring (generally, more careful than I think they'd have been otherwise), and more broadly to recognize that differences in people's skills and interests lead to huge differences in their ability to produce high-quality versions of various relevant outputs. Often I find that new founders underestimate those differences and so, e.g., underestimate how much a given product might decline in quality when handed from one staff member to a new one.
They'll say things like "oh, to learn [the answer to complicated question X] we'll have [random-seeming new person] research [question X]" in a way that feels totally insensitive to the difficulty of the question: it would take even a skilled researcher in the relevant domain a lot of time and trouble, and they have no real plan to train the new person, nor evidence that the new person is unusually gifted at the relevant kind of research. I think that dynamic is upstream of a lot of the project failures I see. I.e., I think a lot of people have a kind of magical/non-gears-level view of hiring, where they sort of equate an activity being someone's job with that activity being carried out adequately and in a timely fashion, which seems like a really bad assumption for a lot of the projects in EA-land.
But yeah, I think we were too cautious nonetheless.
Cases where hiring more aggressively seems relatively better:
Thanks Miranda, I agree these are things to watch really closely for.
Thanks Akash. I think you're right that we can learn as much from successes and well-chosen actions as from mistakes, and also it's just good to celebrate victories. A few things I feel really pleased about (I'm on vacation, so mostly saying what comes to mind, not doing a deep dive):
If you look back in a year, and you feel really excited/proud of the work that your team has done, what are some things that come to mind? What would a 95th+ percentile outcome look like? (Maybe the answer is just "we did everything in the Looking Forward section", but I'm curious if some other things come to mind.)
A mixture of "not totally sure" and "don't want to do a full reveal" but the "Looking Forward" section above lists a bunch of components. In addition:
Thanks for the kind words, James!
Thoughtful and well-informed criticism is really useful, and I'd be delighted for us to support it; criticism that successfully changes minds and points to important errors is IMO among the most impactful kinds of writing.
In general, I think we'd evaluate it similarly to other kinds of grant proposals, trying to gauge how relevant the proposal is to the cause area and how good a fit the team is for doing useful work. In this case, I think part of being a good fit for the work is having a deep understanding of EA/longtermism, having really strong epistemics, and buying into the high-level goal of doing as much good as possible.
I put a bunch of weight on decision theories which support 2.
A mundane example: I get value now from knowing that, even if I died, my partner would pursue certain Claire-specific projects I value being pursued; it makes me happy now to know they'd be pursued even after my death. I couldn't have that happiness if I didn't believe he would actually do it, and it'd be hard for him (a person who lives with me and who I've dated for many years) to make me believe he'd pursue them if it weren't actually true (and trying to would seem sketchy from a deontological perspective).
And, +1 to Austin's example of funders: funders occasionally have people ask for retroactive funding, saying they only did the thing because their model of the funder suggested the funder would pay.