Manages Impactful Government Careers: https://www.impactfulgovcareers.org/
Help civil servants do the most good
Thanks for this, Akash. I like this post and these examples.
One thing that has been helpful for me here is to stop talking and thinking about "status" so much and focus more on acknowledging my own emotional experience. E.g. the bullets at the start of this post are all examples of someone feeling shame.
Shame can be a useful signal about how it would be best for you to act in any given situation, but often isn't particularly accurate in its predictions. Focusing on concepts like "status" instead of the emotions that are informing the concept has, at times, pushed me towards seeing status as way more real than it actually is.
This isn't to say that status is not a useful model in general. I'm just skeptical it's anywhere near as useful for understanding your own experience as it is for analyzing the behavior of groups more objectively.
Thanks Michael, I connect with the hope of this post a lot, and EA still feels unusually high-trust to me.
But I suspect that a lot of my trust comes via personal interactions in some form. And it's unclear to me how much of the high level of trust in the EA community in general is due to private connections vs. public signs of trustworthiness.
If it's mostly the former, then I'd be more concerned. Friendship-based trust isn't particularly scalable, and reliance on it seems likely to maintain diversity issues. The EA community will need to increasingly pay bureaucratic costs to keep trust high as it grows further.
I'd be interested in any attempts at quantifying the costs to orgs of different governance interventions and their impact on trust/trustworthiness.
Thanks Jeff - this is helpful!
I don't know who would be best placed to do this, but I can imagine it would be really helpful to have more expansive versions of these diagrams, especially ones that make the specific nature of the relationships between orgs clear (i.e. going beyond fiscal sponsorship). A lot of the comments/discussion on this post seems to be speculation about the specific nature of these relationships.
Here is what I imagined:
I suspect doing something like this would end up being pretty subjective, it would change over time, and there would be disagreement among the people involved. E.g. things like "strength of strategic oversight" are going to be pretty ambiguous. But the value of attempts at creating some common knowledge here seems high given the current level of confusion.
(And alongside increasing trustworthiness, this kind of transparency would also be valuable for people setting up new orgs in the ecosystem. Currently, if you want to figure out how your org can/should fit in, you have to try to build a picture like the above yourself.)
Thank you for this post! One thing I wanted to point out is that this post talks about governance failures by individual organizations. But EA orgs are unusually tightly coupled, so I suspect a lot more work needs to be done on governance at the ecosystem level.
I most recently worked for a government department. This single organisation was bigger, more complex, and less internally value-aligned than the ecosystem of EA orgs. EA has fuzzier boundaries but, for the most part, functions more cohesively than a single large organisation.
I haven't thought a tonne about how to do this in practice, but I read this report on "Constellation Collaboration" recently and found it compelling. I suspect there is a bunch more thinking that could be done at the ecosystem level.
I am really into writing at the moment and I’m keen to co-author forum posts with people who have similar interests.
I wrote a few brief summaries of things I'm interested in writing about (but very open to other ideas).
Also very open to:
Things I would love to find a collaborator to co-write:
This is great - thanks for writing this.
My addition to this would be that you can increase your empathy for the suffering of others by connecting with your own suffering. Experiences of pain and fear in my own life definitely make it easier to connect with those feelings in others.
(And as well as helping empathy, connecting with the motivational usefulness of negative experiences can make the experiences themselves feel a little more meaningful (so a little less bad).)
Thank you for writing this Tyler! I really enjoyed it.
I have had a similar journey recently. I've also heard a bunch of other examples from people in this community with similar stories.
There is an interesting tension I find when communicating about this:
If someone's motivational framework is fundamentally shackled to a dominant, altruistic part, then the benefits of losing the shackles need to be put in terms that that part will value. But to really succeed at this you need to genuinely abandon the idea that the values of that part of you are above your other values.
For example, I've found myself saying things like "now that I have this perspective, I feel better and more intrinsically motivated, but I'm not being any less altruistic; I may even be more altruistic across my life".
You touch on this here:
This doesn’t imply discarding the useful outcomes of activities. In fact, I find that when I engage something for its own sake, I’m far more likely to produce virtuosic work. At a meeting of the rationality community, Anna Salamon once argued that to use truth-seeking as solely a means to fight existential risk (x-risk) would compromise the activity of truth-seeking – for instance, by cutting corners while rushing for an answer. So she proposed an alternative: “Rationality for rationality’s sake…for x-risk’s sake.”
This has the flavour of a trick to me. In order to be free to follow other ends intrinsically, you need to be vulnerable to the prospect of actually being less altruistic, of actually ending up doing less good. Maybe this is a necessary trick? Is there a way of negotiating with the dominant part that allows it to let go in full knowledge of its vulnerability?
(An approach that seems to be working for me here has been comfort zone expansion with respect to the feeling of being a bad person.)
As a counter-opinion to the above, I would be fine with the use of GPT-4, or even paying a writer. The goal of most initial applications is to assess some of the skills and experience of the individual. As long as that information is accurate, then any system that turns it into a readable application (human or AI) seems fine, and a more efficient one seems better.
The information this loses is the way someone would communicate their skills and experience unassisted, but I'm skeptical that this is valuable in most jobs (and suspect it's better to test for these kinds of skills later in the process).
More generally, I'm doubtful of the value of any norms that are very hard to enforce and disadvantage scrupulous people (e.g. "don't use GPT-4" or "only spend x hours on this application").