Doing good things is hard.

We’re gonna look at some deep tensions that come with trying to do really good stuff. To keep it relatable(?!), I’ve included badly-drawn animals.

The mole pursues goals which are within comprehension and reach. At best the mole knows the immediate challenges extremely well and does a great job at dealing with them. At worst, the mole is digging in a random direction.

The giraffe looks into the distance, focusing on the big picture, and perhaps on challenges that will come up later but aren’t even apparent today. At best the giraffe identifies crucial directions to steer in. At worst, the giraffe doesn’t look where they’re going and trips over, or has ideas which are dumb because they don’t engage with details.

Moles have much more direct feedback loops than giraffes, so it’s harder to be a good giraffe than a good mole. When there’s a well-specified achievable goal, you can set a mole at it. Consequently many industries are structured with lots of mole-shaped roles. Idealists are often giraffes.


The beaver is industriously focused on the task at hand. The beaver rejects distractions and gets s*** done. At best, they are extremely productive. At worst, they miss big improvements in how they could go about things, or execute on a subtly wrong version of the task that misses most of the value.

The elephant is always asking how things are going, and whether the task is the right one. At their best, the elephant reorients things in better directions or finds big systemic improvements. At their worst, the elephant fails to get anything done because they can’t settle on what they’re even trying to do.

The mole and beaver are cousins, as are the giraffe and elephant. But you certainly get mole-elephants (applying lots of meta, but only to local goals) or giraffe-beavers (focused just on the object level of the big picture).


The owl is a perfectionist. They have high standards for things, and want everything to meet those. At their best, they make things excellent and perfectly crafted. If you want to produce the best X in the world, you probably need at least one owl involved. At their worst, they stall on projects because there’s something not quite right that they can’t see how to fix, and it’s unsatisfying.

The hare likes to ship things. They feel urgency all the time, and hate letting the perfect be the enemy of the good. At their best, they just make things happen! The hare can also be a good learner because they charge at things — sometimes they bounce off and get things wrong, but they get loads of experience so loads of chances to learn. At their worst, the hare produces loads of stuff but it’s all junk.


The dog is very socially governed / approval-seeking. They are excited to do things that people (particularly the cool people) will think are cool. At their best, they provide a social fabric which makes coordination simple (if someone else wants a thing done, they’re happy to do it without slowing everything down by making sure they understand the deep reasons why it’s desired). They also make sure gaps are filled — they jump onto projects and help out with things, or pick up balls that everyone agrees are important. At their worst, they chase after hype and status without providing any meaningful checks on whether the ideas they’re following are actually good.

The cat doesn’t give tuppence for what anyone else thinks. They’re just into working out what seems good and going for it. All new ideas involve at least a bit of cat. At their best, cats head into the wilderness and bring back something amazing. At their very worst they do something stupid and damaging that proper socialisation would have stopped. The more normal cat failure modes are to wander too far from consensus and end up working on things that don’t matter (where consensus would have made it easier to see that they didn’t matter), or to not know how to bring their catch back to the pack, so have it languish. Cats are proverbially difficult to herd.


All of the animals are archetypes. They’re each attending to something important, but each is pathological when taken too far. I think they’re useful to understand. Which characteristics are most valuable will vary quite a lot with task/project/role. On each dimension, I think we often need some balance; that can come from pairing people with different strengths, but it can also be good if individuals learn how to integrate the strengths of both archetypes. Sometimes this might mean switching between them (e.g. I think this is often correct for the beaver and elephant); sometimes it might mean a deeper integration.

Many of the people I’ve talked to about this identify more readily with one end of some (or all) of the dimensions. You might like to take a minute and see if that seems true for you. (In some cases the answer might vary with context — maybe you’re a hare for fiction writing but lean owlish for programming.) Probably the end you identify with represents a strength. That’s worth holding onto and leaning into! See if you can design your work around your strengths.

But also perhaps try connecting with what’s good about its opposite. I think real mastery often involves being able to access the strengths of all the archetypes.


This was originally written for the Research Scholars Programme onboarding. Thanks to several people (both inside & outside RSP) who provided helpful comments or inspiration.

Comments



There is a poll on the Effective Altruism Polls Facebook group on the question "With which archetype(s) from Owen's post "A do-gooder's safari" do you identify the most?"

https://www.facebook.com/groups/477649789306528/posts/1022081814863320/

Thanks for this. Very useful. If you ever plan a future iteration, I think that coining an abbreviation could be really helpful.

For instance, I like the way we can say OCEAN for the Big Five personality traits or ENTP for the Myers-Briggs. I think it could be good to have something similar for differentiating people within EA circles.

As a start, I think you (maybe) have these variables: temporal focus, abstraction, reflectiveness, impatience, perfectionism, and conformism, so PARTIC? Super catchy :) 

Throw in laziness (how much you want to work) and egoism (how much you need to gain/get credit from do-gooding) and you get PARTICLE.

Interesting idea!

I'm keen for the language around this to convey the correct vibe about the epistemic status of the framework: currently I think this is "here are some dimensions that I and some other people feel like are helpful for our thinking". But not "we have well-validated ways of measuring any of these things" nor "this is definitely the most helpful carving up in the vicinity" nor "this was demonstrated to be helpful for building a theory of change for intervention X which did verifiably useful things". I think the animal names/pictures are kind of playful and help to convey that this isn't yet attempting to be in epistemically-solid land?

I guess I'm interested in the situations where you think an abbreviation would be helpful. Do you want someone to make an EA personality test based on this?

Thanks for the response, Owen. I understand about the epistemic status.

"I guess I'm interested in the situations where you think an abbreviation would be helpful."

I imagine that I meet some new EA and I am trying to get to know them. After the standard questions (where did you hear about EA, what cause areas are you most interested in), I might want to ask about the sort of engagement they have with EA and doing good. At this point it would be useful to be able to reference the dimensions you have outlined and similar. I.e., 'So what sort of EA are you? How do you rate yourself on [abbreviation]?'

As this example might suggest, I think that an abbreviation could make such conversations more likely to occur by making the dimensions you have outlined easier to recall and communicate, and by increasing the probability that they disseminate widely.

"Do you want someone to make an EA personality test based on this?"

I don't think that it is a high-priority thing to do, but I think that an EA/do-gooder personality test could be quite useful in the future for understanding differences between do-gooders (within and outside EA), connecting people to the right projects/causes, and building the right sorts of teams (i.e., with a balance across key dimensions).

I know, for example, that Spencer Greenberg uses personality tests to help people determine fit for entrepreneurship, and we could have something similar.

[Someone strongly downvoted this. Please feel free to leave a comment or send a message to explain why, as otherwise I can't update correctly!]
