Pineapples on Mars, why you should stop looking at orgs for cause prio, and how we compare: an INT dissection for beginners
Clarification: by 'not changing biases', I meant don't try to change intuitive societal biases (such as valuing cuteness) for the sake of effectiveness, but rather work with them to our advantage. E.g. if I were raising money for animals, maybe don't pick a cockroach. You could, to get the shock factor, some uniqueness, and to bust stereotypes, but you are fighting an uphill battle when there are bigger wars to win.
- BACKGROUND: Impact Research Groups fellowship
This is a short blog post, written on the day, about my experience of day 1 of the IRG fellowship. I then mulled over whether to post it and haven't changed anything (aside from one misspelled org name pointed out to me), but decided to share it as I've been doing lots of cause prioritisation work myself.
Impact Research Groups is a London-based high-impact research programme run in collaboration with Arcadia Impact and London university EA groups.
Alicia Pollard, head of groups, started off our morning with some great introductory talks, which led to lively discussions with the other wonderfully smart, curious and epistemically-driven participants.
My cohort comprised 35 people, with 3 in my biosecurity stream (other streams included AI governance, technical AI alignment, animal welfare, etc.). I applied hoping to gain research experience in infectious disease, pandemic prevention, public health and more.
The aim is a semi-structured 'supervised research' project culminating in a submission and presentation over roughly 8 weeks. The kick-off weekend was based in the LEAH coworking space in London, followed by mentor and team meetings on your chosen project.
The hope is for aspiring participants to gain confidence, networking and skills in high impact cause areas.
- Why we prioritise
- I want to help people (measured by metric x, y, z)
- Certain interventions are more effective than others
- We can't solve all the problems in the world (limited resources and burnout)
- I weigh these factors when ranking causes (or consider that certain causes, although deserving of empathy, should not receive more from our limited means)
- So I should value this cause as more effective
Such a fantastic programme naturally lends itself to discussions about 'what even is impact?' and 'why work in this cause area?'. That is exactly what we started discussing in our first sessions.
Prioritisation is not exclusive to EA, but cause prioritisation is commonly flagged by major organisations such as 80,000 Hours, the Future of Humanity Institute, and GiveWell (alongside EA community building and global priorities research) as a good first step before any large career move or project.
Whether you dedicate your time, finances, donations, career or research hours- we all have something we can do to have a positive impact (no matter how we define it!).
The issue is that each individual's circumstances, resources, opportunities, biases, values and more all vary.
For this reason, we can create definitive lists based on certain metrics, such as which interventions are most cost-effective, which drug saves a certain number of lives, or how many years of pain are averted, but we can't keep up with the changing opinions, data and discourse in each topic.
Take the problem profiles on 80k's website, listing fields such as AI alignment, biosecurity and pandemics, global health, conflict, nuclear weapons, and more. All of these follow the rigorous, evidence-backed and widely replicated INT framework:
IMPORTANT
NEGLECTED
TRACTABLE
Essentially, what is the scale (how many individuals are affected), how many resources are allocated to it (and how are they distributed), and how solvable is it?
The reason these lists use the framework as a starting point is that it gives reasonably objective relative values.
For example, take some variations of cause areas that don't make the cut of 'most pressing problems', and you can easily see how the INT framework captures a good tripod of factors.
A problem that is important and neglected may affect many people or severely impair someone's life, and may have almost no one tackling it, and yet still not be worth working on (relative to other causes) because it simply can't be solved.
Take the problem of 'people lightly bumping into others'. This is very common (affects many people, so IMPORTANT) and NEGLECTED (no one is really financing research or solutions to prevent it), and yet it probably can't be completely solved with all the effort in the world (not TRACTABLE).
Another combination is something NEGLECTED and TRACTABLE but not IMPORTANT. Take the fact that there are no pineapples on Mars.
The issue has no science teams frantically working away at it (NEGLECTED), and it can be SOLVED (with a giant space mission to deliver the fruity extraterrestrial gift), and yet probably isn't an IMPORTANT issue to work on.
Finally, take something TRACTABLE (solvable) and IMPORTANT (affects many), but not NEGLECTED, such as cancer research. Now, I am NOT saying don't work on (or prioritise) this, but taking a purely 'effective cause' stance, will you, the 10,001st person to work on it, make that much difference?
Will you make more difference than any other researcher?
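The three toy examples above can be sketched as a multiplicative INT score, where a near-zero on any axis sinks the cause. All the numbers below are invented purely for illustration, not real cause-prioritisation data:

```python
# Toy sketch of INT scoring -- every number here is made up for illustration.
causes = {
    # cause: (importance, neglectedness, tractability), each scored 0-10
    "light bumping into others": (4, 9, 1),   # common and ignored, but unsolvable
    "pineapples on Mars":        (1, 10, 5),  # neglected and solvable, unimportant
    "cancer research":           (9, 1, 6),   # important and solvable, but crowded
    "malaria prevention":        (8, 6, 8),   # strong on all three axes
}

def int_score(importance, neglectedness, tractability):
    """Multiply the three factors: a low value on any axis drags the cause down."""
    return importance * neglectedness * tractability

ranked = sorted(causes, key=lambda c: int_score(*causes[c]), reverse=True)
for cause in ranked:
    print(f"{cause}: {int_score(*causes[cause])}")
```

Multiplying (rather than adding) encodes the post's point: being strong on two axes doesn't rescue a cause that scores near zero on the third.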
Given diminishing returns, a look at the low-hanging fruit in neglected fields would allow you to have a deeper impact on another cause with the same (or fewer) resources.
If you had a groundbreaking new way to look at a common problem, that's great, but most likely the time, dedication and resources needed to make a 1% difference to cancer research are just so great compared to, say, financing malaria treatments. These health interventions are on average around 180 times more effective, and can save more lives for less money.
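To see what a multiplier like that means in practice, here is a back-of-the-envelope comparison. The cost-per-life figures are hypothetical placeholders chosen only so the ratio matches the '180 times' claim above; they are not real estimates:

```python
# Toy cost-effectiveness comparison -- all dollar figures are invented
# placeholders, picked only to illustrate a 180x effectiveness ratio.
budget = 100_000  # hypothetical dollars to allocate

cost_per_life = {
    "marginal cancer research": 900_000,  # assumed cost to save one extra life
    "malaria prevention":         5_000,  # assumed cost to save one life
}

for intervention, cost in cost_per_life.items():
    lives = budget / cost
    print(f"{intervention}: ~{lives:.1f} lives saved per ${budget:,}")

ratio = cost_per_life["marginal cancer research"] / cost_per_life["malaria prevention"]
print(f"relative effectiveness: {ratio:.0f}x")
```

With these made-up figures, the same budget saves 20 lives via malaria prevention versus roughly 0.1 via marginal cancer research: the point is that the ratio, not the absolute numbers, drives the prioritisation argument.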
These are just examples of how we try to act on the initial principles listed above.
However, this is where the issues start. We may logically realise a certain cause or intervention is more effective or more heavily weighted, yet emotionally and intuitively we may feel a different affinity for it (fuzzies vs utilons).
There is a reason all the adverts for charities use cute puppies or crying kids rather than cockroaches or shrimp, despite the magnitude of suffering for the latter being greater and the interventions more cost-effective by these metrics.
And that's where the issue comes in. Hard vs easy cause prioritisation.
- We disagree and why that's good
You see, saying 'humanity has this probability of dying from misaligned AI systems running rampant within this timeframe' is great! (ish).
But backing up the timeframes, extent, and more means disagreement. Experts can barely agree on what font to use in academic reports, much less the timeframe for 'when should we expect existential risks from new digital entities capable of suffering and preference'!
Instead, this is a case of HARD prioritisation. It's not that the overall cause area ranking was 'easy' in terms of effort, but that looking up objective(ish) measures in the INT framework is pretty easy to copy and paste.
But take digging into 'how much should I actually devote to cause x'. That depends.
If I told you the INT framework would place shrimp welfare at the top, due to their magnitude (trillions suffer and die), it is neglected, and can be solved- would you suddenly abandon your work and passions to work on freeing shrimp?
Probably not (unless you already leaned towards that).
You have your own personal comparative advantages, but aside from sheer utility impact, you also have preferences, lifestyle constraints, and your own wants and dreams and plans. You don't want to burn out, or abandon your previous work to suddenly retrain in an entirely new cause area. No one should always maximise for peak objective effectiveness.
Instead, personal factors affect how much you personally value the causes. Does it matter what species the suffering entity is (e.g. a digital mind, a dog, an insect), or the number of individuals suffering, or the time they are in relative to you, or other factors such as their own moral weight?
And philosophical 'should' weightings differ from how you would act in real decisions. From the trolley problem, to drowning children, to oiled birds, we have many thought experiments trying to grapple with 'what should we want? What should we want to want? What does it even mean to want? What is good?' and so on.
Instead of definitive answers, we only get questions. And more questions. And more.
The biggest question is really what does it mean to 'do more good'? What do we even do to do good?
Some themes tend to stand out, such as:
- Humans are so bad at comparing big numbers (scope insensitive)
- Out of sight, out of mind: hypothetical suffering doesn't tend to invoke as much drive to stop it as immediate, relatable or present harm
- All suffering deserves empathy but we can't give everything the same resource allocation
- Some things are better than others for our personal fit and impact
- Some cause areas affect more people, more deeply and could be impacted
- Some interventions are many times more effective than others
And then you bring in questions such as: do we value future suffering as much as present suffering (and in what proportion)? Should age or location affect the resources we put towards a problem? Should instrumental factors such as 'cuteness' affect decisions on an 'effective-in-our-society' level?
We don't need to 'correct' these biases, but acknowledge and scope them. Due to innate human differences, our personal priorities diverge from the philosophical ones, and that changes what we each should work on, care about or spend resources on.
The issue is that we may want to work on everything, care about everyone, help everything, and give everyone resources.
But we can't.
So you can see, it gets harder and harder…
But that's a good thing!
We don't NEED to solve every problem in the world to have an impact.
Instead, maybe look at these cause prioritisations as a helpful tool to narrow down your own niche. Add in your own preferences and experience, and then figure out YOUR PRIORITY.
We may not fully eradicate disease, save all lives, stop all suffering, be purely moral (if that even exists), solve all world problems, prevent all conflict and more…
But we can try to do SOMETHING good. We can make an impact as effectively as we can, with the resources we have, within a scope.
Don't spend your life worrying at every second which charities to donate to. Instead, look to others, ask questions, section out which decisions are 'evidence-based' and which are 'just for fun', and finally, accept that we can't do good all the time or for everyone, but just a bit of good is already wonderful.
- So why should I 'stop looking at organisations for cause prioritisation'?
Okay, so the title is a bit attention-grabbing: do still use the very reputable, transparent and dynamic resources you prefer. But don't use them as gospel.
Aside from the sheer scientific, social and practical ramifications of accepting a worldview from one external source, on a purely prioritisation level you can only do the 'hard' prioritisation yourself.
How much do YOU care about x?
How happy would YOU be with your life if you worked on x?
And so on
There are some amazing resources out there to help you look through these questions in a way that challenges your preconceptions, and many mentors and advisors and helpful media podcasts, articles, papers, books and more on this.
But the truth of the matter is: the hard stuff is up to you, and it's not impossible. Don't try to solve every problem in the world. Pick something that matches your metrics for impact, that you'd enjoy, and that you'd be well suited for. Try it, reflect, be open to changing your mind, be flexible, adapt to new evidence, and be happy if you nudge your niche a little closer to the finish line.
So start with a little good and go from there :)