Pineapples on Mars, why you should stop looking at orgs for cause prio, and how we compare: an INT dissection for beginners 

 

Clarification: by not changing biases, I meant 'don't try to change intuitive societal biases, such as valuing cuteness, for effectiveness', but rather work with them to our advantage. E.g. if I were raising money for animals, maybe don't pick a cockroach. You could, to get the shock factor and some uniqueness or to bust stereotypes, but you are fighting an uphill battle when there are bigger wars to be fought.

  1. BACKGROUND: Impact Research Groups fellowship

 

This is a short blogpost written on the day, about my experience during day 1 of the IRG fellowship. I mulled over whether to post it and haven't changed anything since (aside from one misspelled org name pointed out to me), but decided to publish as I've been doing lots of cause prioritisation work myself.

Impact Research Groups is a London-based high-impact research programme run in collaboration with Arcadia Impact and London university EA groups.

Alicia Pollard, head of groups, started off our morning with some amazing introductory talks, which led to great discussions with the other wonderfully smart, curious and epistemically-driven participants.

 

My cohort comprised 35 individuals, with 3 in my biosecurity stream (other streams included AI governance, technical AI alignment, animal welfare, etc.). I applied hoping to gain research experience in infectious disease, pandemic prevention, public health and more.

The aim is a semi-structured 'supervised research' project culminating in a written submission and presentation over about 8 weeks. The kick-off weekend was based in the LEAH coworking space in London, followed by mentor and team meetings on your chosen project.

The hope is for aspiring participants to gain confidence, networking and skills in high impact cause areas.

 

  2. Why we prioritise
    1. I want to help people (measured by metric x, y, z)
    2. Certain interventions are more effective than others
    3. We can't solve all the problems in the world (limited resources and burnout)
    4. I take these factors into account when ranking causes (or consider that certain causes, although deserving of empathy, should not get more of our limited means)
    5. So I should value this cause as more effective

Such a fantastic program of course lends itself to discussions about 'what even is impact?' and 'why work in this cause area?'. That is exactly what we started discussing in our first sessions.

 

Prioritisation is not exclusive to EA, but cause prioritisation (along with building EA and global priorities research) is commonly flagged by major organisations such as 80,000 Hours, the Future of Humanity Institute and GiveWell as a good first step before any large career move or project.

 

Whether you dedicate your time, finances, donations, career or research hours, we all have something we can do to have a positive impact (no matter how we define it!).

The issue is that each individual's circumstances, resources, opportunities, biases, values and more all vary.

 

For this reason, we can create definitive lists based on certain metrics, such as the cost-effectiveness of interventions, which drug saves a certain number of lives, or how many years of pain are averted, but we can't keep up with the changing opinions, data and discourse in every topic.

 

Take the problem profiles on 80,000 Hours' website, listing fields such as AI alignment, biosecurity and pandemics, global health, conflict, nuclear weapons, and more. All of these are assessed using the same rigorous, evidence-backed framework: INT.

 

IMPORTANT

NEGLECTED

TRACTABLE

 

Essentially, what is the scale (how many individuals are affected), how many resources are allocated to it (and how are they distributed), and how solvable is it?

 

The reason these lists use the framework as a starting point is that it gives reasonably objective relative values.

For example, take some variations of cause areas that don't make the cut of 'most pressing problems': you can easily see how the INT framework captures the tripod of factors needed to qualify.

 

A problem that is important and neglected may affect many people, or severely impair someone's life, and may have no one (or very few people) tackling it, and yet not be worth working on (relative to other causes) because it just can't be solved.

Take the problem of 'people lightly bumping into others'. This is very common (it affects many people, so it is IMPORTANT) and NEGLECTED (no one is really financing research or solutions to prevent it), and yet it probably can't be completely solved with all the effort in the world.

 

Another case is something NEGLECTED and TRACTABLE but not IMPORTANT. Take the fact that there are no pineapples on Mars.

The issue has no science teams frantically working away at it (NEGLECTED), and it could be solved (TRACTABLE, with a giant space mission to deliver the fruity extraterrestrial gift), and yet it probably isn't an IMPORTANT issue to work on.

 

Finally, take something TRACTABLE (solvable) and IMPORTANT (affects many), but not NEGLECTED, such as cancer research. Now, I am NOT saying don't work on (or prioritise) this, but taking a purely 'effective cause' stance: will you, the 10,001st person to work on it, make that much difference?

Will you make more difference than any other researcher?
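To tie these toy examples together, here is a minimal sketch in Python of how the three factors might be combined, with entirely made-up scores. This is not how 80,000 Hours actually scores problems; it only illustrates that a very weak leg on any part of the tripod sinks a cause.

```python
# Toy INT comparison. All causes and scores are invented purely to
# illustrate the "tripod" idea; this is NOT 80,000 Hours' methodology
# (their framework sums carefully researched scores on log scales,
# which amounts to multiplying underlying quantities, as done here).

causes = {
    # cause: (importance, neglectedness, tractability), each 0-10
    "people lightly bumping into others": (6, 9, 0),
    "pineapples on Mars":                 (0, 9, 8),
    "cancer research":                    (9, 1, 6),
    "a 'most pressing problem'":          (8, 7, 6),
}

def int_score(importance, neglectedness, tractability):
    # Multiplying means a very weak score on any one factor
    # sinks the whole case, however strong the other two are.
    return importance * neglectedness * tractability

for name, scores in sorted(causes.items(),
                           key=lambda kv: int_score(*kv[1]), reverse=True):
    print(f"{name:38s} score = {int_score(*scores)}")
```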

 

Given diminishing returns, looking for low-hanging fruit in neglected fields would allow you to have a deeper impact on another cause with the same (or fewer) resources.

 

If you had a groundbreaking new way to look at a common problem, that's great, but most likely the time, dedication and resources needed to make a 1% difference to cancer research are just so great compared to, say, financing malaria treatments. Such health interventions are, on average, estimated to be around 180 times more effective, and can save more lives for less money.
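As a back-of-the-envelope sketch of that relative-effectiveness point (all figures below are placeholders chosen only to illustrate a roughly 180x gap, not real estimates for any charity or intervention):

```python
# Toy comparison of how far the same donation goes under two
# hypothetical cost-per-outcome figures. The numbers are invented
# purely to illustrate a ~180x difference in effectiveness.

donation = 10_000  # amount available to give

cost_per_outcome = {
    "crowded, well-funded intervention": 9_000,  # cost per comparable outcome
    "neglected health intervention":        50,  # ~180x cheaper per outcome
}

for name, cost in cost_per_outcome.items():
    print(f"{name}: roughly {donation / cost:.0f} outcomes per {donation:,} donated")
```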

 

These are just examples of how we try to tackle the initial principles listed above.

 

However, this is where the issues start. We may logically realise that a certain cause or intervention is more effective or more heavily weighted, and yet emotionally valuing it, or intuitively feeling an affinity for it, is a different matter (fuzzies vs utilons).

There is a reason all the adverts for charities use cute puppies or crying kids rather than cockroaches or shrimp, despite the latter's magnitude of suffering and the cost-effectiveness of interventions for them scoring better on these metrics.

 

And that's where the issue comes in. Hard vs easy cause prioritisation.

 

  3. We disagree and why that's good

You see, saying 'humanity has this probability of dying from misaligned AI systems running rampant within this timeframe' is great! (ish).

But backing up the timeframes, the extent, and more means disagreement. Experts can barely agree on what font to use in academic reports, much less on the timeframe for 'when we should expect existential risks from new digital entities capable of suffering and preferences'!

 

Instead, this is a case of HARD prioritisation. It's not that the overall cause area ranking was 'easy' in terms of effort, but that looking at objective(ish) measures in the INT framework is pretty easy to copy and paste.

But take digging into 'how much should I actually devote to cause x'. That depends.

 

If I told you the INT framework would place shrimp welfare at the top, due to the magnitude (trillions suffer and die), its neglectedness, and its tractability, would you suddenly abandon your work and passions to work on freeing shrimp?

Probably not (unless you already leaned towards that).

 

You have your own personal comparative advantages, but aside from sheer utility impact, you also have preferences, lifestyle constraints, and your own wants and dreams and plans. You don't want to burn out, or to abandon your previous work to suddenly retrain in an entirely new cause area. No one should always maximise for peak objective effectiveness.

 

Instead, personal factors affect how much you personally value the causes. Does it matter what species the suffering entity is (e.g. a digital mind, a dog, an insect), or the number of individuals suffering, or the time they are in relative to you, or other factors such as their own moral weight?

And philosophical 'should' weightings differ from how you would act in real decisions. From the trolley problem, to drowning children, to oiled birds, we have many thought experiments trying to grapple with 'what should we want? What should we want to want? What does it even mean to want? What is good?' and so on.

 

Instead of definitive answers, we only get questions. And more questions. And more.

 

The biggest question is really: what does it mean to 'do more good'? What do we even do to do good?

 

Some themes tend to stand out, such as:

  • Humans are very bad at comparing big numbers (scope insensitivity)
  • Out of sight, out of mind: hypothetical suffering doesn't tend to invoke as much drive to stop it as immediate, nearby or present harm
  • All suffering deserves empathy, but we can't give everything the same resource allocation
  • Some things are better than others for our personal fit and impact
  • Some cause areas affect more people, more deeply, and can be influenced more readily
  • Some interventions are many times more effective than others

And then you bring in issues such as: do we value future suffering as much as present suffering (and in what proportion)? Should age or location affect the resources we put towards a problem? Should instrumental factors such as 'cuteness' affect decisions on an 'effective-in-our-society' level?

 

We don't need to 'correct' these biases, but we should acknowledge and scope them. Because of innate human differences, our philosophical priorities diverge from our personal priorities, and that changes what we should each work on, care about or spend resources on.

The issue is that we may want to work on everything, care about everyone, help everything, and give everyone resources.

But we can't.

 

So you can see, it gets harder and harder…

 

But that's a good thing!

We don't NEED to solve every problem in the world to have an impact.

 

Instead, maybe look at these cause prioritisations as a helpful tool to narrow down your own niche. Add in your own preferences and experience, and then figure out YOUR PRIORITY.
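One (entirely invented) way to picture 'adding your own preferences on top' is to blend an impersonal INT-style score with personal fit and enjoyment. The causes, scores and weighting below are illustrative assumptions only; the point is that the final ranking is yours to choose.

```python
# Sketch of layering personal fit on top of a cause's "objective" score.
# Both the causes and the numbers are invented; the point is only that
# the final ranking is yours, not the framework's.

candidate_causes = {
    # cause: (int_style_score, personal_fit, enjoyment), each 0-10
    "biosecurity research": (8, 7, 8),
    "shrimp welfare":       (9, 3, 4),
    "AI governance":        (9, 5, 6),
}

def my_priority(int_score, fit, enjoyment, fit_weight=0.5):
    # fit_weight sets how much personal factors count relative to the
    # impersonal INT-style score (a value judgement, not a fact).
    personal = (fit + enjoyment) / 2
    return (1 - fit_weight) * int_score + fit_weight * personal

for cause, scores in sorted(candidate_causes.items(),
                            key=lambda kv: my_priority(*kv[1]), reverse=True):
    print(f"{cause:22s} my priority = {my_priority(*scores):.1f}")
```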

 

We may not fully eradicate disease, save all lives, stop all suffering, be purely moral (if that even exists), solve all world problems, prevent all conflict and more…

 

But we can try to do SOMETHING good. We can make an impact as effectively as we can, with the resources we have, within a scope.

 

Don't spend your life constantly worrying about which charities to donate to. Instead, look to others, ask questions, section out which decisions are 'evidence-based' and which are 'just for fun', and finally, accept that we can't do good all the time or for everyone, but that just a bit of good is already wonderful.

 

  4. So why should I 'stop looking at organisations for cause prioritisation'?

Okay, so the title is a bit of attention-grabbing wording: do still use the very reputable, transparent and dynamic resources you prefer, but don't treat them as gospel.

Aside from the scientific, social and practical ramifications of accepting a worldview from a single external source, on a pure prioritisation level you can only do the 'hard' prioritisation yourself.

 

How much do YOU care about x?

How happy would YOU be with your life if you worked on x?

And so on

 

There are some amazing resources out there to help you work through these questions in a way that challenges your preconceptions, and many mentors, advisors and helpful media (podcasts, articles, papers, books and more) on this.

 

But the truth of the matter is: the hard stuff is up to you, and it's not impossible. Don't try to solve every problem in the world. Pick something that matches your metrics for impact, that you'd enjoy, and that you'd be well suited for. Try it, reflect, be open to changing your mind, be flexible, adapt to new evidence, and be happy if you nudge your niche a little closer to the finish line.

 

So start with a little good and go from there :)
