Bio


I'll be at EAG London in June, come say hi :)

I currently work with the CE/AIM-incubated charity ARMoR on research distillation, quantitative modelling, consulting, and general org-boosting, supporting policy advocacy for market-shaping tools that incentivise innovation and ensure access to antibiotics to help combat AMR.

I previously did AIM's Research Training Program, was supported by an FTX Future Fund regrant and later Open Philanthropy's affected grantees program, and before that I spent 6 years doing data analytics, business intelligence and knowledge + project management in various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA and changing my mind about becoming a physicist. I've also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting my home country Malaysia's giving landscape towards effectiveness, albeit with mixed results.

I first learned about effective altruism circa 2014 via A Modest Proposal, Scott Alexander's polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):

I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].

How others can help me

I'm looking for "decision guidance"-type roles e.g. applied prioritization research.

How I can help others

Do reach out if you think any of the above piques your interest :)

Comments

Topic contributions

(The recording coefficient section of their report discusses this extensively if I've interpreted your question correctly.)

historical and current EA wins

The best one-stop summary I know of is still Scott Alexander's In Continued Defense Of Effective Altruism from late 2023. I'm curious to see if anyone has an updated take, if not I'll keep steering folks there:

Here’s a short, very incomplete list of things effective altruism has accomplished in its ~10 years of existence. I’m counting it as an EA accomplishment if EA either provided the funding or did the work, further explanations in the footnotes. I’m also slightly conflating EA, rationalism, and AI doomerism rather than doing the hard work of teasing them apart:

Global Health And Development

  • Saved about 200,000 lives total, mostly from malaria1
  • Treated 25 million cases of chronic parasite infection.2
  • Given 5 million people access to clean drinking water.3
  • Supported clinical trials for both the RTS,S malaria vaccine (currently approved!) and the R21/Matrix malaria vaccine (on track for approval)4
  • Supported additional research into vaccines for syphilis, malaria, helminths, and hepatitis C and E.5
  • Supported teams giving development economics advice in Ethiopia, India, Rwanda, and around the world.6

Animal Welfare:

  • Convinced farms to switch 400 million chickens from caged to cage-free.7
  • Freed 500,000 pigs from tiny crates where they weren’t able to move around8
  • Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit to selling low-cruelty meat.

AI:

  • Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT.9
  • …and other major AI safety advances, including RLAIF and the foundations of AI interpretability10.
  • Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.11
  • Helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences.
  • Gotten major AI companies including OpenAI to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.
  • Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?12
  • Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.13
  • Become so influential in AI-related legislation that Politico accuses effective altruists of having “[taken] over Washington” and “largely dominating the UK’s efforts to regulate advanced AI”.
  • Helped (probably, I have no secret knowledge) the Biden administration pass what they called "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”
  • Helped the British government create its Frontier AI Taskforce.
  • Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.

Other:

I think other people are probably thinking of this as par for the course - all of these seem like the sort of thing a big movement should be able to do. But I remember when EA was three philosophers and a few weird Bay Area nerds with a blog. It clawed its way up into the kind of movement that could do these sorts of things by having all the virtues it claims to have: dedication, rationality, and (I think) genuine desire to make the world a better place.

One detail that caught my eye from your post was this chart:

I was a bit surprised to see the global average WTP of $67k per QALY, or ~5x world GDP per capita, while for these individual countries the multiples seem closer to 0.5-2x.
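As a rough sanity check on that ~5x figure (assuming a world GDP per capita of about $12.7k, an illustrative round number rather than a value taken from the report):

```python
# Illustrative sanity check of the implied WTP-per-QALY multiple.
# world_gdp_per_capita is an assumed round figure, not from the report.
world_gdp_per_capita = 12_700   # USD, assumed
global_avg_wtp_per_qaly = 67_000  # USD per QALY, from the chart

multiple = global_avg_wtp_per_qaly / world_gdp_per_capita
print(f"Implied multiple: ~{multiple:.1f}x world GDP per capita")
# → Implied multiple: ~5.3x world GDP per capita
```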

Eyeballing the chart below from OP's 2021 technical update makes me wonder if that discrepancy is driven by the higher WTP multipliers in LMICs:

[Figure: WTP multipliers by country income level, from OP's 2021 technical update]

But contra my own guess, the authors say

As we’ll discuss in Appendix A, there are empirical and theoretical reasons to think that the exchange rate at which people trade off mortality risks against income gains differs systematically across income levels, with richer people valuing mortality more relative to income.

so I remain confused.

Might Nuno's previous work Shallow evaluations of longtermist organizations be useful or relevant? I'm guessing you've probably seen it, just flagging in case you haven't (e.g. it wasn't in fn2).

What do you think of OWID's dissolution of the Easterlin paradox? In short:

  • OWID say Easterlin and other researchers relied on data from the US and Japan, but...
  • In Japan, life satisfaction questions in the ‘Life in Nation surveys’ changed over time; within comparable survey periods, the correlation is positive (graphic below visualises this for ~50 years of data from 1958-2007, cf. your multidecade remark)
  • In the US, growth has not benefitted the majority of people; income inequality has been rising in the last four decades, and the income and standard of living of the typical US citizen have not grown much in the last couple of decades

so there's no paradox to explain.

[Chart: GDP per capita vs. life satisfaction across survey questions]

(I vaguely recall having asked you this before and you answering but may be confabulating; if that's happened and you feel annoyed I'm asking again, feel free to ignore)

self-reported happiness over a short period (like 1 day)

Not exactly what you meant, but you may be interested in Jeff Kaufman's notes on his year-long happiness logging self-experiment. My main takeaway was to be mildly more bearish on happiness logging than when I first came across the idea, based on his conclusion:

Overall my experience with logging has made me put less trust in "how happy are you right now" surveys of happiness. Aside from the practical issues like logging unexpected night wake-time, I mostly don't feel like the numbers I'm recording are very meaningful. I would rather spend more time in situations I label higher than lower on average, so there is some signal there, but I don't actually have the introspection to accurately report to myself how I'm feeling.

Scattered quotes that made me go "huh":

When I first started rating my happiness on a 1-10 scale I didn't feel like I was very good at it. At the time I thought I might get better with practice, but I think I'm actually getting worse at it. Instead of really thinking "how do I feel right now?" it's really hard not to just think "in past situations like this I've put down '6' so I should put down '6' now".

Being honest to myself like this can also make me less happy. Normally if I'm negative about something I try not to dwell on it. I don't think about it, and soon I'm thinking about other things and not so negative. Logging that I'm unhappy makes me own up to being unhappy, which I think doesn't help. Though it's hard to know because any other sort of measurement would seem to have the same problem.

I checked out your website thinking I'd find something like this, but couldn't. Did you have something different in mind re: the league table?

Tangent: when I was reengaging seriously with EA before eventually changing my career path, your story was among the ones in Strangers Drowning that powerfully resonated with me. So it was interesting for me to learn recently that you felt strangely about MacFarquhar’s coverage of you and Jeff in SD.

Nadia later worked with Jhourney to write the most extensive report yet (tweetstorm summary) on how jhanas improve the well-being of meditators, including claims like "2x more likely to report changes in lifestyle (84% vs. 41%), 1.5x more likely to report changes in thoughts + beliefs (92% vs. 59%), more kindness, awareness of pleasure, and reduced cravings". I find Nadia's claims somewhat more believable because, to quote her:

I am not a meditator. (Even after experiencing the jhanas, I still have no desire to develop a meditation practice.) Nor am I a “spiritual seeker” of the sort you might find at Burning Man or a Vipassana retreat. ... 

If you’re raising an eyebrow right now, I must once again stress that I, too, did not believe this was a thing. I arrived at the retreat feeling rather silly for being there. I left astonished, and perplexed, as to why barely anyone has studied the jhanas at all. ...

I am less interested in making the argument that everyone should try the jhanas. But it seems to me that if people can access these experiences with relatively little mental effort – and to do so legally, for free – more ought to know that such a thing exists. At the very least, shouldn’t there be more than three published studies about it?

(I'd caution against truly maximising.) 

Ben Todd's 80K article What is social impact? A definition is a pretty decent start:

If you just want a quick answer, here’s the simple version of our definition (a more philosophically precise one — and an argument for it — follows below):

Your social impact is given by the number of people1 whose lives you improve and how much you improve them, over the long term.

This shows that you can increase your impact in two ways: by helping more people over time, or by helping the same number of people to a greater extent (pictured below).

[Illustration: two ways to have impact]

And their more rigorous definition:

“Social impact” or “making a difference” is (tentatively) about promoting total expected wellbeing — considered impartially, over the long term.

We don’t think social impact is all that matters. Rather, we think people should aim to have a greater social impact within the constraints of not sacrificing other important values – in particular, while building good character, respecting rights and attending to other important personal values. We don’t endorse doing something that seems very wrong from a commonsense perspective in order to have a greater social impact.

In fact, we even think that paying attention to these other values is probably the best way to in fact have the most social impact anyway, even if that’s all you want to aim for.

The rest of the article elaborates on what they mean by all the terms in their rigorous definition.

80K also note that this doesn't just reduce to utilitarianism:

Is this just utilitarianism?

No. Utilitarianism claims that you’re morally obligated to take the action that does the most to increase wellbeing, as understood according to the hedonic view.

Our definition shares an emphasis on wellbeing and impartiality, but we depart from utilitarianism in that:

  • We don’t make strong claims about what’s morally obligated. Mainly, we believe that helping more people is better than helping fewer. If we were to make a claim about what we ought to do, it would be that we should help others when we can benefit them a lot with little cost to ourselves. This is much weaker than utilitarianism, which says you ought to sacrifice an arbitrary amount so long as the benefits to others are greater.
  • Our view is compatible with also putting weight on other notions of wellbeing, other moral values (e.g. autonomy), and other moral principles. In particular, we don’t endorse harming others for the greater good.
  • We’re very uncertain about the correct moral theory and try to put weight on multiple views.

Read more about how effective altruism is different from utilitarianism.
