Mo Putera

Working (6-15 years of experience)
88 karma · Joined Jun 2022

Bio

FTX Future Fund exploratory grantee based in Kuala Lumpur, spending a year trying to find an EA-related job. Previously I spent 6 years doing data analytics, business intelligence and knowledge + project management in various industries (airlines, ecommerce) and departments (commercial, marketing), after majoring in physics at UCLA. 

I've been at the periphery of EA for a long time; my introduction to it in 2014 was via the dead children as unit of currency essay, I started donating shortly thereafter, and I've been "soft-selling" basic EA ideas for years. But I only started actively participating in the community in 2021, when I joined EA Malaysia. Given my career background, it perhaps makes sense that my interests center on improving the assessment of altruistic impact and improving decision-making based on it: estimation of value, cost-effectiveness analysis, local priorities research, etc.

Comments (33) · Topic Contributions (1)

Donation opportunities, yes. I'm not sure if donation opportunities in particular are something development economists look for; I'm not familiar with the literature. 

I broadly agree with the substance of your comment, but I admittedly find the tone off-puttingly abrasive ("delusional and arrogant" doesn't seem charitable), so I'll respectfully bow out of this exchange.

Halstead and Hillebrandt didn't claim that a ~4 person-year research effort could discover the key to economic growth. Their claim is simply about finding good donation opportunities:

A ~4 person-year research effort will find donation opportunities working on economic growth in LMICs which are substantially better than GiveWell’s top charities from a current generation human welfare-focused point of view. 

My sense is such an effort might start from Hillebrandt's appendices, in particular appendix 4. The output of such an effort might look like one of Founders Pledge's reports (example; Halstead is a coauthor). 

In practice, "the charity that has the highest EV on the current margin" is more complicated than you may realize; see e.g. Section 1.3 of froolow's post on incorporating uncertainty analysis in cost-effectiveness modeling, showing how any of GiveWell's (older list of) top charities could be highest EV given reasonable assumptions:

and when:

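To make the point concrete, here's a minimal sketch of that kind of probabilistic sensitivity analysis, in Python with entirely made-up numbers and distributions (not froolow's or GiveWell's): once each charity's cost-effectiveness is treated as uncertain, each one comes out on top in some share of draws.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical cost-effectiveness distributions (value per dollar; numbers are
# illustrative only), with wider uncertainty for B and C than for A.
charities = {
    "Charity A": rng.lognormal(mean=np.log(10), sigma=0.3, size=n),
    "Charity B": rng.lognormal(mean=np.log(9), sigma=0.6, size=n),
    "Charity C": rng.lognormal(mean=np.log(8), sigma=0.9, size=n),
}

samples = np.column_stack(list(charities.values()))
winners = samples.argmax(axis=1)
for i, name in enumerate(charities):
    print(f"{name}: mean CE = {samples[:, i].mean():.1f} per $, "
          f"P(highest in a draw) = {(winners == i).mean():.0%}")
```

With uncertainty this wide, a point-estimate EV ranking alone isn't enough to single out one charity.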
I think it's also worth noting that GiveWell's CEAs don't actually calculate cost-effectiveness on the margin: they estimate not the marginal impact of the donor's dollar but the impact of all dollars generated by the donated dollar via leverage/funging (see the 2nd bullet point in the Errors section). I was confused by this when I first tried to replicate (one column of) GW's AMF CEA.
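As a toy illustration of the general idea only (my own simplified framing with invented numbers, not GiveWell's actual adjustment): the headline figure ends up reflecting all the dollars the donation moves, not just the donor's own.

```python
# Toy illustration only; the numbers and the adjustment are invented for
# exposition and are not GiveWell's.
donor_dollars = 1_000_000
leveraged_dollars = 250_000   # extra spending by others that the donation induces
funged_dollars = 150_000      # spending by others that the donation displaces
cost_per_outcome = 5_000      # program-level cost per outcome (e.g. death averted)

# Total spending moved by the donation, counting leverage and funging.
total_dollars_moved = donor_dollars + leveraged_dollars - funged_dollars
outcomes = total_dollars_moved / cost_per_outcome

# Attributing those outcomes to the donor's dollars gives a headline
# cost-effectiveness that differs from a naive "donor dollars only" estimate.
print(f"Outcomes attributed: {outcomes:.0f}")
print(f"Headline cost per outcome: ${donor_dollars / outcomes:,.0f}")
print(f"Naive cost per outcome:    ${cost_per_outcome:,}")
```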

Another reason is just hedging one's bets. It's probably unwise to put all one's eggs in one basket, not just for theoretical reasons like moral uncertainty but for operational ones as well, like whether the organization can deliver on the forecasted impact next year (this is front-of-mind for me, as someone directly affected by the recent fiasco).

I'm also personally torn between EV-maxing and risk aversion. The former suggests donating to longtermist charities and the latter to GiveDirectly; I care that I have some impact as much as(?) I care about the opportunity cost of missing out on more impact. This is a little like how I think about personal investing, although there my risk aversion is greater.
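Here's a minimal numerical sketch of that tension (all numbers invented, and log utility chosen purely for illustration): a long-shot option can dominate on expected value while a near-certain modest option dominates once you apply a concave, risk-averse valuation.

```python
import numpy as np

# All numbers are invented for illustration.
safe_impact = 100.0                      # near-certain modest impact
longshot_p, longshot_impact = 1e-4, 1e7  # tiny chance of enormous impact

# Risk-neutral expected value: the long-shot wins.
print("EV:", safe_impact, "vs", longshot_p * longshot_impact)

# A risk-averse evaluation with a concave utility (log1p, purely illustrative):
# the safe option wins.
u = np.log1p
eu_safe = u(safe_impact)
eu_longshot = longshot_p * u(longshot_impact) + (1 - longshot_p) * u(0.0)
print(f"Expected utility: {eu_safe:.2f} vs {eu_longshot:.4f}")
```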

Frankly this may just be me failing to find the right numbers or something, but I'd be curious to know if you yourself have identified any single charity you consider highest-EV on the margin (not historical EV), and what that EV number is (and preferably a link to how it's calculated).

I'd be curious to know as well, speaking as an FTX regranting program grantee.

HLI's research overview page mentions that they're planning to look into the following interventions and policies via the WELLBY lens; there is some overlap with what you mentioned:

Our search for outstanding funding opportunities continues at three levels of scale. These are set out below with examples of the interventions and policies we plan to investigate next.

Micro-interventions (helping one person at a time)

Meso-interventions (systemic change through specific policies) 

Macro-interventions (systemic change through the adoption of a wellbeing approach) 

  • Advocacy for, and funding of, subjective wellbeing research
  • Developing policy blueprints for governments to increase wellbeing

I liked the rigor in your post and learned a lot from it, thank you for writing it.

I interpret 80,000 Hours as explaining this on their "What is social impact?" page (emphasis mine):

What does it mean to act ethically? Moral philosophers have debated this question for millennia, and have arrived at three main kinds of answers:

  1. Making the world better — e.g. helping others.
  2. Acting rightly — e.g. respecting the rights of others and not doing wrong.
  3. Being virtuous — e.g. being honest, kind, and wise.

These correspond to consequentialism, deontology, and virtue ethics, respectively.

We think all three perspectives have something to offer, but when our readers talk about wanting to “make a difference,” they’re most interested in the first of these perspectives — changing the world for the better.

We agree this focus makes sense — we don’t just want to avoid doing wrong, or live honest lives, but actually leave the world better than we found it. And there is a lot we can all do to get better at that. ...

In our essay on your most important decision, we argued that some career paths open to you will do hundreds of times more to make the world a better place than others. So it seems really important to figure out what those paths are.

In contrast, it’s often a lot easier to know whether a path violates someone’s rights or involves virtuous behaviour (most career paths seem pretty OK on those fronts), so there’s less to gain from focusing there.

In fact, even people who emphasise moral rules and virtue agree that if you can make others better off, that’s a good thing to do, and that it’s even better to make more people better off than fewer. (And in general we think deontologists and utilitarians agree a lot more than people think.) ...

Since there seem to be big opportunities to make people better off, and some seem to be better than others, we should focus on finding those.

So, while we think it’s really important to avoid harming others and to strive to act virtuously, when it comes to real decisions, we think the potential positive consequences are what we should focus on the most.

You've mentioned upthread that you're uncertain what exactly EAs should do to be "more like peak Quakerism", but can you take a stab at some concrete suggestions that illustrate what you mean? I'm just wondering what would change. Paraphrasing examples from other comments:

  • emphasizing silence in EA meetups to improve the quality of debate and ideas?
  • whatever other aspects of Quaker-style meetings that make them distinctive? (like what?)
  • abstaining from alcohol?
  • being more co-dependent on others?
  • more rituals? 
  • more children? (probably not?)
  • various forms of meditation like loving-kindness and insight, reflection, etc?
  • emphasizing other personal behaviors the community should strive to include and praise because they push the group towards long-term success?

To support your point, Holden signal-boosted this in his aptitudes over paths post:

Basic profile: advancing into some high-leverage role in government (or some other institution such as the World Bank), from which you can help the larger institution make decisions that are good for the long-run future of the world.

Essentially any career that ends up in an influential position in some government (including executive, judicial, and legislative positions) could qualify here (though of course some are more likely to be relevant than others).

Examples:

Richard Danzig (former Secretary of the Navy, author of Technology Roulette); multiple people who are pursuing degrees in security studies at Georgetown and aiming for (or already heading into) government roles.

...

On track?

As a first pass, the answer to "How on track are you?" seems reasonably approximated by "How quickly and impressively is your career advancing, by the standards of the institution?" People with more experience (and advancement) at the institution will often be able to help you get a clear idea of how this is going (and I generally think it’s important to have good enough relationships with some such people to get honest input from them - this is an additional indicator for whether you’re “on track”).

Do you have a take on whether your recommendations here would change GW's funding allocation, and in which direction (up or down), as per the contest details? If I understand you correctly, the efficiencies-of-scale section implies slightly increasing funding, while switching from IHME to WHO data implies slightly reducing it; I'm unclear what the net direction of the change is.
