
Mo Putera

404 karma · Joined Jun 2022 · Working (6-15 years)

Bio

I'll be attending EAGxPhilippines and the Community Builders in Asia Retreat this October; come say hi :)

Re-grantee of Open Philanthropy (formerly of the FTX Future Fund), based in Kuala Lumpur, spending a year on career exploration, and recently accepted into Charity Entrepreneurship's new Research Training Program. Before that I spent 6 years doing data analytics, business intelligence, and knowledge + project management in various industries (airlines, ecommerce) and departments (commercial, marketing), after majoring in physics at UCLA. I've also spent time at Trajan House in Oxford and at the Mexico EA Fellowship in Mexico City.

I've been at the periphery of EA for a long time: my introduction to it in 2014 was via the "dead children as a unit of currency" essay, I started donating shortly thereafter, and I've been "soft-selling" basic EA ideas for years. But I only started actively participating in the community in 2021, when I joined EA Malaysia. Given my career background, it perhaps makes sense that my interests center on improving decision-making via better value quantification, distillation and communication, and collaboration, including cost-effectiveness analysis, local priorities research, and the like.

How others can help me

I'd love to get help with my career exploration:

  • informational interviews, if you work in a career path I'm considering (I'd probably reach out to you)
  • suggestions for career path-specific low-cost tests, e.g. "attend workshop X", "talk to Y", or "build on Z"
  • challenges to my career theory of change, my personal fit, etc.
  • and if you're doing career exploration as well, I'd love to pick your brain

How I can help others

Do reach out if you're interested in talking about, or collaborating on, any of the following:

  • local priorities research (we have a Slack!)
  • charity impact evaluation, for donors keen on alternative options to the standard ones 
  • relatedly, identifying new high-impact giving opportunities
  • thinking through career exploration ToCs, paths/options, activities, resources
  • cost-effectiveness analysis and prioritization research more generally, especially in specific decision-making contexts
  • specific resources, especially involving numbers (impact estimates, conversion units, etc)
  • my past work experience: data analytics, business intelligence, knowledge + project management

Not strictly help, but if you just want to talk about random topics, I'm up for that as well :) I've written short- to mid-length answers and posts on a wide variety of topics on Quora, and longform explainers on Substack, although strictly speaking I am an expert in none of them; caveat lector!

Comments

Ajeya is already doing that with Kelsey Piper over at their blog Planned Obsolescence :) 

re: CATF, you can look at FP's cost-effectiveness analyses of CATF's work (past, future), along with their non-cost-effectiveness-based reasoning (see "Why do we trust this organisation?") and their general methodology for evaluating relative impact in high-uncertainty contexts like climate (where they argue that "bottom-up cost-effectiveness analyses as well as bottom-up plausibility checks... are fundamentally insufficient for claims of high impact"), and judge for yourself. Once I drilled down to that level, I personally found that the notion of "CATF offsets" doesn't make much sense; if I donate to them it won't be for ethical offsetting reasons.

re: the vast majority of offsetting-oriented climate charities, I'm skeptical myself. 

I just learned about Tom Frieden via Vadim Albinsky's writeup Resolve to Save Lives Trans Fat Program for Founders Pledge. His impact in sheer lives saved is astounding, and I'm embarrassed I didn't know about him before: 

The CEO of RTSL, Tom Frieden, likely prevented tens of millions of deaths by creating an international tobacco control initiative in a prior role that may have been much more cost effective than most of our top recommended charities. ...

We believe that by leveraging his influence with governments, and the relatively low cost of advocating for regulations to improve health, Tom Frieden has the potential to again save a vast number of lives at a low cost. 

How many more? Albinsky estimates:

RTSL is aiming to save 94 million lives over 25 years by advocating for countries to implement policies to reduce non-communicable diseases. We believe the industrially-produced trans fat elimination program is the most cost-effective of their initiatives. ... Even after very conservative discounts to RTLS’s impact projections we estimate this program to be more cost effective than most of our top global health and development recommendations.

Tangentially, if a "Borlaug" is a billion lives saved, then Frieden's impact is probably on the scale of ~100 milliBorlaugs (to the nearest OOM); Bill and Melinda Gates have likely had a similar impact. This makes me wonder who else I don't know about who's done ~100 milliBorlaugs of good.

(It's arguably unfair to attribute all those lives saved solely to Frieden, and I'm honestly unsure what credit attribution makes the most sense, but by the same logic you could no longer really say Borlaug saved a billion lives.)
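For concreteness, here's the tiny unit conversion behind that milliBorlaug figure as a Python snippet; the ~50 million midpoint for "tens of millions of deaths prevented" is my own illustrative assumption, not a number from the Founders Pledge writeup.

```python
# Converting Frieden's estimated impact into "milliBorlaugs".
# The ~50 million figure is an illustrative midpoint for "tens of millions
# of deaths prevented", not a figure from the writeup.
BORLAUG = 1e9               # 1 Borlaug = a billion lives saved
frieden_lives_saved = 50e6  # "tens of millions", taken as ~50 million

milli_borlaugs = frieden_lives_saved / BORLAUG * 1000
print(milli_borlaugs)       # 50.0, i.e. ~100 milliBorlaugs to the nearest OOM
```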

However I still find myself reluctant to put AI as my priority despite knowing these things.

One way out is simply to not make AI your own personal priority (as opposed to, say, "the wider EA community's priority", which is a separate question altogether). 80,000 Hours' problem profiles page, for instance, explicitly says that their list of the most pressing world problems, where AI risk features at the top, is

ranked roughly by our guess at the expected impact of an additional person working on them, assuming your ability to contribute to solving each is similar

which is already an untrue assumption, as they clarify in their problem framework:

While personal fit is not assessed in our problem profiles, it is relevant to your personal decisions. If you enter an area that you find totally demotivating, then you’ll have almost no impact. 

Given the evident reluctance in your post, I'm not sure that you yourself should make AI safety work your top priority (although you can still, e.g., donate to the Long-Term Future Fund, one of GWWC's top recommendations in this area, read Holden's writing and discuss it with others, and so on, none of which requires such drastic re-prioritization).

Also, since other commenters / answerers will likely supply materials in support of prioritizing AI safety, for the sake of good epistemics I think it's worth signal-boosting a good critique of it, so consider checking out Nuno Sempere's My highly personal skepticism braindump on existential risk from artificial intelligence.

I'm curious what people who're more familiar with infinite ethics think of Manheim & Sandberg's What is the upper limit of value?, in particular where they discuss infinite ethics (emphasis mine):

Bostrom’s discussion of infinite ethics is premised on the moral relevance of physically inaccessible value. That is, it assumes that aggregative utilitarianism is over the full universe, rather than the accessible universe. This requires certain assumptions about the universe, as well as being premised on a variant of the incomparability argument that we dismissed above, but has an additional response which is possible, presaged earlier. Namely, we can argue that this does not pose a problem for ethical decision-making even using aggregative ethics, because the consequences of any ethical decision can have only a finite (difference in) value. This is because the value of a moral decision relates only to the impact of that decision. Anything outside of the influenced universe is not affected, and the arguments above show that the difference any decision makes is finite.

I first read their paper a few years ago and found their arguments for the finiteness of value persuasive, as well as their collectively-exhaustive responses in section 4 to possible objections. So ever since then I've been admittedly confused by claims that the problems of infinite ethics still warrant concern w.r.t. ethical decision-making (e.g. I don't really buy Joe Carlsmith's arguments for acknowledging that infinities matter in this context, same for Toby Ord's discussion in a recent 80K podcast). What am I missing?

I think GiveWell shouldn’t be modeled as wanting to recommend organizations that save as many current lives as possible. I think a more accurate way to model them is “GiveWell recommends organizations that are [within the Overton Window]/[have very sound data to back impact estimates] that save as many current lives as possible.” If GiveWell wanted to recommend organizations that save as many human lives as possible, their portfolio would probably be entirely made up of AI safety orgs.

This paragraph, especially the first sentence, seems to be based on a misunderstanding I used to share, which Holden Karnofsky tried to correct back in 2011 (when he was still at GiveWell) with the blog post Why we can’t take expected value estimates literally (even when they’re unbiased), in which he argued (emphasis his):

While some people feel that GiveWell puts too much emphasis on the measurable and quantifiable, there are others who go further than we do in quantification, and justify their giving (or other) decisions based on fully explicit expected-value formulas. The latter group tends to critique us – or at least disagree with us – based on our preference for strong evidence over high apparent “expected value,” and based on the heavy role of non-formalized intuition in our decisionmaking. ...

We believe that people in this group are often making a fundamental mistake... [of] estimating the “expected value” of a donation (or other action) based solely on a fully explicit, quantified formula, many of whose inputs are guesses or very rough estimates. 

We believe that any estimate along these lines needs to be adjusted using a “Bayesian prior”; that this adjustment can rarely be made (reasonably) using an explicit, formal calculation; and that most attempts to do the latter, even when they seem to be making very conservative downward adjustments to the expected value of an opportunity, are not making nearly large enough downward adjustments to be consistent with the proper Bayesian approach.

This view of ours illustrates why – while we seek to ground our recommendations in relevant facts, calculations and quantifications to the extent possible – every recommendation we make incorporates many different forms of evidence and involves a strong dose of intuition. And we generally prefer to give where we have strong evidence that donations can do a lot of good rather than where we have weak evidence that donations can do far more good – a preference that I believe is inconsistent with the approach of giving based on explicit expected-value formulas (at least those that (a) have significant room for error (b) do not incorporate Bayesian adjustments, which are very rare in these analyses and very difficult to do both formally and reasonably).
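To make the mechanic concrete, here's a minimal sketch of the kind of Bayesian adjustment Holden describes, assuming a normal skeptical prior and a normally distributed (very noisy) explicit estimate; all the numbers are made up for illustration, and this is my gloss rather than GiveWell's actual model.

```python
# A minimal sketch of a Bayesian adjustment to an explicit expected-value estimate,
# assuming a normal prior and a normally distributed noisy estimate.
# All numbers are illustrative, not GiveWell's.

prior_mean, prior_sd = 1.0, 1.0        # skeptical prior: typical charity ~1x, little spread
estimate, estimate_sd = 100.0, 100.0   # rough explicit estimate: 100x, with huge error bars

# Normal-normal conjugate update: the posterior mean is a precision-weighted
# average of the prior mean and the explicit estimate.
prior_precision = 1 / prior_sd ** 2
estimate_precision = 1 / estimate_sd ** 2
posterior_mean = (prior_mean * prior_precision + estimate * estimate_precision) / (
    prior_precision + estimate_precision
)

print(round(posterior_mean, 2))  # ~1.01: the noisy "100x" estimate barely moves the skeptical prior
```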

(He has since developed this view further in the 2014 post Sequence thinking vs cluster thinking.) Further down, Holden wrote:

My prior for charity is generally skeptical, as outlined at this post. Giving well seems conceptually quite difficult to me, and it’s been my experience over time that the more we dig on a cost-effectiveness estimate, the more unwarranted optimism we uncover.

This guiding philosophy hasn't changed; in GiveWell's How we work - criteria - cost-effectiveness they write:

Cost-effectiveness is the single most important input in our evaluation of a program's impact. However, there are many limitations to cost-effectiveness estimates, and we do not assess programs solely based on their estimated cost-effectiveness. We build cost-effectiveness models primarily because:

  • They help us compare programs or individual grant opportunities to others that we've funded or considered funding; and
  • Working on them helps us ensure that we are thinking through as many of the relevant issues as possible.

which jibes with what Holden wrote in the sequence thinking vs cluster thinking article above about their relative advantages and disadvantages.

Note that this is for global health & development charities, where the feedback loops to sense-check and correct the cost-effectiveness analyses that guide resource allocation and decision-making are much clearer and tighter than for AI safety orgs (and longtermist work more generally). If it's already this hard for GHD work, I get much more skeptical of CEAs in AIS with super-high EVs, purely on model uncertainty grounds.

This isn't meant to devalue AIS work! I think it's critical and important, and I think some of the "p(doom) modeling" work is persuasive (MTAIR, Froolow, and Carlsmith come to mind). I just thought that "If GiveWell wanted to recommend organizations that save as many human lives as possible, their portfolio would probably be entirely made up of AI safety orgs" felt off, given what they're trying to do and how they're going about it.

Hi Catherine! Great writeup, I really liked it :) I especially liked "Good call 4: I reliably did stuff that seemed to need doing, even if they were boring, low status, or unpaid. I tried to be of service to others." It reminds me of Miranda's essay The Importance of Sidekicks, which resonated with me more than the usual hero narratives I hear bandied about.

Also:

I was very influenced by some earlier EAs like Julia and Jeff - who gave up significant resources to make a big difference with not a lot of encouragement from the world.

Julia and Jeff's story was personally inspirational to me as well. I kept going back to the anecdotes in Strangers Drowning by Larissa MacFarquhar in the chapter profiling Julia, Jeff and the early EA movement; they were powerfully moving. 

I think I buy that interventions which reduce either catastrophic or extinction risk by 1% for < $1 trillion exist. I'm less sure whether many of these interventions clear the 1,000x bar, though, which (naively swapping the US VSL of ~$7 million for AMF's ~$5k per life saved) seems to imply needing a 1% reduction for < $1 billion. (I recall Linch's comment being bullish and comfortable on interventions reducing x-risk by ~0.01% at ~$100 mil, which could be interpreted either as ~100x, i.e. in the ballpark of GiveDirectly's cash transfers, or as aggregating over a longer timescale than "by 2050"; the latter is probably the case. The other comments on that post offer a pretty wide range of values.)
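Here's a rough sketch of that arithmetic in Python. To be clear, the specific figures and the reading of the 1,000x bar as "GiveWell-tier cost per life saved" are my own simplifying assumptions, not numbers from the comments I'm referencing.

```python
# Rough back-of-the-envelope behind the paragraph above; all figures are
# illustrative simplifications, not from any cited source.

US_VSL = 7e6              # US value of a statistical life, ~$7 million
AMF_COST_PER_LIFE = 5e3   # rough GiveWell-tier cost per life saved (AMF)
BENCHMARK = 1e12          # "1% risk reduction for < $1 trillion"

# Naively swapping VSL pricing for AMF pricing shrinks the acceptable budget ~1,400x,
# so clearing the 1,000x bar looks like needing a 1% reduction for < $1 billion:
scaling = US_VSL / AMF_COST_PER_LIFE          # ~1,400
budget_to_clear_bar = BENCHMARK / scaling     # ~$0.7 billion

# Linch's figure, read as "0.01% x-risk reduction per ~$100M", scaled up to 1%:
linch_cost_per_1pct = 100e6 * (0.01 / 0.0001)  # ~$10 billion
# ...which is ~14x the ~$0.7B bar, i.e. roughly 1,000x / 14, or ~100x to the
# nearest OOM: in the ballpark of GiveDirectly rather than the 1,000x bar.

print(f"{budget_to_clear_bar / 1e9:.1f}B", f"{linch_cost_per_1pct / budget_to_clear_bar:.0f}x")
```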

That said, I've never actually seen a BOTEC justifying an actual x-risk grant (as opposed to, e.g., Open Phil's sample BOTECs for various grants with confidential details redacted), so my remarks above seem mostly immaterial to how x-risk cost-effectiveness estimates inform grant allocations in practice. I'd love to see some real examples.

Perhaps it's less surprising given who counted as 'superforecasters'; cf. magic9mushroom's comment here. I'm not sure how much their personal anecdote as a participant generalizes, though.

I'm confused by the disagree-votes on Malde's comment, since it makes sense to me. Can anyone who disagreed explain their reasoning?
