Bio

CE/AIM Research Training Program graduate and research contractor at ARMoR under the Global Impact Placements program, working on research & quantitative modeling to support policy advocacy for market-shaping tools to help combat AMR, and also exploring similar "decision guidance" roles, e.g. applied prioritization research. Previously supported by an FTX Future Fund regrant and later Open Philanthropy's affected grantees program. Before that I spent 6 years doing data analytics, business intelligence, and knowledge + project management in various industries (airlines, e-commerce) and departments (commercial, marketing), after majoring in physics at UCLA. I've also initiated some local priorities research efforts, e.g. a charity evaluation initiative with the moonshot aim of reorienting Malaysia's giving landscape towards effectiveness, albeit with mixed results.

I first learned about effective altruism circa 2014 via A Modest Proposal, a polemic on using dead children as units of currency to force readers to grapple with the opportunity costs of subpar resource allocation under triage. I have never stopped thinking about it since, although my relationship to it has changed quite a bit; I related to Tyler's personal story (which unsurprisingly also references A Modest Proposal as a life-changing polemic):

I thought my own story might be more relatable for friends with a history of devotion – unusual people who’ve found themselves dedicating their lives to a particular moral vision, whether it was (or is) Buddhism, Christianity, social justice, or climate activism. When these visions gobble up all other meaning in the life of their devotees, well, that sucks. I go through my own history of devotion to effective altruism. It’s the story of [wanting to help] turning into [needing to help] turning into [living to help] turning into [wanting to die] turning into [wanting to help again, because helping is part of a rich life].

Comments

Upvoted :) 

I agree with Ben Millwood's comment that this wouldn't change many decisions in practice.

To add another point, input parameter uncertainty is larger than you probably think, even for direct-delivery GHD charities (let alone policy or meta orgs). The post Quantifying Uncertainty in GiveWell Cost-Effectiveness Analyses visualises this point particularly vividly; you can see how a 10% change doesn't really change prioritisation much:

| Intervention | GiveWell | Our Mean | 95% CI | Difference |
| --- | --- | --- | --- | --- |
| Against Malaria Foundation | 0.0375 | 0.0384 | 0.0234 - 0.0616 | +2.4% |
| GiveDirectly | 0.00335 | 0.00359 | 0.00167 - 0.00682 | +7% |
| Helen Keller International | 0.0541 | 0.0611 | 0.0465 - 0.0819 | +12.8% |
| Malaria Consortium | 0.031 | 0.0318 | 0.0196 - 0.0452 | +2.52% |
| New Incentives | 0.0458 | 0.0521 | 0.0139 - 0.117 | +13.8% |

(Look at how large those 95% CIs are vs a 10% change.)
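To make the mechanism concrete, here's a minimal Monte Carlo sketch (with made-up inputs, not GiveWell's actual parameters): when a cost-effectiveness estimate is a product of several uncertain inputs, the combined 95% CI easily spans a several-fold range, dwarfing any 10% shift in a point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical inputs for a bednet-style CEA, each lognormally
# distributed with ~20-30% relative uncertainty (made-up numbers).
cost_per_net = rng.lognormal(np.log(5.0), 0.2, n)            # $ per net delivered
nets_per_death_averted = rng.lognormal(np.log(900), 0.3, n)
adjustment = rng.lognormal(np.log(0.8), 0.2, n)              # usage, displacement, etc.

deaths_averted_per_dollar = adjustment / (cost_per_net * nets_per_death_averted)

lo, mid, hi = np.percentile(deaths_averted_per_dollar, [2.5, 50, 97.5])
print(f"median {mid:.2e}, 95% CI [{lo:.2e}, {hi:.2e}], width {hi/lo:.1f}x")
```

A ±10% move in any single input barely registers against a CI that is ~5x wide, which is why the rankings stay so stable under small perturbations.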

I think a useful way to go about this is to ask: what would have to change to alter the decisions (e.g. top-recommended charities, intervention ideas turned into incubated charities, etc.)? This gets you into uncertainty analysis, for which I'd point you to froolow's Methods for improving uncertainty analysis in EA cost-effectiveness models.
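Concretely, a toy version of that exercise (my sketch, not froolow's actual method) is a switch-point search: for each uncertain input, find how far it would have to move before two options swap ranks.

```python
from scipy.optimize import brentq

# Toy models: option A's cost-effectiveness depends on an uncertain
# effect size (baseline 1.0); option B is a fixed benchmark.
def ce_a(effect_size: float) -> float:
    return 0.04 * effect_size

CE_B = 0.03

# Switch point: the effect size at which A and B swap ranks.
switch = brentq(lambda e: ce_a(e) - CE_B, 0.01, 10.0)
print(f"A stays ahead of B unless the effect size falls below {switch:.2f} "
      f"(a {1 - switch:.0%} drop from baseline)")
```

If every input would have to move far outside its plausible range to flip the ranking, extra precision isn't buying you decisions; if one input sits near its switch point, that's where better data is worth the effort.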

The ARC Prize website takes this definitional stance on AGI:

Consensus but wrong:

AGI is a system that can automate the majority of economically valuable work.

Correct:

AGI is a system that can efficiently acquire new skills and solve open-ended problems.

Something like the former definition, central to reports like Tom Davidson's CCF-based takeoff speeds for Open Phil, basically drops out of (the first half of the reasoning behind) the big-picture view summarized in Holden Karnofsky's most important century series: to quote him, the long-run future would be radically unfamiliar and could come much faster than we think, simply because standard economic growth models imply that any technology that could fully automate innovation would cause an "economic singularity"; one such technology could be what Holden calls PASTA ("Process for Automating Scientific and Technological Advancement"). In What kind of AI? he elaborates (emphasis mine):

I mean PASTA to refer to either a single system or a collection of systems that can collectively do this sort of automation. ...

By talking about PASTA, I'm partly trying to get rid of some unnecessary baggage in the debate over "artificial general intelligence." I don't think we need artificial general intelligence in order for this century to be the most important in history. Something narrower - as PASTA might be - would be plenty for that. ...

I don't particularly expect all of this to happen as part of a single, deliberate development process. Over time, I expect different AI systems to be used for different and increasingly broad tasks, including and especially tasks that help complement human activities on scientific and technological advancement. There could be many different types of AI systems, each with its own revenue model and feedback loop, and their collective abilities could grow to the point where at some point, some set of them is able to do everything (with respect to scientific and technological advancement) that formerly required a human.

This is why I think it's basically justified to care about economy-growing automation of innovation as "the right working definition" from the x-risk reduction perspective for a funder like Open Phil in particular, which isn't what an AI researcher like Francois Chollet cares about. Which is fine; different folks care about different things. But calling the first definition "wrong" feels like the sort of mistake you make when you haven't at least made a good-faith effort to do what Scott suggested here with the first definition:

... if you're looking into something controversial, you might have to just read the biased sources on both sides, then try to reconcile them.

Success often feels like realizing that a topic you thought would have one clear answer actually has a million different answers depending on how you ask the question. You start with something like "did the economy do better or worse this year?", you find that it's actually a thousand different questions like "did unemployment get better or worse this year?" vs. "did the stock market get better or worse this year?" and end up with things even more complicated like "did employment as measured in percentage of job-seekers finding a job within six months get better" vs. "did employment as measured in total percent of workforce working get better?". Then finally once you've disentangled all that and realized that the people saying "employment is getting better" or "employment is getting worse" are using statistics about subtly different things and talking past each other, you use all of the specific things you've discovered to reconstruct a picture of whether, in the ways important to you, the economy really is getting better or worse.

Note also that PASTA is definitionally a lot looser than the AGI defined in Metaculus' When will the first general AI system be devised, tested, and publicly announced? (2031 as of time of writing), which requires the sort of properties Chollet would probably approve of (a single unified software system, not a cobbled-together set of task-specialized subsystems). Yet if the PASTA collective functionally completes the innovation -> resources -> PASTA -> innovation -> ... economic growth loop, that would already be x-risk relevant. The argument would then need to be that something like Chollet's/Metaculus' definition is necessary to complete the growth loop, which would be a testable hypothesis.
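The loop is easy to make concrete in a toy model (my own construction, not Davidson's actual CCF model): compare a fixed human research workforce with research capacity that scales with output. Only the latter closes the loop, and the difference in growth is qualitative, not marginal.

```python
# Toy growth loop: output funds research, research raises productivity.
# Human-only research keeps the research input fixed; "PASTA-style"
# research scales with output, closing the loop. All numbers made up.

def steps_to_reach(target: float, automated: bool, max_steps: int = 100_000) -> int:
    productivity = 1.0
    for step in range(1, max_steps + 1):
        research = 0.1 * productivity if automated else 0.1
        productivity *= 1 + 0.05 * research
        if productivity >= target:
            return step
    return max_steps

print("steps to 100x output, human-only research:", steps_to_reach(100, False))
print("steps to 100x output, automated research: ", steps_to_reach(100, True))
```

The human-only economy takes roughly 4x as long to reach 100x here, and the gap widens without bound: the automated loop is on a hyperbolic trajectory, which is the "economic singularity" behavior the growth models point at.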

AMF does. Quoting a recent post by Rob Mathers (AMF's CEO), emphasis mine:

Many recognise the impact of AMF’s work, yet we still have significant immediate funding gaps that are over US$300m. ...

There is already a significant shortfall in funding for malaria control activities, including for net distribution programmes so miraculous things will have to happen in the coming year if we are to get anywhere close, globally and across all funding partners, to where we need to be to be able to drive malaria impact numbers down. Counterfactually of course, if the funding that is being brought to bear was not there, the number of people affected by malaria would be horrifically higher. Currently there are ~620,000 deaths a year from malaria and 250 million people fall sick. 

The Global Fund is the world’s largest funder of malaria control activities and has a funding replenishment round every three years, with funding provided by global governments, that determines the funds it has available across three disease areas: HIV/Aids, malaria and TB. The target for the 2024 to 2026 period was raising US$18 billion, largely to stand still. The funding achieved was US$15.7 billion. The shortfall will have major ramifications and we are already seeing the impact in planning in the Democratic Republic of Congo, one of the two countries in the world worst affected by malaria, for the 2024 to 2026 programme. Currently only 65% of the nets desperately needed will be able to be funded. We have never had this low a percentage of funding at this stage, with limited additional funding forecast.

The latest actual publicly-available RFMF figure I can find for AMF and the other top GW charities is here, from Q3 2020, which is probably what you're referring to in the OP by "It's hard to find up-to-date data"; back then it was just $37.8M, nearly an order of magnitude lower, although I'm not sure whether Rob's and GiveWell's RFMF figures are like-for-like.

The justifications for these grants tend to use some simple expected value calculation of a singular rosy hypothetical causal chain. The problem is that it's possible to construct a hypothetical causal chain to justify any sort of grant. So you have to do more than just make a rosy causal chain and multiply numbers through.

Worth noting that even GiveWell doesn't rely on a single EV calculation either (however complex). Quoting Holden's ten-year-old writeup Sequence thinking vs. cluster thinking:

Our approach to making such comparisons strikes some as highly counterintuitive, and noticeably different from that of other “prioritization” projects such as Copenhagen Consensus. Rather than focusing on a single metric that all “good accomplished” can be converted into (an approach that has obvious advantages when one’s goal is to maximize), we tend to rate options based on a variety of criteria using something somewhat closer to (while distinct from) a “1=poor, 5=excellent” scale, and prioritize options that score well on multiple criteria.

We often take approaches that effectively limit the weight carried by any one criterion, even though, in theory, strong enough performance on an important enough dimension ought to be able to offset any amount of weakness on other dimensions. 

... I think the cost-effectiveness analysis we’ve done of top charities has probably added more value in terms of “causing us to reflect on our views, clarify our views and debate our views, thereby highlighting new key questions” than in terms of “marking some top charities as more cost-effective than others.”
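For what it's worth, the "limit the weight carried by any one criterion" move is easy to operationalize; here's a toy sketch (my construction, not GiveWell's actual scoring):

```python
# Toy "cluster thinking" score: criteria rated 1-5, but each
# criterion's contribution is capped so no single dazzling score
# can offset weakness everywhere else.

def cluster_score(scores: dict[str, float], cap: float = 4.0) -> float:
    return sum(min(s, cap) for s in scores.values()) / len(scores)

high_ev_but_shaky = {"EV estimate": 5.0, "evidence base": 1.0, "track record": 1.0}
solid_all_round = {"EV estimate": 3.5, "evidence base": 4.0, "track record": 4.0}

print(cluster_score(high_ev_but_shaky))  # 2.0: a huge EV can't rescue it
print(cluster_score(solid_all_round))    # ~3.8: good on every criterion wins
```

Under pure sequence thinking the first option's EV would dominate; the cap is what encodes "strong enough performance on one dimension can't offset any amount of weakness elsewhere."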

I'd be interested to see explanations from the disagree-voters (even short ones would be useful). Was it the proposed renaming? The description draft? Something else? 

Yeah, I roll to disbelieve too. One of my quantitative takeaways from Andrew Gelman's modelling of the 2020 elections was that very few states (4/50: New Hampshire, Pennsylvania, Wisconsin, and Michigan) were modelled as close enough that p(one vote changes outcome) > 1 in 10 million; New Hampshire tops the list at 1 in 8M. Optimistically assuming $100 per voter, that's still nearly a billion dollars at the very low end; a more realistic estimate would probably be ~1 OOM higher. Probably some sort of nonlinearity kicks in at this scale, or the most cost-effective tactics to sway voters cap out at relatively low levels for whatever reason?
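Spelling out the back-of-envelope behind that (Gelman's decisiveness odds plus my assumed cost per persuaded voter):

```python
# Expected cost to flip one election: cost per vote / p(vote is decisive).
p_decisive = 1 / 10_000_000    # Gelman-style odds that one vote flips the outcome
cost_per_voter = 100           # assumed $ to persuade one marginal voter

print(f"${cost_per_voter / p_decisive:,.0f} to flip the election in expectation")
# -> $1,000,000,000; ~$10B if persuasion is closer to $1,000 per voter
```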

On the flip side, I'm reminded of Scott's essay Too much dark money in almonds?, which provides an intuition pump for why swinging an election might not be as expensive as you'd expect:

Everyone always talks about how much money there is in politics. This is the wrong framing. The right framing is Ansolabehere et al’s: why is there so little money in politics? But Ansolabehere focuses on elections, and the mystery is wider than that. ... 

(in case you’re keeping track: all donations to all candidates, all lobbying, all think tanks, all advocacy organizations, the Washington Post, Vox, Mic, Mashable, Gawker, and Tumblr, combined, are still worth a little bit less than the almond industry. And Musk could buy them all.) ... 

In this model, the difference between politics and almonds is that if you spend $2 on almonds, you get $2 worth of almonds. In politics, if you spend $2 on Bernie Sanders, you get nothing, unless millions of other people also spend their $2 on him. People are great at spending money on direct consumption goods, and terrible at spending money on coordination problems.

(I don't really have an opinion either way on whether more or less money should be spent on this)

[Caveat that I don't know anything else about this] 

I recall Rob Wiblin's 80K article on voting referencing this summary table from the 2015 edition of Get Out The Vote, which claims it takes "$30-100 or a few hours of work as a volunteer" to "persuade one stranger to vote for your preferred candidate" (a lot lower than the OP's claimed figures), and arguing that even adjusting upwards for various factors doesn't worsen this by more than an OOM.

Is it important to get others to vote? Here is a table of cost-effectiveness estimates of various interventions to get out the vote.

(That said, I regard these figures basically the same way GW treats the best cost-effectiveness estimates in the DCP2/3) 

For structural AI risk, maybe start from Allan Dafoe's writings (e.g. this or this with Remco Zwetsloot) and follow the links to cited authors? Also Sam Clarke (here) and Justin Olive (here).

This sounds similar to what David Chapman wrote about in How to think real good; he's mostly talking about solving technical STEM-y research problems, but I think the takeaways apply more broadly:

Many of the heuristics I collected for “How to think real good” were about how to take an unstructured, vague problem domain and get it to the point where formal methods become applicable.

Formal methods all require a formal specification of the problem. For example, before you can apply Bayesian methods, you have to specify what all the hypotheses are, what sorts of events constitute “evidence,” how you can recognize one of those events, and (in a decision theoretic framework) what the possible actions are. Bayesianism takes these as given, and has nothing to say about how you choose them. Once you have chosen them, applying the Bayesian framework is trivial. (It’s just arithmetic, for godssakes!)

Finding a good formulation for a problem is often most of the work of solving it. [...]

Before applying any technical method, you have to already have a pretty good idea of what the form of the answer will be.

Part of a “pretty good idea” is a vocabulary for describing relevant factors. Any situation can be described in infinitely many ways. For example, my thinking right now could be described as an elementary particle configuration, as molecules in motion, as neurons firing, as sentences, as part of a conversation, as primate signaling behavior, as a point in world intellectual history, and so on.

Choosing a good vocabulary, at the right level of description, is usually key to understanding.

A good vocabulary has to do two things. Let’s make them anvils:

1. A successful problem formulation has to make the distinctions that are used in the problem solution.

So it mustn’t categorize together things that are relevantly different. Trying to find an explanation of manic depression stated only in terms of emotions is unlikely to work, because emotions, though relevant, are “too big” as categories. “Sadness” is probably a complex phenomenon with many different aspects that get collapsed together in that word.

2. A successful problem formulation has to make the problem small enough that it’s easy to solve.

Trying to find an explanation of manic depression in terms of brain state vectors in which each element is the membrane potential of an individual neuron probably won’t work. That description is much too complicated. It makes billions of distinctions that are almost certainly irrelevant. It doesn’t collapse the state space enough; the categories are too small and therefore too numerous.

It’s important to understand that problem formulations are never right or wrong.

Truth does not apply to problem formulations; what matters is usefulness.

In fact,

All problem formulations are “false,” because they abstract away details of reality. 

[...]

There’s an obvious difficulty here: if you don’t know the solution to a problem, how do you know whether your vocabulary makes the distinctions it needs? The answer is: you can’t be sure; but there are many heuristics that make finding a good formulation more likely. Here are two very general ones:

Work through several specific examples before trying to solve the general case. Looking at specific real-world details often gives an intuitive sense for what the relevant distinctions are.

Problem formulation and problem solution are mutually-recursive processes.

You need to go back and forth between trying to formulate the problem and trying to solve it. A “waterfall” approach, in which you take the formulation as set in stone and just try to solve it, is rarely effective.

(sorry for the overly long quote, concision is a work in progress for me...)
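For what it's worth, Chapman's parenthetical ("it's just arithmetic, for godssakes!") is literal: once the hypotheses, priors, and likelihoods are pinned down, the update really is a few lines. A minimal sketch with made-up numbers:

```python
# Bayes' rule once the problem is formulated: hypotheses, priors,
# and likelihoods all given. (Made-up numbers.)
priors = {"H1": 0.7, "H2": 0.3}
likelihoods = {"H1": 0.2, "H2": 0.9}   # P(evidence | H)

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)   # {'H1': ~0.34, 'H2': ~0.66}
```

All the hard work went into choosing H1, H2, and what counts as evidence, which is exactly his point about problem formulation.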

From Richard Y Chappell's post Theory-Driven Applied Ethics, answering "what is there for the applied ethicist to do, that could be philosophically interesting?", emphasis mine:

A better option may be to appeal to mid-level principles likely to be shared by a wide range of moral theories. Indeed, I think much of the best work in applied ethics can be understood along these lines. The mid-level principles may be supported by vivid thought experiments (e.g. Thomson’s violinist, or Singer’s pond), but these hypothetical scenarios are taken to be practically illuminating precisely because they support mid-level principles (supporting bodily autonomy, or duties of beneficence) that we can then apply generally, including to real-life cases.

The feasibility of this principled approach to applied ethics creates an opening for a valuable (non-trivial) form of theory-driven applied ethics. Indeed, I think Singer’s famous argument is a perfect example of this. For while Singer in no way assumes utilitarianism in his famous argument for duties of beneficence, I don’t think it’s a coincidence that the originator of this argument was a utilitarian. Different moral theories shape our moral perspectives in ways that make different factors more or less salient to us. (Beneficence is much more central to utilitarianism, even if other theories ought to be on board with it too.)

So one fruitful way to do theory-driven applied ethics is to think about what important moral insights tend to be overlooked by conventional morality. That was basically my approach to pandemic ethics: to those who think along broadly utilitarian lines, it’s predictable that people are going to be way too reluctant to approve superficially “risky” actions (like variolation or challenge trials) even when inaction would be riskier. And when these interventions are entirely voluntary—and the alternative of exposure to greater status quo risks is not—you can construct powerful theory-neutral arguments in their favour. These arguments don’t need to assume utilitarianism. Still, it’s not a coincidence that a utilitarian would notice the problem and come up with such arguments.

Another form of theory-driven applied ethics is to just do normative ethics directed at confused applied ethicists. For example, it’s commonplace for people to object that medical resource allocation that seeks to maximize quality-adjusted life years (QALYs) is “objectionably discriminatory” against the elderly and disabled, as a matter of principle. But, as I argue in my paper, Against 'Saving Lives': Equal Concern and Differential Impact, this objection is deeply confused. There is nothing “objectionably discriminatory” about preferring to bestow 50 extra life-years to one person over a mere 5 life-years to another. The former is a vastly greater benefit, and if we are to count everyone equally, we should always prefer greater benefits over lesser ones. It’s in fact the opposing view, which treats all life-saving interventions as equal, which fails to give equal weight to the interests of those who have so much more at stake.

Two asides:

  • This seems broadly correct (at least for someone who shares my biases); e.g. even in pure math, John von Neumann warned:

As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired by ideas coming from "reality" it is beset with very grave dangers. It becomes more and more purely aestheticizing, more and more purely l'art pour l'art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much "abstract" inbreeding, a mathematical subject is in danger of degeneration. ... In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the re-injection of more or less directly empirical ideas.

  • This makes me wonder if it would be fruitful to look at & somehow incorporate mid-level principles into decision-relevant cost-effectiveness analyses that attempt to incorporate moral uncertainty, e.g. HLI's app or Rethink's CCM. (This is not at all a fleshed-out thought, to be clear)
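As a gesture at what that might look like, here's a toy sketch of expected-choiceworthiness aggregation (my construction, not HLI's or Rethink's actual models): give each moral theory a credence, score each option under each theory, and aggregate.

```python
# Toy expected-choiceworthiness aggregation across moral theories.
# Credences and scores are made up; real models would have to face
# the intertheoretic-comparison problem this glosses over.
credences = {"totalism": 0.5, "person-affecting": 0.3, "suffering-focused": 0.2}

scores = {
    "bednets":        {"totalism": 10, "person-affecting": 10, "suffering-focused": 6},
    "animal welfare": {"totalism": 12, "person-affecting": 2,  "suffering-focused": 20},
}

for option, by_theory in scores.items():
    ec = sum(credences[t] * by_theory[t] for t in credences)
    print(f"{option}: expected choiceworthiness {ec:.1f}")
```

Mid-level principles could plausibly enter either as constraints on the options considered or as extra rows with credence mass of their own, though the common-unit assumption is doing a lot of quiet work here.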