joshcmorrison

Comments

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Yeah, it's unclear how much of the 20% reduction is due to OP's work versus what would have happened counterfactually. My main point with that number is that reductions of that size are very possible, which implies that assuming a 1-10% chance of that level of impact at a funding level 10-100x OP's amount is overly conservative (particularly since I think OP was funding something like 25% of American CJR work -- though that number may be a bit off).

Another quick back-of-the-envelope way to do the math would be to say something like: assume 1. 50% of policy change is due to deliberate advocacy, 2. OP is a funder of average ability that is funding 25% of the field, and 3. the 20% change from 2009-2018 implies a further 20% change is 50% likely at their level of funding. Then I think you get 6.25% (.5*.25*.5) odds of OP's $25M/year funding level achieving a 20% change in incarceration rates. If I'm reading your math right (and sorry if not), a 20% reduction for 10 years would be worth something like 4M QALYs using point estimates (2M people * 10 years * 1 QALY * 20% decrease), which I think comes out to 250K QALYs in expectation (6.25% * 4M). At $25M/year for 10 years, that would be $1K/QALY -- similar to your estimate for GiveDirectly but worse than AMF. (Sorry if any of that math is wrong -- I did it quickly and haphazardly.)
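For concreteness, here's a minimal sketch of that back-of-the-envelope calculation in Python; every input is just one of the rough assumptions stated above, not a measured quantity:

```python
# Back-of-the-envelope sketch of the comment's numbers.
# All inputs are rough assumptions from the comment, not data.

p_advocacy = 0.50        # share of policy change due to deliberate advocacy
op_share = 0.25          # share of the CJR field funded by OP
p_further_change = 0.50  # odds of a further 20% reduction at this funding level

p_impact = p_advocacy * op_share * p_further_change  # 0.0625, i.e. 6.25%

people = 2_000_000       # incarcerated population (point estimate)
years = 10
qaly_per_year = 1.0      # assumed QALY value of a person-year out of prison
reduction = 0.20

qalys_if_success = people * years * qaly_per_year * reduction  # 4M QALYs
expected_qalys = p_impact * qalys_if_success                   # 250K QALYs

spend = 25_000_000 * years                                     # $250M total
print(f"P(impact): {p_impact:.2%}")                 # 6.25%
print(f"Expected QALYs: {expected_qalys:,.0f}")     # 250,000
print(f"Cost per QALY: ${spend / expected_qalys:,.0f}")  # $1,000
```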
 

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Thanks for putting this together! I think criticizing funders is quite valuable, and I commend your doing so. My main object-level thought here is that I suspect much of the disagreement with the OP funding decision centers on the 1%-10% estimate of a $2B-$20B campaign leading to a 25%-75% decrease in incarceration. Since, per this article, incarceration rates in the U.S. declined 20% (per person) between 2008 and 2019, your estimates here seem somewhat pessimistic to me.

My guess is that at the outset, OP would have predicted a different order of magnitude for both of those numbers (so I would have estimated something closer to a $500M-$5B campaign producing a 5%-50% chance of a 25%-75% decrease), particularly since (as has been mentioned in other comments) it seemed a particularly tractable moment for criminal justice reform: crime had declined for a long time, there was seeming bipartisan support for reform, costs of incarceration were high, and the U.S. was so far outside the global norm. By my quick read of the math, that change would put the numbers on par with your estimates for global health stuff.

As someone who's worked on criminal justice reform (on a volunteer basis, not OP-funded, though inspired by OP-funded work like Just Leadership's Close Rikers campaign), two features of the field are striking to me: 1. OP's original vision was to reduce incarceration while also reducing crime -- I don't think the "reduce crime" half ended up being a main goal of the work, which I think has probably made it less politically robust. 2. A lot of criminal justice reform work (including mine) has stemmed from the thesis that empowering the voices of currently and formerly incarcerated people would be politically beneficial; in retrospect I think this may have been mistaken (or at least an incomplete hypothesis) and that, more broadly, the left identity-politics-based strategies of the 2010s have not been as politically (especially electorally) successful as I, at least, had hoped.

On a meta-level, I think estimating the impact of past grantmaking is very important, and EAs should do more of it. (I also think something along these lines could theoretically provide a scalable internship program for EA college students, since estimating impact teaches both cause-prioritization skills and an understanding of how organizations trying to achieve EA goals operate.)
 

Edit: I should clarify that I've received significant funding from OP (including from their US Policy side, which covered their criminal justice work), so I'm naturally biased in its favor.

Some potential lessons from Carrick’s Congressional bid

Am a bit late to this but wanted to jot down a few thoughts:

  1. Does EA Represent Electoral Constituents? Since EA is cosmopolitan and disregards national and possibly temporal boundaries, does that mean EA politicians will prioritize non-voters over the interests of voters? A lot of EAs may feel that which Americans have health insurance coverage or a right to an abortion is less important than Africans dying of malaria or humanity going extinct. But 1. is this a legitimate basis for democratic politics, and 2. if legitimate, will espousing it inherently be a losing electoral strategy (since EA politicians will quickly be branded as doing a suboptimal job representing their voters)?
  2. Should We Be Open to a Chesterton's Fence Around Money in Politics? One political science mystery (which EA investment in electoral races has tried to exploit) is that there is less money in politics than might be rational from the economic self-interest of motivated stakeholders (i.e. campaign donations and lobbying expenditures are much less than the government spending they help determine). But the Flynn campaign experience implies that at least being identified with a single donor creates strong backlash, so we should perhaps more carefully consider explanations for the "too little money in politics" mystery that aren't simply "it is rational to spend more money on politics."
  3. Should We Require Local Buy-In to Run in Electoral Races? I'd love to understand better the local Oregonian organizing and stakeholder building that was done for this campaign. I'm also curious to what extent Nick Kristof (who was running for Governor of Oregon and has written sympathetically about effective altruism) was engaged. I was frankly pretty surprised not to see Kristof publicly on board, particularly because he'd built a gubernatorial campaign that ended up not being used (since he was excluded from the ballot for residency reasons). Given the heavy carpetbagging criticism of Flynn (outside crypto money, hadn't voted in recent elections, etc.) and some of the issues that have come up (criticism on local subreddits, Oregonians posting negatively about the campaign on this forum, perhaps a misguided tactic of bringing in outside volunteers for door-knocking), along with question 1 I raise above, it may be uniquely valuable in future races to have at least some local community groups bought in ahead of time.
Demandingness and Time/Money Tradeoffs are Orthogonal

Thanks Caroline for writing this! I think it's a really rich vein to mine because it pulls together several threads I've been thinking a lot about lately.

One issue it raises is whether we should care about the "altruist" in effective altruism. If someone is doing really useful things because they think FTX will pay them a lot of money or fund their political ambitions, is this good because useful things happen or bad because they won't be a trustworthy agent for EA when put into positions of power? My instinct is to prefer giving people good incentives over selecting people who are virtuous: I think virtue tends to be very situationally dependent, and very admirable people can do bad things and self-deceive if it's in their interest to do so. But it's obviously not either-or. I also tend to have fairly bourgeois personal preferences and think EA should aspire to universality, such that lots of adherents can be materially prosperous and conventionally successful and either donate ~20% of their income or work/volunteer for a useful cause (a sort of prosperity-gospel form of EA amenable to wide swathes of the professional and working class, rather than a self-sacrifice form that could be more pure).

A separate issue is one of community health. On an individual level, maybe it's fine if people join EA because the retreats are lit and the potential for power and status is high, but as a group there may be some tipping point where people's self-identity changes as the community in fact prizes the perks and status over results. This could especially be a concern insofar as 1. goals that are far off make it easy to self-deceive about progress and 2. building the EA community can be seen as an end in itself in a way that risks circularity and self-congratulation. You can say the solution here is to really elevate people who do in fact achieve good results (because achieving good things for the world is what we care about), but lots of results take a long time to unfold (even for "near-termist" causes) and are uncertain (e.g. Open Phil's monetary policy and criminal justice reform work, both of which I admire and think have been positive). For example, while I've been in the Bahamas, people have been very complimentary of 1Day Sooner (where I work and which I think EAs tend to see as a success story). I'm proud of my work at 1Day and hopeful that what we've already done is expanding the use of challenge studies to develop better vaccines, but despite achieving some intermediate procedural successes (positive press coverage, some government buy-in and policy choices, some academic and bioethics work), I think the jury is very much still out on what our impact will end up being, and most of our impact will likely come from future work.

The point about self-identity and developing one's moral personhood really drives me in the direction of wanting to encourage people to make altruistic choices that are significant and legible to themselves and others. For example, becoming a kidney donor made me identify more with the desire to have an impact, which led me further into doing EA types of work. I think the norm of donating a significant portion of your income to charity is an important one for this reason, and I've been disappointed to see that norm weaken in recent years. I do worry that some of the types of self-sacrificing behavior you mention aren't legible enough or state-change-y enough to have this permanent character/self-identity-building effect.

There's an obvious point here about PR and I do think committing to behavior that we're proud to display in public is an important principle (though not one that I think necessarily cuts against paying EAs a lot). First, public display is epistemically valuable because (a) it unearths criticisms and ideas an insular community won't necessarily generate and (b) views that have overlapping consensus among diverse audiences are more likely to be true. Second, hiding things isn't a sustainable strategy and also looks bad on its own terms. 

A last, imperfectly related thought: I do think there may be a bit of a flaw in EA considering meta-level community building on the same plane as object-level work, and this might be driving a bit of inflation in meta-level activities that manifests itself in opulent EA college resources (and maybe some other things) that are intuitively jarring even as they can seem intellectually justified. If you consider object- and meta-level stuff on the same plane, the $1 invested in recruiting EAs who then eventually spend $10 and recruit more EAs seems like an amazing investment (way better than spending that $1 on an actual object-level EA activity). But this seems intuitively to me like it's missing something, and discounting the object-level $ for the $ spent on the meta-level needed for fundraising doesn't seem to solve the problem. I'm not sure, but I think the issue (and this also applies to other power-seeking behavior like political fundraising) is that the community building is self-serving (not "altruistic") and from a view outside of EA does not seem morally praiseworthy. We could take the position that that outside view is simply wrong insofar as it doesn't take into account the possibility that we are in fact right about our movement being right. The Ponzi-ishness of the whole thing doesn't quite sit well, but I haven't come to a well-reasoned view.

The Bioethicists are (Mostly) Alright

Thanks for writing this! I run 1Day Sooner (and have a lot of thoughts about bioethics), so I have a special interest.

I really agree with the point that complaints about bioethics are less about the positions of individual bioethicists than the outcomes of bioethical institutions. So I think it's worth asking why these institutions lead to frustrating outcomes. Some briefly sketched out, somewhat simplistic thoughts:

  • Conservatism: Structurally, bioethical scrutiny adds friction to whatever action it is applied to. Providing a justification takes time and effort (as does reviewing the quality of that justification and suggesting and making remedial measures). That friction reduces unethical action, but it also reduces action of all types, and the cost of that general inaction is not accounted for in bioethical review. One line I like to use about IRBs/RECs is that they're like a driver who has only a brake and no gas pedal. More broadly, the academic act of ethical inquiry problematizes decisions into potential mistakes, which increases the complexity of the decision being made and therefore the difficulty of making it. (To be clear, one could argue this tradeoff of fewer ethical abuses for reduced dynamism is worthwhile.)
  • Parochialism: Because bioethical institutions often exist to translate legal regulations into practice, they are embedded in local concerns and are not cosmopolitan. They are more likely to focus case-by-case (and have insufficient incentive to create rules that would apply globally), and they are also unlikely to take the lives of people outside of rich countries seriously (which matters particularly in a research context).
  • Illiberalism: In many cases, bioethicists are partially acting as agents for subjects (like research participants) who do not choose them and have no ability to appeal their judgments. Because bioethicists are (1) in fact different from the people on whose behalf they are making decisions and (2) (unconsciously) motivated to maintain their power/resources, their decision-making is imperfect and paternalistic.
  • What Is Bioethics For? I think retreating to the safety of the academy (i.e. separating the generally reasonable intellectual arguments of academic bioethicists from the practical decision-making of translating biomedical regulation into practice) is not a tenable move for the bioethics field to make. Bioethics exists (i.e. is largely funded) to help solve problems in medical and biological spaces. If, in practice, those problems are being solved poorly, that seems like something the bioethics field needs to take up and solve. Otherwise, what's the use? We could just have philosophers, biolawyers, and doctors.
Have any EA nonprofits tried offering staff funding-based compensation? If not, why not? If so, how did it go?

Yeah, I wasn't really talking about EA donors per se: I think EA nonprofits should try to be funded by non-EA donors (/expand the EA community) to the extent possible, and that we also shouldn't assume there's a clear differentiation between EA and non-EA donors.

That said, I do think the tax effect I outlined would reasonably be of concern to EA donors; and insofar as it isn't (because the compensation mechanism will definitely create better results), the argument becomes a bit circular. I also think there's a principal/agent problem between donors (who want to maximize impact) and nonprofit staff (who are motivated, consciously or unconsciously, in part by maximizing compensation/job security), and it would be a mistake to assume that shared EA values fully solve that problem.

Have any EA nonprofits tried offering staff funding-based compensation? If not, why not? If so, how did it go?

This is an intriguing idea, and I'm all for experimentation in nonprofits generally and with compensation specifically. I also find nonprofit performance incentives potentially valuable and interesting.

One problem I see is that lots of funders would hate this: from their perspective, it creates a sort of tax on their donation. Instead of the whole donation going to whatever new thing they'd want to fund, a percentage gets set aside for current employees. I think this is part of the reason (per Jared's smart reply) that grantwriting commissions are looked down upon in the industry.

Another problem could be that lots of donors want to feel like nonprofit employees are not motivated by money, and implying otherwise could make the nonprofit unattractive.

I think the broader principal-agent issue is that funding and results are orthogonal to one another (probably the central problem for nonprofits generally), so compensating based on funds raised incentivizes employees to pursue flashy or unfairly charismatic projects, overpromise, and embellish or lie about results. (Though to be clear, nonprofits/nonprofit fundraisers already face these incentives.)

One analogous idea I've noodled around with a bit is results-based bonuses: you set a goal, a probability of success, and a dollar figure for achieving the goal, and you set aside a pool of money such that if the employee achieves the result, they receive the amount it's worth divided by the estimated probability of success. Using my job as an example: if 1Day Sooner's goal is to have a 50% chance of our work being the but-for cause of saving ~4 million DALYs by 2030, you could set aside $100K of my compensation each year, and if we achieved the goal, I'd receive 2x that (see the sketch below). One problem is that a lot of results take a long time to materialize (and are hard to prove/calculate reliably), and you might have to pay too many premiums (to adjust for risk + the time value of money) to make it worth it.
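A minimal sketch of that payout rule, using the illustrative figures above (the function and numbers are hypothetical, not an actual 1Day Sooner compensation scheme):

```python
# Results-based bonus sketch: the realized payout is risk-adjusted by
# dividing the set-aside by the estimated probability of success, so the
# expected payout equals the amount set aside. All figures illustrative.

def bonus_payout(set_aside: float, p_success: float, achieved: bool) -> float:
    """Pay set_aside / p_success if the goal is achieved, else nothing."""
    return set_aside / p_success if achieved else 0.0

set_aside = 100_000   # annual compensation held back (assumed figure)
p_success = 0.50      # estimated probability of hitting the goal

print(bonus_payout(set_aside, p_success, achieved=True))   # 200000.0 (2x)
print(bonus_payout(set_aside, p_success, achieved=False))  # 0.0
# Expected value: 0.5 * 200K + 0.5 * 0 = 100K, matching the set-aside.
```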

Pathways to impact for forecasting and evaluation

I'm very much not a visual person, so I'm probably not the most helpful critic of diagrams like this. That said, I liked Ozzie's points (and upvoted his post). I'm also not sure what the proper level of abstraction should be for the diagram -- probably whatever you find most helpful.

A couple of preliminary and vague thoughts on substantive use cases of forecasting, which, insofar as they currently appear in the diagram, do so in a somewhat indirect way:

  • Developing Institutionally Reliable Forecasts: This seems to fall under the "track record" and maybe "model of the world" boxes, but my idea here is that if you can develop a track record of accurate forecasting for some system, you can use that system as part of an institutional decision-making process when it forecasts a result at a certain probability. Drug development would be a good example: the FDA could have a standard of authorizing any drug that a reliable forecaster gave a >95% probability of licensure (or of some factual predicate like efficacy); a toy version of that decision rule is sketched after this list. Another set of applications could be in litigation (e.g. using reliable forecasters in a contract-arbitration context). The literature around prediction markets probably has a lot of examples of use cases of this type. It might be difficult, though, to create a forecasting system robust to the problem of becoming contaminated and gamed when tied to an important outcome.
  • Predictive Coding: There's an idea in neuroscience that perception involves creating a predictive model of the world and updating the model in response to errors reported by sensory data. Some people (like Karl Friston and Andy Clark) argue that action and perception are largely indistinct and run on the same mechanism -- so you act (like lifting your hand) via your brain predicting you will act. SlateStarCodex has a good summary of this. Developing more fine-grained, reliable, and publicly legible forecasting machinery may have useful applications in policy-making, perhaps by allowing the construction of a rudimentary version of something analogous. Some of the ideas under my first bullet might fit this concept, but you could also imagine mechanisms that target the reliable forecast itself (using the forecast as a correlate of whatever actual change you're trying to achieve in the world). Another way of thinking of this might be using forecasting to develop a more sophisticated perceptual layer in policy-making.
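As promised in the first bullet, here's a toy sketch of that authorization-threshold decision rule; the threshold, the track-record weighting, and all the numbers are hypothetical assumptions for illustration, not anything from the post:

```python
# Toy decision rule: authorize when the track-record-weighted average of
# reliable forecasters' probabilities clears a preset threshold.
# Weights, threshold, and forecasts are all hypothetical.

def weighted_forecast(forecasts: list[tuple[float, float]]) -> float:
    """forecasts: (probability, track_record_weight) pairs."""
    total_weight = sum(w for _, w in forecasts)
    return sum(p * w for p, w in forecasts) / total_weight

def authorize(forecasts: list[tuple[float, float]], threshold: float = 0.95) -> bool:
    """Apply the >95%-probability-of-licensure standard described above."""
    return weighted_forecast(forecasts) >= threshold

# Three forecasters, weighted by (hypothetical) historical accuracy:
panel = [(0.97, 0.9), (0.96, 0.8), (0.94, 0.6)]
print(round(weighted_forecast(panel), 3))  # 0.959
print(authorize(panel))                    # True
```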
EA Should Spend Its “Funding Overhang” on Curing Infectious Diseases

Not opposed to EA anti-aging research, but my intuition is that targeting infectious disease allows for more rapid iteration and proof of concept, because the solutions are easier to publicly demonstrate in the short term. So I think it provides better training for EA methods (which could in turn enhance aging research).

 

Also, infectious disease disproportionately affects poor people, so there's more likely to be a market failure and an undersupply of resources than with aging, which affects rich and poor proportionally.
