IanDavidMoss

Comments

Before There Was Effective Altruism, There Was Effective Philanthropy

Do you think that some of the people who would have been attracted to effective philanthropy in the past now just join effective altruism?

Some, sure. EA seems to be a lot more mainstream now than it was even 3-4 years ago, so that's probably the main reason.

While I think EP has been influential, I just didn't find the work from CEP and similar places as intellectually engaging as what EA puts out (or as important overall).

I think the main thing EA has going for it over EP is that it has a much better track record of taking ideas seriously. EP explored a lot of promising directions and anticipated a number of things that EA organizations ended up doing (e.g., incorporating expected value estimates into grantmaking). But in my view the key players, in trying to optimize for elite credibility at the same time as intellectual rigor, didn't give themselves enough weirdness points to work with. As a result, they both failed to pursue their best ideas to their logical conclusion and didn't do enough to distinguish between transformative ideas and mediocre ones.

Before There Was Effective Altruism, There Was Effective Philanthropy

I wasn't there at the very beginning, but I have followed the effective philanthropy "scene" since 2007 or so. My sense is that most EA community members aren't very knowledgeable about this whole side of institutional philanthropy, so I was pleasantly surprised to see the history recounted pretty accurately here! With that said, one quibble: the book you cited, Effective Philanthropy by Mary Ellen Capek and Molly Mead, is not one I'd ever heard of before reading this post; I think this is just a case of a low-profile resource happening to get good Google search results years later.

Here is a bit of additional background on the key players and some of their intersections, as I understand it:

  • The effective philanthropy movement was very much a child of the original dot-com boom in the late 1990s. While CEP is based in Boston, the scene was mostly driven by an earlier generation of West Coast tech magnates who were interested in bringing business concepts like results-based management to philanthropy. Education funding was viewed as a major priority and there were close ties to the charter school movement, which saw a number of influential organizations like KIPP incubated by funders looking to put these ideas into practice. With that said, CEP's Phil Buchanan has consistently pushed back against the idea that nonprofits are analogous to businesses, despite his own MBA from Harvard Business School.
  • The William and Flora Hewlett Foundation has an Effective Philanthropy Program and has been a major financial supporter of CEP for a long time. Hewlett's former president Paul Brest (2000-2012) pioneered the notion of "strategic philanthropy" which is closely related both in spirit and sociologically to this movement. Fun trivia note: Hewlett's Effective Philanthropy program was an early funder of GiveWell at the time when that organization was precariously situated (i.e., pre-Dustin & Cari).
  • Stanford Social Innovation Review was closely associated with this scene as well. With startup funding from Hewlett, I believe it was intended to be a Harvard Business Review for the social sector when it was founded in 2003. (HBR had published the original article on "venture philanthropy" in 1997.)
  • Some other influential funders include Mario Morino's Venture Philanthropy Partners and his Leap of Reason community, the Edna McConnell Clark Foundation, the Robin Hood Foundation, and REDF (which developed the social return on investment methodology, a form of cost-benefit analysis).

Over the past decade, the consensus among US-based staffed foundations has shifted hard against some of the technocratic premises that drove the effective philanthropy movement, in particular its emphasis on measurable outcomes and its tendency to invest lots of funder resources in strategy development. The Whitman Institute's work probably contributed in a minor way to that dynamic, but in my read a much stronger influence has been the growing emphasis on racial justice in the nonprofit sector since the dawn of the Black Lives Matter movement. Via a variety of pathways, including the widespread socialization of Tema Okun's work, that shift caused so-called "top-down" approaches like effective/strategic philanthropy to feel out of touch with the moment. One of the earliest points of tension was a series the National Committee for Responsive Philanthropy began publishing in 2009, "Philanthropy at its Best," which critiqued then-current foundation practices; Brest wrote a four-part essay responding to it in 2011. A parallel thread of critique comes from complexity science, via the argument that the wicked problems philanthropy is trying to solve are knotty enough that predicting the outcomes of philanthropic investments at any meaningful level of detail is a fool's errand, and that funders should therefore defer to the expertise of grantees wherever possible. On that front, this essay from one of the co-founders of FSG (a philanthropy consultancy closely associated with Harvard Business School and the early days of venture philanthropy) was particularly influential.

I don't believe there was one single event that caused the momentum around effective philanthropy to fall apart, but by 2016 or so it was clear that its peak was in the rear-view mirror. A particularly dramatic turn was when Hal Harvey, Paul Brest's co-author on their 2008 book Money Well Spent, which was written while Brest was still president of Hewlett, wrote an op-ed apologizing for his role in advancing strategic philanthropy. There's a much longer conversation to have about which of the critiques of effective philanthropy are worth attending to, to what extent, and how they relate to effective altruism, but I'm happy to see it pointed out that many of the topics EA is most concerned with have been discussed at length in other venues.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

I don't have any inside info here, but based on my work with other organizations I think each of your first three hypotheses is plausible, either alone or in combination.

Another consideration I would mention is that it's just really hard to interpret advocacy failures over a short time horizon. Given that your first try failed, does that mean the situation is hopeless and you should stop throwing good money after bad? Or does it mean that you meaningfully moved the needle on people's opinions and the next campaign is now likelier to succeed? It's not hard for me to imagine that in 2016-17 or so, having seen some intermediate successes that didn't ultimately result in legislation signed into law, OP staff might have held out genuine hope that victory was still close at hand. Or after the First Step Act was passed in 2018 and signed into law by Trump, maybe they thought they could convert Trump into a more consistent champion on the issue and bring the GOP along with him. Even as late as 2020, when the George Floyd protests broke out, Chloe's grantmaking recommendations ended up being circulated widely and presumably moved a lot of money; I could imagine there was hope at that time for transformative policy change. Knowing when to walk away from sustained but not-yet-successful efforts at achieving low-probability, high-impact results, especially when previous attempts have unknown correlations with the probability of future success, is intrinsically a very difficult estimation problem. (Indeed, if someone at QURI could develop a general solution to this, I think that would be a very useful contribution to the discourse!)
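To make the difficulty a bit more concrete, here is a minimal, purely illustrative sketch of the continue-or-walk-away question framed as a Bayesian expected-value comparison. Every number in it is invented (the prior, the campaign cost, the payoff), and it leans on exactly the assumption that's dubious in real advocacy, namely that each campaign is an independent draw with a fixed unknown success probability:

```python
# Illustrative only: invented numbers, and it assumes each campaign is an
# independent draw with a fixed (unknown) success probability, which is
# exactly the assumption that's dubious for real-world advocacy.

def expected_value_of_next_campaign(prior_alpha, prior_beta,
                                    failures_so_far, cost, payoff):
    """Posterior mean success probability after observed failures,
    times the payoff of a win, minus the cost of trying again."""
    posterior_alpha = prior_alpha                  # no successes observed yet
    posterior_beta = prior_beta + failures_so_far  # each failure shifts beta
    p_success = posterior_alpha / (posterior_alpha + posterior_beta)
    return p_success * payoff - cost

# e.g., a weak prior of ~20% success per attempt, three failed pushes so far,
# a cost of 50 per campaign, and a (made-up) payoff of 2000 for a win:
ev = expected_value_of_next_campaign(prior_alpha=1, prior_beta=4,
                                     failures_so_far=3, cost=50, payoff=2000)
print(f"EV of one more campaign (same units as cost/payoff): {ev:.0f}")
# A positive EV says "keep going" under these assumptions. The catch the
# comment points to is that the independence assumption hides whether past
# failures also moved the needle and raised the true success probability.
```

The point of the toy model is that the answer swings entirely on parameters (the correlation between attempts, the true base rate) that are precisely the things nobody knows.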

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

One context note that doesn't seem to be reflected here: in 2014, there was a lot of optimism about a bipartisan political compromise on criminal justice reform in the US. The Koch network of charities and advocacy groups had, to some people's surprise, begun advocating for it in conservative-libertarian circles, which in turn motivated Republican participation in negotiations on the Hill. My recollection is that Open Phil's bet on criminal justice reform funding was not just a "bet on Chloe," but also a bet on tractability: i.e., that a relatively cheap investment could yield a big policy win because the political conditions were such that only a small nudge might be needed. In retrospect this seems to have been an important miscalculation, as (unless I missed something) a limited-scope compromise bill took until the end of 2018 to pass, and I'm not aware of any other significant criminal justice legislation passing in that period. [Edit: while this is true at the national level, arguably there has been a lot of progress on CJR at state and local levels since 2014, much of which could probably be traced back to advocacy by groups like those Open Phil funded.]

This information strongly supports the "Leverage Hypothesis," which was cited by Open Phil staff themselves, so I think it ought to be weighted pretty strongly in your updates.

What’s the theory of change of “Come to the bay over the summer!”?

Separating out how important networking is for different kinds of roles seems valuable, not only for the people trying to climb the ladder but also for the people already on the ladder (e.g., maybe some of the folks desperate to find good people to own valuable projects that otherwise wouldn't get done should be putting more effort into recruiting outside the Bay).

What’s the theory of change of “Come to the bay over the summer!”?

I like this comment because it does a great job of illustrating how socioeconomic status influences the risks one can take. Consider the juxtaposition of these two statements:

(from the comment)

Maybe this is mainly targeted at undergraduate students, who are more likely to have a few months of time over the summer with no commitments. But in that case how do they have the money to do what is basically an extended vacation? Most students aren't earning much/any money. 

  • Maybe this is only targeted at students who have wealthy families willing to fund expensive adventures.

(from the OP)

It’s unclear from the outside:

  • How easy it is to start a project and how secure this is relative to starting ambitious things outside of EA. Funding, advisors, a high-trust community, and social prestige are available...Looking at what scale EA projects in the bay operate at disperses false notions of limits and helps shoot for the correct level of ambition

Even once you know these things intellectually, it’s hard to act in accordance with them before knowing them viscerally, e.g., viscerally feel secure in starting an ambitious project. Coming to Berkeley really helps with that.

Let's say that for a typical motivated early-career EA, there's a 60% chance that moving to the Bay will result in desirable full-time employment within one month. (I have no idea if that's the correct number; it's just a wild guess.) From an expected-value standpoint, that seems like a great deal! Of course you would do that! But for someone who's resource-constrained, that 40% chance of failure combined with the Bay's high living costs is a really big red flag. What happens if things don't work out? What happens is that you've now blown all your savings and are up shit creek, and if you didn't embed yourself in the community well enough during that time to get a job, you probably don't have enough good friends to help you out of a financial hole either. So do you make the leap? Without a safety net or an upfront commitment, it's so much harder to opt for high-upside but riskier pathways, and that in turn ends up affecting the composition of the community.
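A rough sketch of the asymmetry, in case it helps: the 60% figure is the one from above, and everything else (costs, payoffs, savings levels) is invented purely to show how a resource constraint can flip a positive-EV decision.

```python
# Illustrative only: the 60% figure comes from the comment above; the other
# numbers are made up to show how a savings cushion changes the decision.

def raw_expected_gain(p_job, gain_if_job, cost_of_trying):
    """Risk-neutral expected value of moving to the Bay for a month."""
    return p_job * gain_if_job - cost_of_trying

def worst_case_savings(savings, cost_of_trying):
    """What's left in the bank if it doesn't work out."""
    return savings - cost_of_trying

p_job = 0.60            # chance of landing a role within a month (from the comment)
gain_if_job = 30_000    # made-up value of the resulting job/opportunity
cost_of_trying = 5_000  # made-up month of Bay Area rent, flights, food

print(raw_expected_gain(p_job, gain_if_job, cost_of_trying))  # 13000: looks great

# But the downside looks very different depending on your cushion:
for savings in (100_000, 6_000):
    print(savings, "->", worst_case_savings(savings, cost_of_trying))
# 100000 -> 95000  (a well-off person shrugs off the 40% branch)
#   6000 ->  1000  (a resource-constrained person is nearly wiped out)
```

The expected value is identical in both rows; what differs is whether the bad branch is an annoyance or a catastrophe, which is exactly why the same offer selects for people who already have a safety net.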

Unflattering reasons why I'm attracted to EA

Really appreciate you writing this! Echoing others, I think many of these more self-serving motivations are pretty common in the community. With that said, some of them are much more potentially problematic than others, and the list is worth disaggregating along that dimension. For example, your comment about EA helping you not feel so fragile strikes me as prosocial, if anything, and I don't think anyone would have a problem with someone gaining hope from engaging in EA that their own suffering could be reduced.

The ones that I think are most worrying and worth pushing back on (not just for you, but for all of us in the community) are:

  • Affiliation with EA aligns me with high-status people and elite institutions, which makes me feel part of something special, important and exclusive (even if it's not meant to be)
  • EA is partly an intellectual puzzle, and gives me opportunities to show off and feel like I'm right and other people are wrong / I don't have to get my hands dirty helping people, yet I can still feel as or more legitimate than someone who is actually on the front line
  • It is a way to feel morally superior to other people, to craft a moral dominance hierarchy where I am higher than other people

The first one is tricky, as affiliation with high-status people and organizations can be instrumentally quite useful for achieving impact (indeed, in some contexts it's essential), and for that reason we shouldn't reject it on principle. And just as I think it's okay to enjoy money, I think it's okay to enjoy the feeling of doing something special and important! The danger is in having the status become its own reward, replacing the drive for impact. I feel that this is something we need to be constantly vigilant about, as it's easy to mistake social signals of importance for actual importance (aka LARPing at impact).

I grouped the "intellectual puzzle" and "get my hands dirty" items because I see them as two sides of the same coin. In recent years it feels to me that EA has lost touch a bit with its emotional core, which is arguably easier to bring forward in the contexts of animal welfare and global poverty than x-risk (and to the extent there is an emotional core to x-risk, it is mostly one of fear rather than compassion). I personally love solving intellectual puzzles and it's a big reason why I keep coming back to this community, but it mustn't come at the expense of the A in EA. I group this with "get my hands dirty" because I think for many of us, hard intellectual puzzles are our bread and butter and actually take less effort/provoke less discomfort than putting ourselves in a position to help people suffering right in front of us. I similarly see this one as a balance to strike.

The last one is the only one that I think is just unambiguously bad. Not only is it incorrect on its face, or at least at odds with what I see as EA's core values, but it is a surefire way to turn off people who might otherwise be motivated to help. And indeed there has been a history of people in EA publicly communicating in a way that came across to others as morally arrogant, especially in early years of the movement, which created rifts with mainstream nonprofit/social sector practice that are still there today (e.g.).

Revisiting the karma system

I think the issue is more that different users have very disparate norms about how often to vote, when to use a strong vote, and what to use it on. My sense (from a combination of noticing voting patterns and reading specific users' comments about how they vote) is that most are pretty low-key about voting, but a few high-karma users are much more intense about it and don't hesitate to throw their weight around. These users can then have a wildly disproportionate effect on discourse: if their strong vote is worth, say, 7 points, the gap they can create between one piece of content and another can be, and often is, a full 14 points.

In addition to scaling down the weight of strong votes as MichaelStJules suggested, another corrective we could think about is giving all users a limited allocation of strong upvotes/downvotes they can use, say, each month. That way high-karma users can still act in a kind of decentralized moderator role on the level of individual posts and comments, but it's more difficult for one person to exert too much influence over the whole site.
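To illustrate the kind of mechanism I have in mind, here's a hypothetical sketch of a monthly strong-vote budget. None of this reflects how the Forum's actual voting code works, and the cap and weight tiers are made up:

```python
# Hypothetical sketch of a "monthly strong-vote budget"; the cap and the
# weight tiers are invented, not the Forum's real rules.

from dataclasses import dataclass

MONTHLY_STRONG_VOTE_BUDGET = 10  # made-up cap

@dataclass
class Voter:
    karma: int
    strong_votes_used_this_month: int = 0

    def vote_weight(self, strong: bool) -> int:
        """Normal votes always count as 1; strong votes draw on a monthly budget."""
        if not strong:
            return 1
        if self.strong_votes_used_this_month >= MONTHLY_STRONG_VOTE_BUDGET:
            return 1  # budget exhausted: fall back to a normal vote
        self.strong_votes_used_this_month += 1
        return 7 if self.karma > 10_000 else 3  # made-up weight tiers

voter = Voter(karma=25_000)
print([voter.vote_weight(strong=True) for _ in range(12)])
# The first 10 strong votes land at weight 7; after that the budget is spent
# and further "strong" votes count as ordinary ones until the month resets.
```

The design intent is that heavy influence on any individual post stays possible, but sustained, site-wide weight-throwing by a single user gets rate-limited.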

Revisiting the karma system

Sorry if I'm being dense, but where is this 4-tuple available?

Revisiting the karma system

I would be in favor of eliminating strong downvotes entirely. If a post or comment is going to be censored or given less visibility, it should be because a lot of people wanted that to happen rather than just two or three.
