samuel
141 · Joined Mar 2022
Posts: 1 · Comments: 20
"My point is just that this nightmare is probably not one of a True Sincere Committed EA Act Utilitarian doing these things" - I agree that this is most likely true, but my point is that it's difficult to suss out the "real" EAs using the criteria listed. Many billionaires believe that the best course of philanthropic action is to continue accruing/investing money before giving it away. 

Anyways, my point is more academic than practical; the FTX fraud seems pretty straightforward, and I appreciate your take. I wonder if this forum would be having the same sorts of convos after Thanos snaps his fingers.

I don’t [currently] view EA as particularly integral to the FTX story either. Blaming ideology usually isn’t fruitful, because people can contort just about anything to suit their own agendas. It’s nearly impossible to prove causation; we can only gesture at it.

I’m nitpicking here, but is spending money on naming rights really evidence that SBF wasn’t operating under a nightmare utilitarian EA playbook? It’s probably evidence that he wasn’t particularly good at EA, though one could argue it was the toll for further increasing earnings to eventually give away. It’s clearly an ego play, but other real businesses buy naming rights too, for business(ish) reasons, and some of those aren’t frauds… right?

I nitpick because I don't find it hard to believe that an EA could also 1) be selfish, 2) convince themselves that the ends justify the means, and 3) combine 1 & 2 into an incendiary cocktail of confused egotism and lumpy, uneven righteousness that ends up hurting people. I’ve met EAs exactly like this, but fortunately they usually lack the charm, know-how, and/or resources required to make much of a dent.

In general, I’m not surprised by the community's reaction. In the best-case scenario, it had no idea the fraud was happening (and looks a bit naïve in hindsight), and its dirty laundry is nonetheless exposed (it’s not so squeaky clean after all). Even if EA was only a small piece in the machinery that resulted in such a [big visible] fraud, the community strives to do *important* work, and it feels bad for potentially contributing to the opposite.

Thanks for the feedback, I appreciate it! SBF has clearly been interested in EA for a long time, but taking him seriously as a thought leader is pretty new. @donychristie mentioned that he was an early poster child of earning-to-give, which I also vaguely remember, but his elevation in status is a recent phenomenon.

Regardless, my main point is that EA should be sensitive to the reputation of its funders. Stuff like this feels off even if it may come from a well-intentioned place.

I was honestly surprised by how quickly SBF was "platformed" by EA (but not actually surprised - he was a billionaire shoveling money in EA's direction). One day I looked up and he was everywhere: on every podcast I follow, fellow EAs quoting him, one EA telling me how much they wanted to meet his brother... it felt unearned/uncanny. For me, a main takeaway is that the community should be more cautious about the partners it aligns with, and should also build more resilient infrastructure to mitigate blowback when this stuff happens (it'll happen again; it always does with wealthy donors). When the major consultancies recently started getting flak for unsavory clients, they spun up teams to assess the ethical aspects of contracts and started turning down business that didn't align with certain standards.

FYI I'm not a "de-platforming" person, just felt like SBF immediately became a highly visible EA figure for no good reason beyond $$$.

Interested to hear why people are downvoting this comment... would love to engage in a discussion!

I wanted to keep the meat of my argument above as concise as possible, but I also want to mention that EAs largely fail to grasp 1) what politics does to politicians and 2) the unknowable, cascading, massive impacts of political decisions. Politicians change their minds, trade votes, compromise, and make decisions based on reelection. And the decisions they make reverberate. None of this is predictable or measurable, so it's hard to imagine how to classify it as effective altruism.

I appreciate you laying out the specifics here! As someone who grew up in/around politics, the ineffectiveness of a freshman member of Congress feels obvious. I want to amplify the concern about politics & EA.

EA should seriously consider drawing the line at financial support. Some EAs want EA-aligned candidates to run, and that generally feels like a good idea. Rational politicians who care about important issues are better, right? They know what's best? Let's assume that's true, even if it's quite an assumption to make. Representatives vote on every bill, many of which have little to do with EA. How should we expect an EA candidate to vote on non-EA issues? If EA publicly and significantly backs a specific candidate, EA becomes at least a little culpable for all of that candidate's views, not just the EA ones. There's also no guarantee that a candidate will vote how they say they'll vote. And even if they do, that doesn't guarantee results, whether that's winning a vote or operationalizing a government program that proves effective. There's so much uncertainty here. How can we as EAs truly calculate return on investment in campaign politics? I don't think we can with any real accuracy. There's nothing wrong with supporting candidates that you like, but this seems to fall far short of what we typically expect in terms of evidence. It feels like informed voting, not EA.

Agree that running EA candidates may polarize issues that are refreshingly nonpartisan. This would be an own-goal of sizable consequence.

Politics is a high-leverage arena, so it's logical that EAs are attracted to it, especially now that there's money floating around. EA as a (mostly) nonpartisan movement has higher potential with less downside. Channeling the community's energy into lobbying and advocating for EA-aligned policy is straightforward, effective and transparent. "This strongly suggests that influencing current elected officials, rather than attempting to directly hold political power, plays more towards our strengths." I couldn't agree more.

+1 - Ecosystem services (and more generally, Earth systems) are infamously hard to pin down, which is why I often take any bottom-line analyses of climate change with gigantic grains of salt (in both directions). For example, there's currently a gold rush on technology to quantify the value of soil sequestration, forest sequestration, etc., and as far as I can tell, experts are still bickering over the basics of how to calculate these data with any accuracy. Those are just a few small pieces of a very, very large pie that is difficult to value. Perhaps the modeling takes these massive uncertainties into consideration, but I'm skeptical (and will have to do some research of my own).

Lots of good stuff here! I work in the climate change field, so I have some expertise here, though it's worth noting that I haven't spent my career comparing the risk climate change poses relative to the other big topics that concern EAs.

It's not surprising, given my biases, that I always grimace a little when EAs talk about climate. It's an easy target - lots of attention, tons of media hubbub, plenty of misinformed opinions and outright grifters, and of course, no direct existential threat. Hey look, here's an issue that most EAs care about that's already getting attention and talent, and if you run the numbers, according to our values... that's more than enough attention! So come work on an underserved issue like AI or pandemic risk! It makes sense to use climate as a point of contrast, and I'm glad that 80K Hours still takes climate change seriously. However, the framing could maybe be better; I'm not sure, and I need to think about it more.

One small qualm with an otherwise well-researched piece - the plastic bag bit is off. Disregarding the fact that plastic bag fees aren't just about carbon reductions, that graph shows that as long as you don't make reusable bags out of cotton, reusable bags do exactly what you want them to do. Now, that's not to say those policies are great - there are plenty of issues with them - but I don't find the example to be compelling evidence, especially because no policy demands cotton bags, nor do most people use them. I don't remember that Danish LCA being particularly good either.
