RedStateBlueState

456 · Joined Apr 2022

Comments (39)

I mean, of course Effective Altruism is striving for perfection (every movement should), but this is very different from thinking that EA has already achieved perfection. I think you listed a couple of things that I had read as EA self-criticism pre-FTX collapse, suggesting that EAs were aware of some of the potential pitfalls of the movement. I just don't think many people thought EA exceptional in the "will avoid the common pitfalls of movements" sense implied by the article.

This article makes one specific point I want to push back on:

It’s not that E.A. institutions were necessarily more irresponsible, or more neglectful, than others in their position would have been; the venture capitalists who worked with Bankman-Fried erred in the same direction. But that’s the point: E.A. leaders behaved more or less normally. Unfortunately, their self-image was one of exceptionalism.

Anyone who has ever interacted with EA knows that this is not true. EAs are constantly, even excessively, criticizing the movement and trying to figure out what big flaws could exist in it. It is true, a bit strange, and a bad sign that these exercises mostly failed to highlight this particular flaw: reliance on donors who may use EA to justify unethical acts. The exception is the set of reforms proposed by Carla Zoe Cremer, cited by the article, which were never adopted. Yes, some blame can be assigned for never adopting those reforms, but the sheer quantity of EA criticism, and of other potential fault points it identified, suggests to me that it's really hard to figure out a priori what will make a movement fail. EA is not perfect, nobody has ever claimed as much, and to some extent I think this article is disingenuous for implying otherwise. "People focused on doing the most good" =/= "moral saints who think they are above every human flaw".

What is a statement you'd want instead of that?

I can't think of a way to phrase a similar statement that wouldn't come off as horrendous when read by the public. Even if this is mostly an EA-facing post (and I'm not sure that it is), the public is inevitably going to find it, and if it says anything like "fraud may be warranted in some special circumstances", Rethink Priorities is in big trouble.

If you have a better way of phrasing it that is more accurate and doesn't come off wrong, I'd be glad to hear it. Otherwise, I think this is a good statement. Most EAs will probably read past the word "unequivocally", and I would encourage further discussion of the tolerable bounds of behavior in pursuit of EA ideals, but this statement is not the place for that.

I mean, he said "the part I most regret was filing for bankruptcy" (i.e., the moment he stopped hurting people and acknowledged his poor actions), said that he has spent his entire career lying about his ethical beliefs, and in general showed absolutely no sign of remorse for the people he had hurt. This is borderline indistinguishable from the logic that horrific dictators use to justify themselves, and he did it all while being a well-known figure in a movement built around doing good! I don't know if sociopath is exactly the right word, but it is definitely the sign of someone who doesn't care about other human beings.

I really don't understand how you could read that whole interview and still see SBF as incompetent rather than a malicious sociopath. I know this is a very un-EA-forum-like comment, but I think it needs to be said.

First, a disclaimer: I’ve never gotten anywhere close to interacting with SBF personally; I’m very much an outsider to this situation. However, from everything I have read, I think it’s pretty ridiculous to suggest that EA wasn’t the main reason SBF tried so hard to maximize profit (poorly, I might add, but it seems like that was his goal) to the point of committing fraud. As far as I understand, EA was SBF’s primary guiding ideology; it is why he went down the career path of Jane Street and then started his own companies. This post seems overly reliant on the fun fact that SBF spent more on e-sports naming rights than he gave in EA donations to show that Sam didn’t actually care about EA that much. But these are two completely separate things! E-sports naming rights are just a means of advertising, with the goal of making FTX more money, which would eventually allow SBF to donate more to EA. I think there’s also decent evidence that SBF was looking to ramp up donations in the future, as Effective Altruism continued to grow and became able to absorb more funding. Once you take out this fun fact about SBF’s then-current EA spending, the whole argument kind of falls apart.

[This comment is no longer endorsed by its author]

I’ve asked this question on the forum before without getting a reply: do the people doing grant evaluations consult experts in their choices? For example, do global development grant-makers consult economists before giving grants? Or are these grant-makers just supposed to have up-to-date knowledge of research in the field?

I’m confused about the relationship between traditional topic expertise (usually attributed to academics) and EA cause evaluation.

In other domains, when we combine different metrics into one frankenstein metric, it is because those different metrics are all partial indicators of some underlying measure we cannot directly observe. The whole point of ethics is that we are trying to directly describe this underlying measure of "good", so it doesn't make sense to me to create some frankenstein view.

The only instance where I would see this being OK is in the context of moral uncertainty, where we're saying "I believe there is some underlying true view but I don't know what it is, so I will give some weight to each of several plausible theories". Which maybe is what you're getting at? But in that case, I think you need to believe that each of the views you are averaging over could be approximately true on its own, which IMO really isn't the case with a complicated utilitarian formula, especially since we know there is no formula out there that will give us everything we desire. Though this is another long philosophical rabbit hole, I'm sure.
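For concreteness, the standard way this kind of weighting gets formalized is the "maximize expected choice-worthiness" setup (my own sketch of the generic framework, not something from the post I'm replying to):

\[
V(a) \;=\; \sum_{i} p_i \, v_i(a),
\]

where \(p_i\) is your credence in moral theory \(i\) and \(v_i(a)\) is the choice-worthiness of action \(a\) according to that theory. My objection above is that this sum is only meaningful if each \(v_i\) could plausibly be the true measure of "good" on its own.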

There's a proof showing that any utilitarian ideology must accept either the repugnant conclusion or the sadistic conclusion (or anti-egalitarianism, preferring a less equal society), so you can't cleverly avoid both conclusions with some fancy math. What's more, any fancy view you create will be in some sense unmotivated: you just came up with a formula that you like, but why would such a formula be true? Totalism and averagism seem to be the two most interpretable utilitarian ideologies, with totalism caring only about total pain/pleasure (and not about who experiences it) and averagism being the same except population-neutral, not favoring a larger population unless it has higher average net pleasure. Anything else is kind of an arbitrary view invented by someone who is too into math.
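To spell out the contrast (my own notation, not from the thread): for a population of \(n\) people with welfare levels \(u_1, \dots, u_n\), the two views rank outcomes by

\[
W_{\text{total}} = \sum_{i=1}^{n} u_i
\qquad\text{versus}\qquad
W_{\text{average}} = \frac{1}{n} \sum_{i=1}^{n} u_i .
\]

Totalism counts any extra positive-welfare life as an improvement; averagism counts an extra life as an improvement only if its welfare is above the existing mean.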

Somewhat of a tangential question, but what is the point of making EAGx region-specific? If these are the only events with a relatively low bar to entry, why not let people attend any of them, instead of making them wait for one to come along near where they live? Without this restriction, I could easily see EAGx solving most of the problems Scott brings up with EAG.
