
Moya

30 karma · Joined Feb 2023

Comments (11)

Whoa, thank you so much for this post - the topic of outreach and aligning people's moral intuitions seems incredibly important, as so many of the problems we are facing could be solved quite easily if everyone agreed to genuinely cooperate!

We do feel that two particular topics could have gotten a bit more attention (or maybe they are in there and we just didn't notice?):

  1. Maybe you could go into a bit more detail about how we can actually get people to enjoy the stories we write, and what is necessary to make them palatable and reach a wider audience?
  2. Especially regarding the propaganda accusation (FAQ part, question 2) - maybe also talk about counterfactuals, and how any story that does not convey positive moral values takes up space that stories conveying them could have filled? (After thinking about this for a bit, we feel it is actually quite a harmful action to write a story that gets read a lot and takes up a lot of space, but does not teach valuable lessons...)

Here is my criticism in more detail:

Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. ... Every decent person should share the basic goals or values underlying effective altruism.

It starts here in the abstract - writing this way immediately sounds condescending to me, as it makes disagreement with EA sound like an entirely unreasonable affair. So this devalues the position of anyone opposing EA, rather than honestly engaging with their criticisms.

Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. ... If it does not, then by their own lights they have no basis for thinking it a better option.

On systemic change: The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness: we know that there must be higher hills - higher maxima - out there, but we do not know how to get there, and any particular systemic change could just as easily make things worse. But if EA principles told us to only ever sit at this local maximum and never even attempt to go anywhere else, then those would not be principles I would be happy following.
So yes, people who support systemic change often do not have the mathematical basis to argue that it will necessarily be a good deal - but that does not mean there is no basis for thinking that attempting it is a good option.
Or, more clearly: By not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.

Rare exceptions aside, most careers are presumably permissible. ... This claim is both true and widely neglected. ... Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career.

On earning to give: Again, the arguments are very simplified here. A career being permissible or not is not a binary choice, true or false. It is a gradient, and it shifts over time, depending on how what you are asked to do on the job changes, and on how your own and society's ambient morality evolves. So the question is not "among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?" but "what tradeoff should I be willing to make between the career being more morally iffy and the positive impact I can have by donating from a larger income baseline?". Additionally, if you still just donate e.g. 10% of your income but your income is higher, there is also a larger amount of money you do not donate. Counterfactually, you might use that money to buy things you do not actually need, which then have to be produced and shipped and so on, in the worst case making the world a worse place for everyone. So even just "more money = more good" is not a simple truth that just holds.
And despite all these simplifications, the sentence "This claim is ... true" just really, really gets to me - such binary language again completely sweeps any criticism, any debate, any nuance under the rug.

EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth ... Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.

On billionaire philanthropy: Yes, billionaires are capable of doing immense good, and again, I have not seen anyone actually arguing against that. The most common arguments I am aware of against billionaire philanthropists are (1) that billionaires just shouldn't exist in the first place: yes, they have the capacity to do immense good, but also the capacity to do immense harm, and no single person should be allowed to have the capacity to do so much harm to living beings on a whim. And (2) that billionaires can pay people to advise them on how to best make it look like they are doing good when actually they are not (such as creating huge charitable foundations and equipping them with lots of money, while those foundations then just re-invest that money into projects run by companies the billionaires hold shares in, etc.)

So that is what I mean by "arguing against strawpeople" - claims are simplified and/or misrepresented to the point that they no longer accurately represent the actual positions of EAers, or of the people who criticise them.

I understand that my vague criticism was unhelpful; sadly, when posting I did not have enough time to really point out specific instances, and I thought it would still be more valuable to mention it in general than to not write anything at all.

I will try to find the time now to write down my criticisms in more detail, and once I am ready I will comment on the question from Dr. David Mathers above, as he also asked for them (and by commenting here and there, you both will be notified. Hooray.)

When I saw the title of this post I really wanted to like it, and I appreciate the effort that has gone into it all so far.

Unfortunately, I have to agree with Paul - both the post and the paper draft itself read as pretty weak to me. In many instances, it seems that you argue against strawpeople rather than engaging with criticism of EA in good faith, and even worse, the arguments you use to counter the criticism boil down to what EA advocates for "obviously" being correct. (You wrote in the post that the arguments are very much shortened because there is just so much ground to cover, but I believe that if an argument cannot be made in a convincing way, we should either spend more time making it properly or drop the discussion entirely, rather than just vaguely pointing towards something and hoping for the best.)

Also, you seem to defend not all of EA, but whatever part of EA is most easily defensible in each particular paragraph - such as arguing that EA does not require people to always follow its moral implications, only sometimes, which some EAers might agree with, but certainly not all.


I think this is important, and yet, I feel the opposite also needs to be pointed out - please don't disengage or abstain from voting just because something was shared by a friend, a person who inspires you, or whomever.

Basically, I see people being on a slippery slope towards the failure mode of disengaging from voting out of fear that it might count as vote brigading, given how they were notified of the post or of a particular comment - but not having their honest opinion reflected in the votes would be just as bad.

So yeah, don't vote brigade, but if you do genuinely feel something should be up- or downvoted on its own merit, then please do so, no matter how you ended up in a place to see this content.

Thank you for this post!

I really like the combination of emotional access through these women's stories with the intellectual facts of the studies. Often I only see people focus on the emotional side (and then I am unsure whether I should really believe it factually, or whether it is just cherry-picked anecdotal stuff), or only on the intellectual side, leaving out emotions entirely (which just lets a whole bunch of low-hanging potential motivation go to waste) - so combining the two is great!

Hi there :)

Yes indeed, burn events are based on the same principles as Burning Man, but each regional burn is a bit different, just based on who attends, how those people choose to interpret the (intentionally) vague and contradictory principles, etc. :)

Hi Mila,

Yeah, I am involved in the Darmstadt local group (when I have the time - many, many things going on.)

And wheee, would be glad to meet you too :)

Just regarding your last sentence: I disagree that it has any bearing whatsoever whether everyone else is excluding others' visions of the future or not.
No matter if everyone else is great or terrible - I want EA to be as good as it possibly can be, and if it fails on some metric it should be criticised and changed in that regard, no matter if everyone else fails on the same metric too, or not, or whatever.

Hi all,

Moya here from Darmstadt, Germany. I am a Culture-associated scientist, trans* feminist, poly, kinky, and a witch.
I got into LessWrong in 2016 and then EA in 2016 or 2017, I don't quite remember. :)

I went to the University of Iceland, did a Master's degree in Computer Science / Bioinformatics there, then built software for the European Space Agency, and nowadays I am a freelance programmer and an activist in the Seebrücke movement in Germany, as well as in other activist groups. I also help organize local burn events (some but not all of them being FLINTA*-exclusive safer spaces.)

Silly little confession: it took me so many years to finally sign up to the EA Forum because my password manager is not great and I just didn't want to bother opening it and storing yet another password in there. But hey, I finally overcame that incredibly-tiny-in-hindsight obstacle after just a bit over half a decade and signed up. \o/
