nathan98000 · 84 karma · Joined Jun 2017
Comments (40)

As someone who leans toward hedonistic utilitarianism, I would agree with this impression. The post seemed to assert that utilitarianism must be true and to dismiss alternative intuitions without offering any corresponding argument.

I would also add that there are many different flavors of utilitarianism, and it's unclear which, if any, is the correct theory to hold. This podcast has a good breakdown of the possibilities.

https://clearerthinkingpodcast.com/episode/042

I think this post makes many correct observations about the EA movement, but it draws the wrong conclusions.

For example, it's true that EAs sometimes use uncommon phrases like "tail-risk" and "tractability". But that's because these are important concepts! Heck, even "probability" might scare off many people. But it would be a mistake to water down one's language just to attract as many people as possible.

More generally, the EA movement isn't trying to grow as fast as possible. It's not trying to get everyone's attention. Instead, it's trying to specifically reach those people who are sympathetic to an evidence-based approach to helping others. Bare emotional appeals risk attracting the wrong kind of people and misrepresent what EA is all about.

There's a place for stories to motivate and inspire, but if they're divorced from engagement with data and careful reasoning, EA stops being EA.

Yes, I think it would be best to hold off. I think you'll find MacAskill addresses most of your concerns in his book.

I think you keep misinterpreting me, even when I make things explicit. For example, the mere fact that X is good doesn’t entail that people are immoral for not doing X.

Maybe it would be more productive to address arguments step by step.

Do you think it would be bad to hide a bomb in a populated area and set it to go off in 200 years?

If you agree we should help those who will have moral status, that's it. That's one of the main pillars of longtermism. Whether or not present and future moral status are "comparable" in some sense is beside the point. The important point of comparison is whether they both deserve to be helped, and they do.

Longtermists think we should help those who do (or will) have moral status.

No, it's because future moral status also matters.

FWIW this article has a direct account of persistence hunting among the Tarahumara. It also cites other accounts of persistence hunting in the Kalahari and among the Saami.

I think they will have moral status once they exist, and that's enough to justify acting for the sake of their welfare.

> I disagree with the certainty you express, I'm not so sure, but that's a separate discussion, maybe for another time.

I haven't expressed certainty. It's possible to expect X to happen without being certain that X will happen. For example: I expect there to be another pandemic in the next century, but I'm not certain of it.

> I assume then that you feel assured that whatever steps you take to prevent human extinction are also steps that you feel certain will work, am I right?

No, this is incorrect for the same reason as above.

The whole point of working on existential risk reduction is to decrease the probability of humanity's extinction. If there were already a 0% chance of humanity dying out, then there would be no point in that work.
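To put that in numbers, here's a toy expected-value calculation (all figures are made up, purely to illustrate the logic): the value of risk-reduction work scales with how much probability it shaves off, so at a 0% baseline risk there is nothing left to shave off.

```python
# Toy expected-value sketch: reducing extinction risk only adds value
# if the baseline risk is nonzero. All numbers here are hypothetical.

def expected_value(p_extinction: float, future_value: float) -> float:
    """Expected value of the future, given a probability of extinction."""
    return (1 - p_extinction) * future_value

V = 1.0  # value of a flourishing future, normalized to 1

baseline = expected_value(0.02, V)    # 2% extinction risk this century
reduced = expected_value(0.019, V)    # intervention cuts risk to 1.9%
print(f"Gain from intervention: {reduced - baseline:.4f}")  # 0.0010

# With a 0% baseline risk, the same effort gains nothing:
print(expected_value(0.0, V) - expected_value(0.0, V))      # 0.0
```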
