
rickyhuang.hexuan
63 karma · Joined Oct 2022

Comments (5)

I have a general disdain for criticizing arguments as ivory-tower thinking without engaging with the content itself. I think it is an ineffective way of communicating, and it leaves room for quite a lot of non-central fallacies. The same kinds of ivory-tower thinking you identified have also been important in promoting moral progress through careful reflection. I don't think considering animals as deserving moral attention is inherently an insulting position. Perhaps a better way of approaching this question would be to actually consider whether or not this trade-off is worth it.

P.S. I don't think the post called for GiveWell to stop giving. The research questions you identified are important, decision-relevant, open-ended questions that would aid GiveWell's research. Perhaps not all of them can be solved, but that doesn't mean we shouldn't consider devoting a reasonable amount of resources to researching them. I'm a firm believer in worldview diversification. The counterfactual probably isn't that GiveWell will stop helping someone dying of malaria, but that they may lower their recommendation for said program, or offer recommendations to make existing interventions more effective in light of these new moral considerations.

Yep! I think that would be really useful. I was wondering if anyone had compiled a list relevant to this, which I think would be valuable for like-minded students like me.

The reasons why I want to do a Philosophy PhD are not directly EA-related. I enjoy thinking about complex problems, and I handled graduate-level epistemology well as an undergrad.

I'm still figuring out whether I should work on alignment or on other areas of philosophy.

I think you are asking two questions with this framing: (1) a descriptive question about whether this divide currently exists in the EA community, and (2) a normative question about whether this divide should exist. It is useful to separate the two, as some of the comments seem to use responses to (2) as responses to (1). I don't know if (1) is true. I haven't noticed such a divide in the EA community, but I'm willing to have my mind changed on this.

On (2), I think this can be easily resolved. I don't think we should (and I don't think we can) have non-epistemic* reasons for belief. However, we can have non-epistemic reasons for why we would want to act on a certain proposition. I'm not really falling into either "camp" here, and I don't think the question requires us to fall into any "camp". There is a rich literature in epistemology on this.

*I think EAs sometimes use the word "epistemic" differently than what I conventionally see in academic philosophy, but this comment is based on the conventional interpretation of "epistemic" in philosophy.

[This comment is no longer endorsed by its author]

Thanks for the comment! I really enjoyed reading HLI's work on WELLBYs.

I personally think GiveWell fell into the epistemic trap of prioritizing what currently functions (even when it rests on an unjustified belief) over the potential counterfactual impact of establishing something new. I think they know their moral weights are bad, but they are unaware of just how bad they currently are.