Aaron__Maiwald

Comments

What actually is the argument for effective altruism?

I actually think more is needed.

If “it's a mistake not to do X” means “doing X is in alignment with the person's goals”, then I think there are a few ways in which the claim could be false.

I see two cases where you want to maximize your contribution to the common good, but it would still be a mistake (in the above sense) to pursue EA:

  1. You are already close to optimal effectiveness, and the gain from additional EA research is so small that you would maximize your impact by instead using that time to earn money and donate it, or to have a direct impact.
  2. Pursuing EA causes you to miss another goal, or a set of goals, which you value at least as much in total.

If that's true, then we need to narrow the scope of the conclusion very much. I estimate that the fraction of people who care about the common good and for whom Ben's claim holds is in [1/100000, 1/10000]. So in the end the claim can be made for hardly anyone, right?

What actually is the argument for effective altruism?

I'd say that pursuing the project of effective altruism is worthwhile only if the opportunity cost of searching, C, is justified by the amount of additional good, A, that you do as a result of searching for better ways to do good rather than going by common sense. It seems to me that if C ≥ A, then pursuing the project of EA wouldn't be worth it. If, however, C < A, then it would be worth it, right?

To be more concrete, let us say that the difference in value between the commonsense distribution of resources to do good and the ideal one is only 0.5%. Let us also assume it would cost you only a minute to find the ideal distribution, and that the value of spending that minute in your commonsense way is smaller than that 0.5% increase. Surely it would still be worth seeking the ideal distribution (i.e., engaging in the project of EA), right?
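
To make the comparison explicit (the notation here is mine, not Ben's): let $V$ be the value produced under the commonsense allocation, $A$ the additional value gained by switching to the ideal allocation, and $C$ the opportunity cost of the search. Then

$$\text{searching is worth it} \iff A > C, \qquad \text{with } A = 0.005\,V \text{ in the example above},$$

so as long as one minute of commonsense do-gooding produces less than $0.005\,V$ of value, seeking the ideal distribution pays off.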

How you can contribute to the broader EA research project

Do you still recommend these approaches, or has your thinking shifted on any of them? Personally, I'd be especially interested in whether you still recommend "Produce a shallow review of a career path few people are informed about, using the 80,000 Hours framework."

Making decisions under moral uncertainty

Hey, thank you very much for the summary!

I have two questions:

(1) How should one select which moral theories to use in one's evaluation of the expected choiceworthiness of a given action?

"All" seems impossible, supposing the set of moral theories is indeed infinite; "whatever you like" seems to justify basically any act by just selecting or inventing the right subset of moral theories; "take the popular ones" seems very limited (admittedly, I dont have an argument against that option, but is there a positive one for it?)

(2) How should one assign probabilities to moral theories?
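
For context on the quantity both questions feed into: as I understand the maximise-expected-choiceworthiness approach (MacAskill and Ord), the expected choiceworthiness of an action $a$ is

$$EC(a) = \sum_i p(T_i) \cdot CW_{T_i}(a),$$

where $p(T_i)$ is one's credence in moral theory $T_i$ and $CW_{T_i}(a)$ is how choiceworthy $T_i$ rates $a$ (the notation is my reconstruction, not the summary's). Question (1) asks which $T_i$ enter the sum; question (2) asks where the $p(T_i)$ come from.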

I realise that these are probably still controversial issues in philosophy, so I don't expect a settled solution. Rather, any (even speculative) ideas on how to resolve them would be great!