mhendric🔸

474 karma · Joined

Comments (64)

Regarding skeptical optimism, how about:
Cautious Optimism
Safety-conscious optimism
Lighthearted skepticism
Happy Skepticism
Happy Worries
Curious Optimism
Positive Skepticism
Worried Optimism
Careful Optimism
Vigilant Optimism
Vigilant Enthusiasm
Guarded Optimism
Guarded Enthusiasm
Mindful Optimism
Mindful Enthusiasm

Just throwing a bunch of suggestions out in case one of them sounds good to your ear.

I really liked this post, and found the second half of it especially insightful.

I love your blog; it reliably provides the highest-quality EA criticism I have come across. I have shifted my view on a handful of issues based on it.

It may be helpful for non-philosophy readers to know that the journals these papers are published in are very impressive. For example, Ethics (which published the Mistakes in Moral Math of Longtermism paper) is the most well-regarded ethics journal I know of in our discipline, akin to what Science or Nature would be for a natural scientist.

I am somewhat disheartened that those papers did not gain visible uptake from key players in the EA space (e.g. 80K, Open Phil), especially since they were published at a time when most EA organizations strike me as moving strongly towards longtermism/AI risk. My sense is that they were briefly acknowledged, then simply ignored. I don't think the same would have happened with e.g. a Science or Nature paper.

To stick with the Mistakes in Moral Math paper, for example: I think it puts forward a very strong argument against the very few explicit numerical models of EV calculations for longtermist causes. A natural longtermist response would be to either adjust those models or present new ones, incorporating factors such as background risk that are currently not factored in. I have not seen any such models. Rather, I feel like longtermist pitches often get very handwavey when pressed on explicit EV models that compare their interventions to e.g. AMF or GiveDirectly. I take it to be a central pitch of your paper that it is very bad that we have almost no explicit numerical models, and that those we do have neglect crucial factors. To me, it seems like that very valid criticism went largely unheard. I have not seen new numerical EV calculations for longtermist causes since publication. This may of course be a me problem - please send me any such comparative analyses you know of!

I don't want to end on such a gloomy note - even if I am right that these criticisms are valid and that EA fails to update on them, I am very happy that you do this work. Other critics often strike me as arguing in bad faith or being fundamentally misinformed - it is good to have a good-faith, high-quality critique to discuss with people. And in my EA-adjacent house, we often discuss your work over beers and food and greatly enjoy it haha. Please keep it coming!

I am organizing a fundraising competition between Philosophy Departments for AMF.
You can find it here: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9191
Previous editions have netted (ba-dum-tss) roughly $40,000:
https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9189
Any contributions are very welcome, as is sharing the fundraiser. A more official-looking announcement is on Daily Nous, a central blog in academic philosophy; people have found this ideal for sharing via e.g. department listservs.
https://dailynous.com/2024/12/02/philosophers-against-malaria-a-fundraising-competition/

These are relatively low-effort to set up - I spend maybe 10-20 hours a year on them. If you are interested in setting up something similar for your discipline or social circles, feel very welcome to reach out for help.

I don't find this convincing. It seems to me that updating that one line on your website should not take longer than e.g. writing this comment. Why do you think it involves a significant tradeoff?

Are you familiar with Probably Good and their 1-on-1 career advising? This seems like a natural fit!

Thanks both, that's exactly what I meant to be asking.

I understand! Out of curiosity, does whether the organization wants to stay anonymous factor into the decision in any way?

Great to hear the second round was successful. Given that an anonymous AI org is taking up half of the budget, I wonder what the overall approach of the org is, what makes you think you're the best-suited funder for it, and what reasons led to granting anonymity to the organization. If there's anything you'd be willing to share on any of these, it'd be greatly appreciated!
