What AI safety work should altruists do? For example, AI companies are self-interestedly working on RLHF, so there's no need for altruists to work on it. (An even stronger claim: working on RLHF is actively harmful because it advances capabilities.)
Tweet-thread promoting Rotblat Day on Aug. 31, to commemorate the spirit of questioning whether a dangerous project should be continued.
On Rotblat Day, people post what signs they would look for to determine whether their work was being counterproductive.
How about May 7, the day of the German surrender?
I found the opening paragraph a bit confusing. Suggested edits:
An evaluation by the National Academy of Sciences estimates PEPFAR has saved millions of lives (PEPFAR itself claims 25 million).
The dominant conceptual apparatus economists use to evaluate social policies is comparative cost-effectiveness analysis, which focuses on a specific goal, like saving lives, and ranks policies by lives saved per dollar. By that measure, America's foreign aid budget could have been better spent on condoms and awareness campaigns, or even on fighting malaria and diarrheal diseases.
As others have already noted, these two claims are consistent.
I think non-longtermists don't hold these premises; rather, they object to longtermism on tractability grounds.