Woman // Christian // Canadian // Londoner // Married // Donates to Global Development // Works on Energy Policy
I've seen 80,000 Hours say something similar, but I don't actually think this provides counterfactual impact unless one of the things I listed above is also true.
If you're hired as a research assistant or programmer and someone else would have done the role equally well otherwise, you wouldn't have any counterfactual impact. It's only if the role wouldn't have been filled otherwise, or the other candidates wouldn't have taken the initiative to automate others' work, that you have a counterfactual impact.
<0.01% is definitely overconfident given that at that point I had already expressed misgivings, and we do not have 10,000 authors on the EA Forum.
(I'm not against my writing being podcastified in principle, but I want to check out any podcast services that broadcast my work in advance to decide if I'm happy to be associated with them. I'm strongly against someone else making that decision for me.)
Sounds good to me :) Thanks for posting!
Moving slowly with high quality makes more sense for people whose "product" is not optional, e.g. monopolies or public services.
You really don't want your water provider to upgrade quickly if it increases the chance you won't have water at all for a month.
They could be shorter. That said, using bullet points and quoted extracts the way you do definitely helps keep them readable (and skimmable). The ones I've seen are relevant, on topic and useful.
I've seen this reasoning a lot, where EA organisations assume they won't get sued because the only people whose data they're illegally using are other EAs, and as someone whose data has been misused with this reasoning, I don't love it!
That is not how copyright works Kat!
Are you getting authors' consent before turning their work into a podcast?
Phil Trammell argues the same thing (that patient philanthropists should look somewhat more favourably on earning to give than people who want to do good immediately) in this podcast.
The main counterargument was that the world might change in a way that makes donating in many years less valuable than donating right now. An obvious example would be transformative AI arriving very soon, completely changing the economy and the x-risk landscape; another would be the world ending. But it could also apply if you think certain investments in global poverty would outperform most financial investments (Phil is not convinced, but you might be).
Ah right, not just releasing the next-best candidate to do another job, but helping other people save time as well (in a better way than another candidate would, i.e. because you have rare and valuable skills).