
ada

14 karma · Joined May 2022

Comments (5)

Hi Jeff! Something I've been thinking about re: earning to give is that donating a portion of one's income each year seems to be the popular approach. However, this requires re-evaluating the space of possible donations every year and maintaining a consistently good estimate of the best place for the money. Alternatively, one could donate far less frequently (or even just donate upon retirement) and more deeply research the best charity at that time. That latter approach is more complicated, though, and seems more vulnerable to issues like value drift and akrasia. Do you have any thoughts on this tension? How did you decide to give every year?

Anecdotal evidence: at MIT last year, we received ~5x as many applications for AI safety programming as for EA programming, despite similar levels of outreach. The ratio was even higher when considering only applicants with relevant backgrounds and accomplishments. Around two dozen winners and top performers of international competitions (math/CS/science olympiads, research competitions), as well as students with significant research experience, engaged with AI alignment programming, but very few engaged with EA programming.

Dunno what the exact ratio would look like (since the different groups run somewhat different kinds of events), but we've definitely seen a lot of interest in AIS at Carnegie Mellon as well. There's also not very much overlap between the people who come to AIS things and those who come to EA things.

I'd be really curious what sort of impact perks have on the cost of an employee at typical EA orgs. Is the difference on the order of 5% or 30% of an employee's salary? I don't really have a sense of whether cultural / signaling considerations or cost considerations are primary.
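
For concreteness, here's a minimal back-of-the-envelope sketch (with entirely made-up numbers; the salary and overhead figures are assumptions, not data from any actual org) of what a 5% vs. 30% difference would mean in absolute dollars:

```python
# Hypothetical illustration of how perk spending changes total employee cost.
# None of these numbers come from a real org.

def total_cost(salary, perks_fraction, overhead_fraction=0.25):
    """Total annual cost: salary, plus a placeholder overhead for payroll
    taxes/benefits, plus perks expressed as a fraction of salary."""
    return salary * (1 + overhead_fraction + perks_fraction)

salary = 80_000  # hypothetical salary
for perks in (0.05, 0.30):
    cost = total_cost(salary, perks)
    print(f"perks at {perks:.0%} of salary -> total cost ${cost:,.0f} "
          f"(${salary * perks:,.0f} spent on perks)")
```

On those assumptions the gap is roughly $4k vs. $24k per employee per year, which is why I'm unsure whether the cost side or the cultural/signaling side dominates the decision.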

Thanks for the reply. I had no idea the spread was so wide (<2% to >98% in the last link you mentioned)!

I guess the nice thing about most of these estimates is that they are still well above the ridiculously low orders of magnitude that might prompt a sense of 'wait, I should actually upper-bound my estimate of humanity's future QALYs in order to avoid getting mugged by Pascal.' It's a pretty firm foundation for longtermism imo.
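
To make that concrete, here's a tiny illustrative calculation (the future-QALY figure and the Pascal-level probability are placeholders I made up, not estimates from the post or its links): even the ~2% low end of the spread, multiplied by an astronomically large future, still gives an enormous expected value, whereas a truly Pascalian probability does not.

```python
# Illustrative expected-value comparison; all numbers are hypothetical.

future_qalys = 1e35  # placeholder stand-in for "humanity's future QALYs"

scenarios = {
    "low end of the quoted spread (~2%)": 0.02,
    "high end of the quoted spread (~98%)": 0.98,
    "a Pascal's-mugging-style probability": 1e-50,
}

for label, prob in scenarios.items():
    # Expected future QALYs conditional on acting as if this probability holds
    print(f"{label}: expected future QALYs ~ {prob * future_qalys:.1e}")
```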

One quick question about your post -- you mention that some in the community think there is virtually no chance of humanity surviving AGI, and cite an April Fool's Day post (https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy). I'm not sure if I'm missing some social context behind that post, but have others claimed, in a non-joking manner, that AGI is basically certain to cause an extinction event?