EtG @ Google
You should consider that living in such conditions will likely lead to burnout.
Altruism is a marathon, not a sprint. You should focus on having a high impact over your whole career, which means not pushing yourself literally as hard as possible all the time. You are only human, regardless of what your ethics say.
That said, there are cases of people living somewhat as you describe and giving a lot of money away - see here.
How much time do you spend deciding where to donate? Or do you mostly trust e.g. GiveWell enough to delegate those decisions?
Relatedly, do you spend much time evaluating the donations from previous years for impact?
(As a smaller-scale EtGer myself, I often struggle with how much time I should spend on these things, which are plausibly extremely important.)
I'll ask the obvious awkward question:
Staff numbers are up ~35% this year, but the only one of your key metrics that has shown significant movement is "Job Vacancy Clickthroughs".
What do you think explains this? Delayed impact, impact not caught by metrics, impact not scaling with staff - or something else?
I definitely think it's an (perhaps the most?) important argument against. Some of this comes down to your views on timelines, which I don't really want to litigate here.
I guess I don't know how much research leading to digital people is likely to advance AI capabilities. A lot of the early work in AI was of course inspired by biology, but it seems like not much has come of that recently. And it seems to me that we can focus on the research needed to emulate the brain while trying not to understand it in too much detail.
That could happen. I would emphasise that I'm not talking about whether we should have digital minds at all, just when we get them (before or after AGI). The benefit of making AGI safer looms larger to me than the risk of bad actors - and the threat of such bad actors would likely lead us to police compute resources more thoroughly than we do now.
Digital people may be less predictable, especially if "enhanced", but I think the trade-off is still pretty good here: they almost entirely approximate human values, whereas AI systems (by default) do not at all.
Edited, thank you!