Operations Generalist at Anthropic & former President of UChicago Effective Altruism. Suffering-focused.
Currently testing fit for operations. Tentative long-term plan involves AI safety field-building or longtermist movement-building, particularly around s-risks and suffering-focused ethics.
Cause priorities: suffering-focused ethics, s-risks, meta-EA, AI safety, wild animal welfare, moral circle expansion.
To clarify, you think that "buying time" might have a negative impact [on timelines/safety]?
Even if you think that, I'm pretty uncertain about the impact of technical alignment, if we're talking about all work that gets labeled 'technical alignment.' E.g., I'm not sure that on the margin I would prefer an additional alignment researcher (without knowing what they were researching or anything else about them), though I think it's very unlikely they would have net-negative impact.
So, I think I disagree with (a), that "buying time" (excluding weird pivotal acts like trying to shut down labs) might have net negative impact, and thus also with (b), that "buying time" has more variance than technical alignment.
edit: Thought about it more and I disagree with my original formulation of the disagreement. I think "buying time" is more likely to be net negative than alignment research, but also that alignment research is usually not very helpful.
I find myself slightly confused: does 80k ever promote jobs they consider harmful (but ultimately worth it if the person goes on to leverage that career capital)?
My impression was that all career-capital-building jobs were ~neutral or mildly positive. My stance on the 80k job board (that the setup is largely fine, though the perception of it needs shifting) would change significantly if 80k were listing jobs they considered net negative in themselves, worth promoting only because they expected the person to later take an even higher-impact role as a result of that job.
Thanks for writing about this!
I'm thinking a lot about this question and would welcome chatting with others about it, particularly the impacts on invertebrates and wild animals. I work at Anthropic (note: not in a technical capacity, and my views are purely my own), so I feel I might be relatively well-placed (at least for now) to think about the intersection of AI and animals, but I have a lot to learn about animal welfare!