I think people are, generally speaking, drawing too simplistic a distinction between "capabilities" and "alignment". I assume most people on the forum use ChatGPT/Claude or other LLM apps and don't think they pose, in their current form, much of a safety concern.
I am far more concerned about the "geniuses in a data center" that Dario/Sam seem to be pushing for than I am about more economically useful AI.
I furthermore think that Matthew and, to a lesser extent, Tamay and Ege have engaged significantly more with AI risk arguments than most people.
Disclosure: I'm one of the investors in Mechanize.
I understand this. Good analogy.
I suppose what it comes down to is that I actually DO think it is morally better for the person earning $10m/year to donate $9.9m/year than $9m/year, about $900k/year better.
I want to achieve two things (which I expect you will agree with).
I think it's also reasonable for people to set limits on how much they are willing to do.
I don't want to argue about anyone's specific case, but I don't think it's universally true, or even true the majority of the time, that those working in AI could make more elsewhere. It sounds nice to say, but I think people are often earning more in AI jobs than they would elsewhere.
To a point, maybe a bit less than it does currently, but in general it seems to work well.