I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of the content on my website gets cross-posted to the EA Forum, but I also write about some non-EA stuff there.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
Copying from my other comment:
Eliezer gets paid so much because a donor specifically requested it. The express purpose of the donation was to make Eliezer rich enough that he could afford to say "actually AI risk isn't a big deal" and shut down MIRI without putting himself in a difficult financial situation.
(I don't know about Nate's salary, but $235K looks pretty reasonable to me? That's less than what a mid-level software engineer makes.)
MIRI pays Eliezer Yudkowsky $600,000 a year.
I believe this is because a donor specifically requested it. The express purpose of the donation was to make Eliezer rich enough that he could afford to say "actually AI risk isn't a big deal" and shut down MIRI without putting himself in a difficult financial situation.
I spent a few days thinking about this, but I struggled to come up with a bet structure that I was confident was good for both of us. The financial implications of this sort of bet are complicated. I don't want to spend more time on it, so I'll punt on this for now, but I'll keep it in the back of my mind in case I come up with anything.
How do I know that they're aligned? For example, I asked Claude to find some quotes from Mike Rounds, and he mentioned biorisk from AI, but that was about it. Rounds also said "America needs to lead in AI to make sure that our warriors have every advantage," which sounds anti-aligned.
Are you assuming the deployment of ASI will be analogous to an omnipotent civilisation with values completely disconnected from humans suddenly showing up on Earth?
Something like that, yeah.
However, that would be very much at odds with historical gradual technological development shaped by human values.
ASI would have a level of autonomy and goal-directedness that's unlike any previous technology. The case for caring about AI risk doesn't work if you take too much of an outside view; you have to reason about what properties ASI would have.
Donating at the end of the year.
I used to donate mid-year for the reasons you gave. The last couple of years I donated at the end of the year because the EA Forum was running a donation election in early December; I wanted to publish my "where I'm donating" post shortly before the election, and I didn't want to donate until after I'd published the post. But perhaps syncing with the donation election is less important, and I should publish and donate mid-year instead?
A donor wanted to spend their money this way; it would not be fair to the donor for Eliezer to turn around and give the money to someone else. There is a particular theory of change according to which this is the best marginal use of ~$1 million: it gives Eliezer a strong defense against accusations like
I kinda don't think this was the best use of a million dollars, but I can see the argument for how it might be.