According to public reports, Dan Hendrycks has been influenced by EA since he was a freshman (https://www.bostonglobe.com/2023/07/06/opinion/ai-safety-human-extinction-dan-hendrycks-cais/).
He did the 80,000 Hours program.
He worries about AI bringing about the end of humanity, if not the planet.
After getting his Ph.D., he started an AI safety organization instead of joining one of the many AI startups.
And he's taken $13M in donations from two EA organizations: Open Philanthropy and the FTX Foundation.
Yet he denies being a member of the Effective Altruism movement when asked about it by the press; see, for instance, this Bloomberg interview (https://www.bloomberg.com/news/newsletters/2024-06-27/an-up-and-coming-ai-safety-thinker-on-why-you-should-still-be-worried).
As an aside, Hendrycks is not alone in this. The founders of the Future of Life Institute have done the same thing (https://www.insidecyberwarfare.com/p/an-open-source-investigation-into).
I'm curious to know what others think about Hendrycks's attempts to disassociate himself from Effective Altruism.
Thank you for your response, @Dan H. I understand that you do not agree with much of EA doctrine (for lack of a better word), but that you are a Longtermist, albeit not a "strong axiological longtermist." Would that be a fair statement?
Also, though it took some time, I've met many scientists working on AI safety who have nothing to do with EA, Longtermism, or AI doom scenarios. They just don't publish open letters, create political action funds, or have funding mechanisms like Open Philanthropy or similarly minded billionaire donors such as Jaan Tallinn and Vitalik Buterin. As a result, there's an illusion that AI safety is dominated by EA-trained philosophers and engineers.