Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://www.centreforeffectivealtruism.org/careers
Feedback always appreciated; feel free to email/DM me or use this link if you prefer to be anonymous.
The policy you are suggesting is much further from "open source" than this is. It is totally reasonable for Meta to claim that doing something closer to open source captures some proportion of the benefits of full open source.
Suppose Meta were claiming that their models were curing cancer. It probably is the case that their work is more likely to cure cancer than if they adopted Holly's preferred policy, but it nonetheless feels legitimate to object to them generating goodwill by claiming to cure cancer.
That page lists the value of vaccine challenge trials as $10^12–$10^13, which is substantially more than the market capitalization of the three big AI labs combined.
(I think there is a decent case that society is undervaluing those companies, but the relevant question seems to be their actual valuation, not what value they theoretically should have. I feel fairly confident that if you asked the average American whether they would prefer to have had a COVID vaccine one year earlier versus GPT-3 one year earlier, they would prefer the vaccine.)
I’m pretty sure you have met people doing mechanistic interpretability, right?
Nora is Head of Interpretability at EleutherAI :)
Thanks for the suggestion! I expect that @Holly_Elmore will represent that viewpoint. See e.g. this podcast.
If everything goes according to schedule, the final debate post will go up on the 24th, and then it will take some undetermined amount of time after that for Scott to write his summary/conclusion. I wouldn't be that surprised if things fell behind schedule though.
> Up until now, leaders haven’t prioritized this very highly
Hmm, this doesn't feel true of my experience. I'm mentally running through a list of recent large-ish CEA projects, and they all involved user interviews, surveys, or both.
It's possible that you mean something else by "meaningful dialogue"? (Or are referring to non-CEA projects?)
Thanks for all your work over the last 11 years, Will, and best of luck on your future projects. I have appreciated your expertise on and support of EA qua EA, and would be excited about you continuing to support that.