Thanks! Note that I have stopped updating this list, because I think the EA Eindhoven Syllabi Collection is more comprehensive.
Can you share a link to the source of this chart? The current link shows only a jpg and nothing else.
Note that in the US, the National Defense Authorization Act (NDAA) for FY2024 might direct the Secretary of Defense to establish an AI bug bounty program for "models being integrated into Department of Defense missions and operations." Here is the legislative text.
Probably worth adding a section for similar collections / related lists. For instance, see Séb Krier's post and https://aisafety.video/. Apart Research has a newsletter that might be on hiatus.
It's worth mentioning the Horizon Fellowship and RAND Fellowship.
However, the drop in engagement time which we could attribute to this change was larger than we’d expected.
How did you measure a "drop in engagement time which we could attribute to this change"? Some relevant metrics are page view counts, time spent on the website, number of clicks, number of applications to 80k advising, etc.
Current scaling "laws" are not laws of nature, and there are already worrying signs that techniques like dataset optimization/pruning, curriculum learning, and synthetic data might well break them.
Interesting -- can you provide some citations?
80k's AI risk article has a section titled "What do we think are the best arguments against this problem being pressing?"
Can you highlight some specific AGI safety concepts that make less sense without secular atheism, reductive materialism, and/or computational theory of mind?
The AI Does Not Hate You is the same book as The Rationalist's Guide to the Galaxy? I didn't realize that. Why do they have different titles?