Researcher of causal models and human-aligned AI at FHI | https://twitter.com/ryancareyai
Here are a bunch of things I've enjoyed reading in the last month or so that weren't on the forum:
Yeah, I think high-quality content is spread across many blogs. But it's not terribly hard to find - a lot of it appears in blog posts that you can surface by following a hundred Twitter accounts.
I agree crossposting or linkposting is one way to gather content. I guess that's kind of what subreddits/Hacker News/Twitter all do, but those platforms are more designed for that purpose. Not sure what the best solution is.
To evaluate its editability, we can compare AI code to ordinary code, and to the human brain, along various dimensions: storage size, understandability, copyability, etc. (i.e. let's decompose "complexity" into "storage size" and "understandability" to ensure conceptual clarity)
For size, AI code seems more similar to the human brain. AI models are already pretty big, so they may be around brain-sized by the time a hypothetical AI is created.
For understandability, I would expect AI code to be more like ordinary code than like a human brain. After all, it's created with a known design and an objective that was built in intentionally. Even if the learned model has a complex architecture, we should be able to understand its relatively simpler training procedure and incentives.
And then, AI code will, like ordinary code - and unlike the human brain - be copyable and have a digital storage medium, both of which are potentially critical factors for editing.
Size (i.e. storage complexity) doesn't seem like a very significant factor here.
I'd guess the editability of AI code would resemble the editability of ordinary code more than that of a human brain. But even if you don't agree, I think this points at a better way to analyse the question.
It's weird that he doesn't cite https://nickbostrom.com/papers/vulnerable.pdf
A big expansion of the non-defence science budget, from $8B/yr to $30B+/yr, with ML/genomics/disaster prevention among the focus areas for the additional funding - interesting! Yet that's still less than federal defence R&D spending ($60B/yr), and much less than private R&D ($400B/yr).
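For rough proportions, here is a trivial sketch of the arithmetic, using the approximate figures cited above (all in $B/yr; the numbers are the comment's, not official budget data):

```python
# Approximate figures from the comment above, in $B/yr.
nondef_before = 8    # current non-defence science budget
nondef_after = 30    # proposed level (lower bound of "$30B+")
defence_rd = 60      # federal defence R&D spending
private_rd = 400     # private R&D spending

# Size of the proposed expansion.
expansion = nondef_after - nondef_before
print(expansion)                            # 22

# Even after the expansion, non-defence science would be
# half of defence R&D, and under a tenth of private R&D.
print(round(nondef_after / defence_rd, 2))  # 0.5
print(round(nondef_after / private_rd, 3))  # 0.075
```

So the expansion roughly quadruples the non-defence science budget while remaining a modest fraction of total R&D spending.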
I guess groups that are already using defence research grants (maybe AI research) or private funding would be affected to a small-to-medium extent, whereas ones that are not (e.g. disaster prevention) could feel a big difference.
1. See Fig 3 and Table 1 at https://fas.org/sgp/crs/misc/R44307.pdf
For Covid-19 spread, what seems to be the relative importance of: 1) climate, 2) behaviour, and 3) seroprevalence?
The comment was probably strong-downvoted because it is confidently wrong on two counts:
1. The EA Forum only exists to promote impactful ideas. So to say that the question "where are impactful ideas?" is a distraction from the question "when should we post on the Forum?" is to have things entirely backwards. To promote good ideas, we do need to know where they are.
2. We are trying to address what a community-builder should do, not a content-creator. It is a non-sequitur to try to replace the important meta-questions of what infrastructure and incentives there should be, with the question of when an individual should post to the forum.
Almost all content useful to EAs is not written on the Forum, and almost all authors who could write such content will not write it there. So it would be a lot more valuable to reward good content whether or not it is on the Forum. It is harder to evaluate all content, but one can consider nominated content. If this is outside one's job description, can one change the job description?
One relevant datapoint is Stripe Press. The tech company Stripe promotes some books on startups and progress studies, with the stated goal of sharing ideas that would inspire startups (that might use their product). They outsource the printing.
Does the rate of consumption of books increase when Stripe reprints them?
But these books are unpopular relative to Superintelligence, which has 12k ratings, and TLYCS, which has 4k. We can see that reprinting can help revive unpopular books. But it's far from clear that it would help already-thriving ones, if reprinting would cut the flow of those books into physical bookstores. It could just as easily hinder. So it'll be interesting to see more data.