I research a wide variety of issues relevant to global health and development. I also consult as a researcher for GiveWell (but nothing I say on the Forum is ever representative of GiveWell). I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!
IQ grew over the entire 20th century (the Flynn effect). Even if it's declining now, it would be naive to take a trend of a few decades and extrapolate it millennia into the future, especially when that decades-long trend is itself a reversal of an even longer one.
Compare this to other trends that we extrapolate out for millennia – increases in life expectancy and income. These are much more robust. Income has been steadily increasing since the Industrial Revolution and life expectancy possibly for even longer than that. That doesn't make extrapolation watertight by any means, but it's a way stronger foundation.
Also, I don't know much about the social context for this article that you say is controversial, but it strikes me as really weird to say "here's an empirical fact that might have moral implications, but EAs won't acknowledge it because it's taboo and they're not truthseeking enough". That's putting the cart a few miles before the horse.
The True Believer by Eric Hoffer is a book about the psychology of mass movements. I think there are important cautions for EAs thinking about their own relationship to the movement.
There is a fundamental difference between the appeal of a mass movement and the appeal of a practical organization. The practical organization offers opportunities for self-advancement, and its appeal is mainly to self-interest. On the other hand, a mass movement, particularly in its active, revivalist phase, appeals not to those intent on bolstering and advancing a cherished self, but to those who crave to be rid of an unwanted self. A mass movement attracts and holds a following not because it can satisfy the desire for self-advancement, but because it can satisfy the passion for self-renunciation.
I wanted to write a draft amnesty post about this, but I couldn't write anything better than this Lou Keep essay about the book, so I'll just recommend you read that.
Something that I personally would find super valuable is to see you work through a forecasting problem "live" (in text). Take an AI question that you would like to forecast, and then describe how you actually go about making that forecast: the information you seek out, how you analyze it, and especially how you make it quantitative.
This exercise does double duty as "substantive take about the world for readers who want an answer" and "guide to forecasting for readers who want to do the same".
But neglectedness as a heuristic is very good precisely for narrowing down what you think the good opportunity is. Every neglected field is a subset of a non-neglected field. So pointing out that great grants have come from some subset of a non-neglected field doesn't tell us anything.
To be specific, it's really important that EA identifies the area within that neglected field where resources aren't flowing, to minimize funging risk. Imagine that AI safety polling had not been neglected and that in fact there were tons of think tanks who planned to do AI safety polling and tons of funders who wanted to make that happen. Then even though it would be important and tractable, EA funding would not be counterfactually impactful, because those hypothetical factors would lead to AI safety polling happening with or without us. So ignoring neglectedness would lead to us having low impact.
I consider myself good at sniffing out edited images, but I can't spot any signs of manipulation in Balenciaga Pope. Besides, for a deepfake to be useful, it only has to be convincing to a large minority of people, including very technologically unsophisticated people.
Thanks for the link to your thoughts on why you think a crash is likely. I think you underestimate the likelihood of the US government propping up AI companies. Just because they didn't invest money in the Stargate expansion doesn't mean they aren't reserving the option to do so later if necessary. It seems clear that Elon Musk is personally very invested in AI. Even aside from his personal involvement, the fact that China/DeepSeek is in the mix points towards even a normal government offering strong support to American companies in this race.
If you believe that the US government will prop up AI companies to virtually any level they might realistically need by 2029, then I don't see a crash happening.