I think it is basically erroneous to say that EA has "refused to engage in the political".
If you're not proposing electioneering, what exactly is the program that you are suggesting could have prevented these USAID cuts? Because from where I'm sitting, I don't really think there was anything EA could have done to prevent that, even if the whole weight of the movement were dedicated to that one thing.
Let's imagine I have a proposal or a white paper. How and where can I submit it for evaluation?
This forum might not be a bad place to start?
Probably a reference to this study: https://thefilter.blogs.com/thefilter/2009/12/the-israeli-childcare-experiment.html
If your idea is that in-country employees/contractors of organizations like GiveDirectly, Fistula Foundation, AMF, MC, Living Goods, etc., should be invited to EA Global, then I agree, and I think these folks often have useful information to add to the conversation. I don't assume everyone in these orgs is a good fit, but many are, and it's worth having those voices. Some have an uncritical mindset and basically just do what they're told, while others are a bit too sharp-elbowed and focus on what will get funders' attention without caring how good it actually is.
On the other hand, if your idea is to (for example) invite some folks from villages where GiveDirectly is operating, I pretty strongly feel that this would be a waste of resources. We can get a much better perspective from this group through surveys (and indeed GiveWell and GiveDirectly have sponsored such surveys). If you chose people at random, I think most of them wouldn't be in a good position to contribute to the discussions; and if you chose village elites, you'd end up with a systematic bias toward elite interests, which has been a serious problem in trying to make bottom-up charitable interventions work.
Random thought: does the idea of an explosive takeoff of intelligence assume that alignment is solvable?
If the alignment problem isn’t solvable, then an AGI, in creating an ASI, would face the same dilemma as humans: the ASI wouldn’t necessarily have the same goals, might disempower the AGI, instrumental convergence, all the usual stuff.
I suppose one counterargument is that the AGI rationally shouldn’t create an ASI for these reasons but, like humans, might do so anyway due to competitive/racing dynamics. Whichever AGI doesn’t create an ASI will be left behind, etc.
I think the amount of news that is helpful and healthy to consume depends a lot on what you’re trying to do. So maybe a good place to start is to think about how sensitive your work is to current developments, and go from there. Channel Duncan Sabien and ask, “What am I doing, and why am I doing it?”
And if you are going to spend a lot of time with the news, read Zvi’s piece on bounded distrust and maybe also the linked piece from Scott Alexander.
If you want to understand what expected behavior looks like in these sorts of situations, I would suggest taking a course in journalistic ethics. The industry’s poor reputation for truth-seeking is deserved, but there are standards for when and how to seek comment that would serve you well in this context.