While I don't work in GHD, I still enjoy reading GHD content on the Forum and on Substack. I agree that interesting questions in GHD are far from solved, but I wonder if a lot of the low-hanging intellectual fruit has been picked (your number 5). I wasn't around in early GiveWell days, but I imagine that would have been an amazing time to be thinking about GHD and coming up with lots of new approaches and ideas. For instance, I haven't found GiveWell's research to be that surprising or interesting lately (this is vibes-based; I don't engage that closely with them anymore).
I would be keen to hear more from CE charities about what they are learning and what questions they are facing!
Re your solution #2, I probably wouldn't want the Forum team to show 'favouritism', but the decline in curated GHD posts is interesting, and maybe that should change.
I think the type of early deal that would be most valuable is one where the US and China both agree to produce a joint 'consensus' ASI aligned to 'the good'.
This proposal is clearly very far outside the Overton window currently, but I don't think it is that much worse on feasibility than your proposed great-power resource-sharing deals. It also solves the enforcement challenge, which is convenient, since we might have needed to create such a consensus AI anyway to enforce a different sort of deal.
I am tentatively excited about this proposal, but I expect there isn't much to do to further it until the relevant parties are taking things more seriously.
I'm fairly sympathetic to that, but it also feels like one needs to draw a line somewhere, and where they have currently drawn it seems not unreasonable to me. Another place to draw the line, at roughly the opposite extreme, could also work: anyone who supports effective giving and is planning to donate or salary-sacrifice a lot of their money. Maybe the worry is that this is too fuzzy and dilutes the core 10% message, though.
fyi @Luke Moore 💸
Reasonable if you don't want to publicly go into internecine tensions, but the obvious question seems to be how you see this relating to principles-first EA, which is, on its face, a similar idea.
A world of intelligence too cheap to meter is more teleological: constraints and tradeoffs that exist now are washed away, and what matters is mainly what people ultimately value. And more people ultimately value animal welfare than animal diswelfare. The main game is wild animals, and the ~only way for things to go well for them is if we build an ASI that can eventually reshape the natural world to be less suffering-filled. I think it is very unlikely that farmed animal suffering is exported to other galaxies in a major way, because at technological maturity animals will not be the most efficient way to meet human material needs.