I think the type of early deal that would be most valuable is where the US and China both agree to produce a joint 'consensus' ASI aligned to 'the good'. In more detail:
This proposal is clearly very far outside the Overton window currently, but I don't think it is much worse on feasibility than your proposed great power resource-sharing deals. It also solves the enforcement challenge, which is convenient, since we might have needed to create such a consensus AI to enforce a different sort of deal anyway.
I am tentatively excited about this proposal, but I expect there isn't much to do to further it until the relevant parties are taking things more seriously.
I'm fairly sympathetic to that, but it also feels like one needs to draw a line somewhere, and where they have currently drawn it seems not unreasonable to me. Another place to draw the line, at roughly the opposite extreme, which could also work, is anyone who supports effective giving and is planning to donate or salary-sacrifice a lot of their money. Maybe the worry is that that is too fuzzy and dilutes the core 10% message, though.
fyi @Luke Moore
Reasonable if you don't want to publicly go into internecine tensions, but the obvious question seems to be how you see this relating to principles-first EA, which is, on its face, a similar idea.
That is encouraging! Scott's post linking to various prediction markets for Anthropic's implied valuations was also heartening.
Good point re communal values of the forum, seems right.
Ah, maybe I interpreted the original question differently from what you intended. Since you said it is not about 'post quality', I was trying to put that aside and imagine AI-written posts that are better than human-written posts, and I think in that case I would be happy to read them. But I agree that currently I am turned off by AI writing and far prefer that people write themselves in most cases. I suppose I was answering the question more in principle, i.e. if an AI-written post were amazing I would be comfortable with it, but currently they are not. So for me it is more a quality issue than fundamentally an AI-writing issue (except for the communal/sentimental aspects, which I agree have value).
How much of a post are you comfortable for AI to write?
Currently, I think AI writing isn't good enough to be better than good human users of the Forum, but I think this will quickly change, and I want to prioritise ideas and impact over who wrote the final words. I expect it will be longer before AIs are at the frontier of doing EA research and cause-prioritization, so I think posts with only AI ideas will be bad for a longer time to come. But posts with human ideas written up well by an AI I could well imagine being better quality than most Forum writers' posts within a year or two.
I feel differently if someone is writing something to me personally, if someone writes me a poem or a birthday card or something that has sentimental value, then AI writing reduces that. But the Forum I see as primarily content-value rather than sentimental value.
While I don't work in GHD, I still enjoy reading GHD content on the Forum and on Substack. I agree that interesting questions in GHD are far from solved, but I wonder if a lot of the low-hanging intellectual fruit has been picked (your number 5)? I wasn't around in early GiveWell days but I imagine that would have been an amazing time to be thinking about GHD and coming up with lots of new approaches and ideas. I haven't found GiveWell's research to be that surprising or interesting lately for instance (vibes-based, I don't engage that closely with them anymore).
I would be keen to hear more from CE charities about what things they are learning and what questions they are facing!
Re your solution #2, I think I probably wouldn't want the Forum team to show 'favouritism', but the decline of GHD curated posts is interesting, and maybe that should change.