While I don't work in GHD, I still enjoy reading GHD content on the Forum and on Substack. I agree that interesting questions in GHD are far from solved, but I wonder whether a lot of the low-hanging intellectual fruit has already been picked (your number 5)? I wasn't around in the early GiveWell days, but I imagine that would have been an amazing time to be thinking about GHD and coming up with lots of new approaches and ideas. For instance, I haven't found GiveWell's research that surprising or interesting lately (a vibes-based impression; I don't engage that closely with them anymore).

I would be keen to hear more from CE charities about what things they are learning and what questions they are facing!

Re your solution #2, I think I probably wouldn't want the Forum team to show 'favouritism', but the decline in curated GHD posts is interesting, and maybe that should change.

I think the type of early deal that would be most valuable is one in which the US and China both agree to produce a joint 'consensus' ASI aligned to 'the good'. In more detail:

  • The US and China, as you note, are unsure who will win, and would be better off making a deal to preserve some minimum amount of future influence. But I think I am more worried than you about the costs of continued multipolarity extending into space colonisation. You write “Even having two alternative systems might open up the possibility for comparison, healthy competition, and moral trade.” War, threats, and unhealthy competition (e.g., burning the cosmic commons) also seem like important possibilities here.
  • Instead, I think having a joint superintelligence that coordinates the use of our cosmic endowment would be better, with the US and China each holding some amount of influence within the ASI's 'moral parliament'.
  • Even just that would, I think, be preferable to dividing up the universe into two camps: it is easier to do moral trades within one agent acting under moral uncertainty than to coordinate between two separate agents.
  • A better version, though, could involve the US and China agreeing on some core moral precepts, or just a moral reflection process, and then jointly designing a moral curriculum for the proto-ASI including plenty of Western and Chinese texts, and letting the ASI do as it sees fit. Presumably both sides genuinely believe they are right and that an appropriate moral training process for the AI will lead to liberalism or Socialism with Chinese characteristics, respectively. So this exploits the two sides having different credences (whereas, as you note, your proposed deals are possible even if both sides have the same credences). This creates a larger surplus for possible agreements.
  • Of course, agreeing to create a joint ASI could also have big nearer-term benefits, e.g. avoiding racing, slowing down AI progress, and investing more in safety.

This proposal is clearly very far outside the Overton window currently, but I don't think it is much worse on feasibility than your proposed great power resource-sharing deals. It also solves the enforcement challenge, which is convenient, since we might have needed to create such a consensus AI anyway to enforce a different sort of deal.

I am tentatively excited about this proposal, but I expect there isn't much to do to further it until the relevant parties are taking things more seriously.

I'm fairly sympathetic to that, but it also feels like one needs to draw a line somewhere, and where they have currently drawn it seems not unreasonable to me. Another place to draw the line, at roughly the opposite extreme, could also work: anyone who supports effective giving and is planning to donate/salary sacrifice a lot of their money. Maybe the worry is that this is too fuzzy and dilutes the core 10% message, though.
fyi @Luke Moore 🔸 

The relevant GWWC FAQ is here and there was also a more detailed discussion here.

Great article! I sometimes find myself explaining cash benchmarking to people and why some charities still beat cash, and this will be a useful thing to link to going forwards :)

Seems great! Insofar as you feel comfortable saying, why isn't this (fully) funded by cG?

Reasonable if you don't want to publicly go into internecine tensions, but the obvious question seems to be how you see this relating to principles-first EA, which is, on its face, a similar idea.

That is encouraging! Scott's post linking to various prediction markets for Anthropic's implied valuations was also heartening.

Good point re communal values of the forum, seems right.

Ah, maybe I interpreted the original question differently from what you intended. Since you said it is not about 'post quality', I was trying to put that aside and imagine AI-written posts that are better than human-written posts, and I think in that case I would be happy to read them. But I agree that currently I am turned off by AI writing and far prefer that people write themselves in most cases. I suppose I was answering the question more in principle, i.e. if an AI-written post were amazing I would be comfortable with it, but currently they are not. So for me it is more a quality issue than fundamentally an AI-written issue (except for the communal/sentimental aspects, which I agree have value).

100% agree

How much of a post are you comfortable for AI to write?

Currently, I think AI writing isn't good enough to be better than that of good human users of the Forum, but I think this will quickly change, and I want to prioritise ideas and impact over who wrote the final words. I expect it will be longer before AIs are at the frontier of doing EA research and cause prioritisation, so I think posts with only AI ideas will be bad for a longer time to come. But posts with human ideas written up well by an AI could, I imagine, be better quality than most Forum writers' posts within a year or two.
I feel differently if someone is writing something to me personally: if someone writes me a poem or a birthday card or something else that has sentimental value, then AI writing reduces that. But I see the Forum as having primarily content value rather than sentimental value.
