A variant on your proposal could be a moratorium on training new large models (e.g. OpenAI would be forbidden from training GPT-5).
Thanks very much, Saulius.
In SoGive's 2023 plans document, we said:
"An investigation of No Means No Worldwide was suggested to us by a member of the EA London community, who was excited to have an EA-aligned recommendation for a charity which prevents sexual violence. We have mostly completed a review of this charity, and were asked not to publish it yet because it used a study which is not yet in the public domain."
That said, part of the reason I didn't allude to NMNW is that my vague memory was that the average was older (presumably my memory was wrong).
I don't see how we could implement a moratorium on AGI research that stops capabilities research but doesn't also stop alignment research.
Cool. To be clear, I think anyone reading your piece with any level of care or attention would see that you were comparing normal and lognormal distributions, and not making any stronger claims than that.
Someone pinged me a message on here asking about how to donate to tackle child sexual abuse. I'm copying my thoughts here.
I haven't done a careful review of this, but here are a few quick comments:
If anyone is interested in this topic and wants to put aside a substantial sum (high five figures or six figures), then the next steps would involve a number of conversations to gather more evidence and check whether existing interventions are as lacking in evidence as I suspect. If so, the next step would be to work on creating a new charity. It's possible that Charity Entrepreneurship might be interested in this, but I haven't spoken with them about it and I don't know their appetite. I'd be happy to support you on this, mostly because I know that CSA can be utterly horrific (at least some of the time).
If someone gives me a book, I feel like I have no deadline for reading it, which sometimes means I never read it. If it's a loan, I'm more likely to actually read it.
The flipside of this dynamic is that I'm unlikely to accept a book if I don't think I'm likely to read it, or if I'm interested in reading it but know that I won't have time for a while.
I agree with your claim that lognormal distributions are a better choice than normal distributions. However, this doesn't address whether another distribution might be better still (especially in cases where data is scarce, such as the number of habitable planets).
For example, a power law distribution has some theoretical arguments in its favour and also has significantly higher kurtosis, meaning a much fatter tail.
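To make the "fatter tail" point concrete, here's a minimal sketch (illustrative parameters only, not estimates of any quantity discussed in the post) comparing tail probabilities for a normal, a lognormal, and a Pareto (power-law) distribution that all have roughly the same median:

```python
# Illustrative comparison of tail weight; the parameters are arbitrary choices
# made so that each distribution has median ~1.0, not fitted to any data.
from scipy import stats

normal = stats.norm(loc=1.0, scale=1.0)
lognormal = stats.lognorm(s=1.0, scale=1.0)               # median = scale = 1.0
pareto = stats.pareto(b=1.5, scale=1.0 / 2 ** (1 / 1.5))  # median = 1.0

for k in [5, 10, 50]:
    # sf(k) is the survival function, i.e. P(X > k)
    print(
        f"P(X > {k}): "
        f"normal={normal.sf(k):.2e}, "
        f"lognormal={lognormal.sf(k):.2e}, "
        f"pareto={pareto.sf(k):.2e}"
    )
```

With these (arbitrary) parameters, the normal tail collapses to essentially zero well before the others, and the Pareto tail stays orders of magnitude heavier than the lognormal one as the threshold grows, which is the sense in which the choice of distribution can dominate the conclusion when data is scarce.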
The arguments in favour of a boycott would look stronger if there were a coherent AI safety activist movement. (I mean "activist" in the sense of "recruiting other people to take part, and grassroots lobbying of decision-makers", not "activist" in the sense of "takes some form of action, such as doing AI alignment research".)
I haven't thought hard about how good an idea this is, but those interested might like to compare and contrast with ClientEarth.
A limit on compute designed to prevent OpenAI, Anthropic, or Google from training a new model sounds like a very high bar. I don't see why that could easily be got around.