Very cool that you've previously mentioned it - nice that we've both been thinking about it!
One proposal is a slight modification. To use your example, you could (a) randomise the entire 250, or (b) rank the 500, give the 'treatment' to the top 150, say, then randomise 100 'treatments' among the 200 around the cutoff (100 above and 100 below). I think both proposals, or an RDD, would be good - but I'd defer to advice from actual EA experts on RCTs.
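Option (b) could be sketched roughly like this (a minimal illustration using the numbers from the example above; all variable names are hypothetical, and applicants are represented just by their rank):

```python
import random

random.seed(42)  # fix the seed so the assignment is reproducible/auditable

applicants = list(range(1, 501))       # 500 applicants, ordered by rank (1 = best)

always_treated = applicants[:150]      # top 150 get the treatment regardless
window = applicants[150:350]           # the 200 around the cutoff (100 above, 100 below)
randomised_in = random.sample(window, 100)  # 100 of those 200 randomly treated

treated = set(always_treated) | set(randomised_in)
control = [a for a in window if a not in treated]

# 250 total treatments, and a 100-person randomised control group
print(len(treated), len(control))  # → 250 100
```

Within the window, treatment is then independent of rank, so comparing treated vs control there gives a clean causal estimate near the cutoff.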
Congratulations on this growth, really exciting!
Have you thought about including randomisation to facilitate evaluation?
E.g. you could include some randomisation in who is invited to events (of those who applied), which universities/cities get organisers (of those on the shortlist), etc. This could also be done with 80k coaching calls - not sure if it has been tried.
You then track who did and didn't get the treatment, to see what effect it had. This doesn't have to involve denying 'treatment' to people/places - presumably there are more applicants than there are places - you introduce randomisation at the cutoff.
This would allow some causal inference (RCT/Randomista, does x cause y, etc.) as to what effect these treatments are having (vs the control, and the null hypothesis of no effect). This could help justify impact to the community and funders. I'm sure people at e.g. JPAL, Rethink, etc. could help with research design.
Interesting idea. I wanted to throw in a few reflections from working at the Centre for the Study of Existential Risk (CSER) for four years.
Just want to give a big plus one to the infohazards section. Several states and terrorist groups have been inspired by bioweapons information in the public domain - it's a real problem. At CSER we've occasionally thought up what might be a new contributor to existential risk - and have decided not to publish on it. I'm sure Anders Sandberg has come up with tonnes too (thankfully he's on the good side!) - and has also published good stuff on them. Very important bit.
I imagine you'd get lots of kooks writing in (e.g. we get lots of Biblical prediction books in the post), so you'd need some way to sift through that. You'd also need some way to handle disagreement (e.g. I think climate change is a major contributor to existential risk; some other researchers in the field do not). Also worth thinking about incentives - in a way, this is a prize for people to come up with new dangerous ideas.
Excellent overview, and I completely agree that the AI Act is an important policy for AI governance.
One quibble: as far as I know, the Center for Data Innovation is just a lobbying group for Big Tech - I was a little surprised to see it listed in "public responses from various EA and EA Adjacent organisations".
Cool post, very interesting! I'm fascinated by this topic - the PhD thesis I'm writing is on nuclear, bio and cyber weapons arms control regimes and what lessons can be drawn for AI. So obviously I'm very into this and want to see more work done in this area. Really excellent to see you exploring the parallels. A few thoughts:
Here's my piece on this question, from February 2021 - https://forum.effectivealtruism.org/posts/ST8vFfPropD9AYqkX/alternatives-to-donor-lotteries
FYI this link is broken for me: "Audio version available at Cold Takes "
Interesting first point, but I disagree. To me, the increased salience of climate change in recent years can be traced back to the 2018 Special Report on Global Warming of 1.5 °C (SR15), and in particular the meme '12 years to save the world'. It seems to have contributed to the start of School Strike for Climate, Extinction Rebellion and the Green New Deal. Another big new scary IPCC report on catastrophic climate change would further raise the salience of this issue-area.
I was thinking that the $100m would cover all four of these topics, and that we'd get cause-prioritisation VOI across all of them. $100m for impact and VOI across all four seems pretty good to me (though I'm a researcher, not a funder!)
On solar geo, I'm not an expert on it and am not arguing for it myself - merely reporting that it's top of the 'asks' list for orgs like Silver Lining.
I actually rather like the framing in Xu & Ram - I don't think we know enough about >5 °C scenarios, so describing them as "unknown, implying beyond catastrophic, including existential threats" seems pretty reasonable to me. In any case, I cited that more to demonstrate the lack of research that's been done on these scenarios.
I think it's a really good point that there's something very different between research/policy orgs and orgs that deliver products and services at scale. I basically agree, but I'd slightly tweak this to: "It is very hard for a charity to scale to more than $100 million per year without delivering a physical product or service."
Because orgs/companies that deliver a digital service (GiveDirectly, Facebook/Google, etc.) obviously can scale to more than $100 million per year.
Hell yeah! Get JGL to star - https://www.eaglobal.org/speakers/joseph-gordon-levitt/