Update Dec 4: Funds still needed for next month's stipends, plus salaries to run the 11th edition. Zvi listed AISC at the top of his recommendations for talent funnel orgs.
We are organising the 9th edition without funds. We have no personal runway left to do this again. We will not run the 10th edition without funding.
In a nutshell:
- Last month, we put out AI Safety Camp’s funding case.
A private donor then decided to donate €5K.
- Five more donors offered $7K on Manifund.
For that $7K not to be returned to the donors, another $21K in funding is needed to reach our $28K minimum. At that level, we may be able to run a minimal version of AI Safety Camp next year, where we get research leads started in the first 2.5 months and leave the rest to them.
- The current edition is off to a productive start!
A total of 130 participants joined, spread over 26 projects. The projects are diverse, ranging from agent foundations to mechanistic interpretability to copyright litigation.
- Our personal runways are running out.
If we do not get the funding, we have to move on. It’s hard to start a program again once organisers move on, so this likely means the end of AI Safety Camp.
- We commissioned Arb Research to do an impact assessment.
One preliminary result is that AISC creates one new AI safety researcher for roughly every $12K–$30K of funding (see the rough calculation below).
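As a back-of-the-envelope check combining the numbers above (our extrapolation, not a figure from Arb's assessment), dividing the $28K funding minimum by that preliminary cost-per-researcher range suggests a minimal edition would create on the order of one to two new researchers:

$$\frac{\$28\text{K}}{\$30\text{K}} \approx 0.9 \qquad\text{to}\qquad \frac{\$28\text{K}}{\$12\text{K}} \approx 2.3 \quad \text{new researchers}$$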
How you can support us:
- Spread the word. When we tell people AISC doesn't have any money, most people are surprised. If more people knew of our situation, we believe we would get the donations we need.
- Donate. Make a donation through Manifund to help us reach the $28K threshold.
Reach out to remmelt@aisafety.camp for other donation options.
My views are similar to Marius's comment. I did AISC in 2021 and I think it was somewhat useful for getting started in AI safety, although in hindsight my views and understanding of the problems were pretty dumb.
AISC does seem extremely cheap (at least for the budget options). Even if you put, say, 80% on the "Only top talent matters" model (MATS, Astra, others) and 20% on the "Cast a wider net" model (AISC), AISC still looks like a good thing to fund.
My main worries here are about negative effects. These are mainly related to the "To not build uncontrollable AI" stream; 3 out of 4 of those projects seem to be about communication/politics/advocacy.[1] I'm worried these could backfire by making AI safety people seem crazy, uninformed, or careless, mainly because Remmelt's recent posting on LW really doesn't seem like careful or well-thought-through communication. (In general I think people should be free to do advocacy etc., though please think of the externalities.) Part of my worry also comes from AISC being an entry point for newcomers, who might not know how fringe these views are within the AI safety community.
I would be more comfortable with these projects (and they could still be useful!) if they focused more on understanding the things they are advocating for. E.g. a report on "How could lawyers and coders stop AI companies from using their data?", rather than an attempt to start an underground coalition.
All the projects in the "Everything else" stream (run by Linda) seem good or fine, and likely a decent way to get involved and start thinking about AI safety. Although, as always, there is a risk of wasting time on projects that end up being useless.
[ETA: I do think that AISC is likely good on net.]
[1] The other one seems like a fine/non-risky project related to domain whitelisting.
I.e., looking back, I should have just held off until I managed to write one explainer (this one) that folks in my circles did not find extremely unintuitive.