
What is the AI Safety Camp?

Would you like to work on AI safety or strategy research, and are you looking for a concrete way to get started? We are organizing this camp for aspiring AI safety and strategy researchers. At the camp, you will:

  • build connections with others in the field
  • build your research portfolio
  • receive feedback on your research ideas and help others with theirs
  • make concrete progress on open AI safety research questions

Read more about the last research camp here, including a summary of the research produced.

What’s the structure of the camp?

The camp is preceded by 7 weeks of preparation in the form of an online study group of 3-5 people, followed by a 10-day intensive camp with the aim of creating and publishing a research paper, extensive blog post, or GitHub repository.

What will attendees work on?

Participants will work in groups on tightly-defined research projects, for example in the following areas:

  • Strategy and Policy
  • Agent Foundations (decision theory, subsystem alignment, embedded world models, MIRI-style)
  • Value learning (IRL, approval-directed agents, wireheading, …)
  • Corrigibility / Interruptibility
  • Side Effects, Safe Exploration
  • Scalable & Informed Oversight
  • Robustness (distributional shift, adversarial examples)
  • Human Values (including philosophical and psychological approaches)

When and where?

4–14 October 2018, in Prague, Czech Republic.

Pricing

Attendance is free.

Apply

Applications and more information on aisafetycamp.com

Comments

Thanks for this info, Anne! Could you just clarify who the sponsors of the camp are? I am asking because the attendance is free, but somehow I haven't found any info on who is paying for the whole event. Just out of curiosity :)

I’ll answer this point since I happen to know.

  • Left-over funds from the previous camp were passed on
  • Greg Colbourn is willing to make a donation
  • The funding team just submitted an application for the second round of EA Grants

The team does have plenty of back-up options for funding, so I personally don't expect financial difficulties (though it would be less than ideal, I think, if the Czech Association for Effective Altruism had to cover a budget deficit itself).

Thanks for the reply! It's great if the funding comes from institutions or individuals who are willing to support research on these topics. I think it would be really bad, though, if the funding were taken from standard EA donations without first attempting to get it via existing public grants and institutions. And even in that case, it would still be bad given the comparative impact of such a camp versus alternative ways of providing effective charity. I am all for research, but primarily via non-EA research funds, which are numerous for topics such as this one; that is, we should strive to fund EA research topics from general research-related funds as much as possible.

If getting funding via public grants and institutions would take the same amount of time or less, I would definitely agree (counting the time spent filling in application forms, the average number of applications that need to be submitted before the budget is covered, and the time lost to distractions and 'meddling' by unaligned funders).

Personally, I don't think this applies to the AI Safety Camp at all, though: my guess is that it would cost significantly more time than getting money from 'EA donors', time we would be better off spending on improving the camps, except perhaps in isolated cases that I have not found out about yet.

I'm also not going to spend the time to write up my thoughts in detail but here's a summary:

  • AI alignment is complicated – there's a big inferential gap in explaining to public grantmakers why this is worth funding (as well as difficulty making the case for how this is going to make them look good)
  • The AI Safety Camp is not a project of an academic institution, which gives us little credibility with other academic institutions, who would be most capable of understanding the research we are building on
  • Tens of millions of dollars are being earmarked for AI alignment research by people in the EA community right now, who are looking to spend that on promising projects run by reliable people. There seems to be a consensus that we need to work at finding talent to spend the money on (not more outside funders).

I very much understand your hope concerning the AI talent and the promising value of this camp. However, I'd also like to see an objective assessment of effectiveness (as in effective altruism) for such research attempts. To do so, you would have to show that such research has a comparatively higher chance of producing something outstanding than the existing academic research. Of course, that needs to be done in view of empirical evidence, which I very much hope you can provide. Otherwise, I don't know what sense of "effective" is still present in the meaning of "effective altruism".

Again: I think these kinds of research camps are great as such, i.e. in view of overall epistemic values. They are as valuable as, say, a logic camp, or a camp in agent-based models. However, I would never argue that a camp in agent-based models should be financed by EA funds unless I had empirically grounded reasons to think that such research can contribute to effective charity and the prevention of possible dangers better than existing academic research can.

As for the talent search, you seem to assume that academic institutions cannot uncover such talent. I don't know where you get this evidence from, but PhD grants across the EU, for instance, are precisely geared towards such talent. Why would talented individuals not apply for those? And where do you get the idea that the topic of AI safety won't be funded by, say, the Belgian FWO or the German DFG? Again, you would need to provide empirical reasons that such a systematic bias against projects on these topics exists.

Finally, if the EA community wants to fund reliable project initiators for the topic of AI safety, why not make an open call for experts in the field to apply with project proposals and form teams that can immediately execute these projects within existing academic institutions? Where is this fear of academia coming from? Why would a camp like this be more streamlined than an expert proposal, where the PI of the given project employs the junior researchers and systematically guides them in the given research? In all other aspects of EA this is precisely how we wish to proceed (think of medical research).

For more on the thinking behind streamlined non-mainstream funding, see https://www.openphilanthropy.org/blog/hits-based-giving

I don't think academia is yet on the same page as EA with regard to AI safety, but hopefully it will be soon (with credibility coming from the likes of Stuart Russell and Max Tegmark).

But this is not about whether academia is on the same page or not; it's about the importance of pushing the results via academic channels, because otherwise they won't be recognized by anyone (policy makers especially). Moreover, what I mention above are funding institutions that finance individual projects, assessed in terms of their significance and feasibility. If there is a decent methodology to address the given objectives, even if the issue is controversial, this doesn't mean the project won't be financed. Alternatively, if you actually know of decent project applications that have been rejected, let's see those and examine whether there is indeed a bias in the field. Finally, why do you think that academia is averse to risky projects?! Take for instance ERC schemes: they are intentionally designed for high-risk/high-gain project proposals that are transformative and groundbreaking in character.

There is an analogy with speculative investing here, I think: for something to be widely regarded as worth investing in (i.e. research funded by mainstream academia), it has to already have evidence of success (e.g. Bitcoin now). By that point it is no longer new and highly promising in terms of expected value (like Bitcoin was in, say, 2011); i.e. it is necessarily the case that all things very high in (relative) expected value are outside the mainstream.

AGI alignment is gaining more credibility, but it still doesn't seem like it's that accepted in mainstream academia.

Anyway, I think we are probably on a bit of a tangent to what AISC is trying to achieve: namely, helping new researchers level up (or get a foot in the door in academic research).

Oh, I agree that for many ideas to be attractive, they have to gain a promising character. I wouldn't reduce the measure of pursuit-worthiness of scientific hypotheses to evidence of their success, though: that measure is rather a matter of prospective values, which have to do with a feasible methodology (how many research paths do we have despite current problems and anomalies?). But indeed, sometimes research may proceed simply as groping in the dark, in spite of all the good methodological proposals (as might have been the case, e.g., in the research on protein synthesis in the mid-20th century).

However, my point was simply the question: does such an investment in future proposals outweigh the investment in other topics, so that it should be funded from an EA budget rather than from existing public funds? Again: I very much encourage such camps. Just not at the expense of money meant for effectively reducing suffering (given that these projects are highly risky and are already heavily funded by, say, OpenPhil).

My point (and remmelt's) was that public funds would be harder and more time- and resource-consuming to get.

There is currently a gap at the low end (OpenPhil is too big to spend time on funding such small projects).

And Good Ventures/OpenPhil also already fill a lot of the gap in funding programs with track records of effectively reducing suffering.
