Four free CFAR programs on applied rationality and AI safety

by AnnaSalamon, 10th Apr 2016



CFAR will be running four free programs this summer that are in various ways intended to help with EA/xrisk, all of which are currently accepting applications:

EuroSPARC (July 19-26 in Oxford, UK): A program on applied rationality and cognition for mathematically talented high schoolers from anywhere in the world.

CFAR for Machine Learning Researchers (SF area, Aug 30 - Sept 4): A program for students and researchers in the fields of machine learning and artificial intelligence. Includes a 4-day applied rationality workshop plus a day for discussion of long-term AI impacts.

MIRI Summer Fellows Program (SF area, June 19 - July 4): A program for people with strong math backgrounds who are interested in doing technical AI safety research. Includes applied rationality content and training from MIRI researchers on components of their technical research program.

Workshop on AI Safety Strategy (SF area, May 29 - June 5): A program for people curious about the AI safety landscape and how they might influence it. Includes training on forecasting skills and strategizing about the AI landscape. Also includes admission to a standard CFAR applied rationality workshop.

We are also running a (paid) standard applied rationality workshop in the SF area May 18-22, which is open to almost anyone who wants to improve their reasoning and effectiveness skillset; the workshops have gotten better over time. Some financial aid is available, especially for EAs.

(People fly in for all of these, and all of these programs are residential, so don't feel as though you need to already be in the SF Bay Area; some travel assistance is available for the free programs.)

Human capital seems to be a big bottleneck for EA & xrisk. If you know someone who might benefit from one of these programs, consider sending them a quick email suggesting they apply.

Thanks, and we hope to see some of you this summer! Questions are also welcome.