[ Question ]

Why were people skeptical about RAISE?

by casebash · 4th Sep 2019 · 1 min read · 8 comments

RAISE was a project that aimed to build an online course for AI safety. It shut down after its attempt at a study didn't show any significant improvement, but I know that some people were skeptical of the project's goal, not just its failure to achieve that goal. What was the worry here? Was it related to excessively growing the size of the field, the idea that anyone capable of significantly contributing wouldn't need an on-ramp, the choice of topics, or something else?

4 Answers

I was mostly skeptical because the people involved did not seem to have any experience doing any kind of AI Alignment research, nor did they themselves seem to have the technical background they were trying to teach. I think this caused them to focus on the obvious things to teach, instead of the things that are actually useful.

To be clear, I have broadly positive impressions of Toon and think the project had promise; it's just that the team didn't have the skills to execute on it, which I think few people do.

> anyone capable of significantly contributing wouldn't need an on-ramp

That's approximately why I was skeptical, although I want to frame it a bit differently. I expect that the most valuable contributions to AI safety will involve generating new paradigms, asking questions that nobody has yet thought to ask, or something like that. It's hard to teach the skills that are valuable for that kind of work.

I got the impression that RAISE was mostly oriented toward producing people who become typical MIRI researchers. Even if MIRI's paradigm is the right one, I expect that MIRI needs atypically good researchers, and would only get minor benefits from someone who is struggling to become a typical MIRI researcher.


Depends what you call the "goal".

If you mean "make it easier for new people to get up to speed", I'm all for that goal; it encompasses a significant chunk of the value of the Alignment Newsletter.

If you mean "create courses that allow new people to get the required mathematical maturity", I'm less excited. Such courses already exist, and while mathematical thinking is extremely useful, mathematical knowledge mostly isn't. (Mathematical knowledge is more useful for MIRI-style work, but I'd guess it's still not that useful.)

My notes from the time suggest I thought the team was inexperienced relative to the difficulty of the project, and that their roadmap was poorly calibrated.