
RAISE was a project aiming to build an online course for AI safety. It shut down because their attempt at a study didn't show any significant improvement, but I know that some people were sceptical of the project's goal, not just of its failure to achieve that goal. What was the worry here? Was it related to excessively growing the size of the field, the idea that anyone capable of significantly contributing wouldn't need an on-ramp, the choice of topics, or something else?

4 Answers

I was mostly skeptical because the people involved did not seem to have any experience doing any kind of AI Alignment research, or to themselves have the technical background they were trying to teach. I think this caused them to focus on the obvious things to teach, instead of the things that are actually useful.

To be clear, I have broadly positive impressions of Toon and think the project had promise; it's just that the team didn't actually have the skills to execute on it, which I think few people have.

>anyone capable of significantly contributing wouldn't need an on-ramp

That's approximately why I was skeptical, although I want to frame it a bit differently. I expect that the most valuable contributions to AI safety will involve generating new paradigms, asking questions that nobody has yet thought to ask, or something like that. It's hard to teach the skills that are valuable for that.

I got the impression that RAISE was mostly oriented toward producing people who become typical MIRI researchers. Even if MIRI's paradigm is the right one, I expect that MIRI needs atypically good researchers, and would only get minor benefits from someone who is struggling to become a typical MIRI researcher.


>RAISE was oriented toward producing people who become typical MIRI researchers... I expect that MIRI needs atypically good researchers.

Slightly odd phrasing here, which I don't really understand, since I think the typical MIRI researcher is very good at what they do, and that most of them are atypically good researchers compared with the general population of researchers.

Do you mean instead "RAISE was oriented toward producing people who would be typical for an AI researcher in general"? Or do you mean that there are only minor benefits from additional researchers who are about as good as current MIRI researchers?

PeterMcCluskey, 5y
I meant something like "good enough to look like a MIRI researcher, but unlikely to turn out to be more productive than the average MIRI researcher". I guess when I wrote that I was feeling somewhat pessimistic about MIRI's hiring process. Given optimistic assumptions about how well MIRI distinguishes good from bad job applicants, I'd expect that MIRI wouldn't hire RAISE graduates.

Depends what you call the "goal".

If you mean "make it easier for new people to get up to speed", I'm all for that goal. That goal encompasses a significant chunk of the value of the Alignment Newsletter.

If you mean "create courses that allow new people to get the required mathematical maturity", I'm less excited. Such courses already exist, and while mathematical thinking is extremely useful, mathematical knowledge mostly isn't. (Mathematical knowledge is more useful for MIRI-style work, but I'd guess it's still not that useful.)

I'm not sure I understand the difference between mathematical thinking and mathematical knowledge. Could you briefly explain or give a reference? (e.g. I am wondering what it would look like if someone had a lot of one and very little of the other)

Rohin Shah, 5y
Mathematical knowledge would be knowing that the Pythagorean theorem states that a² + b² = c²; mathematical thinking would be the ability to prove that theorem from first principles. The way I use the phrase, mathematical thinking doesn't only encompass proofs. It would also count as mathematical thinking if you figure out that means are affected by outliers more than medians are, even if you don't write down any formulas, equations, or proofs.
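
As a minimal sketch of that outlier point (the dataset here is made up purely for illustration): adding a single extreme value shifts the mean substantially while the median barely moves.

```python
# Demonstrate that the mean is more sensitive to outliers than the median.
from statistics import mean, median

data = [1, 2, 3, 4, 5]        # hypothetical small dataset
with_outlier = data + [1000]  # same data plus one extreme value

print(mean(data), median(data))                  # 3, 3
print(mean(with_outlier), median(with_outlier))  # ~169.17, 3.5
```

One outlier dragged the mean from 3 to roughly 169, while the median moved only from 3 to 3.5; no proof or formula was needed to see why.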

My notes from the time suggest I thought the team was inexperienced relative to the difficulty of the project, and that their roadmap was poorly calibrated.
