[1. Do you think many major insights from longtermist macrostrategy or global priorities research have been found since 2015?]
I think "major insights" is potentially a somewhat loaded framing; it seems to imply that only highly conceptual considerations that change our minds about previously-accepted big picture claims count as significant progress. I think very early on, EA produced a number of somewhat arguments and considerations which felt like "major insights" in that they caused major swings in the consensus of what cause areas to prioritize at a very high level; I think that probably reflected that the question was relatively new and there was low-hanging fruit. I think we shouldn't expect future progress to take the form of "major insights" that wildly swing views about a basic, high-level question as much (although I still think that's possible).
[2. If so, what would you say are some of the main ones?]
Since 2015, I think we've seen good analysis and discussion of AI timelines and takeoff speeds, discussion of specific AI risks that go beyond the classic scenario presented in Superintelligence, better characterization of multipolar and distributed AI scenarios, some interesting and more quantitative debates on giving now vs. giving later and "hinge of history" vs. "patient" longtermism, and more. None of these have provided definitive or authoritative answers, but they all feel useful to me as someone trying to prioritize where Open Phil dollars should go.
[3. Do you think the progress has been at a good pace (however you want to interpret that)?]
I'm not sure how to answer this. Taking into account the expected low-hanging-fruit effect and the relatively low investment in this research, I think progress has probably been pretty good, but I'm very uncertain about the degree of progress I "should have expected" on priors.
[4. Do you think that this pushes for or against allocating more resources (labour, money, etc.) towards that type of work?]
I think ideally the world as a whole would be investing much more in this type of work than it is now. Much of the bottleneck is that the work is not very well-scoped or broken into tractable sub-problems, which makes it hard to onboard a large number of people quickly.
[5. Do you think that this suggests we should change how we do this work, or emphasise some types of it more?]
Related to the above, I'd love for the work to become better-scoped over time -- this is one thing we prioritize highly at Open Phil.
[Off the top of my head. I don't feel like my thoughts on this are very developed, so I'd probably say different things after thinking about it for 1-10 more hours.]
[ETA: On a second reading, I think some of the claims below are unhelpfully flippant and, depending on how one reads them, uncharitable. I don't want to spend the significant time required for editing, but want to flag that I think my dispassionate views are not super well represented below.]
Things that immediately come to mind, not necessarily the most important levers:
I expect that some of the resulting specialists would have a natural home in existing academic disciplines and others wouldn't, but I can't immediately think of examples.
I think in principle it'd be great if there were more RSP-type things, but I'm not sure whether they're good to expand at the margin given opportunity costs.
However, I expect that for most people the best training setup would not be RSP-type things but a combination of:
This is because I do agree there are important components of "EA/rationalist mindware and knowledge" without which I expect even super smart and extremely skilled people to have little impact. But I'm really skeptical that the best way to transmit these is to have people hang out for years in insular, low-stimulation environments. I think we can transmit them in much less time, and in a way that doesn't compete as much with robustly useful skill acquisition; if not, we can figure out how to do so.
I expect RSP-type things to be targeted at people in more exceptional circumstances, e.g. people who have good plans that don't fit into existing institutions or who need time to "switch fields".