[1. Do you think many major insights from longtermist macrostrategy or global priorities research have been found since 2015?]
I think "major insights" is potentially a somewhat loaded framing; it seems to imply that only highly conceptual considerations that change our minds about previously-accepted big picture claims count as significant progress. I think very early on, EA produced a number of somewhat arguments and considerations which felt like "major insights" in that they caused major swings in the consensus of what cause areas to prioritize at a very high level; I think that probably reflected that the question was relatively new and there was low-hanging fruit. I think we shouldn't expect future progress to take the form of "major insights" that wildly swing views about a basic, high-level question as much (although I still think that's possible).
[2. If so, what would you say are some of the main ones?]
Since 2015, I think we've seen good analysis and discussion of AI timelines and takeoff speeds, discussion of specific AI risks that go beyond the classic scenario presented in Superintelligence, better characterization of multipolar and distributed AI scenarios, some interesting and more quantitative debates on giving now vs. giving later and "hinge of history" vs. "patient" longtermism, etc. None of these have provided definitive or authoritative answers, but they all feel useful to me as someone trying to prioritize where Open Phil dollars should go.
[3. Do you think the progress has been at a good pace (however you want to interpret that)?]
I'm not sure how to answer this; I think taking into account the expected low-hanging fruit effect, and the relatively low investment in this research, progress has probably been pretty good, but I'm very uncertain about the degree of progress I "should have expected" on priors.
[4. Do you think that this pushes for or against allocating more resources (labour, money, etc.) towards that type of work?]
I think ideally the world as a whole would be investing much more in this type of work than it is now. A lot of the bottleneck is that the work is not very well-scoped or broken into tractable sub-problems, which makes it hard to onboard a large number of people quickly.
[5. Do you think that this suggests we should change how we do this work, or emphasise some types of it more?]
Related to the above, I'd love for the work to become better-scoped over time -- this is one thing we prioritize highly at Open Phil.
This may sound really obvious in retrospect, but Evan G. Williams' 2015 paper (summarized here) made a pretty convincing case to me that, conditional on moral realism being broadly true, we're all almost certainly unknowingly guilty of large moral atrocities.
There are several steps here that I think are interesting:
Even though I'm not a moral realist, I feel like this paper had a substantial effect on how I view the demands of morality, and over the years I've slowly internalized the message that this type of thing is hard (I'm also maybe 15% less optimistic about moral hedging as a robust strategy than I would've been had I not read this paper).
These points feel so obvious in retrospect that I'd be surprised if they weren't all covered before 2015, so I'd be interested in whether philosophers and philosophy students here can point to earlier sources.