[1. Do you think many major insights from longtermist macrostrategy or global priorities research have been found since 2015?]
I think "major insights" is potentially a somewhat loaded framing; it seems to imply that only highly conceptual considerations that change our minds about previously-accepted big picture claims count as significant progress. I think very early on, EA produced a number of somewhat arguments and considerations which felt like "major insights" in that they caused major swings in the consensus of what cause areas to prioritize at a very high level; I think that probably reflected that the question was relatively new and there was low-hanging fruit. I think we shouldn't expect future progress to take the form of "major insights" that wildly swing views about a basic, high-level question as much (although I still think that's possible).
[2. If so, what would you say are some of the main ones?]
Since 2015, I think we've seen good analysis and discussion of AI timelines and takeoff speeds, discussion of specific AI risks that go beyond the classic scenario presented in Superintelligence, better characterization of multipolar and distributed AI scenarios, some interesting and more quantitative debates on giving now vs. giving later and "hinge of history" vs. "patient" long-termism, etc. None of these have provided definitive or authoritative answers, but they all feel useful to me as someone trying to prioritize where Open Phil dollars should go.
[3. Do you think the progress has been at a good pace (however you want to interpret that)?]
I'm not sure how to answer this; I think taking into account the expected low-hanging fruit effect, and the relatively low investment in this research, progress has probably been pretty good, but I'm very uncertain about the degree of progress I "should have expected" on priors.
[4. Do you think that this pushes for or against allocating more resources (labour, money, etc.) towards that type of work?]
I think ideally the world as a whole would be investing much more in this type of work than it is now. A lot of the bottleneck is that the work is not very well-scoped or broken into tractable sub-problems, which makes it hard to onboard a large number of people quickly.
[5. Do you think that this suggests we should change how we do this work, or emphasise some types of it more?]
Related to the above, I'd love for the work to become better-scoped over time -- this is one thing we prioritize highly at Open Phil.
I think that's a very interesting question, and one I've sometimes wondered about.
Oversimplifying a bit, my answer is: We need neither just bloggers nor just orgs like FHI and CLR. Instead, we need to move from a model where epistemic progress is achieved by individuals to one where it is achieved by a system characterized by a diversification of epistemic tasks, specialization, and division of labor. (So in many ways I think: we need to become more like academia.)
Very roughly, it seems to me that early intellectual progress in EA often happened via distinct and actionable insights found by individuals. E.g. "AI alignment is super important" or "donating to the best as opposed to typical charities is really important" or "current charity evaluators don't help with finding impactful charities" or "wow, if I donate 10% of my income I can save many lives over my lifetime" or "oh wait, there are orders of magnitude more wild animals than farmed animals, so we need to consider the impact of farmed animal advocacy on wild animals".
(Of course, it's a spectrum. Discussion and collaboration were still important; my claim is just that, early on, there were significantly more "insights within individuals" than later.)
But it seems to me that most of the low-hanging fruit has been plucked. So it can be useful to look at other, more mature epistemic endeavours. And if I reflect on those, it strikes me that in some sense most of the important cognition isn't located in any single mind. E.g. for complex questions about the world, it's the system of science that delivers answers via irreducible properties like "scientific consensus". And while in hindsight it's often possible to summarize epistemic progress in a way that can be understood by individuals, and that looks like it could have been achieved by them, the actual progress was distributed across many minds.
(Similarly, the political system doesn't deliver good policies because there's a superintelligent policymaker but because of checks and balances etc.; the justice system doesn't deliver good settlement of disputes because there's a super-Solomonic judge but because of the rules governing court cases, in which different parties play different roles: attorneys, the prosecution, judges, etc.)
This also explains why, I think correctly, discussions on how to improve science usually focus on systemic properties like funding, incentives, and institutions, as opposed to, say, how to improve the IQ or rationality of individual scientists.
And similarly, I think we need to focus less on how to improve individuals and more on how to set up a system that can deliver epistemic progress across larger time scales and larger numbers of people less selected by who happens to know whom.