The first EuroSPARC was in 2016. Since it targets 16-19 year olds, my prior is that participants should mostly still be studying rather than working full-time on EA, with only exceptional cases.
Long feedback loops are certainly a disadvantage.
Also, in the meantime ESPR has undergone various changes, and it is not actually optimising for something like a "conversion rate to an EA attractor state".
I. I did spend a considerable amount of time thinking about prioritisation (broadly understood).
My experience so far is
a few examples, where in some cases I got as far as writing something up
II. My guess is there are more people who work in a similar mode: trying to basically "build as good a world model as you can", diving into the problems you run into, and in the end prioritising informally based on such a model. Typically I would expect such a model to be partly implicit / some sort of multi-model ensemble / ...
While this may not create visible outcomes labeled as prioritisation, I think it's an important part of what's happening now.
I posted a short version of this, but I think people found it unhelpful, so I'm trying to post a somewhat longer version.
I'm not sure you've read my posts on this topic? (1,2)
In the language used there, I don't think the groups you propose would help people reach the minimum recommended resources, but they are at risk of creating the appearance that some criteria vaguely in that direction are met.
Overall, I think small obstacles - such as having to find EAs from your country in the global FB group, on the EA Hub, or by other means - are sometimes a good thing!
FWIW, the "Why not to rush to translate effective altruism into other languages" post was quite influential, but is, in my opinion, often wrong / misleading / advocating a very strong prior toward inaction.
I don't think this is actually neglected.
(more on the topic here)
For example, CAIS and something like the "classical superintelligence in a box" picture disagree a lot on the surface level. However, if you look deeper, you will find many similar problems. A simple-to-explain example is the problem of manipulating the operator, which has (in my view) a "hard core" involving both math and philosophy: you want the AI to communicate with humans in a way which at the same time a) allows the human to learn from the AI if the AI knows something about the world, b) ensures the operator's values are not "overwritten" by the AI, and c) does not prohibit moral progress. In CAIS language this is connected to so-called manipulative services.
Or: one of the biggest hits of the past year is the mesa-optimisation paper. However, if you are familiar with prior work, you will notice that many of the solutions proposed for mesa-optimisers are similar or identical to solutions previously proposed for so-called "daemons" or "misaligned subagents". This is because the problems partially overlap (the mesa-optimisation framing is clearer and makes a stronger case for "this is what to expect by default"). Also, while on the surface level there is a lot of disagreement between e.g. MIRI researchers, Paul Christiano and Eric Drexler, you will find a "distillation" proposal targeted at the above-described problem in Eric's work from 2015, and many connected ideas in Paul's work on distillation; and while I find Eliezer harder to understand, I think his work also reflects an understanding of the problem.
For example: you can ask whether the space of intelligent systems is fundamentally continuous or not (I call this "the continuity assumption"). This is connected to many agendas - if the space is fundamentally discontinuous, that would cause serious problems for some forms of IDA, debate, interpretability, and more.
(An example of discontinuity would be the existence of problems which are impossible to meaningfully factorize; there are many more ways in which the space could be discontinuous.)
There are powerful intuitions going both ways on this.
I think the picture is somewhat correct, and, perhaps surprisingly, we should not be too concerned about the dynamic.
My model for this is:
1) there are some hard and somewhat nebulous problems "in the world"
2) people try to formalize them using various intuitions/framings/kinds of math, also drawing on some "very deep priors"
3) the resulting agendas look extremely different at the surface level, and create the impression you describe
4) if you understand multiple agendas deeply enough, you get a sense of the common underlying problems
Overall, given our current state of knowledge, I think running these multiple efforts in parallel is a better approach, with a higher chance of success, than the idea that we should invest a lot in resolving disagreements/prioritizing so that everyone can work on the "best agenda".
This seems to go against a core EA heuristic ("compare the options, take the best"), but it is actually more in line with rational allocation of resources in the face of uncertainty.