The first EuroSPARC was in 2016. Since it targets 16-19 year olds, my prior is that participants should still mostly be studying rather than working full-time on EA, except in rare cases.
Long feedback loops are certainly a disadvantage.
Also, in the meantime ESPR has undergone various changes, and it is not actually optimising for something like "conversion rate to an EA attractor state".
Quick reaction:
I. I did spend a considerable amount of time thinking about prioritisation (broadly understood).
My experience so far is:
a few examples, where in some cases I got as far as writing something up
II. My guess is there are more people who work in a similar mode: basically trying to build as good a world model as you can, diving into the problems you run into, and in the end prioritising informally based on such a model. Typically I would expect such a model to be partly implicit / some sort of multi-model ensemble / ...
While this may not produce visible outputs labelled as prioritisation, I think it is an important part of what is happening now.
I posted a short version of this, but I think people found it unhelpful, so I'm trying to post a somewhat longer version.
I'm not sure you've read my posts on this topic? (1,2)
In the language used there, I don't think the groups you propose would help people reach the minimum recommended resources, but they are at risk of creating the appearance that some criteria vaguely in that direction are met.
Overall, I think small obstacles - such as having to find EAs from your country in the global FB group, on the EA Hub, or by other means - are sometimes a good thing!
FWIW, the "Why not to rush to translate effective altruism into other languages" post was quite influential, but in my opinion it is often wrong / misleading / advocates a very strong prior toward inaction.
I don't think this is actually neglected
(more on the topic here)
Sure
a)
For example, CAIS and something like the "classical superintelligence in a box" picture disagree a lot at the surface level. However, if you look deeper, you will find many similar problems. A simple-to-explain example is the problem of manipulating the operator, which has (in my view) a "hard core" involving both maths and philosophy: you want the AI to communicate with humans in a way which at the same time a) lets the human learn from the AI if the AI knows something about the world, b) ensures the operator's values are not "overwritten" by the AI, and c) does not prohibit moral progress. In CAIS language this is connected to so-called manipulative services.
Or: one of the biggest hits of the past year is the mesa-optimisation paper. However, if you are familiar with prior work, you will notice that many of the solutions proposed for mesa-optimisers are similar or identical to solutions previously proposed for so-called "daemons" or "misaligned subagents". This is because the problems partially overlap (the mesa-optimisation framing is clearer and makes a stronger case that this is what to expect by default). Also, while at the surface level there is a lot of disagreement between, e.g., MIRI researchers, Paul Christiano and Eric Drexler, you will find a "distillation" proposal targeted at the above-described problem in Eric's work from 2015 and many connected ideas in Paul's work on distillation; and while I find Eliezer harder to understand, I think his work also reflects an understanding of the problem.
b)
For example: you can ask whether the space of intelligent systems is fundamentally continuous or not (I call this "the continuity assumption"). This is connected to many agendas - if the space is fundamentally discontinuous, it would cause serious problems for some forms of IDA, debate, interpretability and more.
(An example of discontinuity would be the existence of problems which are impossible to meaningfully factorise; there are many more ways in which the space could be discontinuous.)
There are powerful intuitions going both ways on this.
1.
For a different take on a very similar topic, check this discussion between me and Ben Pace (my reasoning was based on the same Sinatra paper).
2.
For practical purposes, my impression is that some EA recruitment efforts are more often at risk of over-filtering on ex-ante proxies and being bitten by tails coming apart than they are at risk of not being selective enough.
Also, the practical optimisation question is often how much effort you should spend on how extreme a tail of the ex-ante distribution (a toy simulation of this is sketched below).
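To make the "tails coming apart" worry concrete, here is a minimal sketch (my illustration, not from the original comment). It assumes the ex-ante proxy and the realised impact are jointly normal with an assumed correlation of 0.6, and checks how much the proxy's extreme tail overlaps with the actual impact's extreme tail:

```python
# Toy simulation of "tails coming apart" under an assumed proxy-impact correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
r = 0.6  # assumed correlation between ex-ante proxy and realised impact

impact = rng.standard_normal(n)
proxy = r * impact + np.sqrt(1 - r**2) * rng.standard_normal(n)

for top_frac in (0.10, 0.01, 0.001):
    k = int(n * top_frac)
    top_by_proxy = set(np.argsort(proxy)[-k:])   # selected on the proxy
    top_by_impact = set(np.argsort(impact)[-k:])  # actually highest impact
    overlap = len(top_by_proxy & top_by_impact) / k
    print(f"top {top_frac:.1%}: proxy-tail vs impact-tail overlap = {overlap:.2f}")
```

Under these assumptions the overlap shrinks as you select further into the proxy's tail, which is the sense in which aggressive filtering on ex-ante proxies can backfire.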
3.
A meta-observation: someone should really encourage more EAs to join the complex systems / complex networks community.
Most of the findings from this research project seem to be based on research originating in the complex networks community, including research directions such as the "science of success", and there is more that can be readily used, "translated" or distilled.