Given your background, I'll take as given your claim that disentanglement research is both very important and a very rare skill. With that said, I feel like there's a reasonable meta-solution here, one that's at least worth investigating. Since you've identified at least one good disentanglement researcher (e.g., Nick Bostrom), have you considered asking them to design a test to assess potential researchers?
The suggestion may sound a bit silly, so I'll elaborate. I read your article and found it compelling. I may or may not be a good disentanglement researcher, but per your article, I probably am not. So your article has simultaneously raised my awareness of the issue and dissuaded me from trying to help with it. The initial pessimism, followed by your suggestion to "Read around in the area, find something sticky you think you might be able to disentangle, and take a run at it", all but guarantees low follow-through from your audience.
Ben Garfinkel's suggestion in your footnote is a step in the right direction, but it doesn't go far enough. If your claim that the skill is easy to recognize is accurate, then this is fertile ground for designing an actual filter. I'm imagining a well-defined scope (for example, a classic 1-2 page "essay test") posing an under-defined disentanglement research question. There are plenty of EA-minded folk out there who would happily spend an afternoon thinking about, and writing up their response to, an interesting question for its own sake (cf. the entire internet), and even more who'd do it in exchange for receiving a "disentanglement grade". Realistically, most submissions could be scored a clear F or D within a paragraph or two (perhaps modulo some initial beta testing and calibration of the writing prompt), and the responses requiring a more careful reading will be worth your while for precisely the reason that makes this exercise interesting in the first place.
TL;DR: don't define a problem and then immediately discourage everyone from helping you solve it.