Permanence should already fall under cost-effectiveness over time, but I agree it's not obvious where it goes in the simplified ITN framework. If we added it, I'd suggest 'persistence' as the more general framing for how long the fix lasts. And fistula repair won't ever fix the problem permanently at a global level the way eradicating a disease can, so I'd call it fairly persistent, but not fully permanent.
"EA has always bee[n] rather demanding,"
I want to clarify that this is a common but generally incorrect reading of EA's views. EA leaders have repeatedly said that you don't need to dedicate your life to it; you can simply donate to causes that others have identified as highly effective, and otherwise live your life.
If you want to do more than that, great, good for you - but EA isn't utilitarianism, so please don't confuse the demandingness of the two.
First, utilitarianism doesn't traditionally require the type of extreme species neutrality you propose here. Singer and many EAs have advanced a somewhat narrower view of what 'really counts' as utilitarian, but your argument assumes that narrower view without really justifying it.
Second, you assume future AIs will have rich inner lives that are valuable, rather than paperclipping the universe. You say "one would need to provide concrete evidence about what kinds of objectives advanced AIs are actually expected to develop" - but Eliezer has done exactly that, quite explicitly.
I very much appreciate that you are thinking about this, and the writing is great. That said, without trying to address the arguments directly, I worry that the piece justifies a conclusion you've already come to and explores analogies you like, rather than exploring the arguments and trying to decide which side to take; it doesn't embrace scout mindset enough to be helpful.
On AIxCyber field building, you might be interested in Heron, which launched this past year with Open Philanthropy support.