It might be too hard to envision an entire grand future, but it's possible to envision specific wins in the short and medium term. A short-term win could be large cage-free egg campaigns succeeding; a medium-term win could be a global ban on caged layer hens. Similarly, a short-term win for AI safety could be a specific major technical advance or significant legislation passed; a medium-term win could be AGIs coexisting with humans without the world descending into chaos, while still delivering massive positive benefits (e.g. a cure for Alzheimer's).
One possible way to get most of the benefits of talking to a real human being while getting around the costs that salius mentions is to have real humans serve as templates for an AI chatbot to train on.
You might imagine a single person per "archetype" to start with. That way, if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up into Dannybots and Rupertbots that talk to millions of conflicted people while preserving privacy, helping assure people they aren't being judged by a real human, etc.
One thing I don't understand is whether this approach is immune to fanaticism/takeover by moral theories that place very little (but nonzero) value on hedonism. Naively, a theory that (e.g.) values virtue at 10,000x hedonism will simply swamp hedonism-centric views under this approach, unless you additionally normalize in a different way.
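To make the swamping worry concrete with made-up numbers (the theories, credences, and values below are all hypothetical, chosen only to illustrate the arithmetic): under naive expected-choiceworthiness aggregation, even a 1% credence in a theory that values virtue 10,000x as much as hedonism dominates a 99% credence in a hedonism-centric theory.

```python
# Toy illustration of fanatical swamping under naive expected-value
# aggregation across moral theories. All numbers are hypothetical.

# Credences in each moral theory.
credences = {"hedonism": 0.99, "virtue_heavy": 0.01}

# Each theory's value for two options, on its own internal scale.
# The virtue-heavy theory rates the virtue option 10,000x higher
# than the hedonism theory rates the pleasure option.
values = {
    "hedonism":     {"maximize_pleasure": 1.0, "cultivate_virtue": 0.0},
    "virtue_heavy": {"maximize_pleasure": 0.0, "cultivate_virtue": 10_000.0},
}

def expected_choiceworthiness(option):
    """Credence-weighted sum of each theory's value for the option."""
    return sum(credences[t] * values[t][option] for t in credences)

ec = {opt: expected_choiceworthiness(opt)
      for opt in ("maximize_pleasure", "cultivate_virtue")}
# The 1%-credence theory swamps the 99%-credence one:
# maximize_pleasure -> 0.99, cultivate_virtue -> 100.0
print(ec)
```

The result flips only if you renormalize the theories' internal scales (e.g. variance normalization) rather than taking their stated magnitudes at face value, which is exactly the extra step the naive approach omits.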
Ah, interesting that you think many people put >50% on hedonism and similarly-animal-friendly theories. 50% was intended to be generous; the last animal-welfare-friendly person I asked about this was 20-40% IIRC. Pretty sure I am even lower.
One thing to be careful of re: question framing is to constrain the set of theories under consideration to altruism-relevant theories. E.g. many people will place nontrivial credence in nihilism, egoism, or commonsense morality, but most of those theories will not be particularly relevant to prioritizing the altruistic allocation of marginal donations.
You'd either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)
I'm not sure I buy this disjunctive claim. Many people over humanity's history have worked on reducing infant mortality (in technology, in policy, in direct aid, and in direct actions that prevent their own children/relatives' children from dying). While some people worked on this because they primarily intrinsically value reducing infant mortality, I think many others were inspired by the indirect effects. And taking the long view, reducing infant mortality clearly had long-run benefits that are different from (and likely better than) equivalent levels of population growth while keeping infant mortality rates constant.
I guess I still don't think of "I would need to spend a lot of time as a representative of this position" as being an anti-animal advocate. I spend a lot of time disagreeing with people on many different issues and yet I'd consider myself an advocate for only a tiny minority of them.
Put another way, I view the time spent as just one of the costs of being known as an anti-animal advocate, rather than as what constitutes being one.
I think people take this into account, but not enough or something? I strongly suspect that when evaluating research, many people have a vague, insufficiently precise sense of both the numerator and the denominator, and their vague intuitions aren't sufficiently linear. I know I do this myself unless it's a grant I'm actively investigating.
This is easiest to notice in research because research is both a) a large fraction of (non-global-health-and-development) EA output and b) very gnarly. But I don't think research is unusually gnarly among EA outputs; grants, advocacy, comms, etc. have similar issues.