I would sum this article up as "A speciesist society capable of tiling itself across the galaxy is a frightening one we should be actively working to avoid, and this conclusion is robust to a wide variety of future scenarios with respect to AGI, factory farming, wild animal suffering, and alien civilizations."
I was glad to see James Faville link to Tobias Baumann's post on Longtermism and animal advocacy. I'll highlight a few quotes relevant to your questions (I especially like the third one):
it stands to reason that good outcomes are only possible if those in power care to a sufficient degree about all sentient beings… What hope is there of a good long-term future (for all sentient beings) as long as people think it is right to disregard the interests of animals (often for frivolous reasons like the taste of meat)? Generally speaking, the values of (powerful) people are arguably the most fundamental determinant of how the future will go, so improving those values is a good lever for shaping the long-term future.
Folks in the comments here have described a number of mechanisms for the immense suffering risks associated with a long-term future that includes animal agriculture, or more generally lacks concern for animals. Those examples make it clear to me that moral progress (with respect to animals, but elsewhere too) is a necessary but not sufficient condition for a positive long-term future. Organizations focused on making moral progress (in this conversation, animal advocacy charities) are pretty clearly contributing to the longtermist cause. Of course, that's not to say animal advocacy charities are the most effective intervention from a longtermist perspective, but right now my sense is that longtermism suffers from a dearth of projects worth funding and is less concerned with ranking their effectiveness.
a longtermist outlook implies a much stronger focus on achieving long-term social change, and … This entails a focus on the long-term health and stability of the animal advocacy movement.
Meta-charities like ACE, Faunalytics, and Encompass are examples of such organizations, and would probably represent the best fit for a philanthropist influenced by longtermism interested in animal advocacy.
it is crucial that the movement is thoughtful and open-minded… we should also be mindful of how biases might distort our thinking (see e.g. here) and should consider many possible strategies, including unorthodox ones such as the idea of patient philanthropy.
A focus on building epistemic capacity in the animal advocacy movement leads you to similar organizations.
I clearly need to read "How to Create a Vegan World"! Adding it to my reading list.
I certainly want to live in a vegan world, i.e. one where the wellbeing of non-human animals is given equal consideration to that of people. But I'm not sure I want to live in an "EA world." Maybe that's a failure of my own imagination, but it's hard for me to even think about what that would look like...
As a "Big Tent EA," I'd like to see EA grow, not only to increase its impact in a strict sense of scale, but also to reform, refine, and expand its ideas. There are certain EA values that I'd like to see become universal — generally various expansions of the moral circle, e.g. cosmopolitanism, veganism, longtermism — but I'm not as sure about widespread adoption of the movement itself. Does a world where everyone's moral circle is a bit larger have to be an "EA world?"
I see the "incremental" vs. "optimal" approach as a bit of a false dichotomy, in the sense that what you're really arguing for (or at least what I'd argue for) is that EA needs more on-ramps. As you mentioned, plant-based burgers normalize veganism and give folks a clearer path to caring about animals. Donating to US-based GiveDirectly leads to donating abroad.
Given the multiple orders of magnitude of difference in effectiveness between US and developing-country charities (to take global wellbeing as an example cause area), it seems difficult to argue that donating domestically "does more good" unless it leads to even more good from that person or group down the line, i.e. unless there is momentum or a flywheel effect on someone's altruism. But as a "Big Tent EA," I would love to see more focus on EA on-ramps, because that compounding effect seems real and substantial to me.
But maybe I'm biased because I followed that path? Hopefully still!
Building off of Mathias' question, it seems like the idea behind CE is to find passionate generalists, give them a few months of training/mentorship/support, and set them off to implement a research-backed intervention. How do expertise, experience, and personal fit play a role in founding a successful charity?
Personally, I can't imagine signing up for the incubation program without having a concrete idea that I'd feel uniquely capable of working on. Is that common?
How would you distribute the value proposition of attending the program for aspiring charity entrepreneurs between things like:
1) thoroughly researched charity ideas
2) networking with other EAs for things like co-founders, mentors, and donors
3) seed funding/legal services/office space
Given the ongoing global pandemic, it seems the program will likely be conducted remotely. What will that mean for the program? For example, the curriculum includes quite a few in-person team-building activities and a substantial amount of project-based work. Does CE have plans it can share for adjusting to this new reality?
What does "Hits-based Giving" look like for animal advocacy?
Should we be focusing more on work that is "more than 90% likely to fail, as long as the overall expected value is high enough"?
If "effectively all the returns are concentrated in a few big winners, and ... the best ideas look initially like bad ideas", then how can ACE use evidence in an "epistemically permissive" way to find those big winners?