Completely agree with your first 2 points!
With the 3rd, I feel that the incentive to do things that are less effective in absolute terms but more appealing to non-EA funders already exists; whether someone should act on it depends on how much effectiveness they would have to sacrifice, and how their project compares to other potential uses of the EA funding.
That being said, I'm of the (completely subjective) opinion that there are probably lots of cases where a 'pull non-EA funding towards a relatively more EA project' approach will have a greater counterfactual impact than a 'create a very EA project and get EA funding for it' approach. But as Owen said below, it's definitely a case-by-case kind of thing.
As far as I'm aware, the main EA electoral reform org (electionscience.org) advocates for approval voting rather than PR, so I think a successful criticism of electoral reform as a cause area would require comparing approval voting and other voting system ideas to both PR and FPTP.
I was thinking about this earlier; the negative counterfactual impact of starting new charities feels like a very valuable topic for someone to investigate.
Also, I agree that "Where is the funding coming from?" is a super important question when assessing replaceability / the counterfactual, and I think a norm of seeking non-EA funding first for EA projects would be a good thing (though it might already be a thing; I'm not sure).
Related to this, a reasonable question I can see progressives asking is "Why do EAs not prioritise anti-racism / feminism / LGBT rights?"
While EAs could argue that drug decriminalisation and criminal justice reform in America are closely related to anti-racism, I think there are some important philosophical questions to answer here related to how EA chooses to define a cause area, and why we don't seem to think of anti-racism / feminism / LGBT rights as cause areas. I have no idea what a good answer would look like.
I also don't think that the last discussion on this forum of how we define cause areas made much progress.
I agree, but I feel that in practice the leftists I come across use the term to mean 'working against the class you grew up in', and use it exclusively for people who grew up poor and working class.
Not OP or at Harvard Law but anecdotally I know plenty of people who would consider themselves to be leftists, fit in the anti-oppression cluster, but wouldn't think that just going to Harvard Law makes you a class traitor. I think for many it would depend on what the Harvard Law grad actually did as a profession, eg - are you a corporate lawyer (class traitor) or a human rights lawyer (not class traitor).
That being said, I also think that the mainstreaming of social justice issues means that increasing numbers of people in the intersectionality/anti-oppression cluster don't know about / care about / support ideas about class struggle and class war, so aren't really 'leftists' in that sense of the word.
I think of 'equality' as having 2 major versions:
1. Equal consideration: everyone's interests count equally when deciding what to do.
2. Equal outcomes: everyone actually ends up with roughly equal utility / wellbeing.
EA and utilitarianism generally focus on the first version.
In most cases, I think that focusing on either of these versions gives us the same conclusions, eg - EA approaches to global health, development and animal welfare.
In my opinion, longtermism is the only strand of EA where our attempts to maximise utility do not also bring us closer to all individuals having equal utility.
And I think your idea of 'just' and 'fair' actions depends on which version of equality you value more. Personally, I value the first version more, so the actions that I see as 'just' and 'fair' are almost entirely the ones that EA endorses.
Is anyone aware of previous writings by EAs on founding think tanks as a way of having an impact over the long term?
In the UK, I think the Fabian Society and the Centre for Policy Studies are continuing to influence British politics long after the deaths of their founders.
Is anyone aware of any research / blog posts specifically on how much free-range hens suffer? Most of the ones I can find keep deviating from this question.
Like others have said, I suspect that neutrality on making happy people isn't the majority view amongst EAs.
But I am neutral on making happy people, which means that I am not particularly worried about extinction, but I still think EA work surrounding extinction is a priority, because almost all of this work also helps to prevent other 'worst case scenarios' that do not necessarily involve extinction (https://forum.effectivealtruism.org/posts/nz26sqMNf7kfFDg8y/longtermism-which-doesn-t-care-about-extinction-implications).
I think a preference for extinction over a point in time with a small amount of suffering only holds if, on top of being 'time-agnostic' and neutral on making happy people, you are a strict negative utilitarian (you only care about reducing suffering, not about increasing pleasure), and the small amount of suffering cannot be eliminated at a later point in time.