This is a seed for further articulation on why EA should rate the importance of biodiversity higher:
Focusing on human x-risk implicitly assumes that there is something uniquely valuable about being human or having human experience. But then, why should we assume that there is nothing uniquely valuable about the existence of any of the other millions of species that inhabit our planet? We know very little about the experience of other species, and the limited information we do have is filtered through human eyes, so we are potentially ignorant of a whole range of experience that other species have but we lack. Yes, ground beetles cannot build rockets and have no potential for colonizing our local cluster of galaxies, but it is our conditioned human experience, our bias, that lets us so readily dismiss the inherent value of the existence of ground beetles.
As has been mentioned many times before, EA overrepresents welfarist/utilitarian frameworks, which begin by holding the individual human as a moral agent and then define everything else as a moral patient worthy of consideration by the yardstick of the human experience of pain and pleasure. Yes, within this framework, biodiversity is most likely of limited utility. But again, this is NOT the only 'valid' ethical lens out there, and there are many arguments for why its basic assumptions are dubious. If aliens exist, perhaps they have no conception of pleasure or pain, or are so supersentient that, relative to them, we are as conscious as an insect.
If one were to treat the system itself as a 'moral patient' (as some cultures do), then irreversibly removing parts of the system causes harm to it. Including something as abstract as a 'system' in the circle of moral concern may sound absurd, but again, many cultures, among Indigenous Americans for example, have held that the well-being of the whole system of plants, animals, and people is worthy of consideration.
Depending on your moral framework, protecting biodiversity is an incredibly urgent and neglected cause area, given the magnitude and speed of biodiversity loss (estimates run as high as 30% total loss within this century alone). There is also, as other commenters noted, enormous potential to apply EA-style thinking to this problem. To end with an opinion: protecting biodiversity for the sake of biodiversity is the epitome of the word altruism, as it lies outside considerations of utility for ourselves or our progeny (through whom we like to imagine ourselves living vicariously after our death).
I think this definitely is something that should be considered more under the lens of effective altruism. Currently, the vast majority of EA efforts come from a welfarist perspective, and if I understand correctly, biodiversity loss is roughly neutral from that perspective. I'd guess that this is the main reason for the neglect, other than simply no one having taken up the cause.
It's definitely important to optimize "doing the most good" under moral frameworks other than welfarism. In particular, I'd be very happy to see an analysis of the best ways to contribute to preventing biodiversity loss, along with a clear explanation of the moral framework involved (why it's reasonable, and whether biodiversity loss is indeed the most important cause within that framework).
Broadly speaking, I think there are two main ways of actually going about this in the EA community. One would be to develop this idea further and engage with the "intellectual" effort of figuring out how to do the most good: say, by writing more about it, or by reaching out to people to discuss it (perhaps at the upcoming EAG). The other would be to set up an EA project along these lines and try to secure funding from EA Funds, Open Phil, or elsewhere. I'd expect both to be very challenging and to take a long time.
I agree. I'd like to see more diversity of ethical frameworks in EA and attract people with diverse ethical outlooks, since EA is theoretically compatible with frameworks other than utilitarianism.