I live in Lyon, France. I learned about EA in 2018, found it compelling, and dug deep into the topic. The idea of "what in the world improves well-being or causes suffering the most, and what can we do about it" really influenced me a whole lot - especially when combined with meditation, which allowed me to be more active in my life.
I do a lot of personal research on a wide range of topics. I also co-wrote a book in French with recommendations on how to take action for a better world, including a chapter on EA (the title translates to "Acting for a Sustainable World", Éditions Jouvence). Since then, I've spoken at a few conferences, which is a good way to improve my public-speaking skills.
One of the most reliable things I have found so far is helping animal charities: farmed animals are much more numerous than humans (and have much worse living conditions), and there absolutely is evidence that animal charities are achieving real improvements (especially The Humane League). I've tried to donate a lot there.
Longtermism could also be important, but I think we'll hit energy limits before getting to an extinction event - I wrote an EA Forum post about that here: https://forum.effectivealtruism.org/posts/wXzc75txE5hbHqYug/the-great-energy-descent-short-version-an-important-thing-ea
I'm interested in whatever topic sounds really important, so I have a LOT of data on a lot of subjects. These include energy, the environment, resource depletion, simple ways to understand the economy, limits to growth, why we fail to solve the sustainability issue, and how we got to this very weird, specific point in history.
I also have a lot of material on Buddhism, meditation, and "what makes us happy" (check out the Waking Up app!).
That's true.
However, even given these incentives, I would have expected more votes/interactions from people favouring global health - given that it is a very established field that feels instinctively good and is well-known by most. Animal welfare was supposed to be the underdog here.
Moreover, there were fewer arguments favouring global health, and they felt much less convincing (a personal opinion, but one that seemed reflected in the votes).
So, although there is probably a bias to factor in, I still think that most people on the forum genuinely think animal welfare is the better choice for an additional $100m.
I agree. While I find the spirit of the post and the question interesting, I'm not sure the original claim is supported. Colonizing other planets remains a goal which is very far away.
The author of the book A City on Mars also appeared on the 80,000 Hours podcast to discuss these limits: https://80000hours.org/podcast/episodes/zach-weinersmith-space-settlement/
I'm not very convinced. At the very least, this absence of discussion should be a significant update against using neuron count as a proxy for moral value right now - at least until significant evidence has been provided that it can be a useful measure (of which I'd expect, at a minimum, acknowledgment by a number of top experts). Otherwise it's akin to guessing.
Even if scientists are usually not very concerned with comparing the moral value of different beings, I'd expect them to still talk significantly about the number of neurons for other reasons. For instance, I'd expect them to have formulated theories of consciousness based on the number of neurons, in which experience 'expands' as there are more of them. (I'm not formulating this precisely, but I hope you get the idea.)
Should we evaluate the potential of neuron count as a proxy? Yes.
Should we use it to make significant funding allocation decisions with literal life-or-death consequences? No. At least not from what I've seen.
This is an interesting post.
Regarding neuron weights, I came across an interesting discussion last week on a post about RP's Moral Weight Project. In that discussion, this comment by @David Mathers🔸 says the following (emphasis mine):
I am far from an unbiased party since I briefly worked on the moral weight project as a (paid) intern for a couple of months, but for what it's worth, as a philosophy of consciousness PhD, it's not just that I, personally, from an inside point of view, think weighting by neuron count is bad idea, it's that I can't think of any philosopher or scientist who maintains that "more neurons make for more intense experiences", or any philosophical or scientific theory of consciousness that clearly supports this. The only places I've ever encountered the view is EAs, usually without formal background in philosophy of mind or cognitive science, defending focusing near-termist EA money on humans. (Neuron count might correlate with other stuff we care about apart from experience intensity of course, and I'm not a pure hedonist.)
For one thing, unless you are a mind-body dualist-and the majority of philosophers are not-it doesn't make sense to think of pain as like some sort of stuff/substance like water or air, that the brain can produce more or less of. And I think that sort of picture lies behind the intuitive appeal of more neurons=more intense experiences.
The author, @NickLaing, confirmed this:
I had a bit of a scan and I couldn't find [references to neuron count for moral weight] outside the EA sphere
I was surprised because, given the frequency at which neuron counts are discussed on the forum, it felt from the outside like a position that could have some academic credibility.
Personally, this makes me think that neuron count should not be considered among the most credible ways of comparing the moral weight of different species (unless further evidence arises, of course).
Of course, I understand discussing this while emphasising its speculative aspect ("what if neurons were actually important?"). But I think that saying "here's the conclusion with RP's Moral Weight Project, here's the conclusion with neuron count" gives far too much credibility to neuron count as a measure, given the lack of evidence behind this position.
Regarding arguments against A City on Mars, I found the link you gave under titotal's post pretty interesting. It indeed lowers my credence in the book. Thanks for the link.
However, despite reading the post, I still fail to understand the long-term economics of going to space - beyond what SpaceX is ready to fund - since everything would be so expensive, with no significant added value compared to what we can do on Earth. But maybe I missed something here as well.