Thank you so much for the feedback!
I did think about working for a government department (non-partisan), but I decided against it. From my understanding, you can't work for 'the Crown' while running for office; you'd have to take leave or quit.
That was my thinking with the space agency, as I don't believe working there counts as working for the Crown.
I hadn't thought about the UK Civil Service. I've never looked into it. I don't think that would affect me too much, as long as I'm not a dual citizen.
I haven't completely ruled out earning to give. I worked in the energy industry for 18 months before my PhD, earning to give, but felt a pretty low personal fit for it. If I found a job I was also intrinsically passionate about, I would consider it, but not otherwise.
Ah I hadn't thought about the for-profit plant-based food tech side of things, thanks, I'll think about that.
Am I reading the 0.1% probability for nuclear war correctly as the probability that nuclear war breaks out at all, or as the probability that it breaks out and leads to human extinction? If it's the former, this seems much too low. Consider that twice in history nuclear war was likely averted by the actions of a single person (e.g. Stanislav Petrov), and we have had several other close calls ( https://en.wikipedia.org/wiki/List_of_nuclear_close_calls ).
When I say that the idea is entrenched in popular opinion, I'm mostly referring to people in the space science/engineering fields, whether as workers, researchers or enthusiasts. This is anecdotal, based on my experience as a PhD candidate in space science. In the broader public, you'd be right that people think about it much less; however, the researchers and policy makers are the ones you'd need to convince for something like this, in my view.
We were also pretty close to carrying out an asteroid redirect mission (ARM); it was only cancelled in the last few years. It was for a small asteroid (~a few metres across), but it could certainly happen sooner than I think most people suspect.
Neat, I'll have to get in touch, thanks.
I guess that would indeed make them long-term problems, but my reading has been that they are catastrophic risks rather than existential risks, in that they don't seem to have much likelihood (relative to other x-risks) of eliminating all of humanity.
My impression is that people do overestimate the cost of 'not eating meat' or veganism by quite a bit (at least for most people in most situations). I've tried to come up with a way to quantify this. I might need to flesh it out a bit more, but here it is.
So suppose you are trying to quantify the sacrifice of being vegan, either relative to being vegetarian or to an average diet. If I were asked the minimum amount of money I would have to have received to have been vegan rather than non-vegan for the last 5 years, assuming ZERO ethical impact of any kind, it would probably be $500 (with hindsight; cue the standard list of possible biases). This doesn't seem very high to me. My experience has been that most people who have become vegan say they vastly overestimated the sacrifice they thought was involved.
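To make it concrete how small that willingness-to-accept figure is, here's a rough sketch of the implied per-day and per-meal "cost" (the $500 over 5 years is from above; the meals-per-day figure is an illustrative assumption of mine, not anything measured):

```python
# Implied "cost" of veganism given a $500 willingness-to-accept over 5 years.
total_compensation = 500.0   # dollars, the hypothetical minimum payment
years = 5
days = years * 365
meals_per_day = 2.5          # ASSUMED: meals where diet choice actually matters

per_year = total_compensation / years       # dollars per year
per_day = total_compensation / days         # dollars per day
per_meal = per_day / meals_per_day          # dollars per affected meal

print(f"${per_year:.0f}/year, ${per_day:.2f}/day, ${per_meal:.2f}/meal")
# -> $100/year, $0.27/day, $0.11/meal
```

On these numbers, the self-reported sacrifice works out to roughly a dime per meal, which is the sense in which the cost seems widely overestimated.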
If one thought there were diminishing returns on the sacrifice of being vegan over vegetarian, perhaps the calculus is better for being vegetarian over non-vegetarian, or for being vegan 99% of the time, making an exception only when eating at your grandparents' house. I see too many people say 'well, I can't be vegan because I don't want to upset my grandpa when he makes his traditional X dish'. Well, ok, so be vegan in every other aspect then. And as a personal anecdote: when my nonna found out she couldn't make her traditional Italian dishes for me anymore, she got over it very quickly and found vegan versions of all of them [off-topic, apologies!].
I also suspect that the reason people are comfortable thinking about longtermism and sacrifice like this for non-humans but not for humans is that they may think humans are still significantly more important. I think this is the case when you count flow-on effects, but not intrinsically (e.g. 1 unit of suffering for a human vs a non-human).
I think the intrinsic worth ratio for most non-human animals is close to 1 to 1. I think the evidence suggests that their capacity for suffering is fairly close to ours, and some animals may arguably have an even higher capacity for suffering than us (I should say I'm a strictly wellbeing/suffering-based utilitarian on this).
I think the burden of proof should be on showing why humans deserve significantly more intrinsic moral worth. We all evolved from a common ancestor, and while there might be a sliding scale of moral worth from us down to insects, it seems strange for there to be such a sharp drop-off immediately after humans, even within mammals. I would strongly err on the side of caution when applying this to my ethics, given our constantly expanding circle of moral consideration throughout history.
Self-plugging as I've written about animal suffering and longtermism in this essay:
To summarise some key points: a lot of why I think promoting veganism in the short term will be worthwhile in the long term comes down to values spreading. Given the possibility of digital sentience, promoting the social norm of caring about non-human sentience today could have major long-term implications.
People are already talking about introducing plants, insects and animals to Mars as a means of terraforming it. This would enormously increase the amount of wild-animal suffering. Even if we never leave our solar system, terraforming just one body, let alone several, could nearly double the amount of wild-animal suffering. There's also the possibility of bringing factory farms to Mars. I'm doing a PhD in space science and still get shut down when I try to say 'hey, let's maybe think about not bringing insects to Mars'. This is far from being a practical concern (maybe 100-1000 years away), but it's never too early to start shifting social norms.
I'd call this mid-term rather than long-term, but the impacts of animal agriculture on climate change, zoonotic disease spread and antibiotic resistance are significant.
I'd like to echo Peter's point as well. We don't ask these questions about a lot of other actions that would be unethical in the short term; there seems to be a bias in EA circles towards asking this kind of question specifically about non-human animal exploitation. I'm arguing more for consistency than claiming we can't argue that a short-term good could lead to a long-term bad that is net negative.
Thanks for sharing, I've saved the dates! I look forward to seeing how this model plays out. Do you have any thoughts on whether the UK/Europe community might feel 'left out'? Are there plans for other EAGx conferences in Europe?
For some of the examples, it seems unclear to me how they differ from just reacting quickly generally. In other words, what makes these examples of 'ethical' reactions and not just 'technical' reactions?