I recently completed a PhD exploring the implications of wild animal suffering for environmental management. You can read my research here: https://scholar.google.ch/citations?user=9gSjtY4AAAAJ&hl=en&oi=ao
I am now considering options in AI ethics, governance, or the intersection of AI and animal welfare.
I think a big challenge is that different interventions often appear better or worse depending on the time horizon. That's apparent in this case: if corporate campaigns get a dozen companies to commit to buying cage-free eggs, that will have benefits in a matter of years. It's not clear what the long-term impacts will be (maybe a changing corporate culture that becomes more conscious of animal welfare? Maybe higher prices, leading to lower consumption of animal products?), but the theory of change isn't normally spelled out over that time horizon. For alternative proteins, the short-term benefits are rather modest, and the much more important benefits seem long-term: if they can lead to a much bigger plant-based market, there's a good chance that people will be more willing to consider changing their diet (and their ethics) completely.
I would really like to see more long-term theories of change within animal advocacy. I even find it a bit odd that this isn't more common, given the buzz around longtermism within EA.
To be clear, my reason for disgust isn't that I think eating meat is impure. It's that seeing dead animals, or parts of dead animals, reminds me of the life that once existed and is now gone. This is the same reason I would never find human flesh appetizing: seeing dead people fills me with sorrow, and seeing many dead people in places where dead people shouldn't be (such as on supermarket shelves) fills me with horror, because it reminds me of the atrocity that continues.
Many vegetarians don't have these emotions because they haven't fully recognized the atrocity for what it is. They aren't really coming to terms with the scale of suffering. Many vegans don't either, but I don't think it's possible, for most people, to really be cognizant of the atrocity for what it is and not have this emotional reaction. For this reason, I want to be horrified and disgusted by meat, because I don't want to ignore the scale of the suffering around me. I want to be aware of and motivated by this wrong (except, of course, when that becomes overwhelming and counterproductive). And while I'm referring to myself here, I think we would have much more pro-animal action if other people tried to internalize nonspeciesism in this way too.
I hope that makes sense.
Sorry I missed this comment when you originally made it.
These are good arguments, but I disagree that you're hitting the same benefits. They're the same kind of reasons for adopting that diet, but the benefits are different:
Regarding the last point, I would agree... except that the supply of meat is increasing, both globally and in the West. Efforts to change the minds of individual people aren't working, and won't work as long as structural issues (e.g. the subsidisation of meat) continue.
Hey Arnold, thanks for the message. Hmm do I understand the question correctly as: if I'm optimising for impact, but wary of burnout, I want to donate as much as possible without lowering my standard of living to an unsustainable level? And you're saying what's personally satisfying might not be the best thing?
That's certainly true. I'm almost sure that I could do more. I suppose I'm just wary of trying to optimise too much, because I think it can be emotionally draining. To be honest, I think social factors can be really important here: I would be willing to optimise more if those around me were doing so too. But I'm not sure that answers your question!
My intention would be to gradually increase. So in the past I was earning just slightly above the median and gave 15%. In general, I think it's good to have an idea of what income you're comfortable with, and then increase donations significantly as you pass that point. But I set the bar really high here just because I'm aware that my perception of what is enough might change in different life stages.
To be honest I think my model is super crude and probably not ideal, I would really like to see other models like this!
I hope I'm not taking this too seriously, but the examples Bob gave are of looking with concern for the bugs' welfare. Entomologists presumably do that more than your average Instagram user because they actually study and handle the bugs. Others might just look at photos of bugs the same way they look at photos of plants or landscapes.
I feel a bit confused by this strategy. The normal idea of voting is to express your preference, such that the outcome reflects what the majority prefers.
If people treat it rather as an opportunity to communicate to others, that seems likely to distort the outcome. In regular political elections I'm ok with that, but in this context where voters are voting altruistically, I'm less sure.
I'm also confused because the act of writing here is a signal, and probably a clearer one! Could you not have done that and voted for who you genuinely think should be 'elected'?
I agree. Maybe we can just say that veganism focuses on the wrong behavior? In addition to donating, I think voting can be more important than your individual diet. Many animal advocacy or rights organizations seem to recognize this, and speak of "animal advocates" or "animal rights advocates" to be more inclusive. They certainly do this for events where they seek to attract a lot of people, like animal rights marches. But for sure, veganism continues to be overemphasized.
I also agree that the definition given in the post doesn't reflect popular usage, which is probably something like:
This doesn't seem particularly maximizing. The first part reflects the moral commitment, and yes, it's possible to be perfectionist about it, but it isn't fundamentally so. The second part demands evidence of that moral commitment, and it's also far from maximizing, since not consuming animal products is very achievable for most people. So, as long as this definition is interpreted in a reasonable way, it doesn't seem particularly maximalist.
It seems like the wrong framing to talk about a "positive vision" for the transition to superintelligence, if that transition involves immense risks and is generally a bad idea. If you think the transition could be “on a par with the evolution of Homo sapiens, or of life itself” but compressed into years, then that surely involves immense risks (of very diverse kinds!).
From what I've heard you say elsewhere, I think you basically agree with this. But then, surely you must agree that the priority is to delay this process until we can make sure it's safe and well-controlled. And if you are going to talk about positive visions, then I'd say it's really important that such visions come with an explicit disclaimer that they describe a future we should be actively trying to avoid. I'm afraid that otherwise these articles might give people the wrong idea.
Edit: to make my point clearer, I think a good analogy would be to imagine yourself right before the development of nuclear power (including the nuclear bomb). Suppose other people are already talking about the risks, and the technology seems likely to arrive anyway, so maybe it's worth thinking about how we can build a good future with nuclear. Fine. But given the risks (and that many people still aren't aware of them), talking about a good nuclear future without flagging that the best course of action would be to delay developing the technology until we're sure we can avoid catastrophe seems like a potential infohazard.