All of ZacharyRudolph's Comments + Replies

Exploring Existential Risk - using Connected Papers to find Effective Altruism aligned articles and researchers

Strong upvoted. I made a graph with it for a paper I intend to use for my summer research project and quickly found other papers I was unaware of, which I expect will be helpful.  

4 · MaxRa · 5mo: Same here, thanks a lot for the post! Would be really cool if this leads to new connections in the growing field of longtermist academia.
4 · MichaelPlant · 7mo: Yes, there is some overlap here, certainly. OPP has, as I understand it, worked on drug decriminalisation, cannabis legalisation, and prison reform, all within the US. What we might call 'global drug legalisation' goes further with respect to drug policy reform (legal, regulated markets for all drugs, and global rather than just US scope), but it also wouldn't cover non-drug-related prison reforms.
What key facts do you find are compelling when talking about effective altruism?

That 11,000 children died yesterday, will die today and are going to die tomorrow from preventable causes. (I'm not sure if that number is correct, but it's the one that comes to mind most readily.)

The importance of how you weigh it

TLDR: Very helpful post. Do you have any rough thoughts on how someone would pursue moral weighting research?

Wanted to say, first of all, that I found this post really helpful in crystallizing some thoughts I've had for a while. I've spent about a year researching population axiologies (admittedly at the undergrad level) and have concluded that something like a critical-level utilitarian view is close enough to a correct view that there's not much left to say. So, in trying to figure out where to go from there (and especially whether to pursue a ... (read more)

3 · Joe_Carlsmith · 8mo: Glad to hear you found it helpful. Unfortunately, I don't think I have a lot to add at the moment re: how to actually pursue moral weighting research, beyond what I gestured at in the post (e.g., trying to solicit lots of your own/other people's intuitions across lots of cases, trying to make them consistent, that kind of thing). Re: articles/papers/posts, you could also take a look at GiveWell's process here [https://docs.google.com/document/d/1hOQf6Ug1WpoicMyFDGoqH7tmf3Njjc15Z1DGERaTbnI/edit#], and the moral weight post [https://www.lesswrong.com/posts/2jTQTxYNwo6zb3Kyp/preliminary-thoughts-on-moral-weight#Moral_weights_of_various_species] from Luke Muehlhauser I mentioned has a few references at the end that might be helpful (though most of them I haven't engaged with myself). I'll also add, FWIW, that I actually think the central point in the post is more applicable outside of the EA community than inside it, as I think of EA as fairly "basic-set oriented" (though there are definitely some questions in EA where weightings matter).
Boundaries of Empathy and Their Consequences

I'm mostly using "person" to be a stand in for that thing in virtue of which something has rights or whatever. So if preference satisfaction turns out to be the person-making feature, then having the ability to have preferences satisfied is just what it is to be a person. In which case, not appropriately considering such a trait in non-humans would be prima facie wrong (and possibly arbitrary).

1 · MichaelStJules · 2y: I agree, but I think it goes a bit further: if preference satisfaction and subjective wellbeing (including suffering and happiness/pleasure) don't matter in themselves for a particular nonhuman animal with the capacity for either, how can they matter in themselves for anyone at all, including any human? I think a theory that does not promote preference satisfaction or subjective wellbeing as an end in itself for the individual is far too implausible. I suppose this is a statement of a special case of the equal consideration of equal interests.
Boundaries of Empathy and Their Consequences

I'm familiar with the general argument, but I find it persuasive in the other direction. That is, I find it plausible that there are human animals for whom personhood fails to obtain, so ~(2). [Disclaimer: I'm not making any further claim to know what sort of humans those might be, nor even that coming to know the fact of the matter in a given case is within our powers.] I don't know if consciousness is the right feature, but I worry that my intuitive judgements on these sorts of features are ad hoc (and will just pick out whatever group I a... (read more)

2 · MichaelStJules · 2y: I think if you decide what we should promote in a human for its own sake (and there could be multiple such values), then you'd need to explain why it isn't worth promoting in nonhumans. For example, if preference satisfaction matters in itself for a human, then why does the presence or absence of a given property in another animal imply that it does not matter for that animal? For example, why would the absence of personhood, however you want to define it, mean the preferences of an animal don't matter, if they still have preferences? In what way is personhood relevant and nonarbitrary where, say, skin colour is not? Like "preferences matter, but only if X". The "but only if X" needs to be justified, or else it's arbitrary, and anyone can put anything there. I see personhood as binary, but also graded. You can be a person or not, and if you are one, you may have the qualities that define personhood to a greater or lesser degree. If you're interested in some more reading defending the case for the consideration of the interests of animals along similar lines, here are a few papers: https://philpapers.org/rec/HORWTC-3 and https://stijnbruers.wordpress.com/2018/12/13/speciesism-arbitrariness-and-moral-illusions/amp/
Boundaries of Empathy and Their Consequences

Yes! It's much more conducive to conversation now, and I've changed my vote accordingly.


To actually engage with your question: I personally find (1) to be the most motivating reason to adopt a more vegetarian diet, since I'm more compelled by the idea that my actions might be harming other persons. Regardless, (1) and (2) are both grounded in empirical observations (and both are seriously questionable in how much of a difference they make in the individual case: see this and the number of confounding factors in veg diets causin... (read more)

2 · MichaelStJules · 2y: I think the best explanation for the moral significance of humans is consciousness. Conscious individuals (and those who have been and can again be conscious) matter because what happens to them matters to them. They have preferences and positive and negative experiences. On the other hand, (1) something that is intelligent (or has any other property) but could never be conscious doesn't matter in itself, while (2) a human who is conscious but not intelligent (or any other property) would still matter in themself. I think most would agree with (2) here (but probably not (1)), and we can use it to defend the moral significance of nonhuman animals, because the category "human" is not in itself morally relevant. Are you familiar with the argument from species overlap? https://www.animal-ethics.org/argument-species-overlap/
Boundaries of Empathy and Their Consequences

"(3) The ethical argument: killing or abusing an animal for culinary enjoyment is morally unsound"

I'm understanding abuse as being wrong by definition, just as murder is by definition a wrongful killing. (3) thus seems, transparently, to be a case of arguing that something that is wrong is wrong. But, I agree, this by itself wouldn't warrant downvoting so much as how the generally dismissive tone of the writing came off as assuming some moral high ground, e.g. "to accept that this being with no identity, little conceivable intellect... (read more)

5 · Tihitina · 2y: I see what you mean and I've made some significant changes (let me know if you don't think they are significant enough). But I want to make it clear that I am not claiming neutrality on the issue; I am trying to troubleshoot why one side of the argument is not being received. That being said, I don't want my position to distract or deter people from helping with the troubleshooting, so I am grateful you said something.
Boundaries of Empathy and Their Consequences

Downvoted for question-begging in the way you phrased the "ethical argument," and descriptions like "the mere desire of taste." [Edit: I changed my vote based on changes made.]

4 · MichaelStJules · 2y: Unless the post has been edited, I don't see this as necessarily question begging, although I can also see why you might think that. My reading is that the claim is assumed to be true, and the post is about how to best convince people of it (or to become more empathetic) in practice, which need not be through a logical argument. It's not about proving the claim. It could be that making it easier for people to avoid animal products is a way to convince them (or the next generation) of the claim. Another way might be getting them to interact with or learn more about animals and their personalities.
What are your thoughts on my career change options? (AI public policy)

In that case, it seems plausible that you (and your coworkers) will do more and better work if you're not just ascetically grinding away for decades (and if they aren't spending time around someone like that). Perhaps a good next step is to shadow, intern with, or talk to people currently doing these jobs to learn what they look like day to day?

What are your thoughts on my career change options? (AI public policy)

I don't think I can give much specific advice, but it doesn't seem like you're putting much weight on what you want to do. For instance, it seems like you're somewhat disappointed that 80k advised against working in AI ethics. If so, I'd suggest maybe applying anyway or considering good programs not in the top 10 (most school rankings seem fairly arbitrary in my experience anyway), with the knowledge that you might have to be a little more self-motivated to do "top 10"-quality work.

Alternatively, it might be the... (read more)

1 · Nathan Young · 2y: I suppose I'm not putting much weight on it, other than what is required to keep me working at a problem for the long term. The issue there is that I don't know what working at many of these jobs will be like... In terms of desires, I would like most of all to have a legitimate ethical system. I value that more than my own wellbeing and my own desires. So I don't really care what I want other than instrumentally. I do things I *want* on my own time, whereas I think for my career I'd like to maximise as much as I can. At least I think so; it's hard to know what you really want, right? Perhaps I'll end up justifying what I want to do anyway. I suppose this process at least stops me making significantly non-maximal choices.
Want to Save the World? Enter the Priesthood

I'm not sure I understand your objection, but I feel like I should clarify that I'm not endorsing consequentialism as a sort of moral criterion (that is, the thing in virtue of which something is right or wrong) so much as I take the "effective" part of effective altruism to imply using some sort of nonmoral consequentialist reasoning. As far as I understand (which isn't far), a Catholic moral framework would still allow for some sort of moral quantification (that some acts are more good than others or are good to a greater degree), e... (read more)

Want to Save the World? Enter the Priesthood

You're right. What I was trying to get at was that I presume Catholics would start with different answers to axiological questions like "what is the most basic good?" Where I might offer a welfarist answer, the Church might say "a closeness to God" (I'm not confident in that). Thus, if a Catholic altruist applies the "effective" element of EA reasoning, the way to do the most good in the world might end up looking like aggressive evangelism in order to save the most souls. And that if we're trying to convin... (read more)

Tl;dr: the moral framework of most religions is different enough from EA's to make this reasoning nonsensical; it's an adversarial move to try to change religions' moral frameworks, but there's potentially scope for religions to adopt EA tools


Like I said in my reply to khorton, this logic seems very strange to me. Surely the veracity of the Christian conception of heaven/hell strongly implies the existence of an objective, non-consequentialist morality? At that point, it's not clear why "effectively doing the most good" in this man... (read more)

Want to Save the World? Enter the Priesthood

I've spent some time seriously trying to convince a devout Catholic friend of mine about EA. The problem, as far as I can tell, is that EA and the Church have value systems that are almost directly at odds. I mean that if you take their value system seriously, the rational course of action isn't EA. At least, not in the manner meant here.

My understanding: Essentially, the Church already has an entrenched long-termist view. It's just that the hugely disvaluable outcome is a soul or souls spending eternity in hell (or however long in purgato... (read more)

5 · Liam_Donovan · 2y: I'm fairly confident the Church does not endorse basing moral decisions on expected value analysis; that says absolutely nothing about the compatibility of Catholicism and EA. For example, someone with an unusually analytical mindset might see participation in the EA movement as one way to bring oneself closer to God through the theological virtue of charity.
Advice for an Undergrad

I started quantitatively "upskilling" almost a year ago exactly after eschewing math classes for.. a while. I spent this past academic year taking the calc series. Now working through MITOpenCourseware's multivariable this summer to test out of it when I get to AC.

Contingent on testing out, it should only be two math classes per semester to meet the requirements.


Advice for an Undergrad

Do you recall which Facebook group/page? I searched the "Effective Altruism" group for keywords like major/college but didn't find anything.

Thanks for the class suggestion. I'll look into what they offer on that.

2 · DavidNash · 2y: It is probably this career discussion group [https://www.facebook.com/groups/473795076132698/].
2 · Khorton · 2y: It might have been the career advice group, but I'm not sure.
Advice for an Undergrad

Thank you, I've actually read that article before. I asked here because there seem to be all kinds of factors that would confound the usefulness of the advice there, e.g. it might be tailored to the average reader or their ideal reader, or limited by what they want to publicly advise.

I figured responses here might be less fit to that curve and thus more useful, since I'm not confident I'm on that curve.