Devin Kalish

798 · New York, NY, USA · Joined Jan 2022

Bio

Hello, I'm Devin. I blog here along with Nicholas Kross. Currently working on a bioethics MA at NYU.

Comments (97)

It looks like Nir Eyal just joined the forum! Is there a way to invite him to the subforum?

"If the basic idea of long-termism—giving future generations the same moral weight as our own—seems superficially uncontroversial, it needs to be seen in a longer-term philosophical context. Long-termism is a form of utilitarianism or consequentialism, the school of thought originally developed by Jeremy Bentham and John Stuart Mill.

The utilitarian premise that we should do whatever does the most good for the most people also sounds like common sense on the surface, but it has many well-understood problems. These have been pointed out over hundreds of years by philosophers from the opposing schools of deontological ethics, who believe that moral rules and duties can take precedence over consequentialist considerations, and virtue theorists, who assert that ethics is primarily about developing character. In other words, long-termism can be viewed as a particular position in the time-honored debate about inter-generational ethics.

The push to popularize long-termism is not an attempt to solve these long-standing intellectual debates, but to make an end run around it. Through attractive sloganeering, it attempts to establish consequentialist moral decision-making that prioritizes the welfare of future generations as the dominant ethical theory for our times."

This strikes me as a very common class of confusion. I have seen many EAs say that what they hope for out of "What We Owe the Future" is that it will act as a sort of "Animal Liberation for future people". You don't see a ton of people saying something like "caring about animals seems nice and all, but you have to view this book in context. Secretly, being pro-animal-liberation is about being a utilitarian sentientist with an equal-consideration-of-equal-interests welfarist approach, one that awards secondary rights, like the right to life, based on personhood". This would seem like either a blatant failure of reading comprehension or a sort of ethical paranoia that can't picture any reason someone would argue for an ethical position that didn't come with their entire fundamental moral theory tacked on.

On the one hand, I think pieces like this are making a more forgivable mistake, because the basic version of the premise just doesn't look controversial enough to be what MacAskill is actually hoping for. Indeed, I personally think the comparison isn't fantastic, in that MacAskill probably hopes the book will have more influence on inspiring further action and discussion than on changing minds about the fundamental issue (which, again, is less controversial, and on which he spends less of the book).

On the other hand, he has been at special pains to emphasize, in his book, interviews, and secondary writings, that he is highly uncertain about first-order moral views, and is specifically arguing for longtermism only as a coalition around these broad issues and ways of making moral decisions on the margins. Someone like MacAskill, who is specifically arguing for a period in which we hold off on irreversible changes for as long as possible in order to get these moral discussions right, really doesn't fit the bill of someone trying to "make an end run around" these issues.

No worries, it works now!

Got it, sorry for misunderstanding.

Unfortunately, the link doesn't seem to be working for me.

Hi, I'm Devin. I'm currently a second-year Bioethics MA student at NYU. My eventual hope is to become an academic philosopher, though I know that is a long shot, and I'm open to other routes; I'm hoping to get some more clarity on some of them at EAG. I've been involved with EA for about five years and founded my undergrad's (RIT) club. My academic background is a bit all over the place: I did an individualized study at RIT concentrating in Physics and Literature, with minors in Philosophy and Math. I also have an English MA from the University of Rochester.

Sorry yeah, that was an unstated assumption of mine as well.

Re 2: I don't think the standard issues with pure aggregation can be appealed to in the strongest version of non-anti-egalitarianism, because you can add arbitrarily many steps such that in each step the pools of beneficiaries and harmed are equal in size, and the per-individual stakes of the beneficiaries are greater. Transitivity generally does imply pure aggregation for reasons like this, so it seems like in this case you'd want to deny transitivity (or, again, IIA) instead, or else you'll need to make a stronger and apparently costlier claim about how to trade off interests that isn't unique to pure aggregation.

I am very strongly attached to both dominance addition and non-anti-egalitarianism. If I were to reject a premise it would probably be transitivity, though I think there are very strong structural reasons to accept transitivity, as well as modestly strong principled ones (in all cases of intransitive values I am aware of, what you care about in a world depends on what other world it is being compared to; if this is necessarily the case for intransitive values, it requires a sort of extrinsic valuation that I am very averse to, an aversion that is also crucial to my strong acceptance of dominance addition).

A more realistic way I reject the repugnant conclusion in practice is on a non-ethical level. I don't think that my aversion to the repugnant conclusion (actually not nearly as strong as my aversion to other principled results of my preferred views, like those of pure aggregation) is sensitive to strong moral reasons at all. I think that rejection of many of these implications, for me, is part of a different sort of project altogether from principled ethics. I don't believe I can be persuaded of these conclusions by even the best principled arguments, and so I don't think there is any level on which I arrived at them because of such principles either.

I think many people find views like this to be a sort of cop-out, but I think arriving at moral rules in a way that more resembles running linear regressions on case-specific intuitions degrades an honest appreciation for both moral principles and my own psychology. I see no reason to expect a sufficiently comfortable convergence between the best versions of the two.

@throwaway151 I recommend editing this post to include a link to this comment in its body (and maybe changing the title). At this point it seems like it's Torres' word against Cremer's, and I see no reason to default to Torres' side/interpretation given this. For people who won't read the comments that carefully this seems important, especially since this post looks quiet enough now that it's unlikely this comment will be upvoted above the current top comment, which has karma in the triple digits.

On the last point, I stand corrected.
