Devin Kalish

1891 karma · Tarrytown, NY, USA
Interests:
Bioethics

Bio


Hello, I'm Devin. I blog here along with Nick/Heather Kross. Recently got my bioethics MA, now looking into getting a philosophy PhD.

Sequences
1

Alcoholism Appendices Sequence

Comments
245

Devin Kalish
30% disagree

I think "morality" as we discuss it and as I use it has many realish properties - I think things would be good or bad whether or not moral agents had ever come to exist (so long as moral patients did), I think we can be uncertain about which theory of ethics is "right" to begin with, and I don't think the debate to resolve this uncertainty is ultimately semantic. I think ethics has most of the stuff real things have except for the "being real" part.

 

I'm not super confident on this, but I note that most explanations of what ethics is fall into one of two categories: dubious empirical predictions ("ethics is the one theory all rational beings would converge on given enough time and thought"), or muddled restatements of a normative ethical theory ("ethics is some hypothetical ideal contract between distinct agents", or "what is good for all beings taken together").

 

Maybe more personally, I think that any explanation of what we mean by "objective ethics" would have to be something such that, if we programmed a perfect superintelligence to determine the correct answer to it, I would be satisfied deferring to whatever answer it gave without further explanation. To borrow/restate a thought experiment of Brian Tomasik's, if a perfect "ethicsometer" told me that the correct ethical theory was torturing as many squirrels as possible, I would have just learned that I don't care about ethics. I would go further than this, though, and say that the ethicsometer had failed to even satisfy what I mean by "ethics". I've been recommended Simon Blackburn's work on this; it seems possible I have a view most like what he calls "quasi-realism".

Devin Kalish
21% agree

Overall I care more about preventing the worst scenarios than promoting the very best. While I am worried about scenarios worse than extinction, and most of my ambivalence comes from the possibility of these, I would count extinction as a scenario I care about preventing substantially more than I care about bringing about very positive futures.

While there's less work on improving the longer-term future, I also find what work there is not that promising compared to the extinction-prevention work - and the longer we survive, the more likely I find it that we will have the abundance required to make conflicts of values very cheap to resolve.

While I can't bring myself to lean very far, because of the scenarios worse than extinction and the possibility of harder-to-coordinate futures in which very bad things can continue somewhere basically indefinitely, preventing extinction still seems more important than much of the current work towards improving futures, and far more actionable than the rest.

Thanks! It's actually almost the other way around - the original essay this was based on was specifically about environmental restoration, but I've been thinking about expanding it to touch on the issue of terraforming for a little while, a concern of some consequentialists in the wild animal welfare community like Brian Tomasik. This draft touches on this idea briefly, but when I make the final draft, it will likely include a section more dedicated to the topic.

Sorry I wasn't able to show up, I was looking forward to it but woke up with a real nasty stomach bug. Will there be any more sessions like this?

I had this idea a while ago and meant to see if I could collaborate with someone on the research, but at this point, barring major changes, I would rather just see someone else do it well and efficiently. Fentanyl test strips are a useful way to avoid overdoses in theory, and for some drugs they can be helpful for this, but in practice the market for opioids is so flooded with adulterated products that they aren't that useful, because opioid addicts will still use drugs with fentanyl in them if that's all that's available. Changes in policy and technology might help with this, and obviously the best solution is for opioid addicts to detox on something like Suboxone and then abstain, but a sort of speculative harm-reduction idea occurred to me at some point that seems actionable now with no change in the technological or political situation.

Presumably these test strips have a concentration threshold below which they can't detect fentanyl, so it might be possible to dilute some of the drug enough that, if the concentration of fentanyl is above a given level, it will set off the test, and if it's below that level, it won't. There are some complications with this that friends have mentioned to me (fentanyl has a bit of a clumping tendency, for instance), but I think it would be great if someone figured out a practical guide for how to use test strips to determine the over/under concentration of a given batch of opioids, so that active users can adjust their dosage to try to avoid overdoses. Maybe someone could even make and promote an app based on the idea.
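To make the dilution arithmetic concrete, here is a minimal back-of-the-envelope sketch, assuming a strip whose detection limit is known as a concentration. Every number and name in it (the detection limit, the cutoff fraction, the sample mass, the `water_volume_ml` helper) is a hypothetical placeholder for illustration only, not real harm-reduction guidance, and it ignores complications like clumping:

```python
# Hypothetical sketch of the dilution idea above. All numbers are made-up
# placeholders, not real harm-reduction guidance.

def water_volume_ml(sample_mass_mg: float,
                    target_fentanyl_fraction: float,
                    strip_detection_limit_ng_per_ml: float) -> float:
    """Volume of water to dissolve the sample in so that the strip reads
    positive roughly when the sample's fentanyl fraction exceeds the target.

    At exactly the target fraction, the fentanyl mass in the sample is
    sample_mass_mg * target_fentanyl_fraction (in mg), i.e. that * 1e6 ng.
    We want that mass, spread over V mL of water, to sit right at the
    strip's detection limit: mass_ng / V = limit, so V = mass_ng / limit.
    """
    fentanyl_mass_ng = sample_mass_mg * target_fentanyl_fraction * 1e6
    return fentanyl_mass_ng / strip_detection_limit_ng_per_ml


# Placeholder example: a 10 mg sample, a strip that detects 200 ng/mL,
# and a "too strong" cutoff of 1% fentanyl by mass.
volume = water_volume_ml(sample_mass_mg=10,
                         target_fentanyl_fraction=0.01,
                         strip_detection_limit_ng_per_ml=200)
print(f"Dissolve the sample in about {volume:.0f} mL of water")
# Positive strip -> fentanyl fraction likely above ~1%
# Negative strip -> fentanyl fraction likely below ~1%
```

The practical guide I have in mind would basically be a worked-out, validated version of this kind of calculation, which is exactly the part I'd want someone with actual chemistry knowledge to get right.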

Maybe an inherently drafty idea, but I would love if someone wrote a post on the feasibility of homemade bivalvegan cat food. I remember there was a cause area profile post a while ago talking about making cheaper vegan cat food, but I'm also hoping to see if there's something practical and cheap right now. Bivalves seem like the obvious candidate for this - less morally risky than other animal products, probably enjoyable for cats or able to be made into something enjoyable, and containing the necessary nutrients. I don't know any of that for sure, or whether there are other things you can add to the food or supplement on the side that would make a cat diet like this feasible, and I would love if someone wrote up a practical report on this for current or prospective cat owners.

Pertinent to this idea for a post I’m stuck on:

What follows from conditionalizing the various big anthropic arguments on one another? Like, assuming you think the basic logic behind the simulation hypothesis, grabby aliens, Boltzmann brains, and many worlds all works, how do these interact with one another? Does one of them "win"? Do some of them hold conditional on one another but fail conditional on others? Do ones more compatible with one another have some probabilistic dominance (like, this is true if we start by assuming it, but also might be true if these others are true)? Essentially I think this confusion is pertinent enough to my opinions on these styles of arguments in general that I'm satisfied just writing about this confusion for my post idea, but I feel unprepared to actually do the difficult, dirty work of pulling expected conclusions about the world from this consideration, and I would love it if someone much cleverer than me tried to actually take the challenge on.

Topic from last round:

Okay, so, this is kind of a catch-all. Out of the possible post ideas I commented last year, I never posted or wrote "Against National Special Obligation", "The Case for Pluralist Evaluation", or "Existentialist Currents in Pawn Hearts". So this is just the comment for "one of those".
