Derek Shiller


Comments

Why don't governments seem to mind that companies are explicitly trying to make AGIs?

Are you sure that they don't mind? I would be surprised if intelligence agencies weren't keeping some track of the technical capabilities of foreign entities, and I'd be unsurprised if they were keeping track of domestic entities as well. If they thought we were six months away from transformative AGI, they could nationalize it or shut it down.

Why do you find the Repugnant Conclusion repugnant?

There is a challenge here in making the thought experiment specific, conceivable, and still compelling for the majority of people. I think a marginally positive experience like sucking on a cough drop is easy to imagine (even if it is hard to really picture doing it for 40,000 years) and intuitively just slightly better than non-existence minute by minute.

Someone might disagree. There are some who think that existence is intrinsically valuable, so simply having no negative experiences might be enough to have a life well worth living. But it is hard to paint a clear picture of a life that is definitely barely worth living and involves some mix of ups and downs, because you then have to make sure that the ups and downs balance each other out, and this is more difficult to imagine and harder to gauge.

Why do you find the Repugnant Conclusion repugnant?

I find your attitude somewhat surprising. I'm much less sympathetic to trolley problems or utility monsters than to the repugnant conclusion. I can see why some people aren't moved by it, but I have a hard time seeing how someone couldn't grasp what is moving about it. Since it rests on a rather basic intuition, it isn't easy to pump. But I wonder: what do you think about this alternative, which seems to me to draw on similar intuitions?

Suppose that you could, right now, choose between continuing to live your life, with all its ups and downs and complexity, or going into a state of near-total suspended animation. In that state, you will have no thoughts and no feelings, except the sensation of sucking on a rather disappointing but not altogether bad cough drop. You won't be able to meditate on your existence or focus on the different aspects of the flavor. You won't feel pain or boredom. Just the cough drop. If you continue your life, you'll die in 40 years. If you go into suspended animation, it will last for 40,000 years (or 500,000, or 20 million, whatever number it takes). Is it totally obvious that the right thing to do is to opt for the suspended animation (at least from a selfish perspective)?

Notes on the risks and benefits of kidney donation

My logic is (deferring judgment to medical professionals) that the sheer amount of effort and money spent on facilitating kidney donations, despite the existence of dialysis, indicates that experts think the cost/benefit ratio is a good one. One reason I feel safe in this deference is that the field of medicine seems to have strong "loss aversion": doctors seem strongly concerned about direct actions that cause harm, even when it is for the greater good.

The cynical story I've heard is that insurance providers cover it because it is cheaper than years of dialysis and doctors provide it because it pays well. Some doctors are hesitant about it, particularly for non-directed donors, but they aren't the ones performing it.

I do think that is overly cynical: there are clear advantages to the recipient that make transplantation very desirable. Dialysis is a pain, and not without its risks. Quality of life definitely goes up, and life expectancy probably goes up a fair bit too. If I had to guess, I'd say donation produces something like 3-8 QALYs on average for the primary beneficiary, at a cost of about 0.5 QALYs for the donor. That is a pretty reasonable altruistic trade, but it isn't saving a life at the cost of a surgery and a few weeks' recovery.

Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping.

I agree that there are challenges for each of them in the case of an infinite number of people. My impression is that total utilitarianism can handle infinite cases pretty respectably by supplementing the standard maxim of maximizing utility with a dominance principle to the effect of 'do what's best for the finite subset of everyone that you're capable of affecting', though it also isn't something I've thought about much. I initially thought that average utilitarians couldn't make a similar move without undermining its spirit, but maybe they can. However, if they can, I suspect they can make the same move in the finite case ('just focus on the average among the population you can affect'), and that will throw off your calculations. Maybe in that case, if you can only affect a small number of individuals, the threat from solipsism can't even get going.

In any case, I would hope that SIA is at least able to accommodate an infinite number of possible people, or the possibility of an infinite number of people, without becoming useless. I take it that there are an infinite number of epistemically possible people, and so this isn't just an exercise.

Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping.

Interesting application of SIA, but I wonder if it shows too much to help average utilitarianism.

SIA seems to support metaphysical pictures in which more people actually exist. This is how you discount the probability of solipsism. But do you think you can simultaneously avoid the conclusion that there are an infinite number of people?

This would be problematic: if you're sure that there are an infinite number of people, average utilitarianism won't offer much guidance because you almost certainly won't have any ability to influence the average utility.

Thoughts on the welfare of farmed insects

Nice summary of the issues.

A couple of related thoughts:

There are some reasons to think that insects would not be especially harmed by factory farming in the way that vertebrates are. It is plausible that the largest source of suffering on factory farms is the stress produced by lack of enrichment and by unnatural, overcrowded conditions. Even if crickets are phenomenally conscious AND can suffer, they might not be capable of stress, or at least not capable of stress in the same sort of dull, overcrowded conditions as vertebrates. Given the ancient divergence of their brain structures, their very different lifestyles, and their comparatively minuscule brains, it is reasonable to be skeptical that they feel environment-induced stress. And death is conceivably such a short portion of their life that even a relatively painful death won't tip the balance.

If crickets are not harmed by the conditions of factory farms, they might instead benefit from factory farming. It seems possible that the average factory-farmed cricket has a net positive balance of good experiences over bad ones. In that case, it might be better to raise crickets in factory farm conditions than to produce equivalent amounts of non-sentient meat alternatives. The risks are not entirely on the farming side.