All of Derek Shiller's Comments + Replies

Animal Welfare: Reviving Extinct (human) intermediate species?

That humans and non-human animals are categorically distinct seems to be based on the fairly big cognitive and communicative gap between humans and the smartest animals.

There is already a continuum between the cognitive capacities of humans and animals. Peter Singer has pointed to cognitively disabled humans in arguing for better treatment of animals.

Do you think homo erectus would add something further? People often (arbitrarily) draw the line at species, but it seems to me that they could just as easily draw it at any clade. Growing fetuses display a ... (read more)

1 · Franziska Fischer · 15d
I think your example of fetuses as the variation between single cells and adults is very apt here. My claim would probably be something along the lines of: "the fact that 8-month-old fetuses exist (which usually may not be killed anymore) is a strong reason why, in most countries, 4-month-old fetuses have a lot of legal & societal protection." If there were nothing in between the 4-month fetus and the born baby, I don't think many countries would ban abortion of 4-month-old fetuses; rather, the protection is there because of the transition. Thus the existence of a smooth transition between non-human animals and sapiens would increase support for "lower" animals. I agree that there is already a continuum with e.g. disabled sapiens, as you note. However, I don't think "commonsense" is aware of that. I think commonsense sees mentally disabled people as something "that could have been any of us" (or could even still happen to many of us, as some mental disabilities are not from birth), whereas intermediate species could not be considered disabled exceptions or "misfortunes" in that way.
Key questions about artificial sentience: an opinionated guide

Computational functionalism about sentience: for a system to have a given conscious valenced experience is for that system to be in a (possibly very complex) computational state. That assumption is why the Big Question is asked in computational (as opposed to neural or biological) terms.

I think it is a little quick to jump from functionalism to thinking that consciousness is realizable in a modern computer architecture if we program the right functional roles. There might be important differences in how the functional roles are implemented that rule o... (read more)

2 · rgb · 25d
Thanks for the comment! I agree with the thrust of this comment. Learning more and thinking more clearly about implementation of computation [https://plato.stanford.edu/entries/computation-physicalsystems/] in general, and neural computation [https://www.amazon.co.uk/Neurocognitive-Mechanisms-Explaining-Biological-Cognition/dp/0198866283] in particular, is perennially on my intellectual to-do list. I agree with the way you've formulated the problem, and the possible solution - I'm guessing that an adequate theory of implementation deals with both of them: some condition about there being the right kind of "reliable, counterfactual-supporting connection between the states" (that quote is from Chalmers' take [http://consc.net/papers/computation.html] on these issues). But I have not yet figured out how to think about these things to my satisfaction.
Consciousness, counterfactual robustness and absurdity

But there are many ordered subsets of merely trillions of interacting particles we can find, effectively signaling each other with forces and small changes to their positions.

In brains, patterns of neural activity stimulate further patterns of neural activity. We can abstract this out into a system of state changes and treat conscious episodes as patterns of state changes. Then if we can find similar causal networks of state changes in the wall, we might have reason to think they are conscious as well. Is this the idea? If so, what sort of states are yo... (read more)

3 · MichaelStJules · 1mo
I need to think about this in more detail, but here are some rough ideas, mostly thinking out loud (and perhaps not worth your time to go through these):
1. One possibility is that, because we only care about when the neurons are firing if we reject counterfactual robustness anyway, we don't even need to represent when they're not firing with particle properties. Then the signals from one neuron to the next can just be represented by the force exerted by the corresponding particle on the next corresponding particle. However, this way, the force doesn't seem responsible for the "firing" state (i.e. that Y exerts a force on Z is not because of some earlier particle that exerted a force on Y before that), so this probably doesn't work.
2. We can just pick any specific property, and pick a threshold between firing and non-firing that puts every particle well above the threshold into firing. But again, the force wouldn't be responsible for the state being above the threshold.
3. We can use a particle's position, velocity, acceleration, energy, net force, whatever as encoding whether or not a neuron is firing, but then we only care about when the neurons are firing anyway, and we could have independent freedom for each individual particle to decide which quantity or vector to use, which threshold to use, which side of the threshold counts as a neuron firing, etc. If we use all of those independent degrees of freedom, or even just one independent degree of freedom per particle, then this does seem pretty arbitrary and gerrymandered. But we can also imagine replacing individual neurons in a full typical human brain each with a different kind of artificial neuron (or particle) whose firing is realized by a different kind of degree of freedom, and still preserve counterfactual robustness, and it could (I'm not sure) look the same once we get rid of all of the inactive neurons, so is it really gerryman
Consciousness, counterfactual robustness and absurdity

Yes, it's literally a physical difference, but, by hypothesis, it had no influence on anything else in the brain at the time, and your behaviour and reports would be the same. Empty space (or a disconnected or differently connected neuron) could play the same non-firing neuron role in the actual sequence of events. Of course, empty space couldn't also play the firing neuron role in counterfactuals (and a differently connected neuron wouldn't play identical roles across counterfactuals), but why would what didn't happen matter?

I can get your intuition ab... (read more)

2 · MichaelStJules · 1mo
If the signals are still there to ensure causal influence, I think I would still be conscious as normal. The argument is exactly the same: whenever something is inactive and not affecting other things, it doesn't need to be there at all. This is getting close to the problem I'm grappling with, once we step away from neurons and look at individual particles (or atoms). First, I could imagine individual atoms acting like neurons to implement a human-like neural network in a counterfactually robust way, too, and that would very likely be conscious. The atoms could literally pass photons or electrons to one another. Or maybe the signals would be their (changes in the) exertion of elementary forces (or gravity?). If, during a particular sequence of events, whenever something happened to be inactive it happened to disappear, then this shouldn't make a difference. But if you start from something that was never counterfactually robust in the first place, which I think is your intention, and its events just happen to match a conscious sequence of activity in a human brain, then it seems like it probably wouldn't be conscious (although this is less unintuitive to me than accepting that counterfactual robustness matters in a system that is usually counterfactually robust). Rejecting counterfactual robustness (together with my other views, and assuming things are arranged and mapped correctly) seems to imply that this should be conscious, and the consequences seem crazy if this turns out to be morally relevant. It seems like counterfactual robustness might matter for consciousness in systems that aren't normally conscious but very likely doesn't matter in systems that are normally conscious, which doesn't make much sense to me.
Consciousness, counterfactual robustness and absurdity

That seems unphysical, since we're saying that even if something made no actual physical difference, it can still make a difference for subjective experience.

The neuron is still there, so its existing-but-not-firing makes a physical difference, right? Not firing is as much a thing a neuron can do as firing. (Also, for what it's worth, my impression is that cognition is less about which neurons are firing and more about what rate they are firing at and how their firing is coordinated with that of other neurons.)

But neurons don't seem special, and if y... (read more)

2 · MichaelStJules · 1mo
Thanks for the comment! Yes, it's literally a physical difference, but, by hypothesis, it had no influence on anything else in the brain at the time, and your behaviour and reports would be the same. Empty space (or a disconnected or differently connected neuron) could play the same non-firing neuron role in the actual sequence of events. Of course, empty space couldn't also play the firing neuron role in counterfactuals (and a differently connected neuron wouldn't play identical roles across counterfactuals), but why would what didn't happen matter? Do you expect that those temporarily inactive neurons disappearing temporarily (or, slightly more realistically, being temporarily and artificially suppressed from firing) would make a difference to your experiences? (Firing rates would still be captured with sequences of neurons firing, since the same neuron can fire multiple times in a sequence. If it turns out basically every neuron has a nonzero firing rate during every interval of time long enough to generate an experience, if that even makes sense, then tortured walls could be much rarer. OTOH, we could just make all the neurons only be present exactly when they need to be to preserve the pattern of firing, so they might disappear between firings.)

On finding similar patterns elsewhere, it's because of the huge number of particles and interactions between them going on, and the relatively small number of interactions in a morally relevant pattern of activity. A human brain has fewer than 100 billion neurons [https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons], and the maximum neuron firing rate in many morally relevant experiences is probably less than 1000 per second [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5067378/]. So we're only talking at most trillions of events and their connections in a second, which is long enough for a morally relevant experience. But there are many
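(Making that back-of-the-envelope bound explicit, as a rough sketch that treats the cited figures as upper limits:
$$10^{11}\ \text{neurons} \times 10^{3}\ \text{max firings per second} \approx 10^{14}\ \text{firing events per second},$$
i.e. an upper bound on the order of $10^{14}$ firing events and their connections within a second of experience.)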
The Future Fund’s Project Ideas Competition

Authoritative Statements of EA Views

Epistemic Institutions

In academia, law, and government, it would be helpful to have citeable statements of EA-relevant views presented in an authoritative and unbiased manner. Having such material available lends gravitas to proposals that help address related problems and provides greater justification for taking those views for granted.

(This is a variation on 'Expert polling for everything', focused on providing the authority of expert views to non-experts. The Cambridge Declaration on Consciousness is a good example.)

Some thoughts on vegetarianism and veganism

Insofar as we are all imperfect and have to figure out which ways to prioritize improving on, it isn't obvious that we should treat veganism as a priority. That said, I think there is an important difference between what it makes sense to do and how it makes sense to feel. It makes sense to feel horrified by factory farming and disgusted by factory farmed meat if you care about the suffering of animals. It makes sense to respond to suffering inflicted on your behalf with sadness and regret.

Effective altruists should generally be vegan, not (just) because ... (read more)

Simplify EA Pitches to "Holy Shit, X-Risk"

the probabilities are of the order of 10^-3 to 10^-8, which is far from infinitesimal

I'm not sure what the probabilities are. You're right that they are far from infinitesimal (just as every number is!): still, they may be close enough to warrant discounting on whatever basis people discount Pascal's mugger.

what is important is reducing the risk to an acceptable level

I think the risk is pretty irrelevant. If we lower the risk but still go extinct, we can pat ourselves on the back for fighting the good fight, but I don't think we should assign it mu... (read more)

4 · Greg_Colbourn · 3mo
That would be bad, yes. But lowering the risk (significantly) means that it's (significantly) less likely that we will go extinct! Say we lower the risk from 1/6 (Toby Ord's all-things-considered estimate for x-risk over the next 100 years [https://theprecipice.com/]) to 1/60 this century. We've then bought ourselves a lot more time (in expectation) to lower the risk further. If we keep doing this at a high enough rate, we will very likely not go extinct for a very long time.

I think "pretty well aligned" basically means we still all die; it has to be very well/perfectly aligned to be compatible with human existence, once you factor in an increase in power level of the AI to superintelligence; so it's basically all or nothing (I'm with Yudkowsky/MIRI on this).
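(A rough sketch of the "more time in expectation" point, under the simplifying assumption of a constant, independent per-century extinction risk $p$: the number of centuries until extinction is then geometrically distributed, so
$$\mathbb{E}[\text{centuries until extinction}] = \sum_{k=1}^{\infty} k\,p\,(1-p)^{k-1} = \frac{1}{p},$$
which rises from 6 to 60 centuries in expectation when $p$ falls from 1/6 to 1/60.)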
Simplify EA Pitches to "Holy Shit, X-Risk"

Let me clarify that I'm not opposed to paying Pascal's mugger. I think that is probably rational (though I count myself lucky to not be so rational).

But the idea here is that x-risk is all or nothing, which translates into each person having a very small chance of making a very big difference. Climate change can be mitigated, so everyone working on it can make a little difference.

4 · Lukas_Finnveden · 3mo
You could replace working on climate change with 'working on or voting in elections', which are also all or nothing. (Edit: For some previous arguments in this vein, see this post [https://forum.effectivealtruism.org/posts/zjbxdJbTTmTvrWAX9/tiny-probabilities-of-vast-utilities-concluding-arguments].)
Simplify EA Pitches to "Holy Shit, X-Risk"

I'm not disagreeing with the possibility of a significant impact in expectation. Paying Pascal's mugger is promising in expectation. The thought is that in order to make a marginal difference to x-risk, there needs to be some threshold of hours/money/etc. below which our species will be wiped out and above which our species will survive, and your contributions have to push us over that threshold.

X-risk, at least where the survival of the species is concerned, is an all-or-nothing thing. (This is different from AI alignment, where your contributions might make things a little better or a little worse.)

3 · Greg_Colbourn · 3mo
I don't think this is a Pascal's Mugging situation; the probabilities are of the order of 10^-3 to 10^-8, which is far from infinitesimal. I also don't think you can necessarily say that there is a threshold for hours/money. Ideas seem to be the bottleneck for AI x-risk at least, and these are not a linear function of time/money invested. It is all-or-nothing in the sense of survival or not, but given that we can never reduce the risk to zero, what is important is reducing the risk to an acceptable level (and this is not all-or-nothing, especially given that it's hard to know exactly how things will pan out in advance, regardless of our level of effort and perceived progress). Also I don't understand the comment on AI Alignment - I would say that is all or nothing, as limited global catastrophes seem less likely than extinction (although you can still make things better or worse in expectation); whereas bio is perhaps more likely to have interventions that make things a little better or worse in reality (given that limited global catastrophe is more likely than x-risk with bio).
Simplify EA Pitches to "Holy Shit, X-Risk"

But also, we’re dealing with probabilities that are small but not infinitesimal. This saves us from objections like Pascal’s Mugging - a 1% chance of AI x-risk is not a Pascal’s Mugging.

It seems to me that the relevant probability is not the chance of AI x-risk, but the chance that your efforts could make a marginal difference. That probability is vastly lower, possibly bordering on mugging territory. For x-risk in particular, you make a difference only if your decision to work on x-risk makes a difference to whether or not the species survives. For some of us that may be plausible, but for most, it is very very unlikely.

2 · Greg_Colbourn · 3mo
I think a huge number of people can contribute meaningfully to x-risk reduction, including pretty much everyone reading this. You don't need to be top 0.1% in research skill or intelligence - there are plenty of support roles that could be filled. Just think: by being a PA (or research assistant) to a top researcher or engineer, you might be able to boost their output by 10-30% (and by extension, their impact). I doubt that all the promising researchers have PAs (and RAs). Or consider raising awareness. Helping to recruit just one promising person to the cause is worthy of claiming significant impact (in expectation).
9 · Neel Nanda · 3mo
Hmm, what would this perspective say to people working on climate change?
Splitting the timeline as an extinction risk intervention

Importantly (as I'm sure you're aware), no amount of world slicing is going to increase the expected value of the future (roughly all the branches from here)

What makes you think that? So long as value can depend on how events are distributed across branches (as perhaps with the Mona Lisa), the expected value of the future could easily change.
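(A toy illustration, assuming value attaches to the cross-branch distribution rather than summing over branches: suppose what matters is that the Mona Lisa survives in at least one branch, and each branch preserves it independently with probability 1/2. Then
$$P(\text{survives in at least one of } n \text{ branches}) = 1 - \left(\tfrac{1}{2}\right)^{n},$$
which is 1/2 with a single branch ($n = 1$) and 3/4 after one split ($n = 2$), so splitting would change the expected value of the future under such a valuation.)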

Why don't governments seem to mind that companies are explicitly trying to make AGIs?

Are you sure that they don't mind? I would be surprised if intelligence agencies weren't keeping some track of the technical capabilities of foreign entities, and I'd be unsurprised if they were keeping track of domestic entities as well. If they thought we were six months away from transformative AGI, they could nationalize it or shut it down.

3 · Ozzie Gooen · 5mo
I don't have any inside information about the government; it's of course possible there are secretive programs somewhere. "If they thought we were six months away from transformative AGI, they could nationalize it or shut it down." Agreed, in theory. In practice, many different parts of the government think differently. It seems very likely that one will think that "there might be a 5% chance we're six months away from transformative AGI", but the parts that could take action just wouldn't.
Why do you find the Repugnant Conclusion repugnant?

There is a challenge here in making the thought experiment specific, conceivable, and still compelling for the majority of people. I think a marginally positive experience like sucking on a cough drop is easy to imagine (even if it is hard to really picture doing it for 40,000 years) and intuitively just slightly better than non-existence minute by minute.

Someone might disagree. There are some who think that existence is intrinsically valuable, so simply having no negative experiences might be enough to have a life well worth living. But it is hard to pain... (read more)

Why do you find the Repugnant Conclusion repugnant?

I find your attitude somewhat surprising. I'm much less sympathetic to trolley problems or utility monsters than to the repugnant conclusion. I can see why some people aren't moved by it, but I have a hard time seeing how someone couldn't get what is moving about it. Since it is a rather basic intuition, it's not super easy to pump. But I wonder, what do you think about this alternative, which seems to draw on similar intuitions for me:

Suppose that you could right now, at this moment, choose between continuing to live your life, with all its ups and downs ... (read more)

8 · Will Bradshaw · 5mo
Thanks for trying to come up with a thought experiment that targets your intuitions here! That's exactly what I was hoping people would do. For me, this thought experiment feels like it raises more "value of complexity" questions than the canonical RC. Though from the comments it seems like complexity vs homogeneity intuitions are contributing to quite a few people's anti-RC feelings, so it's not bad to have a thought experiment that targets that. In any case, I think there probably is a sufficiently large number of years at which I would take the cough drop, all else equal. Certainly I don't feel extremely strong resistance to the idea of doing so. However, I'm a slightly non-optimal person to pose this thought experiment to, in that I'm not at all sure that my life so far has been good for me on net.
2 · Jack Malde · 5mo
By the way, I apologise for implying you should "remove" something from your comment; I didn't literally mean that. What I should have said is that I think the words led to an unhelpful characterisation of the life being lived in the thought experiment. The OP doesn't appreciate my contributions, so I am going to leave this post.
-1 · Jack Malde · 5mo
Firstly, remove the words "rather disappointing". Remember there is nothing bad in this world, and terms like that don't help people put themselves in the situation. I for one find this very difficult to imagine, and perhaps counterproductive to the RC. A Buddhist might say not feeling pain or boredom is akin to living an enlightened life, which is of the highest possible quality. It's for this reason that I personally don't find this thought experiment very helpful - it's just way too difficult to imagine what such a cough drop life would be like. EDIT: I regret implying you should "remove" something from your comment, which I didn't literally mean. What I should have said is that I think the words led to an unhelpful characterisation
Notes on the risks and benefits of kidney donation

My logic is (deferring judgment to medical professionals) just that the amount of effort and money that is spent on facilitating kidney donations, despite the existence of dialysis, indicates that experts think the cost/benefit ratio is a good one. One reason I feel safe in this deference is that the field of medicine seems to have strong "loss aversion". I.e., doctors seem strongly concerned about direct actions that cause harm, even if it is for the greater good.

The cynical story I've heard is that insurance providers cover it because it is cheaper than y... (read more)

1 · ElliotJDavies · 6mo
While that's certainly a possibility, some evidence against that perspective is that many countries (the UK and DK, off the top of my head) have introduced altruistic/non-directed kidney donation in the last decade. Interestingly, I think the Danish Health Board may have a perspective closer to yours, in that they have set the minimum age of altruistic kidney donation to 40 years old. I was a little bit frustrated when I discovered this. One thing I would say (again, without knowing much) is that dialysis does sound intuitively a lot worse than having a transplanted kidney, because you have waste products building up in your body for days at a time.
Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping.

I agree that there are challenges for each of them in the case of an infinite number of people. My impression is that total utilitarianism can handle infinite cases pretty respectably, by supplementing the standard maxim of maximizing utility with a dominance principle to the effect of 'do what's best for the finite subset of everyone that you're capable of affecting', though it isn't something I've thought about too much either. I initially was thinking that average utilitarians can't make a similar move without undermining its spirit, but maybe th... (read more)

Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping.

Interesting application of SIA, but I wonder if it shows too much to help average utilitarianism.

SIA seems to support metaphysical pictures in which more people actually exist. This is how you discount the probability of solipsism. But do you think you can simultaneously avoid the conclusion that there are an infinite number of people?

This would be problematic: if you're sure that there are an infinite number of people, average utilitarianism won't offer much guidance because you almost certainly won't have any ability to influence the average utility.

2 · wuschel · 1y
Very interesting point; I have not thought of this. I do think, however, that SIA, Utilitarianism, SSA, and Average Utilitarianism all kind of break down once we have an infinite number of people. I think people like Bostrom have thought about infinite ethics, but I have not read anything on that topic.
Thoughts on the welfare of farmed insects

Nice summary of the issues.

A couple of related thoughts:

There are some reasons to think that insects would not be especially harmed by factory farming in the way that vertebrates are. It is plausible that the largest source of suffering in factory farms comes from the stress produced by lack of enrichment and unnatural and overcrowded conditions. Even if crickets are phenomenally conscious AND can suffer, they might not be capable of stress, or capable of stress in the same sort of dull, overcrowded conditions as vertebrates. Given their ancient divergence ... (read more)

1 · Max_Carpendale · 3y
It's true that their minds are more divergent from ours, but I think that tends to mean there is more uncertainty about what they feel stress in response to, not that they feel less environmentally induced stress. Also, as I say in the post, the uncertainty makes it harder to improve their welfare. I probably should have paid more attention to arguments about how they could have net positive welfare, to have a more balanced post. Though I have seen a real bias in favour of eating insects (at least outside the EA community), and so I still see this post as contributing to a more balanced discussion of the issue. And for the reasons I give in the post, I still view it as unlikely that they have net positive welfare.