Computational functionalism about sentience: for a system to have a given conscious valenced experience is for that system to be in a (possibly very complex) computational state. That assumption is why the Big Question is asked in computational (as opposed to neural or biological) terms.
I think it is a little quick to jump from functionalism to thinking that consciousness is realizable in a modern computer architecture if we program the right functional roles. There might be important differences in how the functional roles are implemented that rule out...
But there are many ordered subsets of merely trillions of interacting particles we can find, effectively signaling each other with forces and small changes to their positions.
In brains, patterns of neural activity stimulate further patterns of neural activity. We can abstract this out into a system of state changes and treat conscious episodes as patterns of state changes. Then if we can find similar causal networks of state changes in the wall, we might have reason to think they are conscious as well. Is this the idea? If so, what sort of states are you...
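To make the abstraction concrete, here is a minimal sketch (all state names, histories, and mappings are invented for illustration) of the Putnam-style worry: a physical system "implements" an abstract system of state changes if some mapping from physical states to abstract states commutes with the dynamics.

```python
# Hypothetical toy example: checking whether a physical history
# "implements" an abstract transition system under a chosen mapping.

# An invented 3-state abstract dynamics (stand-in for a pattern of
# state changes we care about).
abstract_transitions = {"A": "B", "B": "C", "C": "A"}

# An invented sequence of wall microstates over time.
wall_history = ["w1", "w2", "w3", "w1", "w2", "w3"]

def implements(history, transitions, mapping):
    """Return True if mapping physical states to abstract states makes
    each physical transition track the abstract transition rule."""
    return all(
        transitions[mapping[prev]] == mapping[nxt]
        for prev, nxt in zip(history, history[1:])
    )

# With trillions of particles to choose from, some such mapping can
# almost always be found, which is what gives the wall argument bite.
mapping = {"w1": "A", "w2": "B", "w3": "C"}
print(implements(wall_history, abstract_transitions, mapping))  # True
```

The live question is then what further constraints a legitimate mapping must satisfy (counterfactual robustness, the right causal structure), which is where the disagreement below about non-firing neurons comes in.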
Yes, it's literally a physical difference, but, by hypothesis, it had no influence on anything else in the brain at the time, and your behaviour and reports would be the same. Empty space (or a disconnected or differently connected neuron) could play the same non-firing neuron role in the actual sequence of events. Of course, empty space couldn't also play the firing neuron role in counterfactuals (and a differently connected neuron wouldn't play identical roles across counterfactuals), but why would what didn't happen matter?
I can get your intuition about...
That seems unphysical, since we're saying that even if something made no actual physical difference, it can still make a difference for subjective experience.
The neuron is still there, so its existing-but-not-firing makes a physical difference, right? Not firing is as much a thing a neuron can do as firing. (Also, for what it's worth, my impression is that cognition is less about which neurons are firing and more about what rate they are firing at and how their firing is coordinated with that of other neurons.)
But neurons don't seem special, and if you...
Authoritative Statements of EA Views
Epistemic Institutions
In academia, law, and government, it would be helpful to have citeable statements of EA-relevant views, presented in an authoritative and unbiased manner. Having such material available lends gravitas to proposals that address related problems and provides greater justification for taking those views for granted.
(This is a variation on 'Expert polling for everything', focused on conveying the authority of expert views to non-experts. The Cambridge Declaration on Consciousness is a good example.)
Insofar as we are all imperfect and have to figure out which improvements to prioritize, it isn't obvious that we should treat veganism as a priority. That said, I think there is an important difference between what it makes sense to do and how it makes sense to feel. It makes sense to feel horrified by factory farming and disgusted by factory-farmed meat if you care about the suffering of animals. It makes sense to respond to suffering inflicted on your behalf with sadness and regret.
Effective altruists should generally be vegan, not (just) because...
the probabilities are of the order of 10^-3 to 10^-8, which is far from infinitesimal
I'm not sure what the probabilities are. You're right that they are far from infinitesimal (just as every number is!); still, they may be close enough to warrant discounting on whatever basis people discount Pascal's mugger.
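A toy calculation (numbers made up) of why 'far from infinitesimal' doesn't settle the discounting question on its own: even at the low end of that range, the expected value remains enormous,

$$\mathbb{E}[\text{value}] = p \cdot V, \qquad p = 10^{-8},\; V = 10^{16} \text{ future lives} \;\Rightarrow\; \mathbb{E}[\text{value}] = 10^{8} \text{ lives},$$

so anyone who wants to dismiss the low-probability case has to discount on some basis other than expected value, which is exactly the move made against Pascal's mugger.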
what is important is reducing the risk to an acceptable level
I think the risk is pretty irrelevant. If we lower the risk but still go extinct, we can pat ourselves on the back for fighting the good fight, but I don't think we should assign it much...
Let me clarify that I'm not opposed to paying Pascal's mugger. I think that is probably rational (though I count myself lucky to not be so rational).
But the idea here is that x-risk is all or nothing, which translates into each person having a very small chance of making a very big difference. Climate change can be mitigated, so everyone working on it can make a little difference.
I'm not disagreeing with the possibility of a significant impact in expectation. Paying Pascal's mugger is promising in expectation. The thought is that in order to make a marginal difference to x-risk, there needs to be some threshold of hours/money/etc. under which our species will be wiped out and over which our species will survive, and your contributions have to push us over that threshold.
X-risk, at least where the survival of the species is concerned, is an all-or-nothing thing. (This is different from AI alignment, where your contributions might make things a little better or a little worse.)
But also, we’re dealing with probabilities that are small but not infinitesimal. This saves us from objections like Pascal’s Mugging - a 1% chance of AI x-risk is not a Pascal’s Mugging.
It seems to me that the relevant probability is not the chance of AI x-risk, but the chance that your efforts could make a marginal difference. That probability is vastly lower, possibly bordering on mugging territory. For x-risk in particular, you make a difference only if your decision to work on x-risk makes a difference to whether or not the species survives. For some of us that may be plausible, but for most of us it is very, very unlikely.
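One way to make this concrete (a sketch, with my notation): under the threshold picture, the expected impact of an individual's work factors through the probability of being pivotal, not the headline risk,

$$\mathbb{E}[\Delta] = \Pr(\text{your contribution tips us over the threshold}) \cdot V,$$

and that pivotality probability can sit many orders of magnitude below a 1% chance of AI x-risk, even while $\mathbb{E}[\Delta]$ stays large because $V$ is astronomical.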
Importantly (as I'm sure you're aware), no amount of world slicing is going to increase the expected value of the future (roughly all the branches from here)
What makes you think that? So long as value can change with the distribution of events across branches (as perhaps with the Mona Lisa), the expected value of the future could easily change.
Are you sure that they don't mind? I would be surprised if intelligence agencies weren't keeping some track of the technical capabilities of foreign entities, and I'd be unsurprised if they were keeping track of domestic entities as well. If they thought we were six months away from transformative AGI, they could nationalize it or shut it down.
There is a challenge here in making the thought experiment specific, conceivable, and still compelling for the majority of people. I think a marginally positive experience like sucking on a cough drop is easy to imagine (even if it is hard to really picture doing it for 40,000 years) and intuitively just slightly better than non-existence minute by minute.
Someone might disagree. There are some who think that existence is intrinsically valuable, so simply having no negative experiences might be enough to have a life well worth living. But it is hard to pain...
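For what the comparison comes to formally (my symbols): a total view prefers the huge, barely-worth-living population $Z$ to a smaller thriving population $A$ whenever

$$N \cdot \varepsilon > M \cdot u,$$

where $N$ and $\varepsilon$ are $Z$'s size and tiny per-life welfare and $M$ and $u$ are $A$'s. Since $N$ can be made arbitrarily large, the inequality can always be satisfied, which is what the cough-drop framing is meant to make vivid.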
I find your attitude somewhat surprising. I'm much less sympathetic to trolley problems or utility monsters than to the repugnant conclusion. I can see why some people aren't moved by it, but I have a hard time seeing how someone couldn't get what is moving about it. Since it is a rather basic intuition, it's not super easy to pump. But I wonder, what do you think about this alternative, which seems to draw on similar intuitions for me:
Suppose that you could right now, at this moment, choose between continuing to live your life, with all its ups and downs...
My logic (deferring judgment to medical professionals) is just that the amount of effort and money spent on facilitating kidney donations, despite the existence of dialysis, indicates that experts think the cost/benefit ratio is a good one. One reason I feel safe in this deference is that the field of medicine seems to have strong "loss aversion", i.e., doctors seem strongly concerned about direct actions that cause harm, even when it is for the greater good.
The cynical story I've heard is that insurance providers cover it because it is cheaper than y...
I agree that there are challenges for each of them in the case of an infinite number of people. My impression is that total utilitarianism can handle infinite cases pretty respectably, by supplementing the standard maxim of maximizing utility with a dominance principle to the effect of 'do what's best for the finite subset of everyone that you're capable of affecting', though it also isn't something I've thought about too much. I was initially thinking that average utilitarians can't make a similar move without undermining its spirit, but maybe th...
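A rough formalization of that dominance move (my notation, not a standard statement): when the grand total is infinite or undefined, rank options by their effect on the finite affectable subset $A$,

$$a \succeq b \iff \sum_{i \in A} u_i(a) \ge \sum_{i \in A} u_i(b).$$

For totals, this agrees with the original maxim whenever both are defined, since everyone outside $A$ is unaffected; the worry above is that the analogous restriction for averages changes the view's verdicts rather than merely extending them.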
Interesting application of SIA, but I wonder if it shows too much to help average utilitarianism.
SIA seems to support metaphysical pictures in which more people actually exist. This is how you discount the probability of solipsism. But do you think you can simultaneously avoid the conclusion that there are an infinite number of people?
This would be problematic: if you're sure that there are an infinite number of people, average utilitarianism won't offer much guidance because you almost certainly won't have any ability to influence the average utility.
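The last point can be made precise (a sketch): if the population-wide average is the limit of partial means, $\bar{u} = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} u_i$, then changing finitely many of the $u_i$ changes each partial sum by at most some constant $C$, and since $C/n \to 0$, the average is exactly unchanged. So any agent who can only affect finitely many lives cannot move the quantity average utilitarianism tells them to maximize.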
Nice summary of the issues.
A couple of related thoughts:
There are some reasons to think that insects would not be especially harmed by factory farming in the way that vertebrates are. It is plausible that the largest source of suffering in factory farms comes from the stress produced by lack of enrichment and unnatural, overcrowded conditions. Even if crickets are phenomenally conscious AND can suffer, they might not be capable of stress, or of stress in the same sort of dull, overcrowded conditions as vertebrates. Given their ancient divergence...
There is already a continuum between the cognitive capacities of humans and animals. Peter Singer has pointed to cognitively disabled humans in arguing for better treatment of animals.
Do you think Homo erectus would add something further? People often (arbitrarily) draw the line at species, but it seems to me that they could just as easily draw it at any clade. Growing fetuses display a...