I am a generalist quantitative researcher. I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).
I can help with career advice, prioritisation, and quantitative analyses.
Thanks for the comment, Michael. I read the post and Seth's original essay, and listened to the episode of The 80,000 Hours Podcast with Seth. I agree the title of the post is a bit of a misnomer. I think one may update towards a lower chance of digital systems being conscious as a result of Seth's arguments, but they are far from conclusive. I only know I am conscious right now (and I am very confident I was conscious moments ago). So I think a system which is more similar to me at a fundamental physical level should have a higher chance of being conscious. However, I have no idea what this implies in terms of concrete probabilities of consciousness. As far as I can tell, the available evidence is compatible with frontier large language models (LLMs) having a probability of consciousness as low as 10^-6 or as high as 99.999 %.
Hi Daniel and titotal. Thanks for the discussion.
I only know I am conscious right now (and I am very confident I was conscious moments ago). So I think a system which is more similar to me at a fundamental physical level should have a higher chance of being conscious. I have no idea what this implies in terms of concrete probabilities of consciousness. As far as I can tell, the available evidence is compatible with frontier large language models (LLMs) having a probability of consciousness as low as 10^-6 or as high as 99.999 %.
As a side note, I would take for granted that all animals and digital systems are sentient, and focus on assessing the distribution of the intensity of subjective experiences. I think asking about the probability of sentience of an animal or digital system shares some of the issues of asking about the probability that an object is hot. People have different concepts of what "hot" means, and these do not depend just on temperature (for example, the minimum temperature for hot wood is higher than the minimum temperature for hot metal, because metal transfers heat more efficiently). I understand sentience as having subjective experiences whose intensity is not exactly 0. However, I suspect some people understand it as having subjective experiences which are sufficiently intense. Different bars for "sufficiently intense" will lead to different probabilities of sentience. Asking about the distribution of the intensity of subjective experiences mitigates this. For example, one could ask about the probability that the mean intensity of what an LLM experienced while writing a message exceeds the mean intensity of human experiences. It still seems super hard to get numbers for this, but what they refer to would be more concrete than a vague concept like sentience.
I do not see how philosophical zombies (p-zombies) could be physically possible. If they were just like humans at a fundamental physical level, they would in fact be humans. So they would be as conscious as humans, which I assume are conscious (because I am a conscious human right now, and other humans do not seem relevantly different).
Hi Cynthia. Thanks for the clarifying comment.
Relatedly, I wonder how much welfare varies within production systems. For example, I am interested in knowing which of the following results in a greater increase in welfare. Layers going from:
Do you have a sense of how these compare? The question reminds me of your meta-analysis of hen mortality in different indoor housing systems. Median cage-free aviaries most likely have higher welfare than median furnished cages, and 90th percentile cage-free aviaries most likely have higher welfare than 90th percentile furnished cages. However, it might still be worth advocating for better management of animals within each system. It might be cheaper than moving to a better system, and capture a significant fraction of its benefits. Likewise, I wonder whether it may sometimes be worth advocating for replacing battery cages with furnished cages instead of cage-free aviaries, or for banning battery cages instead of all cages.
Hi Aaron and Will. I estimated how much cage-free corporate campaigns for layers and the Shrimp Welfare Project’s (SWP’s) Humane Slaughter Initiative (HSI) increase the welfare of their target beneficiaries, assuming individual welfare per fully-healthy-animal-year is proportional to "individual number of neurons"^"exponent", for "exponent" from 0 to 2, which covers the best guesses I consider reasonable. An exponent of 1 would correspond to the linear weighting preferred by Will. Below is a graph with the results. I calculate cage-free corporate campaigns increase the welfare of chickens more cost-effectively than HSI has increased the welfare of shrimps for an exponent of at least 0.94. For exponents of 0 and 2, cage-free corporate campaigns increase the welfare of chickens 6.71*10^-4 and 4.43 k times as cost-effectively as HSI has increased the welfare of shrimps.
The above only looks into effects on the target beneficiaries. However, I believe effects on soil animals resulting from changes in land use can easily dominate, as illustrated below. I assume that increasing agricultural land increases the welfare of soil animals, but I have very little idea about whether this is the case. So "Increase in the welfare" in the title of the graph should be read as "Absolute value of the change in the welfare". The graph does not look into HSI (electrically stunning shrimp), but I also do not know whether this increases or decreases welfare in expectation, due to potentially dominant effects on soil animals and microorganisms.
Hi Charlie. I agree it is better to target soil animals instead of farmed shrimps (at the margin) if individual welfare is proportional to the individual number of neurons as suggested by @William_MacAskill. Here are my estimates for the total number of neurons of animal populations. I calculate soil nematodes have 5.93 M times as many neurons in total as farmed shrimps.
It is also worth noting that only wild finfishes and soil animals have more neurons in total than humans.
As a fun fact, @Ajeya was early to note the potential importance of nematodes. In her biological anchors report about transformative AI (TAI) timelines, she calculated the compute performed by evolution considering just nematodes.
Ajeya estimates 10^41. I [Scott Alexander] can’t believe I’m writing this. I can’t believe someone actually estimated the number of floating point operations involved in jellyfish rising out of the primordial ooze and eventually becoming fish and lizards and mammals and so on all the way to the Ascent of Man. Still, the idea is simple. You estimate how long animals with neurons have been around for (10^16 seconds) times the total number of animals at any given second (10^20) times the average number of FLOPS per animal (10^5), and you can read more here, but it comes out to 10^41 FLOP. I would not call this an exact estimate - for one thing, it assumes that all animals are nematodes, on the grounds that non-nematode animals are basically a rounding error in the grand scheme of things [emphasis mine].
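The back-of-the-envelope arithmetic in the quote is easy to check; a minimal sketch:

```python
# Reproducing the arithmetic from Ajeya's estimate, as summarised
# by Scott Alexander above. All figures are order-of-magnitude guesses
# from the quote, not precise measurements.
seconds_with_neurons = 1e16  # how long animals with neurons have existed (s)
animals_at_any_time = 1e20   # number of animals alive at any given second
flops_per_animal = 1e5       # average FLOPS per animal (nematode-dominated)

total_flop = seconds_with_neurons * animals_at_any_time * flops_per_animal
print(f"{total_flop:.0e}")  # 1e+41
```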
Hi Laura.
Spending on corporate cage-free campaigns for egg-laying hens is robustly[8] cost-effective under nearly all reasonable types and levels of risk aversion considered here.
[...]
I am only considering the first-order cost-effectiveness of the interventions, whereas it is likely there are externalities (potentially both positive and negative) to spending on each intervention [more].
Do you have any thoughts on how accounting for effects on soil animals would change your conclusions? I have very little idea about whether cage-free campaigns for egg-laying hens increase or decrease animal welfare accounting for effects on ants and termites.
The CEAs of the animal welfare interventions looked very thorough. I left some comments.