The critical question is whether shrimp or insects can support the kinds of negative states that make suffering severe, rather than merely possible.
I think suffering matters in proportion to its intensity. So I would not neglect mild suffering in principle, although it may not matter much in practice because it contributes little to total expected suffering.
In any case, I would agree the total expected welfare of farmed invertebrates may be tiny compared with that of humans due to invertebrates' experiences having a very low intensity. For expected individual w...
And even granting the usual EA filters—tractability, neglectedness, feasibility, and evidential robustness—the scale gradient from shrimp to insects (via agriculture-related deaths) is so steep that these filters don’t, by themselves, explain why the precautionary logic should settle on shrimp. All else equal, once you shift to a target that is thousands of times larger, an intervention could be far less effective [in terms of robustly increasing welfare in expectation] and still compete on expected impact.
Are you thinking about humans as an aligned collective in the 1st paragraph of your comment? I agree all humans coordinating their actions together would have more power than other groups of organisms with their actual levels of coordination. However, such a level of coordination among humans is not realistic. All 10^30 bacteria (see Table S1 of Bar-On et al. (2018)) coordinating their actions together would arguably also have more power than all humans with their actual level of coordination.
I agree it is good that no human has power over all humans. H...
Hi Guy. Elon Musk was not the only person responsible for the recent large cuts in foreign aid from the United States (US). In addition, I believe outcomes like human extinction are way less likely. I agree it makes sense to worry about concentration of power, but not about extreme outcomes like human extinction.
Thanks for the relevant post, Wladimir and Cynthia. I strongly upvoted it. Do you have any practical ideas about how to apply the Sentience Bargain framework to compare welfare across species? I would be curious to know your thoughts on Rethink Priorities' (RP's) research agenda on valuing impacts across species.
Thank you all for the very interesting discussion.
I think addressing the greatest sources of suffering is a promising approach to robustly increase welfare. However, I believe the focus should be on the greatest sources of suffering in the ecosystem, not in any given population, such that effects on non-target organisms can be neglected. Electrically stunning farmed shrimps arguably addresses one of the greatest sources of suffering of farmed shrimps, and the ratio between its effects on target and non-target organisms is much larger than for the vast majo...
Thanks, Zoë. I see funders are the ones deciding what to fund, and that you only provide advice if they so wish, as explained below. What if funders ask you for advice on which species to support? Do you base your advice on the welfare ranges presented in Bob's book? Have you considered recommending research on welfare comparisons across species to such funders, such as the projects in RP's research agenda on valuing impacts across species?
...Q: Do Senterra Funders staff decide how funders make grant decisions?
A: No, each Senterra member maintains full autono
Fair point, Nick. I would just keep in mind there may be very different types of digital minds, and some types may not speak any human language. We can more easily understand chimps than shrimps. In addition, the types of digital minds driving the expected total welfare might not speak any human language. I think there is a case for keeping an eye out for something like digital soil animals or microorganisms, by which I mean simple AI agents or algorithms, at least for people caring about invertebrate welfare. On the other end of the spectrum, I am also op...
Thanks for the post, Noah. I strongly upvoted it.
- 5. How much total welfare capacity might digital minds have relative to humans/other animals
- a. Related questions include: the estimated scale of digital minds, moral weights-esque projects, which part of the model would have moral weight.
I think this is a very important uncertainty. Discussions of digital minds overwhelmingly focus on the number of individuals, and probability of consciousness or sentience. However, one has to multiply these factors by the expected individual welfare per year conditiona...
Thanks for sharing, Kevin and Max. Are you planning to do any cost-effectiveness analyses (CEAs) to assess potential grants? I may help with these for free if you are interested.
Global wealth would have to increase a lot for everyone to become a billionaire. Assuming a population of 10 billion people, everyone being a billionaire would require a global wealth of 10^19 $ (= 10*10^9*1*10^9) even under a perfectly equal distribution. Global wealth is 600 T$. So it would have to become 16.7 k (= 10^19/(600*10^12)) times as large. For a growth of 10 %/year, it would take 102 years (= LN(16.7*10^3)/LN(1 + 0.10)). For a growth of 30 %/year, it would take 37.1 years (= LN(16.7*10^3)/LN(1 + 0.30)).
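A minimal Python sketch reproducing the arithmetic above, under the same assumptions (10 billion people, 1 billion $ per person, 600 T$ of current global wealth):

```python
import math

population = 10e9          # assumed population (people)
target_per_person = 1e9    # 1 billion $ per person
global_wealth = 600e12     # current global wealth (600 T$)

required_wealth = population * target_per_person  # 10^19 $
growth_factor = required_wealth / global_wealth   # ~16.7 k

for annual_growth in (0.10, 0.30):
    years = math.log(growth_factor) / math.log(1 + annual_growth)
    print(f"At {annual_growth:.0%}/year, it takes {years:.1f} years")
# Prints roughly 102.0 years at 10 %/year and 37.1 years at 30 %/year.
```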
I was considering hypothetical scenarios of the type "imagine this offer from MIRI arrived, would a lab accept"
When would the offer from MIRI arrive in the hypothetical scenario? I am sceptical of an honest endorsement from MIRI today being worth 3 billion $, but I do not have a good sense of what MIRI will look like in the future. I would also agree a foolproof AI safety certification is or will be worth more than 3 billion $ depending on how it is defined.
...With your bets about timelines - I did 8:1 bet with Daniel Kokotajlo against AI 2027 being as accur
Hi Ruth. I only care about seeking truth to the extent it increases welfare (more happiness, and less pain). I just think applicants optimising for increasing their chances of being funded usually leads to worse decisions, and therefore lower welfare, than them optimising for improving the decisions of the funders. I also do not think there is much of a trade-off between being funded by and improving the decisions of impact-focussed funders, who often value honesty and transparency about the downsides of the project quite highly.
Thanks, Jan. I think it is very unlikely that AI companies with frontier models will seek the technical assistance of MIRI in the way you described in your 1st operationalisation. So I believe a bet which would only resolve in this case has very little value. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bet we could make that would be good for both of us under our own views, considering that we could otherwise invest our money, and that you could take out loans?
The post The shrimp bet by Rob Velzeboer illustrates why the case for the sentience of Litopenaeus vannamei (whiteleg shrimp), the species accounting for most of the production of farmed shrimp, is far from strong.
Nice post, Alex.
Sometimes when there’s a lot of self-doubt it’s not really feasible for me to carefully dismantle all of my inaccurate thoughts. This is where I find cognitive defusion helpful: just separating myself from my thoughts, so rather than saying ‘I don’t know enough’ I say ‘I’m having the thought that I don’t know enough.’ I don’t have to believe or argue with the thought; I can just acknowledge it and return to what I’m doing.
Clearer Thinking has launched a program for learning cognitive defusion.
Thanks for the nice point, Thomas. Generalising, if the impact is 0 for productivity P_0, and P_av is the productivity of random employees, an employee N times as productive as random employees would be (N*P_av - P_0)/(P_av - P_0) times as impactful as random employees. Assuming the cost of employing someone is proportional to their productivity, the cost-effectiveness as a fraction of that of random employees would be (P_av - P_0/N)/(P_av - P_0). So the cost-effectiveness of an infinitely productive employee as a fraction of that of random employees would be P_...
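A minimal sketch of the two ratios above, with purely illustrative values for P_0, P_av and N (none of these numbers come from the comment):

```python
def impact_ratio(n, p_av, p_0):
    """Impact of an employee n times as productive as a random employee,
    as a multiple of a random employee's impact (impact ~ productivity - P_0)."""
    return (n * p_av - p_0) / (p_av - p_0)

def cost_effectiveness_ratio(n, p_av, p_0):
    """Cost-effectiveness as a fraction of that of a random employee,
    assuming cost is proportional to productivity."""
    return (p_av - p_0 / n) / (p_av - p_0)

# Illustrative values (assumptions, not from the comment).
p_0, p_av = 0.5, 1.0
for n in (1, 2, 10, 1e9):  # 1e9 approximates an "infinitely productive" employee
    print(n, impact_ratio(n, p_av, p_0), cost_effectiveness_ratio(n, p_av, p_0))
# As N grows, cost-effectiveness approaches P_av/(P_av - P_0), which is 2 in this example.
```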
Thanks for the relevant comment, Nick.
2. I've been generally unimpressed by responses to criticisms of animal sentience. I've rarely seen an animal welfare advocate make even a small concession. This makes me even more skeptical about the neutrality of thought processes and research done by animal welfare folks.
I concede there is huge uncertainty about welfare comparisons across species. For individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" ranging from 0.5 to 1.5, which I believe co...
Yes, it is unlikely these cause extinction, but if they do, no humans means no AI (after all the power-plants fail). Seems to imply moving forward with a lot of caution.
Toby and Matthew, what is your guess for the probability of human extinction over the next 10 years? I personally guess 10^-7. I think disagreements are often driven by different assessments of the risk.
Hi David!
Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower.
Would the dinosaurs have argued their extinction would be bad, although it may well have contributed to the emergence of mammals and ultimately humans? Would the vast majority of non-human primates have argued that humans taking over would be bad?
...But it is extremely unlikely to have exactly the same value. Therefore, in all likelihood, whether AI takes over or not does have long-term and enormous imp
Thanks for the great post, Matthew. I broadly agree.
If we struggle to forecast impacts over mere decades in a data-rich field, then claiming to know what effects a policy will have over billions of years is simply not credible.
I very much agree. I also think what ultimately matters for the uncertainty at a given time in the future is not the time from now until then, but the amount of change from now until then. As a 1st approximation, I would say the horizon of predictability is inversely proportional to the annual growth rate of gross world product (GWP)...
Thanks for crossposting this, Joey.
This is a linkpost for My Career Plan: Launching Elevate Philanthropy!
The above does not link to the original post. You are supposed to type out the URL in the field above.
Despite not even having publicly launched, I have back-to-back monthly promising projects lined up, each with significant estimated impact, each with higher impact than my upper bound estimates of my ability to earn via for-profit founding (my next highest career option).
How did you determine this? Did you explicitly quantify the impact of the promising...
Thanks for the post. I strongly upvoted it.
I have described my views on AI risk previously in this post, which I think is still relevant. I have also laid down a basic argument against AI risk interventions in this comment where I argue that AI risk is neither important, neglected nor tractable.
The 2nd link is not right?
Thanks for the comment, Mikhail. Gemini 3 estimates a total annualised compensation of the people working at Meta Superintelligence Labs (MSL) of 4.4 billion $. If an endorsement from Yudkowsky and Soares was as beneficial (including via bringing in new people) as making 10 % of people there 10 % more impactful over 10 years, it would be worth 440 M$ (= 0.10*0.10*10*4.4*10^9).
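A minimal sketch of the arithmetic above, using the figures stated in the comment (the 4.4 billion $ compensation estimate and the 10 %/10 %/10-year assumptions):

```python
total_compensation = 4.4e9  # $/year, Gemini 3 estimate for MSL cited above
fraction_of_people = 0.10   # share of people made more impactful
impact_uplift = 0.10        # how much more impactful they become
years = 10                  # duration of the effect

endorsement_value = fraction_of_people * impact_uplift * years * total_compensation
print(f"{endorsement_value:,.0f} $")  # 440,000,000 $, i.e. 440 M$
```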
Thanks for the good point, Nick. I still suspect Anthropic would not pay e.g. 3 billion $ for Yudkowsky and Soares to endorse their latest model as good if they were hypothetically being honest. I understand this is difficult to operationalise, but it could still be put to people outside Anthropic.
@eleanor mcaree, to which extent is ACE's Movement Grants program open to funding research decreasing the uncertainty in interspecies welfare comparisons? @Jesse Marks, how about The Navigation Fund (TNF)? @Zoë Sigle 🔹, how about Senterra Funders? @JamesÖz 🔸, how about Mobius and the Strategic Animal Funding Circle (SAFC)? You can check my comment above for context about why I think such research would be valuable.
It is unclear to me whether all humans together are more powerful than all other organisms on Earth together. It depends on what is meant by powerful. The power consumption of humans is 19.6 TW (= 1.07 + 18.5), only 0.700 % (= 19.6/(2.8*10^3)) of that of all organisms. In any case, all humans together being more powerful than all other organisms on Earth together is still way more likely than the most powerful human being much more powerful than all other organisms on Earth together.
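A minimal sketch reproducing the power figures above, using the numbers given in the comment (the two terms of the 19.6 TW sum are not labelled there):

```python
power_component_1 = 1.07     # TW, first term in the comment's sum
power_component_2 = 18.5     # TW, second term in the comment's sum
all_organisms_power = 2.8e3  # TW, power consumption of all organisms

human_power = power_component_1 + power_component_2  # 19.57 TW, rounded to 19.6 TW above
share = human_power / all_organisms_power            # 0.699 %, or 0.700 % using the rounded 19.6 TW
print(f"{human_power:.1f} TW, {share:.3%} of all organisms")
```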
My upper bound of 0.001 % is just a guess, but I do endorse it. You can have a best...
If I had Eliezer's views about AI risk, I would simply be transparent upfront with the donor, and say I would donate the additional earnings. I think this would ensure fairness. If the donor insisted I had to spend the money on personal consumption, I would turn down the offer if I thought this would result in the donor supporting projects that would decrease AI risk more cost-effectively than my personal consumption. I believe this would be very likely to be the case.
Thanks for the comment, Tristan.
I have no doubt that if one human became superintelligent that would also have a high risk of disaster, precisely because they would have preferences that I don't share (probably selfish ones)
I would worry if a single human had much more power than all other humans combined. Likewise, I would worry if an AI agent had more power than all other AI agents and humans combined. However, I think the probability of any of these scenarios becoming true in the next 10 years is lower than 0.001 %. Elon Musk has a net worth of 765 bill...
Hi Jan.
For the companies racing to AGI, Y&S endorsing some effort as good would likely have something between billions of $ and tens of billions of $ in value.
Are you open to bets about this? I would be happy to bet 10 k$ that Anthropic would not pay e.g. 3 billion $ for Yudkowsky and Soares to endorse their latest model as good. We could ask the marketing team at Anthropic or marketing experts elsewhere. I am not officially proposing a bet just yet. We would have to agree on a concrete operationalisation.
Thanks for sharing, Michael. If I were as concerned about AI risk as @EliezerYudkowsky, I would use practically all the additional earnings (e.g. above Nate's 235 k$/year; in reality I would keep much less) to support efforts to decrease it. I would believe spending more money on personal consumption or investments would just increase AI risk relative to supporting the most cost-effective efforts to decrease it.
A donor wanted to spend their money this way; it would not be fair to the donor for Eliezer to turn around and give the money to someone else. There is a particular theory of change according to which this is the best marginal use of ~$1 million: it gives Eliezer a strong defense against accusations like
If they suddenly said that the risk of human extinction from AGI or superintelligence is extremely low, in all likelihood that money would dry up and Yudkowsky and Soares would be out of a job.
I kinda don't think this was the best use of a million dollars, but I can see the argument for how it might be.
Hi Saulius.
When local residents successfully protest against a planned chicken farm, production will likely increase elsewhere to meet demand—but how quickly? I found no clear methodology to estimate this and received no definitive answers when I asked on an economics forum. As far as I know, it could take anywhere from a week to 20 years, and the choice massively impacts cost-effectiveness.
I agree a farm being blocked can decrease production by anything from 1 farm-week to 20 farm-years. Gemini says random broiler farms in Poland have an expected lifespan o...
Wow, I'm mind blown that Yudkowsky pays himself that much. If only because it leaves him open to criticisms like these. I still don't think the financial incentives are as strong as for people starting an accelerationist company, but it's a fair point.
I think the strength of the incentives to behave in a given way tracks the resulting expected increase in welfare more closely than the expected increase in net earnings. Individual human welfare is often assumed to be proportional to the logarithm of personal consumption. So a given increase in earnings...
Hi Nick.
Although their arguments are reasonable, my big problem with this is that these guys are so motivated that I find it hard to read what they write in good faith.
People who are very invested in arguing for slowing down AI development, or decreasing catastrophic risk from AI, like many in the effective altruism community, will also be happier if they succeed in getting more resources to pursue their goals. However, I believe it is better to assess arguments on their own merits. I agree with the title of the article that it is difficult to do this. I a...
Thanks for this work. I find it valuable.
If AIs are conscious, then they likely deserve moral consideration
AIs could have negligible welfare (in expectation) even if they are conscious. They may not be sentient even if they are conscious, or have negligible welfare even if they are sentient. I would say the (expected) total welfare of a group (individual welfare times population) matters much more for its moral consideration than the probability of consciousness of its individuals. Do you have any plans to compare the individual (expected hedonistic) welfa...
Thanks for the post, Carl.
- As funding expands in focused EA priority issues, eventually diminishing returns there will equalize with returns for broader political spending, and activity in the latter area could increase enormously: since broad political impact per dollar is flatter over a large range, political spending should either be a very small or very large portion of EA activity
Great point.
The bets I've seen you post seem rather disadvantageous to the other side, and I believed so at the time. Which is fine/good business from your perspective given that you managed to find takers. But it means I'm more pessimistic on finding good deals by both of our lights.
Thanks for the reply.
Right. There was a weight of 45 % on a ratio of 7.06, and of 55 % on one of 62.8 k (= 3.44*10^6/54.8), 8.90 k (= 62.8*10^3/7.06) times as much. My explanation for the large difference is that very little can be inferred about the intensity of excruciating pain, as defined by the Welfare Footprint Institute (WFI), from the academic studies AIM analysed to derive the pain intensities linked to the lower ratio.
...
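A minimal sketch of the ratio arithmetic above, using the figures stated in the comment (7.06, 54.8, 3.44*10^6, and the 45 %/55 % weights):

```python
low_ratio = 7.06                 # ratio with a weight of 45 %
high_ratio = 3.44e6 / 54.8       # ratio with a weight of 55 %, ~62.8 k
factor = high_ratio / low_ratio  # ~8.90 k times as much
print(f"{high_ratio:,.0f} and {factor:,.0f}")  # 62,774 and 8,891
```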