I am a generalist quantitative researcher. I am open to volunteering and paid work (I usually ask for $20/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).
I can help with career advice, prioritisation, and quantitative analyses.
I would easily be OK with 10 minutes of excruciating pain for 24 hours of fully healthy life.
Would you prefer 10 min of "severe burning in large areas of the body, dismemberment, or extreme torture" (excruciating pain) over losing 24 h of fully healthy life (ignoring the indirect effects of the excruciating pain; it would probably lead to death, and therefore result in a loss of life worse than losing 24 h of fully healthy life)?
If we take the conservative 10 minutes per 24 hours that I would accept, that would make me 600 times less pain-sensitive than you. So if I applied the very same line of thinking that led you to believe there is a 50% chance of them having net positive lives, I would probably conclude there is a 99% chance of them having net positive lives.
If I were 600 times as sensitive to pain as you, I guess I would also be 600 times as sensitive to pleasure. So my guess for the probability that wild invertebrates have positive/negative lives would arguably not change.
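The symmetry point above can be sketched numerically: if a single sensitivity factor multiplies pains and pleasures alike, the sign of expected welfare is unchanged. The numbers below are purely hypothetical placeholders, not estimates from this discussion.

```python
# Hypothetical welfare components, in arbitrary units (illustration only).
PLEASURE = 3.0  # expected pleasure per animal-year
PAIN = 2.0      # expected pain per animal-year

def welfare_sign(sensitivity: float) -> int:
    """Sign of net welfare when one factor scales pains and pleasures alike."""
    net = sensitivity * PLEASURE - sensitivity * PAIN
    return (net > 0) - (net < 0)

# Scaling sensitivity by 600 rescales the magnitude of net welfare,
# but leaves its sign (positive vs negative life) unchanged.
assert welfare_sign(1.0) == welfare_sign(600.0)
```

Since the factor distributes over the difference, it can never flip a net-positive life to net-negative, which is why the probability judgement is unaffected.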
I am again advocating for other ethical frameworks, like preference utilitarianism: they clearly show a preference to live, so giving them a home via habitat preservation or rewilding is good, while killing them is bad.
Could euthanising pets be good for them, even if it goes against their preferences?
I'm not a paid subscriber to Nuno's newsletter, so I can't see it.
Me neither.
I don't expect we will see less than $5M of forecasting grants done by CG in 2026 or 2027, though.
CG's Forecasting Fund granted $15.9M in 2025.
Hi Guy. The bets would be directly beneficial if people who are more accurate donate to more cost-effective interventions. In addition, I wonder whether discussions of bets involving donations and investments could be of higher quality than those of forecasting questions without money on the line. The prospect of winning or losing money usually leads people to investigate their views more.
Forecasting is a dangerous activity, particularly because it is a fun, game-like activity that is nearly perfectly designed to be very attractive to EA/rationalist types because you get to be right when others are wrong, bet on your beliefs, and partake in the cultural practice.
I like bets involving donations and investments as alternatives to forecasting without money on the line.
Hi Marcus. Thanks for the post. I broadly agree.
Coefficient Giving's (CG's) Forecasting Fund has recently been closed.
As of March 30, the Forecasting Fund is no longer active, though we continue to make key forecasting grants through other funds, such as Navigating Transformative AI. This page will be maintained until the end of 2026 as a record of the fund's work.
I think this is more likely to make forecasting grants useful. They will presumably be assessed with the criteria used to evaluate the non-forecasting grants of the respective fund.
@NunoSempere wrote about the end of CG's Forecasting Fund in the last edition of the Forecasting Newsletter. Only paid subscribers can check the relevant section.
We are always in triage
That makes sense. Can I crosspost to the EA Forum arguments from Computational Functionalism Debate (linking to this post too)? I would like to share the Pen & Paper Argument, which is among the ones against CF which I find most persuasive.
I've also donated to the "inverts" project of RP!
Great. You may also be interested in donating to Arthropoda Foundation. I donated a few k$ to them last year. You are most likely aware of them. If relevant to readers, here is the post announcing their launch, and here is their post during the last Marginal Funding Week. They have been funding research informing how to increase the welfare of farmed arthropods, and "are particularly interested in research with a clear path to impact, whether by shaping future science or informing real-world decision-making".
Your updated estimates have huge credible intervals! What is the main source, or sources, of the uncertainty in the model?
The ranges only account for uncertainty in the individual welfare per fully-healthy-animal-year. I consider this roughly proportional to the expected welfare range, so you can say I am just accounting for uncertainty in the expected welfare range. I believe this is the overwhelming driver of the overall uncertainty. Here are the expected welfare ranges as a fraction of that of humans, as a function of the exponent applied to the individual number of neurons.
The 90k (or 90.3k) figure is based on this sentence from this article
Got it. I consider the above ratio reasonable too, but my current best guess is much lower, as I commented above.
This is widely believed to be true outside effective altruism too.