I am a generalist quantitative researcher. I am open to volunteering and paid work (I usually ask for $20/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).
I can help with career advice, prioritisation, and quantitative analyses.
I would easily be ok with 10 minutes of excruciating pain for 24 hours of fully healthy life.
Would you prefer 10 min of "severe burning in large areas of the body, dismemberment, or extreme torture" (excruciating pain) over losing 24 h of fully healthy life (ignoring the indirect effects of the excruciating pain; it would probably lead to death, and therefore result in a loss of life worse than losing 24 h of fully healthy life)?
If we take the conservative 10 minutes per 24 hours that I would accept, that would make me 600 times less sensitive to pain than you are. So if I took the very same line of thinking that led you to believe there is a 50% chance of them having net positive lives, I would probably conclude there is a 99% chance of them having net positive lives.
If I were 600 times as sensitive to pain as you, I guess I would also be 600 times as sensitive to pleasure. So my guess for the probability that wild invertebrates have positive/negative lives would arguably not change.
I am again advocating for other ethical frameworks like preference utilitarianism: they clearly show a preference to live, so giving them a home via habitat preservation or rewilding is good, while killing them is bad.
Could euthanising pets be good for them, even if it goes against their preferences?
I'm not a paid subscriber to Nuno, so I can't see it.
Me neither.
I don't expect we will see less than $5M of forecasting grants made by CG in 2026 or 2027, though.
CG's Forecasting Fund granted $15.9M in 2025.
Hi Guy. The bets would be directly beneficial if people who are more accurate donated to more cost-effective interventions. In addition, I wonder whether discussions of bets involving donations and investments could be higher quality than discussions of forecasting questions without money on the line. The prospect of winning or losing money usually leads people to investigate their views more.
Forecasting is a dangerous activity, particularly because it is a fun, game-like activity that is nearly perfectly designed to be very attractive to EA/rationalist types because you get to be right when others are wrong, bet on your beliefs, and partake in the cultural practice.
I like bets involving donations and investments as alternatives to forecasting without money on the line.
Hi Marcus. Thanks for the post. I broadly agree.
Coefficient Giving's (CG's) Forecasting Fund has recently been closed.
As of March 30, the Forecasting Fund is no longer active, though we continue to make key forecasting grants through other funds, such as Navigating Transformative AI. This page will be maintained until the end of 2026 as a record of the fund's work.
I think this makes it more likely that forecasting grants will be useful. They will presumably be assessed with the criteria used to evaluate the non-forecasting grants of the respective fund.
@NunoSempere wrote about the end of CG's Forecasting Fund in the last edition of the Forecasting Newsletter. Only paid subscribers can check the relevant section.
We are always in triage
That makes sense. Can I crosspost to the EA Forum arguments from Computational Functionalism Debate (linking to this post too)? I would like to share the Pen & Paper Argument, which is among the ones against CF which I find most persuasive.
Hi Marcus.
Are you open to funding research on the sentience of nematodes? This is one of the "Four Investigation Priorities" mentioned in section 13.4 of chapter 13 of the book The Edge of Sentience by Jonathan Birch.
How about funding research on the time trade-offs between the pains defined by the Welfare Footprint Institute (WFI) by surveying people who have recently experienced excruciating pain? I think people suffering from cluster headaches would be good candidates. Ambitious Impact (AIM) currently estimates suffering-adjusted days (SADs) assuming that excruciating pain is 48.0 (= 11.7/0.244) times as intense as hurtful pain (you can ask Vicky Cox for the sheet), which I believe is very off. It implies 16 h of "awareness of Pain is likely to be present most of the time" (hurtful pain) is as bad as 20.0 min (= 16/48.0*60) of "severe burning in large areas of the body, dismemberment, or extreme torture" (excruciating pain). Here is a thread where I discussed AIM's pain intensities with the person responsible for their last iteration.
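As a sanity check, the arithmetic behind AIM's implied trade-off above can be sketched in a few lines of Python (the intensities 11.7 and 0.244 are the figures quoted above; everything else is just derived from them):

```python
# AIM's assumed intensities for excruciating and hurtful pain
# (figures quoted above; the sheet itself is the authoritative source).
excruciating_intensity = 11.7
hurtful_intensity = 0.244

# Implied ratio: how many times as intense excruciating pain is as hurtful pain.
ratio = excruciating_intensity / hurtful_intensity  # ~48.0

# Duration of excruciating pain deemed as bad as 16 h of hurtful pain.
hurtful_hours = 16
equivalent_excruciating_min = hurtful_hours / ratio * 60  # ~20.0 min
```

So under AIM's numbers, 16 h of hurtful pain is traded off against roughly 20 min of excruciating pain, which is the equivalence I find very off.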
How about funding research on welfare comparisons across species? In Bob Fischer's book about comparing welfare across species, the tentative sentience-adjusted welfare range of shrimps is 8.0% of that of humans. However, if the sentience-adjusted welfare range is proportional to "individual number of neurons"^"exponent", and "exponent" can range from 0 to 2, which I consider reasonable, the sentience-adjusted welfare range of shrimp can range from 10^-12 (= (10^-6)^2) to 1 times that of humans.
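The power-law sensitivity above can be made concrete with a minimal Python sketch. The shrimp-to-human neuron ratio of 10^-6 is the illustrative figure used in the calculation above, not a precise empirical estimate:

```python
# Illustrative assumption from the text: shrimp have ~10^-6 times
# as many neurons as humans.
NEURON_RATIO = 1e-6

def welfare_range_ratio(exponent):
    """Sentience-adjusted welfare range of shrimp relative to humans,
    assuming welfare range is proportional to neurons**exponent."""
    return NEURON_RATIO ** exponent

low = welfare_range_ratio(2)   # ~10^-12, the pessimistic end
high = welfare_range_ratio(0)  # 1, the end where neuron count is irrelevant
```

Twelve orders of magnitude of variation from a single uncertain exponent is the point: the headline 8.0% figure is extremely sensitive to the assumed scaling.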