It seems quite likely to me that all the results on creatine and cognition are bogus; maybe I'd bet at 4:1 against there being a real effect >0.05 SD.
Unless I'm misunderstanding, does this mean you'd bet that the effects are even smaller than what this study found on its preregistered tasks? If so, do you mind sharing why?
Does this study tell us much about the counterfactual advancement of policies that pass the threshold by larger margins, like a few percentage points or even double-digit percentage points? Presumably those are more popular, so more likely to be passed eventually anyway. Some might still be popular but neglected because they aren't high priorities in politics, though, e.g. animal welfare.
I think if there's anything they should bother to be publicly transparent about in order to subject it to further scrutiny, it's their biggest cruxes for resource allocation between causes. Moral weights, the theory of welfare and the marginal cost-effectiveness of animal welfare seem pretty decisive for GHD vs animal welfare.
There are other simple methodologies that make vaguely plausible guesses (under hedonism), like:
In my view, 1, 2 and 3 are more plausible and defensible than views that would give you (cortical or similar function) neuron counts as a good approximation. I also think the actually correct answer, if there is one (so excluding the individual-relative interpretation of 1), will look like 2, but more complex and with possibly different functions. RP explicitly considered 1 and 3 in its work. These three models give chickens >0.1x humans' welfare ranges:
You can probably also come up with models that assign even lower welfare ranges to other animals, of course, including some relatively simple ones, though none simpler than 1.
Note that using cortical (or similar function) neuron counts also makes important assumptions about which neurons matter and when. Not all plausibly conscious animals have cortices, so you need to identify which structures have similar roles, or else, chauvinistically, rule these animals out entirely regardless of their capacities. So this approach is not that simple, either. Just counting all neurons would be simpler.
(I don't work for RP anymore, and I'm not speaking on their behalf.)
They won't be literally identical: they'll differ in many ways, like physical details, cognitive expression and behavioural influence. They're separate instantiations of the same broad class of functions or capacities.
I would say the number of times a function or capacity is realized in a brain can be relevant, but it seems pretty unlikely to me that a person can experience suffering hundreds of times simultaneously (and hundreds of times more than chickens, say). Rethink Priorities looked into these kinds of views here. (I'm a co-author on that article, but I don't work at Rethink Priorities anymore, and I'm not speaking on their behalf.)
FWIW, I started out very pro-neuron counts (I defended them here and here), and then others at RP, collaborators and further investigation of my own moved me away from the view.
We have a far more advanced consciousness and self-awareness, which may make our experience of pain orders of magnitude worse (or at least different) than for many animals - or not.
I agree that that's possible and worth including under uncertainty, but it doesn't answer the "why", so it's hard to justify giving it much or disproportionate weight (relative to other accounts) without further argument. Why would self-awareness, say, make being in intense pain orders of magnitude worse?
And are we even much more self-aware than other animals when we are in intense pain? One of the functions of pain is to capture our attention, and the more intense the pain, the more it does so. That might limit the use of our capacities for self-awareness: we'd be too focused on and distracted by the pain. Or maybe our self-awareness or other advanced capacities distract us from the pain, making it less intense than in other animals.
(My own best guess is that at the extremes of excruciating pain, sophisticated self-awareness makes little difference to the intensity of suffering.)
The old cortical neuron count proxy for moral weight says that one chicken life year is worth 0.003, which is 1/100th of the RP welfare range estimate of 0.33. This number would mean chicken interventions are only 0.7x as effective as human interventions, rather than 700x as effective.
700/100 = 7, not 0.7.
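To spell out the correction with the figures quoted above (0.003 under the cortical neuron count proxy vs RP's 0.33, a ratio of roughly 1/110):

$$
700 \times \frac{0.003}{0.33} \approx \frac{700}{110} \approx 6.4
$$

i.e., roughly 7x, not 0.7x.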
But didn't RP prove that cortical neuron counts are fake?
Hardly. They gave a bunch of reasons why we might be skeptical of neuron count (summarised here). But I think the reasons in favour of using cortical neuron count as a proxy for moral weight are stronger than the objections.
I don't think the reasons in favour of using neuron counts provide much support for weighting by neuron counts, or any function of them, in practice. Rather, they primarily support using neuron counts to fill in missing data about the functions and capacities that do determine welfare ranges (EDIT: or moral weights), in models of how welfare ranges (EDIT: or moral weights) are determined by functions and capacities. There's a general trend that animals with more neurons have more capacities and more sophisticated versions of some capacities.
However, most functions and capacities seem pretty irrelevant to welfare ranges, even if relevant for what welfare is realized in specific circumstances. If an animal can already experience excruciating pain, presumably near the extreme of its welfare range, what do humans have that would make excruciating pain far worse for us in general, or otherwise give us far wider welfare ranges? And why?
There's a related tag Meat-eater problem, with some related posts. I think this is less worrying in low-income countries where GiveWell-recommended charities work, because animal product consumption is still low and factory farming has not yet become the norm. That being said, factory farming is becoming increasingly common, and it could be common for the descendants of the people whose lives are saved.
Then, there are also complicated wild animal effects from animal product consumption and generally having more humans that could go either way morally, depending on your views.
I think there are some interesting arguments here, but the argument in "4.3 Computational Equivalence" can probably still be saved, because it shouldn't depend on any controversial parts of computational theories.
Instead, imagine two identical brains undergoing the same physical events, but one doing so at twice the speed and over a period of time half as long.[1] Neural signals travel twice as fast, the time between successive neuron firings is halved, etc.
In my view, any plausible theory of consciousness and moral value should assign the same inherent hedonistic value to the two brains over their respective time intervals.[2] Computational theories are just one class of theories that do. But we can abstract away different physical details.
On the other hand, I can imagine two people with identical preferences living (nearly) identical lives over the same objective time intervals (from the same reference frame), but one experiencing events twice as quickly as the other, and this making little difference to the moral value on preference accounts. Each person has preferences and goals like getting married, having children, for there to be less suffering in the world, etc. While how they subjectively care about those goals matters, the subjective rate of experience doesn't make their preferences more or less important (at least not in a straightforward multiplicative way). Rather, we might model them as having preferences over world states or world histories, and their subjective appreciation of how things turn out isn't what matters; it's just that things turn out more or less the way they prefer.
Maybe the thought experiment is in fact physically impossible, except through relativity, which the author addresses. But I don't think the right theory of consciousness should depend on those details, and we can make the brains and events different enough while still preserving value and having corresponding events unfold at twice the speed.
And I'd guess the subjective value is the same from the perspectives of those brains in most cases, but people can care about their relationships with the world in ways we might also care about, e.g. maybe they want the timing of their subjective experiences or neural activity to match some external events. That seems like an odd preference, but I'm still inclined to care about it.