I am a generalist quantitative researcher. I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).
I can help with career advice, prioritisation, and quantitative analyses.
Guillaume and @Bob Fischer, do you think it would be useful to have a similar guide for nematodes? If so, how much funding would you need to do it?
Nematodes are one of the "Four Investigation Priorities" mentioned in chapter 13 of the book The Edge of Sentience by Jonathan Birch. From Andrews (2024):
Even if we grant the author's low confidence in nematodes' having marker five (motivational trade-offs), current science provides ample confidence that nematodes have markers one (nociceptors), two (integrated brain regions), four (responsiveness to analgesics), and seven (sophisticated associative learning). Given high confidence that nematodes have even three of these markers, the report's methodology [Birch et al. (2021)] would have us conclude that there is "substantial evidence" of sentience in nematodes.
I guess research on motivational trade-offs would be the most useful to decrease the uncertainty about the sentience of nematodes. From section 13.4 of The Edge of Sentience:
In C. elegans, behaviour seems to be driven by immediate stimuli, with no reason to posit internal representations. That said, further investigation is clearly warranted, and we have to be open to all possible outcomes of this investigation, including an outcome in which we end up reclassifying nematodes as sentience candidates.
Becerra et al. (2023) has more relevant context about research on the sentience of nematodes.
Still interested in something like this, though might punt on it for a month or two (busy times)
Ok. I will remind you about this in 1.5 months (June 22).
If you know your numbers, that would help - e.g. it sounds like you could suggest numbers that are a good deal to you, but also sound like an obviously good deal to me, despite my less-clear picture? You could also hash that out if desired until I come back :P
I think the probability of human extinction this century is much lower than 1 %. I guess the probability of you not paying me back for reasons that do not have to do with transformative AI (TAI), which I speculate would be around 25 % for a bet resolving at the end of 2034, is much higher than the probability of human extinction, or of additional income no longer being relevant.
Hi Vince. Do you have any plans to estimate the marginal cost-effectiveness of charities? I agree with Giving What We Can (GWWC) that this would be a major improvement to ACE's evaluations.
We believe that ACE's Charity Evaluation Program's current approach does not sufficiently emphasise marginal cost-effectiveness as the most decision-relevant factor in evaluation decisions. For example, ACE primarily evaluates on the basis of organisations' existing programs, rather than explicitly focusing their evaluation on those programs that are most likely to be funded on the margin. This is in direct contrast to ACE's Movement Grants, which explicitly evaluates programs that ACE would be influencing funding to on the margin.
Hi. You may be interested in the PhD thesis Consciousness in Functionally and Spatially Distributed Systems by Duygu Aktaş. It was published this month.
Suppose this was all that existed of you, and your real brain never had existed. Would that mean that you never existed as a conscious being, despite all your thoughts and utterances still being a part of the world?
I think whether my thoughts and utterances would come together with consciousness would strictly depend on how they are produced. I agree they could be reproduced at the computational (input-to-output) level with an arbitrarily high precision by an infinitely powerful digital computer (see Marr's levels of analysis). However, I do not see that as sufficient (or necessary) for consciousness.

An infinitely large lookup table can also reproduce human behaviour at the computational level with an arbitrarily high precision, and I consider it to have the least consciousness possible (practically 0). I believe consciousness depends on algorithms and implementation, not on the input-to-output mapping.

This matters to me because simple logical operations written out by hand with pen and paper can only reproduce the behaviour of humans at the input-to-output level, not at the algorithmic or implementation level. In contrast, they can reproduce the behaviour of digital computers at the computational and algorithmic levels. So my belief that they cannot be conscious makes me very sceptical about digital consciousness without causing a conflict with my belief in human consciousness.
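The lookup-table point can be illustrated with a minimal sketch (the function names and the toy domain are my own illustrative choices, not anything from the discussion): two systems share the same input-to-output mapping, i.e. they are identical at Marr's computational level, while their algorithms and implementations differ completely.

```python
# Two systems with identical input-to-output behaviour (Marr's computational
# level) but different algorithms/implementations.

def add_algorithmic(a: int, b: int) -> int:
    # Computes the sum via the machine's addition circuitry at query time.
    return a + b

# A lookup table precomputed over a finite domain reproduces the same
# input-to-output mapping with no computation at query time.
DOMAIN = range(10)
LOOKUP_TABLE = {(a, b): a + b for a in DOMAIN for b in DOMAIN}

def add_lookup(a: int, b: int) -> int:
    # Retrieves the answer; no addition algorithm is executed here.
    return LOOKUP_TABLE[(a, b)]

# Both agree on every input in the domain, yet their internal processes differ.
assert all(add_algorithmic(a, b) == add_lookup(a, b)
           for a in DOMAIN for b in DOMAIN)
```

The comment's claim is that facts about consciousness could distinguish the two systems even though no input-to-output test can.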
Hi Toby. Thanks for the comment.
If the human brain operates according to the known laws of physics, then in principle your brain could be simulated with a pen and paper (at least given unlimited time, ink, and paper), and it would behave identically to the real thing (it would talk and think like you and have all your opinions).
One would need infinite resources to fully reproduce the behaviour of the brain assuming the universe is continuous. Even if the universe is discrete, one would need an unfeasibly large amount of resources. The human brain has a volume of around 0.00120 m^3 (= (1.13 + 1.26)*10^-3/2). The Planck volume is 4.22*10^-105 m^3. So the volume of a human brain corresponds to 2.84*10^101 (= 0.00120/(4.22*10^-105)) times the Planck volume. Even assuming all the information in a volume equal to the Planck volume can be represented by a single bit, one would need 2.84*10^101 bits to fully represent the state of a human brain. This is more bits than the 10^80 or so atoms in the universe, and one needs more than 1 atom per bit in a digital computer.
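The arithmetic above can be checked in a few lines (the brain volumes are the female and male means used in the comment; the Planck volume is the standard value quoted there):

```python
# Ratio of human brain volume to the Planck volume, as an upper-bound-style
# estimate of the bits needed to represent the brain's state at that scale.
brain_volume = (1.13e-3 + 1.26e-3) / 2  # m^3, mean of female and male brains
planck_volume = 4.22e-105               # m^3
bits_needed = brain_volume / planck_volume

atoms_in_universe = 1e80  # rough order-of-magnitude estimate

print(f"{bits_needed:.2e}")                 # ≈ 2.83e+101 bits
print(bits_needed > atoms_in_universe)      # True: more bits than atoms
```

Even at 1 bit per Planck volume, the count exceeds the number of atoms in the observable universe by over 20 orders of magnitude.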
I don't get why the "moment of experience taking a thousand years" thing is supposed to be so weird? If we slowed down all the processes in your brain then moments of experience would take longer in physical time. That's not an argument against your consciousness being real. And this isn't a hypothetical. We can literally do that by sending you on a spaceship close to the speed of light, and that's exactly what would happen!
This is not what would happen under special relativity. If I was sent on a spaceship close to the speed of light, I would continue aging normally in my frame of reference. If I travelled for N years in the frame of reference of the spaceship, I would become N years older biologically speaking (neglecting the effects of microgravity). If I then returned to Earth, more than N years would have passed on Earth. So I would have effectively time-travelled into the future on Earth.
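The asymmetry can be made concrete with the Lorentz factor: for a round trip at constant speed v, the Earth-frame elapsed time is γ times the traveller's proper time. The speed and trip length below are illustrative assumptions, not figures from the discussion.

```python
import math

v_over_c = 0.99  # assumed spaceship speed as a fraction of the speed of light
N = 10.0         # years elapsed in the spaceship's (traveller's) frame

# Lorentz factor: gamma = 1 / sqrt(1 - (v/c)^2)
gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)
earth_years = gamma * N  # years elapsed on Earth during the trip

print(round(gamma, 2))        # ≈ 7.09
print(round(earth_years, 1))  # ≈ 70.9 years on Earth for 10 spaceship years
```

So the traveller ages N years while more than N years pass on Earth, matching the comment's point: the traveller's own processes are not slowed in the traveller's frame.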
CF predicts that some sets of AND, OR, and NOT operations are conscious even if run at an arbitrarily low speed in their local frame of reference. So all my brain processes would have to slow down in the frame of reference of the brain for the analogy to hold. I guess one can get the closest to this slow down with cryopreserved brains, and I do not think these are conscious.
That makes sense. I was thinking Guillaume would have thoughts about the cost, and that you and Guillaume could have thoughts about the benefits. I wonder what would be Arthropoda Foundation's willingness to pay for a similar guide about research on the sentience of nematodes.