I am a generalist quantitative researcher. I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).
I can help with career advice, prioritisation, and quantitative analyses.
I agree that greater uncertainty about the time until transformative AI (TAI), and therefore less resilience, is a reason to prioritise interventions whose effects are expected to materialise earlier. At a high level, I would model the impact of TAI as increasing the discount rate. For a 10th, 50th, and 90th percentile time until TAI of 100, 300, and 1 k years, I would not care about the uncertainty, because I expect effects after 300 years to be negligible anyway, even without accounting for the additional discounting caused by TAI. However, for a 10th, 50th, and 90th percentile time until TAI of 3, 10, and 30 years, I would care a lot about the uncertainty, because I expect effects after 10 years to be significant for many interventions.
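The intuition above can be sketched numerically. Under constant exponential discounting at an assumed annual rate r (my illustrative choice, not a figure from the comment), the fraction of all discounted impact realised by year T is 1 − exp(−rT), so a median TAI date of 300 years leaves essentially nothing afterwards, while a median of 10 years leaves most of it afterwards:

```python
import math

def share_before(T, r):
    """Fraction of total discounted impact realised by year T,
    i.e. the integral of exp(-r*t) from 0 to T over its total."""
    return 1 - math.exp(-r * T)

r = 0.05  # assumed baseline annual discount rate (illustrative)
for median_tai in (300, 10):
    print(median_tai, round(share_before(median_tai, r), 3))
```

With these assumptions, roughly 100 % of discounted impact falls before a 300-year median TAI date, but only about 39 % falls before a 10-year one, so uncertainty about what happens after TAI matters far more under short timelines.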
Hi, Jim!
Shorter timelines for transformative AI (TAI) would make me prioritise interventions whose effects happen earlier more strongly. There will be more change soon if TAI happens earlier, and I believe effects decay faster when there is more change.
A best guess for the probability of an event constrains how resilient that guess can be. If my best guess is that something is 50 % likely to happen, the probability that I will update to thinking it is 90 % likely should be at most 55.6 % (= 0.50/0.90).
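This bound follows from conservation of expected evidence: the expected posterior must equal the prior, so if with probability p the posterior is 0.9 (and otherwise at least 0), then 0.9p ≤ 0.5. A minimal check, with my own function name:

```python
def max_update_prob(prior, target):
    """Upper bound on the probability of updating to credence `target`,
    given current credence `prior`. Since the expected posterior equals
    the prior and posteriors are non-negative, P(posterior = target)
    can be at most prior / target (a Markov-style bound)."""
    return prior / target

print(round(max_update_prob(0.5, 0.9), 3))  # prints 0.556
```

Setting the other posterior above 0 only tightens the bound, so 0.50/0.90 ≈ 55.6 % is the worst case.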
I recommend research informing how to increase the welfare of soil animals over pursuing whatever land-use-change interventions naively seem to achieve that most cost-effectively. I have very little idea about whether increasing agricultural land, such as by saving human lives, increases or decreases welfare. I am very uncertain about what increases or decreases soil-animal-years, and about whether soil animals have positive or negative lives. So I do not know whether saving human lives sooner increases welfare more or less than saving them later. Assuming that saving human lives increases welfare, I agree doing it earlier increases welfare more if TAI happens earlier.
This seems in tension with what you and @Wladimir J. Alonso say here.
Nevermind. I have been using your estimates for the time in pain as if they do not account for any considerations relevant for interspecies welfare comparisons, in agreement with your statement that "the Welfare Footprint Framework is intentionally agnostic about correction values for interspecific scaling". However, the sentence below made me think no adjustments were needed to compare your estimates for the time humans and shrimp spend in excruciating pain.

In the Welfare Footprint framework, pain intensities are defined as absolute measures, meaning that one hour of Excruciating pain in humans is assumed to be hedonically equivalent to one hour of Excruciating pain in shrimps, if shrimps were capable of experiencing Excruciating pain.

So I mistakenly inferred you were accounting for considerations relevant for interspecies welfare comparisons. However, as you say in the same paragraph, you "hold this assumption as temporary until better evidence allows for a more accurate placement of each experience on an absolute scale".
Thanks for the post, Cameron! I strongly upvoted it. I think it is very valuable to have posts unpacking jobs.
Thanks for sharing! I recently left some related comments on a post from Bentham’s Bulldog, and discussed it in a podcast with him.