I am a generalist quantitative researcher. I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).
I can help with career advice, prioritisation, and quantitative analyses.
Hi titotal. This website "presents the AI Futures Model (Dec 2025 version), following up on the timelines and takeoff models we [AI Futures Project] published alongside AI 2027". I am sure many would be interested in a post with your thoughts, myself included! As expected based on your analysis, the parameter defining "How much easier/harder each coding time horizon doubling gets" is crucial. If I set it to 1, such that the time horizon increases exponentially with the effective training compute, as it arguably has recently, the automated coder only arrives in 2042.
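To illustrate what the parameter does, here is a toy sketch of my own (not the model's actual code; the parameter r below stands in for the doubling-difficulty parameter). Each successive doubling of the coding time horizon costs r times as much effective compute as the previous one, so r = 1 gives exponential growth of the horizon in compute, r < 1 superexponential growth, and r > 1 subexponential growth.

```python
# Toy sketch of how the doubling-difficulty parameter shapes the trajectory
# (my own illustration, not the AI Futures Model's code). The k-th doubling
# of the time horizon costs r**(k - 1) doublings of effective compute, so n
# horizon doublings cost (1 - r**n) / (1 - r) compute doublings for r != 1.
import math

def horizon(compute_doublings, r, h0=1.0):
    """Time horizon (in units of h0) after `compute_doublings` doublings
    of effective training compute, for doubling-difficulty parameter r."""
    if r == 1.0:
        n = compute_doublings  # exponential: 1 horizon doubling per compute doubling
    else:
        # Invert the cumulative cost (1 - r**n) / (1 - r) = compute_doublings.
        # For r < 1 the total cost of infinitely many doublings is finite
        # (1 / (1 - r)), which is what produces the model's fast takeoffs.
        n = math.log(1 - compute_doublings * (1 - r)) / math.log(r)
    return h0 * 2 ** n

for r in (0.9, 1.0, 1.1):
    print(r, [round(horizon(c, r), 1) for c in (2, 4, 8)])
```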
Hi Bob.
In addition, we’d like to adapt the work we’ve been doing on our Digital Consciousness Model for the MWP, which uses a Bayesian approach.
I do not see much value in improving the estimates for the probability of sentience presented in your book. I believe it is more important to decrease the uncertainty in the (expected hedonistic) welfare per unit time conditional on sentience, which I think is much larger than that in the probability of sentience.
I also worry about analysing just the probability of consciousness/sentience, since it is not independent of the welfare per unit time conditional on consciousness/sentience. Less strict operationalisations of the probability of consciousness/sentience will tend to result in a lower welfare per unit time conditional on consciousness/sentience, as they include more marginal cases.
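A toy illustration of why the two quantities cannot be analysed separately (the numbers are made up): expected welfare per unit time is the product of the probability of sentience and the welfare conditional on sentience, and a looser operationalisation can raise the former while lowering the latter, leaving the product unchanged.

```python
# Made-up numbers illustrating why P(sentience) alone is uninformative:
# expected welfare per unit time = P(sentience) * E[welfare | sentience].
# A looser operationalisation raises P(sentience) but pulls in marginal
# cases, lowering the conditional welfare; here the product is the same.
operationalisations = {
    "strict": {"p_sentience": 0.05, "welfare_given_sentience": 0.20},
    "loose": {"p_sentience": 0.50, "welfare_given_sentience": 0.02},
}

for name, d in operationalisations.items():
    ev = d["p_sentience"] * d["welfare_given_sentience"]
    print(f"{name}: expected welfare per unit time = {ev:.3f}")
```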
Funding is, and long has been, the bottleneck
Have large funders explained their lack of interest? If not, what is your best guess?
Great point, Jeff. I agree that initially assuming (expected hedonistic) welfare per unit time is proportional to the number of atoms, cells, or neurons is much more reasonable than supposing it is the same for all organisms.
I estimated the total welfare of animal populations assuming individual welfare per fully-happy-animal-year is proportional to "number of neurons"^"exponent of the number of neurons". Phil, I had already shared the post with you. I am linking it here because it is related to your post.
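As a rough sketch of the type of calculation (my own simplification with illustrative numbers; the actual estimates are in the linked post), welfare per fully-happy-animal-year can be normalised to 1 for humans and scaled by a power of the neuron count.

```python
# Simplified version of the power-law model (illustrative numbers only; the
# actual estimates are in the linked post). Welfare per fully-happy-animal-
# year is proportional to (number of neurons)**k, normalised to 1 for humans.
HUMAN_NEURONS = 8.6e10  # roughly the number of neurons in a human brain

def total_welfare(population, neurons, k):
    """Total welfare of a population in fully-happy-human-year equivalents."""
    welfare_per_year = (neurons / HUMAN_NEURONS) ** k
    return population * welfare_per_year

# Hypothetical example: 26 billion chickens with ~2e8 neurons each.
for k in (0.5, 1.0):
    print(f"k = {k}: {total_welfare(26e9, 2e8, k):.2e} human-year equivalents")
```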
Hi Cat and Yulia.
I am pessimistic about using data about suicides at the outreached schools to estimate the effect of the school awareness packages. For Yulia's estimate that this program reached 10,875 students (which I believe is too high), and a suicide rate of 7.26*10^-5 suicides per student-year, one should expect 0.790 suicides per year (= 10.875*10^3*7.26*10^-5) in the outreached schools. For Yulia's guess that the program decreases suicides by 25 %, one should expect 0.592 suicides per year (= 0.790*(1 - 0.25)). I think it is going to be quite hard to distinguish between uncertain distributions whose means are 0.790 and 0.592 suicides per year. I suspect one may easily have to wait 10 years to know whether there is an effect of 25 %.
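To make the 10-year guess concrete, here is a rough power calculation of my own (not from the comment), modelling yearly suicide counts as Poisson and asking how often a one-sided 5 % test would detect the drop from 0.790 to 0.592 suicides per year.

```python
# Rough power check (my own sketch): years of data needed to distinguish a
# baseline of 0.790 suicides/year from a treated rate of 0.592/year,
# modelling yearly counts in the outreached schools as Poisson.
from scipy.stats import poisson

baseline, treated = 0.790, 0.592

for years in (1, 5, 10, 20, 30):
    # Reject the baseline if the observed count falls at or below the largest
    # threshold keeping the false-positive rate at or below 5 %.
    threshold = poisson.ppf(0.05, baseline * years)
    if poisson.cdf(threshold, baseline * years) > 0.05:
        threshold -= 1
    power = poisson.cdf(threshold, treated * years)
    print(f"{years:>2} years: power ≈ {power:.2f}")
```

With yearly counts this small, the power stays well below conventional levels even after decades of data.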
To get more data more quickly, I would track not only the number of suicides, but also outcomes that are known to predict suicides. For example, there may be many suicide attempts per suicide, and I guess the number of suicides is roughly proportional to the number of suicide attempts. One could also track attitudes and behaviours related to suicide which may be good predictors of suicides, for instance, planning a suicide. At the furthest remove from tracking the number of suicides, one could simply ask the students in the outreached schools to what extent they have engaged with the awareness packages.
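Using the same Poisson test as the sketch above, and a hypothetical ratio of around 20 suicide attempts per suicide (an assumption for illustration, not a sourced figure), tracking attempts instead of suicides scales the yearly counts by 20 and gains power far more quickly.

```python
# Follow-up to the sketch above, with a hypothetical ratio of ~20 attempts
# per suicide (an assumption for illustration, not a sourced figure). If the
# program reduced attempts by the same 25 %, yearly counts scale by 20.
from scipy.stats import poisson

ratio = 20  # hypothetical attempts-to-suicides ratio
baseline, treated = 0.790 * ratio, 0.592 * ratio

for years in (1, 3, 5):
    threshold = poisson.ppf(0.05, baseline * years)
    if poisson.cdf(threshold, baseline * years) > 0.05:
        threshold -= 1
    power = poisson.cdf(threshold, treated * years)
    print(f"{years} years: power ≈ {power:.2f}")
```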
Hi Lorenzo.
@Vasco Grilo🔸 given that your name is on the https://www.forgetveganuary.com/campaign and you're active on this forum, I'm curious what you think about this. Were you informed? Edit: they will remove that section from the page
I was not informed.
I agree that greater uncertainty about the time until TAI, and therefore less resilience, is a reason for prioritising interventions whose effects are expected to materialise earlier. At a high level, I would model the impact of TAI as increasing the discount rate. For a 10th, 50th, and 90th percentile time until TAI of 100, 300, and 1 k years, I would not care about the uncertainty, because I expect effects after 300 years to be negligible anyway, even without accounting for the additional discounting caused by TAI. However, for a 10th, 50th, and 90th percentile time until TAI of 3, 10, and 30 years, I would care a lot about the uncertainty, because I expect effects after 10 years to be significant for many interventions.
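As a sketch of what "increasing the discount rate" could mean here (my own illustration, not a model from the comment), one can fit a lognormal to the stated percentiles and use the probability that TAI has not yet arrived by year t as an extra discount factor on effects at year t.

```python
# Sketch (my own illustration): fit a lognormal to the 10th and 90th
# percentiles of the time until TAI, and discount effects at year t by the
# probability that TAI has not yet arrived by then. The implied medians
# (sqrt(p10 * p90)) are ~316 and ~9.5 years, close to the 300 and 10 above.
import numpy as np
from scipy.stats import lognorm, norm

def tai_distribution(p10, p90):
    """Lognormal whose 10th/90th percentiles match p10/p90 (in years)."""
    mu = (np.log(p10) + np.log(p90)) / 2
    sigma = (np.log(p90) - np.log(p10)) / (2 * norm.ppf(0.9))
    return lognorm(s=sigma, scale=np.exp(mu))

for p10, p90, t in ((100, 1000, 300), (3, 30, 10)):
    dist = tai_distribution(p10, p90)
    print(f"p10 = {p10}, p90 = {p90}: P(no TAI by year {t}) = {dist.sf(t):.2f}")
```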
I see. Thanks for clarifying.