I am a generalist quantitative researcher. I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).
I can help with career advice, prioritisation, and quantitative analyses.
Thanks, Jan. I think it is very unlikely that AI companies with frontier models will seek the technical assistance of MIRI in the way you described in your 1st operationalisation. So I believe a bet which would only resolve in this case has very little value. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bet we could make that would be good for both of us under our own views, considering we could invest our money, and that you could take out loans?
The post The shrimp bet by Rob Velzeboer illustrates why the case for the sentience of Litopenaeus vannamei (whiteleg shrimp), the species accounting for most farmed-shrimp production, is far from strong.
Nice post, Alex.
Sometimes when there’s a lot of self-doubt, it’s not really feasible for me to carefully dismantle all of my inaccurate thoughts. This is where I find cognitive defusion helpful: just separating myself from my thoughts, so rather than saying ‘I don’t know enough’ I say ‘I’m having the thought that I don’t know enough.’ I don’t have to believe or argue with the thought; I can just acknowledge it and return to what I’m doing.
Clearer Thinking has launched a program to learn cognitive defusion.
Thanks for the nice point, Thomas. Generalising, if the impact is 0 for productivity P_0, and P_av is the productivity of random employees, an employee N times as productive as random employees would be (N*P_av - P_0)/(P_av - P_0) times as impactful as random employees. Assuming the cost of employing someone is proportional to their productivity, the cost-effectiveness as a fraction of that of random employees would be (P_av - P_0/N)/(P_av - P_0). So the cost-effectiveness of an infinitely productive employee as a fraction of that of random employees would be P_av/(P_av - P_0) = 1/(1 - P_0/P_av). In this model, super productive employees becoming more productive would not increase their cost-effectiveness. It would just make them more impactful. For your parameters, employing an infinitely productive employee would be 3 (= 1/(1 - 100/150)) times as cost-effective as employing random employees.
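The model above can be sketched in a few lines of code. This is a minimal illustration with hypothetical function names; the only assumptions are the ones stated in the comment (impact proportional to P - P_0, cost proportional to P), with P_0 = 100 and P_av = 150 as in the parameters discussed.

```python
P_0 = 100   # productivity at which impact is 0
P_av = 150  # productivity of random employees

def relative_impact(N):
    """Impact of an employee N times as productive as random employees,
    as a multiple of a random employee's impact (proportional to P - P_0)."""
    return (N * P_av - P_0) / (P_av - P_0)

def relative_cost_effectiveness(N):
    """Cost-effectiveness as a multiple of that of random employees,
    assuming cost is proportional to productivity."""
    return (P_av - P_0 / N) / (P_av - P_0)

print(relative_impact(2))              # 4.0 (twice as productive, 4 times the impact)
print(relative_cost_effectiveness(2))  # 2.0
# Limit as N -> infinity: 1/(1 - P_0/P_av), i.e. ~3 for these parameters.
print(1 / (1 - P_0 / P_av))
```

Note that relative_cost_effectiveness approaches its cap of 3 as N grows, while relative_impact grows without bound, which is the distinction the comment draws.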
Thanks for the relevant comment, Nick.
2. I've been generally unimpressed by responses to criticisms of animal sentience. I've rarely seen an animal welfare advocate make even a small concession. This makes me even more skeptical about the neutrality of the thought processes and research done by animal welfare folks.
I concede there is huge uncertainty about welfare comparisons across species. For individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" ranging from 0.5 to 1.5, which I believe covers reasonable best guesses, I estimate the absolute value of the total welfare of farmed shrimps ranges from 2.82*10^-7 to 0.282 times that of humans. In addition, I calculate the Shrimp Welfare Project’s (SWP’s) Humane Slaughter Initiative (HSI) has increased the welfare of shrimps 0.00167 (= 2.06*10^-5/0.0123) to 1.67 k (= 20.6/0.0123) times as cost-effectively as GiveWell's top charities increase the welfare of humans. Moreover, I have no idea whether HSI or GiveWell's top charities increase or decrease welfare accounting for effects on soil animals and microorganisms.
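The arithmetic behind the quoted range can be checked directly. This is a minimal sketch with hypothetical variable names; the three numbers are the ones stated in the comment (the bounds for HSI's cost-effectiveness and the figure for GiveWell's top charities, in the author's welfare-per-dollar units).

```python
givewell_ce = 0.0123  # GiveWell top charities (author's units)
hsi_ce_low = 2.06e-5  # lower bound for SWP's Humane Slaughter Initiative
hsi_ce_high = 20.6    # upper bound

print(hsi_ce_low / givewell_ce)   # ~0.00167
print(hsi_ce_high / givewell_ce)  # ~1.67e3
```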
Yes, it is unlikely these cause extinction, but if they do, no humans means no AI (after all the power plants fail). This seems to imply moving forward with a lot of caution.
Toby and Matthew, what is your guess for the probability of human extinction over the next 10 years? I personally guess 10^-7. I think disagreements are often driven by different assessments of the risk.
Hi David!
Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower.
Would the dinosaurs have argued their extinction would be bad, although it may well have contributed to the emergence of mammals and ultimately humans? Would the vast majority of non-human primates have argued that humans taking over would be bad?
But it is extremely unlikely to have exactly the same value. Therefore, in all likelihood, whether AI takes over or not does have long-term and enormous implications.
Why? It could be that future value is not exactly the same whether or not AI takes over by a given date, but that the long-term difference in value is negligible.
Thanks for the great post, Matthew. I broadly agree.
If we struggle to forecast impacts over mere decades in a data-rich field, then claiming to know what effects a policy will have over billions of years is simply not credible.
I very much agree. I also think what ultimately matters for the uncertainty at a given time in the future is not the time from now until then, but the amount of change from now until then. As a 1st approximation, I would say the horizon of predictability is inversely proportional to the annual growth rate of gross world product (GWP). If this becomes 10 times as fast as some predict, I would expect the horizon of predictability (regarding a given topic) to shorten, for instance, from a few decades to a few years.
To demonstrate that delaying AI would have predictable and meaningful consequences on an astronomical scale, you would need to show that those consequences will not simply wash out and become irrelevant over the long run.
Right. I would just say "after significant change (regardless of when it happens)" instead of "over the long run" in light of my point above.
Hi Ruth. I only care about seeking truth to the extent it increases welfare (more happiness, and less pain). I just think applicants optimising for increasing their chances of being funded usually leads to worse decisions, and therefore lower welfare, than them optimising for improving the decisions of the funders. I also do not think there is much of a trade-off between being funded by and improving the decisions of impact-focussed funders, who often value honesty and transparency about the downsides of the project quite highly.