Bio

I am a generalist quantitative researcher.

How others can help me

I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments

Topic contributions

I was considering hypothetical scenarios of the type "imagine this offer from MIRI arrived, would a lab accept"

When would the offer from MIRI arrive in the hypothetical scenario? I am sceptical of an honest endorsement from MIRI today being worth 3 billion $, but I do not have a good sense of what MIRI will look like in the future. I would also agree a foolproof AI safety certification is, or will be, worth more than 3 billion $, depending on how it is defined.

Regarding your bets about timelines: I made an 8:1 bet with Daniel Kokotajlo against AI 2027 being as accurate as his previous forecast, so I am not sure which side of the "confident about short timelines" divide you expect me to take.

I was guessing I would have longer timelines. What is your median date of superintelligent AI as defined by Metaculus?

Agreed, Ben. I encouraged Rob to crosspost it on the EA Forum. Thanks to your comment, I just set up a reminder to ping him again in 7 days in case he has not replied by then.

Hi Ruth. I only care about seeking truth to the extent it increases welfare (more happiness, and less pain). I just think applicants optimising for increasing their chances of being funded usually leads to worse decisions, and therefore lower welfare, than them optimising for improving the decisions of the funders. I also do not think there is much of a trade-off between being funded by and improving the decisions of impact-focussed funders, who often value honesty and transparency about the downsides of the project quite highly.

Thanks, Jan. I think it is very unlikely that AI companies with frontier models will seek the technical assistance of MIRI in the way you described in your 1st operationalisation. So I believe a bet which would only resolve in this case has very little value. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bet we could make that is good for both of us under our own views, considering we could otherwise invest our money, and that you could take out loans?

The post The shrimp bet by Rob Velzeboer illustrates why the case for the sentience of Litopenaeus vannamei (whiteleg shrimp), the species accounting for the largest share of farmed shrimp production, is far from strong.

Thanks for sharing, Ben. I like the concept. Do you have a target total (time and financial) cost? I wonder what the ideal ratio is between the total amount granted and the cost for grants of "$670 to $3300".

Nice post, Alex. 

Sometimes when there’s a lot of self doubt it’s not really feasible for me to carefully dismantle all of my inaccurate thoughts. This is where I find cognitive defusion helpful - just separating myself from my thoughts, so rather than saying ‘I don’t know enough’ I say ‘I’m having the thought that I don’t know enough.’ I don’t have to believe or argue with the thought, I can just acknowledge it and return to what I’m doing. 

Clearer Thinking has launched a program to learn cognitive defusion.

Thanks for the nice point, Thomas. Generalising, if the impact is 0 for productivity P_0, and P_av is the productivity of random employees, an employee N times as productive as random employees would be (N*P_av - P_0)/(P_av - P_0) as impactful as random employees. Assuming the cost of employing someone is proportional to their productivity, the cost-effectiveness as a fraction of that of random employees would be (P_av - P_0/N)/(P_av - P_0). So the cost-effectiveness of an infinitely productive employee as a fraction of that of random employees would be P_av/(P_av - P_0) = 1/(1 - P_0/P_av). In this model, super productive employees becoming more productive would not increase their cost-effectiveness. It would just make them more impactful. For your parameters, employing an infinitely productive employee would be 3 (= 1/(1 - 100/150)) times as cost-effective as employing random employees.
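The model above can be sketched in a few lines of Python (the function and variable names are mine, not from the comment; impact is assumed linear in productivity and zero at P_0):

```python
def impact_ratio(n, p_av, p_0):
    """Impact of an employee n times as productive as random employees,
    as a fraction of the impact of random employees.
    Impact is assumed proportional to (productivity - p_0)."""
    return (n * p_av - p_0) / (p_av - p_0)

def cost_effectiveness_ratio(n, p_av, p_0):
    """Cost-effectiveness as a fraction of that of random employees,
    assuming the cost of employing someone is proportional to productivity."""
    return (p_av - p_0 / n) / (p_av - p_0)

# For P_0 = 100 and P_av = 150, cost-effectiveness approaches
# 1/(1 - 100/150) = 3 as n grows, while impact grows without bound.
print(impact_ratio(2, 150, 100))                  # 4.0
print(cost_effectiveness_ratio(1, 150, 100))      # 1.0 (a random employee)
print(cost_effectiveness_ratio(10**9, 150, 100))  # ≈ 3.0
```

The asymptote makes the point in the comment concrete: past a certain productivity, extra productivity buys extra impact but almost no extra cost-effectiveness.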

Thanks for the relevant comment, Nick.

2. I've been generally unimpressed by responses to criticisms of animal sentience. I've rarely seen an animal welfare advocate make an even small concession. This makes me even more skeptical about the neutrality of thought processes and research done by animal welfare folks.

I concede there is huge uncertainty about welfare comparisons across species. For individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" ranging from 0.5 to 1.5, which I believe covers reasonable best guesses, I estimate the absolute value of the total welfare of farmed shrimps ranges from 2.82*10^-7 to 0.282 times that of humans. In addition, I calculate the Shrimp Welfare Project’s (SWP’s) Humane Slaughter Initiative (HSI) has increased the welfare of shrimps 0.00167 (= 2.06*10^-5/0.0123) to 1.67 k (= 20.6/0.0123) times as cost-effectively as GiveWell's top charities increase the welfare of humans. Moreover, I have no idea whether HSI or GiveWell's top charities increase or decrease welfare accounting for effects on soil animals and microorganisms.
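A minimal sketch of how the exponent drives the quoted welfare range. The shrimp-to-human neuron ratio of 10^-6 is my assumption, inferred from the range itself (0.282/2.82*10^-7 = 10^6 over an exponent span of 1); the underlying neuron counts are not given in the comment:

```python
# Assumed individual shrimp-to-human neuron ratio (inferred, not from the comment).
neuron_ratio = 1e-6

# Individual welfare per fully-healthy-animal-year is taken proportional to
# (number of neurons)^exponent, so moving the exponent from 0.5 to 1.5
# rescales the shrimp-to-human welfare ratio by neuron_ratio^(1.5 - 0.5).
high = 0.282                                # total welfare ratio at exponent = 0.5
low = high * neuron_ratio ** (1.5 - 0.5)    # total welfare ratio at exponent = 1.5
print(low)  # ≈ 2.82e-7, the lower end of the quoted range
```

This shows why a 1-unit change in the exponent spans 6 orders of magnitude: the exponent acts on a neuron ratio of about a millionth.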

Yes, it is unlikely these cause extinction, but if they do, no humans means no AI (after all the power plants fail). This seems to imply moving forward with a lot of caution.

Toby and Matthew, what is your guess for the probability of human extinction over the next 10 years? I personally guess 10^-7. I think disagreements are often driven by different assessments of the risk.
