Bio

Participation
4

I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How others can help me

I am open to volunteering and paid work (I usually ask for $20/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments
2823

Topic contributions
40

The post The shrimp bet by Rob Velzeboer illustrates why the case for the sentience of Litopenaeus vannamei (whiteleg shrimp), the species accounting for most farmed-shrimp production, is far from strong.

Thanks for sharing, Ben. I like the concept. Do you have a target total (time and financial) cost? I wonder what the ideal ratio is between the total amount granted and the cost for grants of "$670 to $3300".

Nice post, Alex. 

Sometimes when there’s a lot of self-doubt it’s not really feasible for me to carefully dismantle all of my inaccurate thoughts. This is where I find cognitive defusion helpful: just separating myself from my thoughts, so rather than saying ‘I don’t know enough’ I say ‘I’m having the thought that I don’t know enough.’ I don’t have to believe or argue with the thought, I can just acknowledge it and return to what I’m doing. 

Clearer Thinking has launched a program to learn cognitive defusion.

Thanks for the nice point, Thomas. Generalising, if the impact is 0 for productivity P_0, and P_av is the productivity of random employees, an employee N times as productive as random employees would be (N*P_av - P_0)/(P_av - P_0) times as impactful as random employees. Assuming the cost of employing someone is proportional to their productivity, the cost-effectiveness as a fraction of that of random employees would be (P_av - P_0/N)/(P_av - P_0). So the cost-effectiveness of an infinitely productive employee as a fraction of that of random employees would be P_av/(P_av - P_0) = 1/(1 - P_0/P_av). In this model, super productive employees becoming more productive would not increase their cost-effectiveness. It would just make them more impactful. For your parameters, employing an infinitely productive employee would be 3 (= 1/(1 - 100/150)) times as cost-effective as employing random employees.
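The model above can be sketched in a few lines of Python. The parameters P_0 = 100 and P_av = 150 are the ones from the comment; everything else follows directly from the formulas.

```python
# Sketch of the comment's model: impact is 0 at productivity P_0, and
# cost is assumed proportional to productivity.

def impact_ratio(n, p0, p_av):
    """Impact of an employee n times as productive as random employees,
    as a multiple of a random employee's impact."""
    return (n * p_av - p0) / (p_av - p0)

def cost_effectiveness_ratio(n, p0, p_av):
    """Cost-effectiveness as a multiple of a random employee's."""
    return (p_av - p0 / n) / (p_av - p0)

p0, p_av = 100, 150
print(impact_ratio(2, p0, p_av))               # 4.0: twice as productive, 4 times the impact
print(cost_effectiveness_ratio(2, p0, p_av))   # 2.0
print(cost_effectiveness_ratio(1e9, p0, p_av)) # approaches the limit 1/(1 - P_0/P_av) = 3
```

Note how impact grows without bound in N while cost-effectiveness saturates at 3, which is the point of the comment.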

Thanks for the relevant comment, Nick.

2. I've been generally unimpressed by responses to criticisms of animal sentience. I've rarely seen an animal welfare advocate make even a small concession. This makes me even more skeptical about the neutrality of thought processes and research done by animal welfare folks.

I concede there is huge uncertainty about welfare comparisons across species. For individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" ranging from 0.5 to 1.5, which I believe covers reasonable best guesses, I estimate the absolute value of the total welfare of farmed shrimps ranges from 2.82*10^-7 to 0.282 times that of humans. In addition, I calculate the Shrimp Welfare Project’s (SWP’s) Humane Slaughter Initiative (HSI) has increased the welfare of shrimps 0.00167 (= 2.06*10^-5/0.0123) to 1.67 k (= 20.6/0.0123) times as cost-effectively as GiveWell's top charities increase the welfare of humans. Moreover, I have no idea whether HSI or GiveWell's top charities increase or decrease welfare accounting for effects on soil animals and microorganisms.
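The quoted welfare bounds can be reproduced with a short back-of-the-envelope check. The neuron ratio of 10^-6 and the scaling factor k are assumptions for illustration: k is back-solved from the quoted bounds, and would correspond to the population and welfare-per-year terms in the original estimate.

```python
# Rough check of the quoted total-welfare bounds, assuming total welfare
# scales as k * (neuron ratio)^exponent. Both constants are illustrative.

neuron_ratio = 1e-6  # shrimp neurons / human neurons (order of magnitude)
k = 282              # back-solved so exponent = 0.5 gives the quoted upper bound

for exponent in (0.5, 1.5):
    welfare_ratio = k * neuron_ratio ** exponent
    print(exponent, welfare_ratio)
# roughly 0.282 for exponent 0.5 and 2.82e-07 for exponent 1.5,
# matching the quoted range
```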

Yes, it is unlikely these cause extinction, but if they do, no humans means no AI (after all the power plants fail). This seems to imply moving forward with a lot of caution.

Toby and Matthew, what is your guess for the probability of human extinction over the next 10 years? I personally guess 10^-7. I think disagreements are often driven by different assessments of the risk.

Hi David!

Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower.

Would the dinosaurs have argued their extinction would be bad, even though it may well have contributed to the emergence of mammals and ultimately humans? Would the vast majority of non-human primates have argued that humans taking over would be bad?

But it is extremely unlikely to have exactly the same value. Therefore, in all likelihood, whether AI takes over or not does have long-term and enormous implications.

Why? It could be that future value is not exactly the same if AI takes over or not by a given date, but that the long-term difference in value is negligible.

Thanks for the great post, Matthew. I broadly agree.

If we struggle to forecast impacts over mere decades in a data-rich field, then claiming to know what effects a policy will have over billions of years is simply not credible.

I very much agree. I also think what ultimately matters for the uncertainty at a given time in the future is not the time from now until then, but the amount of change from now until then. As a first approximation, I would say the horizon of predictability is inversely proportional to the annual growth rate of gross world product (GWP). If this becomes 10 times as fast as some predict, I would expect the horizon of predictability (regarding a given topic) to shorten, for instance, from a few decades to a few years.
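The heuristic above can be made concrete with illustrative numbers (a 3 % baseline growth rate and a 30-year baseline horizon are my assumptions, not figures from the comment):

```python
# Sketch of the inverse-proportionality heuristic: the horizon of
# predictability scales as 1/(annual GWP growth rate).

def new_horizon(old_horizon_years, old_growth, new_growth):
    """Horizon of predictability after the growth rate changes,
    assuming the horizon is inversely proportional to growth."""
    return old_horizon_years * old_growth / new_growth

# A 10x speed-up (3 %/year to 30 %/year) shrinks a 30-year horizon
# to about 3 years.
print(new_horizon(30, 0.03, 0.30))
```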

To demonstrate that delaying AI would have predictable and meaningful consequences on an astronomical scale, you would need to show that those consequences will not simply wash out and become irrelevant over the long run.

Right. I would just say "after significant change (regardless of when it happens)" instead of "over the long run" in light of my point above.

Thanks for crossposting this, Joey.

This is a linkpost for My Career Plan: Launching Elevate Philanthropy!

The above does not link to the original post. You are supposed to type out the URL in the field above.

Despite not even having publicly launched, I have back-to-back monthly promising projects lined up, each with significant estimated impact, each with higher impact than my upper bound estimates of my ability to earn via for-profit founding (my next highest career option).

How did you determine this? Did you explicitly quantify the impact of the promising projects in terms of money donated to GiveWell's top charities or similar?

Another example is when AIM [Ambitious Impact] created the metric of SADs [suffering-adjusted days], which is now used not only by AIM but also across the animal welfare space.

Could you elaborate on which organisations use SADs? I am only aware of Animal Charity Evaluators (ACE) using them in their charity evaluations.

I am particularly excited about time-bound projects that take between 30 and 300 hours, especially projects that create a common good. By this, I mean outcomes that benefit multiple philanthropic actors in the ecosystem. One example might be creating an external evaluation system for a single foundation but publishing the methods and strategies so that multiple other foundations can also use them.

What do you think about decreasing the uncertainty in welfare comparisons across species as a common good project? I think much more research on that is needed to conclude which interventions robustly increase welfare. I do not know of any intervention which robustly increases welfare due to potentially dominant uncertain effects on soil animals and microorganisms. Even neglecting these, I believe there is lots of room to change funding decisions as a result of more research on that. I understand AIM, ACE, maybe the Animal Welfare Fund (AWF), and Coefficient Giving (CG) sometimes use, for robustness checks, the (expected) welfare ranges Rethink Priorities (RP) initially presented, or the ones in Bob Fischer's book, as if they are within a factor of 10 of the right estimates (such that these could be 10 % to 10 times as large). However, I can easily see much larger differences. For example, the estimate in Bob's book for the welfare range of shrimps is 8.0 % that of humans, but I would say one reasonable best guess (though not the only one) is 10^-6, the ratio between the number of neurons of shrimps and humans.
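The 10^-6 figure follows from approximate neuron counts; the numbers below are rough order-of-magnitude literature values, used only for illustration:

```python
# Rough neuron-count ratio behind the 10^-6 welfare-range guess.
# Both counts are approximate, illustrative values.

shrimp_neurons = 1e5    # order-of-magnitude estimate for a decapod
human_neurons = 8.6e10  # roughly 86 billion neurons in a human brain

ratio = shrimp_neurons / human_neurons
print(ratio)  # about 10^-6, some 5 orders of magnitude below the 8.0 % estimate
```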

Hi Jan. I am open to bets against short AI timelines, or what they supposedly imply, up to 10 k$. Do you see any bet we could make that is good for both of us under our own views, considering that we could invest our money and that you could take out loans?
