Bio

Participation
4

I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How others can help me

I am open to volunteering and paid work (I usually ask for $20/hour). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments
2939

Topic contributions
40

I would estimate the expected number of layer-years improved in year Y as "expected population of layers in year Y"*("expected fraction of layers in cages in year Y without the intervention" - "expected fraction of layers in cages in year Y with the intervention") = P(Y)*(f_control(Y) - f_intervention(Y)), which is correct by definition.
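As a minimal sketch, the identity above can be computed directly (all figures below are hypothetical, chosen only for illustration):

```python
def layer_years_improved(population, f_control, f_intervention):
    """Expected layer-years improved in a year: the expected layer population
    times the reduction in the expected fraction of layers in cages,
    P(Y)*(f_control(Y) - f_intervention(Y))."""
    return population * (f_control - f_intervention)

# Hypothetical figures: 1 billion layers, caged fraction falling from 60 % to 55 %.
print(layer_years_improved(1e9, 0.60, 0.55))  # roughly 5*10^7 layer-years
```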

Here is a post illustrating this.

Summary

  • Cost-effectiveness analyses (CEAs) of interventions accelerating animal welfare reforms usually estimate the increase in the welfare of the target animals (for example, hens in cages) based on the acceleration in years of the full implementation of the reform. This makes sense if each level of implementation of the reform is accelerated as much as its full implementation.
  • However, there are many cases where the acceleration of the full implementation of the reform is not enough to determine the number of animals helped, or animal-years improved. I discuss some below.

Thanks for the great post, Stefan.

Risk aversion. Risk aversion is normally a reason not to make a hard-to-reverse decision. Since reversibility gives you the option to switch course if your strategy underperforms, it normally reduces the risk of a truly bad outcome. Note, though, that the standard view within the effective altruism movement seems to be that altruists should not be risk-averse.

Could it similarly be argued that reversibility gives one the option to switch course even when one's strategy is performing well, thus increasing the risk of missing a truly great outcome? If so, the takeaway may be that one should make certainly bad options less available, and certainly good options harder to avoid.

Hi Ajeya.

But for the first time, I don’t see any solid trend we can extrapolate to say it won’t happen soon.[11] AI R&D really could be automated this year.

What are your predictions for the unemployment rate of software engineers? What do you think about these reasons for potentially overestimating the pace of automation based on AI benchmarks?

But there’s a big problem here – if AIs are actually able to perform most tasks on 1-hour task horizons, why don’t we see more real-world task automation? For example, most emails take less than an hour to write, but crafting emails remains an important part of the lives of billions of people every day.

Some of this could be due to people underusing AI systems,[2] but in this post I want to focus on reasons that are more fundamental to the capabilities of AI systems. In particular, I think there are three such reasons that are the most important:

  1. Time-horizon estimates are very domain-specific
  2. Task reliability strongly influences task horizons
  3. Tasks are very bundled together and hard to separate out.

Welcome to the EA Forum, Max. Thanks for the clarification, and additional context. I am rooting for your (GWWC's) success.

Thanks for asking, Vince. Here are some suggestions listed alphabetically which are not in your sheet, and have not yet been mentioned in other answers to your post:

Thanks for the post, Michael.

However, any specific function or set of coefficients would (to me) require justification, and it’s unclear that there can be any good justification.

I also worry about the arbitrariness of the weights (coefficients) of the models. In Bob Fischer's book about comparing welfare across species, there seems to be only 1 line about the weights used to aggregate the tentative estimates for the welfare range, the difference between the maximum and minimum hedonistic welfare per unit time: "We assigned 30 percent credence to the neurophysiological model, 10 percent to the equality model, and 60 percent to the simple additive model". People usually give weights of at least 0.1/"number of models", which is at least 3.33 % (= 0.1/3) for 3 models, even when it is quite hard to estimate the weights. However, giving weights which are not much smaller than the uniform weight of 1/"number of models" could easily lead to huge mistakes.

As a silly example, if I asked random 7-year-olds whether the gravitational force between 2 objects is proportional to "distance"^-2 (the correct answer), "distance"^-20, or "distance"^-200, I imagine a significant fraction would pick the exponents of -20 and -200. Assuming 60 % picked -2, 20 % picked -20, and 20 % picked -200, one may naively conclude the mean exponent of -45.2 (= 0.6*(-2) + 0.2*(-20) + 0.2*(-200)) is reasonable. Yet, there is lots of empirical evidence against this of which the respondents are unaware. The right conclusion would be that the respondents have no idea about the right exponent, because they would not be able to adequately justify their picks. I think we are in a similar situation with respect to comparing hedonistic welfare across species.
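The naive averaging in the silly example can be sketched as follows (the survey shares are the hypothetical ones from the example, not real data):

```python
# Hypothetical survey shares for each gravitational-force exponent.
shares = {-2: 0.6, -20: 0.2, -200: 0.2}

# Naive credence-weighted mean exponent, analogous to weighting models
# one has little basis to weight.
mean_exponent = sum(exponent * share for exponent, share in shares.items())
print(mean_exponent)  # about -45.2, far from the correct exponent of -2
```

The point is that the weighted mean looks precise while being driven almost entirely by the weights given to options the respondents could not justify.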

Thanks for the post, Michael.

The more or larger such changes are necessary to get from one brain to another, the less tight the bounds on the comparisons could become, the further they may go both negative and positive overall,[2] and the less reasonable it seems to make such comparisons at all.

I agree comparisons become increasingly uncertain as the difference between the states of the organisms increases. However, I do not think there is a point where comparisons go from possible, but extremely difficult, to not possible at all. I would say there is just a progressive widening of the distribution representing the hedonistic welfare per unit time of a given state of an organism as it moves away from typical human states. As an example, I could say my hedonistic welfare right now is 0.5 to 1.5 times that of a random human who is awake, whereas that of a random nematode might be 10^-17 to 1 times that of a random human who is awake. I estimate the ratio between the individual numbers of neurons of nematodes and humans is 2.79*10^-9, whose square is 7.78*10^-18, roughly 10^-17.

Thanks, Seizal. I agree. On the other hand, I personally only care about increasing welfare in expectation. So I would be happy to support interventions which are very unlikely to significantly increase welfare if they increase it cost-effectively in expectation. If I was in an original position where I had an equal chance of reincarnating as any of the individuals of a population, my expected welfare after the reincarnation would be proportional to the expected total future welfare of the population. So I believe maximising this corresponds to being as impartial as possible.

  • What makes two actions incomparable, under the imprecise EV model, is that the interval of EV differences crosses zero.

Imagine 2 states of the world which are exactly the same, and have an imprecise expected welfare of -1 to 1. The difference between their imprecise expected welfare is -2 (= -1 - 1) to 2 (= 1 - (-1)), which crosses 0. So their expected welfare would be incomparable under your framework? I would say their expected welfare would be comparable, and exactly the same.
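The arithmetic behind the objection is just naive interval subtraction, which can be sketched as follows (a toy illustration, not an implementation of the post's framework):

```python
def interval_difference(a, b):
    """Naive interval subtraction [a_lo, a_hi] - [b_lo, b_hi]."""
    return (a[0] - b[1], a[1] - b[0])

# Both (identical) states have an imprecise expected welfare of -1 to 1.
ev = (-1.0, 1.0)
lo, hi = interval_difference(ev, ev)
print(lo, hi)       # -2.0 2.0
print(lo < 0 < hi)  # True: the difference crosses 0 despite identical states
```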

I did not have a particular structure in mind for how the people in LMICs would regrant the funds. Thanks for giving examples, Mo.
