Bio

I am a generalist quantitative researcher.

How others can help me

I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments

I see. I agree an infinitesimal change to one of 2 exactly identical states could make their expected welfare incomparable under your framework. However, it does not follow that any 2 interventions are incomparable with respect to how much they change expected welfare (across all space and time). I think intervals representing the expected change in welfare are sufficiently narrow for any decision-relevant comparisons to be feasible, although very often with lots of (standard) uncertainty involved.

  • What makes two actions incomparable, under the imprecise EV model, is that the interval of EV differences crosses zero.

What exactly do you mean by "interval of EV differences"? Imagine A = [a1, a2], and B = [b1, b2] are intervals representing the imprecise expected welfare of 2 states of the world, and that b2 >= a2. What would be the "interval of EV differences" between B and A in terms of a1, a2, b1, and b2? I thought it would be B - A = [b1 - a2, b2 - a1].
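The interval arithmetic in the comment above can be sketched in code. This is a minimal illustration with hypothetical helper names, using the subtraction rule B - A = [b1 - a2, b2 - a1] and the "crosses zero" criterion from the quoted comment:

```python
# Sketch of interval subtraction for imprecise expected values.
def interval_sub(b, a):
    """Return B - A = [b1 - a2, b2 - a1] for intervals b = (b1, b2), a = (a1, a2)."""
    b1, b2 = b
    a1, a2 = a
    return (b1 - a2, b2 - a1)

def crosses_zero(interval):
    """True if the interval of EV differences contains 0 (incomparable under the model)."""
    lo, hi = interval
    return lo <= 0 <= hi

# Example with assumed values: A = [1, 3], B = [2, 5].
diff = interval_sub((2, 5), (1, 3))
print(diff)                # (-1, 4)
print(crosses_zero(diff))  # True, so A and B would be incomparable
```

Note that even when b2 >= a2 and b1 >= a1, the difference interval can still straddle zero, which is what makes the comparison fail under this model.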

Hello. I would take for granted that all animals are sentient, and focus on assessing the distribution of the intensity of subjective experiences. I think asking about the probability of sentience of an animal shares some of the issues of asking about the probability that an object is hot. People have different concepts of what "hot" means, and these do not depend just on temperature (for example, the minimum temperature for hot wood is higher than that for hot metal because metal transfers heat more efficiently). I understand sentience as having subjective experiences whose intensity is not exactly 0. However, I suspect you are right that some people understand it as having subjective experiences which are sufficiently intense. Different bars for this will lead to different probabilities. Asking about the distribution of the intensity of subjective experiences mitigates this. For example, one could ask about the probability that the mean intensity of the pain shrimps experience during air asphyxiation exceeds the intensity of disabling pain in humans.

The intervals are supposed to represent imprecise expected value in the way you define it, which allows for the 1st case you described above leading to "A and B are incomparable"? In my mind, if 2 states of the world are exactly the same, they should be comparable, and exactly as valuable no matter what.

Hi Toby. Thanks for the relevant post.

  • For example, Kokotajlo’s distribution implies a 28% chance transformative AI will happen during the current presidential term, a 35% chance it will happen in the next term, a 13% chance it will be the one after that, with 24% left over spread among ever more distant terms.

There is even more uncertainty in AI Futures' artificial superintelligence (ASI) timelines. The difference between the 90th and 10th percentile is 168 years for Daniel Kokotajlo (2027 to 2195), and 137 years for Eli Lifland (2028 to 2165).

[Images: Daniel Kokotajlo’s and Eli Lifland’s ASI timeline distributions]
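The spreads quoted above are just the differences between the stated 90th and 10th percentile years:

```python
# Difference between the 90th and 10th percentile years of the ASI timelines
# quoted above (Daniel Kokotajlo: 2027 to 2195; Eli Lifland: 2028 to 2165).
kokotajlo_spread = 2195 - 2027
lifland_spread = 2165 - 2028
print(kokotajlo_spread, lifland_spread)  # 168 137
```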

CG's current scaling could still be maximising expected impact if they are correctly assessing the reputational risks of funding EGIs. However, I agree this applies less to small individual donors, and I also suspect the marginal multiplier accounting for all effects of these donors funding EGIs supported by CG is higher than 1.

I wonder whether it would be good for CG to clarify why they do not fund EGIs more. I feel like this would make sense even if the cause were the reputational risks you mentioned, which I believe are broadly seen as understandable.

  • I am confident that CG running more RFPs, committing multi-year scale-up funding, branching out into diverse initiatives, and other such things with its increased EGI budget allocation is a very clear sign that it believes there is both high impact and absorbency here.

It does not follow from this that funding the EGIs supported by CG is more cost-effective than funding GiveWell. For this to be the case, assuming CG is trying to maximise their impact, one would have to think they should be scaling up their funding of EGIs faster, regardless of how fast they are currently scaling it. If CG's marginal funding of EGIs had a multiplier above 1 accounting for all effects, they would be leaving impact on the table by not scaling up faster.

  • Even now I think we could scale up faster (and hugely welcome this scale-up post). But I understand that CoGi has a certain amount it wishes to allocate, and its strategy to maximise impact is allocating that in such a way as to create clear high-giving-multiplier funding gaps for other GiveWell-aligned donors to be able to step in and fill.

Are you confident that CG should be increasing the funding of EGIs faster (for example, by using looser funding caps)? If not, can you be confident that funding the EGIs supported by CG is significantly more cost-effective than funding GiveWell? 

Do "£5x" and "£5y" refer to the impact accounting for all effects? If so, you are saying that the marginal multiplier accounting for all effects could be greater than the multiplier concerning the total spending accounting for all effects. I think this can only be the case if the organisation fails to allocate funds to the most cost-effective activities (accounting for all effects) 1st.
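The point about ordering can be illustrated with a toy sketch (all numbers assumed for illustration): if an organisation funds its highest-multiplier activities first, the multiplier of the marginal £ cannot exceed the multiplier of its total spending.

```python
# Hypothetical activity multipliers (impact per £ spent), sorted from best
# to worst, i.e. the organisation funds the most cost-effective activities 1st.
multipliers = [8.0, 5.0, 3.0, 2.0]  # assumed illustrative values
spend_per_activity = 1.0  # £1 on each activity, in order

total_impact = sum(m * spend_per_activity for m in multipliers)
average_multiplier = total_impact / (spend_per_activity * len(multipliers))
marginal_multiplier = multipliers[-1]  # the next £ goes to the worst remaining activity

print(average_multiplier)   # 4.5
print(marginal_multiplier)  # 2.0
```

Under this ordering the marginal multiplier is always at most the average one; the reverse can only happen if funds are misallocated, which is the point made above.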

I still guess the marginal multiplier of the effective giving initiatives (EGIs) funded by Coefficient Giving (CG) is higher than 1, but I would be a bit surprised if it was 5. In this case, CG would be leaving lots of impact on the table by not funding EGIs more. CG is scaling up their funding of EGIs, and should ideally be doing this in the way that maximises impact. For CG's marginal funding of EGIs to have a multiplier of 5, one would have to think they should be scaling up faster. Maybe they should. The altruistic market is not perfectly efficient. However, it is worth having in mind that the multiplier of CG's marginal funding of EGIs may be closer to 1 after accounting for the risks of scaling up too fast. For example, a slower scale-up could allow for learning more about which organisations are the most promising. I expect CG to be taking this into account, but mostly informally, not formally in the calculations of the multipliers of their grantees.
