Book Review: Deontology by Jeremy Bentham

Thanks for writing this! I really like the way you write: it is fun and light while still highlighting the important parts vividly. I too was surprised to learn that this is the version of utilitarianism Bentham had in mind, and I find the views expressed in your summary (Ergo) lovely too.

The extreme cost-effectiveness of cell-based meat R&D

I too was surprised when I first read your post. I find it reassuring that our estimates are not far from each other, although the models are essentially different. I suppose we both neglect some aspects of the problem, although both models are somewhat conservative.

I agree that it is probably the case that cell-based meat is very cost-effective at greenhouse gas reduction, and I would love to see more sophisticated models than ours.

Research Summary: The Subjective Experience of Time

Thank you for the eloquent response, and for the pointers to the parts of your posts relevant to the matter.

I think I understand your position, and I will dig deeper into your previous posts to get a more complete picture of your view. Thanks once more!

The extreme cost-effectiveness of cell-based meat R&D

Thanks for sharing your computation. This highly resonates with a (very rough) back-of-the-envelope estimate I ran for the cost-effectiveness of the Good Food Institute; the Guesstimate model is here. The result (which shouldn't be taken too literally) is $1.4 per ton CO2e ($0.05-$5.42 for the 90% CI).

I can give more details on how my model works, but very roughly: I try to estimate the amount of CO2e saved by clean meat in general, and then try to estimate how much earlier that will happen because of GFI. Again, this is very rough, and I'd love any input, or comparison to other models.
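The shape of the model described above can be sketched as a small Monte Carlo simulation. Everything below is a hypothetical illustration: the distributions and all parameter values are placeholders, not the figures from the actual Guesstimate model.

```python
import math
import random

random.seed(0)

def sample_usd_per_ton():
    # Tons of CO2e per year that a mature clean-meat industry could avert
    # (placeholder: median 1e8 tons/yr, wide uncertainty).
    co2e_per_year = random.lognormvariate(math.log(1e8), 1.0)
    # Years by which GFI's work accelerates that transition
    # (placeholder: median 1 year).
    years_earlier = random.lognormvariate(math.log(1.0), 0.7)
    # GFI spending in USD over the relevant period (placeholder: median $50M).
    spending = random.lognormvariate(math.log(5e7), 0.5)
    # Benefit attributed to GFI: the extra averted tons from the acceleration.
    tons_averted = co2e_per_year * years_earlier
    return spending / tons_averted

samples = sorted(sample_usd_per_ton() for _ in range(100_000))
p05, p50, p95 = (samples[int(q * len(samples))] for q in (0.05, 0.5, 0.95))
print(f"median ${p50:.2f}/tCO2e, 90% CI ${p05:.2f}-${p95:.2f}")
```

The key design choice is attributing to GFI only the acceleration (averted tons times years earlier), rather than the entire impact of clean meat, which keeps the estimate conservative in the same spirit as the comment above.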

Research Summary: The Subjective Experience of Time

Thank you for writing this summary (and conducting this research project)!

I have a question. I am not sure what the standard terminology is, but there are (at least) two different kinds of mental processes: reflexes/automatic responses, and thoughts or experiences that span longer timescales. I am not certain which is more related to capacity for welfare, but I guess it is the latter. Additionally, I imagine that the experience of time is more relevant to the former. This suggests that maybe the two are not really correlated. Have you thought about this? Is my view of the situation flawed?

Thanks again!

Some promising career ideas beyond 80,000 Hours' priority paths

As someone in the intersection of these subjects I tend to agree with your conclusion, and with your next comment to Arden describing the design-implementation relationship.

However, while thinking about this, I did come up with a (very rough) idea for AI alignment where formal verification could play a significant role.
One scenario for AGI takeoff, or for solving AI alignment, is to do it inductively: each generation of agents designs the next generation, which should be more sophisticated (and hopefully still aligned). Perhaps one plan to achieve this is as follows (I'm not claiming that any step is easy or even plausible):

  1. Formally define what it means for an agent to be aligned, in such a way that subsequent agents designed by this agent are also aligned.
  2. Build your first generation of AI agents (which should be as lean and simple as possible, to make the next step easier).
  3. Let a (perhaps computer assisted) human prove that the first generation of AI is aligned in the formal sense of 1.

Then, once you deploy the first generation of agents, it is their job to formally prove that further agents designed by them are aligned as well. Hopefully, since they are very intelligent, and plausibly good at manipulating the previous formal proofs, they can find such proofs. Since the proof is formal, humans can trust and verify it (for example using traditional formal proof checkers), despite not being able to come up with the proof themselves.

This plan has many pitfalls (for example, each step may turn out to be extremely hard to carry out, or your definition of alignment may be so strict that the agents won't be able to construct any new and interesting aligned agents); however, it is one possible way to gain certainty about having aligned AI.
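The inductive structure of the plan can be sketched with a deliberately toy instantiation. Here an "agent" is just a function from states to actions, "aligned" means it never outputs a forbidden action, and a "proof" is an exhaustive certificate that an independent checker re-verifies. This is purely illustrative; all names and the alignment predicate are invented for the sketch, and each real step (a formal alignment definition, a machine-checked proof) is of course vastly harder.

```python
# Toy setup: finite state space, and "aligned" means never emitting FORBIDDEN.
STATES = range(10)
FORBIDDEN = "harm"

def check_alignment_proof(agent, certificate):
    """Independent proof checker (the role of the human plus a traditional
    proof checker in step 3). The 'proof' is a claimed list of
    (state, action) pairs covering every state; the checker re-verifies
    each claim rather than trusting the prover."""
    if {s for s, _ in certificate} != set(STATES):
        return False  # the certificate must cover all states
    return all(agent(s) == a and a != FORBIDDEN for s, a in certificate)

def gen0(state):
    # Step 2: the first generation, as lean and simple as possible.
    return "noop"

def design_successor(agent):
    """The inductive step: a deployed agent designs a successor and must
    supply a proof of the successor's alignment."""
    successor = lambda s: "assist" if s % 2 else "noop"
    certificate = [(s, successor(s)) for s in STATES]
    return successor, certificate

# Step 3: verify generation 0 before deployment.
cert0 = [(s, gen0(s)) for s in STATES]
assert check_alignment_proof(gen0, cert0)

# Inductive step: generation 0 proposes generation 1 with a proof,
# which we only trust after independent checking.
gen1, cert1 = design_successor(gen0)
assert check_alignment_proof(gen1, cert1)
print("both generations verified aligned")
```

The point the sketch captures is the asymmetry in the plan: finding the proof may require the agent's intelligence, but checking it is mechanical, so humans can trust proofs they could never have constructed themselves.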

Climate change donation recommendations

I agree with your main argument, but I think that the current situation is that we have no estimate at all, and this is bad. We literally have no idea if GFI averts 1 ton CO2e at $0.01 or at $1000. I believe having some very rough estimates could be very useful, and not that hard to do.

Also, I completely agree that splitting donations is a very good idea, and I personally do it (and in particular donated to both CATF and GFI in the past).

Climate change donation recommendations

Thanks for sharing your perspective. However, I disagree with the conclusion that we shouldn't perform these evaluations for that reason (though I think it might make it harder to analyze and give an accurate answer).

For example, if it turns out that GFI is 7 times less effective than CATF, that might mean that GFI is an extremely good donation opportunity for someone who wants to support both animal welfare and climate change mitigation. If it turns out that GFI's impact is 1,000 times smaller than CATF's, then the climate change impact of donating to them is negligible.

Knowing the answer to this question could impact many people's donation strategies, especially if they are uncertain about which causes are most important and prefer a diverse portfolio (like me).

Climate change donation recommendations

Thank you for writing that up!

Do you (or anyone else) have any cost-effectiveness analysis of CO2e emissions averted (even if very rough) for the charities in appendix 2?
I am particularly interested in estimates for the Good Food Institute impact on CO2e emissions.

EDIT: For future reference, there is a related post on the forum: The extreme cost-effectiveness of cell-based meat R&D.
