I am not sure there is actually a disagreement between you and Guy. If I understand correctly, Guy says that insofar as the funder wants research to be conducted to deepen our understanding of a specific topic, the funder should not judge researchers based on their conclusions about the topic, but based on the quality and rigor of their work in the field and their contributions to the relevant research community. This does not seem to conflict with what you said, as the focus is still on work on that specific topic.
I strongly agree with this post and its message.
I also want to respond to Jason Crawford's response. We don't necessarily need to move to a situation where everyone tries to optimize things as you suggest, but at this point it seems that almost no one tries to optimize for the right thing. I think that shifting even a few percent of entrepreneurial work or philanthropy in this direction could have a tremendous effect, without losing much of the creative spark people worry we might lose; we might even gain more, as new directions open up.
That's great, thanks! I was aware of Anthropic, but not of the figures behind it.
Unfortunately, my impression is that most funding for such projects is centered around AI safety or longtermism (as I hinted in the post...). I might be wrong about this, though, and I will poke around these links and names.
Relatedly, I would love to see OPP/EA Funds fund such projects (at least a seed round or equivalent) unrelated to AI safety and longtermism, or hear their arguments against doing so.
Thanks for clarifying, Ozzie! (Just to be clear, this post is not an attack on you or on your position, both of which I highly appreciate :). Instead, I was trying to raise a related point, which seems extremely important to me and which I have been thinking about recently, and to make sure the discussion doesn't converge to a single point.)
With regards to the funding situation, I agree that many tech projects could be funded via traditional VCs, but some might not be, especially those that are not expected to be financially rewarding or that are very risky (a few examples that come to mind are the research units of the HMOs in Israel, tech benefiting people in the developing world [e.g. Sella's teams at Google], and basic research enabling applications later on [e.g. research on mental health]). An EA VC that funds projects based mostly on expected impact might be a good idea to consider!
I wrote a response post, Even More Ambitious Altruistic Tech Efforts, and I would love to spin off relevant discussion there. The tl;dr is that I think we should have even more ambitious goals and try to initiate projects that potentially have a very large direct impact (rather than focus on tools and infrastructure for other efforts).
Also, thanks for writing this post, Ozzie. Despite some disagreements, I mostly agree with your opinions and think that more attention should be steered towards such efforts.
I just want to add, on top of Haydn's comment to your comment, that:
You don't need the treatment and the control group to be of the same size, so you could, for instance, randomize among the top 300 candidates.
In my experience, when there isn't a clear metric for ordering, it is extremely hard to make clear judgements. Therefore, I think that in practice it is very likely that, say, places 100-200 in the ranking will look very similar.
I think that these two factors, combined with Haydn's suggestion to take the top candidates and exclude them from the study, make such a study very reasonable and very low-cost.
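For illustration, the design described above (admit the top candidates outright, then randomize admission among the next tier, with treatment and control groups of possibly different sizes) can be sketched as follows. The function name and the specific cutoffs are hypothetical placeholders, not taken from any actual study:

```python
import random


def assign_rct_groups(ranked_candidates, n_exclude=100, n_pool=300, n_treatment=100):
    """Randomize admission among mid-ranked candidates.

    The top n_exclude candidates are admitted outright (excluded from the
    study); the next n_pool candidates are randomized, with n_treatment of
    them admitted (treatment) and the rest rejected (control). Note that the
    treatment and control groups need not be the same size.
    """
    admitted_outright = ranked_candidates[:n_exclude]
    pool = ranked_candidates[n_exclude:n_exclude + n_pool]
    treatment = random.sample(pool, n_treatment)
    in_treatment = set(treatment)
    control = [c for c in pool if c not in in_treatment]
    return admitted_outright, treatment, control
```

With these defaults, the top 100 are admitted outside the study, and places 101-400 are split 100/200 between treatment and control.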
Last August, Stijn wrote a post titled The extreme cost-effectiveness of cell-based meat R&D on this subject. Let me quote the bottom line (emphasis mine):
This means one euro extra funding spares 100 vertebrate land animals. Including captured and aquaculture fish (also fish used for fish meal for farm animals), the number becomes an order 10 higher: 1000 vertebrate animals saved per euro. ... Used as carbon offsetting, cell-based meat R&D has a price around 0.1 euro per ton CO2e averted.
In addition, as I wrote in a comment, I also made a back-of-the-envelope Guesstimate model to estimate the cost-effectiveness of donations to GFI, and arrived at $1.4 per ton CO2e (90% CI: $0.05-$5.42).
It is important to mention that our methods are not nearly as thorough as the work done by Giving Green or Founders Pledge on climate change, so I wouldn't take these numbers too seriously. Nevertheless, I think they at least hint at the order of magnitude of the true figures.
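To show the shape of such a back-of-the-envelope calculation (and only the shape; this is not the actual model), here is a minimal Monte Carlo sketch in the style of a Guesstimate model. Every parameter range below is a hypothetical placeholder, so the output should not be read as an estimate of GFI's cost-effectiveness:

```python
import random
import statistics


def cost_per_ton_co2e(n_samples=10_000, seed=0):
    """Monte Carlo sketch: dollars donated per ton of CO2e averted.

    All parameter ranges are hypothetical placeholders chosen for
    illustration only -- they are NOT the inputs of the real model.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        # Hypothetical: kg of conventional meat displaced per extra dollar
        meat_displaced_kg = rng.lognormvariate(0.0, 0.8)
        # Hypothetical: kg of CO2e averted per kg of meat displaced
        co2e_per_kg = rng.uniform(10.0, 40.0)
        tons_averted = meat_displaced_kg * co2e_per_kg / 1000.0
        results.append(1.0 / tons_averted)
    return statistics.median(results)
```

Sampling skewed quantities from a lognormal and summarizing with a median plus a 90% interval is what produces asymmetric ranges like $0.05-$5.42 around a much smaller point estimate.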
Edit: I just realized that Brian's comment refers to a newer post by Stijn, which I assume reflects his broader opinions. However, I think that the discussion in the comments on Stijn's older post that I linked to is also interesting to read.
Thanks for linking this, this looks really interesting!
If anyone is aware of other similar lists, or of more information about those fields and their importance (whether positive or negative), I would be interested in that.
Thanks for detailing your thoughts on these issues!
I'm glad to hear that you are aware of the different problems and tensions and made informed decisions about them, and I look forward to seeing the changes you mentioned being implemented.
I want to add one comment about the How to plan your career article. I think it's really great, but it might be a little too long for many readers' first exposure. I just realized that you have a summary on the Career planning page, which is good, but I think it might be too short. I found the (older) How to make tough career decisions article very helpful: I think it offers a great balance of information and length, and I personally still refer people to it for their first exposure. It would be very useful to have a version of that page (i.e. of similar length) reflecting the process described in the new article.
With regards to longtermism (and expected values), I think I do indeed disagree with the views held by most of the 80,000 Hours team, and that's OK. I do wish you offered a more balanced take on these matters, and perhaps even separated the parts that are pretty much a consensus in EA from the more specific views you take, so that people can make their own informed decisions. But I know that this might be too much to ask, and the lines are very blurred in any case.
Thanks for publishing negative results.
I think that it is important to do so in general, and especially given that many other groups may have relied on your previous recommendations.
If possible, I think you should edit the previous post to reflect your new findings and link to this post.