Thanks for this interesting analysis! Do you have a link to Foster's analysis of MindEase's impact?
How do you think the research on MindEase's impact compares to that of GiveWell's top charities? Based on your description of Hildebrandt's analysis, for example, it seems less strong than, say, the several randomized controlled trials supporting bed net distribution. Do you think discounting based on this could substantially affect the cost-effectiveness? (Given that Foster's estimate of impact is much lower, though, and is weighted more heavily in the overall cost-effectiveness estimate, I would be interested to know whether it has a stronger evidence base.)
Thanks for this post Jack, I found it really useful as I haven't yet got round to reading the updated paper. The breakdown in the cluelessness section was a new arrangement to me. Does anyone know if this breakdown has been used elsewhere? If not, it seems like useful progress in better defining the cluelessness objections to longtermism.
Thanks very much for your post! I think this is a really interesting idea, and it's really useful to learn from your experience in this area.
What would you think of the concern that these types of ads would be a "low fidelity" way of spreading EA that could risk misinforming people about it? From my experience of community building, I think it's really useful to be able to describe and discuss EA ideas in detail, and there are risks to giving someone an incorrect view of EA. These risks include someone becoming critical of what they believe EA to be and spreading that critique, as well as being discouraged from getting involved when they might otherwise have done so later. The risk is probably lower if someone clicks on a short ad that takes them to, say, effectivealtruism.com, where the various ideas are carefully explained and introduced. But someone who only saw the ads and didn't click could end up with an incorrect view of EA.
I would be interested to see discussion about what would and wouldn't make a good online ad for EA, e.g. how to intrigue people without being inaccurate or over-sensationalizing parts of EA.
There might also be an interesting balance in how much interest we want someone to have shown in EA-related topics before advertising to them. E.g. every university student in the US is probably too wide a net, but everyone who's searching "effective altruism" or "existential risk" is probably already on their way to EA resources without the need for an ad.
I know lots of university EA groups make use of Facebook advertising, and some have found this useful for promoting events. I don't know whether Google/YouTube ads allow targeting at the level of students of a specific university.
I think I would have some worry that if external evaluations of individual grant recipients became common, this could discourage people from applying for grants in future, for fear of being negatively judged should the project not work out.
Potential grant recipients might worry that external evaluators may not have all the information about their project or the grant maker's reasoning for awarding the grant. This lack of information could then lead to unfair or incorrect evaluations. This would be more of a risk if it became common for people to write low-quality evaluations that are weakly reasoned, uncharitable, or don't respect privacy. I'm unsure whether it would be easy to encourage high-quality evaluations (such as your own) without also increasing the risk of low-quality ones.
The risk of discouraging grant applications would probably be greater for more speculative funds such as the Long Term Future Fund (LTFF), as it's easier for projects to not work out and look like wasted funds to uninformed outsiders.
There could also be an opposite risk: that by seeking to discourage low-quality evaluations, we discourage people too much from evaluating and criticizing work at all. It might be useful to establish key principles that enable people to write respectful and useful evaluations, even with limited knowledge or time.
I'm unsure where the right trade-off between usefully evaluating projects and not discouraging grant applications lies. Thank you for your review of the LTFF recipients and for posting this question, I found both really interesting.
Thanks for your comment Jack, that's a really great point. I suppose that we would seek to influence AI slightly differently for each reason:
e.g. you could reduce the chance of AI risk by stopping all AI development, but then lose the other two benefits, or you could create a practically useful AI but not one that would guide humanity towards an optimal future. That being said, I reckon in practice a lot of work to improve the development of AI would hit all three. Though maybe if you view one reason as much more important than the others, you would focus on a specific type of AI work.
Thank you very much for this post, I found it very interesting. I remember reading the original paper and feeling a bit confused by it. It's not too fresh in my mind, so I don't feel well placed to defend it. I appreciate you highlighting how the method they use to estimate f_l is unique and drives their main result.
A range of 0.01 to 1 for f_l in your preferred model seems surprisingly high to me, though I don't understand the Lineweaver and Davis paper well enough to really comment on its result, which I think your range is based on. I think they mention how their approach leaves uncertainty in n_e as to what counts as a terrestrial planet. I wonder if most estimates of any one parameter have a tendency to shift uncertainty onto other parameters, so that when you combine individual estimates of each parameter you end up with an unrealistically certain result.
Thanks for your comment athowes. I appreciate your point that I could have done more in the post to justify this "binary" of good and optimal.
Though the simulated minds scenario I described seems at first to be pretty much optimal, its value could be much larger if you thought it would last for many more years. Given large enough uncertainty about future technology, maybe seeking to identify the optimal future is impossible.
I think your resources, value and efficiency model is really interesting. My intuition is that values are the limiting factor. I can believe there are pretty strong forces that mean humanity will eventually end up optimising resources and efficiency, but I'm less confident that values will converge to the best ones over time. This probably depends on whether you think a singleton will form at some point; if so, it feels like the limit is how good the values of the singleton are.
Thanks for this post! I think I have a different intuition: that there are important practical ways in which longtermism and x-risk views can come apart. I'm not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments too, and the cases I present are probably not as thoroughly argued as they could be.)
It seems that a possible objection to all these points is that AI risk is really high and we should just focus on AI alignment (as it poses more than just an extinction risk, unlike e.g. bio).