All of AABoyles's Comments + Replies

I recently experienced a jarring update in my beliefs about Transformative AI. Basically, I thought we had more time (decades) before TAI causes an existential catastrophe than I now believe we do (years). This has had an interesting effect on my sensibilities about cause prioritization. While I applaud wealthy donors directing funds to AI-related Existential Risk mitigation, I don't assign high probability to the success of any of their funded projects. Moreover, it appears to me that there is essentially no room for additional funds in kinds of denomin... (read more)

3
Charles He
2y
Consider s-risk: From your comment, I understand that you believe the funding situation for TAI is strong and not a limiting factor, and also that the likely outcomes of current interventions are not promising. (Not necessarily personally agreeing with the above) given your view, I think one area that could still interest you is "s-risk". This is also relevant to your interest in alleviating massive suffering. I think talking with CLR, or people such as Chi there, might be valuable (they might be happy to speak if you are a personal donor).

Leadership development seems good in longtermism or TAI: (Admittedly it's an overloaded, imprecise statement, but) the common wisdom that AI and longtermism are talent constrained seems true. The ability to develop new leaders or new work is valuable and can give returns, even accounting for your beliefs being correct.

Prosaic animal welfare: Finally, you and other onlookers should be aware that animal welfare, especially the relatively tractable, "prosaic" suffering of farm animals, is one of the areas that has not received a large increase in EA funding. Some information below should be interesting to cause-neutral EAs. Note that, based on private information:

1. The current accomplishments in farm animal welfare are real and the current work is good. But there is a very large opportunity to help (many times more animals are suffering than have been directly helped so far).

2. The amount of extreme suffering being experienced by farm animals is probably much worse than is commonly believed (this is directly addressed by EA animal welfare work and also motivates welfarist work). This level of suffering is being occluded because publicizing it does not help; for example, it would degrade the mental health of proponents to an unacceptable level. However, it would be illogical to disregard these suffering levels when considering neartermist cause prioritization.

This animal welfare work would benefit from money and expertise.
4
Zach Stein-Perlman
2y
If there's at least a 1% chance that we don't experience catastrophe soon, and we can have reasonable expected influence over no-catastrophe-soon futures, and there's a reasonable chance that such futures have astronomical importance, then patient philanthropy is quite good in expectation. Given my empirical beliefs, it's much better than GiveDirectly. And that's just a lower bound; e.g., investing in movement-building might well be even better.
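To show the shape of that expected-value argument, here is a minimal back-of-the-envelope sketch with hypothetical numbers (only the 1% figure comes from the comment above; every other quantity is an assumed placeholder, not a claim):

```python
# Back-of-the-envelope expected value of patient philanthropy.
# All numbers besides the 1% lower bound are hypothetical placeholders.
p_no_catastrophe_soon = 0.01   # lower bound from the comment: >= 1% chance of no catastrophe soon
p_astronomical_future = 0.10   # assumed chance such a future has astronomical importance
astronomical_value = 1e12      # assumed value at stake, in arbitrary donation-equivalent units
donor_influence = 1e-6         # assumed fraction of that value a patient donor can sway

ev_patient = p_no_catastrophe_soon * p_astronomical_future * astronomical_value * donor_influence
print(f"expected value: {ev_patient:,.0f} units")  # 1,000 units from a 1-unit donation
```

The only point of the sketch is that even a small probability of a non-catastrophic future, multiplied against an astronomically large stake, can dominate the comparison; the specific numbers carry no weight.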

The mortality rate is the proportion of infections that *ultimately* result in death. If we had really good data (we don't), we could get a better estimate by pitting fatalities against *recoveries*. Since we aren't tracking recoveries well, if we attempt to compute mortality rates right now (as infections are increasing exponentially), we're going to badly underestimate the actual mortality rate.
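To illustrate the arithmetic with invented numbers (a sketch, not real data for any outbreak): the naive deaths-over-confirmed-cases estimate is dragged down by the many recent infections that have not yet resolved, while the deaths-over-resolved-cases estimate counts only infections whose outcome is already known.

```python
# Illustrative only: invented numbers, not real epidemiological data.
confirmed = 10_000   # cumulative confirmed infections (still growing exponentially)
deaths = 200         # cumulative deaths so far
recovered = 800      # cumulative tracked recoveries so far

# Naive estimate: deaths / confirmed. Most confirmed cases haven't resolved yet,
# so the denominator includes infections that could still end in death.
naive_rate = deaths / confirmed

# Resolved-case estimate: deaths / (deaths + recoveries). Uses only infections
# whose outcome is already known.
resolved_rate = deaths / (deaths + recovered)

print(f"naive estimate:    {naive_rate:.1%}")     # 2.0%
print(f"resolved estimate: {resolved_rate:.1%}")  # 20.0%
```

Once the outbreak stops growing and all cases resolve, the two estimates converge; the gap in the sketch comes entirely from unresolved cases sitting in the naive denominator. The catch, as noted above, is that the resolved-case estimate is only as good as the recovery tracking.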

Totally agree about data collection. Seems like a good candidate for an approval vote. After a five-minute search I couldn't find a good approval-voting platform, but then I realized that basically all polls on DEAM already work this way (i.e., Facebook supports this). Maybe this is something we could post in the EA Facebook group? @Peter_Hurford?

Related: What is your estimate of the field's room-for-funding for the next few years?

GiveWell's Holden Karnofsky assessed the Singularity Institute in 2012 and provided a thoughtful, extensive critique of its mission and approach, which remains tied for the top post on LessWrong. It seems the EA meta-charity evaluators are still hesitant to name AI Safety (and, more broadly, Existential Risk Reduction) as a potentially effective target for donations. What are you doing to change that?

Mr. Musk has personally donated $10 million via the Future of Life Institute towards a variety of AI safety projects. Additionally, MIRI is currently engaged in its annual fundraising drive with ambitious stretch goals, which include the hiring of several (and potentially many) additional researchers.

With this in mind, is the bottleneck to progress in AI Safety research the availability of funding or of researchers? Stated differently, if a technically competent person assesses AI Safety to be the most effective cause, which approach is more effective: Earnin... (read more)

2
AABoyles
9y
Related: What is your estimate of the field's room-for-funding for the next few years?
4
John_Maxwell
9y
One answer: Apply for a job at a group like MIRI and tell them how much you plan to donate with your job if they don't hire you. This gives them a broader researcher pool to draw from and lets them adapt to talent/funding bottlenecks dynamically.

The Boston Review held a Forum on Effective Altruism with some excellent criticism by academic non-EAs.

Also, props for compiling it in LaTeX. The typesetting is beautiful. :)

0
RyanCarey
9y
Thanks! Credit to Alex Vermeer from MIRI for giving me some basic scripts to help me get started :)

Honestly, no. It covers the high points of the movement with excellent pacing. The essays are concise, readable, and interesting. There's no superfluous content. It's great all around.

0
RyanCarey
9y
Thanks very much!

Excellent work! I've just finished it and posted it on GoodReads.

0
RyanCarey
9y
Thanks, that's helpful! Have you any suggestions for improving the book?