How can we trust the findings of EA organisations? It is a genuine possibility that I will change the entire course of my life as a result of the information on 80k hours; I guess many of you already have. Have you checked all of their reasoning? What percentage ought one to check? Or do you trust someone else to have done that "due diligence"?
It's not enough to say they are transparent and seem honest - I know plenty of misguided, transparent, honest people. The issue is that EA organisations might be wrong, not in what they don't know (we cannot avoid being wrong in what we don't know) but in what they do - like a mistaken mathematical proof, their logic might be flawed or their sums might be off. This, by our own logic, would likely have disastrous results.
Frankly, someone needs to be checking their work, and I am not skilled enough to do it, nor do I want to. I have yet to see this done for, say, 80k hours.
To that end, I can imagine three solutions:
Firstly, some sort of independent auditing body. It could read the work of EA organisations, check whether the logic holds, flag areas where decisions seem arbitrary, and so on. We would be paying someone whose main job is to be really on top of this stuff and to tell us if they found anything worrying. Arguably this forum kind of does this job, though A) we are all tremendously biased, and B) are people *really* checking the minutiae? I am not.
Secondly, multiple organisations independently asking the same questions. What if there were another 80k hours (called, say, "Nine Years") which didn't interact with them but sought answers to the same problems? "Nine Years" could publish its research, and we could read both summaries and investigate the areas where they differ.
Thirdly, publish papers on our explanations as if they were mathematical (perhaps in philosophy journals). Perhaps this already happens (if this post takes off I might research it more), but you could publish rigid, testable explanations of the theories which undergird EA as an ideology. Well-being, for instance, seems very poorly defined. I'll explain more if people are interested (read Deutsch's The Beginning of Infinity), but suffice it to say that to avoid staying wrong you want your explanations to be definite enough that they can be shown wrong, so you can change them. Is our ideology falsifiable? Sometimes EA seems very vague to me in its explanatory underpinnings. If you can vary your explanation easily, it's hard to ever be wrong, and if you're never wrong, you never get better. I don't know if journals are the way to go, but they seemed the easiest way to suggest becoming more rigid.
Caveats
I do not know enough about EA - I've spent perhaps 20 hours reading about it in my life. Perhaps mechanisms like these already exist, or there are good reasons not to require them.
I recently left religion, and for that reason I would like to know that I am not fooling myself here as well. "Trust EA organisations because they are good" doesn't hold much water, since the same logic applies elsewhere - "Trust the Church because it is good"?
Summary
I think it would be good to have a mechanism for ensuring that we are not fooling ourselves here. EA redirects a huge number of person-hours, and flaws in its reasoning could be catastrophic. I don't know exactly what such a mechanism should look like, but I have given a few suggestions above and am interested in your suggestions or criticisms of the ideas here.
This is kind of off-topic, but I remember a few years ago, regarding the possibility of competition within AI alignment, I asked Nate, and he said that one day he'd like to set up something like competing departments within MIRI. The issue at the time was that when an AI alignment organization responds to the idea that it should have competitors by internalizing the competition rather than simply saying "trust us to do a good job", it still checks out to "trust us to do a good job". Things have changed, what with MIRI being much more reticent about publishing its research, so now it's "trust us to do a good job" no matter what MIRI actually does.
Divergence of efforts in AI alignment could lead to an arms race, and that's bad. At the same time, we can't discourage competition in AI alignment. It seems that, for AI alignment, determining what counts as 'healthy' competition is extremely complicated. I just thought I'd bring this up, since competition in AI alignment is at least somewhat necessary while also posing a risk of a race to the bottom, in a way that, for example, bednet distribution doesn't.