I am asking this question to understand how an ideal EA would apply EA ideas, tools, and frameworks in practice - for instance, how would an ideal EA use the Scale, Neglectedness & Tractability framework here? What about longtermism? Given these frameworks, would this EA prioritize this cause over other causes?
I think thought experiments like this could help us properly explain the intricacies of the EA thought process to newcomers. If there are other thought experiments that serve the same purpose, please post them in your answer.
Here are some assumptions you are allowed to make:
- Assume that although you are living before 07-08, EA concepts and ideas are where they are right now. For instance, by 07-08, EAs already think AI and GCRs are high priority, GiveWell has already made all of the mistakes they list here, 80,000 Hours has already realized that leading with earning to give was a bad idea, and so on.
- Assume the EA in this scenario is well aware of the major ideas of EA: say they took the Stanford Introductory Fellowship, or they might even be running a local EA group.
- Assume the EA has personal fit for helping with this problem: say they are in the final year of their Econ PhD. But remember that they can direct their career towards other causes too.
- Also assume that this is not about hunting for and finding Cause X: one fine night before 07-08, God descended into the EA's bedroom and whispered in their ear, "You need to take a look at the US housing market." But God left without saying that there would be a crisis in 07-08 because of it, or what exactly to look at in that market - God is too active on FB and just got distracted. You can also think of the EA as one of the protagonists at the beginning of the movie 'The Big Short' (the movie starts in '05).
If you are making additional assumptions, please mention them in your answer.
Well, if your EA were particularly well placed to tackle this problem, then the answer is likely yes: they would probably realize it's large in scale and (at least partially) neglected. Plus, if God is reliable, then the Holy Advice would likely dominate other considerations - AGI and x-risks are uncertain futures, while efforts to reduce present suffering would be greatly affected by the financial crisis. In addition, maybe this is not quite the answer you're looking for, but I believe personal features (like fit and comparative advantages) would likely trump other considerations when it comes to choosing a cause area to work on (though not when choosing where to donate).
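To make the weighting question a bit more concrete, here is a minimal, purely illustrative sketch of how personal fit can dominate a factor-based comparison. This is not 80,000 Hours' actual model; the cause names, scores, and the `fit_weight` exponent are all made-up assumptions for illustration only.

```python
# Toy sketch (not 80,000 Hours' actual model): score causes on the
# standard factors plus personal fit, and see how easily fit dominates.
# All numbers are invented for illustration.

causes = {
    # (scale, neglectedness, tractability, personal_fit), each on a 0-10 scale
    "US housing market crisis": (7, 5, 6, 9),   # hypothetical Econ-PhD fit
    "AI safety":                (9, 8, 4, 3),
    "Global health":            (8, 3, 8, 5),
}

def score(scale, neglectedness, tractability, fit, fit_weight=2.0):
    """Multiplicative combination; fit_weight > 1 lets personal fit dominate."""
    return scale * neglectedness * tractability * (fit ** fit_weight)

# Rank causes by the toy score, highest first.
for name, factors in sorted(causes.items(), key=lambda kv: -score(*kv[1])):
    print(f"{name:28s} {score(*factors):>10.0f}")
```

Under these invented numbers, the cause with the strongest personal fit comes out on top even though it scores lower on scale and neglectedness; how much fit matters hinges entirely on the weight one assigns it.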
"... I believe personal features (like fit and comparative advantages) would likely trump other considerations..." That is a very interesting point. Sometimes I do have a very similar feeling - the other 3 criteria are there mostly just so one doesn't base one's decision fully on personal fit but consider other things too. At the end of the day, I guess the personal fit consideration ends up weighing a lot more for a lot of people. Would love to hear from someone in 80k hours if this is wrong...
Editing to add: I wonder if there is a survey out there that asked people how much they weigh each of the four factors. That might validate this speculation...