It seems that your original comment no longer holds under this version of "1% better", no? In what way does being 1% better at all these skills translate to being 30x better over a year? How do we even aggregate these 1% improvements under the new definition?
Anyway, even under this definition it seems hard to keep finding skills that one can easily get 1% better at within a day. At some point you would probably run into diminishing returns across skills -- that is, the "low-hanging fruit" of skills that are easy to improve at will have been picked.
I have not read much of Tetlock's research, so I could be mistaken, but isn't the evidence for Tetlock-style forecasting only for (at best) short- to medium-term forecasts? Over this timescale, I would've expected forecasting to be very useful for non-EA actors, so the central puzzle remains. Indeed, if there is no evidence for long-term forecasting, then wouldn't one expect non-EA actors (who place less importance on the long term) to be at least as likely as EAs to use this style of forecasting?
Of course, it would be hard to gather evidence for forecasting working well over longer horizons (say, 10+ years), so perhaps I'm expecting too much evidence. But it's not clear to me that we have strong theoretical reasons to think that this style of forecasting would work particularly well over such horizons, given how "cloud-like" predicting events over long time horizons is and how further extrapolation might leave more room for bias.
For more on this line of argument, I recommend one of my favorite articles on ethical vegetarianism: Alastair Norcross's "Puppies, Pigs, and People".
I'm not sure how reputable it is, but I picked up a used copy of Becoming Vegan: Comprehensive Edition and have consulted it from time to time for vegan nutrition.
I enjoyed the new intro article, especially the focus on solutions. Some nitpicks:
Thanks for writing this up so concisely -- I think this is a nice list of pros and cons. I agree that the weekly seminar model works better for virtual reading groups. I certainly would not want to spend 6+ hours on Zoom in one stretch for a reading group.
I'm not sure what all of the participants' motivations were for joining (I should've gathered that info). As background, we mostly publicized the intensive to members of MIT EA interested in AI safety and to members of Harvard EA. Here are, I think, the main motivations I noticed:
Agreed, although it's possible to use Messenger with a deactivated Facebook account, which seems to solve this issue.
Why is this post being downvoted? I seriously doubt that EAs working to prevent school shootings would be cost-effective, but I don't get why there are downvotes here -- it's a fair question.