

It seems that your original comment no longer holds under this version of "1% better", no?  In what way does being 1% better at all these skills translate to being 30x better over a year?  How do we even aggregate these 1% improvements under the new definition?
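For reference, the "30x better over a year" figure in the original framing comes from compounding a single multiplicative 1% daily improvement; a quick sketch of that arithmetic (which is exactly the aggregation that the new many-skills definition no longer supports):

```python
# Compounding a 1% daily improvement on one multiplicative metric:
# (1.01)^365 ≈ 37.8, the source of the rough "30x in a year" claim.
# Under the new definition (1% at many separate skills), there is no
# obvious way to multiply the gains together like this.
daily_gain = 1.01
days = 365
total = daily_gain ** days
print(round(total, 1))  # ≈ 37.8
```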

Anyway, even under this definition it seems hard to keep finding skills one can easily get 1% better at within a day.  At some point you would probably run into diminishing returns across skills -- that is, the "low-hanging fruit" of skills you can improve at easily will have been picked.

I have not read much of Tetlock's research, so I could be mistaken, but isn't the evidence for Tetlock-style forecasting only for (at best) short- to medium-term forecasts?  Over this timescale, I would've expected forecasting to be very useful for non-EA actors, so the central puzzle remains.  Indeed, if there is no evidence for long-term forecasting, then wouldn't one expect non-EA actors (who place less importance on the long term) to be at least as likely as EAs to use this style of forecasting?

Of course, it would be hard to gather evidence for forecasting working well over longer (say, 10+ year) horizons, so perhaps I'm expecting too much evidence.  But it's not clear to me that we should have strong theoretical reasons to think that this style of forecasting would work particularly well, given how "cloud-like" predicting events over long time horizons is, and how further extrapolation might leave more room for bias.

For more on this line of argument, I recommend one of my favorite articles on ethical vegetarianism: Alastair Norcross's "Puppies, Pigs, and People".

Answer by ag4000, May 24, 2023

I'm not sure how reputable it is, but I picked up a used copy of Becoming Vegan: Comprehensive Edition and have consulted it from time to time for vegan nutrition.

I enjoyed the new intro article, especially the focus on solutions.  Some nitpicks:

  • I'm not sure that it's good to use 1DaySooner as the second example of positive EA interventions.  I agree that challenge trials are good, but in my experience (admittedly a convenience sample), a lot of people I talk to are very wary of challenge trials.  I worry that including it in an intro article could create needless controversy/turn people away.
  • I also think that some of the solutions in the biodefense section are too vague.  For example, what exactly did the Johns Hopkins Center for Health Security do to qualify as important?  It's great that the Apollo Programme for Biodefense has billions in funding, but what are they doing with that money? 
  • I don't think it makes sense to include longtermism without explanation in the AI section.  Right now it's unexplained jargon.  If I were to edit this, I'd replace that sentence with a quick reason why this huge effect on future generations matters or delete the sentence entirely.

Thanks for writing this up so concisely -- I think this is a nice list of pros and cons.  I agree that the weekly/seminar model works better for virtual reading groups.  I certainly would not want to spend 6+ continuous hours on Zoom for a reading group.

I'm not sure what all of the participants' motivation was for joining (I should've gathered that info).  As background, we mostly publicized the intensive to members of MIT EA interested in AI safety and to members of Harvard EA.  Here are, I think, the main motivations I noticed:

  • Considering pursuing AI safety technical research as a career, and thus wanting to develop a foundation/overview (~2 participants);
  • Wanting to learn about an important EA cause area to get a more well-rounded view of EA, or to help with work in an adjacent cause area like AI governance (~2 participants);
  • Shoring up/filling in gaps in knowledge about AI safety, already planning to work in AI safety (~2 participants).

Agreed, although it's possible to use Messenger with a deactivated Facebook account, which seems to solve this issue.

