Hi everyone! I'm Tom Chivers, and I'll be doing an AMA here. I plan to start answering questions on Wednesday 17 March at 9am UK: I reckon I can comfortably spend three hours doing it, and if I can't get through all the questions, I'll try to find extra time.
Who I am: a science writer, and the science editor at UnHerd.com. I wrote a book, The Rationalist's Guide to the Galaxy – originally titled The AI Does Not Hate You – in 2019, which is about the rationalist movement (and, therefore, the EA movement), and about AI risk and X-risk.
My next book, How to Read Numbers, written with my cousin David, who's an economist, is about how stats get misrepresented in the news and what you can do to spot it when they are. It's out on March 18.
Before going freelance in January 2018, I worked at the UK Daily Telegraph and BuzzFeed UK. I've won two "statistical excellence in journalism" awards from the Royal Statistical Society, and in 2013 Terry Pratchett told me I was "far too nice to be a journalist".
Ask me anything you like, but I'm probably going to be best at answering questions about journalism.
Why don't more journalists make concrete, verifiable, quantitative forecasts and then retrospectively assess their own accuracy, like you did here (also see more examples)? Is there anything that could be done to encourage you and other journalists to do more of that?
I agree with these comments, and think the first one - "If you haven't spent time on calibration training..." - makes especially useful points.
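For readers wondering what "retrospectively assess your own accuracy" could look like in practice, here's a minimal sketch (not from the thread; the example forecasts are made up) of scoring probabilistic yes/no predictions with a Brier score and a rough calibration table:

```python
# Minimal sketch: scoring probabilistic yes/no forecasts with Brier scores
# and a crude calibration table. Example forecasts are hypothetical.
from collections import defaultdict

# (probability assigned to the event, whether it actually happened)
forecasts = [
    (0.9, True),
    (0.7, True),
    (0.6, False),
    (0.3, False),
    (0.2, True),
    (0.8, True),
]

# Brier score: mean squared error between stated probability and outcome (0 = perfect).
brier = sum((p - int(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Crude calibration check: group forecasts into 20%-wide buckets and compare
# the average stated probability with the observed frequency in each bucket.
buckets = defaultdict(list)
for p, outcome in forecasts:
    buckets[min(int(p * 5), 4)].append((p, outcome))

for b in sorted(buckets):
    items = buckets[b]
    avg_p = sum(p for p, _ in items) / len(items)
    freq = sum(outcome for _, outcome in items) / len(items)
    print(f"{b*20:>3}-{b*20+20}%: said {avg_p:.0%} on average, happened {freq:.0%} ({len(items)} forecasts)")
```

A well-calibrated forecaster's stated probabilities roughly match the observed frequencies in each bucket; calibration training is essentially practising until they do.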
Readers of this thread may also be interested in a previous post of mine on Potential downsides of using explicit probabilities. (Though be warned that the post is less concise and well-structured than I'd aim for nowadays.) I ultimately conclude that post by saying:
...