Hi, EAs! I'm Ed Mathieu, manager of a team of data scientists and researchers at Our World in Data (OWID), an online publication founded by Max Roser and based out of the University of Oxford.
We aim to make the data and research on the world's largest problems accessible and understandable. You can learn more about our mission on our site.
You’re welcome to ask me anything! I’ll start answering questions on Friday, 23 June.
- Feel free to ask anything you may want to know about our mission, work, articles, charts, or more meta-aspects like our team structure, the history of OWID, etc.
- Please post your questions as comments on this post. The earlier you share your questions, the higher the chances they'll reach the top!
- Please upvote questions you'd most like answered.
- I'll answer questions on Friday, 23 June. Questions posted after that are less likely to get answers.
- (This is an “AMA” — you can explore others here.)
I joined OWID in 2020 and spent the first couple of years leading our work on the COVID-19 pandemic. Since then, my role has expanded to coordinating all the research & data work on our site.
I previously worked as a data scientist at the University of Oxford in the departments of Population Health and Primary Care Health Sciences; and as a data science consultant in the private sector.
For a (3.5-hour!) overview of my background and the work of our team at OWID, you can listen to my interview with Fin Moorhouse and Luca Righetti on Hear This Idea. I also gave a talk at EA Global: London 22.
Thanks for the question, Angelina!
The article on longtermism and our content on AI were published in 2022. Both have been very successful (six-figure page views in each case). I was particularly happy that we had no negative reaction to either topic, given that both could have seemed outside our usual coverage to traditional OWID readers.
On longtermism, the reception was very positive. Max Roser's hourglass chart had a Wait But Why vibe that made it particularly popular on social media. My (unsubstantiated) impression is that many people remembered that part of the article more than the broader presentation of longtermism. But if we want existential risks to be taken more seriously, getting more people to adopt a broader perspective on humanity's past and future is probably an essential first step, so I'd say the article was very beneficial overall. Another nice aspect is that it was well-received in longtermist circles; no one seemed to think we had neglected or distorted any angle of the topic.
On AI, the impact has been more immediate. We published a new topic page, 5 articles, and 29 charts late last year. We were delighted that we could give a platform to the excellent data published by Epoch, and that their data reached a much wider audience as a result (both on our site and in re-uses, e.g., in The Economist). Reactions to the 5 articles seemed very positive as well; "Technology over the long run" and "The brief history of artificial intelligence" were the most shared among them.
The most significant limitation is that this was all published just a few weeks before the ChatGPT/GPT-4 craze started. If anything, we're even more convinced now than at the time that AI is one of the world's largest problems, and we're working on an interim update of our content.