I created this account because I wanted a much lower bar for participating in the Forum; if I don't participate pseudonymously, I am afraid of looking dumb.
I also feel like my job places some constraints on the things I can say in public.
Hi Lauren!
Thank you for another excellent post! I’m becoming a big fan of the Substack and have been recommending it.
A quick question that you may have come across in the literature, but that I didn't see addressed in your article: not all peacekeeping missions are UN missions; there are also missions from ECOWAS, the AU, the EU, and NATO.
Is the data you presented exclusively true for UN missions, or does it apply to other peacekeeping operations as well?
I’d be curious to know, since those institutions seem more flexible and less entangled in geopolitical conflicts than the UN. However, I can imagine they may not be seen as being as neutral as the UN and may therefore be less effective.
Could you say a bit more about your uncertainty regarding this?
After reading this, it sounds to me like shifting some government spending to peacekeeping would be money much better spent than on many other areas.
Or do you mean it more from an outsider/activist perspective, that is, that the work of running an organization focused on convincing policymakers to do this would be very costly and might make this approach much less effective than other interventions?
Simple Forecasting Metrics?
I've been thinking about the simplicity of explaining certain forecasting concepts versus the complexity of others. Take calibration, for instance: it's simple to explain. If someone says something is 80% likely, it should happen about 80% of the time. But other metrics, like the Brier score, are harder to convey: What exactly does it measure? How well does it reflect a forecaster's accuracy? How do you interpret it? All of this requires a lot of explanation for anyone not already interested in the science of forecasting.
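For concreteness, here is a minimal sketch (in Python, with made-up forecasts and outcomes purely for illustration) of how the Brier score works for binary questions: it is just the mean squared difference between the forecast probability and what actually happened.

```python
# Brier score for binary questions: the mean squared difference between
# the forecast probability and the outcome (1 if it happened, 0 if not).
# Lower is better; always guessing 50% scores 0.25.
forecasts = [0.8, 0.6, 0.9, 0.3]  # illustrative probabilities
outcomes = [1, 0, 1, 0]           # illustrative resolutions

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0.125 for this toy data
```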
What if we had an easily interpretable metric that could tell you, at a glance, whether a forecaster is accurate? A metric so simple that it could fit within a tweet or catch the attention of someone skimming a report—someone who might be interested in platforms like Metaculus. Imagine if we could say, "When Metaculus predicts something with 80% certainty, it happens between X and Y% of the time," or "On average, Metaculus forecasts are off by X%". This kind of clarity could make comparing forecasting sources and platforms far easier.
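As a rough sketch of what computing those two statements might look like (again with purely illustrative data, and taking "off by X%" to mean the mean absolute difference between forecast and outcome, which is just one possible reading):

```python
# Illustrative resolved forecasts: (predicted probability, outcome 0/1)
resolved = [(0.8, 1), (0.8, 1), (0.8, 0), (0.8, 1), (0.6, 0), (0.9, 1)]

# "When the platform predicts ~80%, how often does it happen?"
near_80 = [outcome for p, outcome in resolved if 0.75 <= p <= 0.85]
rate = sum(near_80) / len(near_80)
print(f"Events forecast at ~80% happened {rate:.0%} of the time")

# "On average, forecasts are off by X%": mean absolute error
mae = sum(abs(p - outcome) for p, outcome in resolved) / len(resolved)
print(f"On average, forecasts were off by {mae:.0%}")
```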
I'm curious whether anyone has explored creating such a concise metric—one that simplifies these ideas for newcomers while still being informative. It could be a valuable way to persuade others to trust and use forecasting platforms or prediction markets as reliable sources. I'm interested in hearing any thoughts or seeing any work that has been done in this direction.
Hi there!
I really enjoy the curated EA Forum podcast and appreciate all the effort that goes into it. However, I wanted to flag a small issue: my podcast app can't handle emojis in filenames, and with the increasing use of the "🔸" in forum usernames, this has been causing some problems.
Would it be possible to remove emojis from the filenames?
Thanks for considering this!
This is a very non-EA opinion, but personally I quite like this on, for lack of a better word, aesthetic grounds: charities should be accountable to someone, in the same way that companies are to shareholders and politicians are to electorates. Membership models are a good way of achieving that. I am a little sad that my local EA group is not organized in the same way.
Does anyone have thoughts on whether it’s still worthwhile to attend EAGxVirtual in this case?
I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't: