Sam Bogerd

95 karma · Joined Jul 2021


I don't necessarily disagree, but from an organizational perspective: isn't this the sort of thing we have a wiki for?

I think adversarial collaborations are very interesting, so I am curious whether anyone has done work on making this technique scale, for example by writing a good manual for how to run one.

Thank you for answering! It is reassuring to know that previous attendance is taken into account.

For the upcoming Pentathlon this January: is there anything I can do to increase the odds of being teamed up with fellow EAs if I am not working for an EA org?

I feel like there is a lot of content on the forum about getting the most out of an EAG, but not much on who should go (again). I understand that this depends a lot on personal circumstances, but I am wondering whether there are any guidelines or tips on who should apply to EAGs. And how many EAGs are too many?

I have never gone to an EAG and felt I wasted my time, and I have always come back with valuable new connections. But going to every EAG is obviously too much, and attending takes a spot from someone else who might need it more. Deciding where to draw the line is something I am struggling with a bit.

Hi Lizka, 

Thank you for your response! I will contact OWID about this as well, that seems like a great idea! 

On your sixth point: I am sorry for not explaining it well initially. My concern is something like this:

  1. A government opens a forecasting question on whether it will achieve its emissions target for 2030 (or a target for anything else).
  2. Forecasters in aggregate predict that there is only a 5% chance of success.
  3. This is seen as unacceptably low by policy-makers, and new policy is announced and implemented.
  4. Forecasters adjust and now think there is a 60% chance of success.
  5. This happens several times.
  6. Smart forecasters now understand that low aggregate forecasts will result in new policy initiatives, so a good strategy is to consistently predict higher chances of success than their true belief under current policy.
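The loop above can be sketched as a toy simulation. Everything here (the threshold, the policy boost, the specific probabilities, the function names) is an illustrative assumption of mine, not anything stated in the post:

```python
# Toy sketch of the forecasting feedback loop: the government reacts to
# low forecasts with new policy, so a strategic forecaster can suppress
# that reaction by inflating their forecast. All numbers are made up.

def policy_boost(forecast: float, threshold: float = 0.5) -> float:
    """If the aggregate forecast falls below the threshold, the government
    announces new policy that raises the true success probability."""
    return 0.55 if forecast < threshold else 0.0

def honest_forecast(true_prob: float) -> float:
    # An honest forecaster just reports their belief under current policy.
    return true_prob

def strategic_forecast(true_prob: float, threshold: float = 0.5) -> float:
    # A strategic forecaster anticipates the policy response and predicts
    # above the threshold even when the true probability is low.
    return max(true_prob, threshold + 0.1)

true_prob = 0.05  # chance of hitting the target under current policy

# Honest forecasting triggers new policy, so the true probability rises.
outcome_honest = round(true_prob + policy_boost(honest_forecast(true_prob)), 2)

# Strategic forecasting suppresses the signal, so no policy is announced.
outcome_strategic = round(true_prob + policy_boost(strategic_forecast(true_prob)), 2)

print(outcome_honest)     # honest low forecasts provoke a policy response
print(outcome_strategic)  # inflated forecasts leave current policy unchanged
```

The perverse part is visible in the two outcomes: the honest forecast "looks wrong" after the policy response it caused, while the strategic forecast keeps the forecaster's track record intact at the cost of distorting the signal the government needed.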

I think this is roughly similar to the concern you expressed here under "Causality might diverge from conditionality".  

And of course I also doubt any government currently responds strongly enough to a prediction market or forecasting tournament for this to become a problem, but I am hoping that in the future we will see a lot more government interest.

Forecasting (SMART) government targets

Hi all! 

Nice to see that there is now a sub-forum dedicated to Forecasting; this seems like a good place to ask what might be a silly question.

I am doing some work on integrating forecasting with government decision-making. There are several roadblocks to this, one of which is generating good questions (see the rigor-relevance trade-off, among other things).

One way to avoid this might be to simply ask questions about the targets the government has already set for itself. Many of these are formulated in a SMART [1] way and are thus pretty forecastable. Forecasts on whether the government will reach its targets also seem immediately actionable for decision-makers. This seemed like a decent strategy to me, but I have not seen it mentioned very often. So my question is simple: is there some major problem here that I am overlooking?

The one major problem I could think of is an incentive for a sort of circular reasoning: if forecasters in aggregate think that the government is not on track to achieve a certain target, the government might announce new policy to remedy the situation. Smart forecasters might see this coming and start their initial forecast higher.

I think you can mitigate this by having forecasters predict intermediate targets as well. For example: most countries have international obligations to reduce their CO2 emissions by X% by 2030; instead of forecasting only the 2030 target, you could open forecasts on all the intermediate years as well.


  1. ^

    SMART stands for: Specific, Measurable, Assignable, Realistic, Time-related.

I think the websites look amazing! 

It did raise a question for me: is anyone doing focus groups or user interviews for these kinds of projects? The site looks good to me, but I am already convinced that WAW is worth thinking about, so I am not really the target audience. Maybe the website is less convincing for people who are not?

I think I agree that democratizing workplaces is a good idea, and it is an interesting argument that this could be effective because so many people spend so much time at work. Nevertheless, I would guess that spending or working on this does not come close to the cost-effectiveness of EA charities, though I have not done the math and would love to see someone do an initial exploration of this.


On a slightly related note: I am always a bit surprised by this movement's (co-ops, worker empowerment, etc.) focus on creating more co-ops or buying ever-larger shares of companies. It seems to me that if worker ownership is a good thing, the smart political play would be to keep strengthening co-determination laws until co-ops and other companies are basically identical. After all, creating new laws or convincing the state or others to spend loads of money is a lot more difficult than continually pushing to incrementally change existing laws.

I just wanted to say that this essay was really important in getting me into EA a few years ago; it resonated with me emotionally in a way that little else did. So I wanted to thank you for writing it.

I know this is not how EA is defined for most people, but I often think of EA as recognizing that we are always doing triage. 
