"Indeed, some people have actually joked betting low on the more existential questions since they won't get a score if we're all dead (at least, I hope they're joking)"
I think several of them aren't joking: they care more about feeling "smart" for having gamed the system than about the potential long-term consequences of inaccurate forecasts.
I do, like you, really hope that I'm wrong and they are in fact joking.
As another recently very active Metaculus user who comes from an EA background, I basically agree with all of the above, except that I don't think all the users are joking about betting low on the existential questions.
I wrote this a while ago; it wasn't intended as an update on deworming, but the claims I made about the evidence for deworming in it were up to date at the time of writing.
Forecast your win probability in a fight against:
500 horses, each with the mass of an average duck.
1 duck, with the mass of an average horse.
(numbers chosen so mass is roughly equal)
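The "roughly equal" claim checks out under ballpark figures. As a quick sketch, assuming an average duck masses about 1 kg and an average horse about 500 kg (both figures are my own rough assumptions, not from the question):

```python
# Hypothetical ballpark masses, not from the original question.
DUCK_KG = 1.0    # assumed average duck mass
HORSE_KG = 500.0  # assumed average horse mass

horde_mass = 500 * DUCK_KG  # 500 horses, each duck-massed
duck_mass = 1 * HORSE_KG    # 1 duck, horse-massed

print(horde_mass, duck_mass)  # both sides come to ~500 kg
```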
Here's a ton of questions; pick your favourites to answer.
What's your typical forecasting workflow like? Subquestions:
Do you tend to make Guesstimate/Elicit/other models, or mostly go qualitative? If this differs for different kinds of questions, how?
How long do you spend on initial forecasts and how long on updates? (Per question and per update would both be interesting)
Do you adjust towards the community median and if so how/why?
More general forecasting:
What's the most important piece of advice for new forecasters that isn't contained in Tetlock's Superforecasting?
Do you forecast everyday things in your own life other than Twitter followers?
What unresolved question are you furthest from the community median on?
I think it depends somewhat on what you mean by long-term, but my (limited) understanding is that wild-animal welfare is currently very much in the "we should do some thinking and maybe some research, but not take any actions until we know a lot more and/or have a much greater ability to do so" stage, which puts it on a timeframe that is decidedly *not* "neartermist".
I'd also love to see a fictional world with a moral system that was explicitly karmic-utilitarian. That is, the consequences of actions for particular agents matter in proportion to the amount of utility previously generated by those agents.
What if you had a world where karma is discovered to be real, but the amount of good karma you get is explicitly longtermist-consequentialist and focused on expected utility? It'd be a great way of looking at effectiveness, and you'd also be able to explore really interesting neglectedness effects as people pile into effective areas.
I really enjoyed the blogpost, and think it's really valuable work, but have been somewhat dismayed to see virtually no discussion of the final part of the post, which is the first time the author attempts to include an admittedly rough term describing finite resources in the model. It... does not go well.
Given a lot of us are worried about x-risk, this seems to urgently merit further study.
I'd strongly suggest adding this post to appendix 1, especially given its 235 comments.