I run Sentinel, a team that seeks to anticipate and respond to large-scale risks. You can read our weekly minutes here. I like to spend my time acquiring deeper models of the world and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:
Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.
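For context (my gloss, not part of the quote): the Brier score measures forecast accuracy as the mean squared error between probability forecasts and binary outcomes,

$$\text{Brier} = \frac{1}{N}\sum_{i=1}^{N} (f_i - o_i)^2$$

where $f_i$ is the forecast probability and $o_i \in \{0,1\}$ is the realized outcome; lower is better. A *relative* Brier score compares a forecaster's score against a baseline such as the other competitors or the crowd aggregate; the exact normalization here is CSET-Foretell's own.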
I used to post prolifically on the EA Forum, but nowadays I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, for several reasons.
But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.
My career has been as follows:
You can share feedback anonymously with me here.
Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS feed here: <https://nunosempere.com/blog/index.rss>
I think there is something powerful about noticing who is winning and trying to figure out what the generators for their actions are.
On this specifically:
the most cost effective way a billionaire entrepreneur and major government contractor could get valuable ROI out of an easily-flattered president with overlapping interests was by buying Twitter
This is not how I see it. Buying Twitter and changing its norms was a surprisingly high-leverage intervention in a domain where turning money into power is notoriously difficult. One of the effects, but not the only one, was influencing the outcome of the 2024 US elections.
I've done some work for someone at the $200k/year level. Maybe one option would be for three to ten people in your $20k–$100k range to pitch in together and hire a researcher for a month to answer your pressing common questions.
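To make the arithmetic concrete, using the figures above: a researcher at the $200k/year level costs roughly $200k / 12 ≈ $17k per month, so three to ten people pitching in would pay about $1.7k–$5.6k each.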
I'm also kinda frustrated by the low number of funders for speculative, non-AI-specific GCR work (perhaps like your own Nucleic Acid Observatory, or my own Sentinel). I've thought about starting a donor circle for this, but then I'd just have an obvious conflict of interest. Still, I'd like such a thing to exist.
If you have the hypothesis space {publicly competent, publicly incompetent, privately incompetent, privately competent}, we get some information that screens off "publicly competent". That leaves the narrower set {publicly incompetent, privately incompetent, privately competent}, so it's still an update. I agree that in this case there is some room for doubt, though. Depending on which institutions we are thinking of (the Democratic party, newspapers, etc.), we also get some information from the speed at which people decided, or learned, that Biden was going to step down.
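As a minimal sketch of that update, assuming a uniform prior over the four hypotheses and treating the observation as ruling out only "publicly competent" (both assumptions are mine for illustration, not claims from the thread):

```python
# Minimal Bayesian sketch: observing public incompetence screens off
# "publicly competent" and renormalizes mass over the remaining hypotheses.
# The uniform prior is an illustrative assumption.
prior = {
    "publicly competent": 0.25,
    "publicly incompetent": 0.25,
    "privately incompetent": 0.25,
    "privately competent": 0.25,
}

# Likelihood of the observation under each hypothesis: zero under
# "publicly competent", equal under the rest (also an assumption).
likelihood = {h: 0.0 if h == "publicly competent" else 1.0 for h in prior}

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)
# {'publicly competent': 0.0, 'publicly incompetent': 0.333...,
#  'privately incompetent': 0.333..., 'privately competent': 0.333...}
```

The point is just that eliminating one hypothesis redistributes probability mass over the survivors, so the observation is informative even if it leaves room for doubt.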
This is the first section of my Forecasting newsletter: US elections, posted here because it has some overlap with EA.
Thanks for mentioning Sentinel. Two points: