I do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI).
I write a Forecasting Newsletter, and have programmed Metaforecast.org, a search tool that aggregates predictions from many different platforms. I also generally enjoy winning bets against people who are too confident in their beliefs.
I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:
> Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.
Otherwise, I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. A good fraction of my research is available either on the EA Forum or backed up on nunosempere.com, which also hosts my more casual personal blog.
I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." After that, I joined QURI and spent some time in the Bahamas as part of the FTX EA Fellowship.
Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, and picked up some development economics. I helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.
You can share feedback anonymously with me here.
Note: You can sign up for all my posts here: <https://nunosempere.com/.subscribe/>, including posts that I don't feel like posting on the EA Forum, or for which I don't think it is an appropriate venue.
I think you may have a model where you don't want comments above a given level of rudeness/sarcasm/impoliteness/political incorrectness, etc. However, I would prefer a model where you give a warning or a ban when a comment's or a user's rudeness exceeds the value they provide, as I think that would produce more value overall: I would want to keep rude comments if they produce enough value to be worth it.
And I think that you do want to have the disagreeable people push back, to discourage fake group consensus.
I would go with this, though using more like 60 years.
I would also discount a bit for counterfactual mortality.
> The piece relies heavily on criticism of Connor, Conjecture CEO, but does not attempt to provide a balanced assessment: there are no positive comments written about Connor along with the critiques
Could you elaborate more on your answer to this? I'm left a bit uncertain about whether there are redeeming aspects of Connor/Conjecture. For example, I might be inclined to count switching from capabilities to alignment for a lot of brownie points.
> Our impression is that it's easy to use but no more powerful than existing open-source models like Whisper, although we are not aware of any detailed empirical evaluation.
Nitpick: I haven't tried it, but per <https://platform.conjecture.dev/>, it seems like it has diarization, i.e., the ability to distinguish between different speakers. Whisper doesn't have this built-in, and from what I recall, getting this to work with external libraries was extremely annoying.
Here is something else that I'm confused about: did Bush announce PEPFAR specifically as a feel-good measure to somehow support his invasion of Iraq, and contribute to the perception that the US were the good guys? You mention "Twenty years ago, in the same State of the Union speech in which he made the case for invading Iraq..." If so, then the bundle was negative, in the same way that FTX+FTX Future Fund could be negative.
> Emily Oster declared that “treating HIV doesn’t pay.” “It is humane to pay for AIDS drugs in Africa,” she wrote, “but it isn’t economical. The same dollars spent on prevention would save more lives.”

> Twenty years later, with $100 billion dollars appropriated[26] under both Democratic and Republican administrations, and millions of lives saved, it’s hard to argue a different foreign aid program would’ve garnered more support, scaled so effectively, and done more good. It’s not that trade-offs don’t exist, we just got the counterfactual wrong.
It's not clear to me that the core point of the essay goes through. For instance, the same amount of money applied to malaria would also have helped many people, driven down prices, and encouraged innovation; maybe the equivalent would have been a malaria vaccine, a gene drive, or mass fumigations.
i.e., it seems plausible that both of these could be true:
I mean, I think it exceeds some level of rudeness in that you consider the hypothesis that Karnofsky might not be an impeccable boy scout, which some people might consider to be rude. But I also think that it's fine to exceed that threshold, so ¯\_(ツ)_/¯