I'm not really sure this contradicts what I said very much. I agree the V-Dem evaluators were reacting to Trump's comments, and this made them reduce their rating for America. I think they will react to Trump's comments again in the future, and this will again likely make them reduce their rating for America. This will happen regardless of whether policy changes, and will be poorly calibrated for actual importance - contra V-Dem, Trump getting elected was less important than the abolition of slavery. Since I think Siebe was interested in policy changes rather than commentary, this means V-Dem is a bad metric for him to look at.
Let's see what Manifold Markets thinks about this.
https://manifold.markets/Siebe/if-trump-is-elected-will-the-us-sti
https://manifold.markets/Siebe/if-a-democrat-is-elected-president
You can see from this that the two sides aren’t equal.
Unfortunately these questions will be resolved based on V-Dem indicators, which are a poor metric for this question, as I illustrated here. The scores are not particularly rigorous or consistent, and the evaluators have clear partisan bias.
Suppose we have some LLM interpretability technology that helps us take LLMs from a bit worse than humans at planning to a bit better (say because it reduces the risk of hallucinations), and these LLMs will ultimately be used by both humans and future agentic AIs. The improvement from human-level planning to better-than-human-level planning benefits both humans and optimiser AIs. But the improvement up to human level is a much bigger boost to the agentic AI, which would otherwise not have access to such planning capabilities, than to humans, who already had human-level abilities. So this interpretability technology actually ends up making crunch time worse.
It's different if this interpretability (or other form of safety/alignment work) also applies to future agentic AIs, because we could use it to directly reduce the risk from them.
This is an org that's sufficiently EA-aligned for me to have met several employees at EAGs.
This seems like a very poor metric. I would not say OpenAI is an EA-aligned company; the standard EA view on OpenAI is it is a spectacularly destructive company that people would prefer stopped pushing the capabilities frontier.
I don't buy this is a morally or socially significant distinction. Do we really believe that a parallel world Warren, who made a public pledge to give his money away, and fully intended to, but never got around to actually writing a will before he changed his mind, would be significantly less blameworthy, or would escape opprobrium?
Part of my intuition is that the temporal ordering doesn't matter - if anything it's better to give sooner - so we should not treat someone who donated and then stopped more harshly than someone who consumed frivolously and then saw the light later in life.
Immigration security, college education and early childhood development seem like they straightforwardly fall into 'addressing the needs of society' according to standard usage of the term. They're not EA causes, but I'm not aware of Warren promising the money would go to EA causes, or even to things we would like at all. This is philanthropy (doing stuff to change society) as contrasted with personal consumption (boats, wine, parties etc.)
Thanks for writing this update Toby!
With Covid, we had the bizarre situation where the biggest global disaster since World War II was very plausibly caused by a lab escape from a well-meaning but risky line of research on bat coronaviruses. Credible investigations of Covid origins are about evenly split on the matter. It is entirely possible that the actions of people at one particular lab may have killed more than 25 million people across the globe.
I'm far from an expert on the subject, but my impression was that a lot of people were convinced by the Rootclaim debate that it was not a lab leak. Is there a specific piece of evidence they might have missed that suggests the lab leak is still plausible? (The debate focused on genetically modified leaks, and unfortunately didn't discuss the possibility of a leak of a naturally evolved disease.)
Here is a new law paper on the subject, arguing that giving votes to kids (via their parents) is desirable and legally tractable: