I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
He was a Republican donor, but from what I understand, not really a MAGA donor. My impression was that he was funding people on both sides who were generally in favor of his interests - but those interests did genuinely include issues like bio/AI safety.
I think it's very reasonable to try to be bipartisan on these issues.
Thinking about this a bit more -
My knee-jerk reaction is to feel attacked by this comment, on behalf of the EA community.
I assume that one thing that might be going on is a miscommunication. Perhaps you believe that I was assuming that EAs could quickly swoop in, spend a little time on things, and be far more correct than many experienced political experts and analysts.
I'm not sure if this helps, but the above really doesn't align with what I'm thinking. More something like, "We could provide more sustained help through a variety of methods. People can be useful for many things, like direct volunteering, working in think tanks, being candidates, helping prioritization, etc. I don't expect miracle results - I instead expect roughly the results of adding some pretty smart and hardworking people."
On EAs in policy, I'd flag that:
- There's a good number of people currently working in AI governance, bio governance, and animal law.
- Very arguably, said people have had a decent list of accomplishments and positions of power, given how recent such work is. See Biden's executive orders on AI, or the UK AI Security Institute. https://www.aisi.gov.uk/
- People like Dustin Moskovitz and SBF were highly prominent donors to the Democratic Party.
I think the EA policy side might not be very popular here, but it seems decently reputable to me. Mistakes have been made, but I think a decent accounting of the wins and losses would include several wins.
I do agree that finding others who are doing well and supporting them is one important way to help. I'd suspect that the most obvious EA work would look like prioritization for policy efforts. This has been done before, and there's a great deal more that could be done here.
I think that, in comparison to the rest of the space, EA's comparative advantage is more in talent than in money. I think the Harris campaign got around $2B in donations, but I get the impression that it could have used smarter and more empirically-minded people. That said, there is of course the challenge of actually getting those people listened to.
I've substantially revised my views on QURI's research priorities over the past year, primarily driven by the rapid advancement in LLM capabilities.
Previously, our strategy centered on developing highly-structured numeric models with stable APIs, enabling:
However, the progress in LLM capabilities has updated my view. I now believe we should focus on developing and encouraging superior AI reasoning and forecasting systems that can:
This represents a pivot from scaling up traditional forecasting systems to exploring how we can enhance AI reasoning capabilities for forecasting tasks. The emphasis is now on dynamic, adaptive systems rather than static, pre-structured models.
(I rewrote with Claude, I think it's much more understandable now)
I kind of hate to say this, but in the last year I've become much less enamored by this broad idea. Due to advances in LLMs, my guess now is that:
1. People will ask LLMs for ideas/forecasts at the point that they need them, and the LLMs will do much of the work right then.
2. In terms of storing information and insights about the world, Scorable Functions are probably not the best format (though it's not clear what is).
3. Ideally, we could basically treat the LLM itself as the "Scorable Function". As in, we'd have a rating for how good a full LLM is, and that rating becomes more important than the rating of any individual Scorable Function.
That said, Scorable Functions could still be a decent form of LLM output here and there. An obvious move would be to train LLMs to be great at outputting Scorable Functions.
More info here:
https://forum.effectivealtruism.org/posts/mopsmd3JELJRyTTty/ozzie-gooen-s-shortform?commentId=vxiAAoHhmQqe2Afc9
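To make the framing above a bit more concrete, here's a minimal sketch of what the scoring could look like if a "Scorable Function" is just anything that maps a question to a probability, and an LLM is scored through the same interface. All names, numbers, and the stubbed "LLM" here are hypothetical placeholders, not part of any existing QURI tooling.

```python
import math
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Question:
    text: str
    outcome: bool  # resolved outcome

# Here, a "Scorable Function" is just any callable mapping a question to a probability.
Forecaster = Callable[[Question], float]

def log_score(prob: float, outcome: bool) -> float:
    # Log score of the probability assigned to what actually happened (higher is better).
    p = prob if outcome else 1.0 - prob
    return math.log(max(p, 1e-9))

def average_score(forecaster: Forecaster, questions: List[Question]) -> float:
    # The same scoring applies whether the forecaster is a hand-built
    # Scorable Function or a wrapper around an LLM.
    return sum(log_score(forecaster(q), q.outcome) for q in questions) / len(questions)

# A trivial baseline vs. a stubbed "LLM" forecaster (a real one would call a model API).
baseline: Forecaster = lambda q: 0.5
llm_stub: Forecaster = lambda q: 0.7

questions = [
    Question("Will X happen by 2026?", True),
    Question("Will Y happen by 2026?", False),
]
print(average_score(baseline, questions))  # about -0.69
print(average_score(llm_stub, questions))  # worse here, since it's a constant 0.7
```

The point is just that rating "the full LLM" and rating an individual Scorable Function can use the same scoring machinery; the interesting questions are about which of those ratings we care about.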
Quick list of some ideas I'm excited about, broadly around epistemics/strategy/AI.
1. I think AI auditors / overseers of critical organizations (AI efforts, policy groups, company management) are really great and perhaps crucial to get right, but would be difficult to do well.
2. AI strategists/tools that broadly tell/help us figure out what to do about AI safety seem pretty safe.
3. In terms of commercial products, there have been some neat/scary military companies in the last few years (Palantir, Anduril). I’d be really interested if there could be some companies to automate core parts of the non-military government. I imagine there are some parts of the government that are particularly tractable/influenceable. For example, just making great decisions on which contractors the government should work with. There’s a ton of work to do here across the federal, state, and local levels of government.
4. Epistemic evals of AI seem pretty great to me, and I imagine work here can/should be pushed more soon. I’m not a huge fan of emphasizing “truthfulness” specifically; I think there’s a whole lot to get right here. (There’s a rough sketch of one narrow slice of this after this list.) I think my post here is relevant - it’s technically specific to evaluating math models, but I think it applies to broader work. https://forum.effectivealtruism.org/posts/fxDpddniDaJozcqvp/enhancing-mathematical-modeling-with-llms-goals-challenges
5. One bottleneck to some of the above is AI with strong guarantees+abilities of structured transparency. It’s possible that more good work here can wind up going a long way. That said, some of this is definitely already something companies are trying to do for commercial reasons. https://forum.effectivealtruism.org/posts/piAQ2qpiZEFwdKtmq/llm-secured-systems-a-general-purpose-tool-for-structured
6. I think there are a lot of interesting ways for us to experiment with [AI tools to help our research/epistemics]. I want to see a wide variety of highly creative experimentation here. I think people are really limiting themselves to a few narrow conceptions of how AI can be used, in ways humans are already very comfortable with. For example, I’d like to see AI dashboards of “How valuable is everything in this space,” or even experiments where AIs negotiate on behalf of people and the people act on the result. A lot of this will get criticized for being too weird/disruptive/speculative, but I think that’s where good creative work should begin.
7. Right now, I think the field of “AI forecasting” is actually quite small and constrained. There’s not much money here, and there aren’t many people with bold plans or research agendas. I suspect that some successes / strong advocates could change this.
8. I think it’s likely that Anthropic (and perhaps DeepMind) would respond well to good AI+epistemics work. “Control” was quickly accepted at Anthropic, for example. I suspect that things like an “internal AI+human auditor” or an internal “AI safety strategist” could be adopted if done well.
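On (4), here’s a rough sketch of one narrow slice of what an epistemic eval could look like: checking how well a model’s stated probabilities on resolved questions are calibrated, via a Brier score and crude calibration buckets. The data and bucketing here are invented placeholders; a real eval would pull the probabilities from actual prompts on resolved forecasting questions.

```python
from collections import defaultdict

# Hypothetical (stated probability, outcome) pairs; in practice these would come
# from prompting a model on resolved forecasting questions and parsing its estimates.
forecasts = [(0.9, True), (0.7, True), (0.6, False), (0.2, False), (0.8, True)]

# Brier score: mean squared error between stated probability and outcome (lower is better).
brier = sum((p - float(o)) ** 2 for p, o in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Crude calibration check: within each probability bucket, compare the stated
# probability to the observed frequency of positive resolutions.
buckets = defaultdict(list)
for p, o in forecasts:
    buckets[round(p, 1)].append(o)

for bucket, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated ~{bucket:.1f} -> observed {observed:.2f} (n={len(outcomes)})")
```

This is obviously far short of what I mean by “epistemic evals” (which would cover reasoning quality, honesty about uncertainty, etc.), but calibration on resolved questions is one cheap, measurable starting point.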
I think SBF was fairly unique in caring about this, and his empire collapsed before he could do anything like this in that election. When I said "undervalue", I wasn't referring to SBF, given that he hasn't been active during this period. (Obvious flag: while I might have sympathized with some of SBF's positions, I very much disagree with many of his illegal and fraudulent actions.)
Looking back, though, paying $5B for Trump to definitely not run/win seems like a great deal to me, even if the act of paying him raises a lot of qualms I'd be uncomfortable with.
It is orthogonal. More that, if TAI might come soon, we probably want an administration that would both promote AI safety and be broadly cooperative/humble/deliberate.