OscarD🔸

1724 karma · Working (0-5 years) · Oxford, UK

Comments (271)
I just remembered another sub-category that seems important to me: highly accurate AI-enabled lie detection. This could be useful for many things, but above all for helping to make credible commitments in high-stakes US-China ASI negotiations.

Thanks Caleb, very useful. @ConnorA I'm interested in your thoughts on how to balance comms on catastrophic/existential risks against comms on things like deepfakes. (I don't know about the particular past efforts Caleb mentioned, and I'm more open to deepfake comms being useful for building a broader coalition, even though deepfakes are a tiny fraction of what I care about wrt AI.)

Have you applied to LTFF? This seems like the sort of thing they would/should fund. @Linch @calebp if you have already evaluated this project, I would be interested in your thoughts, as I imagine others would too! (Of course, if you decided not to fund it, I'm not saying the rest of us should defer to you, but it would be interesting to know and take into account.)

Unclear - as they note early on, many people have even shorter timelines than Ege, so it's not representative in that sense. But many of the debates probably do track axes people actually disagree on.

If these people weren't really helping the companies, wouldn't such high salaries be surprising?

I think I directionally agree!

One example of timelines feeling very decision-relevant: if you are looking to specialise in partisan influence, the larger your credence in TAI/ASI by Jan 2029, the more you might want to specialise in Republicans. Whereas on longer timelines, Democrats have a ~50% prior chance of controlling the presidency from 2029, so specialising in Dem political comms could make more sense.

Of course criticism only partially overlaps with advice, but this post reminded me a bit of this take on giving and receiving criticism.

I overall agree we should prefer USG to be better AI-integrated. But this doesn't seem like a particularly controversial or surprising conclusion, so the main question is how high a priority it should be, and I'm somewhat skeptical it is on the ITN Pareto frontier. E.g. I would assume plenty of people already care about government efficiency and state capacity generally, and most of these interventions are about making USG more capable in general rather than being targeted at longtermist priorities.

So this felt neither like a piece targeted at mainstream US policy folks, nor that convincing as a case for making this an EA/longtermist focus area. Still, I hadn't thought much about this before, so doing this level of medium-depth investigation feels potentially valuable; I'm just unconvinced that e.g. OP should spin up a grantmaker focused on this (not that you were necessarily recommending that).

Also, a few reasons why governments may have an easier time adopting AI come to mind:

  • Access to large amounts of internal private data
  • Large institutions can better afford the one-time upfront costs of training or finetuning specialised models, compared to small businesses

But I agree the opposing reasons you give are probably stronger.

we should do what we normally do when juggling different priorities: evaluate the merits and costs of specific interventions, looking for "win-win" opportunities

If only this were how USG juggled its priorities!
