Cameron Holmes

Research Manager @ MATS
33 karma · Joined · Working (6-15 years) · London, UK

Bio


Amplifying the AI Alignment research projects of MATS scholars at LISA.

Formerly a Director of Product Management in Capital Markets and Data Analytics at Coalition Greenwich (S&P Global) and McLagan (Aon).

Interested in prediction markets and semiconductors. AMF monthly donor for 9 years.

How others can help me

I am looking for potential MATS scholar candidates, and for potential collaborators including mentors and other organisations.

How I can help others

Managing career transitions from broader technology/finance to high-impact careers - particularly for those mid-way through their career, parents, or those moving into AI Safety.

Anything related to MATS, in particular the extension research phase at LISA, London.

Posts
1


Comments
9

What makes you think this? Every technique we have is statistical in nature (due to the nature of the deep learning paradigm), and none is even approaching three 9s of safety - while we need something like thirteen 9s if we are going to survive more than a few years of ASI.
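To make the "9s" framing concrete (my own toy numbers, purely illustrative - the comment itself doesn't specify a model): if a system takes n independent high-stakes actions, each with failure probability p, the chance that nothing goes wrong is (1 - p)^n, so reliability requirements compound quickly.

```python
# Toy illustration of why many "9s" compound: survival over n
# independent actions, each failing with probability p, is (1 - p)**n.
# The 10,000-action figure below is an arbitrary assumption.

def survival_probability(p_failure: float, n_actions: int) -> float:
    """Probability that none of n independent actions fails."""
    return (1.0 - p_failure) ** n_actions

# Three 9s (p = 1e-3) collapses over many actions;
# thirteen 9s (p = 1e-13) barely moves.
three_nines = survival_probability(1e-3, 10_000)
thirteen_nines = survival_probability(1e-13, 10_000)
print(f"{three_nines:.2e}")     # ≈ 4.5e-05
print(f"{thirteen_nines:.9f}")  # ≈ 0.999999999
```

The independence assumption is of course doing a lot of work here; correlated failures or error correction would change the arithmetic, but not the qualitative point that per-action reliability must be extreme.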


I also don't see how it's less foomy. SWE-bench and ML researcher automation are still improving - what happens when the models are drop-in replacements for top researchers?


The gap between weak-AGI and strong-AGI/ASI timeline predictions seems to have ticked up a bit. It doesn't seem like intra-token reasoning/capability is scaling as hard as I'd previously feared. The models themselves are not getting scarily capable and agentic in each forward pass; instead we are increasingly eliciting those capabilities and agency in context, with the models remaining myopic and largely indifferent.

If the new paradigm holds with a significant focus on scaling inference it seems to both be less aggressive (in terms of scaling intelligence) and more conducive to 'passing' safety.

The current paradigm likely places a much lower burden on hard interpretability than I expected ~1 year ago; it feels much more like a verification problem than a full solve. With current rates of interpretability progress (and AI accelerating safety ~in line with capabilities), we could actually be able to verify that a CoT is faithful and legible, and that might be ~sufficient.

Agreed. I still think there's a reasonable chance that ML research falls within the set of capabilities that quickly reach superhuman levels, so foom is still on the cards. More RL in general is also just inherently quite scary.

The 9s-of-safety framing makes sense from a control perspective, but I think there's another angle: the possibility of a model that is aligned enough to actually not want to pursue human extinction.

What is the eventual end result after total disempowerment? Extinction, right?

Potentially, but I think there's still room for scenarios where humans are broadly disempowered yet not extinct - worlds where we get a passing grade on safety: where we effectively avoid strongly-agentic systems and achieve sufficient alignment such that human lives are valued, but fall short of the full fidelity necessary for a flourishing future.

Still, this point has updated me slightly; I've reduced my disagreement.

My model looks something like this:

There are a bunch of increasingly hard questions on the Alignment Test. We need to get enough of the core questions right to avoid the ASI -> everyone-quickly-dies scenario. This is the 'passing grade'. There are some bonus/extra-credit questions that we also need to get right to get an A (a flourishing future).

We don't know exactly which questions will be included or in which section. We also don't know the thresholds for these grades and we are (rightly) focusing the vast majority of our efforts on the expected fundamental questions to maximise our chance of the passing grade. 

Relative to ~1 year ago, the 'passing grade' for alignment feels a bit easier and we've got a bit more study time. I've also become aware of just how much more difficult the A grade might be, and that a pass might not be very valuable at all - I don't think anything has changed there; I was just somewhat ignorant of risks from gradual disempowerment.

It might make sense to dedicate say 5-20% of our effort to study for questions we expect in the bonus/extra credit section. I think we currently do less than that (perhaps 1-5%). So I think the vast majority of the effort should be spent on avoiding extinction, but I'm less sure about effort at the margin.



Digital sentience could also dominate this equation.

71% ➔ 50% disagree

AI NotKillEveryoneism is the first order approximation of x-risk work.

I think we probably will manage to make enough AI alignment progress to avoid extinction. AI capabilities advancement seems to be on a relatively good path (less foomy), and AI Safety work is starting to make real progress on avoiding the worst outcomes (although a new RL paradigm or illegible/unfaithful CoT could make this scarier).

Yet gradual disempowerment risks seem extremely hard to mitigate, very important, and pretty neglected. The AI Alignment/Safety bar for good outcomes could be significantly higher than avoiding extinction.

Most fundamentally, human welfare currently seems highly contingent on our productivity, and decoupling the two could be very hard.


I'm completely sold on the arguments in general EV terms (the vast suffering, tractability, importance, neglectedness - even within EA), up to the limits of how confident I can be about anything this complex. The remainder is basically fringe possibilities - weird second- and third-order impacts from the messiness of life that mean I couldn't be >98% on something like this.

The deontological point was that maybe there is a good reason I should only care about, or vastly weight, humans over animals through some moral obligation. I don't currently believe that, but I'm hedging for it because I could be convinced.

I realise now I'm basically saying I 90% agree that rolling a D20 and needing 3+ is a good idea, when it would be fair to also interpret it as 100% agreement that it's a good idea ex ante.
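For concreteness, the 90% in the D20 analogy is just the exact chance of rolling 3 or higher on a fair twenty-sided die - a minimal sketch:

```python
from fractions import Fraction

# A fair d20 has faces 1..20; rolling 3+ succeeds on 18 of them.
faces = list(range(1, 21))
successes = [f for f in faces if f >= 3]
p_success = Fraction(len(successes), len(faces))

print(p_success)         # 9/10
print(float(p_success))  # 0.9
```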

(Also, my first comment was terrible - sorry, I just wanted to get on the board on priors before reading the debate.)

I think my reservations are mostly deontological, plus a few fringe possibilities.

Thank you Jonny - admittedly I only made it to one event, but it was my first in-person interaction with an EA group, and I really enjoyed it and found you very welcoming.

I just find it delightful that HPMOR is the start of so many people's EA origin stories - partly just as a curiosity, as I took the opposite path to so many people (AMF > EA > LW > HPMOR).

Presumably there are many people alive today because of a chain of events that started with EY writing a fanfic, of all things.

Great post

Regarding the second point, about how EAs (or anyone else) might exploit an inefficiency in this space: I think it's tricky simply because of the number of other risks that inform the pricing of long-dated bonds. Many of these (climate, demographics, geopolitics, populism, etc.) could wipe out any short (or especially leveraged short) position before TAI is realised.

As noted in my other comment, I expect that for someone with high-conviction views on short TAI timelines there are bets that are:

  • Much higher in expected returns
  • Less capital intensive
  • Less susceptible to other risks

Examples of these bets are broadly discussed elsewhere, but they often relate to long/short equity bets on disrupting/disrupted companies and on companies in the supply chain (semiconductor design/fab/tooling, datacentres, data aggregators, communications, etc.).

I think that, at best, shorting long-dated bonds could form part of a short-timelines TAI bet - used to hedge long positions elsewhere or maintain neutrality against other factors - rather than being the core position. It seems likely there are considerably better options for someone taking such a bet (as you allude to in the opportunities for future work).

Regarding the first point, about the extent to which we should update timelines given that the bond market is not pricing in short TAI timelines: my prior is that fixed income (bond) markets are generally more sophisticated and efficient than equity markets. This initially leads me to believe we likely should update on this, and weight it more strongly than bullish equity sentiment towards some AI themes.

However, on the flipside, I think the size of this market means it can retain inefficiencies around subtle themes for longer. I think of this as a form of Expecting Short Inferential Distances: there are a lot of inferential reasoning steps around TAI, scaling, take-off, etc., which makes conviction slower to spread than for something like demographic shifts, where the causality is much more straightforward. This is relevant because moving government bond markets requires people to take this bet with a huge amount of assets - it is a very capital-intensive trade with a lot of exposure to other uncorrelated risks and confounding variables (climate, demographics, geopolitics, populism). The reason I think it may be unlikely that many people are making this bet relates to this:

An analysis of the most capital-efficient way to bet on short AI timelines and the possible expected returns (“the greatest trade of all time”).

I suspect there are far more highly levered bets that market participants with a high-conviction belief in short TAI timelines could take, potentially diluting the impact on lower-beta instruments (like bonds). For example, I expect even being long fairly broad equity markets might outperform this bet, and much more targeted bets (especially if they could be hedged against other risks, bringing them closer to a 'pure' TAI bet) could be expected to return many multiples of the short-US30Y trade.

If the amount of money managed by those with high-conviction TAI views is 'small' (<<$100bn), then I expect there are many more favourable inefficiencies/price dislocations for them to exploit, and not a sufficient mass of 'smart TAI' money to spill over into long-dated bonds.