This is a timely post. It feels like funding is a critical obstacle for many organisations.
One idea: Given the recent calls by many tech industry leaders for rapid work on AI governance, is there an opportunity to request direct funding from them for independent work in this area?
To be very specific: Has someone contacted OpenAI and said: "Hey, we read with great interest your recent article about the need for governance of superintelligence. We have some very specific work (list specific items) in that area which we believe can contribute to making this happen. But we're massively understaffed and underfunded. With $1m from you, we could put 10 researchers to work on these questions for a year. Would you be willing to fund this work?"
What's in it for them? Two things:
Forgive me if this is something that everyone is already doing all the time! I'm still quite new to EA!
IMHO this is quite an accurate and helpful statement, not a euphemism. I offer this perspective as someone who has worked for many years in a corporate research environment - actually, in one of the best corporate research environments out there.
There are three threads to the comment:
Put all that together, and it's logical that once we have an AI that can do a specific domain task as well as a human (e.g. designing and interpreting simulated research into potentially interesting candidate molecules for drugs to fight a given disease), it is almost a no-brainer for a corporation to use AI to massively accelerate its progress.
As AI gets closer to AGI, the domains in which AI can work independently will grow, the need for human involvement will shrink, and the pace of innovation will accelerate. Yes, there will be some limits, like physical testing, where AI will still need humans, but even there robots already do much of the work, so human involvement is decreasing every day.
It's also important to consider who was saying this: OpenAI. So their message was NOT that AI is bad. What they wanted us to take away was that AI has huge potential for good - like the way it can accelerate the development of medical cures, for example - BUT that it is moving forward so fast, and most people do not realise how fast this can happen, so we (in the know) need to keep pushing the regulators (mostly not experts) to regulate this while we still can.
Just reading this now. I love the approach and especially the tangibility of it - identifying specific institutions, eventually developing specific plans to influence them in specific ways.
Of course there are people spending billions of dollars to influence some of these institutions, but that doesn't mean that a small, well-organised group cannot have an important impact with a well-designed, focused campaign.
My first reaction to the list itself is that it feels quite narrow (government institutions and IT companies). I wonder if this reflects reality or is just a result of the necessary limitations of a first iteration with quite limited resources. Nobody could question that every institution on this list is important and influential, and if you manage to influence any of them in a positive way, that will be a great success.
My point is more that there seem to be whole classes of organisations and institutions with huge impact which are not represented in the list at all. One could perhaps argue that any one of the institutions listed is more important than any one of the institutions I suggest below. But I'm not sure that necessarily means the most effective way forward is to focus only on the types of institutions listed, rather than looking at the potential in a much broader group - especially on the question of influenceability, on which I'd suggest several of the institutions listed above might score quite poorly.
Here are a few of the categories I would have expected to see in the list, but didn't:
I'm not criticising the work - I think it's fantastic! - but rather wondering if it would be worth someone doing a second iteration and looking at some of these bodies which may not have been so obvious. All the ones I've listed above just reflect my personal expertise and experience as a chemical engineer, so the list is probably also far too narrow. It would be interesting to see what ideas we might get from a group of others - maybe historians, geologists, teachers, lawyers, biologists, astronomers, ...
Really looking forward to following this and seeing where it leads!
Thanks Jessica and Sean for this powerful and inspiring post.
I would add one more, perhaps even more important, way in which engineers can contribute here - which is more or less what you have just done!
As engineers, you probably don't even consciously realise this anymore, but the type of analysis you've shared here is pure engineering. It's a way of thinking that gets so drilled into us that we forget we didn't always think that way.
For example, the three-layer model (prevention, response, resilience) is almost perfectly analogous to how chemical engineers study explosion safety when handling solvents. First, you ask how to prevent an explosion (safe procedures, no ignition sources, nitrogen blanketing, ...). Second, you ask how to minimise the risk of harm if there is an explosion (safe enclosures, PPE, minimum people present, ...). And third, you study how to minimise the extent of harm (fire-evacuation procedures, emergency response, first-aid training, ...).
From this perspective, I'd also add one comment: the assumption that a 50% reduction in one layer translates into a 50% reduction in total risk only holds if the layers are independent. One of the most difficult challenges in a safety analysis is to identify cases where one accident can break two barriers at once. The stereotypical example of this (from my youth) was the nuclear war scenario in which the electromagnetic pulse from the explosion destroyed the communication infrastructure, and with it much of the response capability. We know about that now and can design around it, but there are probably other factors, like a viral infection that makes it impossible for the vaccine designers to do their work.
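To make the independence point concrete, here is a minimal sketch with my own illustrative assumptions (not from the original post), assuming a catastrophe only happens when all three layers fail:

```latex
% Minimal sketch, assuming a catastrophe requires all three layers to fail.
% If the layer failures are independent:
\[
  P(\text{catastrophe}) = P(\text{prevention fails}) \cdot P(\text{response fails}) \cdot P(\text{resilience fails}),
\]
% so halving any one factor halves the total risk.
% But if a single common-cause event (like the EMP example) knocks out both
% response and resilience with probability $q$ whenever prevention fails, then
\[
  P(\text{catastrophe}) \;\geq\; P(\text{prevention fails}) \cdot q ,
\]
% and no amount of improvement to the response or resilience layers on their own
% can push the total risk below that floor.
```

In other words, the multiplicative benefit only appears once the common-cause failure modes have been designed out.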
I look forward to seeing more and more engineers working on civilisation resilience! Thanks for getting the ball rolling with Hi-Eng!
Really great discussion. How can we get this kind of information out into the general population?
IMHO the biggest challenge we face is convincing people that the default outcome, if we do nothing, is more likely to be an AI which is much more powerful than humans. Tom and Luisa, you do a great job of making this case. If someone disagrees, it is up to them to show where the flaw in the logic lies.
I think we face three critical challenges in getting people to act on this as urgently as we need to:
All this means that most of us (I include myself in this) read this article, fully accept that Tom's arguments are compelling, realise that we absolutely must do something, but somehow do not rush out and storm the parliament demanding immediate action. Instead, we go on to the next item on our to-do list, maybe laundry or grocery shopping ... I'm really determined to figure out a way to overcome this inertia.
*Obviously this is true for those of us in the current generation, in the West. I'm sure those who lived through world wars or famines or civil wars, even those today in Syria or Ukraine or Sudan, will have a better understanding of how things can suddenly go wrong. But most of the people taking the decisions about AI have never experienced anything like that.
Great article!
The analogy to the economy at the end is wonderful. A lot of us don't realise how badly the economy works. But it's easy to see by just thinking about AI and what's happening right now. People are speculating that AI might one day do as much as 50% of the work now done by humans. A naive outsider might expect us to be celebrating in the streets and introducing a 3-day work-week for everyone. But instead, because our economy works the way it does, with almost all of most people's income directly tied to their "jobs", the reaction is mostly fear that it will eliminate jobs and leave people without any income.
I'm guessing that the vast majority of people would love to move to a condition (which AI could enable) where everyone works only 50% as much but keeps the same benefits. But there is no realistic way to get there with our economy, at least not quickly. Even if we know what we want to achieve, we just cannot overcome all the barriers and Nash equilibria and individual interests. We understand the principles of each part of the economy, but the whole picture is just far too complex for anyone to understand or for us, even with total collaboration, to manipulate effectively.
I'm sure that if we were trying to design the economy from scratch, we would not want to create a system in which a hedge-fund manager can earn 1000 times as much as a teacher, for example. But that's what we have created. If we cannot control the incentives for humans within a system that we fundamentally understand, how well can we control the incentives for an AI system working in ways that we don't understand?
It's worrying. And yet AI can do so much good, in so many ways, for so many people, that we have to find the right way forward.
This is a great post.
The ideal job is the one where you're doing things you love to do anyway, and getting paid for it. But it's hard to know in advance what that will be. So I would encourage people to test the waters a bit too.
For example, lab research is exciting, but for each three-hour experiment you might spend a week preparing: doing risk analyses, ordering materials, running QA checks, requesting budget, booking lab space. And you may end up in a lab where you need to spend 15 minutes every time you enter or exit just putting on and taking off protective clothing. Some people thrive in this environment; they love the details and the precision and the perfectionism of getting everything just right.
Likewise, doing literature research and learning about the leading edge of the field is fascinating. But are you sure you're the kind of person who will look forward to having a 40-page document in highly concise, technical language to read every morning and every afternoon?
Being involved in policy-setting feels like an incredibly important role - and it is. But it also requires keeping your own ego and opinions very much in check, pushing enthusiastically for incremental gains, compromising strategically, accepting things that you don't like, listening respectfully to opinions you find repulsive, and so on. (For example: imagine you're negotiating with China about cutting methane emissions and they say, "We can agree to your proposal, but in return we want you to endorse our one-China policy on Taiwan".)
The people who are good at this will talk about the times they succeeded, but you need to be aware of how much effort and how many failed attempts lie behind each success. It requires huge reserves of resilience and grit. It is not for everyone.
Really like this post. Simple, clear and very provocative. It would be great to see it shared more widely.
If we could get people to ask themselves "which camp do I belong to?" and then to act accordingly ...
Most of us look back at history and assume we would have been the exceptions - the people opposing slavery, protecting Jews, giving to the poor, etc. But the reality of our actions today (my own absolutely included) belies this for most of us.
Your post is a timely reminder for us to ask ourselves some questions.
There is a corporate motto - "10% of decisions need to be right; 90% of decisions just need to be taken!" - which resonates perfectly with this post.
To put this in an EA context - if you're unsure which of two initiatives to work on, that probably means that (to the best of your available knowledge) they are likely to have similar impacts. So, in the grand scheme of things, it probably doesn't matter which you choose. But the time you spend deciding is time that you are NOT dedicating to either of the initiatives.
That said, this is only a rule of thumb, and you need to be wary of exceptions. There are those 10% of cases where your decision matters a lot. In my case, as a chemical engineer, decisions about safety would typically be in that 10%. In an EA context, the 10% might be decisions where you really are not sure whether a particular initiative is doing more harm than good.
So how do you decide whether you can already take the decision?
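One rough way to frame it - my own back-of-envelope sketch, not anything from the post, and all the symbols are my own assumed notation - is to compare the expected gain from deliberating longer with the impact of the time that deliberation consumes:

```latex
% Back-of-envelope decision rule (all symbols are my own assumed notation):
%   \Delta = best estimate of the impact difference between the two options,
%   p      = probability that further deliberation actually changes your choice,
%   t      = extra time the deliberation would take,
%   r      = impact per unit time of simply working on either option.
%
% Keep deliberating only if the expected gain outweighs the cost of the delay:
\[
  p \cdot \Delta \;>\; r \cdot t .
\]
% When the options look similar (small \Delta) or a change of mind is unlikely
% (small p), the right-hand side wins and the best move is to pick one and start.
```

Safety-critical decisions are exactly the cases where the impact difference can be enormous, which is why they belong in the 10%.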
"for superhuman rogue AIs to be catastrophic for humanity, they need to not only be catastrophic for 2023_Humanity but also for humanity even after we also have the assistance of superhuman or near-superhuman AIs."
This is a very interesting argument, and definitely worthy of discussion. I realise you have only sketched your argument here, so I won't try to poke holes in it.
Briefly, I see two objections that need to be addressed:
1. One fear is that rogue AIs may well be unleashed on 2023_Humanity, or a version very close to it, because of the exponential capability growth we could see if we create an AI that is able to develop better AI itself. Net, it may be enough that it would be catastrophic for 2023_Humanity.
2. The challenge of developing aligned superhuman AIs which would defend us against rogue AIs while posing no threat themselves is not trivial, and I'm not sure how many major labs are working on that right now, or whether they could even write a clear problem statement for what such an AI system should be.
From first principles, the concern is that such an AI would necessarily be more constrained (it needs to be aligned and safe) than a potential rogue AI, so why should we believe we could develop it faster and keep it ahead of potential rogue AIs?
Far from disagreeing with your comment, I'm just thinking about how it would work and what tangible steps need to be taken to create the kind of well-aligned AIs which could protect humanity.