One of my favorite questions to ask longtermists is: “where do your inside views differ most from your outside views – i.e. what are your hot takes?”
This question has a few benefits:
- It tends to extract the information that the person thinks is most valuable.
- It can get people to realize that a popular idea isn’t sitting right with them, and to think critically about why that might be.
- It counteracts ‘double updating.’ Let’s say that ‘Bob’ comes up with an idea and tells it to ‘Alice’ and ‘John’ on separate occasions. Alice might repeat the idea to John, and he might think that Alice and Bob came up with this idea independently and make a larger update in favor of it than he should. Propagating inside views helps to counteract this echo.
I asked a lot of people for their hot takes during EAGx Boston. Surprisingly, people often had similar answers, so some of these ideas are probably not hot takes. Here is a list of the answers that stick out in my memory. I don’t fully agree with all of these opinions, but I think they are all helpful perspectives.
Field builders should be thinking more long term
People might be thinking ‘what is the best project I can finish in the next few months?’ when they should be thinking more along the lines of ‘what sort of organizations will we need in the next couple of decades and how can I start building them?’
Teams should be more permanent
Teams are sometimes formed around a field-building project and disassemble after the project is finished. It might be better if people focused on forming really good teams and continued to work within them, allowing the members to learn how to work together more efficiently over time.
AI governance theories of change are often weak
A common AI governance plan is to get a position in government or a think tank and ‘hope you are in the right place at the right time to influence policy when it matters.’ Maybe this is a good thing to do if you are exclusively passionate about these kinds of careers, but if you are optimizing for impact, doing AI governance research or field-building might be a superior alternative.
The standard University EA group pipeline does not make sense
Typically, students are exposed to Effective Altruism, then longtermism, and then to some specific cause area like AI safety. We probably lose a lot of people at each of these steps. It may be more efficient to get people into AI safety directly, since neither EA nor longtermism is a necessary prerequisite for being concerned about it.
There should be a variety of paths to AI safety, but the primary ones should be the most direct.
Eliminate required readings from University fellowships
Readings are good, but most people won’t do them before fellowship meetings. Instead, make meetings longer and spend the first hour reading through the material. A similar approach seems to be working well for the AI Safety group at Harvard.
Retreats are better than fellowships
Empirically, retreats tend to get people more engaged than fellowships do. Fellowships could be a good filtering mechanism to figure out who should be invited to retreats, but maybe there are more efficient filtering mechanisms we can use – e.g. a speaking event where people can sign up for a one-on-one afterward.
The idea that fellowships are far from ideal is not new.
Community builders should have good models of direct work
If you are trying to do AI safety field building, it is important to know what kind of people you want to recruit. Having an understanding of the work being done in the field is very helpful for this. This means that if you are a field-builder, you should frequently read up on the object-level work being done and talk to the people doing it.
Feel free to add your own hot takes in the comments.