One of my favorite questions to ask longtermists is: “where do your inside views differ most from your outside views – i.e. what are your hot takes?”

This question has a few benefits:

  1. It tends to extract the information that the person thinks is most valuable.
  2. It can get people to realize that a popular idea isn’t sitting right with them and think critically about why that might be.
  3. It counteracts ‘double updating.’ Let’s say that ‘Bob’ comes up with an idea and tells it to ‘Alice’ and ‘John’ on separate occasions. Alice might repeat the idea to John, and he might think that Alice and Bob came up with this idea independently and make a larger update in favor of it than he should. Propagating inside views helps to counteract this echo.

I asked a lot of people for their hot takes during EAGx Boston. Surprisingly, people often had similar answers, so some of these ideas are probably not hot takes. Here is a list of the answers that stand out in my memory. I don’t fully agree with all of these opinions, but I think they are all helpful perspectives.
 

Field builders should be thinking longer term

People might be thinking ‘what is the best project I can finish in the next few months?’ when they should be thinking more along the lines of ‘what sort of organizations will we need in the next couple of decades, and how can I start building them?’

 

Teams should be more permanent

Teams are sometimes formed around a field-building project and disband after the project is finished. It might be better if people focus on forming really good teams and continue working in them, allowing the members to learn how to work together more efficiently over time.

 

AI governance theories of change are often weak

A common AI governance plan is to get a position in government or a think tank and ‘hope you are in the right place at the right time to influence policy when it matters.’ Maybe this is a good thing to do if you are exclusively passionate about these kinds of careers, but if you are optimizing for impact, doing AI governance research or field-building might be a superior alternative.
 

The standard University EA group pipeline does not make sense

Typically, students are exposed to Effective Altruism, then to longtermism, and then to some specific cause area like AI safety. We probably lose a lot of people at each of these steps. It may be more efficient to get people into AI safety directly, since EA and longtermism are not necessary prerequisites for being concerned about it.

There should be a variety of paths to AI safety, but the primary ones should be the most direct.


 


Eliminate required readings from University fellowships

Readings are good, but most people won’t do them before fellowship meetings. Instead, make meetings longer and spend the first hour reading through the material. A similar approach seems to be working well for the AI Safety group at Harvard.

 

Retreats are better than fellowships

Empirically, retreats tend to get people more engaged than fellowships do. Fellowships could be a good filtering mechanism to figure out who should be invited to retreats, but maybe there are more efficient filtering mechanisms we can use – e.g. a speaking event where people can sign up for a one-on-one afterward. 

The idea that fellowships are far from ideal is not new.

 

Community builders should have good models of direct work

If you are trying to do AI safety field building, it is important to know what kind of people you want to recruit. Having an understanding of the work being done in the field is very helpful for this. This means that if you are a field-builder, you should frequently read up on the object-level work being done and talk to the people doing it.

Feel free to add your own hot takes in the comments.

Comments

Thanks for the Harvard AI Safety Team shout-out! I do think in person reading is great, because it (1) creates a super low barrier to showing up, and (2) feels good/productive to be in a room with everyone silently reading. Two points on this:

  1. We usually read for much more than 30 minutes. Our meetings are 2 hours (5:30-7:30), and often over half is silently reading (usually alternating with discussion/lecture).
  2. Many people (myself included) prefer reading physical paper (shame!). I usually print out the readings (and I've given out binders). I think there are some people who learn better reading on paper, but wouldn't be bothered to actually print things out.

This is a cool idea! It feels so much easier to me to get myself started reading a challenging text if there's a specified time and place with other people doing the same, especially if I know we can discuss right after. 

I do think it may be important to make sure new people know what to expect in the beginning, since the silent reading could be a bit weird if someone isn't expecting it. Also, if it really is just silent reading, people who have read the material in advance should be made aware, so they don't show up and have to wait on everyone else; it should also be acceptable for them to show up 30 minutes in, after the silent reading finishes.

Worth noting that (1) the AST is for people already planning to go into alignment after graduating (and isn't an intro program), and (2) I usually have backups prepared in case people have already read the thing (I don't think showing up 30 minutes in would be great!).

Got it! I edited the point about in-person reading so that it provides a more accurate portrayal of what you all are doing.

Great post, Joshua! I mostly second all of these points.

I'd add another hot take:

The returns of both fellowships and retreats mostly track one variable: the time participants spend in small (e.g. one-on-one) interactions with highly engaged EAs. Retreats are good mostly because they're a very efficient way to get a lot of this interaction in a short period of time. More on this here.

Especially regarding points like "retreats > fellowships" and "consider going straight for AI risk before EA/longtermism": I'd be interested to see some kind of virtual event where community organizers and some other participants lay out some of their views on community-building strategies, etc. in a collaborative space (e.g., a Kialo argument map, an epistemic map, or a plain ol' Google Doc), identify where people disagree and which disagreements are most consequential (and maybe also which disagreements seem most tractable), and then share supporting/contrasting arguments and clarifying comments/questions. (Not sure what this event could be called; a "claimathon," perhaps?)

Personally, I think Kialo's multi-thesis discussion could be a great platform for something like this, but I'd be interested to hear if anyone has other suggestions.

(Assuming something like this doesn't already exist and I am just not aware of it)

This could be helpful. Maybe posting questions on the EA Forum and allowing the debate to happen in the comments could be a good format for this.

The problem with using forums/comment chains is that the debate can become difficult to navigate and contribute to, due to the linear presentation of nested and parallel arguments. 2-dimensional/tree formats like Kialo seem to handle the problem much more efficiently, in my experience.