Jonathan_Michel

Head of Property and Head of Staff Support @ Effective Ventures Foundation UK
498 karma · Joined Oct 2019 · Working (0-5 years)

Bio


I work at EV as the Head of Property and Head of Staff Support, managing EV's office projects and its staff support team. I was formerly the office manager of Trajan House, an EA Hub in Oxford. I studied Philosophy and Economics in Bayreuth, Germany, and was one of the core organisers of the EA group in Bayreuth for about four years. I love beach volleyball, bouldering, and vegan cooking.

Please reach out if you want to chat about operations, office management, EA Hubs, diversity, and community building strategy.

Comments (22)

Thanks for writing this, Alix! 

I just wanted to add some data on two of your empirical claims about the prevalence of native English speakers in leadership positions:

Native English speakers are overrepresented in EA’s thought leadership

One very crude measure of this is to look at the attendees of the 2023 coordination forum. 
AFAICT from the publicly available LinkedIn profiles of the 31 attendees: 22 are native English speakers and 7 are not (I am unsure about two). Hence ~22% are non-native English speakers.
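
For transparency, here is the arithmetic behind that figure (a quick check on my part, assuming the two uncertain cases are counted in the denominator):

$$\frac{7}{22 + 7 + 2} = \frac{7}{31} \approx 22.6\%$$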

The monopoly on funding does not facilitate this – my impression is that it is especially true in community building. Indeed, homogeneity in culture and ways of thinking might strengthen itself with grantmakers selecting projects and people who are closer to them [...]

The three funders I looked up are OP, CEA, and EA Funds. The share of non-native English speakers is slightly higher than among the coordination forum attendees (7 out of 23 people, ~30%; see the tally after the list):

  1. On the OP CB team ("GLOBAL CATASTROPHIC RISKS CAPACITY BUILDING"), four out of six people are native English speakers;
  2. On the CEA Groups team, three out of six people are native English speakers;
  3. At EA Funds, four out of four grant managers are native English speakers, and five out of seven grant advisors are native English speakers;
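
Spelling out the tally behind the ~30% figure (my own count from the list above, with non-native speakers in the numerator):

$$\frac{(6-4) + (6-3) + (4-4) + (7-5)}{6 + 6 + 4 + 7} = \frac{7}{23} \approx 30\%$$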

[I have not read the whole post and might be missing something]

Yeah, I also felt confused/uneasy about this section. A numbered list that only contains items like the following doesn't feel like a strong piece of evidence:

  1. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  2. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  3. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  4. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  5. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  6. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  7. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  8. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  9. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  10. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  11. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  12. Alice accused [Person] of [abusing/persecuting/oppressing her] 
  13. Alice accused [Person] of [abusing/persecuting/oppressing her] 

This feels especially true since our default assumption should probably be that cases like this are rarely straightforward, and each bullet point probably deserves a lot of nuanced discussion of the situation. 
That being said, I am not sure how they could provide evidence for these claims without deanonymizing Alice, which leaves us in an unhappy place: if these claims were true, that would be relevant information to have. 

I'd be keen to hear ideas of how we could see more evidence for these claims. 

One obvious option would be having a trustworthy third party review those claims, e.g. the community health team. But there are a lot of practical difficulties with this solution. 

FWIW, I also "walked into an argument half-way through", and for me, the section "What do EA vegan advocates need to do?" was very useful for getting a better sense of what exactly you were arguing for. You could consider putting a TL;DR version of it at the beginning of the article.

Hey Elizabeth,
I just wanted to thank you for this post. I think it addresses a very important issue. It's really costly to write long, thorough articles like this one that contribute to our epistemic commons, and I am very grateful that you invested the time and effort! 
 

I can see that this does not feel great from a nepotism angle. However, as Weaver mentions, the initial application is only a very rough pre-screening, and at that stage a recommendation might tip the scales (and that might be fine).

Reasons why this is not a problem:

First, expanding on Weaver's argument:

I think that the short hand of "this person vouches for this other person" is a good enough basis for a lot of pre-screening criteria. Not that it makes the person a shoe in for the job, but it's enough to say that you can go by on a referral. 


If the application process is similar to other jobs in the EA world, it will probably involve 2-4 work trials, 1-2 interviews, and potentially an on-site work trial before the final offer is made. A reference may get an applicant over the hurdle of the first written application, but it won't be a factor in the evaluation of the work trials and interviews. So it really does not influence their chances too much.

Secondly, speaking of how I update on referrals: I don't think most referrals are super strong endorsements by the person referring, and one should not update on them too much. I.e. most referrals are not of the type "I have thought about this for a couple of hours, worked with the person a lot in the last year, and think they will be an excellent fit for this role", but rather "I had a chat with this person, or I know them from somewhere and thought they might have a 5%-10% chance of getting the job so I recommended they apply". 

That said, some reasons why this could still be bad:
1. The hiring manager might be slightly biased and keep them in the process longer than they ought to (however, I do not think this would be enough to turn a "not above the bar for hiring" applicant into a top-three candidate). Note that this is also bad for the applicant, as they will spend more time on the application process than they should.

2. The applicant might rely too much on the name they put down and half-ass the rest of the application; if the hiring manager does not know the reference, they might then be rejected even though a full-effort application would have been good enough. 

I am confused about what your claims are, exactly (or what you’re trying to say). 

One interpretation, which makes sense to me, is the following

“Starting an AI safety lab is really hard and we should have a lot of appreciation for people who are doing it. We should also cut them some more slack when they make mistakes because it is really hard and some of the things they are trying to do have never been done before.” (This isn’t a direct quote)

I really like and appreciate this point. Speaking for myself, I too often fall into the trap of criticising someone for doing something imperfectly while failing to 1. appreciate that they tried at all and that it was potentially really hard, and 2. criticise all the people who didn't do anything and chose the safe route. There is a good post about this: Invisible impact loss (and why we can be too error-averse).

In addition, I think it could be a valid point that we should be more understanding if, e.g., the research agendas of AIS labs were off in the past, as this is a problem that no one really knows how to solve and that is just very hard. I don't really feel qualified to comment on that.  

 

Your post could also be claiming something else:

“We should not criticise / should have a very high bar for criticizing AI safety labs and their founders (especially not if you yourself have not started an AIS lab). They are doing something that no one else has done before, and when they make mistakes, that is way understandable because they don’t have anyone to learn from.” (This isn’t a direct quote)

For instance, you seem to claim that the reference class of people who can advise people working on AI safety is some group whose size is the number of AI safety labs multiplied by 3. (This is what I understand your point to be if I look at the passage that starts with “Some new organizations are very similar to existing organizations. The founders of the new org can go look at all the previous closeby examples, learn from them, copy their playbook and avoid their mistakes.” and ends in “That is the roughly the number of people who are not the subject of this post.”)

If this is what you want to say, I think the message is wrong in important ways. In brief: 

  1. I agree that when people work on hard and important things, we should appreciate them, but I disagree that we should avoid criticism of work like this. Criticism is important precisely when the work matters. Criticism is important when the problems are strange and people are probably making mistakes. 
  2. The strong version of “they’re doing something that no one else has done before … they don’t have anyone to learn from” seems to take a very narrow reference class for a broad set of ways to learn from people. You can learn from people who aren’t doing the exact thing that you’re doing.

 

1. A claim like: “We should not criticise / should have a very high bar for criticizing AI safety labs / their founders (especially not if you yourself have not started an AIS lab).”

As stated above, I think it is important to appreciate people for trying at all, and it’s useful to notice that work not getting done is a loss. That being said, criticism is still useful. People are making mistakes that others can notice. Some organizations are less promising than others, and it’s useful to make those distinctions so that we know which to work in or donate to. 

In a healthy EA/LT/AIS community, I want people to criticise other organisations, even if what they are doing is very hard and has never been done before. E.g. you could make the case that what OP, GiveWell, and ACE are doing has never been done before (although it is slightly unclear to me what exactly “doing something that has never been done before” means), and I don’t think anyone would say that those organisations should be beyond criticism. 

This ties nicely into the second point I think is wrong: 

2. A claim like: “they’re doing something that no one else has done before … they don’t have anyone to learn from”

A quote from your post:

The founders of the new org can go look at all the previous closeby examples, learn from them, copy their playbook and avoid their mistakes.  If your org is shaped like a Y-combinator company, you can spend dozens of hours absorbing high-quality, expert-crafted content which has been tested and tweaked and improved over hundreds of companies and more than a decade. You can do a 15 minute interview to go work next to a bunch of the best people who are also building your type of org, and learn by looking over their shoulder and troubleshooting together. You get to talk to a bunch of people who have actually succeeded  building an org-like-yours.  … How does this look for AI safety? … Apply these updates to our starting reference class success rate of ONE. IN. TWENTY. Now count the AI safety labs. Multiply by ~3.  


A point I think you’re making:  

“They are doing something that no one else has done before [build a successful AI safety lab], and therefore, if they make mistakes, that is way understandable because they don’t have anyone to learn from.”

It is true that the closer your organisation is to an already existing org/cluster of orgs, the more you will be able to copy. But just because you're working on something new that no one has worked on (or your work differs in other important ways), it doesn't mean that you cannot learn from other organisations and their successes and failures. For things like having a healthy work culture, talent retention, and good governance structures, there are examples in the world that even AIS labs can learn from. 

I don’t understand the research side of things well enough to comment on whether/how much AIS labs could learn from e.g. academic research or for-profit research labs working on problems different from AIS. 


 

[Disclaimer, I have very little context on this & might miss something obvious and important]

In discussions of this post (the content of which I can’t predict or control), I’d ask that you just refer to me as Cathleen, to minimize the googleable footprint. And I would also ask that, as I’ve done here, you refrain from naming others whose identities are not already tied up in all this.

As there is some confusion on this point, it is important to be clear. The central complaint in the Twitter thread is that *5 days* after Cathleen’s post, the poster edited their comment to add the names of Leverage and Paradigm employees back to the comment, including Cathleen’s last name. This violates Cathleen’s request.

AFAICT the disagreement between Kerry and Ben stems from interpreting the second part of Cathleen's ask differently. There seem to be two ways of reading the second part:
1. Asking people to refrain from naming others who are not already tied into this in general.
2. Asking people to refrain from naming others who are not already tied into this in discussions of her post.

To me, it seems pretty clear that she means the latter, given the structure of the two sentences. If she were aiming for the first interpretation, I think she should have used a qualifier like "in general" in the second sentence. In the current formulation, the "And" at the beginning of the second sentence connects it very clearly to the first ask. 
I guess this can be up for debate, and one could interpret it differently, but I would certainly not fault anyone for going with interpretation 2.[1]

If we assume that 2 is the correct reading, Kerry's claim (cited above) no longer seems relevant, and Ben's original remark (cited below) seems correct; the timeline of edits doesn't change things. Ben's original remark (emphasis mine): 

The comment in question doesn’t refer to the former staff member’s post at all, and was originally written more than a year before the post. So we do not view this comment as disregarding someone’s request for privacy.

[1] Even if she meant interpretation 1, it is unclear to me that this would be a request I would endorse other people enforcing. Her request in interpretation 2 seems reasonable, in part because it seems like an attempt to avoid people using her post in a way she doesn't endorse. A general "don't associate others with this organisation" would be a much bigger ask. I would not endorse other organisations asking the public not to connect their employees to them (e.g. imagine a GiveWell employee making the generic ask not to name other employees/collaborators in posts about GiveWell), and the Forum team enforcing that.

You can indicate uncertainty in the form, so feel free to fill it out and state your probability :)

Nope :D
Thanks for pointing that out!

I agree that this is an important thing to keep in mind. Introductory events (talks, fellowships, etc.) in particular should be offered in German (or at least with a German option, i.e. at least one fellowship group held in German).
