Head of property and interim head of staff support @ Effective Ventures Foundation UK
342 karma · Joined Oct 2019 · Working (0-5 years)



I am working at EV as the head of property and interim head of staff support, managing EV's office projects and its staff support team. I was formerly the office manager of Trajan House, an EA Hub in Oxford. I studied Philosophy and Economics in Bayreuth, Germany, and was one of the core organisers of the EA group in Bayreuth for about four years. I love beach volleyball, bouldering, and vegan cooking.

Please reach out if you want to chat about operations, office management, EA Hubs, diversity, and community building strategy.


FWIW, I also "walked into an argument half-way through", and for me, the section "What do EA vegan advocates need to do?" was very useful for getting a better sense of what exactly you were arguing for. You could consider putting a TL;DR version of it at the beginning of the article.

Hey Elizabeth,
I just wanted to thank you for this post. I think it addresses a very important issue. It's really costly to write long, thorough articles like this one that contribute to our epistemic commons, and I am very grateful that you invested the time and effort! 

I can see that this does not feel great from a nepotism angle. However, as Weaver mentions, the initial application is only a very rough pre-screening, and at that stage a recommendation might tip the scales (and that might be fine).

Reasons why this is not a problem:

First, expanding on Weaver's argument:

I think that the shorthand of "this person vouches for this other person" is a good enough basis for a lot of pre-screening criteria. Not that it makes the person a shoo-in for the job, but it's enough that you can go on a referral.

If the application process is similar to other jobs in the EA world, it will probably involve 2-4 work trials, 1-2 interviews, and potentially an on-site work trial before the final offer is made. The reference may get an applicant over the hurdle of the first written application, but it won't be a factor in the evaluation of the work trials and interviews. So it really does not influence their chances too much.

Secondly, speaking of how I update on referrals: I don't think most referrals are super strong endorsements by the person referring, and one should not update on them too much. That is, most referrals are not of the type "I have thought about this for a couple of hours, worked with the person a lot in the last year, and think they will be an excellent fit for this role", but rather "I had a chat with this person, or I know them from somewhere, and thought they might have a 5-10% chance of getting the job, so I recommended they apply".

Other reasons why this could be bad:
1. The hiring manager might be slightly biased and keep them in the process longer than they ought to (However, I do not think this would be enough to turn a "not above the bar for hiring" person into the "top three candidate" person). Note that this is also bad for the applicant as they will spend more time on the application process than they should.

2. The applicant might rely too much on the name they put down and half-ass the rest of the application. If the hiring manager does not know the reference, they might then be rejected, even though their non-half-assed application would have been good.

I am confused about what your claims are, exactly (or what you’re trying to say). 

One interpretation, which makes sense to me, is the following:

“Starting an AI safety lab is really hard and we should have a lot of appreciation for people who are doing it. We should also cut them some more slack when they make mistakes because it is really hard and some of the things they are trying to do have never been done before.” (This isn’t a direct quote)

I really like and appreciate this point. Speaking for myself, I too often fall into the trap of criticising someone for doing something imperfectly while failing to 1. appreciate that they tried at all and that it was potentially really hard, and 2. criticise all the people who didn't do anything and chose the safe route. There is a good post about this: Invisible impact loss (and why we can be too error-averse).

In addition, I think it could be a valid point to say that we should be more understanding if e.g. the research agendas of AIS labs are/were off in the past as this is a problem that no one really knows how to solve and that is just very hard. I don’t really feel qualified to comment on that.  


Your post could also be claiming something else:

“We should not criticise / should have a very high bar for criticizing AI safety labs and their founders (especially not if you yourself have not started an AIS lab). They are doing something that no one else has done before, and when they make mistakes, that is way understandable because they don’t have anyone to learn from.” (This isn’t a direct quote)

For instance, you seem to claim that the reference class of people who can advise people working on AI safety is some group whose size is the number of AI safety labs multiplied by 3. (This is what I understand your point to be if I look at the passage that starts with “Some new organizations are very similar to existing organizations. The founders of the new org can go look at all the previous closeby examples, learn from them, copy their playbook and avoid their mistakes.” and ends in “That is the roughly the number of people who are not the subject of this post.”)

If this is what you want to say, I think the message is wrong in important ways. In brief: 

  1. I agree that when people work on hard and important things, we should appreciate them, but I disagree that we should avoid criticism of work like this. Criticism is important precisely when the work matters. Criticism is important when the problems are strange and people are probably making mistakes. 
  2. The strong version of “they’re doing something that no one else has done before … they don’t have anyone to learn from” seems to take a very narrow reference class for a broad set of ways to learn from people. You can learn from people who aren’t doing the exact thing that you’re doing.


1. A claim like: “We should not criticise / should have a very high bar for criticizing AI safety labs / their founders (especially not if you yourself have not started an AIS lab).”

As stated above, I think it is important to appreciate people for trying at all, and it’s useful to notice that work not getting done is a loss. That being said, criticism is still useful. People are making mistakes that others can notice. Some organizations are less promising than others, and it’s useful to make those distinctions so that we know which to work in or donate to. 

In a healthy EA/LT/AIS community, I want people to criticise other organisations, even if what they are doing is very hard and has never been done before. E.g. you could make the case that what OP, GiveWell, and ACE are doing has never been done before (although it is slightly unclear to me what exactly “doing something that has never been done before” means), and I don’t think anyone would say that those organisations should be beyond criticism. 

This ties nicely into the second point I think is wrong: 

2. A claim like: “they’re doing something that no one else has done before … they don’t have anyone to learn from”

A quote from your post:

The founders of the new org can go look at all the previous closeby examples, learn from them, copy their playbook and avoid their mistakes.  If your org is shaped like a Y-combinator company, you can spend dozens of hours absorbing high-quality, expert-crafted content which has been tested and tweaked and improved over hundreds of companies and more than a decade. You can do a 15 minute interview to go work next to a bunch of the best people who are also building your type of org, and learn by looking over their shoulder and troubleshooting together. You get to talk to a bunch of people who have actually succeeded  building an org-like-yours.  … How does this look for AI safety? … Apply these updates to our starting reference class success rate of ONE. IN. TWENTY. Now count the AI safety labs. Multiply by ~3.  

A point I think you’re making:  

“They are doing something that no one else has done before [build a successful AI safety lab], and therefore, if they make mistakes, that is way understandable because they don’t have anyone to learn from.”

It is true that the closer your organisation is to an already existing org/cluster of orgs, the more you will be able to copy. But just because you're working on something new that no one has worked on (or your work is different in other important aspects), it doesn't mean that you cannot learn from other organisations, their successes and failures. For things like having a healthy work culture, talent retention, and good governance structures, there are examples in the world that even AIS labs can learn from.

I don’t understand the research side of things well enough to comment on whether/how much AIS labs could learn from e.g. academic research or for-profit research labs working on problems different from AIS. 


[Disclaimer, I have very little context on this & might miss something obvious and important]

In discussions of this post (the content of which I can’t predict or control), I’d ask that you just refer to me as Cathleen, to minimize the googleable footprint. And I would also ask that, as I’ve done here, you refrain from naming others whose identities are not already tied up in all this.

As there is some confusion on this point, it is important to be clear. The central complaint in the Twitter thread is that *5 days* after Cathleen’s post, the poster edited their comment to add the names of Leverage and Paradigm employees back to the comment, including Cathleen’s last name. This violates Cathleen’s request.

AFAICT the disagreement between Kerry and Ben stems from interpreting the second part of Cathleen's ask differently. There seem to be two ways of reading the second part:
1. Asking people to refrain from naming others who are not already tied into this in general.
2. Asking people to refrain from naming others who are not already tied into this in discussions of her post.

To me, it seems pretty clear that she means the latter, given the structure of the two sentences. If she were aiming for the first interpretation, she would have needed a qualifier like "in general" in the second sentence. In the current formulation, the "And" at the beginning of the second sentence ties it very clearly to the first ask/sentence.
I guess this can be up for debate, and one could interpret it differently, but I would certainly not fault anyone for going with interpretation 2.[1]

If we assume that 2 is the correct reading, Kerry's claim (cited above) no longer seems relevant, and Ben's original remark (cited below) seems correct. The timeline of edits doesn't change things. Ben's original remark (emphasis mine):

The comment in question doesn’t refer to the former staff member’s post at all, and was originally written more than a year before the post. So we do not view this comment as disregarding someone’s request for privacy.

[1] Even if she meant interpretation 1, it is unclear to me that this is a request I would endorse other people enforcing. Her request under interpretation 2 seems reasonable, in part because it seems like an attempt to avoid people using her post in a way she doesn't endorse. A general "don't associate others with this organisation" would be a much bigger ask. I would not endorse other organisations asking the public not to connect their employees to them (e.g. imagine a GiveWell employee making the generic ask not to name other employees/collaborators in posts about GiveWell), and the Forum team enforcing that.

You can indicate uncertainty in the form, so feel free to fill it out and state your probability :)

Nope :D
Thanks for pointing that out!

I agree that this is an important thing to keep in mind. Introductory events (talks, fellowships, etc.) in particular should be offered in German (or at least with a German option, i.e. one fellowship group that runs in German).

Very strong upvote. Thanks for this comment, Simon.

(Meta: I am afraid that I am strawmanning your position because I do not understand it correctly, so please let me know if that is the case.)

Personally, I am a pretty strong believer that the unique thinking style of effective altruism has been essential for its success so far, and that this thinking style is very closely related to certain skills & virtues common in STEM fields.  So I am skeptical that there is much substance behind claims #1 or #2 in general.  

I agree with you that it seems plausible that the unique thinking style of EA has been essential to a lot of the successes achieved by EA, and that this style is closely related to STEM fields.

  1. The "core" thinking tools of EA need to be improved by an infusion of humanities-ish thinking.  Right now, the thinking style of EA is on the whole too STEM-ish, and this impairment is preventing EA from achieving its fundamental mission of doing the most good.

But it is unclear to me why this should imply that #1 is wrong. EA wants to achieve the massive goal of doing the most good, which makes it very important to get a highly accurate map of the territory we are operating in. Given that, it is a very strong claim to be confident that the "core" thinking tools we have used so far are the best we could be using, and that we do not need to look at the tools other fields are using before deciding that ours are actually the best. This is especially true since a number of academic disciplines are barely represented in EA. Most EA ideas and thinking tools come from Western analytic philosophy and STEM research. That does not mean they are wrong - they could all turn out to be correct - but they encompass only a small portion of all the knowledge out there. I dare you to chat with a philosopher who researches non-Western epistemology - your mind will be blown by how different it is.

More generally: The fact that it is sometimes hard to understand people from very different fields is why it is so incredibly important and valuable to try to get those people into EA. They usually view the world through a very different lens and can check whether they see an aspect of the territory we do not see that we should incorporate into EA. 

I am afraid that we are so confident in the tools we have that we do not spend enough time trying to understand how other fields think and therefore miss out on an important part of reality. 

To be clear: I think that a big chunk of what makes EA special is related to STEM style reasoning and we should probably try hard to hold onto it. 

2. The "core" thinking tools of EA are great and don't need to change, but STEM style is only weakly correlated with those core thinking tools.  We're letting great potential EAs slip through the cracks because we're stereotyping too hard on an easily-observed surface variable, thus getting lots of false positives and false negatives when we try to detect who really has the potential to be great at the "core" skills.  STEM style is more like an incidental cultural difference than a reliable indicator of "core" EA mindset.

Small thing: It is unclear to me whether we get a lot of false positives + this was also not the claim of the post if I understand it correctly. 

