Luise

Pursuing an undergraduate degree
Working (0-5 years experience)

Bio

I'm a community builder interested in alignment field-building and in helping other community builders be more impactful. In the past, I've been the community manager for the SERI summer research fellowship and the ML Alignment Theory Scholars Program, run the Longtermist Organizer Summit 2022, and helped organize EA Edinburgh and EA Cambridge.

Comments (20)

Ah, the thing about fragile cooperative equilibria makes sense to me.

I'm not as sure as you that this shift would happen to core EA though. I could also imagine current EAs having a very allergic reaction to new, unaligned people coming in and trying to take advantage of EA resources. I imagine something like a counterculture forming, where aligned EAs purposefully set themselves apart from people who're only in it for a piece of the pie by putting even more emphasis on high EA alignment. I believe I've already seen small versions of this happen in response to non-altruistic incentives appearing in EA.

The faster the flood of new people and the change of incentives happen, the more confident I am in this view. Overall though, I'm not very confident in it at all.

On your last point: if I understand you right, this isn't the thing you're most worried about though? That is, in your view, these people hijacking EA aren't the mechanism by which EA may collapse?

It's unclear to me whether you are saying that the potentially huge number of new people in EA will try to take advantage of EA resources for personal gain or that WE, who are currently in EA for altruistic reasons, will do so. The former sounds likely to me, the latter doesn't.


I might be missing crucial context here since I'm not familiar with the Thielosphere and all that, but overall I also don't think a huge number of new, unaligned people would be the downfall of EA. As long as leadership, thought leaders, and grantmakers in EA stay aligned, it may be harder for them to determine whom to give that grant (or that stamp of approval) to, but wouldn't that simply lead to fewer grants? Which seems bad, but not like the end?


Or are you imagining highly intelligent people with impressive resumes who strategically aim to hijack EA resources for their own aims and get into important positions in EA?

Thanks for this comment. Any tips for how to get disordered breathing diagnosed reliably?

If effective altruists' messages are hacked, taken out of context, and publicly revealed, it could substantially and even permanently harm the movement. Consider the example of John Podesta, chair of Hillary Clinton's 2016 presidential campaign. Many of his emails, including those that made Clinton and her campaign look bad, were obtained by hackers in a data breach and published on WikiLeaks.


How likely is it that someone would target the EA movement by hacking messages and taking them out of context?

I agree with you: being "a highly cool and well networked EA" and "do things which need to be done" are different goals. This post is heavily influenced by my experience as a new community builder and my perception that, in this situation, the two are pretty similar. If I weren't so sociable and network-y, I'd probably still be running my EA reading group with ~6 participants, which is nice but not "doing things which need to be done". For technical alignment researchers, this is probably less the case, though still much more so than I would've expected.

Hi Claire,

What are your thoughts on "going one meta-level up" and trying to build the meta space? Specifically, creating opportunities like UGAP or the GCP internships, or running organisers' summits, to get more and better community builders? I'm unsure, but I thought this might be at odds with some of the points you raised, e.g. that we might neglect object-level work and its community-building effect. I'd love to hear your thoughts!

"So when I entered university I was probably capable of doing 0.5 hours per day on average."

Hahah, I feel this.

(I'm an organiser at EA Edinburgh and from Germany.)

Yes, your point about the social culture at German universities seems crucial. The lack of an extensive extracurricular life in and around the university should lead to smaller EA groups (because people aren't looking for student groups, organisers are less enthusiastic, knowledge about how to build such groups is lacking, ...).

In terms of action plans, I think an important component is getting EA group organisers excited and ambitious. Communication between large, vibrant EA groups and German groups would be good for this: show them what is possible. And then we probably need upskilling.

Apart from that, German groups probably need the same things as other groups, which I think is mainly, again, excited organisers who are willing to put in time. In case this is interesting to you: I'm thinking about running a “bootcamp” in the UK for new organisers, fellowship facilitators, etc. to get them excited about organising. (Approaches similar to this seem to have an amazing track record.) Would be happy to chat about this!

Thanks! I need to ask a lot of clarifying questions:

When you say "This is because the type of centralized support CEA might provide and the type of skills/characteristics required of someone working full-time running a university group or a city/national professional network might look very different depending on the ultimate model.": (1) does "This" refer to the fact that you have two subteams working with focus locations, as opposed to everyone working on all locations? (2) If so, could I reword the explanation this sentence gives as "We need to work on focus locations to figure out the ultimate model before scaling up with that ultimate model"? Or, in even more words: "We want to hire knowing what model we are hiring for, and we want to grow CEA knowing what model we are growing it for, as soon as possible."

I'd really like to know what you mean by this!

(3) I interpret your staff capacity being limited as "we need to prioritise", and the prioritisation coming out of that as "prioritise building a model based on focus locations, then scale later". Correct?

(4) Your staff capacity being limited also suggests that hiring is a major priority. I understand CEA is hiring quite fast, but I don't have any idea how fast. Do you think you are prioritising hiring highly enough?

(5) What do you mean by "we think that this focus will enable faster scaling in the long term"? Firstly, again, which "focus" exactly is this referring to? Secondly, isn't "focussing" meant to improve the quality of scaling at the expense of its speed? Intuitively, I would say scaling is what enables faster scaling in the long term.

Maybe I can give some context from my side so we can find the crux of this quickly and make sure we're working in the same direction. I mostly see the lack of a pipeline into full-time community building in non-focus locations in stark contrast to all the extremely high-impact, low-hanging fruit in community building, and think "This can't be the best we can do". It seems imperative that we find a way to funnel talented EAs everywhere into this neglected career path. Hence my insistence on rolling things like the CBGs out in as many locations as possible.

I'm really interested in getting to the bottom of this. I hope I don't come across as intruding into CEA's decisions without any background knowledge; my interest is not to criticise CEA but to solve this problem I see! :)
