All of Luise's Comments + Replies

I found the framing of "Is this community better-informed relative to what disagreers expect?" new and useful, thank you!

To point out the obvious: your proposed policy of updating away from EA beliefs if they come in large part from priors is less applicable to the many EAs who want to condition on "EA tenets". For example, longtermism depends on being quite impartial regarding when a person lives, but many EAs would think it's fine that we were "unusual from the get-go" regarding this prior. (This is of course not very epistemically modest of them.)

Here are ... (read more)

3
trammell
10mo
Thanks! Glad to hear you found the framing new and useful, and sorry to hear you found it confusingly written.

On the point about "EA tenets": if you mean normative tenets, then yes, how much you want to update on others' views on that front might be different from how much you want to update on others' empirical beliefs. I think the natural dividing line here would be whether you consider normative tenets more like beliefs (in which case you update when you see others disagreeing--along the lines of this post, say) or more like preferences (in which case you don't). My own guess is that they're more like beliefs--i.e. we should take the fact that most people reject temporal impartiality as at least some evidence against longtermism--but thanks for noting that there's a distinction one might want to make here.

On the three bullet points: I agree with the worries on all counts! As you sort of note, these could be seen as difficulties with "implementing the policy" appropriately, rather than problems with the policy in the abstract, and that is how I see them. But I take the point that if an idea is hard enough to implement then there might not be much practically to be learned from it.

My impression is that others have thought so much less about AI x-risk than EAs and rationalists, and for generally bad reasons, that EAs/rats are the "largest and smartest" expert group basically "by default" – unfortunately with all the biases that come with that. I could be misunderstanding the situation tho.

I think this is true, and I only discovered in the last two months how attached a lot of EA/rat AI Safety people are to going ahead with creating superintelligence (even though they think the chances of extinction are high) because they want to reach the Singularity (ever, or in their lifetime). I'm not particularly transhumanist and this shocked me, since averting extinction and s-risk is obviously the overwhelming goal in my mind (not to mention the main thing these Singularitarians would talk about to others). It made me wonder if we could have sought regulatory solutions earlier and didn't because everyone was so focused on alignment or bust…

2
Guy Raveh
10mo
We've thought about it a lot, but that doesn't mean we've produced anything worthwhile. It's like saying that literal doom prophets are the best group to defer to about when the world will end, because they've spent the most time thinking about it. I think maybe about 1% of publicly available EA thought about AI isn't just science fiction. Maybe less. I'm much more worried about catastrophic AI risk than 'normal people' are, but I don't think we've made convincing arguments about how those risks will materialise, why, and how to tackle them.

Yeah, there's almost certainly some self-selection bias there. If someone thinks that talk of AI x-risk is merely bad science fiction, they will either choose not to become an EA or will go into a different cause area (and are unlikely to spend significant time thinking any more about AI x-risk or discussing their heterodox view).

For example, people in crypto have thought so much more about crypto than people like me... but I would not defer to the viewpoints of people in crypto about crypto. I would want to defer to a group of smart, ethical ... (read more)

Thanks a lot, I think it's really valuable to have your experience written up!

Luise
11mo

Thanks Max!

Sounds like a plausible theory that you lost motivation because you pushed yourself too hard. I'd also pay attention to "dumber" reasons, like maybe you had more motivation in the past from supervisors, your social environment, or more achievable goals.

Similar to my call to take a vacation, maybe it's worth it for you to only do motivating work (like a side project) for 1.5 weeks and see if the tiredness disappears.

All of this with the caveat that you understand your situation a lot better than I do ofc!

Yes! From reading about burnout, it can seem like it only happens to people who hate their job, work in bad environments, etc. But it can totally happen to people who love their job!

3
Milena Canzler
11mo
A funny/useful term for that is "burn on" - when you really like your work/hobbies/duties, but just don't give yourself a break and grind yourself to the bone as a consequence. 

Thanks, and big agree; I want to see many more different experiences of energy problems written up!

The causes of people's energy problems are so many and varied! It would be great to have many different experiences written up, including stress- and anxiety-induced problems.

Thanks for the feedback re: appendix, will see if others say the same :)

2
Vaughn Papenhausen
11mo
Pretty sure I would also benefit from reading the appendix.

Optimistic note with low confidence:

In my impression, SBF thought he was doing an 'unpalatable' but right thing given the calculations (and his epistemic immodesty). Promoting a central meme in EA like "naïve calculations like this are too dangerous and too fallible" might solve a lot of the issue. I think dangerously-optimize-y people in EA are already updating in this direction as a result of FTX. Before FTX, being "hardcore" and doing naïve calculations was seen as cool sometimes. If we correct hard for this right now, it may be less of an issue in the ... (read more)

3
Benjamin_Todd
1y
Yes, I agree that could be a good scenario to emerge from this – a very salient example of this kind of thinking going wrong is one of the most helpful things for convincing people to stop doing it.

Ah, the thing about fragile cooperative equilibria makes sense to me.

I'm not as sure as you that this shift would happen to core EA though. I could also imagine that current EAs will have a very allergic reaction to new, unaligned people coming in and trying to take advantage of EA resources. I imagine something like a counterculture forming where aligned EAs start purposefully setting themselves apart from people who're only in it for a piece of the pie, by putting even more emphasis on high EA alignment. I believe I've already seen small versions of this... (read more)

It's unclear to me whether you are saying that the potentially huge number of new people in EA will try to take advantage of EA resources for personal gain or that WE, who are currently in EA for altruistic reasons, will do so. The former sounds likely to me, the latter doesn't.


I might be missing crucial context here since I'm not familiar with the Thielosphere and all that, but overall I also don't think a huge number of new, unaligned people will be the downfall of EA. As long as leadership, thought-leaders, and grantmakers in EA stay aligned, it m... (read more)

I think cooperative equilibria are fragile. For example, as salaries have increased in EA, I've seen many people who previously took very low salaries now feel much worse about doing so, because their less-aligned colleagues are paid a lot more than them, which makes the additional sacrifice feel much worse.

Similarly, I've seen many people who really cared about honesty, who ended up being in environments where honesty was less valued, and then quickly also adopted less honest norms. 

I think EA leadership has a l... (read more)

Thanks for this comment. Any tips for how to get disordered breathing diagnosed reliably?

If effective altruists' messages are hacked, taken out of context, and publicly revealed, it could substantially and even permanently harm the movement. Consider the example of John Podesta, chair of Hillary Clinton's 2016 presidential campaign: many of his emails, including some that made Clinton and her campaign look bad, were obtained by hackers in a data breach and published on WikiLeaks.


How likely is it that someone would target the EA movement by hacking messages and taking them out of context?

Seems almost certain to happen if more EAs run for public office.

Pretty sure a non-zero number of people have tried; my guess is the real question is how competent the attacker is and how much effort they put into it.

I agree with you: being "a highly cool and well-networked EA" and "doing things which need to be done" are different goals. This post is heavily influenced by my experience as a new community builder and my perception that, in this situation, being "a highly cool and well-networked EA" and "doing things which need to be done" are pretty similar. If I wasn't so sociable and network-y, I'd probably still be running my EA reading group with ~6 participants, which is nice but not "doing things which need to be done". For technical alignment researchers, this is probably less the case, though still much more than I would've expected.

8
Alex Turner
2y
Even though these two goals may lead to similar instrumental actions (e.g. doing important work), I think they grow different motivational structures inside of you. I recently wrote:
1
Jack O'Brien
2y
I feel like that's a good argument for why hanging around the cool, smart people can be good for "skilling up". But a lot of the value of meeting cool, smart people seems to come from developing good models! And surely it's possible to build good models of, e.g., community building and AI safety by doing self-directed study and occasionally reaching out with specific questions as they arise. I think it's important to split up the value of meeting cool, smart people into A) networking and social signalling, and B) building better models. And maybe we should be focusing on B.
7
IanDavidMoss
2y
Separating out how important networking is for different kinds of roles seems valuable, not only for the people trying to climb the ladder but also for the people already on the ladder. (e.g., maybe some of these folks desperate to find good people to own valuable projects that otherwise wouldn't get done should be putting more effort into recruiting outside of the Bay.)

Hi Claire,

What are your thoughts on "going one meta-level up" and trying to build the meta space? Specifically, creating opportunities like UGAP, the GCP internships, or running organisers' summits to get more and better community builders. I'm unsure, but I thought this might be at odds with some of the points you raised, e.g., that we might neglect object-level work and its community-building effect. I'd love to hear your thoughts!

6
ClaireZabel
2y
I'm interested in and supportive of people running different experiments with meta-meta efforts, and I think they can be powerful levers for doing good. I'm pretty unsure right now if we're erring too far in the meta and meta-meta direction (potentially because people neglect the meta effects of object-level work) or should go farther, but hope to get more clarity on that down the road. 

So when I entered university I was probably capable of doing 0.5 hours per day on average.

Hahah, I feel this.

(I'm an organiser at EA Edinburgh and from Germany.)

Yes. Your point about the social culture at German universities seems crucial. The lack of an extensive extracurricular life in and around the university should lead to smaller EA groups (because people aren't looking for student groups, organisers are less enthusiastic, knowledge about how to build such groups is lacking, ...).

In terms of action plans, I think an important component is getting EA group organisers excited and ambitious. Communication between large, vibrant EA groups and German groups would be ... (read more)

Thanks! I need to ask a lot of clarifying questions:

When you say "This is because the type of centralized support CEA might provide and the type of skills/characteristics required of someone working full-time running a university group or a city/national professional network might look very different depending on the ultimate model.", (1) does "This" refer to the fact that you have 2 subteams working with focus locations as opposed to everyone working on all locations? (2) If so, could I reword the explanation the sentence gives to "We need to work on focu... (read more)

1
Joan
2y
1. Yes.
2. Correct.
3. Yes.
4. We think focusing will improve quality in the short term, which will enable more potential scale/impact in the long term.

Thanks for your questions! As mentioned before, I'm excited for others to consider full-time community building via the infrastructure fund, and hope that you and others would pursue this option if you feel well positioned. I don't think CEA has covered all the net-positive opportunities in this space — just the ones we think are the best given our view of our core competencies, staff capacity, and theory of change.

I'm running two retreats this week whilst working with Swarthmore College EA. Both retreats are along the lines of what you described as a bootcamp.

Ah, super exciting! I'll DM you.

I agree you could let someone run a social straight off. In general, I guess people are more likely to agree to run a social if they are already a fellowship facilitator (fellowship social), and more likely to agree to become a committee member if they are already organising socials. The whole idea of moving people down a funnel, etc.

To your skepticism: thanks for raising the point! It's true that if we had perfect organiser training, either locally in the groups or in one big bootcamp, it's unclear whether the bootcamp would cost fewer organiser hours. However, organisers locally often don't have the time/skills to train new organisers. So the comparison probs isn't decisive. Hope that makes sense!

2
Edward Tranter
2y
Background: I'm running two retreats this week whilst working with Swarthmore College EA. Both retreats are along the lines of what you described as a bootcamp ("where newer organisers, facilitators & similar are skilled up and gain lots of motivation from interacting with others in-person"), but for ~18 people. I think talking together about this sounds promising!

I agree with your response to casebash. How are you thinking about the intended 'quality' (broadly defined, somewhat similar to production value) of the proposed bootcamp, relative to the quality of generic EA retreats, the retreat mentioned in your post, or a larger event like Icecone? I'd love more details on this.
3
Chris Leong
2y
I guess that makes sense. I suppose organising such a bootcamp is probably one of the most useful things that national level organisers could be doing.

Thanks for asking! The pitch goes something like this:

Uni groups are constrained by their organisers' time. The typical way of getting a new organiser is to find an excited EA and to slowly give them more and more responsibilities (e.g. intro fellowship facilitator -> run a social -> committee member). This takes time and there's dropout at every stage. The observation is that organisers are usually the most motivated after a retreat/conference/... So we might be able to significantly speed up this process and reduce dropout by having a retreat-ish t... (read more)

3
Chris Leong
2y
Seems like you could let someone run a social essentially straight off, as it's pretty hard to mess up a social. That said, I agree with your core point that it's important to provide people exciting opportunities when they're most enthusiastic, and your ideas for sessions all sound really useful. I guess my main skepticism is the following: it seems like there is a lot of effort in running a retreat, and this would likely involve multiple people, so I don't see you coming out ahead here. That said, I expect you'd end up with more highly trained organizers at the end of this, both because of the increased amount of training time for each organizer and from the peer-to-peer exchange of ideas.

Agreed! In the meantime, it's definitely worth contacting a prior organiser of an organisers' retreat for guidance. Henry Sleight ran this one, and Jessica McCurdy ran the one in Boston.

CEA has asked EAIF to assess applications from groups that are not eligible for CBG funding. CEA chose to do this rather than hire more staff, as we believe there will be benefits from us running a more focused programme.


We expect to include more universities in this list as we build up capacity for our university program. However, we think there are benefits to piloting our university support program with a smaller number of groups. 


This seems to be saying there was the option of hiring more staff and rolling out the CBG programme and sup... (read more)

3
Joan
2y
Within the CEA Groups team, we have several different sub-teams. Two of the sub-teams are focused on experimenting and understanding what a model looks like with full-time community builders in a focused set of locations (one sub-team for university groups, another sub-team for city/national groups). This is because the type of centralized support CEA might provide and the type of skills/characteristics required of someone working full-time running a university group or a city/national professional network might look very different depending on the ultimate model.

Our staff capacity is limited (for hiring, piloting, and scaling alike) and we think that this focus will enable faster scaling in the long term.

I also want to note a couple of things:

* In addition to the sub-teams mentioned above, we have two sub-teams supporting part-time organizers. One team provides foundational support to all part-time/volunteer group organizers (basic funding, resources hub, EA Slack, phone calls), and another team runs the University Groups Accelerator Program to help part-time university organizers launch their group.
* Additionally, just because the CEA Groups team building up the "full-time" model is prioritizing certain locations, that doesn't mean we want to stop experiments in other locations. We'd encourage people interested in full-time organizing in places that aren't on the locations list above to apply to the EAIF, help us innovate on the community building model in different locations, and share back your learnings with other organizers and on the forum.

My understanding is that CEA is limited by the time of its employees, because hiring rounds and all the support rolled out to the focus groups listed above take time.

Have you considered applying for funding from the EA Infrastructure Fund? They're keen to support community building as far as I know :)

Thanks, your perspective on this is really helpful! Especially the points you made about consciousness research not being very neglected. On the other hand, AI research also can't really be described as neglected anymore. Maybe the intersection of the two is the way to go – as you said, consciousness might be crucial to AGI.

1
george
2y
This is why I'm pursuing Cognitive Science.

I'm not sure why your answer is so full of repetition, but I will definitely check those orgs out, thanks!

3
MichaelStJules
3y
Woops, fixed.

I did not know about the meta-problem of consciousness before. I will have to think about this, thank you!