All of Fran's Comments + Replies

+1. I don't know about attention, but I do think the 'community' tag has a vibe of being 'less important' than posts without the tag. I think this is mostly a feature of the community itself and what users want the forum to be primarily focussed on. I don't mind this, even though I personally enjoy community posts just as much, and I also like the separation. But, if my vibes-based sense is correct, then that does make the system by which posts are tagged slightly more consequential. So I think it's good that Arepo looked into this and is bringing it up. Thanks for doing that!

EA Global is coming to New York City for the very first time, from October 10–12 at the Sheraton Times Square! And you can apply now! Why NYC, you might ask?

1. Close to policy
With the United Nations based in NYC and DC just a train ride away, NYC is well-placed to host policy professionals working on pressing global issues like AI governance, pandemic preparedness, foreign aid, and more. 

2. Media capital
NYC is often called the media capital of the world, hosting major publishers and media outlets. We’re excited to welcome both writers and communicatio... (read more)

Congratulations on the new baby, Drew, how beautiful!! And of course, congrats on the new role as well :') I find these mid-career transition stories really lovely; they make me a bit emotional. It's just nice to hear all the different ways people engage with EA and all the effort and time that goes into finding a role.

Drew Housman
Thanks! It's a lot of big exciting changes at once, when it rains it pours :)  I'm glad my story resonated, and I hope you can keep posting about mid-career transitions as you come across them. They are really inspiring.    
Fran

This was so well-written and now I'm glad to have found your substack! Sometimes, when this debate comes up, I feel that critiques which rely on a different kind of language than that which dominates EA are reworded or ever-so-slightly glossed over. This post takes every perspective it explores, and its language, seriously (which I really appreciate).

Hey Kip! That makes sense to me. I think I basically just can't objectively comment or reflect on this post because I know OP and the details of some of the stuff they were reported for.  So I won't say anything more meaningful here, but I appreciate your comment :) 

thanks for updating! I realise I'm becoming overwhelmed since it's very obvious to me who wrote this post, so I'm just going to bow out and delete my comment (so as to prevent me thinking about this post any further). 

The way you've formatted this post makes it seem like my article and the Time article are examples of the thesis; you might want to clarify that.


[edit] deleted because I realise I should not engage with this post for the reasons I clarify below (I know the person & a few of the reasons they were reported; I find this emotionally charged and overwhelming and don't know how to be neutral or "objective" on the basis of the post alone)

SpeedyOtter
Hi, I read the piece a while back. I liked some bits and disliked others. Mainly I wanted to give some context for my piece. I don't think my piece is deeply engaging with yours, nor is it intending to. On harms versus intent, I agree harms matter more. But I disagree on the last point. I think harm probably is sometimes the result of people with different norms/preferences/boundaries interacting. And I think EA takes particular sides in these cases.
Kip
FWIW, I read your post and appreciated it. (I'm the same Kip who commented on it when you posted it initially. Hi!) But "Men who upset women in EA don't care about women's feelings" was roughly one of my takeaways from the post! So I don't think it's an unfair interpretation. I didn't see it as the main thesis, but I found that point interesting and memorable. Here's the snippet that gave me that takeaway (emphasis added by me):

> it turned out, the problem wasn't that my cues were too difficult to read. Or that I was too passive or too fawning or too inarticulate. That was mostly a convenient story. The problem was: they did not care what I wanted if it contradicted what they wanted.

The above snippet makes it sound like EA guys were fully blaming you for communication issues, and they didn't care what you wanted. And it seems to claim that their lack-of-care was the core problem. OP sounds like a counterexample to this pattern; he (at least partly) blames himself for being clumsy, and expresses (in many ways) that he did actually care about "what they wanted."

I'm so excited for this event!! This past year has felt unusually unstable with AI feeling scarier to me and all the (bad) changes to the global health funding landscape. I want to know what other people are thinking career-wise. 

For a long time now, I have also really wanted more community-building for mid-late career professionals. I'm going to be posting a series on mid-late career transitions, with four profiles from some absolutely wonderful people who kindly chatted with me!!! 

Clare G
I'm keen to hear more about the community-building for mid-late career professionals and the mid-late career transitions series. 

Hi Simon! Each Slack is only open to registered attendees. To gain access, you'll first need to register for the event after your application is approved; you'll then be automatically added.

Hey Bhart! I'm Frances, I work on the EA Global team. 

It looks like this was resolved over email! Our team aims to process travel support requests within 10 working days of submission. In general, we ask that attendees do not assume their travel support will be approved when making plans. We're budget-constrained and unable to fund all those who request support, unfortunately. We also cannot reimburse any incurred costs, should travel support not be approved. Please do feel free to email hello@eaglobal.org at any time if you're confused or have questions, we'll be very happy to help!

bhart
Hey Frances, thank you so much for your response. I really appreciate it and yes, I'll keep that in mind.

Hey Neel! This reply upset me so much that I'm now planning to make AGI and actively oppose AI safety :) Hope it was worth it!

Typical anti-feedback-doomers making everyone scared to plug their ears; where does it end?

Thank you, I have no reply. 

I think this is very brave. 

Fran

This is a really good idea actually, but I have to be fundamentally opposed to this comment, sorry :( 

Okay, Claude says, "telling someone 'Do better' could technically be considered feedback, but it's extremely limited and not very constructive," which makes it feel like not-quite-feedback. To your first point, I fear I've been shadow-banned by the forum for speaking out :(

SiobhanBall
Don't do better. Is that better?

That's correct, thanks Toby :) That said, it's really important for us to know if our advertising has been reaching people. We definitely want to know if this post is the first time someone's hearing about EAG, especially if they would have attended had they heard about it earlier.

Fran

I'm definitely sympathetic to this point, yep. I think it would be very difficult to write a post of this nature if you felt that your participation in EA was being wrongly affected by CH.

At the same time, I think both the negative and positive experiences are difficult to talk about, due to their sensitive nature. I felt comfortable writing this because the incident is now four years old and I'm lucky to be in an incredibly supportive environment; many who have had positive experiences will not want to write about them. Thus, I am not confident there is a... (read more)

Bruce is completely correct, yep. We'll definitely send out reminder emails. If you run into any confusion, you can always email hello@eaglobal.org. 

Hello! Yep, that's correct. After the application deadline passes for an upcoming event, you're welcome to re-apply to EA Global. The bar does not change at all when you reapply; we don't factor that in. You are very welcome to reuse old applications; the system should automatically fill in previous responses that you've used.

Hey! To your last point — yeah, our goal is to approve applications that we suspect meet the bar. In cases where we’re unsure and would benefit from more information, we’ll request that (for full context, our requests are always unspecific and default to a general ask for additional information). 

Fran

Hey Jason! This is a cool idea. At the same time, we face capacity constraints and aren’t always able to implement changes that would increase application review time or add more moving parts. In general, I’m wary of the application review process becoming too convoluted—I want to save people time, and also, I think it’s okay to ask people to fill out the application. Applicants are very welcome to use bullet points, the application doesn’t need to be long or polished by any means. The system should also save your responses from previous years, to save some time. 

Fran

Hey Scott, thanks for the comment! 

I understand your argument as: allowing anyone to attend would mean the event includes all the people currently approved, plus those deterred by the admissions bar, plus some attendees who we would have previously rejected. If that latter group is small (e.g., 16%), that might not have much of an effect, and the event reaches more of our target audience.

Here’s why we’re not confident in this reasoning:

  1. Our primary concern is that removing the bar would significantly increase the volume of applications from people we’d
... (read more)
Answer by Fran

Hey there! I work on the EA Global team, thanks for the question :) At EAG London, each floor of the venue will have an all-gender bathroom. For future reference, our team can always be reached by emailing hello@eaglobal.org (forum questions usually get flagged to us, but we don't actively monitor the forum).

Fran

Hello :) I currently work as an Events Associate on the EA Global (EAG) team, a subset of the Events team. I joined in January 2023 (with no prior events experience). I'm incredibly excited for the team to expand, so I thought I might share a bit about my experience so far, for anyone who's unsure whether to apply. 

What I love about working on the team:

  • I think there's an implicit motto of, “take the serious stuff seriously and otherwise have fun.” We use charitable funding to run events with the goal of helping others do good in the world, in alignmen
... (read more)

Hey Patrick! My name is Frances and I work on the EA Global team :) About two weeks before the event, we'll send an email inviting everyone to our conference app (Swapcard). Swapcard will have the event agenda and allow you to book meetings with other attendees. If you have any further questions, please email hello@eaglobal.org and we'll be very happy to help. 

Hey Vasco, thanks for the question! This is an idea we've looked into quite a bit. There are some unresolved considerations (e.g. whether it makes sense for CEA to run an event like this), but the idea is still on our radar.

Fran

80,000 Hours has a great 2018 article on Operations management roles, which includes a 'How to assess your fit' section (I'll link to it at the bottom of this take). Having worked on the EA Global team for a year now, here are two important traits I would add for assessing fit:

1) Good at task-switching. I think it's pretty crucial that task-switching isn't super costly for you and you can do it relatively quickly. Otherwise, I imagine many Ops roles will be quite tiring/frustrating. It might be particularly emphasised in my role, but as an anecdote: in t... (read more)

Spencer R. Ericson
So true! When I read the 80k article, it looked like I'd fit well with ops, but these are two important executive function traits that make me pretty bad at a lot of ops work. I'm great at long-term system organization/evaluation projects (hence a lot of my past ops work on databases), but day-to-day fireman stuff is awful for me.
Fran

I'll commit to not commenting more now unless I've gotten something really wrong or it's really necessary or something :') 

Fran

Yeah, I don't necessarily mind an informal tone. But the reality is, I read [edit: a bit of] the appendix doc and I'm thinking, "I would really not want to be managed by this team and would be very stressed if my friends were being managed by them. For an organisation, this is really dysfunctional." And not in an, "understandably risky experiment gone wrong" kind of way, which some people are thinking about this as, but in a, "systematically questionable judgement as a manager" way. Although there may be good spin-off convos around, "how risky orgs should ... (read more)


Fran

But you see how they provide approximately no additional evidence, right? Because photos give no account of how long someone was or wasn't away, etc. Basically, in both Alice/Chloe's world and your world, these photos can exist. One of them is just Alice sitting on a beach chair? And to the second point, I don't believe the claim was that the environment was materially poor (please tell me if I'm wrong).

Kat Woods 🔶 ⏸️
Fran

I think this comment will be frustrating for you and is not high quality. Feel free to disagree, I'm including it because I think it's possible many people (or at least some?) will feel wary of this post early on and it might not be clear why. In my opinion, including a photo section was surprising and came across as near completely misunderstanding the nature of Ben's post. It is going to make it a bit hard to read any further with even consideration (edit: for me personally, but I'll just take a break and come back or something). Basically, without any c... (read more)

> In my opinion, including a photo section was surprising and came across as near completely misunderstanding the nature of Ben's post. It is going to make it a bit hard to read any further with even consideration

In addition to the overall tone of this post being generally unprofessional. 

Kat Woods 🔶 ⏸️
Fran

What a fantastic post, thank you so so much for writing this. 
1. I don't often get to hear from people in EA who have deeply committed to one path to impact and have long-term experience with it. It's incredibly valuable to hear from someone who has built up so much context around the path and can describe it in different phases, rather than the shorter stints I more often hear about (which are valuable in their own way of course, but more common).

2. Yeah, I've been involved since 2019-ish and never considered earning-to-give, yet distinctly noticed ... (read more)

Hey Jonny, thanks so much for pointing that out, that's my bad!! I've replaced the link with hopefully a more helpful resource :D

Oh that's totally okay, thanks for clarifying!! And good to get more feedback because I was/am still trying to collect info on how accessible this is

This is really good to know, thank you!! I'm thinking we hit more of a 'familiar with some technical concepts/lingo' accessibility level rather than being accessible to people who truly have no/little familiarity with the field/concepts.

Curious if that seems right or not (maybe some aspects of this post are just broadly confusing). I was hoping this could be accessible to anyone, so I'll have to try to hit that mark better in the future.

David Mathers🔸
Ah, I made an error here: I misread what was in which thread and thought Amber was talking about Gwern's comment rather than your original post. The post itself is fine! Sorry!

Luke, thank you for always being so kind :)) I very much appreciate you sharing your thoughts!!

"sometimes people exclude short-term actions because it's not 'longtermist enough'"
That's a really good point on how we see longtermism being pursued in practice. I would love to investigate whether others are feeling this way. I have certainly felt it myself in AI Safety. There's some vague sense that current-day concerns (like algorithmic bias) are not really AI Safety research. Although I've talked to some who think addressing these issues first is... (read more)

Algo_Law
Now don't go setting me off about this topic! You know what I'm like. Suffice to say, I think combatting social issues like algorithmic bias is potentially the only way to realistically begin the alignment process. Build transparency, etc. But that's a conversation for another post :D

Do you think that's a factor of: how many places you could apply for longtermist vs. other cause area funding? How high the bar is for longtermist ideas vs. others? Something else?

I think it's a factor of global health funding already being allocated to much more scalable opportunities than exist in longtermism, whereas longtermists have a much smaller number of funding opportunities to compete for. EA individuals are the main source of longtermist opportunities, and thus we get showered in longtermist money but not other kinds of money.

Animals is a bit more of a mix of the two.

Thank you, I really appreciate the breadth of this list, it gives me a much stronger picture of the various ways a longtermist worldview is being promoted.

Fran

Yeah, absolutely! Happy to go through posts offering career advice, how one might implement the advice, if there are any other perspectives to consider, etc.

I would really encourage having a low bar for sending people our way, very happy to talk to anyone! But generally, we offer coaching to those trying to get into the AI Safety field (e.g. undergrads looking for research positions, software engineers or research scientists looking for work in the field, independent researchers or community-builders interested in applying for funding). Also happy to talk people through AI Safety career-related decisions (e.g. whether or not to go to graduate school, choosing between positions, etc.)

This is great advice :) Already mentioned below; however, for people in similar positions, please do consider booking a coaching call with AI Safety Support: https://www.aisafetysupport.org/. We have experience helping people navigate the AI Safety field and can also connect you to others.

Yonatan Cale
Ah wow! Would you recommend contacting you over reading this? https://forum.effectivealtruism.org/posts/pbiGHk6AjRxdBPoD8/ai-safety-starter-pack (Please tell me who you would or wouldn't like me to send your way; I regularly talk to lots of software developers, and sometimes they want to do AI safety)

Good idea :) thank you!