SEM

S.E. Montgomery

Policy Advisor @ Statistics NZ
504 karma · Joined Mar 2022 · Working (6-15 years)

Bio


I'm an experienced policy advisor currently living in New Zealand. 

Comments (27)

I notice that the places that provide the most job security are also the least productive per-person (think govt jobs, tenured professors, big tech companies). The typical explanation goes like "a competitive ecosystem, including the ability for upstarts to come in and senior folks to get fired, leads to better services provided by the competitors"

Do you have evidence for this? Because there is lots of evidence to the contrary - suggesting that job insecurity negatively impacts people's productivity as well as their physical and mental health.[1][2][3]

I think respondents on the EA Forum may think "oh of course I'd love to get money for 3 years instead of 1". But y'all are pretty skewed in terms of response bias -- if a funder has $300k and give that all to the senior EA person for 3 years, they are passing up on the chance to fund other potentially better upstarts for years 2 & 3.

This goes both ways - yes, there is a chance to fund other potentially better upstarts, but by only offering short-term grants, funders also miss out on applicants who want/need more security (eg. competitive candidates who prefer more secure options, parents, people supporting family members, people with big mortgages, etc).

Depending on which specific funder you're talking about, they don't actually have years of funding in the bank! Afaict, most funders (such as the LTFF and Manifund) get funds to disburse over the next year, and in fact get chastised by their donors if they seem to be holding on to funds for longer than that. Donors themselves don't have years of foresight into how they would like to be spending their money (eg I've personally shifted my allocation from GHD to long-termist in-network opportunities)

I think there are options here that would help both funders and individuals. For example, longer grants could be given with a condition that either party can give a certain amount of notice to end the agreement (typical in many US jobs), and many funders could re-structure to allow for longer grants/a different structure for grants if they wanted to. As long as these changes were communicated well to donors, I don't see why we would be stuck with a 1-year cycle.

My experience: As someone who has been funded by grants in the past, job security was a huge reason for me transitioning away from this. It's also a complaint I've heard frequently from other grantees, and living on short-term grants is something that not everyone can even afford to do in the first place. I'm not implying that donors need to hire people or keep them on indefinitely, but even providing grants for 2 or more years at a time would be a huge improvement on the 1-year status quo.

I have a couple thoughts here, as a community builder, and as someone who has thought similar things to what you've outlined. 

I don't like the idea of bringing people into EA based on false premises. It feels weird to me to 'hide' parts of EA from newcomers. However, I think the considerations involved are more nuanced than this. When I have an initial conversation with someone about what EA is, I find it difficult to capture everything in a way that comes across as sensible. If I say, "EA is a movement concerned with finding the most impactful careers and charitable interventions," to many people I think this automatically comes across as concerning issues of global health and poverty. 'Altruism' is in the name, after all. I don't think many people associate the word 'altruism' with charities aimed at ensuring that artificial intelligence is safe.

If I instead foreground concerns about AI and say, "EA is a movement aimed at finding the most impactful interventions... and one of the top interventions that people in the community care about is ensuring that artificial intelligence is safe," that also feels like it's not really capturing the essence of EA. Many people in EA primarily care about issues other than AI, and summarising EA in this way to newcomers is going to turn off some people who care about other issues.

The idea that AI could be an existential risk is (unfortunately) just not a mainstream idea yet. Over the past several months, it seems like it has been talked about a lot outside of EA, but prior to that, there were very few major media organisations/celebrities that brought attention to it. So from my point of view, I can understand community builders wanting to warm people up to the idea. A minority of people will be convinced by hearing good arguments for the first time. Most people (myself included) need to hear something said again and again in different ways in order to take it seriously.

You might say that these are really simplistic ways of talking about EA, and there's a lot more I could say than a couple of simple sentences. That's true, but in many community building circumstances, a couple of sentences is all I am going to get. For example, when I've run clubs fair booths at universities, many students just want a short explanation of what the group stands for. When I've interacted with friends or family members who don't know what EA is, most of the time I get the sense that they don't want a whole spiel.

I also think it is not necessarily a 'persuasion game' to think about how to bring more people on board with an idea - it is thinking seriously about how to communicate ideas in an effective way. Communication is an art form, and there are good ways to go about it and bad ways to go about it. Celebrities, media organisations, politicians, and public health officials all have to figure out how to communicate their ideas to the public, and it is often not as simple as 'directly stating their actual beliefs.' Yes, I agree we should be honest about what we think, but there are many different ways to go about this. For example, I could say, "I believe there's a decent chance AI could kill us all," or I could say, "I believe that we aren't taking the risks of AI seriously enough." Both of these are communicating a similar idea, but will be taken quite differently.

Thanks for posting this! I agree, and one thing I've noticed while community building is that it's very easy to give career direction to students and very early-career professionals, but much more challenging to mid/late-career professionals. Early-career people seem more willing to experiment/try out a project that doesn't have great support systems, whereas mid/late-career people have much more specific ideas about what they want out of a job. 

Entrepreneurship is not for everyone, and being advised to start your own project with unclear parameters and outcomes often has low appeal to people who have been working for 10+ years in professions with meaningful structure, support, and reliable pay. (It often has low appeal to students/early-career professionals too, but younger people seem more willing to try.) I would love to see EA orgs implement some of the suggestions you mentioned. 

We already have tons of implicit norms that ask different behaviours of men and women, and these norms are the reason why it's women coming forward to say they feel uncomfortable rather than men. There are significant differences in how men and women approach dating in professional contexts and how they see power dynamics, as well as in the ratio of men to women in powerful positions (and the gender ratio in EA generally). Drawing attention to these differences and discussing new norms that ask for different behaviours of men in these contexts (and different behaviours from the institutions/systems that these men interact with) is necessary to prevent these situations from happening in the future.

Something about this comment rubbed me the wrong way. EA is not meant to be a dating service, and while there are many people in the community who are open to the idea of dating someone within EA or actively searching for this, there are also many people who joined for entirely different reasons and don't consider this a priority/don't want this.  

I think that viewing the relationship between men and women in EA this way - eg. men competing for attention, where lonely and desperate men will do what it takes to get with women - does a disservice to both genders. It sounds like a) an uncomfortable environment for women to join, because they don't want to be swarmed by a bunch of desperate men, and b) an uncomfortable environment for men, because to some extent it seems to justify men doing more and more to get the attention of women, often at the cost of women being made to feel uncomfortable. (And many men in EA do not want women to feel uncomfortable!)

Let's zoom out a bit. To me, it's not that important that everyone in EA gets a match. I find the gender imbalance concerning for lots of reasons, but ‘a lack of women for men to match with’ is not on my list of concerns. Even if there were a perfect 50/50 balance of men and women, I think there would still be lonely men willing to abuse their power. (Like you said, many women come into the movement already in relationships, some men/women do not want to date within the movement, and some people are unfortunately just not people others want to date.) So the problem is not the lack of women, but rather the fact that men in powerful positions are either blind to their own power, or can see their power and are willing to abuse that power, and there are not sufficient systems in place to prevent this from happening, or even to stop it once it has happened.

I disagree-voted on this because I think it is overly accusatory and paints things in a black-and-white way.

There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would've been glad to see and enthusiastically supported and considered concrete progress for the community.

Who says we can't have both? I don't get the impression that EA NYC wants this to be the only action taken on anti-racism and anti-sexism, nor did I get the impression that this is the last action EA NYC will take on this topic.

But by just saying "hey, [thing] is bad! We're going to create social pressure to be vocally Anti-[thing]!" you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to "oh, so you're NOT opposed to bigotry, huh?"

I don't think this is the case - I, for one, am definitely not assuming that anyone who chose not to sign did so because they aren't opposed to bigotry. (In other words, I can think of other reasons why people might not have wanted to sign this.)

The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing. 

I can think of better outcomes than that - the next time there is a document or initiative with a bit more substance, here's a big list of people who will probably be on board and could be contacted. The next time a journalist looks through the forum to get some content, here's a big list of people who have publicly declared their commitment to anti-racism and anti-sexism. The next time someone else makes a post delving into this topic, here are some community builders they can talk to for their stance on this. There's nothing inherently wrong with symbolic gestures as long as they are not in place of more meaningful change, and I don't get the sense from this post that this will be the last we hear about this.

People choose whom they date and befriend - no-one is forcing EAs to date each other, live together, or be friends. EAs associate socially because they share values and character traits.

To an extent, but this doesn't engage with the second counterpoint you mentioned: 

2. The work/social overlap means that people who are engaged with EA professionally, but not part of the social community, may miss out on opportunities.

I think it would be more accurate to say that there are subtle pressures that do heavily encourage EAs to date each other, live together, and be friends (I removed the word 'force' because 'force' feels a bit strong here). For example, as you mentioned, people working/wanting to work in AI safety are aware that moving to the Bay Area will open up opportunities. Some of these opportunities are quite likely to come from living in an EA house, socialising with other EAs, and, in some cases, dating other EAs. For many people in the community, this creates 'invisible glass ceilings,' as Sonia Joseph put it. For example, a woman is likely to be more put off by the prospect of living in an EA house with 9 men than another man would be (and for good reasons, as we saw in the Times article). It is not necessarily the case that everyone's preference is living in an EA house, but that some people feel they will miss opportunities if they don't. Likewise, this creates barriers for people who, for religious/cultural reasons, can't or don't want to have roommates who aren't the same gender, people who struggle with social anxiety/sensory overload, or people who just don't want to share a big house with people that they also work and socialise with.

If you're going to talk about the benefits of these practices, you also need to engage with the downsides that affect people who, for whatever reason, choose not to become a part of the tight-knit community. I think this will disproportionately be people who don't look like the existing community.

I think the usefulness of deferring also depends on how established a given field is, how many people are experts in that field, and how certain they are of their beliefs. 

If a field has 10,000+ experts who are 95%+ certain of their claims on average, then it probably makes sense to defer as a default. (This would be the case for many medical claims, such as wearing masks, vaccinations, etc.) If a field has 100 experts and they are more like 60% certain of their claims on average, then it makes sense to explore the available evidence yourself, or at least keep in mind that there is no strong expert consensus when you are sharing information.

We can't know everything about every field, and it's not reasonable to expect everyone to look deeply into the arguments for every topic. But I think there can be a tendency of EAs to defer on topics where there is little expert consensus, lots of robust debate among knowledgeable people, and high levels of uncertainty (eg. many areas of AI safety). While not everyone has the time to explore AI safety arguments for themselves, it's helpful to keep in mind that, for the most part, there isn't a consensus among experts (yet), and many people who are very knowledgeable about this field still carry high levels of uncertainty about their claims. 

As with any social movement, people disagree about the best ways to take action. There are many critiques of EA which you should read to get a better idea of where others are coming from, for example, this post about effective altruism being an ideology, this post about someone leaving EA, this post about EA being inaccessible, or this post about blindspots in EA/rationalism communities. 

Even before SBF, many people had legitimate issues with EA from a variety of standpoints. Some people find the culture unwelcoming (eg. too elitist/not enough diversity), some people take issue with longtermism (eg. too much uncertainty), others disagree with consequentialism/utilitarianism, and still others are generally on board but find more specific issues in the way that EA approaches things. 

Post-SBF it's difficult to say what the full effects will be, but I think it's fair to say that SBF represents what many people fear/dislike about EA (eg. elitism, inexperience, ends-justifies-the-means reasoning, tech-bro vibes, etc). I'm not saying these things are necessarily true, but most people won't spend hundreds of hours engaging with EA to find out for themselves. Instead, they'll read an article in the New York Times about how SBF committed fraud and is heavily linked to EA, and walk away with a somewhat negative impression. That isn't always fair, but it also happens to other social movements like feminism, Black Lives Matter, veganism, environmentalism, etc. EA is no exception, and FTX/SBF was a big enough deal that a lot of people will choose not to engage with EA going forward.

Should you care? I think to an extent, yes - you should engage with criticisms, think through your own perspective, decide where you agree/disagree, and work on improving things where you think they should be improved going forward. We should all do this. Ignoring criticisms is akin to putting your fingers in your ears and refusing to listen, which isn't a particularly rational approach. Many critics of EA will have meaningful things to say about it and if we truly want to figure out the best ways to improve the world, we need to be willing to change (see: scout mindset). That being said, not all criticisms will be useful or meaningful, and we shouldn't get so caught up in the criticism that we stop standing for something. 

Thinking that 'the ends justify the means' (in this case, that making more donations justifies tax evasion) is likely to lead to incorrect calculations about the trade-offs involved. It's very easy to justify almost anything with this type of logic, which means we should be very hesitant.

As another commenter pointed out, tax money isn't 'your' money. Tax evasion (as opposed to 'tax avoidance' - which is legal) is stealing from the government. It would not be ethical to steal from your neighbour in order to donate the money, and likewise it is not ethical to steal from the government to donate money. 
