Engineering, impact evaluation, operations.
Tell me why I'm wrong.
I believe you are conflating several things here. But first, a little tip on phrasing responses: putting the word 'just' in front of a critical response makes it more dismissive than you might have intended.
If you think the movement has serious flaws that make it not a good means for doing the most good, then you should not be trying to work for an EA org in the first place, and the access to those opportunities is irrelevant.
Agreed with that as stated, but I think this is a straw man. Things can be both bad in some ways and better than some other options, but that doesn't mean any flaws should be dismissed. Taken to a (hypothetical) extreme: 'I know I can have the highest impact if I work here, so I will either bear the inappropriate attention of my colleagues, or leave and not have the highest impact I can.'
People should not be using the movement for career advancement independent of the goal of doing the most good they can do with their careers (and in most cases, can't do that even if they intend to, because EA org jobs that are high-status within the movement are not similarly high-status outside of it) [..] I find the EA movement a useful source of ideas and a useful place to find potential collaborators for some of my projects, but I have no interest in working for an EA org because that's not where I expect I'd have the biggest impact.
Some people may think that working at an EA org is the highest-impact thing they could be doing (even if just for the short term), and career paths are very dependent on the individual. EA essentially brands itself as the way to do the most good, so it should not be surprising if people hold this view. When I was writing my first comment, it was with the broad assumption of 'connections/opportunities within EA = connections/opportunities that help you do the most good' (given the EA Forum audience), not as a judgement that 'EA is the only way of having a high impact' (which is a different conversation).
I think the movement as a whole would be more successful, and a lot of younger EAs would be a lot happier, if they approached the movement with this level of detachment.
I also have thoughts on this one, but this again is a different conversation. EA is not the only way to have a very high impact, but this should not be used as an excuse for avoiding improvements.
Thanks for your response!
I don't think changing "some EAs" to "we" necessarily changes my point of 'people concerned should not have to move to a different community which may have fewer resources/opportunities', independent of who actually creates that different community.
Note that my bigger point overall was why the second bullet point set off alarm bells, rather than the specific points on the others (which were mostly included as a reference, with less thought put into the wording). That said:
there are probably people considering joining EA who would find EA a much easier place to get funding than their other best opportunities for trying to do the kind of good they think most needs doing.
I agree with this. I added "although may reduce future opportunities if they would benefit a lot from getting more involved in EA" after "i.e. someone considering joining EA does not have as much if anything already invested in it" a couple of minutes after originally posting my comment, to reflect a very similar sentiment (though likely after you had already seen it and started writing your response).
However, there is very much a difference between losing something that you have and not gaining something that you could potentially have. When talking about personal cost, one is significantly higher than the other (agreed that both are bad), as is the toll of potentially broken trust and lost close relationships. Even ignoring social factors, it could also carry an impact cost: e.g. if people have built up career/social capital that is very useful within EA but not valued as highly outside of EA, or is not linked with the relevant people outside of EA, rather than having built up non-EA networks.
That bullet point is also written as 'someone considering joining' rather than 'we should'. 'Someone considering joining' may or may not join for a variety of reasons, and their decision is a potential consequence for the community but not an action point. It is the action points, and how action is approached, that seem more relevant here.
I am pretty certain it wasn't intended that way but:
Some EAs should start an unaffiliated group ("Impact Maximizers") that tries to avoid these problems. (Somewhat like the "Atheism Plus" split.)
This set off minor alarm bells on first reading, more so than the other bullet points, so I tried to put some thought into why that is (and why the other two points didn't trigger the same alarm bells).
I think it's because it (most likely inadvertently) implies: "If people already in the movement do not like these power dynamics (around making women feel uncomfortable, up to sexual harassment etc.), then they should leave and start their own movement." (I am aware this asks for some people, not necessarily women/the specific person concerned by this, to start the group, but this still does not address the potentially lower resources, career and networking opportunities.) This can almost be used as an excuse not to fix things: if people don't like it, they can leave. But leaving means potentially sacrificing close relationships and career and funding opportunities, at least to some degree. Taken together, this could be taken to mean:
If you are a woman uncomfortable about the current norms on dealing with sexual harassment, consider leaving/starting your own movement, taking potential career and funding hits to do so.
I really don't think you intended this, but please take this as my attempt to put words to why it set off minor alarm bells on first reading, and I would be interested to hear the thoughts of others. (It is also possible that that bullet point was in response to a previous comment, which I may not have read in enough depth.)
The first and third bullet points do not have this same issue: the first does not explicitly reduce existing opportunities for people (i.e. someone considering joining EA does not have much, if anything, already invested in it, although it may reduce future opportunities if they would benefit a lot from getting more involved in EA), and the third speaks about making improvements.
If organisations were privately informed of their tier, then the additional work of asking (even in the same email) whether they would want to opt into sharing their tier would be low/negligible.
Of course, people may dispute their tier or only be happy to share if they are in a high tier, but this at least somewhat weakens the argument that asking for consent for the public list would be a lot of additional work.
They'd already have the upvote and downvote information (it's needed to calculate the overall karma). I don't know how the forum is coded, but I expect they could do this without too much difficulty if they wanted to. So on hover, it would say something like: "This comment has x overall karma (y upvotes and z downvotes)." The user interface/experience would not change much (unless I have misinterpreted what you meant there).
It would give extra information. Weighting some users' votes more heavily may make sense, on the argument that these are the people who have contributed more to the forum, but even if that is the case it would be good to also see how many people overall think something is valuable, or agree or disagree.
Current information: overall karma, total number of votes.
New potential information: overall karma, number of upvotes, number of downvotes.
e.g. 2 people strongly agreeing and 3 people weakly disagreeing may update me differently to 5 people weakly agreeing. One is unanimous; on the other, opinion is more divided, and it would be good for me to know that, as it might be useful to ask why (when drawing conclusions based on what other people have written, or when getting feedback on my own writing).
I would like to see this implemented: the cost seems small, and there is a fair bit of extra information value.
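To put rough numbers on the example above (a minimal sketch; the vote weights here are made up for illustration, since the forum's actual vote strengths depend on each voter's karma):

```python
# Hypothetical agreement-vote weights: strong agree = +4, weak agree = +1,
# weak disagree = -1. The real forum derives vote strength from voter karma;
# these numbers are illustrative only.
unanimous = [+1, +1, +1, +1, +1]  # 5 people weakly agreeing
divided   = [+4, +4, -1, -1, -1]  # 2 strong agrees, 3 weak disagrees

# Both patterns produce the same net score, so the single aggregate number
# cannot distinguish consensus from divided opinion.
print(sum(unanimous), sum(divided))  # -> 5 5
```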
This does not give a complete picture though.
Say something has 5 karma and 5 votes. First obvious thought: 5 users upvoted the post, each with a vote worth 1 karma. But that's not the only option:
There is a whole range of other combinations one can think of that add up to 5, given that different users' votes have different values (and in some cases strong up/downvoting). Hovering just shows the overall karma and the overall number of people who have voted, unless I am missing a feature that shows this in more detail?
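As a sketch of how many different vote patterns are consistent with "5 karma from 5 votes" (the set of possible vote strengths below is an assumption for illustration, not the forum's real weighting):

```python
from itertools import combinations_with_replacement

# Hypothetical per-user vote strengths; the real forum derives these from
# each voter's karma, so this set is illustrative only.
VOTE_STRENGTHS = [-5, -2, -1, 1, 2, 5]

TARGET_KARMA = 5
TARGET_VOTES = 5

# Enumerate every multiset of 5 votes whose strengths sum to 5 karma.
matches = [
    votes
    for votes in combinations_with_replacement(VOTE_STRENGTHS, TARGET_VOTES)
    if sum(votes) == TARGET_KARMA
]

for votes in matches:
    print(votes)
# Output includes (1, 1, 1, 1, 1), (-1, 1, 1, 2, 2), (-2, -1, 1, 2, 5), ...
# so the hover numbers alone cannot tell these situations apart.
```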
There seem to have been a lot of responses to your comment, but there are some points which I don’t see being addressed yet.
I would be very interested in seeing another similarly detailed response from an 'EA leader' whose work focusses on community building/community health. (Put on top as this got quite long; rationale below, but first:)
I think at least one goal of the post is to get community input (something I've seen in many previous forum posts) to determine the best suggestions, without claiming to have all the answers. Quoted from the original post (the intro to 'Suggested Reforms'):
Below, we have a preliminary non-exhaustive list of suggestions for structural and cultural reform that we think may be a good idea and should certainly be discussed further.
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.
This suggests to me that instead of trying to convince the ‘EA leadership’ of any one particular change, they want input from the rest of the community.
From a community building perspective, I can (epistemic status: brainstorming, but plausible) see how a comment like yours can be harmful and create a more negative perception of EA than the post itself. Perhaps new/newer/potential (and even existing) EAs will read the original post, and they may skim it, read parts, or even read the comments first (I don't think very many people will have read all 84 minutes, and the comments on long posts sometimes point to key/interesting sections). And the top comment: yours, highly upvoted.
Impressions that they can potentially draw from your response (one or more of the below):
I am not saying that any of the above is true, or that it is absolute (i.e. someone would be led to believe one of these things absolutely, rather than it being on a sliding scale). But if I were new to EA, it is plausible that this comment would be far more likely to put me off continuing to engage than anything written in the actual post itself. Perhaps you can see how it may be perceived this way, even if it was not intended this way?
I also think some of the suggestions are likely more relevant to, and require more thought from, people actively working in e.g. community building strategy than from someone who is CTO of an AI alignment research organisation (from your profile), or in a technical role more generally, at least in terms of the considerations required to have the greatest impact in their work.
I don't think the point is that all of the proposals are inherently correct or should be implemented. I don't agree with all of the suggestions (agree with quite a few, don't agree with some others), but in the introduction to the 'Suggested Reforms' section they literally say:
Below, we have a preliminary non-exhaustive list of suggestions for structural and cultural reform that we think may be a good idea and should certainly be discussed further.
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.
Picking out in particular the parts you don't agree with may seem almost like strawmanning in this case, and people might be reading the comments rather than the full thing (I was very surprised by how long this was when I clicked on it; I don't think I've seen an 84-minute forum post before). But I'm not claiming this was intentional on either of your parts.
The data I would be most interested to see (if you plan to do further research on this) is of when people started following the page (rather than overall numbers of followers). I believe you mentioned this briefly in the limitations footnote.
A lot of people follow a lot of pages, and may have followed something years ago. If their interests change, but the page doesn't post, it seems relatively unlikely that someone will go out of their way to unfollow it. Perhaps they've even forgotten that they've followed it to begin with!
That was my first thought (intuition, no evidence) when you mentioned that the correlation between followers and time since last post was steeper once organisations that had not posted in the last 2 months were removed, i.e. this cuts out the 'followed this years ago and forgot, no new posts to remind me' followers.
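A minimal sketch of the filtering step being described (the dataframe and column names are assumptions for illustration, not the original analysis):

```python
import pandas as pd

# Hypothetical data for a handful of org pages; numbers are made up.
df = pd.DataFrame({
    "followers": [1200, 300, 4500, 80, 950],
    "days_since_last_post": [10, 400, 5, 900, 45],
})

# Correlation over all orgs, including long-dormant pages.
corr_all = df["followers"].corr(df["days_since_last_post"])

# Drop orgs with no post in the last ~2 months (60 days), cutting out the
# 'followed years ago and forgot' accounts described above.
active = df[df["days_since_last_post"] <= 60]
corr_active = active["followers"].corr(active["days_since_last_post"])

print(corr_all, corr_active)  # the filtered relationship is expected to be stronger
```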
That's good to hear re being in favour of efforts to make EA better (edited for clarity). Thanks for your engagement on this.
Agreed on the necessity of awareness around power dynamics, with the nuance that fixing this should not have to fall on the people impacted by it. It was good to see that post when it came out, as it points out things people may not have been aware of.