
howdoyousay?

603 karma · Joined May 2019

Comments (62)

I've been thinking about this very thing for quite some time, and have been thinking up concrete interventions to help the ML community / industry grasp this. DM me if you're interested in discussing further.

Having now read your reply, I think we're likely closer together than apart in our views. But...

But the question of board choice is firstly a question of who should be given legal control of EA organisations.

I don't think this is how I see the question of board choice in practice. In theory yes, for the specific legal, hard mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions, which may trade off against legal control being put in the 'safest pair of hands'.

That said, I feel back-and-forth responses on the EA Forum may be exhausting their value here; I feel I'd have more to say in a brainstorm about potential trade-offs between legal control and the ability to check and challenge, and I'm open to discussing further if it's helpful to some concrete issue at hand :)

Isn't the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our "belief system" is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?

Also I think a lot of the time when people say "value alignment", they are in fact looking for signals like self-identification as EAs, or who they're friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and by me as well often). But within social-justice circles, emotive language can be seen as a signal of value alignment. Basically, there's a lot more to unpack with "value alignment" and what it means in reality vs. what we say it ostensibly means.

Also, to tackle your response, and maybe I'm reading between the lines too hard / being too harsh on you here, but I feel there's goalpost shifting between your original post about EA value alignment and your now stating that people who understand broader principles are also "value aligned".

Another reflection: the more we speak about "value alignment" being important, the more it incentivises people to signal "value alignment" even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give permission and incentivise people to bring those.

Ultimately, if you think there is enough value within EA arguments about how to do good, you should be able to find smart people from other walks of life who: 1) have enough overlap with EA thinking (because EA isn't 100% original after all) to have a reasonable starting point; 2) have more relevant leadership experience and demonstrably good judgement; and, linked to the previous two, 3) are mature enough in their opinions and / or achievements to be less susceptible to herding.

If you think that EA orgs won't remain EA orgs if you don't appoint "value aligned" people, it implies our arguments aren't strong enough for people who we think should be convinced by them. If that's the case, it's a really good indicator that your argument might not be that good and that you should reconsider.

To be concrete, I expect a board of 50% card-carrying EAs and 50% experienced, high-achieving non-EAs with a good understanding of similar topics (e.g. x-risk, evidence-based interventions) to appraise arguments about which higher-/lower-risk options to fund much better than a board of 100% EAs with the same epistemic and discourse background and limited prior career / board experience.

Edit: clarity and typos

For the times when I have been done wrong to, I would have been really happy if the injuring party had been as reflective, genuinely self-critical and (from what I can see) steadfast in trying to do better as you are.

From what I can tell, you didn't have to "out" yourself for this. I respect the move to do this and to make amends, and to do so (from what it seems) with the person you wronged. It's impressive that (from what it seems) they're keen to see things put right and are giving you some support in this regard (if only fact-checking this post).

I've more often felt like the younger woman in this scenario did. But the depth of your reflections, Owen, is leading me to think more critically about how I can be careless / flippant from my relative position of power and how that would make others feel.

Whether someone has had a big fuck-up or minor infractions, I think this stuff is lifelong learning. There is no simple algorithm (though there are some guidelines, always evolving); the diversity of people and emotional worlds means that it's just hard work to understand other people better, and not to become complacent or over-confident when you think you're doing well, or dismissive and derisive if you're dismayed by a lack of success.

I'm curious about the downvoting without explanations as well; keen to hear why people disagree.

Why we're doing this

You can see our full reasoning here (and in the comments). In brief, we are worried about a way in which the voting structure on the Forum leads to more engagement with “Community” posts than users endorse, we’ve been hearing user feedback on related issues for a long time, and we’ve been having lots of conversations on hypotheses that we’d like to test. 

I feel mixed about this. 

On one hand, I log on to the forum and sometimes think "I'd rather not read about more drama", or indeed "I keep getting sucked in by drama rather than cause prioritisation". Really I just want to learn more rather than get sucked into debates about polycules and stuff.

On the other, if people are gravitating towards posts and discussion about the community, that's telling you something about what matters to people and what is soaking up a lot of mental energy in the community. It means there's an opportunity for (hopefully) progress to be made, for messages to land better, and for dissent to become more acceptable; in large part spurred on by the FTX fraud fallout.

I feel like this experiment is maybe not the best response to increased visceral dialogue about the community; it feels more like trying to package certain conversations into another space for XYZ reasons. I say XYZ reasons because the reasons themselves aren't the important part; the important part is that the message latent in people's behaviour isn't being paid attention to.

It's a human response, and it's typically the response of very cerebral / less emotional people when there has been emotional trauma in a small group setting. But it isn't necessarily the response which leads to the best results, as others feel silenced or that the opportunity for changing things for the better is being taken away. 

Also, I find the rationale here for removing community posts, namely that they are intrinsically ones everyone can grapple with, wildly counterintuitive.

If you haven't come across it yet, it's worth seeing what Wellcome Leap's R3 project involves. It focuses on increasing the production capability of mRNA vaccines in globally distributed labs so that there is better responsiveness to emerging pathogens, whether another covid, ebola, a particularly bad flu, etc.

https://wellcomeleap.org/r3/

I don't actually think that's necessarily messed up? That sometimes your role conflicts with a relationship you'd like to have is unfortunate, but not really avoidable:

  • A company telling its managers that they can't date their reports.
  • ...
  • A school telling professors they can't date their students.
  • A charity telling their donor services staff that they can't date major donors.

 

In theory, I think it makes a lot of sense to have some clear hard lines related to some power dynamics, but even when I'm trying to write those red lines I notice myself writing guidelines, because human dynamics are subtle and context-specific. For instance:

  • Because you shouldn't let power dynamics seep into a romantic relationship or influence your work behaviour: don't date direct reports, and most likely the same for anyone in your workplace hierarchy, or anywhere you have any inkling this negative dynamic could arise.
  • Someone at a certain level of seniority and / or power within an organisation will find others feel less able to speak up if their behaviour doesn't chime with them, so they should be extra careful in non-work social settings, especially if there's banter which could be flirtatious or a bit too close to the bone.
  • Someone very experienced / well regarded in a company, even if very junior, can wield a lot of power over someone more senior, so they too need to keep themselves in check in terms of how they affect the other person, including how they challenge them.

Basically, I don't think there's a feasible checklist for dealing effectively with the range of issues that might occur: affection and loyalty, and power, influence and control are all so subtle. To ensure you're not doing wrong to others or being done wrong to, it's more a constant process of checking in with yourself and empowering others to speak up.

Moreover, I think it's difficult to separate out what counts as more / less OK workplace relationships. You could say we need to see fewer people working in EA orgs who were friends outside of work (as opposed to friends you make at work), or fewer romantic relationships starting in EA orgs, but then there are just the people you get along with and see eye-to-eye with in the workplace, and sometimes develop more impactful and / or toxic relationships with. For example:

  • having political loyalty to a colleague, leading to factions 
  • nepotism between close friends / former work colleagues
  • simply that the more junior person feels their career is still dependent on their mentor / friend[1]

Which is to say: you can implement all these rules targeting 'out of work' friendships / relationships, but human power-dynamic issues will still prevail in adjacent domains.

Reflecting that tighter human alliances and power dynamics are somewhat inevitable (I postulate), it's worth noting that a lot of the time companies, big or small, deal with relationships in flexible ways, and sometimes this is worth considering. Things like:

  • if two people want to start dating, whether they're in the same team or in a line management hierarchy, give them the option of changing teams where that's appropriate for the business and still maintains sufficient separation to avoid power imbalances
  • in some cases, just turning a blind eye because people have their shit together and it isn't interfering in their work-lives
  • stipulating that a married couple cannot be in the same senior leadership team (as they'll be a unit)

 

 

  1. ^

    Side note: most of the above examples are why I'm often banging on about diversity within organisations, or just newbies full stop, because they can break many of these dynamics up, both through behaviours and new ideas; but I won't get on that hobby-horse just now.

I don't think this is a fair comment, and aspects of it read more as a personal attack than an attack on ideas. This feels especially the case given that the above post has significantly more substance and recommendations to it, but this one comment just focuses in on Zoe Cremer. It worries me a bit that it was upvoted as much as it was.

For the record, I think some of Zoe's recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think the amount of flak she's taken for this has been disproportionate and sends the wrong signal to others about dissenting.

I think this aspect of the comment is particularly harsh, which is in and of itself likely counterproductive. But on top of that, it's not the type of claim that should be made lightly or without a lot of evidence that that is the person's agenda (bold for emphasis):

- I think part of Cremer's reaction after FTX is not epistemically virtuous; "I was a vocal critic of EA" - "there is an EA-related scandal" - "I claim to be vindicated in my criticism" is not sound reasoning, when the criticisms are mostly tangentially related to the scandal. It will get you a lot of media attention, in particular if you present yourself as some sort of virtuous insider who was critical of the leaders and saw this coming, but I hope upon closer scrutiny people are actually able to see through this.
