All of howdoyousay?'s Comments + Replies

I've been thinking about this very thing for quite some time, and have been thinking up concrete interventions to help the ML community / industry grasp this. DM me if you're interested in discussing further.

1
Luiza
I'm new to thinking about this (getting close to a year), but a thing I learned (a bit the hard way) is that translating thinking into words proves to be a good path to translating it into actions too. "concrete interventions to help the ML community / industry grasp this" <- this sounds useful to expand on, in case you have not done so since you posted the comment.

Now having read your reply, I think we're likely closer together than apart on views. But...

But the question of board choice is firstly a question of who should be given legal control of EA organisations.

I don't think this is how I see the question of board choice in practice. In theory, yes, for the specific legal, hard mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions, which may tra... (read more)

9
Grayden 🔸
Two quick points:
1. Yes, legal control is the first consideration, but governance requires skill, not just value-alignment.
2. I think in 2023 the skills you want largely exist within the community; it's just that (a) people can't find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique.

Isn't the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our "belief system" is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?

Also I think a lot of the time when people say "value alignment", they are in fact looking for signals like self-identification as EAs, or who they're friends with or have collaborated / worked with. I also notice we conflate our aesthe... (read more)

Isn't the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our "belief system" is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?

 

Yes - but the issue plays itself out one level up.

For instance, most people aren't very scope sensitive – firstly in their intuitions, and especially when it comes to acting on them.

I think scope sensitivity is a key part of effec... (read more)

Ultimately, if you think there is enough value within EA arguments about how to do good, you should be able to find smart people from other walks of life who have: 1) enough overlap with EA thinking (because EA isn't 100% original, after all) to have a reasonable starting point, along with 2) more relevant leadership experience and demonstrably good judgement, and, linked to the two previous, 3) enough maturity in their opinions and / or achievements to be less susceptible to herding.

If you think that EA orgs won't remain EA orgs if you don't appoint "value alig... (read more)

I agree that there's a broader circle of people who get the ideas but aren't "card carrying" community members, and having some of those on the board is good. A board definitely doesn't need to be 100% self-identified EAs.

Another clarification is that what I care about is whether they deeply grok and are willing to act on the principles – not that they're part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing. 

This said, I think there are surprisingly few people out there like that. And due to the ... (read more)

For the times when I have been wronged, I would have been really happy if the injuring party had been as reflective, genuinely self-critical and (from what I can see) steadfast in trying to do better as you are.

From what I can tell, you didn't have to "out" yourself for this. I respect the move to do this and to make amends, doing so (from what it seems) with the person you wronged. It's impressive that (from what it seems) they're keen to see things put right and are giving you some support in this regard (if only fact-checking this post).

I've more o... (read more)

From what I can tell, you didn't have to "out" yourself for this.

This might not be true. Effective Ventures's statement on the matter says that they were informed of this by Julia Wise, so at least they knew, and it's possible that someone would have outed him more publicly had he not done so himself.

I'm curious about the down-voting without explanations as well; keen to hear why people disagree.

Why we're doing this

You can see our full reasoning here (and in the comments). In brief, we are worried about a way in which the voting structure on the Forum leads to more engagement with “Community” posts than users endorse, we’ve been hearing user feedback on related issues for a long time, and we’ve been having lots of conversations on hypotheses that we’d like to test. 

I feel mixed about this. 

On one hand, I log on to the forum and sometimes think "I'd rather not read about more drama", or indeed "I keep getting sucked in by drama rathe... (read more)

5
Nathan Young
While I like the change I think it papers over the cracks of us not being great at community sensemaking. I sense we agree.

If you haven't come across it yet, it's worth seeing what Wellcome Leap's R3 project involves: it focuses on increasing the production capability of mRNA vaccines in globally distributed labs so that there is better responsiveness to emerging pathogens, whether another COVID, Ebola, a particularly bad flu, etc.

https://wellcomeleap.org/r3/

I don't actually think that's necessarily messed up? That sometimes your role conflicts with a relationship you'd like to have is unfortunate, but not really avoidable:

  • A company telling its managers that they can't date their reports.
  • ...
  • A school telling professors they can't date their students.
  • A charity telling their donor services staff that they can't date major donors.

 

In theory, I think it makes a lot of sense to have some clear hard lines related to some power dynamics, but even when I'm trying to write those red lines I notice myself wri... (read more)

I don't think this is a fair comment, and aspects of it read more as a personal attack than an attack on ideas. This feels especially the case given that the above post has significantly more substance and recommendations to it, while this one comment just focuses in on Zoe Cremer. It worries me a bit that it was upvoted as much as it was.

For the record, I think some of Zoe's recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think ... (read more)

I was reacting mostly to this part of the post:

I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing 
...
Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good faith efforts at rectification.

I think it's fine for a co... (read more)

This discussion here made me curious, so I went to Zoe's twitter to check out what she's posted recently. (Maybe she also said things in other places, in which case I lack info.) The main thing I see her taking credit for (by retweeting other people's retweets saying Zoe "called it") is this tweet from last August:

EA is to me to be unwilling to implement institutional safeguards against fuck-ups. They mostly happily rely on a self-image of being particularly well-intentioned, intelligent, precautious. That’s not good enough for an institution that prizes i

... (read more)

A few disjointed challenges, basically:

  1. Governance has value regardless of degree of trust
  2. I find EA leadership's attitude low trust in some instances

There's a false dichotomy implied in this post: that high trust and less governance go together, and that having strong governance means you have low trust. I don't think this is the case; in fact, you often create high trust by having appropriately strong governance, and a lack of governance can destroy trust.

I also find the characterisation of governance being there to protect against bad actors narrow, even naive; ... (read more)

3
Michael_PJ
Probably a consequence of me trying to say how I feel about the vague zeitgeist at the moment, but I think I was complaining mostly about proposals for governance that do seem aimed at eliminating bad actors or bad behaviour. I think this sounds reasonable and I don't object to it, except insofar as I'd want to be sure it was actually pulling its weight and not guarding against hypothetical problems that don't occur or don't matter much. "We weren't paying attention and we lost some money" is common and damaging: yes, you should have good accounting. Other stuff I don't know.

I like your point that it doesn't feel like "leadership" (scare quotes because I still don't really believe it exists) has as much trust in the community as vice versa. I personally think this is a matter of perception versus reality - most of the time when this has come up, "leadership" has argued that they don't actually want people to defer to them and they're not sure why people are getting that impression, etc.

Strong disagree. 

I would have qualified my statements even if I were 99% confident that SBF had committed outright fraud. I would expect more and more information to come to light which could nuance my views (if not outright change them), which would lead me to want to change my public position.
 

Moreover, I felt Will's statement was quite strong in outright denouncing SBF, and had this qualifier as an afterthought; not in the foreground. 

I didn't take issue with Holden's post. For me the subtext was "I'm going to be giving cold tak... (read more)

2
throwaway790
I just don't understand how you can say that. Almost every tweet had the qualification in the foreground. Certainly it wasn't an afterthought. His offer to apologize was in the 4th tweet, before he even condemns Sam. (Edit: link to thread)

I read "more likely than not" to be ~60% confidence, which doesn't show a very high level of understanding of the situation (which is not very acceptable for someone who should have been monitoring this situation very closely). It also colors the rest of what I read as someone who doesn't think that there is much certainty to be had here.

Emphasis mine, but I'm afraid we're either talking about different statements, or read these tweets very differently.

I really appreciated your comment and think it's important to acknowledge and ensure neurodiverse people feel welcome. I'm coming from a place where I agree with Maya's reflections on emotions within EA, and am neurotypical.

Not sure I have time to post my thoughts in depth, but I think the rational vs. intuitive emotional intelligence tension within EA is something worth a lot more thought. It's a tension / trade-off I've picked up on in the EA professional realm: where people aren't getting on in EA organisations, where people aren't feeling heard, and... (read more)

howdoyousay - thanks for this supportive post. 

I agree that many neurodivergent people can develop quite a good set of emotional skills (like some of your Bay Area rationalists did), and can promote emotionally responsive and caring environments. 

(When I teach my undergrad course on 'Human Emotions' -- syllabus here -- one of my goals is to help neurodivergent students improve their understanding of the evolutionary origins and adaptive functions of specific emotions, so they take them more seriously as human phenomena worth understanding) 

M... (read more)

Yeah, I had the same thought too. Though when I said priors, I personally did not mean updating quantifiably (e.g. 0.05 → 0.1); more in the folk sense of priors, or base rates.

Also the examples I gave are more about certain features of a business / company that I should be more sceptical about.

0
NunoSempere
Right, what I meant is that you probably shouldn't update all that much about the frequency of fraud within finance, the same way that you shouldn't update on how often redheads are evil after reading a list of evil redheads.

Great post, thank you for writing it. Definitely makes me re-evaluate my own priors on fraud, and also think about structural risks inherent in companies where:

  • there is a trading house and fund owned by the same person (as in Bernie Madoff's case)
  • the nature of the business and / or investments may not be a Ponzi scheme by nature or have multi-level marketing built in, but its growth model in effect looks a bit like that; i.e. growth seems highly driven by enthusiasm, is activated through social networks / word of mouth, and the intrinsic value of the business / commodity is subject to a lot of debate (in contrast with, e.g., stock prices of minerals necessary in electronics manufacturing)
7
NunoSempere
Cheers. For what it's worth, I'm not sure how much one should update one's priors based on this list, because it's not clear how many people in finance there are in total (though maybe one could quickly do a back-of-the-envelope calculation here). So I think that this kind of thing is more useful when thinking about what happens once you know there is fraud.

Strong upvote from me. I really appreciated the frank sharing of your experience and also that it was playfully written (and more on this forum could be!)

Particularly keen on the policy mandating anonymous donations, and building a receiving organisation to do this and (presumably) pool all the different donations for cause / intervention X together and administer them. TBH, it seems like a no-brainer to me to do in the first place if governance were being prioritised. The main reason I think you would keep the close donor-adviser relationship in place would be:... (read more)

I think seeing it as "just putting two people in touch" is narrow. It's about judgement on whether to get involved in a highly controversial commercial deal which was expected to significantly influence discourse norms, and therefore polarisation, in years to come. As far as I can tell, EA overall and Will specifically do not have skills / knowhow in this domain.

Introducing Elon to Sam is not just like making a casual introduction; if everything SBF was doing was based on EA, then this feels like EA wading in on the future of Twitter via the influence of SBF... (read more)

Strong upvote. It's definitely more than "just putting two people in touch." Will and SBF have known each other for 9 years, and Will has been quite instrumental in SBF's career trajectory - first introducing him to the principles of effective altruism, then motivating SBF to 'earn to give.' I imagine many of their conversations have centred around making effective career/donation/spending decisions. 

It seems likely that SBF talked to Will about his intention to buy Twitter/get involved in the Twitter deal, at the very least asking Will to make the in... (read more)

Posting quickly, haven't thought it through, but guessing that one of the key benefits of having an independent investigator is that it gives routes for any evidence to be evaluated, thereby preventing a witch-hunt, which could happen if there were no legitimate routes for recourse.

Fair! 

In case it wasn't clear from my post, I'm not saying there's a clear action here yet, but to still think beyond the EA community.

This seems like an obviously good thing to do, but I would challenge us to think of how to take it further.

One further thought - is there something the EA community can do on mental health / helping those affected that goes wider than within the EA community? 

A lot of people will have lost a huge deal of savings, potentially worse than that. Supporting them matters no less than supporting EAs, beyond the fact that it's easier to support other EAs because of established networks. Ironically, that argument leaning into supporting just other EAs is the typ... (read more)

6
jacquesthibs
One worry one might have is the following reaction: “I don’t need mental health help, I need my money back! You con artists have ruined my life and now want to give me a pat on the back and tell me it’s going to be ok?” Then again, I do want us to do something if it makes sense. :(

Yes, but a lot of EAs were those retail investors as well, losing their shirts, or will likely lose their jobs now as they were funded via FTX. Many in our community will be a subset of those affected - who indeed need lots of support - but a reasonable number nonetheless.

The announcement raises the possibility that "a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem." But it seems to me that if someone successfully argues for this position, they won't be able to win any of the offered prizes.

 

Thanks for clarifying this is in fact the case, Nick. I get how setting a benchmark - in this case an essay's persuasiveness at shifting the probabilities you assign to different AGI / extinction scenarios - makes it easier to judge across the board. But as someone who w... (read more)

Three things:

1. I'm mostly asking for any theories of victory pertaining to causes which support a long-termist vision / end-goal, such as eliminating AI risk. 

2. But also interested in a theory of victory / impact on long-termism itself, of which multiple causes interact. For example, if 

  • long-termism goal = reduce all x-risk and develop technology to end suffering, enable flourishing + colonise the stars

then the composites of a theory of victory/ impact could be...:

  • reduce x-risk pertaining to AI, bio, and others
  • research / understanding around enabling
... (read more)

Agree there's something to your 1-3 counterarguments, but I find the fourth less convincing, maybe more because of semantics than actual substantive disagreement. Why? A difference in net effect of x-risk reduction of 0.01% vs. 0.001% is pretty massive. These differences especially matter if the expected value is massive, because sometimes the same expected value holds across multiple areas. For example, preventing asteroid-related x-risk vs. AI vs. bio vs. runaway climate change: (by definition) all the same EV (if you take the arguments at face value). But the plausibility of each approach, and of individual interventions within that, would be pretty high variance.

2
Marcel2
I'm a bit uncertain as to what you are arguing/disputing here. To clarify on my end, my 4th point was mainly just saying "when comparing long-termist vs. near-termist causes, the concern over 'epistemic and accountability risks endemic to long-termism' seems relatively unimportant given [my previous 3 points and/or] the orders of magnitude of difference in expected value between near-termism vs. long-termism."

Your new comment seems to be saying that an order-of-magnitude uncertainty factor is important when comparing cause areas within long-termism, rather than when comparing between overall long-termism and overall near-termism. I will briefly respond to that claim in the next paragraph, but if your new comment is actually still arguing your original point that the potential for bias is concerning enough that it makes the expected value of long-termism less than or just roughly equal to that of near-termism, I'm confused how you came to that conclusion. Could you clarify which argument you are now trying to make?

Regarding the inter-long-termism comparisons, I'll just say one thing for now: some cause areas still seem significantly less important than other areas. For example, it might make sense for you to focus on x-risks from asteroids or other cosmic events if you have decades of experience in astrophysics and the field is currently undersaturated (although if you are a talented scientist it might make sense for you to offer some intellectual support to AI, bio, or even climate). However, the x-risk from asteroids is many orders of magnitude smaller than that from AI and probably even biological threats. Thus, even an uncertainty factor that for some reason only reduces your estimate of expected x-risk reduction via AI or bio work by a factor of 10 (e.g., from 0.001% to 0.0001%) without also affecting your estimate of expected x-risk reduction via work on asteroid safety will probably not have much effect on the direction of the inequality (i.e., x > y; 0.1

The more I reread your post, the more I feel our differences might be more nuances than substance, but I think your contrarian / playing-to-an-audience-of-cynics tone (which did amuse me) makes them seem starker?

Before I ~~grace you with more sappy reasons why you're wrong, and sign you up to my life-coaching platform[1]~~ counter-argue, I want to ask a few things...

  • I am not sure whether you're saying "treating people better / worse depending on their success is good"; particularly in the paragraphs about success and worth. Or that you think that's just an immutab
... (read more)
0
NegativeNuno
I think that I disagree with you with regards to how people value other people, and how people should expect other people to value them, and less about where one should derive one's own self-worth from [1]. As such, I do think that we have a disagreement.

----------------------------------------

I think it is good in the case of, for instance, your professional life. For instance, funders are likely to fund projects differentially for people who have previous successes under their belt. People might fire other people if they haven't been doing well at their jobs. In the case of personal life, it's more ambiguous. As we both agree, it causes sorrow. However, I think it's hard to change, because there are traits that make someone a good friend, romantic partner, colleague, and I think that it's a bit futile to go against that. I don't think it's literally impossible, but I think that there are time tradeoffs, and developing existential chill is one of many things one could do with one's time. I've also had bad experiences with situations which gave the outer impression of being high trust/high acceptance, but weren't in the end when that acceptance was pushed a bit. I think that sometimes you can get away with a "judge once" regime, where once you are in someone's circle of care they care about you unconditionally, but I also think that people have limited spots.

----------------------------------------

I'm not sure what your point about trying your hardest is; maybe: I think a difference might be that I derive some self-worth from staying true to my ideals, or "staying true to inner self", but I read you as saying that you derive self-worth from some intrinsic value. I read that paragraph as saying that "you can work as hard as you can", but not making a statement related to that as self-worth. It's possible I'm missing what the point was.

----------------------------------------

I think that we have different things: * How you value yourself * How othe

I agree with this in principle... But there's a delicious irony in the idea of EA leadership (apols for singling you out in this way Ben) now realising "yes this is a risk; we should try and convince people to do the opposite of it", and not realising the risks inherent in that.

The fundamental issue is the way the community - mostly full of young people - often looks to / over-relies on EA leadership for ideas of causes to dedicate themselves to, but also for ideas about how to live their lives. This isn't necessarily EA leadership's fault, but it's not as if ... (read more)

I didn't down/up-vote this comment, but I feel the down-votes without explanation and critical engagement are a bit harsh and unfair, to be honest. So I'm going to try and give some feedback (though a bit rapidly, and maybe too rapidly to be helpful...)
 

It feels like just a statement of fact to say that IQ tests have a sordid history, and concepts of intelligence have been weaponised against marginalised groups historically (including women, might I add to your list ;) ). That is fair to say.

But reading this post, it feels less interested i... (read more)

Thank you for your comment; at the beginning I did not understand the downvotes and why I wasn't getting any kind of criticism.

I agree with what you say about my comment: it would not contribute anything to Olivia's post. I realized this within hours of writing it, and I did not want to delete or edit it. I prefer that the mistakes I may make remain present so that I can study a possible evolution for the near-to-medium future.

But reading this post, it feels less interested in engaging with the OP's post let alone with Linch's response, and more like th

... (read more)
4
Frederik
Nothing to add -- just wanted to explicitly say I appreciate a lot that you took the time to write the comment I was too lazy to.

Thanks for your open and thoughtful response.

Just to emphasise, I would bet that ~all participants would get a lot less value from one / a few doom circle sessions than they would from:

  • cultivating skills to ask / receive feedback that is effective (with all the elements I've written about above) which they can use across time - including after leaving a workshop, and / or;
  • just a pervasive thread throughout the workshop helping people develop both these skills and also initiate some relationships at the workshops where they can keep practising this feedback
... (read more)
1
Amy Labenz
Thanks for the follow-up! I'm working on a different format that I think might address some of your concerns (I posted this quickly to link to it in an email about the new format). I agree I should add a caveat above. It seems like you and others are getting the impression that I think this is the best way to get feedback / I'm an expert on Doom Circles (which is understandable, since I chose to post about them!). I'll write something quickly now (I don't have childcare at the moment, so might make changes next week).

Also agree I could have done more to explain the importance of setting norms. I'll make a note to revisit next week when I have more time.

Really appreciate you pushing to make sure I understood the feedback :)

The original CFAR alumni workshop included a warning:
"be warned that the nature of this workshop means we may be pushing on folks harder than we do at most other CFAR events, so please only sign up if that sounds like something that a) you want, and b) will be good for you."

 

I'm struggling to understand the motivations behind this. 

Reading between the lines, was there tacit knowledge among the organisers that this was somewhat experimental, and that it could perhaps lead to great breakthroughs and positive emotions as well as the opposite; but cou... (read more)

1
Amy Labenz
Copying from the link: I think they were pretty explicitly doing something experimental. I wasn't involved in the workshop; I suppose they found the previous experiment with Hamming Circles useful and wanted to try a variation:

I'm struggling to understand why anyone would choose one big ritual like 'Doom Circles' instead of just purposefully inculcating a culture of openness to giving / receiving critique that is supportive and can help others. And I have a lot of concerns about unintended negative consequences of this approach.

Overall, this runs counter to my experience of what good professional feedback relationships look like:

  • I suspect the formality will make it feel weirder  for people who aren't used to offering feedback / insights to start doing it in a mor
... (read more)
8
Kaj_Sotala
These don't sound mutually exclusive to me; you can have a formal ritual about something and also practice doing some of the related skills on a more regular basis. That said, for many people, it would be emotionally challenging if they needed to be ready to receive criticism all the time. A doom circle is something where you can receive feedback at such a time when you have emotionally prepared for it, and then go back to receiving less criticism normally. It might be better if everyone was capable of always receiving it, but it's also good to have options for people who aren't quite there yet.

A doom circle is a thing that can be done in less than an hour, whereas skill-building is much more of a long-term project.

That's true, but also: since people generally know that low-context feedback can be unhelpful, they might hold back offering any such feedback, even when it would be useful! Having an explicit context for offering the kind of critical feedback that you know might be incorrect gives people the chance to receive even the kinds of impressions that they otherwise wouldn't have the chance to hear.

Yes, in any well-run doom circle, this exact thing would be emphasized in the beginning (levin mentioned this in their comment and it was probably done in the beginning of the circles I've participated in, as well; I don't remember what the exact preliminaries were, but it certainly matches my general sense of the spirit of the circle).
1
Amy Labenz
Thanks for your comment. I'm glad to hear you feel more comfortable setting boundaries now. I think it is a good flag that some people might not be in a place to do that, so we should be mindful of social / status dynamics and try our best to make this truly opt-in. I agree there are other types of feedback that are probably better for most people in most cases, and that Doom Circles are just one format that is not right for lots of people. I meant to emphasize that in the post but I see that might not have come through. 

"70,000 hours back"; a monthly podcast interviewing someone who 'left EA' about what they think are some of EAs most pressing problems, and what somebody else should do about them.

Is it all a bit too convenient?

There's been lots of discussion about EA having so much money; particularly long-termist EA. Worries that that means we are losing the 'altruist' side of EA, as people get more comfortable, and work on more speculative cause areas. This post isn't about what's right / wrong or what "we should do"; it's about reconciling the inner tension this creates.

Many of us now have very well-paid jobs, which are in nice offices with perks like table-tennis. And that many people are working on things which often yield no benefit to humans... (read more)

3
Teun van der Weij
I myself don't have too much to add, but in this 80,000 Hours podcast with Will MacAskill they do discuss it (if you don't want to listen to the whole podcast, you can look up the transcript).

Your question seems to be both about content and interpersonal relationships / dynamics. I think it's very helpful to split out the differences between the groups along those lines.

In terms of substantive content and focus, I think the three other responders outline this very well, particularly on attitudes towards AGI timelines and the types of models they are concerned about.

In terms of the interpersonal dynamics, my personal take is we're seeing a clash between left / social-justice and EA / long-termism play out more strongly in this content area than in most others, t... (read more)

To add to the other papers coming from the "AI safety / AGI" cluster calling for a synthesis in these views...

https://www.repository.cam.ac.uk/handle/1810/293033

https://arxiv.org/abs/2101.06110

I think taking this forward would be awesome, and I'm potentially interested to contribute. So consider this comment an earmarking for me to come speak with you and / or Rory about this at a later date :)

Thanks for writing this, completely agree.

I'd love it if the EA community were able to have increasingly sophisticated, evidence-backed conversations about e.g. mega-projects vs. prospecting for and / or investing more in low-hanging fruit.

It feels like this would help ground a lot more debates and decision-making within the community, especially around prioritising projects which might plausibly benefit the long-term future compared with projects we have stronger reasons to think will benefit people / animals today (albeit not an almost infinitely large number of people / animals).

But also, you know, an increasingly better understanding of what seems to work is valuable in and of itself!

Equally, there's an argument for thanking and replying to critical pieces made against the EA community which honestly engage with the subject matter. This post (now old) making criticisms of long-termism is a good example: https://medium.com/curious/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982

I'm sure / really hope Will's new book does engage with the points made here. If so, it will provide a rebuttal for those who come across hit-pieces and take them at face value, or those who promulgate hit-pieces because of their own ideological drives.

Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like, so we can prepare ourselves and let our more reflective selves come to the fore, rather than our reactive, child-like selves.

 

In fact, I think I'll reflect on this list for a long time to ensure I continue not to respond on Twitter!

Agreed, and I was going to single out that quote for the same reason. 

I think that sentence is really the crux of imposter syndrome. I think it's also, unfortunately, somewhat uniquely triggered by how EA philosophy is a maximising philosophy, which necessitates comparisons between people or 'talent' as well as cause areas. 

As well as individual actions, I think it's good for us to think more about community actions around this, as any intervention that targets the individual without changing the environment rarely makes the dent needed.

Full disclosure: I'm thinking about writing up the ways in which EA's focus on impact, and the amount of deference to high-status people, creates cultural dynamics which are very negative for some of its members.

I think that we should just bite the bullet here and recognise that the vast majority of smart dedicated people trying very hard to use reason and evidence to do the most good are working on improving the long run future.

 

It's a divisive claim, and not backed up with anything. By saying 'bite the bullet', it's like you're taunting the rea... (read more)

I do suspect there is a lot of interaction happening between social status, deference, elitism and what I'm starting to feel is more of a mental health epidemic than a mental health deficit within the EA community. I suspect it's good to talk about these together, as things going hand in hand.

What do I mean by this interaction?

Things I often hear, which exemplify it:

  • younger EAs, fresh out of uni, following particular career advice from a person / org and investing a lot of faith in it - probably more so than the person of higher status expects them to. Thei
... (read more)
2
PabloAMC 🔸
I would not put it as strongly. My personal experience is a bit of a mixed bag: the vast majority of people I have talked to are caring and friendly, but I still occasionally have moments that feel a bit disrespectful. And really, this is the kind of thing that would push new people outside the movement.

I'd add a fifth: one about individuals personally exploring ways in which an EA mindset, and / or taking advice or guidance on lifestyle or career from their EA community, has led to less positive results in their own lives.

Some that come to mind are:

Denise's post "My mistakes on the path to impact": https://forum.effectivealtruism.org/posts/QFa92ZKtGp7sckRTR/my-mistakes-on-the-path-to-impact And, though I can't find it, the post about how hard it is to get a job in an EA organisation, and how demoralising that is (among other points)

1
Kaleem
Wouldn't this category be part of the fourth one? You're just pointing to more concrete examples of "practices in the EA community"? Or am I missing something (pretty likely)?
1
Teo Ajantaival
The latter might refer to this one: https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really

Here's a podcast I listened to years ago which has influenced how I think about groups and what to be sceptical about; most specifically, what we choose not to talk about.

This is why I'm somewhat sceptical about how EA groups would respond to an offer of an ethnography; what do people find uncomfortable to talk about with a stranger observing them, let alone with each other?

https://open.spotify.com/episode/0yKnibTpN8XaszrjP2lDha?si=2NLagAXkScmizZO8jMGBHQ&utm_source=copy-link

Yes to links showing where these conversations on gaming the system are happening!

Surely this is something that should be shared directly with all funders as well? Are there any (in)formal systems in place for this?

One way to approach this would simply be to make a hypothesis (e.g. the bar for grants is being lowered; we're throwing money at nonsense grants), and then see what evidence you can gather for and against it.

Another way would be to identify a hypothesis for which it's hard to gather evidence either way. For example, let's say you're worried that an EA org is run by a bunch of friends who use their billionaire grant money to pay each other excessive salaries and sponsor Bahama-based "working" vacations. What sort of information would you need in order t

... (read more)

I'm personally concerned that horoscopes weren't taken into account in devising this scheme, when there are literally thousands of years' worth of work on this, all going back to classical civilisation and Aristotle or something. Classic EAs overcomplicating things / reinventing the wheel.

Interesting, I think it's the other way round; there are tonnes of companies and academic groups who do action-oriented evaluation work which can include (and, I reckon, in some cases exclusively be) ethnography. But in my experience the hard part is always "what can feasibly be researched?" and "who will listen and learn from the findings?" In the case of the EA community this would translate to something like the following, ranked from hardest to simplest:

  • what exactly is the EA community? or what is a representative cross-section / gro
... (read more)
2
Holly Elmore ⏸️ 🔸
I think that EAs are, at least ostensibly, very open to being studied and critiqued. I think they could be an excellent population for academic ethnographers, or simply a very compliant client community for action-oriented evaluation.

Yeah, I'd know how to go about making this happen, including figuring out what's a decent research question for it, but I wouldn't undertake it myself.

4
Linch
I don't think the research question is the hard part, compared to finding the people to do it. If you're interested in this, I'd be happy to see a proposal on it, including who you think would be good to research this! :)