Now, having read your reply, I think our views are likely closer together than apart. But...
But the question of board choice is firstly a question of who should be given legal control of EA organisations.
I don't think this is how I see the question of board choice in practice. In theory, yes, for the specific legal, hard mechanisms you mention. But in practice, in my experience, boards significantly check and challenge the direction of the organisation, so the collective ability of board members to do this should be factored into appointment decisions, which may tra...
Isn't the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our "belief system" is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Also I think a lot of the time when people say "value alignment", they are in fact looking for signals like self-identification as EAs, or who they're friends with or have collaborated / worked with. I also notice we conflate our aesthe...
Isn't the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our "belief system" is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?
Yes - but the issue plays itself out one level up.
For instance, most people aren't very scope sensitive – firstly in their intuitions, and especially when it comes to acting on them.
I think scope sensitivity is a key part of effec...
Ultimately, if you think there is enough value within EA arguments about how to do good, you should be able to find smart people from other walks of life who have: 1) enough overlap with EA thinking (because EA isn't 100% original, after all) to have a reasonable starting point, along with 2) more relevant leadership experience and demonstrably good judgement, and, linked to the previous two, 3) enough maturity in their opinions and / or achievements to be less susceptible to herding.
If you think that EA orgs won't remain EA orgs if you don't appoint "value alig...
I agree that there's a broader circle of people who get the ideas but aren't "card carrying" community members, and having some of those on the board is good. A board definitely doesn't need to be 100% self-identified EAs.
Another clarification is that what I care about is whether they deeply grok and are willing to act on the principles – not that they're part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing.
This said, I think there are surprisingly few people out there like that. And due to the ...
For the times when I have been wronged, I would have been really happy if the injuring party had been as reflective, genuinely self-critical and (from what I can see) steadfast in trying to do better as you are.
From what I can tell, you didn't have to "out" yourself for this. I respect the move to do so and to make amends, and (from what it seems) to do so together with the person you wronged. It's impressive that (from what it seems) they're keen to see things put right and are giving you some support in this regard (if only fact-checking this post).
I've more o...
From what I can tell, you didn't have to "out" yourself for this.
This might not be true. Effective Ventures's statement on the matter says that they were informed of this by Julia Wise, so at least they knew, and it's possible that someone would have outed him more publicly had he not done so himself.
Why we're doing this
You can see our full reasoning here (and in the comments). In brief, we are worried about a way in which the voting structure on the Forum leads to more engagement with “Community” posts than users endorse, we’ve been hearing user feedback on related issues for a long time, and we’ve been having lots of conversations on hypotheses that we’d like to test.
I feel mixed about this.
On one hand, I log on to the forum and sometimes think "I'd rather not read about more drama", or indeed "I keep getting sucked in by drama rathe...
If you haven't come across it yet, it's worth seeing what Wellcome Leap's R3 project involves: it focuses on increasing the production capability of mRNA vaccines in globally distributed labs so that there is better responsiveness to emerging pathogens, whether another COVID, Ebola, a particularly bad flu, etc.
I don't actually think that's necessarily messed up? That sometimes your role conflicts with a relationship you'd like to have is unfortunate, but not really avoidable:
- A company telling its managers that they can't date their reports.
- ...
- A school telling professors they can't date their students.
- A charity telling their donor services staff that they can't date major donors.
In theory, I think it makes a lot of sense to have some clear hard lines related to some power dynamics, but even when I'm trying to write those red lines I notice myself wri...
I don't think this is a fair comment, and aspects of it read more like a personal attack than a critique of ideas. This feels especially the case given that the above post has significantly more substance and recommendations to it, but this one comment focuses solely on Zoe Cremer. It worries me a bit that it was upvoted as much as it was.
For the record, I think some of Zoe's recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think ...
I was reacting mostly to this part of the post:
I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing
...
Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good faith efforts at rectification.
I think it's fine for a co...
This discussion here made me curious, so I went to Zoe's Twitter to check out what she's posted recently. (Maybe she also said things in other places, in which case I lack info.) The main thing I see her taking credit for (by retweeting other people's retweets saying Zoe "called it") is this tweet from last August:
...EA is to me to be unwilling to implement institutional safeguards against fuck-ups. They mostly happily rely on a self-image of being particularly well-intentioned, intelligent, precautious. That’s not good enough for an institution that prizes i
A few disjointed challenges, basically:
There's a false dichotomy implied in this post: that high trust and less governance go together, and that strong governance means low trust. I don't think this is the case; in fact, you often create high trust by having appropriate, strong governance, and a lack of governance can destroy trust.
I also find the characterisation of governance as being there to protect against bad actors narrow, even naive; ...
Strong disagree.
I would have qualified my statements even if I were 99% confident that SBF had committed outright fraud. I would expect more and more information to come to light that could nuance my views (if not outright change them), which would lead me to want to change my public position.
Moreover, I felt Will's statement was quite strong in outright denouncing SBF, and had this qualifier as an afterthought, not in the foreground.
I didn't take issue with Holden's post. For me the subtext was "I'm going to be giving cold tak...
I really appreciated your comment, and I think it's important to acknowledge neurodiverse people and ensure they feel welcome. I'm coming from a place of agreeing with Maya's reflections on emotions within EA, and I am neurotypical.
Not sure I have time to post my thoughts in depth, but I think the rational vs. intuitive emotional intelligence tension within EA is something worth a lot more thought. It's a tension / trade-off I've picked up on in the EA professional realm: where people aren't getting on in EA organisations, where people aren't feeling heard, and...
howdoyousay - thanks for this supportive post.
I agree that many neurodivergent people can develop quite a good set of emotional skills (like some of your Bay Area rationalists did), and can promote emotionally responsive and caring environments.
(When I teach my undergrad course on 'Human Emotions' -- syllabus here -- one of my goals is to help neurodivergent students improve their understanding of the evolutionary origins and adaptive functions of specific emotions, so they take them more seriously as human phenomena worth understanding.)
M...
Great post, thank you for writing it. It definitely makes me re-evaluate my own priors on fraud, and also think about structural risks inherent in companies where:
Strong upvote from me. I really appreciated the frank sharing of your experience and also that it was playfully written (and more on this forum could be!)
Particularly keen on the policy mandating anonymous donations, and building a receiving organisation to do this and (presumably) pool all the different donations for cause / intervention X together and administer them? TBH, it seems like a no-brainer to do in the first place if governance were being prioritised. The main reason I think you would keep the close donor-adviser relationship in place would be:...
I think seeing it as "just putting two people in touch" is narrow. It's about judgement on whether to get involved in a highly controversial commercial deal which was expected to significantly influence discourse norms, and therefore polarisation, in years to come. As far as I can tell, EA overall and Will specifically do not have skills / knowhow in this domain.
Introducing Elon to Sam is not just like making a casual introduction; if everything SBF was doing was based on EA, then this feels like EA wading in on the future of Twitter via the influence of SBF...
Strong upvote. It's definitely more than "just putting two people in touch." Will and SBF have known each other for 9 years, and Will has been quite instrumental in SBF's career trajectory - first introducing him to the principles of effective altruism, then motivating SBF to 'earn to give.' I imagine many of their conversations have centred around making effective career/donation/spending decisions.
It seems likely that SBF talked to Will about his intention to buy Twitter/get involved in the Twitter deal, at the very least asking Will to make the in...
This seems like an obviously good thing to do, but I would challenge us to think about how to take it further.
One further thought - is there something the EA community can do on mental health / helping those affected that goes wider than within the EA community?
A lot of people will have lost a great deal of savings, potentially worse than that. Supporting them matters no less than supporting EAs, beyond the fact that it's easier to support other EAs because of established networks. Ironically, that argument leaning into supporting just other EAs is the typ...
The announcement raises the possibility that "a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem." But it seems to me that if someone successfully argues for this position, they won't be able to win any of the offered prizes.
Thanks for clarifying that this is in fact the case, Nick. I get how setting a benchmark - in this case an essay's persuasiveness at shifting the probabilities you assign to different AGI / extinction scenarios - makes it easier to judge across the board. But as someone who w...
Three things:
1. I'm mostly asking for any theories of victory pertaining to causes which support a long-termist vision / end-goal, such as eliminating AI risk.
2. But also interested in a theory of victory / impact for long-termism itself, in which multiple causes interact. For example, if
then the components of a theory of victory / impact could be...:
Agree there's something to your counterarguments 1-3, but I find the fourth less convincing, maybe more because of semantics than actual substantive disagreement. Why? A difference in net effect of x-risk reduction of 0.01% vs. 0.001% is pretty massive. Such differences especially matter if the expected value is massive, because sometimes the same expected value holds across multiple areas. For example, preventing asteroid-related x-risk vs. AI vs. bio vs. runaway climate change: (by definition) all the same EV (if you take the arguments at face value). But the plausibility of each approach, and of individual interventions within it, would be pretty high-variance.
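To make the arithmetic concrete, here's a minimal worked example with assumed numbers (the $N = 10^{16}$ figure is purely illustrative, not a claim from the comment above):

$$\mathrm{EV} = N \times \Delta p$$

where $N$ is the number of future lives at stake and $\Delta p$ is the absolute reduction in extinction probability. Then:

$$10^{16} \times 0.0001 = 10^{12} \qquad \text{vs.} \qquad 10^{16} \times 0.00001 = 10^{11}$$

so a 0.01% reduction saves ten times as many expected lives as a 0.001% reduction, even though both percentages look similarly tiny at a glance.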
The more I reread your post, the more I feel our differences might be more a matter of nuance, but I think your contrarian / playing-to-an-audience-of-cynics tone (which did amuse me) makes them seem starker?
Before I ~~grace you with more sappy reasons why you're wrong, and sign you up to my life-coaching platform[1]~~ counter-argue, I want to ask a few things...
I agree with this in principle... But there's a delicious irony in the idea of EA leadership (apols for singling you out in this way, Ben) now realising "yes, this is a risk; we should try and convince people to do the opposite of it", and not realising the risks inherent in that.
The fundamental issue is the way the community - mostly full of young people - often looks to / over-relies on EA leadership for ideas of causes to dedicate themselves to, but also for ideas about how to live their lives. This isn't necessarily the EA leadership's fault, but it's not as if ...
I didn't down/up-vote this comment, but I feel the down-votes without explanation and critical engagement are a bit harsh and unfair, to be honest. So I'm going to try and give some feedback (though a bit rapidly, and maybe too rapidly to be helpful...)
It feels like just a statement of fact to say that IQ tests have a sordid history, and that concepts of intelligence have been weaponised against marginalised groups historically (including women, might I add to your list ;) ). That is fair to say.
But reading this post, it feels less interested i...
Thank you for your comment; at first I did not understand the downvotes, or why I wasn't getting any kind of criticism.
I agree with what you say about my comment: it would not contribute anything to Olivia's post. I realized this within hours of writing it, but I did not want to delete or edit it. I prefer that the mistakes I make remain visible so that I can study my possible evolution over the near-to-medium term.
...But reading this post, it feels less interested in engaging with the OP's post let alone with Linch's response, and more like th
Thanks for your open and thoughtful response.
Just to emphasise, I would bet that ~all participants would get a lot less value from one / a few doom circle sessions than they would from:
The original CFAR alumni workshop included a warning:
"be warned that the nature of this workshop means we may be pushing on folks harder than we do at most other CFAR events, so please only sign up if that sounds like something that a) you want, and b) will be good for you."
I'm struggling to understand the motivations behind this.
Reading between the lines, was there tacit knowledge among the organisers that this was somewhat experimental, and that it could perhaps lead to great breakthroughs and positive emotions as well as the opposite; but cou...
I'm struggling to understand why anyone would choose one big ritual like 'Doom circles' instead of just purposefully inculcating a culture of openness to giving / receiving critique that is supportive and can help others. And I have a lot of concerns about unintended negative consequences of this approach.
Overall, this runs counter to my experience of what good professional feedback relationships look like;
Is it all a bit too convenient?
There's been lots of discussion about EA having so much money, particularly long-termist EA; worries that this means we are losing the 'altruist' side of EA as people get more comfortable and work on more speculative cause areas. This post isn't about what's right / wrong or what "we should do"; it's about reconciling the inner tension this creates.
Many of us now have very well-paid jobs in nice offices with perks like table-tennis. And many people are working on things which often yield no benefit to humans...
Your question seems to be both about content and interpersonal relationships / dynamics. I think it's very helpful to split out the differences between the groups along those lines.
In terms of substantive content and focus, I think the three other responders outline the differences very well, particularly on attitudes towards AGI timelines and the types of models they are concerned about.
In terms of the interpersonal dynamics, my personal take is that we're seeing a clash between left / social-justice and EA / long-termism play out more strongly in this content area than in most others, t...
To add to the other papers coming from the "AI safety / AGI" cluster calling for a synthesis in these views...
Thanks for writing this, completely agree.
I'd love it if the EA community were able to have increasingly sophisticated, evidence-backed conversations about e.g. mega-projects vs. prospecting for and / or investing more in low-hanging fruit.
It feels like it will help ground a lot more debates and decision making within the community, especially around prioritising projects which might plausibly benefit the long term future compared with projects we've stronger reasons to think will benefit people / animals today (albeit not an almost infinitely large number of people / animals).
But also, you know, an increasingly better understanding of what seems to work is valuable in and of itself!
Cross-post to Leftism Virtue Café's commentary on this: https://forum.effectivealtruism.org/posts/bsTXHJFu3Srurbg7K/leftism-virtue-cafe-s-shortform?commentId=Q8armqnvxhAmFcrAh
Equally, there's an argument for thanking and replying to critical pieces made against the EA community which honestly engage with the subject matter. This post (now old) making criticisms of long-termism is a good example: https://medium.com/curious/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982
I'm sure / really hope Will's new book does engage with the points made here. And if so, it provides the rebuttal to those who come across hit-pieces and take them at face value, or those who promulgate hit-pieces because of their own ideological drives.
Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like, so we can prepare ourselves and let our more reflective selves come to the fore, rather than our reactive child selves.
In fact, I think I'll reflect on this list for a long time to ensure I continue not to respond on Twitter!
Agreed, and I was going to single out that quote for the same reason.
I think that sentence is really the crux of imposter syndrome. I think it's also, unfortunately, somewhat uniquely triggered by how EA philosophy is a maximising philosophy, which necessitates comparisons between people or 'talent' as well as cause areas.
As well as individual actions, I think it's good for us to think more about community actions around this, as any intervention targeting the individual without changing the environment rarely makes the dent needed.
Full disclosure: I'm thinking about writing up the ways in which EA's focus on impact, and the amount of deference to high-status people, create cultural dynamics which are very negative for some of its members.
I think that we should just bite the bullet here and recognise that the vast majority of smart dedicated people trying very hard to use reason and evidence to do the most good are working on improving the long run future.
It's a divisive claim, and not backed up with anything. By saying 'bite the bullet', it's like you're taunting the rea...
I do suspect there is a lot of interaction happening between social status, deference, elitism and what I'm starting to feel is more of a mental health epidemic than a mental health deficit within the EA community. I suspect it's good to talk about these together, as things that go hand in hand.
What do I mean by this interaction?
Things I often hear, which exemplify it:
I'd add a fifth: one about individuals personally exploring ways in which an EA mindset, and / or taking advice / guidance on lifestyle or career from their EA community, has led to less positive results in their own lives.
Some that come to mind are:
- Denise's post "My mistakes on the path to impact": https://forum.effectivealtruism.org/posts/QFa92ZKtGp7sckRTR/my-mistakes-on-the-path-to-impact
- And, though I can't find it, the post about how hard it is to get a job in an EA organisation, and how demoralising that is (among other points).
Here's a podcast I listened to years ago which has influenced how I think about groups and what to be sceptical about, most specifically what we choose not to talk about.
This is why I'm somewhat sceptical about how EA groups would respond to an offer of an ethnography; what do people find uncomfortable to talk about with a stranger observing them, let alone with each other?
...One way to approach this would simply be to make a hypothesis (i.e. the bar for grants is being lowered, we're throwing money at nonsense grants), and then see what evidence you can gather for and against it.
Another way would be to identify a hypothesis for which it's hard to gather evidence either way. For example, let's say you're worried that an EA org is run by a bunch of friends who use their billionaire grant money to pay each other excessive salaries and sponsor Bahama-based "working" vacations. What sort of information would you need in order t
Interesting, I think it's the other way round; there are tonnes of companies and academic groups who do action-oriented evaluation work which can include (and, I reckon, in some cases exclusively be) ethnography. But in my experience the hard part is always "what can feasibly be researched?" and "who will listen and learn from the findings?" In the case of the EA community, this would translate to something like the following, ranked in order from hardest to simplest...:
I've been thinking about this very thing for quite some time, and have been thinking up concrete interventions to help the ML community / industry grasp this. DM me if you're interested in discussing further.