Geoffrey, or anyone really, can you please define wokeness?
I fail to see how EA's vague opposition to wokeness in partisan culture wars is anything more than an internecine credible threat to open society. I'm a neurodivergent, self-identified Black American EA who was moved by, and still respects, your article on viewpoint diversity and neurodiversity, but who pragmatically votes on the left as a transpartisan because I don't see another middle way that isn't omnicidal.
With genuine respect, I find the blanket dismissals of wokeness to be extremely inflammatory and ineffective at eliciting the calm, respectful pushback from people who want to break new ground that you/EA/we(?) are looking for.
Also, thank you, Lauren, Nick and others for bringing attention to this.
I think if we found a comment that you considered racist/sexist and asked the author if they thought their comment was racist/sexist, the author would likely say no.
James Watson's denial of having made racist statements is a social fact worth noting. Most 'alt-center' (etc.) researchers in HBD, and the latest thinking on euphemisms intended to scientifically reappropriate racism for metapolitical and game-theoretic purposes, will, perforce, never outright say this.
To be clear, I don't think many EAs are formally working in race science, and surely skeptical and morally astute EAs can have the integrity to admit to having made racist comments, or to reasonably disagree. (And no: as an African American EA on the left, I don't think we should unsubscribe every HBD-EA, Bostrom, etc., from social life. Instead, we should model a safer environment in which we can all be categorically wrong. Being effective means getting all x-risks, compound x-risks, etc., right the first time.)
But after mulling over most of the HBD-affirming defenses of Bostrom's email/apology that I've read or engaged with on the EA Forum, those that weren't obviously red pills by bad actors (yet were also highly upvoted), I think there are other reasons many of those EAs won't say their comments were racist, even if they themselves aren't actually certain they are non-racist.
My hunch is that whether those EAs see HBD as part of the hard core or the protective belt of longtermism/EA's research program may be a good predictor of whether they believe, and therefore would be willing to say, that their comments were racist.
For these reasons, among others, I think this instance of Hirschman's rhetoric of reaction above is mistaken. It is not disvaluable that community builders in a demographically, socially, and epistemically isolated, elitist, technocratic movement like EA don't let the best possible statement of their stance on these issues become the enemy of a good provisional one.
I was relieved to see this, as well as the fact that Guy made the pushback I wish I'd had time to make three days ago. If there's any way I can support your efforts, please let me know!
1.1. For want of an intensional definition of value alignment.
1.2. I take little pleasure in suggesting that HBD-relevant beliefs, strongly coupled with, e.g., Beckstead et al.'s (frankly narrow and imaginatively lacking) stance on the most likely sources of future economic innovation, which may therefore seem to have greater instrumental value to longtermist utopia, may be one contributing factor to this problem within EA. And even anti-eugenics has its missteps.
This was a distinctively wholesome read. I restarted my (mostly focus) meditation practice late last year, and I have been meaning to leverage that foundation for a loving-kindness practice as well. The details of your post have substantially motivated that intention. Thank you for sharing!
Agreed. I've also seen other studies suggesting that the rate and quality of knowledge production increase with that kind of good-faith dialectical feedback. Makes a lot of sense that some forms of conflict could be quite synergistic. I will definitely give the piece a more thorough review when I get a chance.
I could be wrong, but I didn't see political conflict mentioned specifically in that article, at least not explicitly. I'm not saying it can't reasonably be inferred, but given the politically centrist majority within EA, I just wanted to flag this observation, as it could be misleading (?).
From what I briefly read (and gleaned from asking Ghostreader [GPT-3] in Readwise Reader), the studies found that when there is a lot of different knowledge and experience, increased task conflict (e.g. viewpoint diversity over content of a task) can override other forms of conflict, and actually lead to improved performance. More work here is needed, of course, but thanks for sharing this.
I just realized that I forgot to respond earlier, but your consideration and transparent explanation are appreciated.
Sounds great to me, let's talk!
Can you unpack a little bit of your individual impression of the metacrisis?
I've been trying to pin down this disconnect within the metacrisis space for about six months. I'm not sure how much OP has looked into it, but I'm quite interested to get a broader understanding of EA's take on it.
Thank you for sharing this! I've been casually following the Game B/metacrisis space for only about three years, and after posing this question to the main Game B forums, I didn't get much of a response.
Does the content not resonate well enough with the "techno-utopian approach" that some say is the EA mainstream way of thinking, such that other perspectives are simply neglected?
I'm unfortunately fairly confident that this may be part of the answer, and that this EA criticism (particularly the sections on complex adaptive systems, excessive quantitative reasoning, vulnerability/resilience approaches, etc.) outlines some of the conclusions I've independently come to over the past 6 months.
Do you think that we should engage more with the ongoing work around the metacrisis?
I'd love to see this. I joined an EA co-working space over the summer where I asked about this disconnect. Only 1-2 people had heard of it but I found the feedback from those previously unfamiliar with the metacrisis somewhat promising.
My public notebook is currently down for maintenance, but I hope to share more of my investigations later.
Edit: I initially wrote a more detailed response that I accidentally posted prematurely. But this pessimism-vs.-optimism explanation is quite interesting, and after trying to hastily revise my comment, I think I'll have to reflect on it a bit more and, if you don't mind, pick your brain later.