This is a special post for quick takes by Vaipan. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Reading MacAskill's AMA from four years ago about what would kill EA, I can't help but find his predictions chillingly realistic!

  1. The brand or culture becomes regarded as toxic, and that severely hampers long-run growth. (Think: New Atheism) = The OpenAI reshuffle and the general focus on AI safety have increased mainstream wariness of EA
  2. A PR disaster, esp among some of the leadership. (Think: New Atheism and Elevatorgate) = SBF debacle!
  3. Fizzle - it just ekes along, but doesn’t grow very much, loses momentum and goes out of fashion = This one hasn't happened yet, but it's the obvious structural risk and could still occur

When will we learn? I feel that we haven't taken seriously the lessons from SBF, given what happened at OpenAI and the split in the community over support for Altman and his crazy projects. Also, as a community builder who talks to a lot of people and does outreach, I hear a lot of harsh criticism of EA ('self-obsessed tech bros wasting money'), and while it's easy to think these people speak out of ignorance, ignoring the criticism won't make it go away.

I would love to see more worry and more action around this. 

I feel that we haven't taken seriously the lessons from SBF, given what happened at OpenAI

How specifically? Seems to me you could easily argue that SBF should make us more skeptical of charismatic leaders like Sam Altman.

Absolutely, and yet a large part of EAs seem to be pro-Altman! That was my point; I might not have been clear enough. Thanks for calling attention to this.

Absolutely, and yet a large part of EAs seem to be pro-Altman!

What makes you think a large part of EAs are pro-Altman? My impression is that this is not true, and I cannot come up with any concrete example.

It's what I've seen. Happy to be wrong. It's an impression--I didn't record in a notebook every time someone supported Altman, but I've read it quite a lot; just like you, I can't prove it.

I'm happy to be wrong--but I'm not sure downvoting me to hell will make the threats mentioned in my quick take go away.

I don't feel like they are pro-Altman in general, but I'm not sure. Maybe they were in the past, when OpenPhil funded OpenAI.

When will we learn? I feel that we haven't taken seriously the lessons from SBF, given what happened at OpenAI and the split in the community over support for Altman and his crazy projects.

Huh? What's the lesson from FTX that would have improved the OpenAI situation?

Don't trust loose-cannon individuals? Don't revere a single individual and trust him with deciding the fate of such an important org?

To the extent that EA can be considered a single agent that can learn and act, I feel like 'we' just made an extraordinary effort to remove a single revered individual, an effort that most people regard as extremely excessive. What more would you have had the board do? I can see arguments that it could have been done more skillfully (though these seem like Monday-morning quarterbacking, and are made on incomplete information), but the magnitude and direction seem like what you are looking for?

The board did great; I'm very happy we had Tasha and Helen on the board to make AI safety concerns prevail.

What I've been saying from the start is that this opinion isn't what I've seen in Twitter threads within the EA/rationalist community (I don't give much credence to tweets, but I can't deny the role they play in the cultural framework around AI safety), or even on the EA Forum, Reddit, etc. Quite the opposite, actually: people advocating for Altman's return and heavily criticizing the board for their decision (I don't like the shadiness surrounding how the board acted, but I nevertheless think it was a good decision).

The Economist describes EAs as 'Oxford University philosophers who came up with the name in 2011, New York hedge-fund analysts and Silicon Valley tech bros'; while many might find this exaggerated, I think it's an accurate description of the image projected by the loudest voices in our community, and if we want our policy recommendations to be taken seriously, we should actively aim to change it.

Disastrous experiences undergone by women in AI safety (https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female), hosts and guests of the 80k podcast laughing at the 'wokeness' of this or that when civil rights/feminism are brought into a conversation, the constant refusal to admit that EA has a problem with sexism and homogeneous cultural norms (see all posts related to diversity + https://forum.effectivealtruism.org/posts/W8S3EuYDWYHQxm77u/racial-demographics-at-longtermist-organizations), and posts on LessWrong discussing foetuses' sentience without once mentioning reproductive rights are, I think, strong reasons why we are seen as such an elitist, un-diverse, and culturally closed community.

The frequency at which these things happen is enough to show that these issues are not one-off, marginal occurrences.

We can do better. We should do better. And if we don't tackle these issues seriously and keep being in denial, we will be unable to pass AI safety regulations or be taken seriously when we talk about existential risks, because people will brush them off as a 'tech bro thing'. And I must say, I had the same reaction before reading up on the topic: the off-putting aspect of the culture around GCRs is so strong that even when you do care about that stuff, it's hard not to be repelled forever. And the external world cares about that stuff, fortunately for me, and unfortunately for some of you!

If you are truly worried about GCRs, consider these issues and try to talk about them with community members. We cannot just stay among ourselves and pat ourselves on the back for creating efficient charities. Also, talk to me if you recognize this cultural off-puttingness I'm talking about: I'm preparing a series of posts on diversity and AI and need to back it up as much as I can, despite the youth of the field.

Downvoting this quicktake won't make these issues go away; if we are real truth-seekers, we cannot stay in denial. 

Some reactions I have to this:

  • In my (limited) personal experience, AI safety / longtermism isn't diverse along racial or gender lines, which probably indicates talented people aren't being picked up. Seems worth figuring out how to do a better job here. Similarly for EA as a whole, although this varies between cause area (iirc EA animal advocacy has a higher % of women than EA as a whole?)
  • I'm genuinely unsure how accurate / fair the statement "EA has an issue of sexism" is. But certainly there is a nonzero amount, which is more than there should be, and the accounts of sexism and related unwelcome-attitudes-toward-women in the community make me very sad.
  • The optimal amount of "cultural offputtingness" is not zero. It should be possible to "keep EA weird" without compromising on racial/gender/etc inclusion, and there are a lot of contingent weird things about EA that aren't core to its goodness. But there are also a lot of ways I can see a less-weird EA being a less-good EA overall. 
  • The link between increased diversity / decreased tech-bro reputation and passing AI safety regulations seems tenuous to me.
    • I have a general, vague sense that "do this for PR reasons" is not a good way to get something done well
    • It doesn't seem like public perception updates very frequently (to take one example, here's Fast Company two days ago saying ETG is the "core premise" of EA). I don't think we should completely give up here, but unfortunately the "EA = techbro" perception is pretty baked in and I expect it to only change very gradually, if at all.
    • EA is also not very politically diverse -- there are very few Republicans, and even the ones that are around tend to be committed libertarians rather than mainstream GOP voters. If we're just considering the impact on passing AI safety regulations, having a less left-leaning public image could be more useful. (For the reasons in the two bullet points above though, I'm also skeptical of this idea; I just think it has a similar a priori plausibility.)
  • On reflection, I think the somewhat combative tone (framing disagreement as "refusal to admit" and being "in denial") is fine here, but it did negatively color my initial reading, and probably contributed to some downvotes / disagree votes.

Two more nitpicky points:

hosts and guests of the 80k podcast laughing at the 'wokeness' of this or that when civil rights/feminism are brought into a conversation

A Google search turned up one instance of a guest discussing wokeness, which was Bryan Caplan discussing why not to read the news:

(15:45) But the main thing is they’re just giving this overwhelmingly skewed view of the world. And what’s the skew exactly? The obvious one, which I will definitely defend, is an overwhelming left-wing view of the world. Basically, the woke Western view is what you get out of almost all media. Even if you’re reading media in other countries, it’s quite common: the journalists in those other countries are the most Westernised, in the sense of they are part of the woke cult. So there’s that. That’s the one that people complain about the most, and I think those complaints are reasonable.
But long before anyone was using the word “woke,” there’s just a bunch of other big problems with the news. The negativity bias: bad, bad, bad, sad, sad, sad, angry, angry, angry.

This wasn't in the context of civil rights or feminism being discussed, and I couldn't find any other instances where that was the case. Rob doesn't comment on the "woke" bit here one way or another, and doesn't laugh during these paragraphs. So unless there's an example I missed, I think this characterization is incorrect.
 

posts on LessWrong discussing foetuses' sentience without once mentioning reproductive rights

This is probably an example of decoupling vs contextualizing norms clashing, but I don't think I see anything wrong here. Whether or not a fetus is sentient is a question about the world with some correct answer. Reproductive rights also concern fetuses, but don't have any direct bearing on the factual question; they also tend to provoke heated discussion. So separating out the scientific question and discussing it on its own seems fine.

  • Yes, this is exactly the issue. Talent isn't being picked up. If we are going to do good for future beings, we need to take into account as many perspectives as we can instead of staying within the realm of our own male-centred Western narratives.
  • Many posts on the EA Forum about diversity show how bad EA can be for women. The TIME article on sexual assault is just the tip of the iceberg.
  • Being weird is fine (e.g. thinking about far-fetched ideas about the future). Calling out sexism is not incompatible with that.
  • The thing is, doing it just to 'reduce sexism and improve women's wellbeing in EA' is clearly not a worthy cause for many here. So I guess I have to use arguments that make sense to others. And this is a real issue: EA ideas, and thus funding in the right direction, could be so much more widespread and accepted without all these PR scandals.

The hostile tone has to do with being tired of having to advocate for the simplest things. The same comments appear on every post denouncing diversity issues: 'it is not a priority', 'there is no issue', 'stop being a leftist'.

People who downvote have probably not even read the forum post on abuse in AI spheres, even though it shows how ingrained sexism is in this Silicon Valley culture. They don't care, because it doesn't concern them. Wanting the wellbeing of animals is all good and fine, but when it comes to women and people of colour, it becomes political, so yes, there is denial. Animals can't speak, so they can't upset anyone. Women and people of colour speak and ask for more justice--and that's where it becomes political, because then these men have to share power and acknowledge harm. So I don't think 'denial' is too strong a word.

When your life is at stake--when women are being harassed, raped, and denied control over their own bodies and lives--the tone can get hostile. I have something to lose here; for those who downvote me, it's just another intellectual topic. I won't apologize for that.

brook (Moderator Comment):

I think this kind of discussion is important, and I don't want to discourage it, but I do think these discussions are more productive when they're had in a calm manner. I appreciate this can be difficult with emotive topics, but it can be hard to change somebody's mind if they could interpret your tone as attacking them.

In summary: I think it would be more productive if the discussion could be less hostile going forwards.

Also, talk to me if you recognize this cultural off-puttingness I'm talking about: I'm preparing a series of posts on diversity and AI and need to back it up as much as I can, despite the youth of the field.

The "please send me supporting anecdotes" method of evidence gathering.

Well, that is one step among others, and asking is better than not asking and acting as if there were no issues at all. I didn't specify the epistemic weight I would give these testimonies, so this is a sneaky comment.

But I was expecting you: you never fail to comment negatively on posts that dare bring up these issues. For someone who clearly says, in a comment under a post about political opinions in EA, that we need more right-wingers in EA, and who also says that EA shouldn't carry leftist discourses to avoid being discredited, you sure are consistent in your fights. Nothing much about the content of the post, though, so I guess you didn't have much to say aside from inferring the epistemic value I'd put on anecdotal data.

For those who would worry about the 'personal aspect' of this comment: when you see a pattern of someone constantly advocating against a topic every time it's brought up on the forum, it seems legitimate to me to ask why. There is motivated reasoning here--I don't expect objectivity on this topic from someone who so openly shows their political camp. Since Larks isn't attacking anything content-wise about the post other than an assumption about methodology, I feel justified in noting Larks's lack of objectivity.

That is all I needed to say; I won't comment further on my side, to avoid escalation. I just want people to have a clear picture of who is commenting here and the motivation behind it.

Does making an impact justify a careless attitude about harming the planet?

So, everyone loves EA for its impact. However, I often see well-known EA orgs that clearly behave in harmful ways. Read this:

'We choose to live in sunny, warm places all year long in an endless summer. This year we’ll be staying the winter in the Caribbean then going to EA hubs (Bay Area and London) in the summer time.', from a famous EA org (which I won't name and shame, but you can ask me in private if you want to avoid them). This is from a job ad offering funds to create your own EA hiring agency. Moreover, the Bay Area and London are very costly, and living there clearly increases your carbon footprint. If you have a choice of where to live, shouldn't you think about this?

I often see EAs taking planes for long distances without caring a bit about their carbon footprint. These are rich, mobile people who like to travel from conference to conference, but... for what impact? Does saving a child from malaria allow you to pollute? These things aren't compatible in my mind.

I wish this community were more sensitive to these ideas. It's not because we don't deem climate change a priority that we should turn a blind eye to these behaviours. It is absurd to me. There seems to be little room between fully ascetic EAs who live for impact and don't allow themselves any personal pleasure, and people who don't even think about their carbon footprint. Should we follow the data and encourage people to include their carbon footprint in their daily thinking? I wish we did better.

the Bay Area and London are very costly, and living there clearly increases your carbon footprint

I'm not going to dispute that those places have very high costs of living, but for carbon footprint, are you sure? I thought that the general trend was that cities (or any dense population area) tend to have lower carbon footprints due to more public transportation, more walking, and the general consolidation of everything. Am I mis-remembering this?

What you write strikes me as true: there does seem to be a tendency for EAs to focus on their particular cause area and to neglect/ignore other areas of ethical behavior. Here is my very rough typology, off the top of my head:

Sometimes these things look bad, but actually make sense. I'm sure we've all heard stories about people who pay someone to clean their house or to wash their dishes, and that looks really weird to me. But if you are actually producing something of great value in the 10 minutes it would take you to wash your dishes, then I can see how it kind of makes sense to pay someone else a modest amount of money to wash your dishes for you. Paying for a taxi rather than taking the city bus seems wasteful to me, but if you can actually get 30 minutes of work done in the taxi that you wouldn't be able to do on the bus, then I can understand the logic of it. Team retreats might fall into this category: a dozen people getting together at a tropical Airbnb using donor money is bad optics, but if it comes out net positive when we measure the gains, I don't really have many complaints about it.

Sometimes there is simply ethical fading in one area because of such a large focus on another. My best guess is that flights, consumption, and carbon footprint mostly fall into this category. Think of people who work on X-risk yet eat dead animals. Or think of me when I didn't think about my carbon footprint while flying to a new city to meet EA people and talk about impact; I was thinking about networking rather than about climate.

And sometimes people just make silly or bad decisions. Person_A slept with Person_B, even though they work at the same small organization. Was that a great decision? No, they made a mistake. John_Doe_Org_Leader spoke dismissively and unkindly to a volunteer at an event. Not ideal, but a fumble rather than a travesty.

I do think that we should try harder, but at the same time I don't feel thrilled about asking someone who is dedicating so much energy to dedicate even more.

I think there are good arguments for thinking that personal consumption choices have relatively insignificant impact when compared to the impact you can have with more targeted work.

However, I also think there's likely to be some counter-countersignalling going on. If you mostly hang out with people who don't care about the world, you can't signal uniqueness by living high. But when your peers are already very considerate, choosing to refrain from refraining makes you look like you grok the arguments in the previous paragraph—you're not one of those naive altruists who don't seem to care about 2nd, 3rd, and nth-level arguments.

Fwiw, I just personally want the EA movement to (continue to) embrace a frugal aesthetic, irrespective of the arguments re effectiveness. It doesn't have to be viewed as a tragic sacrifice and hinder your productivity. And I do think it has significant positives on culture & mindset.

I think this is a useful question and I'm glad to be discussing this.

I agree with many of your concerns - and would love to see a more culturally unified EA on the axis of how conscious we are of our own impact - but I also think you're failing to acknowledge something crucial: as much as EA is about altruism, it is also about focusing on what's important, and your post doesn't acknowledge this as a potential trade-off for the folks you're discussing.

You'll find a lot of EA folks perceive climate change as a real problem but also perceive marginal carbon costs as not a thing worth focusing on given all the other problems in the world and the fact that carbon is offsetable. You are reading this as a "careless attitude" but I don't think this is a fair characterization. There are real tradeoffs to be made here about how to use marginal attention; they may be offsetting and just not talking about it, or deciding that it's not going to make enough difference in the short run, but regardless I think you have insufficient evidence to conclude that their attitude is wrong.

(I personally offset all my CO2 with Wren and think for at least 5 minutes about each plane flight I decide to take to decide if it is worth it; but have never written about this till now, and would have no reason to bother writing it down.)
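(If it helps make that concrete: below is a minimal back-of-envelope sketch, in Python, of what such a five-minute check might look like. Every number in it is an illustrative assumption of mine, not a measured value: roughly 1 tCO2e for a long-haul round trip, an assumed offset price, and an assumed social cost of carbon.)

```python
# Back-of-envelope sketch of an "is this flight worth it?" check.
# Every constant below is an illustrative assumption, not a measured value.

FLIGHT_EMISSIONS_TCO2E = 1.0  # assumed long-haul round trip, tonnes CO2e
OFFSET_PRICE_USD = 30.0       # assumed offset price per tonne, USD
SOCIAL_COST_USD = 200.0       # assumed social cost of carbon per tonne, USD

def flight_worth_it(expected_trip_value_usd: float, offsets: bool) -> bool:
    """Compare the trip's expected value against its carbon cost.

    If a genuinely counterfactual offset is bought, count its price;
    otherwise count the (much higher) assumed social cost of the carbon.
    """
    per_tonne = OFFSET_PRICE_USD if offsets else SOCIAL_COST_USD
    return expected_trip_value_usd > FLIGHT_EMISSIONS_TCO2E * per_tonne

# Example: a trip whose expected value you estimate at $150
print(flight_worth_it(150.0, offsets=True))   # True:  150 > 1.0 * 30
print(flight_worth_it(150.0, offsets=False))  # False: 150 < 1.0 * 200
```

(The hard part, of course, is the expected_trip_value_usd input, which is exactly what the reply below says can't really be calculated.)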

I personally offset all my CO2 with Wren

And yet I highly doubt that most EAs do that. You say that carbon is offsettable, but that is still vigorously debated. The measures we take to offset that carbon often won't remove it for years, if not centuries.

For someone who goes to a conference, how can they really measure the trade-offs? Meeting one person who helps them get an EA job, plus ten other people from other contexts? It sounds hypocritical. The truth is, it's hard to honestly calculate the impact you're having at these conferences, because the results take years; the carbon, however, is spent. Here. Now. And seeing global warming as 'marginal' is a grave error to make, IMO.

These folks justify their highly carbon-intensive cost of living by saying they make an impact elsewhere, but they can't really calculate it.

All this doesn't make my post less relevant: 1) we need to talk about it more and have some kind of pledge/be transparent about it; 2) we need to do something about this carelessness, which stems from a lack of accountability.

Here, "marginal" means "on the margin" -- would it be better for me to have spent a certain amount of attention on this issue or a different issue? The word can mean something "of little importance" in other contexts, though.

I share your general skepticism about offsets -- it is possible, but you have to be really careful the offset is actually counterfactual (e.g., that it results in the creation of a good thing that wouldn't have happened but for you paying the offset). Don't know about Wren specifically.

It's not at all obvious to me that marginal carbon actually cashes out as bad even in expectation.

I don't have lots of context about that team/org, but from what I've seen online I do think there might be some issues with that particular team of people that are not representative of the issues within EA more broadly.

I do hope they are not representative. I'm really hoping that we'll get statistics about EAs' behaviour when it comes to carbon footprint. I know there's a big silent mass of EAs in low-income countries whose carbon footprint is close to nil compared to wealthy northerners'. I just wonder what share of EA those wealthy northerners make up, since we hear from them the most.
