This is a special post for quick takes by James Herbert. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

My recommended readings/resources for community builders/organisers

Looks like Charity Navigator is taking a leaf from the EA book!

Here they're previewing a new ‘cause-based giving’ tool - they talk about rating charities based on effectiveness and refer to research by Founders Pledge. 

If anyone wants to see what making EA enormous might look like, check out Rutger Bregman's School for Moral Ambition (SMA). 

It isn't an EA project (and his accompanying book has a chapter on EA that is quite critical), but the inspiration is clear and I'm sure there will be things we can learn from it. 

For their pilot, they're launching in the Netherlands, but it's already pretty huge, and they have plans to launch in the UK and the US next year. 

To give you an idea of size, despite the official launch being only yesterday, their growth on LinkedIn is significant. For the 90 days preceding the launch date, they added 13,800 followers (their total is now 16,300). The two EA orgs with the biggest LinkedIn presence I know of are 80k and GWWC. In the same period, 80k gained 1,200 followers (their total is now 18,400), and GWWC gained 700 (their total is now 8,100).[1]

And it's not like SMA has been spamming the post button. They only posted 4 times. The growth in followers comes from media coverage and the founding team posting about it on their personal LinkedIn pages (Bregman has over 200k followers). 

  1. ^

    EA Netherlands gained 137, giving us a total of 2900 - wooo!

When I translated their three "O"s from Dutch to English, they were...

"Bulky, underexposed and solvable"

Sounds a lot like important, neglected and tractable to me?

And then they interviewed Rob Mathers from the Against Malaria Foundation...

I completely agree with James that these guys are showing EA a different way of movement building which might end up being effective (we'll see). It seems like they are building on the moral philosophy foundations of EA, then packaging it in a way that will be attractive to the wider population - and they've done it well. I love this page with their "7 principles" and found it inspiring - I would sign up to those principles, and I appreciated that the scout mindset is in there as well.

 https://www.moreleambitie.nl/grondbeginselen

I do wonder what his major criticisms of EA are though, given that this looks pretty much like EA packaged for the masses, unless I'm missing something.

9
James Herbert
1mo
Yup! The three OOOs are inspired by EA (although a different Dutch foundation should get the credit for the Dutch acronym).[1] The main criticism can be found in chapter 8 of the book (only in Dutch for now). The subheading for this chapter gives a clue: "What you can achieve by radical prioritization, and how your moral ambition can be completely derailed." Spoiler: it's SBF.

The introduction to that chapter closes with the following paragraphs (machine translation):

> "I cannot emphasize enough how important this third point is. Rob [Mather] was equally talented through all three phases, but it wasn't until he took a step back and carefully weighed his options that he started to make a huge difference. So don't start with the question: 'What is my passion?' but with the question 'How can I contribute the most?' – and then choose the role that suits you. Remember: your talent is just a tool, and your ambition is raw energy. The question is what you do with it.
>
> And that also applies to something else. So far I have mainly talked about the waste of talent and ambition. But there is another privilege that we waste on a massive scale: money. In this chapter I will take a step back in time and tell you about my introduction to a young cult that became aware of that. It's a movement that has taken the pursuit of impact to the extreme. A movement that is always looking for the best financial investments with the highest return for as many people and animals as possible. Their story is about how much you can achieve through radical prioritization, but it also shows how your moral ambition can be completely derailed."

His conclusion to the chapter is much more positive about EA, but it's far from a ringing endorsement.

I think this is a very interesting case study from the SBF saga. Yes, public polling suggests it didn't damage the reputation of EA as much as some might have feared. However, it has resulted in a loss of support from potential allies, e.g., Bregman.
3
EffectiveAdvocate
1mo
The sad fact is that this book might be the main way people in the Netherlands learn about the link between SBF and EA. But I guess there is little we can do about it now.

Yes, although I guess it's good that people know the link. We shouldn't hide our mistakes, and I know Bregman likes some of what we do, so there are worse people to have sharing this info with the Dutch population.

3
EffectiveAdvocate
1mo
Yes, I totally agree it is important not to hide our mistakes. I just wish SBF were presented in the context I see it in: as an unbelievable fuck-up / disaster / crime in a community that is at least trying very hard to do good.

Saying it isn't an EA project seems too strong - another co-founder of SMA is Jan-Willem van Putten, who also co-founded Training for Good which does the EU tech policy and Tarbell journalism fellowships, and at one point piloted grantmaker training and 'coaching for EA leaders' programs. TfG was incubated by Charity Entrepreneurship.

You missed the most impressive part of Jan-Willem’s EA CV - he used to co-direct EA Netherlands, and I hear that's a real signal of talent ;)

But yes, I guess it depends on how you define ‘EA project’. They're intentionally trying to do something different, so that's why I don't describe them as one, but the line is very blurred when you take into account the personal and philosophical ties. 

If EA was a broad and decentralised movement, similar to e.g., environmentalism, I'd classify SMA as an EA project. But right now EA isn't quite that. Personally, I hope we one day get there.  

9
Aaron Gertler
22d
I think of EA as a broad movement, similar to environmentalism — much smaller, of course, which leads to some natural centralization in terms of e.g. the number of big conferences, but still relatively spread-out and heterogeneous in terms of what people think about and work on. Anything that spans GiveWell, MIRI, and Mercy for Animals already seems broad to me, and that's not accounting for hundreds of university/city meetups around the world (some of which have funding, some of which don't, and which I'm sure host people with a very wide range of views — if my time in such groups is any indication).

That's my way of saying that SMA seems at least EA-flavored, given the people behind it and many of the causes name-checked on the website. At a glance, it seems pretty low on the "measuring impact" scale, but you could say the same of many orgs that are EA-flavored. I'd be totally unsurprised to see people go through an SMA program and end up at EA Global, or to see an SMA alumnus create a charity that Open Phil eventually funds.

(There may be some other factor you're thinking of when you think of breadth — I could see arguments for both sides of the question!)
7
James Herbert
22d
I'm thinking about power. I don't (yet) liken EA to environmentalism because power is far, far more centralised in EA. As you mentioned, this is probably because we're small and young. I expect this will change in the future.
6
Jamie_Harris
1mo
Side comment / nitpick: Animal Advocacy Careers has 13k LinkedIn followers (we prioritised it relatively highly when I was working there) https://www.linkedin.com/company/animal-advocacy-careers/
1
James Herbert
1mo
Oh nice! Congrats on that. Do you know if it was a good use of resources?
2
Jamie_Harris
1mo
Thanks! IIRC, we focused on it substantially because a lot of the sign ups for our programmes (e.g. online course) were coming from LinkedIn even when we hadn't put much effort into it. The number of sign ups and the proportion attributed to LinkedIn grew as we put more effort into it. This was mostly the work of our wonderful Marketing Manager, Ana. I don't have access to recent data or information about how it's gone to make much of a call on whether it was worth it, relative to other possible uses of our/Ana's time.
1
James Herbert
1mo
Very interesting! We have made exactly the same observation so we’ve started investing in it more, but we’re still learning how best to go about this.
5
anormative
1mo
I'm not suggesting this in any serious way, and I don't know anything about Bregman or this organization, but an interesting thought comes to mind — I've often heard people ask something along the lines of "should we rebrand EA?" and the answer is "maybe, but that's probably not feasible." If this organization is truly so good at growth, is based on the same core principles EA is based on (it might not be, beyond the shallow "OOO"), and it hasn't been aspersed or tarnished by SBF etc — prima facie it might not be so bad for the EA brand to recede and for current EA individuals and institutions to transition to SMA (SoMA?) ones. Edit: it's SfMA, I realize now, but I care too much for my bad pun that I'll keep it there...

I think it's far too early to make judgements about this group's success yet. Hype on social media is different from deep engagement, a vibrant community, and the billions of dollars of donations that EA has.

1
anormative
1mo
Agreed!
1
James Herbert
1mo
This is an insubstantial comment but yes I'm also sad they aren't calling themselves SoMA. 
2
Jamie_Harris
1mo
Not a criticism of your post or any specific commenter, but I think it's a shame (for epistemics related reasons) when discussions end up more about "how EA is X" as opposed to "how true is X? How useful is X, and for what?".
3
James Herbert
1mo
Yeah I see what you’re saying but I guess if you know the answer to the Q ‘is it EA?’ then you have a data point that informs the probability you give to a bunch of other things, e.g., do they prioritise impartiality, prioritisation, open truth seeking, etc., to an unusual degree? So it’s a heuristic. And given they’re a new org it’s much easier to answer the Q ‘is it EA’ than it is ‘is it valuable’. But I agree, knowing whether it’s actually useful is always far more valuable. Apart from anything else, just because the founders prioritise things EAs often prioritise, it doesn’t mean they’re actually doing anything of value.
1
akash
1mo
What do you think is the reason behind such a major growth? What are they doing differently that GWWC or other EA orgs could adopt?
2
James Herbert
1mo
I’m not super closely involved, I just know a few of the key people. That being said: a big name is putting his heart and soul into it, they’ve pulled together a big budget, and they’re very open to doing marketing. They’re also a talented bunch, but I think that’s at least partly downstream from the thing being kicked off by a big name. EDIT: Oh and they are doing something different from EA, so it might just be intrinsically more popular. But I don’t think that’s the main thing going on here.

The latest episode of the Philosophy Bites podcast is about Derek Parfit.[1] It's an interview with his biographer (and fellow philosopher) David Edmonds. It's quite accessible and only 20 mins long. Very nice listening if you fancy a walk and want a primer on Parfit's work.

  1. ^

    Parfit was a philosopher who specialised in personal identity, rationality, and ethics. His work played a seminal role in the development of longtermism. He is widely considered one of the most important and influential moral philosophers of the late 20th and early 21st centuries.

What should the content split at EAGxUtrecht[1] be? Below is our first stab. One of our subgoals is to inspire people to start new projects, hence the heavy focus on entrepreneurship under 'Meta'.

  • Neartermist[2] 35%
    • Global Health & Dev 35%
    • Animal welfare 60%
    • Mental health 5%
  • Longtermist 45%
    • AI risk 50%
    • Biosec 30%
    • Nuclear 10%
    • General longtermist 5%
    • Climate change 5%
  • Meta 20%
    • Priorities research 5%
    • Entrepreneurship skills 85%
    • Community building 5%
    • Effective giving 5%
  • Gender split of speakers: 50:50
  • Proportion of speakers with a strong Dutch connection: 35%
  1. ^

    July 5-7 - be there or be square. Or be there and do square things like check out the world's largest bicycle garage. You do you.

  2. ^

    Yeah, I don't like the terms 'neartermism' and 'longtermism' either, and it's messy, but this is our attempt at organising things. We used RP's 2022 survey's categorisation of the two to guide us, with some small modifications. 
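As a quick sanity check on a nested split like this, each level should sum to 100%, and multiplying through gives each sub-area's effective share of the whole programme (e.g. mental health at 5% of the 35% neartermist block is 1.75% of the total — which is why small-looking subcategories end up as ~1-2% of content). A minimal sketch of that check in Python; the dictionary simply transcribes the numbers above:

```python
# Proposed EAGxUtrecht content split: {category: (top-level %, {sub-area: % within category})}
split = {
    "Neartermist": (35, {"Global Health & Dev": 35, "Animal welfare": 60, "Mental health": 5}),
    "Longtermist": (45, {"AI risk": 50, "Biosec": 30, "Nuclear": 10,
                         "General longtermist": 5, "Climate change": 5}),
    "Meta": (20, {"Priorities research": 5, "Entrepreneurship skills": 85,
                  "Community building": 5, "Effective giving": 5}),
}

# Top-level categories should cover the whole programme.
assert sum(top for top, _ in split.values()) == 100

for name, (top, subs) in split.items():
    # Each category's sub-areas should also sum to 100% of that category.
    assert sum(subs.values()) == 100, name
    for sub, pct in subs.items():
        # Effective share of the total programme for each sub-area.
        print(f"{sub}: {top * pct / 100:.2f}% of total")
```

This is just an illustrative transcription of the proposal, not anything the organisers published as code; the useful output is the effective-share column, which makes trade-offs across categories directly comparable.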

How many talks are you expecting to have? These seem very prescriptive, and things like multiple 1% categories will be difficult to achieve if you have <100 talks. I would worry that a strict focus on distribution like this would lead to having to sacrifice quality.

Given that EAGx Utrecht might be the most convenient EAGx for a good chunk of Western Europe, I'm not sure how important it is to have a goal for a % speakers with strong Dutch connections rather than Europe connections. But the density of talented Dutch folk in the community is very high, so you might hit 35% without any specific goal to do so.

2
EffectiveAdvocate
1mo
Out of curiosity, why do you think this is the case? Isn't the Berlin and Nordics conference (and the London EAG) much more accessible for most EAs in Western Europe?  (Also, personally I assumed that the 35% was not a goal but a maximum to make sure the speakers are not from the Netherlands too much.)
3
James Herbert
1mo
Three factors I’d say. Firstly, population density: there are about 15 million people within 100km of Utrecht, compared with 6 million for Berlin and 4 million for Copenhagen. Secondly, location: Berlin is actually quite far east, I’d say it’s more Central Europe than Western Europe, and obviously Copenhagen is more Northern European. This means that, whereas Utrecht is an afternoon’s train ride from some of the biggest Western European metropoles (London, Brussels, and Paris), the equivalent journeys to CPH/BER are 8+ hours. Thirdly, air connectivity: Schiphol scores much higher on direct connectivity than both CPH and BER. To sense-check this, I just Googled flight frequency for Rome. AMS has about 180 per month whilst BER and CPH have around 80 per month.
3
Lorenzo Buonanno
1mo
You know much more than I do, but I would be surprised if these were the most relevant factors.

1. People within 100km of Utrecht are still mostly in the Netherlands, or at least are likely to have a strong Dutch connection.
2. I know a surprising amount of people interested in these events really value limiting their flights for environmental reasons, so this might be true.
3. Berlin is extremely well-connected. I don't think anyone didn't go to EAGx Berlin for lack of flights. If anything, plane tickets from Rome, Milan and Paris are slightly cheaper for Berlin vs Schiphol.

In my limited experience, the two most important factors for the people who get the most value from these conferences are:

1. Timing: people are busy, so they might e.g. have to defend their PhD thesis on the same day of EAGx (real example)
2. Acceptance rates: for some people from Italy who just went through an intro program, either Berlin, Rotterdam/Utrecht, or Nordics could be the most convenient because they wouldn't get accepted into the others

In any case, I would expect people who find Utrecht more convenient than other EAGxs for whatever reason will also find opportunities presented by Dutch-connected speakers more valuable than the typical EAGx participant, so it might make sense to lean into that. I wouldn't be surprised if the ideal number were higher than 35%. Given all the things going on with e.g. The School For Moral Ambition and Doneer Effectief, I would also consider whether having Netherlands-specific events would make sense. Possibly in the spirit of making EA more decentralized, like environmentalism.

But I guess all the above depends heavily on what % of participants live near the Netherlands. Do you know the percentage of people from NL/BE for EAGxRotterdam 2022? (Although that was a while ago.) And I strongly agree with Nick that the quality is more important.
1
James Herbert
1mo
Just to be clear, I was attempting to answer @EffectiveAdvocate's question about why one might think Utrecht is probably the most accessible location for many EAs in Western Europe. I wasn't making this point to defend the 35% figure :)

I wanted to make the point about accessibility because I'm quite certain it isn't the case that Berlin and Copenhagen are much more accessible than Utrecht, and I worry some people will underrate Utrecht's accessibility and therefore choose not to come. I agree timing is probably a more important determiner of attendance than accessibility, that the quality of the speaker should probably be the most important factor when choosing, and I think Catherine makes a very good point about extending our partiality beyond NL.

Re your Q about the national residencies of attendees in 2022, all I have to hand is the following:

  • 41% living in the Netherlands
  • 14% living in Germany

Thanks for your input so far! 

Sounds good overall. 1% each for priorities, cb and giving seems pretty low. 1.75% for mental health might also be on the low side, as there appears to be quite a bit of interest for global mental health in NL. I think the focus on entrepreneurship is great!

3
James Herbert
1mo
Thanks for your thoughts Jelle!
8
NickLaing
1mo
As a side note, I think content split is important, but the quality of presentation / group discussion and the people leading those is more important. Obviously there needs to be a decent content split, but if you have the opportunity to get many really great people presenting great things in one area, I wouldn't necessarily cut some because it exceeds your "content percent budget" or whatever. I haven't organised these kinds of events though, so this comment might not be relevant/helpful.
1
James Herbert
1mo
Thanks!
5
miller-max
1mo
Hi James, thanks for opening this up for feedback. This is a tough one; overall it looks good!

My general point of feedback would be to be more cause-agnostic OR put higher emphasis on "priorities research". For example, I could suggest making 1/5th of the content about priorities research, promoting it to a category of its own, as seen below.

The reason for this is that cause areas & meta have their own communities/conferences already; priorities research, on the other hand, may not so much. And priorities research represents EA's mission of "where to allocate resources to do the most good" most holistically. Then again, I haven't done the thinking you have behind these weights! It may be worth making a survey with 1-100 scales?

  • Neartermist 30% (-5)
    • Global Health & Dev 35%
    • Animal welfare 60%
    • Mental health 5%
  • Longtermist 40% (-5)
    • AI risk 50%
    • Biosec 30%
    • Nuclear 10%
    • General longtermist 5%
    • Climate change 5%
  • Priorities research 20%
  • Meta 10% (-10)
    • Priorities research 5%
    • Entrepreneurship skills 85%
    • Community building 5%
    • Effective giving 5%
3
EffectiveAdvocate
1mo
I believe the division of areas for the event is quite decent. However, I think EAGx events also allow for the introduction of new ideas into the EA community. What cause areas do others believe we should prioritize but currently do not? Personally, I am considering areas like protecting liberal democracy, improving decision-making (individual and institutional), and addressing great power conflicts (broader than AI and nuclear issues). There are likely many other areas, and the causes I've listed here are already somewhat related to EA. Perhaps there are topics that are further outside the box. I am also somewhat uncertain about the term "Entrepreneurship skills." Could someone clarify what is meant by this exactly? 

I think events are underrated in EA community building. 

I have heard many people argue against organising relatively simple events such as, 'get a venue, get a speaker, invite people'. I think the early success of the Tien Procent Club in the Netherlands should make people doubt that advice.

Why? Well, the first thing to mention is that they simply get great attendance, and their attendees are not typical EAs. I think their biggest so far has been 400, and the typical attendee is a professional in their 30s or 40s. It also does an amazing job of generating buzz. For example, suppose you've got a journalist writing an article about your community. In that case, it's pretty cool if you can invite them to an event with hundreds of regular people in attendance.

Now, of course, attendance doesn't translate to impact. However, I think we can see the early signs of people actually changing their behaviour. 

For example, running a quick check on GWWC's referral dashboard, I can see four pledges that refer to the Tien Procent Club (2 trial, 2 full). Based on GWWC's March 2023 impact evaluation, they can therefore self-attribute ~$44k of 2022-equivalent donations to high-impact fundin...

6
Minh Nguyen
5mo
I'm actually very surprised to hear this. What does the "common view" presume then? Personally, I see 3 tiers of events:

1. Any casual, low-commitment, low-stakes events
2. Big EA conferences that I find quite valuable for meeting lots of people intentionally and socially
3. Professionally-focused events (research fellowships, incubators etc)

I think "simple" events like 1 are great for socialising and meeting new people. While 2 and 3 get more done, I don't think the community would feel as welcoming if the only events occurring were ones where you had to be fully professional. Sometimes I still want to interact with EAs, but without the expectation of "meeting right" or "networking". I suspect this applies especially to introverts and beginners. Even just going to a conference with the expectation of booking lots of 1-on-1s vs just chilling feels very different.
3
James Herbert
5mo
Yeah, that's a good categorisation, although often 3 is less 'professionally focused events' and more 'events for highly committed EAs'.  I think the common EA CB view is captured in the below quote (my own italics), which is taken from the CEA's Group Resource Centre's page 'How do EA groups produce impact?'. I think this is broadly right. But I think EA CBs often overcorrect in this direction and, as a result, neglect events that aim for broad reach but shallow engagement. 
3
Minh Nguyen
5mo
On CB, my views that are half informed by EA CBs and half personal opinions:

1. Very casual events - If you are holding no events for a long time and don't have much capacity, just hold low-stakes casual events and follow up with highly-engaged people afterwards. Highly-engaged people tend to show up/follow up several times after learning about EA anyway. 80-90% of the time, I think having some casual events every few weeks is better than no casual events.
2. Bigger events - Try to direct highly-engaged people to bigger and/or more specialised events. The EA community is big and diverse, and letting people know other events exist lets them self-select better. When I first explored beyond EA Singapore, I spent 2 months straight learning about every EA org and resource in existence, individually reviewing all the Swapcard profiles at every EAG. That was absolutely worth the effort, IMO.[1]
3. 1-on-1s are probably still important - 1-on-1s with someone of very similar interest areas or career trajectories are the most valuable experiences in EA, in my opinion. Only 10% of 1-on-1s are like this, but they more than make up for the 90% that don't really go anywhere. As much as I try to optimise, this seems to be a numbers game of just finding and meeting a lot of potentially interesting people.[2]
4. Online resources - For highly-engaged EAs, important information should be online-first. I'm of the opinion that highly-engaged/agentic new EAs tend to read a lot online, and can gain >80% of the same field-specific knowledge reading on their own. This especially holds true in AI Safety, which is like ... code and research that's all publicly available short of frontier models. I think events should be for casual socials, intentional networking and accountability+complex coordination (basically, coworkers).

  1. ^

    If you want the 80/20 for AI Safety, check out aisafety.training, aisafety.world, check EA Forum, Lesswrong and Alignment Forum once a week (~1 hour/wee
3
OllieBase
5mo
I agree!

> I have heard many people argue against organising relatively simple events such as, 'get a venue, get a speaker, invite people'.

Where have you heard this? I've not seen this.

> get an endorsement from someone like Bregman

Noting that this isn't easy and could be a large driver of the value!
5
James Herbert
5mo
When I first started at EA Netherlands I was explicitly advised against it, and more generally it seems to be 'in the air'. For example:

* The groups resource hub says "This also suggests that you should focus time and effort on deeply engaging the most committed members rather than just shifting some choices of many people."
* Kuhan's widely shared post on 'lessons from running Stanford EA' has in its summary "Focus on retention and deep engagement over shallow engagement"
* CEA's Groups Team's post on 'advice we give to new university organiser' says "We think it's good to do broad recruiting at the beginning of the semester, as with any club or activity. But beyond this big push of raising awareness, we think it’s most often better to pay more attention to people who seem very interested in - and willing to take significant action based on - EA ideas"

Writing this out has made me realise something. I think this advice makes more sense in a university context, where students are time-rich and are going through an intense social experience, but it makes less sense when you're targeting professionals. I suspect it's still 'in the air' because, historically, CEA has been very good at targeting students.

As a consequence, very few national orgs (including ourselves) organise TPC-esque events (broad reach, low engagement). For us, this is because our strategy is to focus on supporting local organisers in organising their own events (the theory is that then we can have lots of events without having to organise all of them ourselves). But I don't think that's the case for other national organisations (other national CBs, please jump in and correct me if I'm wrong, e.g., I know @lynn at EA UK has been organising career talks).

Ultimately, I guess what I'm saying is what I've said elsewhere: you need a blend of ‘mobilising’ (broad reach, low engagement) and ‘organising’ (narrow reach, high engagement), and I think EA groups often do too much organising.
2
OllieBase
5mo
Thanks, that makes sense. I guess I don't interpret those bullets as "arguing against organising simple events" but rather "put your effort into supporting more engaged people" and that could even be consistent with running simple events, since it means less time on broad outreach compared to e.g. a high-effort welcoming event.  I agree with the first part of your last sentence (the blend), I don't know how EA groups spend their time.
1
James Herbert
5mo
Hmm, yeah, but by arguing for "put your effort into supporting more engaged people" you're effectively arguing against "relatively large events that require relatively shallow engagement". I think that's the mistake. I think it should be an even blend of the two. 

Politico just published a fairly negative article about EA and UK politics. Previously they’ve published similar articles about EA and Brussels.

I think EA tends to focus on the inside game, or narrow EA, and I believe this increases the likelihood of articles such as this. I worry articles such as this will make people in positions of influence less likely to want to be associated with EA, and that this in the long run will undermine efforts to bring about the policy changes we desire. Still, of course, this focus on the inside game is also pretty cost-effective (for the short term, at least). Is it worth the trade-off? What do people think?

My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the 'narrow EA' strategy is a mistake because there's a good chance it is wrong to try to guide society without broader societal participation.

In other words, if MacAskill argues here that we should get our shit together first and then either a) collectively decide on a way forward or b) allow for everyone to make their own way forward, I think it's also important that the 'getting our shit together' has broad societal participat...

My guess is this is mostly just a product of success, and insofar as the political system increasingly takes AI X-risk seriously, we should expect to see stuff like this from time to time. If the tables were flipped and Sunak was instead pooh-poohing AI X-risk and saying things like "the safest path forward for AI is accelerating progress as fast as we can – slowing down would be Luddism" then I wouldn't be surprised to see articles saying "How Silicon Valley accelerationists are shaping Rishi Sunak’s AI plans". Doesn't mean we should ignore the negative pieces, and there very well may be things we can do to decrease it at the margin, but ultimately, I'd be surprised if there was a way around it. I also think it's notable how much press there is that agrees with AI X-risk concerns; it's not like there's a consensus in the media that it should be dismissed.

4
Sean_o_h
7mo
+1; except that I would say we should expect to see more, and more high-profile. AI xrisk is now moving from "weird idea that some academics and oddballs buy into" to "topic which is influencing and motivating significant policy interventions", including on things that will meaningfully matter to people/groups/companies if put into action (e.g. licensing, potential restriction of open-sourcing, external oversight bodies, compute monitoring etc). The former, for a lot of people (e.g. folks in AI/CS who didn't 'buy' xrisk) was a minor annoyance. The latter is something that will concern them - either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided. I would think it's reasonable to anticipate more of this.
6
Daniel_Eth
7mo
or because they feel it as a threat to their identity or self-image (I expect these to be even larger pain points than the two you identified)
1
James Herbert
7mo
Hmm, I agree that with influence comes increased scrutiny, and the trade-off is worth it in many cases, but I think there are various angles this scrutiny might come from, and I think this is a particularly bad one.  Why? Maybe I'm being overly sensitive but, to me, the piece has an underlying narrative of a covert group exercising undue influence over the government. If we had more of an outside game, I would expect the scrutiny to instead focus on either the substance of the issue or on the outside game actors. Either would probably be an improvement.  Furthermore, there's still the very important issue of how appropriate it is for us to try to guide society without broader societal participation.
3
Daniel_Eth
7mo
My honest perspective is if you're an lone individual affecting policy, detractors will call you a wannabe-tyrant, if you're a small group, they'll call you a conspiracy, and if you're a large group, they'll call you an uninformed mob. Regardless, your political opponents will attempt to paint your efforts as illegitimate, and while certain lines of criticism may be more effective than others, I wouldn't expect scrutiny to simply focus on the substance either way. I agree that we should have more of an outside game in addition to an inside game, but I'd also note that efforts at developing an outside game could similarly face harsh criticism (e.g., "appealing to the base instincts of random individuals, taking advantage of these individuals' confusion on the topic, to make up for their own lack of support from actual experts").
6
James Herbert
7mo
Maybe I'm in a bubble, but I don't recall seeing many reputable publications label large-scale progressive movements (e.g., BLM, Extinction Rebellion, or #MeToo) as "uninformed mobs". This article from the Daily Mail is about as close as it gets, but I think I'd rather have the Daily Mail writing about a wild What We Ourselves party than Politico insinuating a conspiracy.  Ultimately, I don't think any of us know the optimal split in a social change portfolio between the outside game and the inside game, so perhaps we should adapt as the criticism comes in. If we get a few articles insinuating conspiracy, maybe we should reallocate towards the outside game, and vice versa.     And again, I know I sound like a broken record, but there's also the issue of how appropriate it is for us to try to guide society without broader participation. 
7
Daniel_Eth
7mo
So progressive causes will generally be portrayed positively by progressive-leaning media, but conservative-leaning media has definitely portrayed all those movements as ~mobs (especially BLM and Extinction Rebellion), and predecessor movements, such as the Civil Rights movement, were likewise often portrayed as mobs by detractors. Now, maybe you don't personally find conservative media to be "reputable," but (at least in the US, perhaps less so in the UK) around half the power will generally be held by conservatives (and perhaps more than half going forward).
5
Shakeel Hashim
7mo
Yeah, the phrase "woke mob" (and similar) is extremely common in conservative media!
2
David Mathers
7mo
I suspect the ideologies of Politico and most EAs are not that different (i.e. technocratic liberal centrism).
1
James Herbert
7mo
For sure progressive publications will be more positive, and I don't think conservative media ≠ reputable.  When I say "reputable publications" I am referring to the organisations at the top of this list of the most trusted news outlets in the US. My impression is that very few of these regularly characterise the aforementioned movements as "uninformed mobs". 
5
Daniel_Eth
7mo
So I notice Fox ranks pretty low on that list, but if you click through to the link, they rank very high among Republicans (second only to the Weather Channel). Fox definitely uses rhetoric like that. After Fox (among Republicans) are Newsmax and OAN, which similarly both use rhetoric like that. (And FWIW, I also wouldn't be super surprised to see somewhat similar rhetoric from WSJ or Forbes, though probably said less bluntly.) I'd also note that the left-leaning media uses somewhat similar rhetoric for conservative issues that are supported by large groups (e.g., Trumpism in general, climate denialism, etc.), so it's not just a one-directional phenomenon.
7
James Herbert
7mo
Yes, I noticed that. Certain news organisations, which are trusted by an important subsection of the US population, often characterise progressive movements as uninformed mobs. That is clear. But if you define 'reputable' as 'those organisations most trusted by the general public', which seems like a reasonable definition, then, based on the YouGov analysis, Fox et al. are not reputable. But then maybe YouGov's method is flawed? That's plausible. But we've fallen into a bit of a digression here. As I see it, there are four cruxes:

1. Does a focus on the inside game make us vulnerable to the criticism that we're part of a conspiracy? For me, yes.
2. Does this have the potential to undermine our efforts? For me, yes.
3. If we reallocate (to some degree) towards the outside game in an effort to hedge against this risk, are we likely to be labelled an uninformed mob, and thus undermine our efforts? For me, no, not anytime soon (although, as you state, organisations such as Fox will do this before organisations such as PBS, and Fox is trusted by an important subsection of the US population).
4. Is it unquestionably OK to try to guide society without broader societal participation? For me, no.

I think our biggest disagreement is with 3. I think it's possible to undermine our efforts by acting in such a way that organisations such as Fox characterise us as an uninformed mob. However, I think we're a long, long way from that happening. You seem to think we're much closer, is that correct? Could you explain why? I don't know where you stand on 4.

P.S. I'm enjoying this discussion, thanks for taking the time!

I agree and this is why I'm in favour of a Big Tent approach to EA. This risk comes from a lack of understanding about the diversity of thought within EA and that it isn't claiming to have all the answers. There is a danger that poor behaviour from one part of the movement can impact other parts.

Broadly, EA is about taking a Scout Mindset approach to doing good with your donations, career, and time. Individual EAs and organisations can have opinions on which cause areas need more resources at the margin, but "EA" can't - it isn't a person, it's a network.

I really liked the post 'How CEA's communications team is thinking about EA communications at the moment' from @Shakeel Hashim, and I hope that, whatever happens in terms of shake-ups at CEA, communications and clarity around the EA brand are prioritised.

This is really interesting. Thanks for sharing!

I think:

  1. If you have a lot of influence, articles like this are inevitable.
  2. EAs in AI should really try to make nice with the AI ethics crowd (i.e. help accomplish their goals). That's where the most criticism is coming from. From my perspective their concerns are useful angles of attack into the broader AI safety problem, and if EA policy does not meet the salient needs of present-day people it will be politically unpopular and lose influence (a challenge for the political longtermism agenda more broadly).
  3. I agree about EAs needing to cast a wider net, in really every sense of the term. We also need to be flexible to changing circumstances, particularly in something like AI that is so rapidly moving and where the technology and social consequences are likely to be far different in crucial respects to earlier predictions of them (even if the predictions are mostly true -- this is a very hard dynamic to manage).
  4. The article underscores the dangers to a movement so deeply connected to one foundation, and I expect we'll see Open Phil becoming more politically controversial (and very possibly perceived as more Soros-esque) fairly soon.
  5. EA is al
... (read more)

Thanks! 

I agree that negative articles are inevitable if you get influence, but I think there are various angles these negative articles might come from, and this is a particularly bad one.

The Soros point is an excellent analogy, but I worry we could be headed for something worse than that. Soros gets criticism from people like Orban but praise from orgs like the FT and Politico. Meanwhile, with EA, people like Orban don't give a damn about EA but Politico is already publishing scathing pieces.  

I don't think reputation management is as hard as is often supposed in EA. I think it's just it hasn't been prioritised much until recently (e.g., CEA didn't have a head of comms until September 2022). I can imagine many national organisations such as mine would love to have a Campaign Officer or something to help us manage it, but we don't have the funding. 

Do you have any encouraging examples of progress on 2? Some of the prominent people are incredibly hostile (i.e. they genuinely believe we are all literal fascists and also Machiavellian naive utilitarians who lie automatically whenever it's in our short-term interests) so I'm a bit pessimistic, though I agree it is a good idea to try. What's a good goal to help them accomplish in your view? 

Some are hostile but not all, and there are disagreements and divisions just as deep if not deeper in AI ethics as there are in EA or any other broad community with multiple important aims that you can think of.

External oversight over the power of big tech is a good goal to help accomplish. This is from one of the leading AI ethics orgs; it could almost as easily have come from an org like GovAI:
https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act

JWS
7mo
18
7
1

epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance

I really wish I had your positive view on this, Sean, but I don't think there's much chance of inroads unless capabilities advance to an extent that makes x-risk seem even more salient.

Gebru is, imo, never going to view EA positively. And she'll use her influence as strongly as possible in the 'AI Ethics' community. 

Seth Lazar also seems intractably anti-EA. It's annoying how much of this dialogue happens on Twitter/X, especially since it's very difficult for me as a non-Twitter user to find them, but I remember he posted one terrible anti-longtermist thread and later deleted it.

Shannon Vallor once also posted a similarly anti-longtermist thread, and then responded to Jess Whittlestone once, lamenting the gap between the Safety and Ethics fields. I just really haven't seen where the Safety->Ethics hostility has been; I've really only ever seen the reverse, but of course I'm 100% sure my sample is biased here.

The Belfield<>McInerney collaboration is extremely promising for sure, and I look forward to the outputs. I hope my impression is wrong and more work alon... (read more)

just really haven't seen where the Safety->Ethics hostility has been

From the perspective of the AI Ethics researchers, AI Safety researchers and engineers contributed to the development of "everything for everyone" models – and also distracted attention from the increasing harms that result from the development and use of those models.

Which, frankly, is true, given how much people in AI Safety collaborated and mingled with people in large AI labs. 

I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around. 

That feels unfair if we focus on the explicit exchange in the moment. 
But there is more to it.

AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background goings-on they are concerned about that are hard to convey, and not immediately obvious from their writing.

It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI E... (read more)

7
JWS
7mo
I found this comment very helpful Remmelt, so thank you. I think I'm going to respond to this comment via PM.

I think this is imprecise. In my mind there are two categories:

  1. People who think EA is a distraction from near-term issues and is competing for funding and attention (e.g. Seth Lazar, as seen by his complaints about the UK taskforce and trying to tag Dustin Moskovitz and Ian Hogarth in his thinkpieces). These more classical ethicists are, from what I can see, analytical philosophers looking for funding and in clout competition with EA. They've lost a lot of social capital because they just repeat a lot of old canards about AI. My model for them is something akin to: they can't do fizzbuzz or know what a transformer is, thus they'll just say sentences about how AI can't do things and how there's a lot of hype and power centralisation. These are more likely to be white men from the UK, Canada, Australia, and NZ. Status games are especially important to them, and they seem to just not have a great understanding of the field of alignment at all. A good example I show people is this tweet, which tries to say RLHF solves alignment and "Paul [Christiano] is an actual researcher I respect, the AI alignment people that bother me are more the longtermists."
  2. People in the other camp are
... (read more)
8
quinn
7mo
I don't know why people overindex on loud grumpy twitter people. I haven't seen evidence that most FAccT attendees are hostile and unsophisticated. 
1
Remmelt
7mo
FAccT attendees are mostly a distinct group of researchers from the AI ethics researchers who come from, or are actively assisting, marginalised communities (rather than working with e.g. fairness and bias abstractions).
5
JWS
7mo
Hmm, I'm not quite sure I agree that there's such a clear division into two camps. For example, I think Seth is actually not that far off from Timnit's perspective on AI Safety/EA. Perhaps a bit less extreme and hostile, but I see that more as a difference in degree rather than a difference in kind. I also disagree that people in your second camp are going to be open to fruitful collaboration, as they don't just have technical objections but, I think, core philosophical objections to EA (or what they view as EA). I guess overall I'm not sure. It'd be interesting to see some mapping of AI researchers in some kind of belief-space plot so different groups could be distinguished. I think it's very easy to extrapolate from a few small examples and miss what's actually going on - which I admit I might very well be doing with my pessimism here - but I sadly think it's telling that I see so few counterexamples of collaboration, while I can easily find examples of AI researchers dismissive of or hostile to the AI Safety/x-risk perspective.
4
David Mathers
7mo
I don't think you have to agree on deep philosophical stuff to collaborate on specific projects. I do think it'll be hard to collaborate if one or both sides are frequently publicly claiming the other is malign and sinister, or idiotic and incompetent, or incredibly ideologically rigid and driven by emotion rather than reason (etc.).

I totally buy "there are lots of good sensible AI ethics people with good ideas, we should co-operate with them". I don't actually think that all of the criticisms of EA from the harshest critics are entirely wrong either. It's only the idea that "be co-operative" will have much effect on whether articles like this get written and hostile quotes from some prominent AI ethics people turn up in them, that I'm a bit skeptical of. My claim is not "AI ethics bad", but "you are unlikely to be able to persuade the most AI hostile figures within AI ethics".

4
Sean_o_h
7mo
Sure, I agree with that. I also have parallel conversations with AI ethics colleagues - you're never going to be able to convince a few of the most hardcore safety people that your justice/bias etc. work is anything but a trivial waste of time ("anyone sane is working on averting the coming doom"). You don't need to convince everyone, and there will always be some background of articles like this. But it'll be a lot better if there's a core of cooperative work too, on the things that benefit from cooperation. My favourite recent example of (2) is this paper: https://arxiv.org/pdf/2302.10329.pdf Other examples might include my coauthored papers with Stephen Cave (ethics/justice), e.g. https://dl.acm.org/doi/10.1145/3278721.3278780 Another would be Haydn Belfield's new collaboration with Kerry McInerney: http://lcfi.ac.uk/projects/ai-futures-and-responsibility/global-politics-ai/ Jess Whittlestone's online engagements with Seth Lazar have been pretty productive, I thought.
6
Chris Leong
7mo
I know you're probably extremely busy, but if you'd like to see more collaboration between the x-risks community and ai ethics, it might be worth writing up a list of ways in which you think we could collaborate as a top-level post. I'm significantly more enthusiastic about the potential for collaboration after seeing the impact of the FLI letter.
1
Remmelt
7mo
I expect many communities would agree on working to restrict Big Tech's use of AI to consolidate power.  List of quotes from different communities here.
4
joshcmorrison
7mo
EA isn't unitary, so people should individually just try cooperating with them on stuff, being like "actually you're right, and AIs not being racist is important", or should try to make inroads on the actors' strike / writers' strike AI issues. Generally, saying "hey, I think you are right" is fairly ingratiating. For what it's worth, a friend of mine had an idea to apply Harberger taxes to AI frontier models, which I thought was cool and is a place where you might be able to find common ground with more leftist perspectives on AI.
5
David Mathers
7mo
People should say that things are right when they agree with them, even if there's no strategic purpose in doing so. I doubt being sympathetic to left economic stuff on AI will do much to persuade people whose complaint is that EAs are racist/sexist/authoritarian/naive utilitarian. Though it would certainly help with people who are just (totally reasonably! I am worried about this!) concerned about EA's ties to the industry. 

The UK seems to take the existential risk from AI much more seriously than I would have expected a year ago. To me, this seems very important for the survival of our species, and seems well worth a few negative articles.

I'll note that I stopped reading the linked article after "Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs." This is inaccurate imo. In general, having low-quality negative articles written about EA will be hard to avoid, no matter if you do "narrow EA" or "global EA".

8
James Herbert
7mo
Politico is perhaps the most influential news source for EU decision-makers (h/t @vojtech_b). I'd be wary of dismissing the importance of 'a few negative articles' if they're articles like this. 

I agree that's a good argument for why that article is a bigger deal than it seems, but I'd still be quite surprised if it were at all comparable to the EV of having the UK so switched on when it comes to alignment.

1
Rebecca
7mo
If this article is followed by others like it, it could cause the UK to back away from x-risk concerns
-1
James Herbert
7mo
My concern is that this particular media narrative will eventually undermine the policy progress we've made. 
4
Sean_o_h
7mo
>"Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs." This is inaccurate imo.

Could we get a survey on a few versions of this question? I think this belief is actually super-rare in EA. E.g.:

• "I believe super-intelligent AI should be pursued at all costs"
• "I believe the benefits outweigh the risks of pursuing superintelligent AI"
• "I believe if the risk of doom can be agreed to be <0.2, then the benefits of AI outweigh the risks"
• "I believe even if misalignment risk can be reduced to near 0, pursuing superintelligence is undesirable"
8
David_Moss
7mo
We could potentially survey the EA community on this later this year. Please feel free to reach out if you have specific requests/suggestions for the formulation of the question.
2
James Herbert
7mo
Yeah, it's incredibly inaccurate; I don't think it even needs to be surveyed. 
4
Sean_o_h
7mo
I've heard versions of the claim multiple times, including from people I'd expect to know better, so having the survey data to back it up might be helpful even if we're confident we know the answer.
[anonymous]
7mo
13
4
0

I think there are truths that are not so far from it. Some rationalists believe Superintelligent AI is necessary for an amazing future. Strong versions of AI Safety and AI capabilities are complementary memes that start from similar assumptions. 

Where I think most EAs would strongly disagree is with "at all costs": they would find pursuing SAI at all costs abhorrent and counter to their fundamental goals. But I also suspect that showing survey data about EAs' professed beliefs wouldn't be entirely convincing to some people, given the close connections between EAs and rationalists in AI. 

3
James Herbert
7mo
Good point! You’re right
3
harfe
7mo
I feel a bit uneasy about EAs putting a lot of effort into a survey (both the survey designers and takers) just because someone made something up at some point. Maybe ask the people who you'd expect to know better why they believe what they believe?
8
Chris Leong
7mo
I think that EA has made the correct choice in deciding to focus on the inside game. As indicated by the article, it seems like we've been incredibly successful at it. I agree that in an ideal world, we would save humanity by playing the outside game, but I feel that the current inside game is increasing our odds by enough that I feel very comfortable with our decision to promote it.

I agree that it's worth thinking about the potential for this success to result in a backlash, though surveys seem to indicate more concern among the public about AI risks than I had expected, so I'm not especially worried about there being a significant public backlash. Nonetheless, it doesn't make sense to take unnecessary risks, so there are a few things we should do:

• I'd love to see EA develop more high-quality media properties like the 80k podcast, Rob Miles, or Rational Animations, but very few people have the skills.
• Books combined with media releases and appearances on podcasts are one way in which we can attempt to increase our support among the public.
• I think it makes sense to try our best to avoid polarisation. If it seems that one side of the political spectrum is becoming hostile, then it would make sense to initiate some concerted outreach to it.
1
James Herbert
7mo
Thanks for your comment, Chris! Although it appears contradictory: in the first half, you say we've made the right choice by focusing on the inside game, but in the second half, you suggest we expend more resources on outside-game interventions. Is your overall take that we should mostly do inside-game stuff, but that perhaps we're due a slight reallocation in the direction of the outside game? 
2
Chris Leong
7mo
Exactly. I think EA should mostly focus on inside game, but that, as a lesser priority, we should take steps to mitigate the risks associated with this.
1
James Herbert
7mo
I think there's a good chance we broadly agree. If you had to put a number on it, what would you say is our current percentage split between inside game and outside game? And what would your new ideal split be? 
1
JanPro
7mo
epistemic status: gossip

I've heard it's quite harmful to label oneself as EA in the EU policy space after the Politico article.
6
Nathan Young
7mo
I think maybe let's revisit in a month. It's easy for these things to loom larger than they are.
-3
James Herbert
7mo
I think JanPro is talking about the EA and Brussels article I referenced in the OP ('Stop the killer robots! Musk-backed lobbyists fight to save Europe from bad AI'). This was published in November last year.  Many of the EAs I know who work in policy feel like they ought to keep their involvement in EA a secret. I once attended an event in Brussels where the host asked me to hide the fact I work for EA Netherlands. This was because they were worried their opponents would use their links with EA to discredit them. This seems like a very bad state of affairs. 
5
JWS
7mo
If what you and Jan say is true (not saying I doubt you, it doesn't mesh with my experiences being an open EA but then I don't live in the policy-world) then this does need to be higher up the EA priority list. I'd strongly, strongly advise against 'hiding' beliefs here. If there is already a hostile set of opponents actively looking to discredit EA and EA-links then we need to be a lot more pro-active in countering incorrect framings of EA and being more assertive to opponents who think EA is worth discrediting.
2
SiebeRozendal
7mo
I think one low-hanging fruit is publicly dissociating from Elon Musk. He often gets brought up even though he's not part of the community. There's also very legitimate EA-/longtermism-based criticism of him available.
4
pseudonym
7mo
Are you in a position to share more information that might help readers know how much they should update on this comment?
2
JanPro
7mo
No, not really; I am myself confused and wanted to provoke those who know more to reply and clarify. (Which James Herbert has already slightly done, and I hope more direct info will surface.)
2
SiebeRozendal
7mo
I've heard the same thing from US sources about the US policy space, to the extent that important information doesn't get shared on the EA Forum because it would associate it with EA.

EA should take seriously its shift from a lifestyle movement to a social movement. 

The debate surrounding EA and its classification has always been a lively one. Is it a movement? A philosophy? A question? An ideology? Or something else? I think part of the confusion comes from its shift from a lifestyle movement to a social movement.

In its early days, EA seemed to bear many characteristics of a lifestyle movement. Initial advocates often concentrated on individual actions—such as personal charitable donations optimised for maximum impact or career decisions that could yield the greatest benefit. The movement championed the notion that our day-to-day decisions, from where we donate to how we earn our keep, could be channelled in ways that maximised positive outcomes globally. In this regard, it centred around personal transformation and the choices one made in their daily life.

However, as EA has evolved and matured, there's been a discernible shift. Today, whilst personal decisions and commitments remain at its heart, there's an increasing emphasis on broader, systemic changes. The community now acknowledges that while individual actions are crucial, tackling the underlyi... (read more)

7
Joseph Lemien
8mo
Could you describe what this would look like? What behaviors/actions from people in EA would convince you that they are taking this seriously?
1
James Herbert
8mo
Sure! Ultimately, I think we should be aiming for a movement that looks something like this.  In terms of behaviours that would signal people taking this seriously, an example might be a rebalancing of how community building work is evaluated. Currently, the main outcome funders look for is longtermist career changes. This encourages very lifestyle movement-y community building. I would like to see more weight being given to things like the generation of passive support, e.g., is the public shifting support towards the movement? Is the movement’s narrative being elevated in public discourse? To use terminology I've used elsewhere, this change would encourage more 'mobilising' and less 'organising'. It would also encourage a rebalancing of our 'social change portfolio' in such a way that we become a slightly more outward-facing movement, one that spends more time talking to and working with the rest of society to achieve shared objectives and less time talking to ourselves.

What do you believe is the ideal size for the Dutch EA community?

We recently posed this question in our national WhatsApp community. I was surprised by the result, and others I've spoken to were also surprised. I thought I'd post it here to get other takes.

We defined 'being a member' as "someone who is motivated in part by an impartial care for others, is thinking very carefully about how they can best help others, and who is taking significant actions to help (most likely through their careers). In practice, this might look like selecting a job or degree ... (read more)

9
titotal
5mo
Why would you not want >1% of the population to fit this description? I think even prominent EA haters would be in favor, if you left the name "EA" out. 
1
James Herbert
5mo
People often argue for 'Narrow EA'. Here is an example of where I suggested this strategy might not be wise and people disagreed. Although of course, there's an 'at the current margin' thing going on here. I.e., maybe the ideal size is huge, but since we've got limited time and resources we should not aim for that and instead focus on keeping it small and high quality.  Perhaps a more informative question would be something like, "For the next 5 years, should the Dutch EA community aim for broad growth or narrow specialisation?" (in other words, something similar to this Q from the MCF survey).
7
titotal
5mo
Yeah, I think you ended up asking "would it be good for a lot of people to share our values", instead of "should we try to actively recruit tons of people to our specific community"
1
James Herbert
5mo
Gave it a second go. I asked, "As we plan our future initiatives, it's useful to understand where our community believes we should focus our efforts. Please share your opinion on which of the following we should prioritise.

* Growing the Community: Focus on increasing our membership and raising broader awareness of EA.
* Developing Community Depth: Concentrate on deepening understanding and engagement.
* Taking a Balanced Approach: Allocate our efforts equally between growing and deepening.
* Other (Please specify): If you have a different perspective, we'd love to hear it.
* I don't know"

27 people voted: 16 voted for 'taking a balanced approach', 6 for 'growing the community', 1 for 'developing community depth', and 4 for 'I don't know'.
2
DavidNash
5mo
'Narrow EA' and having >1% of the population fitting the above description aren't opposite strategies. Maybe it's similar to someone interested in animal welfare thinking alt protein coordination should focus on scientists, entrepreneurs, funders and policy makers but also thinking it would be good for there to be lots of people interested in veganism.
3
James Herbert
5mo
Aren't they? Like, if I'm aiming for >1% of the population I ought to spend a lot of my resources on marketing and building a network of organisers. If I'm aiming for something smaller I ought to spend my time investing in the community I've already got and maybe some field building. To make it more concrete, in Q1 of 2024 I could spend 15% of my time investing in our marketing so that we double the number of intro programme sign-ups; alternatively, I could put that time into developing a Dutch Existential Risk Initiative. One is big EA, one is narrow EA. 
2
DavidNash
5mo
I think it depends on how you define 'narrow EA', if you focus on getting 1% of the population to give effectively, that's different to helping 100 people make impactful career switches but both could be defined as narrow in different ways. One being narrow as it focuses on a small number of people, one being narrow as it spreads a subset of EA ideas.   Taking the Dutch Existential Risk Initiative example, it will be narrow in terms of cause focus but the strategy could still vary between focusing on top academics or a mass media campaign.
3
James Herbert
5mo
I'm pretty sure 'Narrow EA' is usually used to refer to the strategy of influencing a small number of particularly influential people. That's part of what I'm pushing back against (although we've deviated from the original discussion point, which was on organising vs mobilising). [got confused about which quicktake we were discussing] I think all of the ERIs are narrow (they target talented researchers). A broader project would be the Existential Risk Observatory, which aims to inform the public through mass media outreach. They've done a lot of good work in the Netherlands and abroad, but I don't think they've been able to get funding from the biggest EA funds. I don't know why, but I suspect it's because their main focus is the general public, and not the decision-makers. 
1
harfe
5mo
why would you like there to be fewer people "motivated in part by an impartial care for others, [who are] thinking very carefully about how they can best help others [...]"? edit: please ignore, just saw that titotal asked the same question 10 minutes earlier.

Rutger Bregman has just written a very nice story on how Rob Mather came to found AMF! Apart from a GWWC interview, I think this is the first time anyone has told this tale in detail. There are a few good lessons in there if you're looking to start a high-impact org. 

It's in Dutch, but google translate works very well!
