All of DavidNash's Comments + Replies

Answer by DavidNash · Apr 25, 2024

For the UK, there's data from CAF.

"The proportion of donations going to overseas aid and disaster relief (7% -£931m) halved from a high in 2022 (14%)"

1
dominicroser
1d
Great -- thanks very much!

Over time it was getting less engagement, and I felt that the content made more sense as a Substack/newsletter than a forum post - it's not the kind of post that leads to discussions.

It's also not a new thing - The Elitist Philanthropy of So-Called Effective Altruism - from 2013.

I'm not sure you have to do anything with it - generally, groups that suggest money/influence should be shifted from A to B will get a negative response from the people it may affect or people who disagree with that direction of change. I tend to find energy spent on ideological EA critics is less valuable than energy spent on good-faith critics, or on people who are just looking for resources to help them do more good.

Depending on what you are aiming to achieve with that section of the website, you don't have to have notable figures, you could include people who are most relevant (or not include individuals at all).

For example Magnify Mentoring has people who have benefited from their mentoring programs. EA Philippines have photos of their local community. EA for Christians have stories from members on their community tab and no profiles of people on their intro page.

5
Abdurrahman Alshanqeeti
22d
Agreed. It could be wise to consider removing the "People" section at this stage. Initially, the concept was to have the website function as an entry point for EA, providing explanatory materials similar to utilitarianism.net, which has a dedicated page for key thinkers. However, in our case and at this point, it's not crucial to include leading EA figures. Appreciate your input David!

Thanks for the shout-out akash, I appreciate it. 

With engagement, there might be fewer comments/likes on Substack, but it generally gets 1.2k-1.5k views per month, compared to around 200-400 views per month on the forum.

1
Yarrow B.
16d
Why not keep posting to the forum?

Could the main difference be that TBP is a simple process change with reduced costs, while EA-style giving would fundamentally alter grant evaluation, requiring more overhead from the funder?

I also think EA-style giving would involve extra costs for existing grantees: they would have to provide more evidence of their effectiveness or lose out to orgs that have those systems in place.

 

Separately I think it will be very hard to get existing foundations to shift to use more EA frameworks unless their main donors become interested in it. There is probably more to be gained by finding and helping the UHNW people/orgs that are inclined towards EA already.

There is a post about this (although it was written in 2015).

There are some good reasons why large donors might not want to give too much money to a charity at once:

  1. Avoiding excessive reserves: Because of the opportunity costs (other charities could use the money productively sooner), it is undesirable for a charity to hold excessive reserves. Ideally, charities would be promised a steady stream of funding if they meet specific targets over many years, so that they can plan ahead.
  2. Risk diversification: Funds should be distributed to se
... (read more)
3
zeshen
1mo
Thanks for the link! I vaguely remember reading this, but I probably didn't really get the answer I was hoping for. In the case of AMF, reason 1 doesn't apply, because they seem to want the money to do things now instead of building reserves. Reason 4 seems most relevant - maybe the Gates Foundation is hoping that a malaria vaccine (recent developments have shown positive results) could render bed nets futile? But I don't think I buy this either - considering how effective these vaccines currently are, how long it takes to roll out vaccines in these countries, and that Bill Gates himself has previously vouched for bed nets (albeit before the vaccines were endorsed by the WHO).

As for reasons 2, 3, and 5, I just don't really see how these reasons are worth killing so many babies for - I can't picture a decision maker in the Foundation saying "yeah, we have decided to let a hundred thousand people die of malaria so that we can diversify our risks and encourage others to donate". I may be missing something, but I only see a few reasonable scenarios:

  1. The Gates Foundation does indeed plan to donate, and they might be the 'donor of last resort'
  2. They really do not intend to fill the funding gap, perhaps because they don't think additional funding to AMF is as cost-effective as advertised
  3. They are confident that AMF will somehow get funding from other sources

CGD has a different take on this type of migration.

"Between the start of 2021 and 2022, the number of Nigerian-born nurses joining the UK nursing register more than quadrupled, an increase of 2,325. Becker’s human capital theory would suggest that this increase in the potential wages earned by Nigerian-trained nurses should lead to an increase in Nigerians choosing to train as nurses. So what happened? Between late 2021 and 2022, the number of successful national nursing exam candidates increased by 2,982—that is, more than enough to replace those who had ... (read more)

Thanks David, appreciate the article - I think it's a good indication of how complex the question of immigration is, and I don't think it's a slam dunk in either direction.

My impression, though, is that the article is a pretty poorly researched and misleading piece - even though some of its arguments might still stand in many cases despite that.

First, it's weird that the article makes zero mention of the state of the Nigerian health system, nor of how this mass emigration might be affecting it. Is staffing getting better or worse? Are outcomes getting bette... (read more)

I remember the 'subforums' being more like chat rooms in their user design than actual sub forums which you can navigate through from a front page.

It doesn't seem that great an opportunity, as they've randomly selected 10,000 people out of 7.5 million adults. It then looks like you have to come to a consensus with the 50 participants, otherwise the money goes back to her.

I found the Global Skills Partnerships from CGD interesting but I don't know how active it still is/if you can fund it specifically.

3
katriel
24d
My perception is that LaMP is leading that work now after being incubated at CGD

As far as I know they weren't funded by donated money - they received a grant from the S&F Fund and a smaller one from Open Phil (I don't think either org takes donations). The rest was self-funded; more details in the original post.

4
Wil Perkins
4mo
Thanks for the clarification! I probably should've read in more depth before commenting, I was just viscerally shocked by seeing all of these social-media style pictures so prominent in the beginning of the post. 

I think it depends on how you define 'narrow EA': if you focus on getting 1% of the population to give effectively, that's different to helping 100 people make impactful career switches, but both could be defined as narrow in different ways.

One is narrow because it focuses on a small number of people; the other is narrow because it spreads only a subset of EA ideas.

 

Taking the Dutch Existential Risk Initiative example, it will be narrow in terms of cause focus but the strategy could still vary between focusing on top academics or a mass media campaign.

3
James Herbert
5mo
I'm pretty sure Narrow EA is usually used to refer to the strategy of influencing a small number of particularly influential people. That's part of what I'm pushing back against (although we've deviated from the original discussion point, which was on organising vs mobilising). [got confused about which quicktake we were discussing] I think all of the ERIs are narrow (they target talented researchers). A more broad project would be the Existential Risk Observatory, which aims to inform the public through mass media outreach. They've done a lot of good work in the Netherlands and abroad, but I don't think they've been able to get funding from the biggest EA funds. I don't know why but I suspect it's because their main focus is the general public, and not the decision-makers. 

'Narrow EA' and having >1% of the population fitting the above description aren't opposite strategies.

Maybe it's similar to someone interested in animal welfare thinking alt protein coordination should focus on scientists, entrepreneurs, funders and policy makers but also thinking it would be good for there to be lots of people interested in veganism.

3
James Herbert
5mo
Aren't they? Like, if I'm aiming for >1% of the population I ought to spend a lot of my resources on marketing and building a network of organisers. If I'm aiming for something smaller I ought to spend my time investing in the community I've already got and maybe some field building. To make it more concrete, in Q1 of 2024 I could spend 15% of my time investing in our marketing so that we double the number of intro programme sign-ups; alternatively, I could put that time into developing a Dutch Existential Risk Initiative. One is big EA, one is narrow EA. 

There are a lot of private sector community roles, some with salaries up to $180k - here are some examples from a community manager job board.

2
Will Bradshaw
6mo
TIL! I think this strengthens my confidence in my original comment re: nearly all EA roles being paid under market rate.

It's not necessarily that the "EA" jobs are more poorly paid, just that the people who take these roles could realistically earn much more elsewhere.

Answer by DavidNash · Nov 05, 2023

One way to think about it is that the aim of EA is to benefit the beneficiaries - the poorest people in the world, animals, future beings.

We should choose strategies that help the beneficiaries the most, rather than strategies that help people who happen to be interested in EA (unless that also helps the beneficiaries - things like not burning out).

It makes sense to me that we should ask those who have had the most privilege to give back the most: if you have more money, you should give more of it away. If you have a stronger safety net and access to inf... (read more)

1
Stan Pinsent
6mo
Interesting. Are there any examples of EA jobs which are more poorly-paid than their private-sector counterparts?

Looking at the grants database for 2023, there seem to be only 24 projects listed, for a total of ~$204k, which is less than 10% of the money said to be granted in 2023.

Including the 2022 Q4-2 tag, there are now 54 projects, with grants totalling $1,170,000 (although this does include some of the examples above). I don't know how many of these grants are included in the total sum given in the original post.

 

The ten largest grants were:

  • $126k - 12-month part-time salary for 2 organisers, equipment and other expenses, to expand EA community build
... (read more)
0
calebp
6mo
(Just noting that these grants were made over a long time period, including periods when the funding bar was much lower than it is now; you can of course, look at our site for the rough time period the grant was made.)

I think this has been thought about a few times since EA started.

In 2015 Max Dalton wrote about medical research and said the below. 

"GiveWell note that most funders of medical research more generally have large budgets, and claim that ‘It’s reasonable to ask how much value a new funder – even a relatively large one – can add in this context’. Whilst the field of tropical disease research is, as I argued above, more neglected, there are still a number of large foundations, and funding for several diseases is on the scale of hundreds of millions of dol... (read more)

2
Linch
6mo
Related: early discussion of gene drives in 2016.

Also the $70 billion on development assistance for health doesn't include other funding that contributes to development.

  • $100b+ on non health development
  • $500b+ remittances
  • Harder to estimate but over a trillion spent by LMICs on their own development and welfare

The Panorama episode briefly mentioned EA. Peter Singer spoke for a couple of minutes, and EA was mainly viewed as a charity that would be missing out on money. There seemed to be a lot more interest in the internal discussions within FTX, crypto drama, the politicians, celebrities etc.

Maybe Panorama is an outlier but potentially EA is not that interesting to most people or seemingly too complicated to explain if you only have an hour.

Yeah, I was interviewed for a podcast by a Canadian station on this topic (because a Canadian hedge fund was very involved). IIRC they had 6 episodes but dropped the EA angle because it was too complex.

2
Sean_o_h
7mo
Good to know, thank you.

I've written a bit about this here and think that they would both be better off if they were more distinct.

As AI safety has grown over the last few years, there may have been missed growth opportunities from not having a larger, more separate identity.

I spoke to someone at EAG London 2023 who didn't realise that AI safety would get discussed at EAG until someone suggested they should go after doing an AI safety fellowship. There are probably many examples of people with an interest in emerging tech risks who would have got more involved at an earlier time if the... (read more)

In 2015, one survey found 44% of the American public would consider AI an existential threat. In February 2023 it was 55%.

7
David_Moss
7mo
I think Monmouth's question is not exactly about whether the public believe AI to be an existential threat. They asked: "How worried are you that machines with artificial intelligence could eventually pose a threat to the existence of the human race – very, somewhat, not too, or not at all worried?" The 55% you cite is those who said they were "Very worried" or "somewhat worried." Like the earlier YouGov poll, this conflates an affective question (how worried are you) with a cognitive question (what do you believe will happen). That's why we deliberately split these  in our own polling, which cited Monmouth's results, and also asked about explicit probability estimates in our later polling which we cited above.

I've written about this idea before FTX and think that FTX is a minor influence compared to the increased interest in AI risk.

My original reasoning was that AI safety is a separate field but doesn't really have much movement building work being put into it outside of EA/longtermism/x-risk framed activities. 

Another reason why AI takes up a lot of EA space, is that there aren't many other places to go to discuss these topics, which is bad for the growth of AI safety if it's hidden behind donating 10% and going vegan and bad for EA if it gets overcrowde... (read more)

"Which is bad for the growth of AI safety if it's hidden behind donating 10% and going vegan"

This may be true and the converse is also possible concurrently, with the growth of giving 10% and going vegan potentially being hidden at times behind AI safety ;)

From an optimistic angle "Big tent EA" and AI safety can be synergistic - much AI safety funding comes from within the EA community. A huge reason those hundreds of millions are available, is because the AI safety cause grew out of and is often melded with founding EA principles, which includes giving wh... (read more)

If the definition of being more engaged includes going to EAG and being a member of a group, aren't some of these results a bit circular?

2
David_Moss
9mo
As another illustration, we can look at the association between engaging in different activities related to EA and reporting that EAG is important for getting involved in EA. This just asks whether people have completed each activity (0 = no, 1 = yes), so there's no possible circularity in the wording. As we can see, people who have engaged in each of the individual activities are much more likely to report that EAG was important for them getting involved.

In addition, I created a simple 'engagement_score' based on the number of activities individuals had completed (0 = 1 SD below the mean, 1 = 1 SD above the mean). We can see that the results for the engagement_score are almost identical to the results for high_engagement on the self-reported engagement scale (in black). For more discussion of the associations between and validation of the engagement measures, see our 2019 and 2020 reports.
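As a rough sketch of the scoring described above - the respondents, activity names, and numbers here are invented for illustration, not taken from the actual EA Survey data:

```python
import statistics

# Hypothetical respondents with binary activity flags (0 = no, 1 = yes),
# mirroring the coding described in the comment. All data is made up.
respondents = [
    {"attended_eag": 1, "in_group": 1, "took_pledge": 0, "read_book": 1},
    {"attended_eag": 0, "in_group": 0, "took_pledge": 0, "read_book": 1},
    {"attended_eag": 1, "in_group": 1, "took_pledge": 1, "read_book": 1},
    {"attended_eag": 0, "in_group": 1, "took_pledge": 0, "read_book": 0},
]

# Engagement score = number of activities completed.
scores = [sum(r.values()) for r in respondents]  # [3, 1, 4, 1]

mean = statistics.mean(scores)  # 2.25
sd = statistics.stdev(scores)   # 1.5

# Band respondents at +/- 1 SD from the mean, as described:
# at least 1 SD above the mean counts as high engagement,
# at least 1 SD below as low engagement.
def engagement_band(score):
    if score >= mean + sd:
        return "high"
    if score <= mean - sd:
        return "low"
    return "mid"

bands = [engagement_band(s) for s in scores]  # ['mid', 'mid', 'high', 'mid']
```

The point of a count-based score like this is that it avoids the self-report scale's wording entirely, which is what makes it a useful non-circular check.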
4
David_Moss
9mo
In addition to my other comment, I would add that there are two different things to distinguish:

  • 'More engaged people are more likely to say that EAG helped them get involved, because more engaged people are more likely to have attended EAG.' [As I understand it, this is not your claim.] I think this is broadly true. But the results are still not circular, since you can engage with a factor but not think it important for getting involved. As I noted in my other comment, many more highly engaged people have engaged with an EA book than an EA Group, but EA Groups trounce books in terms of importance for getting highly engaged EAs involved.
  • 'More engaged people are more likely to say that EAG helped them get involved, because the wording of the engagement scale includes attending EAG among the examples of Considerable engagement...' I think this is very unlikely to be true.

The crux seems to me to be: if we were to change the wording of the engagement scale so that EAG/EA Groups (or all the concrete examples) were removed from the items, would this change these results? I would predict that this would neither significantly change the pattern of low/high engagement respondents who have attended EAG/an EA Group, nor change the results in terms of what factors are important for getting involved in EA. As noted in my other comment, I think there are a number of reasons to think this, not least that attending an EA Group is not even mentioned among the high engagement categories in the wording of the question.
4
David_Moss
9mo
Thanks for the comment. I think the fact that the engagement scale makes reference to specific activities as examples is worth bearing in mind (we've discussed this before, going back to the 2019 post where we introduced the scale, the 2020 post, and more recently the comments on the Demographics post).

You're right that there could be an influence from 'the scale mentioning "attending an EA Global conference" as an example in one of the higher engagement categories' -> people who attend EAG being more likely to select (and so be counted as) higher engagement -> higher engagement people being more likely to say EAG was important for them getting involved (because they are more likely to have attended EAG).

However, I think there are a few reasons why the results are nevertheless not likely to be largely circular:

  • As implied above, this is somewhat indirect. The engagement measure refers to attending EA Global, but not whether EA Global was important for getting involved. So, in theory, it's possible for people to attend EAG but not select it as important for getting them involved. If people who attended EAG didn't report it being important for them getting involved, increasing the association between engagement and attending EAG would not help the association between engagement and reporting EAG being important.
  • Although EAG is mentioned as an example in the scale, it's possible to be highly engaged but not attend EAG (just over 45% of highly engaged respondents had not attended EAG), or to attend EAG but not be highly engaged (around 31.6% of EAG attendees). This is using 2020 data, since we did not ask about EAG attendance this year.
  • You mention "being a member of a group", but "regularly attending events at a local group" is given as an example of Moderate engagement, which would fall below High engagement in the binary categorisation we use in these analyses.
  • Similarly, engaging with "articles, videos, podcasts, discussions, or even

EA isn't a political party, but I still think it's an issue if the aims of the keenest members diverge from the original aims of the movement, especially if the barrier to entry to becoming a member is quite low compared to being in an EA governance position. I would worry that the people who would bother to vote would have much less understanding of the strategic situation than the people who are working full time.

Maybe we have had different experiences, I would say that the people who turn up to more events are usually more interested in the social sid... (read more)

For better and/or for worse, the membership organization's ability to get stuff done would be heavily constrained by donor receptivity. Taking EA Norway as an example, eirine's comments tell us that (at least as of ~2018-2021), "[t]he total income from the membership fee covers roughly the costs of organising the general assembly," that "board made sure to fundraise enough from private donors for" the ED's salary, but that most "funding came from a community building grant from the Centre for Effective Altruism (CEA)" (which, as I understand it, means Open... (read more)

5
James Herbert
10mo
Re divergence, there will always be people who want to move the movement in a different direction. More democracy just means more transparency, more reasoning in a social context,[1] more people to persuade, and a more informed membership. Hopefully, this stops bad divergence but still allows good pivots.

The downside is that everything takes longer. Honestly, this is perhaps my biggest worry about making things more democratic: it slows everything down. So, for example, the pivot from GHD to longtermism in EA's second wave would probably have taken much longer (or might not have occurred at all). If longtermism is true, and if it was right for EA to make that pivot, then slowing that pivot down would have been a disaster.

I don't think I understand why you think having a voting membership would mean more social events. Could you explain it to me? I think it would make the movement more responsive to what the community thinks is best for EA, and I think there's a case to be made that thousands of brains are better than dozens. This might mean more social events, but it might mean fewer. Let's have the community figure it out through democracy.[2]

Yes, people can definitely hold people to account without being members, but they have far less 'teeth'. They can say what they think on the forum, but that's very different from being able to elect the board members, or pass judgements as part of a general assembly.

  1. ^ See Sperber and Mercier's 'The Enigma of Reason' for why this might be a good thing
  2. ^ Personally, I think we should do fewer purely social events, but we should do more things that are both impactful and social.

I think one large disadvantage of a membership association is that it will usually consist of the most interested people, or the people most interested in the social aspect of EA. This may not always correlate with the people who could have the most impact, and it creates a definite in-group and out-group.

I'd be worried about members voting for activities that benefit them the most rather than the ultimate beneficiaries (global poor, animals, future beings).

3
James Herbert
10mo
Yes, these are things I worry about too!

First, about the risk of a membership association selecting for the people most interested in EA: the same holds for the current governance structure (but even more so). However, I don't think this is such a terrible thing. It can be an issue when you're a political party and you have a membership that wildly diverges from the electorate, thus hampering their ability to select policies/leaders that appeal to the electorate. But we aren't a political party.

Second, about the risk of a membership association selecting for those who are mostly interested in the social aspect of EA: I don't think this is necessarily the case. Do you think people join Greenpeace for the social side of things? You'd have to pay to become a member, and it would come with duties that, for most people, aren't very exciting (voting, following the money, etc). I'd be more worried about it selecting for people with political inclinations. But even then, it isn't a given that this would be a bad thing.

Lastly, about your worry that members would vote for activities that benefit them the most: this is perhaps the main reason I think we ought to consider a more democratic movement. After all, the same risk holds for the current governance structure (to err is human). A big benefit of a membership association is that you have mechanisms to correct this; a core duty of membership would be holding the leaders to account.

In my opinion, the biggest issue with making the movement more democratic is that it could make things complicated and slow. This might make us less effective for a while. But it might still be better in the long run.

A separate organisation just for CBGs would have been useful too, rather than a lot of one- and two-person teams with constant turnover.

I thought about this briefly a few months ago and came up with these ideas.

  • CEA - incubate CBG groups as team members until they are registered as separate organisations with their own operations staff
  • CEA but for professional EA network building (EA Consulting network, High Impact Engineers, Hi-Med, etc). They are even more isolated than CBGs which have some support from CEA
  • Rethink Priorities - One of the incubated orgs could do similar work to EV Ops (which is maybe what the special projects team is doing already, but it might be good to have something mor
... (read more)
4
Jason
10mo
I directionally agree with this, but am generally averse to putting medium-risk-or-above projects inside a big organization without a sufficiently clear upside to the risk.  As relevant here, turning CBG grantees into CEA employees could potentially create a lot of exposure for CEA for various things that happen in the groups that these people lead. I'd be much more comfortable with spinning off community building into relatively asset-light, special-purpose organizations, e.g., "EA Community Builders of the Bay Area / UK / Etc." I think you get many of the benefits of centralization that way without exposing the balance sheets of projects that need significant operating capital (and creating a potentially more attractive target for that reason).

I didn't vote, but there have been discussions of issues in richer countries that received votes, where the author pointed out how the issue fit into the context of effective altruism.

There have also been posts about mass media interventions but they generally refer to stronger evidence for their effectiveness.

1
Celina - Griffith Johnson
10mo
Thank you for the insightful comment. That was the purpose of my post, though it'll be implemented with or without the forum. It seems I would need to fit it more into the context of Effective Altruism. Should I not be sharing if I don’t have a wide scale solution to a problem, for the purpose of it needing to have only the biggest impact?

Thanks for diving into the data David, I think a lot of this might hinge on the 'highly engaged EAs' metric and how useful that is for determining impact versus how interested someone is in EA.

Are you also able to see if there are differences between different types of local groups (National/City/University/interest)?

2
David_Moss
11mo
Thanks David. I'm afraid I'd have to potentially get back to you about this (in terms of whether individuals in different types of groups differ), because this would require manually coding a lot of individual references to groups to determine group type.

I would go further and say that more people are interested in specific areas like AI safety and biosecurity than in the general framing of x-risks - especially senior professionals who have worked in AI/bio careers.

There is value in some people working on x-risk prioritisation, but that would be a much smaller subset than the eventual sizes of the cause-specific fields.

You mention this in your counterarguments, but I think it should be emphasised more.

Also Matt Clifford has written regularly about wanting to encourage more entrepreneurship and increasing growth

When I started community building, I would focus on the 20 people who turned up most regularly or whom I had regular conversations with, and on how I could help them improve their impact, often in relatively small ways.

Over time I realised that some of the people that were potentially having the biggest impact weren't turning up to events regularly, maybe we just had one conversation in four years, but they were able to shift into more impactful careers. Partially because there were many more people who I had 1 chat with than there were people I had 5 chat... (read more)

I guess the overlap is quite high for myself between 'impact' and 'impact as a community builder'.

2
EdoArad
11mo
Thanks, that makes sense. Can you say a bit about what has changed, and in what way you now focus more on impact?

Thanks for writing this post, I've been thinking about this framing recently. Although more because I felt like I was member-first when I started community building and now I am much more cause-first when I'm thinking about how to have the most impact.

I don't agree with some of the categorisations in the table and think there are quite a few that don't fall on the cause/member axis. For example, you could have member-first outreach that is highly deferential (GiveWell suggestions) and cause-first outreach that brings together very different people that disa... (read more)

2
EdoArad
11mo
(I generally don't feel that happy with my proposed definitions and the categorization in the table, and I hope other people could make better distinctions and framing for thinking about EA community strategy.)

I don't quite share your intuition on the couple of examples you suggest, and I wonder whether that's because our definitions differ or because the categorization really is off/misleading/inaccurate. For me, your first example shows that the relation to deference doesn't necessarily result from a choice of the overall strategy, but I still expect it to usually be correlated (unless strong and direct effort is taken to change focus on deference). And for the second example, I think I view a kind of "member first" strategy as (gradually) pushing for more cause-neutrality, whereas the cause-first is okay with stopping once a person is focused on a high-impact cause.
2
EdoArad
11mo
Do you mean, "the most impact as a community builder"?

I think the BOTEC is conflating being aware of EA with being an 'EA'.

Also, most people are usually optimising for other factors when choosing where to live, so the actual number is much lower than the table suggests.

1
Ben Dunn-Flores
1y
That's fair -- I was probably being quite optimistic there. If we split people into 3 samples:

  • 200 people who engage monthly
  • 800 people who open the mailing list regularly
  • 9,000 people who know what the letters 'EA' stand for

I'm hoping that the 200 would be willing to put in a reasonable amount of marginal effort for expected lifestyle improvements and greater donations. I'd expect that the next 800 would be willing to put in a little marginal effort for proven lifestyle improvements and donations. And I'd expect the next 8,000 to be willing to choose a co-op over a rented home if it wasn't any extra effort for themselves.

I think we can use the evidence from the first group to convince the next, while doing it for free, and then use revenue from the 800 group to build a product for the next 9,000. ... And from there scale up to the general population, offering something better than renting, as easy to access, and which donates an appreciable percentage to effective causes.
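The three-tier split above can be sketched as a simple back-of-the-envelope funnel. The tier sizes come from the comment; the uptake rates below are invented placeholders, purely for illustration:

```python
# Hypothetical funnel: three tiers of increasingly distant audiences.
# Sizes are from the comment; uptake rates are made-up assumptions.
tiers = [
    {"name": "engage monthly", "size": 200, "uptake": 0.25},
    {"name": "open the mailing list regularly", "size": 800, "uptake": 0.10},
    {"name": "know what 'EA' stands for", "size": 9_000, "uptake": 0.02},
]

# Expected participating households under these invented rates.
expected = sum(round(t["size"] * t["uptake"]) for t in tiers)
# 50 + 80 + 180 = 310
```

The point is only that even small uptake in the outermost tier can dominate the total, which is why evidence from the inner tiers matters for convincing the outer ones.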

You can make the Global Health tab your front page if that's the main content you want to see.

I meant the communities/organisations that overlap with EA but are focused on a specific cause, though it would be useful to connect people to less EA-related orgs like the Nuclear Threat Initiative, CEPI, etc.

It seems like there is less field building for existential risk, and also not that much within specific causes, compared with the amount of EA-specific field building there has been.

This seems to be changing though with things like the Summit on Existential Security this year, and updates being made by people at EA organisations (mentioned by @trevor1 in... (read more)

1
elteerkers
1y
Thank you!

Earlier in the post - 'We also sent out a survey to the Foresight community, which generated 41 responses from participants in our technical groups'

41 out of ~1,800 seems like an extremely low response rate - one would usually expect ~10%, from what I've heard. Combining that with the single female respondent, it seems to me that this survey is not particularly representative of their "STEM community".

In my example I was more referring to orgs like EVF, but I imagine if EA was more centralised there would be a range of larger orgs, some more like EVF and others more like Open Phil, who aren't incubating projects.

It seems that there would be more to be gained from building bridges between the STEM and existential risk communities rather than EA more broadly. 

EA has a lot of seemingly disconnected ideas that aren't as relevant to most people. Some will be interested in all of them, but most people will be interested in just a subset. Also with x-risk, some people will have much more interest in one of nuclear/AI/bio risks than all of them.

1
elteerkers
1y
Good point! Are there any other X-risk communities you think we should look at, other than the ones already active within EA?

I think it would be better to have 20 organisations with about 50 people each than 3 organisations with 50 people and then everyone else working as individuals. One organisation with 1000 people would probably be the worst option.

2
Linda Linsefors
1y
Thanks, that clarifies things. I'm still not sure what you mean by org - do you count CEA as an org, or EVF as an org? I think in terms of projects and people and funding; legal orgs are just another part of the infrastructure that supports funding and people.

I think it would be great if AI Safety Support were given enough funding to hire 50 people, and used that funding to provide financial security to lots of existing projects. Although that is heavily biased by the fact that I personally know and trust the people running AISS, and that their work style and attitude is aligned with mine. I might feel very differently about someone I did not trust as much getting that power.

There is info here - although tickets are pricey, £120 for 90 minutes.

I might have missed this but can you say how many people took the survey, and how many people filled out the FTX section?

7
Willem Sleegers
1y
Yes we can! And admittedly that should have been in there already. I've added the numbers to Footnote 1, which now has an extra sentence reading: "3567 respondents completed the EAS survey. Of these, 1012 EAS respondents were willing to answer the FTX section of questions and 300 respondents completed the separate FTX survey, resulting in an overall sample size of 1312 for the FTX-related questions."