
Currently, the EA movement tracks human capital and financial capital, using metrics like “number of engaged EAs” or “amount of money pledged toward EA causes”. But other forms of capital seem just as important, yet less understood and less measured. Plausibly, part of the explanation for this neglect is measurability bias (a streetlight effect).
 

Network capital

Network capital is the existence and strength of links between people in a social network. Links can have different forms; a very basic one is the ability to get another party’s attention and time, or tacit permission to reach out to them. Other types can include trust, degree of ability to model the other party, and so on.


It would be good to think about what kinds of network capital the movement is lacking and what kinds will be useful in future, but also about how to find the shortest paths through implicit networks that are not available as data.
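If the implicit network were available as an explicit graph, that last question would reduce to a standard shortest-path problem. A minimal sketch, with all names and edges hypothetical; the hard part in practice is that the graph is not written down anywhere:

```python
# Minimal sketch: shortest chain of warm introductions in a contact graph.
# All names and edges are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"),       # e.g. former colleagues
    ("bob", "carol"),       # e.g. met at a conference
    ("carol", "minister"),  # e.g. worked together in the civil service
    ("alice", "dave"),
])

# Shortest chain of introductions from alice to the target.
print(nx.shortest_path(G, "alice", "minister"))
# ['alice', 'bob', 'carol', 'minister']
```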

In some sense the standard EA focus on broad career capital, recruitment from elite schools, and elite expertise already builds a lot of network capital. What seems less clear is our ability to use it effectively to do good.

On one occasion, a small group of EAs went through the list of people Dominic Cummings follows on Twitter, and found that, between us, we had met or worked with a third of them.

Example: the EA bet on the civil service. A main effect of EAs entering and climbing the civil service is reducing the rest of EA's distance from power centres. Each EA civil servant is then a network capital multiplier for the rest of us. A counterpoint is that this tends to reduce the chain between an EA and power from x people to 3 people, but in catastrophes it is far better to go from x to 1 (that is, EA → minister directly).

Structural capital

Structural capital is the ability of the holder to absorb resources (e.g. people or money) and turn them into useful things. It takes various forms:

  • functional and scalable processes, 
  • competent management,
  • suitable legal status and backing,
  • good operations support,
  • well-designed spaces,
  • well-written code.
     

On this framing, it may make sense to ask questions like:

  • How much of each of these forms of capital do we have?
  • How are they distributed?
  • When we convert between different forms, or substitute one form of capital for another, what are the conversion rates?
  • Are we using the different forms of capital efficiently?
     

This post is part of a series explaining my part in the EA response to COVID, my reasons for switching away from AI alignment work for a full year, and some new ideas the experience gave me. It was co-written with Gavin Leech.

Comments



Thanks! Lately I've also been thinking about concepts such as network capital or social capital. More specifically, I've been thinking about Chetty's work on social capital and economic mobility. I think this could be useful to help us think about 'impact-mobility'.

What is impact-mobility? If economic mobility is the ability of an individual to improve their economic status (usually measured in income), then impact-mobility is the ability of an individual to improve their impact-status (perhaps measured in QALYs achieved, or whatever). Presumably, we want our EA communities to have high impact-mobility.

How might we increase impact-mobility?

According to Chetty's research, the share of high socioeconomic status friends among individuals with low socioeconomic status (SES) is among the strongest predictors of upward income mobility identified to date. His team terms this 'economic connectedness'. 

I think something similar could be said of the EA community. The share of high impact-status friends among individuals with low impact-status could be one of the strongest predictors of upward impact-mobility. This could be referred to as 'impact-connectedness'.

In a companion paper, Chetty's team analyse the determinants of economic connectedness. 

They show that about half of the social disconnection across socioeconomic lines (measured as the difference in the share of high-SES friends between people with low and high SES) is explained by differences in exposure to people with high SES in groups such as schools and religious organisations.

The other half is explained by friending bias—the tendency for people with low SES to befriend people with high SES at lower rates even conditional on exposure. Friending bias is shaped by the structure of the groups in which people interact. For example, friending bias is higher in larger and more diverse groups and lower in religious organizations than in schools and workplaces.
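To make the two components concrete, here is a minimal sketch in Python; the group, statuses, and friendships are invented for illustration, and the definitions simplify the ones used in the papers:

```python
# Minimal sketch of exposure and friending bias, simplified from Chetty et al.
# All members, statuses, and friendships are invented for illustration.

def exposure(group, high_status):
    """Share of a group's members who are high-status."""
    return sum(m in high_status for m in group) / len(group)

def friending_bias(friends, group, high_status):
    """1 - (share of high-status friends) / (high-status exposure in the group).
    Zero means friendships form at the rate exposure alone predicts;
    positive means high-status peers are befriended at a lower rate."""
    share_high_friends = sum(f in high_status for f in friends) / len(friends)
    return 1 - share_high_friends / exposure(group, high_status)

high_status = {"H1", "H2", "H3"}
group = ["H1", "H2", "H3", "L1", "L2", "L3"]  # 50% high-status exposure
friends_of_L1 = ["L2", "L3", "H1"]            # 1/3 high-status friends

print(exposure(group, high_status))                       # 0.5
print(friending_bias(friends_of_L1, group, high_status))  # ~0.33
```

The EA analogue would simply substitute impact-status for SES.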

So, transferring this to EA, we might want to build communities that expose low impact-status individuals to high impact-status individuals (inter-status exposure), and do it in such a way that friending bias is low. This would result in an EA community with high impact-connectedness, and thus high impact-mobility. 

How might we increase inter-status exposure and decrease friending bias?

We can look at what increases economic connectedness to help us think about what might increase impact-connectedness. 

Regarding inter-status exposure (the socioeconomic composition of the groups to which people belong), Chetty et al cite several policy efforts we might look at: busing programmes aimed at integrating schools; zoning and affordable housing policies aimed at integrating neighbourhoods; and college admissions reforms to boost diversity on campuses. What might the EA equivalents be?

Regarding friending bias (the rate at which cross-SES friendships are formed conditional on exposure), interventions have been studied less frequently. However, Chetty et al do suggest this is shaped by social structures and institutions and can therefore be influenced by policy changes. They list several examples.

  1. Changes in group size and tracking: Berkeley High School (BHS) tackled within-school segregation and friending bias by assigning students to small, intentionally diverse 'houses' or 'hives' in the ninth grade. This approach focuses on the way students are tracked and the size of their groups to encourage more inclusive interactions.
  2. New domains for interaction: Programs and venues promoting cross-SES interactions can help reduce friending bias. An example is the Boston gym Inner City Weightlifting, which recruits personal trainers from lower-SES backgrounds to coach affluent clients. This approach flips power dynamics, bridges social capital, and fosters genuine inclusion. Peer mentoring programs and internship opportunities can also contribute to reducing friending bias.
  3. Restructuring of space and urban planning: Lake Highlands High School in Texas identified its building architecture as a barrier to cross-SES interaction. A large-scale construction project created a single cafeteria and more spaces for all students to interact, encouraging encounters between students from different social groups. Architecture and urban planning can play a role in reducing friending bias outside schools through social infrastructure, public parks, and public transit.

What might the EA equivalents be here?

I was originally skeptical of drawing a direct analogy between economic mobility and impact mobility, but after reading the paper I think the mechanisms seem pretty similar: upward income mobility comes from increased inter-economic-status exposure, which increases the exposure of lower-income people to opportunities outside of their communities and ways to attain them – this shapes aspirations and provides access to these opportunities.

This mechanism seems similar to the process I went through to start doing EA work: I met one specific person who was doing something really cool and impactful, and then realised this was something that was achievable for someone like me. Then I met more people, started a project, and now I'm still doing that.

I think EA equivalents for inter-status exposure could be through things like reading groups, fellowships, and conferences; friending bias can be reduced through activities like speed-friending, mentoring, and meet-ups, but I think there could definitely be more programs to introduce "new EAs" to people doing impactful work. For larger groups, perhaps a coffee roulette would do the trick?

Also, this line in the paper caught my eye: 

For other outcomes [other than increasing economic mobility], other social capital indices that we construct here may be stronger predictors. For example, differences in life expectancy among individuals with low income across counties are more strongly predicted by network cohesiveness measures (clustering coefficients and support ratios) than EC [economic connectedness].

I wonder if there could be a tenuous analogy from a prediction of life expectancy in this study to something like the longevity of engagement with EA. Highly unsure about this – the mechanisms are likely to be very different!
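For anyone unfamiliar with the cohesiveness measures the quote names, here is a tiny illustration of one of them, the clustering coefficient (how often a person's friends are also friends with each other); the edges are hypothetical:

```python
# Tiny illustration of one cohesiveness measure: the clustering coefficient.
# Edges are hypothetical.
import networkx as nx

# a, b, c form a triangle; d hangs off c.
G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])

print(nx.clustering(G))          # {'a': 1.0, 'b': 1.0, 'c': 0.33..., 'd': 0}
print(nx.average_clustering(G))  # ~0.58
```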

Thanks for highlighting possible similarities between mechanisms, that's an important part I forgot to cover! 

Another inter-status exposure intervention I quite like is the use of EA co-working spaces. I only have vibes to back this up, but I think this is where a lot of the value of our Amsterdam space lies. 

That's an interesting point about the relationship between network cohesiveness and longevity of engagement with EA; intuitively it feels right.

I love the framing of "structural capital", and would tentatively state that EA as a movement has much less structural capital than I would expect, relative to its amount of financial/human/network capital. In fact, I would argue that EA is bottlenecked on structural capital.

It seems to me like EA has a ton of money, a bunch of really smart people, and the ear of decisionmakers... but has had at best mixed results converting this into effective organizations, good ops, or good code. This is relative to my experience in the Silicon Valley tech scene, which feels like the best point of comparison. (You may draw different conclusions compared to, e.g., academia.)

One question I would be very interested in: how much of the money & people are being spent acquiring more money & people, vs being converted into structural capital?

Setting up an information market for these questions here:

Then the follow-up question would be: what % of EA money/FTE SHOULD be spent on gaining structural capital?

Fleshing out the argument more:

  • "Structural capital is the ability of the holder to absorb resources (e.g. people or money) and turn them into useful things". What useful things has EA produced (exclusive of fundraising and converting more EAs)? I think e.g. the outcomes around developing world health interventions are really great, but it's not clear to me how much of that is counterfactually attributable to EA; would the Gates foundation or somebody else have picked it up anyways?
  • Competent management: it feels like excellent management and managers are in short supply; there are a lot of people who do direct work (research, community work), but few managers and even fewer execs on the level of "VP or director at a top series-A Silicon Valley startup".
  • Well-written code: maybe the comparison to SV is especially harsh here, but I've been thinking that EA needs better software (still WIP). Software is an incredibly high-leverage activity, and I'd claim that e.g. most of the world's productivity gains in the last two decades can be attributed to software; but EA draws from a philosophical/academic tradition and thus wayyy overvalues "blogging" over "coding".

Good post. I would add a notion of idea pervasiveness in the public consciousness. What I mean is how often people think along EA-consistent lines, or make arguments around dinner tables that explicitly or implicitly draw upon EA principles. This will influence how EA-consistent government policy is. Ideas like democracy, impartial justice, and freedom of religion have strong pervasiveness. You could measure it by surveying people about whether they have heard of EA, and if so, whether they would refer to it in casual conversations, or whether they think it would influence their actions. You could benchmark the responses by asking the same questions about democracy or some other ubiquitous idea.

I like this line of thinking! I'll be entering the civil service for my next career move, and being new to the EA community has got me thinking along these lines - I've been asking myself 'how can synergies be created at these intersections?'.

Thank you for posting these great explanations! I realized when explaining EA Angels to someone today that the main benefit of a successful for-profit startup is not always financial capital. After reading this, I think it might be the network and structural capital.
