Angelina Li

Data Analyst @ Centre for Effective Altruism
Working (0-5 years experience)

Bio

Hiya! I work on data stuff at CEA. I used to be the content lead on the EA Global team there, and before that I did economic consulting. Here's an old website I might update at some point.

Think I'm making a mistake? Want to give me feedback? Here's my admonymous.

Comments (28)

Thanks for these suggestions!

(P.S. I am unable to access the underlying data at the time of writing this comment, nbd)

Thanks for flagging — I've updated the link. Let me know if you have further issues.

Here are just the headings from the updates + implications sections, lightly reformatted. I don’t necessarily agree with all/any of it (same goes for my employer).

Updates

Factual updates (the world is now different, so the best actions are different)

  • Less money — There is significantly less money available
  • Brand — EA/longtermism has a lot more media attention, and will have a serious stain on its reputation (regardless of how well deserved you think that is)
  • Distrust — My prediction is that if we polled the EA community, we’d find EAs have less trust in several institutions and individuals in this community than they did before November. I think this is epistemically correct: people should have less trust in several of the core institutions in the community (in integrity; in motives; in decision-making)

Epistemic updates (beliefs about the world I wish I’d had all along, that I discovered in processing this evidence)

  • Non-exceptionalism — Seems less likely that a competent group of EAs could expect to do well in arbitrary industries / seems like making money is generally harder (which means the estimate of future funding streams goes down beyond the immediate cut in funding)
  • Dangerous ideas — We should be more worried that aspects of our memeplex systematically increase the risk of people taking extreme actions that are harmful
  • By the book — The robustness that comes from doing things by the book seems more important
  • Uncompromising utilitarianism — We should be more worried about people orienting to utilitarian arguments in absolutist ways that don’t admit other heuristics
  • Tribalism — I’m more worried that people identifying as EAs is net destructive
  • Conflicts — I’ve moved towards thinking conflicts of interest, broadly understood, are frequent and really guide people’s thinking
  • Integrity — I think that upholding consistently high standards of integrity is particularly important
  • Taking responsibility — Diffusion of responsibility for cross-cutting issues for the EA community can mean nobody works on them
  • Complicity — Tacit tolerance of bad behaviour is a serious issue

Implications

Implications for object level work:

  • We should be a bit more positive on people doing crucial work within established institutions
  • We should have a somewhat higher bar for funding things
  • We should consider lower salaries
  • We should care a bit more that plans look robustly good
  • We should be a bit more positive on research distillation

Implications for community-building activities:

  • Content (reading lists, talks, etc.) should:
    • Bit more positive on content from outside EA
    • Bit more tools-driven, and a bit less answers-driven
    • Bit more emphasis on the value of looking at things from several perspectives
    • Focus a bit more on social epistemology
  • The vibe of community-building activities should:
    • Lean a bit further away from encouraging people to identify as EA
    • Lean a bit further away from “we have the answers” and towards “we’re giving you the questions”
    • Send somewhat fewer in-group signals
    • Focus on building a culture which is high-integrity
    • Focus on building a culture which treats consequentialist analysis as just one tool in the toolkit
    • Focus on building a culture which asks people to make sure they know who has responsibility for things
  • Structurally, community-building activities should:
    • Put somewhat lower estimates on the monetary value of outcomes or programs
    • Be more transparent about these valuations and other tools for decision-making about community building
    • Scale down activities a little (or slow the growth trajectory)
    • Scale down salaries a bit

Implications for central community coordination:

  • We should lean a bit further towards professionalism
  • We should lean a bit further towards transparency
  • We should consider creating mechanisms for anonymously sharing updates/impressions
  • Orgs should be very explicit about what they are and aren’t taking responsibility for
  • Coordination mechanisms should facilitate making sure someone is taking responsibility for important things
  • We should ensure that people can access some core discussions by application, not just by networking
  • We should lean a bit more towards legible invite criteria, especially for flagship events like Coordination Forum
  • We should lean a bit further towards frugality

Implications for governance:

  • We should increase oversight of projects and decisions
  • We should increase transparency of governance
  • We should err towards doing more impact analyses
  • Projects and orgs should invite accountability primarily for whether they took responsibility for the right things, and how those things went
  • We should give less weight to straightforward consequentialist PR arguments 
  • We should spread governance work over more people

Thanks! I conducted most of the analytics underlying the post. I sympathize with the issue you point out here! The explanation is kind of boring: the data has limitations that make more granular analyses tricky.

In 2022, the EA Global team collected race/ethnicity data exclusively through free-response fields in the application and feedback forms. For this post, we asked assistants working for the events team to hand-code each unique response into two fields: (i) whether or not the respondent is a person of color (POC), and (ii) which US Census race/ethnicity category the response corresponds to. On (ii), I chose this mostly to be consistent with how e.g. the EA Survey in 2020 coded race/ethnicity data, and to allow for easier further analysis.

This secondhand categorization is necessarily less accurate than what people would have marked themselves. In particular, our disaggregated race/ethnicity counts are probably less accurate than the “is POC” / “not POC” labeling. As an example, if someone reports they are “Thai / Indian”, I don’t have great guesses for whether they would have marked themselves down as “Asian” or “Multiracial”, but it seems fairly likely to me that they would fit under the “people of color” umbrella. Incidentally, I suspect this kind of issue might be why the EA Survey reports a much larger percentage of multiracial EAs than we do in our attendance numbers.
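To make the coding step concrete, here is a minimal sketch of the kind of lookup involved (the entries and names below are hypothetical illustrations, not our actual coding table):

```python
# Hypothetical sketch of the hand-coding step described above. The real
# coding was done by events-team assistants; these entries are made up.

# Each unique free-text response maps to (is_poc, census_category).
CODING_TABLE = {
    "White / Caucasian": (False, "White"),
    "Thai / Indian": (True, "Asian"),  # ambiguous: "Multiracial" is also plausible
    "Black British": (True, "Black or African American"),
}

def code_response(response):
    """Return the hand-coded (is_poc, census_category) pair for a response."""
    return CODING_TABLE.get(response, (None, "Unknown"))

print([code_response(r) for r in ["Thai / Indian", "White / Caucasian"]])
# [(True, 'Asian'), (False, 'White')]
```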

For speakers, as mentioned in the footnotes, most did not give us race/ethnicity data, and so I hand-coded a binary "is POC" flag myself. For a variety of reasons, coding a more granular flag would have taken much more effort, so we skipped that exercise.

As a second general problem, all of the data we are working with is pretty small: splitting the race/ethnicity data up more granularly makes each cohort smaller, and doing meaningful statistics on small samples is hard.
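To illustrate with assumed numbers (not our actual data), here is how wide the uncertainty gets on a rate estimated from a small cohort:

```python
# Illustration of the small-sample problem, using assumed numbers
# rather than our actual attendance data.
from statsmodels.stats.proportion import proportion_confint

# A cohort of 15 people, 5 of whom share some attribute:
low, high = proportion_confint(count=5, nobs=15, alpha=0.05, method="wilson")
print(f"5/15 -> 95% CI ({low:.2f}, {high:.2f})")  # roughly (0.15, 0.58)
```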

For the two reasons above, we mostly presented findings at the less granular level here. We might eventually take a look at this question, but I expect it would be a non-trivial lift, so we are currently not prioritizing it over other projects.

As an aside, the events team as a whole is conscious of the dynamic where the term “people of color” hides some important nuance, and doesn’t try to optimize for only this binary categorization when thinking about diversity considerations. (I no longer work on the EA Global team and am passing this on from speaking with the team.)

Wow, thank you so much for this! I was looking for exactly this type of product a couple of months ago, and was feeling frustrated at the lack of good options in this niche.

Really excited to try this out!

It'll be hard to see you go, Max!

I’ve loved being a part of a culture where staff are valued and empowered to do things.

I think of you as playing a major part in creating that culture (thank you :) ). I remember being really impressed when joining CEA how you take time to individually message & appreciate staff, meet regularly with everyone 1:1, and take staff feedback really seriously.

I admire you a ton, and I'm sorry this whole thing has taken such a toll. That really sucks. I'm very glad you're getting more rest these days, and am excited to hear about what you do next!

Some low effort thoughts:

  • If this is meant as a living resource, maybe move the first 2-3 paragraphs to the bottom of the post, and leave just a one-line explainer at the top, to make it easier to skim ("There are now more free or discounted services available to EAs and EA orgs. Here is an updated list, which is mostly a repost of [this].")
  • Maybe worth linking to your anti-karma-farming comment in the post so people can find it more easily?

Other things that might belong here:

My impression is that AISS does a bunch of things outside of the health consulting thing fwiw, like maintaining this and this.

Thanks, this is an interesting post! (I was going back through some posts on aquatic animal welfare and came across this.)

I think the crux for me is:

I assume that the welfare gains of the fish stunner (the elimination of asphyxiation suffering of a small wild fish), is equivalent to the welfare gains of a cage-free system or higher welfare broiler breed for a chicken during a half-day.

I think this paragraph is hiding a lot of the work for me. I'd be interested in reading a follow-up (even in a quick BOTEC-y form) on:

  • How much better (per individual) do we expect stunning to be, relative to asphyxiation?
  • How should we weight this reduction in suffering relative to suffering in other animals?

I guess I don't know enough to assess whether the heuristic of comparing "stunning v. asphyxiation" to "half-day of cage-free / higher welfare breed v. baseline" is sensible.
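For what it's worth, here is the shape of the BOTEC I'd find useful. Every number below is a placeholder assumption of mine, not an estimate from the post:

```python
# Sketch of the BOTEC structure I have in mind. All numbers are
# placeholder assumptions, not estimates from the post or from me.

# (1) How much better is stunning than asphyxiation, per fish?
asphyxiation_minutes = 30        # assumed duration of asphyxiation suffering
suffering_intensity = 1.0        # assumed intensity, arbitrary units
stunning_gain_per_fish = asphyxiation_minutes * suffering_intensity

# (2) How should we weight fish suffering against chicken suffering?
fish_vs_chicken_weight = 0.1     # assumed relative moral weight

# Benchmark: cage-free / higher-welfare gain for one chicken over half a day.
chicken_halfday_gain = 12 * 60 * 0.05  # minutes * assumed intensity delta

ratio = stunning_gain_per_fish * fish_vs_chicken_weight / chicken_halfday_gain
print(f"One stunned fish ~ {ratio:.2f} chicken half-days of welfare gain")
```

Under these particular made-up numbers the assumed equivalence would be off by roughly an order of magnitude, and other defensible inputs flip it the other way, which is why I'd find the explicit version valuable.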

There are multiple regions where this can be done: Peruvian coasts, Black Sea, Sea of Japan, Yellow Sea, and East China Sea, South African coasts etc. 

Also curious where this list of examples comes from?

Have you considered brushing this up and sending it to Charity Entrepreneurship as an idea to consider incubating as a charity (or applying yourself if you think you would be a good fit)?

I found this an interesting framing, thank you! I hadn't heard of the Multidimensional Poverty Index before.

(1) Do you know how widely this measure is currently used in e.g. development research or charity evaluation? I was kind of surprised at how specific some of the components of the index are (e.g. I imagine the below is kind of hard to straightforwardly calculate based on past surveys -- not sure if all of these questions are standard to ask?).

Deprived if the household does not own more than one of these assets: radio, TV, telephone, computer, animal cart, bicycle, motorbike or refrigerator and does not own a car or truck.
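For concreteness, that single indicator already compiles into something like the following (a sketch based only on the quoted definition; the asset labels are just ones I made up):

```python
# Sketch of the asset-deprivation indicator quoted above.
SMALL_ASSETS = {"radio", "tv", "telephone", "computer",
                "animal cart", "bicycle", "motorbike", "refrigerator"}

def asset_deprived(owned):
    """Deprived if the household owns at most one of the small assets
    and owns neither a car nor a truck."""
    owns_small = len(set(owned) & SMALL_ASSETS)
    owns_vehicle = bool(set(owned) & {"car", "truck"})
    return owns_small <= 1 and not owns_vehicle

print(asset_deprived(["radio"]))             # True: one asset, no car/truck
print(asset_deprived(["radio", "bicycle"]))  # False: two small assets
```

i.e. a survey would need to have asked about each of those ten items individually, which is what makes me unsure past surveys support it.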

(2) Minor point: I wonder if you will reach more of your intended audience by changing the title of this post to "The Capability Approach (to Improving Human Welfare)" or something. I initially pattern-matched the word "capability" in this title onto something about AI, since I think on the EA Forum folks talk more about capability in terms of AI systems than anything else.
