Amber Dawn

4363 karma · Joined


I'm a freelance writer and editor for the EA community. I can help you edit drafts and write up your unwritten ideas. If you'd like to work with me, book a short Calendly meeting or email me. Website with more info:


Topic contributions

Let’s research some impactful interventions! Would you come to an intervention evaluation 101 learning-together event in London?

I want to run an event where we get together and do some quick-and-dirty intervention evaluation research, to learn more about how it works. I know nothing about this so we’ll be learning together!

Where: (central?) London
When: a mutually-agreed weekend day
What: I’ll come up with a structure loosely based on (some stages of?) the AIM/Charity Entrepreneurship research process. We’ll research and compare different interventions addressing the same broad problem or cause area. For example, we might start by quickly ranking or whittling down a long list in the morning, and then do some deeper dives in the afternoon.  We’ll alternate between doing independent research and discussing that research in pairs or small groups.

If you're interested in coming, please DM me: I’ll use a WhatsApp chat to coordinate. No need to firmly commit at this stage!

I hope to:

-Better understand how charity evaluators and incubators such as GiveWell and AIM form their recommendations, so I feel more empowered to engage with their research and can identify personal cruxes

-Learn how to assess interventions in areas that I think are promising, but that haven’t been discussed or researched extensively by EAs

-Just learn more about the world?

The event could also be useful for people who want to test their fit for EA research careers, though that’s not my own motivation.

What cause area?
We’d vote on a cause area beforehand. My vision here is something like ‘an area in global health and development that seems very important, but that hasn’t been discussed, or has been discussed relatively minimally, by EAs’.

What next?
If this goes well, we could hold these events regularly and/or collaborate and co-work on more substantive research projects.

Is there a point to this? 
AIM’s process takes ~1300 hours and is undertaken by skilled professional researchers; obviously we’re not going to produce recommendations of anywhere near similar quality. My motivation is to become personally better-informed and better-engaged with the nitty gritty of EA/being impactful in the world, rather than to reinvent the GiveWell wheel. 
That said, we’re stronger together: if 50 people worked on assessing a cause area together, they’d only have to spend 26 hours each to collectively equal the AIM process. 26 hours isn’t trivial (and nor is assembling 50 people), but it’s not crazy implausible either. If collectives of EAs are putting AIM-level amounts of hours into intervention evaluation in their spare time, that seems like a win?

Ugh, bad luck Saulius, I totally feel your frustration. I've had a few bouts of covid where I tested positive for over two weeks. It's really frustrating to have to miss out on important things when it's unclear whether you're even infectious, and also unclear whether others are taking similar precautions.

It sounds like you've made your decision but fwiw, in your position I'd tell people about my covid status and offer them outdoor meetings if they were comfortable with that. 

Yeah, that's what I hoped. I couldn't honestly say that I would care about these labels (cos I don't eat animal products anyway), but I said stuff like 'consumers would like to know this', which I think is true.

That's interesting! 

As a follow-up: in consultations you've been involved with, did they put weight on the thoughts of random members of the public, assuming the thoughts were sensible ofc?

I have a few thoughts on this.

First, it's definitely worth considering if you're contributing to conversations, but as others have said, I don't think the bar has to be "your post is as well-thought-out and detailed as a Scott Alexander post on the same topic". I basically trust the Forum's karma system + people's own judgment of what's valuable to them to effectively filter for what's worth reading, so I don't think writers have to do that themselves. If your post isn't valuable to individuals, they won't read it or upvote it.

A way you can see this is: if you write the thing, people can choose not to read it, but if you don't write it, they can't choose to read it. I feel like what you are doing is similar to how some EAs are like 'oh I won't apply to that job because I don't want to waste the org's time and surely I'm not a good candidate'. Well, that's true for some jobs, but most orgs want people to apply, even if they are uncertain, and they'll do the filtering themselves! 

Second, maybe if you're worried about diverting traffic from posts you see as better, you could incorporate those posts into your own and link them/give them a shout-out.

E.g.: [at the end of the post] "if you're interested in this topic, I found this post by [NAME] super helpful in clarifying my thoughts."
E.g.: [at the start of the post] "I really enjoyed this post by [NAME] on [TOPIC], and it inspired me to write up some more arguments about [TOPIC] that [NAME] didn't go into"

i.e. frame your post as a "yes and" or as a contribution to an ongoing conversation, rather than something designed to compete with, or be as good as, other posts. 

NON-example: "If you care about this topic you should probably read this post which is waaaay better than mine I'm sure" self-flagellate, self-flagellate

Third, would it help to frame your writing (to yourself, or explicitly in the post) as a way for you to clarify your own thinking, rather than as something that has to make an original argument? For example, Holden Karnofsky has talked about 'learning by writing': maybe you are doing a version of that, rather than being at the absolute cutting edge of research. You might say 'well, in that case, I don't need to publish it', and it's true you don't have to publish anything, but some reasons to publish this sort of writing might be:

-it might be helpful, not for experts, but for others with similar expertise to you (or less) who are trying to clarify their own thinking on the matter
-you can get feedback from commenters that might help you learn
-the fact of having Published a Thing might motivate you to do more of this


FWIW I'm happy this question was asked publicly: I had no idea about this ruling (which is just extremely cruel and unhelpful) and this is a serious inclusion issue. 

Yeah, this is a good point: you can go a long way with just commitment/agency/creativity/confidence/?

I mean, maybe people who are strong in those traits aren't really "mediocre", no?

But yeah, this is a good reminder that excellence isn't just one axis.

Answer by Amber Dawn

I’ve been thinking about this quite a bit recently. It’s not that I see myself as a “mediocre” EA, and in fact I work with EAs, so I am engaging with the community through my work. But I feel like a lot of the attitudes around career planning in EA sort of assume that you are formidable within a particular, rather narrow mould. You talk about mediocre EAs, but I’d also extend this to people who have strong skills and expertise that’s not obviously convertible into ‘working in the main EA cause areas’.

And the thing is, this kind of makes sense: like, if you’re a hardcore EA, it makes sense to give lots of attention and resources to people who can be super successful in the main EA cause areas, and comparatively neglect people who can’t. Inasmuch as the community’s primary aim is to do more good according to a specific set of assumptions and values, and not to be a fuzzy warm inclusive space, it makes sense that there aren’t a lot of resources for people who are less able to have an impact. But it's kind of annoying if you're one of those people! 

Or like: most EA jobs are crazy competitive nowadays. And from the point of view of "EA" (as an ideology), that's fine; impactful jobs should have large hiring pools of talented committed people. But from the point of view of people in the hiring pool, who are constantly applying to and getting rejected from EA jobs - or competitive non-EA jobs - because they've been persuaded these are the only jobs worth having, it kinda sucks.

There’s this well-known post ‘don’t be bycatch’; I currently suspect that EA structurally generates bycatch. By ‘structurally’ I mean ‘the behaviour of powerful actors in EA is kinda reasonable, but also it predictably creates situations where lots of altruistic, committed people get drawn into the community but can’t succeed within the paradigms the community has defined as success’. 

Thanks for writing this! I’ve long been suspicious of this idea but haven’t got round to fully investigating either the claim itself or my skepticism of it, so I super appreciate you kicking off this discussion.

I also identify with ‘do I disagree with this empirically, or am I just uneasy with the vibes/frame? And how do I tease those apart?’

For people who broadly agree with the idea that Sarah is critiquing: what do you think is the best defence of it, arguing from first principles and data as much as possible?

I have a couple of other queries/scepticisms about the power-law argument. I haven’t read all the other comments, so sorry if I repeat stuff said elsewhere.

1. Does it empirically hold up even assuming you can attribute stuff to individuals?
You focus a lot on critiquing the conceptual idea of the individual impact of one person (since most actions happen in the context of other actions and actors). I think I also have empirical disagreements with the claim, even if we can tease out what impact comes from which person.

It feels to me like EAs sometimes over-generalize that finding from global health interventions — where I don’t doubt that it holds up — to other domains, where it hasn’t been established (e.g., orgs working in longtermist causes, or people compared to their peers, or actions one takes in one’s career). It’s possible that there *is* more discussion and substantiation of this idea out there, but I just haven’t seen it.

Like, even if we accept that (per your example) the President does have much more impact than the average person, or (per Jeff’s example above) a larger donor has more impact than a smaller donor to the same charity, can I generalize that to the actions available to me personally, or to questions of how impactful ‘overall’ I can be compared to my peers? What’s the empirical justification for such generalizations?

2. Is the bar low? Does this depend on how you define the space?

Benjamin Todd, in the article you linked, claims that the power-law pattern has been found in many areas of social impact. I’m sure this is true, but I want to point out that this is kind of contingent, not a law of nature. E.g., I’d guess this is due to some combination of ‘there’s not a culture of measuring outcomes and prioritization in general philanthropy’ (that’s kind of the whole point of EA) and/or ‘the world is very complicated and it’s hard to know ex ante (and sometimes even ex post) what will work/what did work’. 

Like, if there were a culture shift in philanthropy across the board meaning that interventions would only be funded or carried out if they met some effectiveness bar, would we still expect interventions to be power-law distributed? Surely less so?

To frame this another way, imagine I said to you ‘the nutritional value of foods follows a power-law distribution’, and you were like ‘hmm’, but then it turned out that among ‘foods’ I was counting inedible objects like chairs and rocks and grass. So yes, only a minority of objects have most of the nutritional value, but anything we’d call food is in the heavy tail, and this is a kind of silly frame.

This point isn’t fully worked out but yeah, I wonder if ‘what counts as the distribution’ is kind of socially constructed in a way that’s not always helpful.  

I guess I weakly disagree: I think that motivation and already having roots in an issue really are a big part of personal fit - especially now that lots of "classic EA jobs" seem highly oversubscribed, even if the cause areas are more neglected than they should be. 

Like, to make this more concrete: imagine your climate-change-motivated young EA thinks 'well, now that I've learnt about AI risk, I guess I should pursue that career?', but they don't feel excited about it. Even if they have the innate ability to excel in AI safety, they will still have to outcompete people who have already built up expertise there, many of whom will find it easier to motivate themselves to work hard because they are interested in AI.

(On the object level, I assume that many roles in climate change and gender equality stuff are in fact more impactful than many roles in more canonical EA cause areas). 

