
David Mears

1429 karma · Joined Aug 2021 · Working (6-15 years)

Comments (152)

I'm taking away that how much I believe the results is highly sensitive to how I decide to model the distribution of actual intervention quality, and how I decide to model the contribution of noise.

How would I infer how to model those things?
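
A minimal sketch of the sensitivity in question (not from the thread; `noise_sd` and both priors are illustrative assumptions): simulate true intervention quality under a thin-tailed and a heavy-tailed prior, measure each intervention with the same Gaussian noise, and compare the average true quality of whichever intervention measures best.

```python
# Sketch: how the modelling choice for true intervention quality changes
# what we should believe about the top measured intervention.
# Assumptions (mine, for illustration): Gaussian noise with sd 1; a
# standard-normal prior vs. a recentred lognormal (heavy-tailed) prior.
import numpy as np

rng = np.random.default_rng(0)
n_interventions, n_trials = 100, 10_000
noise_sd = 1.0  # illustrative noise scale

def winners_true_quality(true_quality: np.ndarray) -> float:
    """Average true quality of whichever intervention measures best."""
    noisy = true_quality + rng.normal(0.0, noise_sd, true_quality.shape)
    winners = np.argmax(noisy, axis=1)
    return true_quality[np.arange(len(winners)), winners].mean()

# Prior 1: true quality is thin-tailed (standard normal).
normal_q = rng.normal(0.0, 1.0, (n_trials, n_interventions))
# Prior 2: true quality is heavy-tailed (lognormal, recentred to mean 0).
lognormal_q = rng.lognormal(0.0, 1.0, (n_trials, n_interventions)) - np.exp(0.5)

print("winner's true quality, normal prior:   ", winners_true_quality(normal_q))
print("winner's true quality, lognormal prior:", winners_true_quality(lognormal_q))
```

Under the heavy-tailed prior, the top measured intervention really does tend to be far better than the rest; under the thin-tailed prior, much of its apparent edge is noise. The same measurements support quite different conclusions depending on this one modelling choice.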

As a snapshot of the landscape 1 year on, post-FTX:

80,000 Hours lists 62 roles under the skill sets 'software engineering' (50) and 'information security' (18) when I use the filter to exclude 'career development' roles. (The per-skill counts sum to more than 62 because some roles carry both tags.)

This sounds like a wealth of roles, but note that the great majority (45) are in AI (global health and development is a distant second, at 6), and most (35) are in the Bay Area (London is a distant second, at 5).

Of course, this isn't a perfectly fair test, as I just did the quick thing of using filters on the 80K job board rather than checking all the organisations as Sebastian did last year.

Looks like this was either fixed, or I was on mobile before and am now on desktop.

I copied this related Facebook comment by Kerry Vaughan from 6th September 2018 (from this public thread):

> (This post represents my views and not necessarily the views of everyone at CEA) [for whom Kerry worked at the time]

> [...] I think there are some biases in how the community allocates social status which incentivize people to do things that aren’t their comparative advantage.

> If you want to be cool in EA there are a few things you can do: (1) make sure you’re up to date on whatever the current EA consensus is on relevant topics; (2) work on whatever is the Hot New Thing in EA; and (3) have skills in some philosophical or technical area. Because most people care a lot about social acceptance, people will tend to do the things that are socially incentivized.

> This can cause too many EAs to try to become the shape necessary to work on AI-Safety or clean meat or biosecurity even if that’s not their comparative advantage. In the past these dynamics caused people to make themselves fit the shape of earning to give, research, and movement building (or feeling useless because they couldn’t). In the future, it will probably be something else entirely. And this isn’t just something people are doing on their own - at times it’s been actively encouraged by official EA advice.

> The problem is that following the social incentives in EA sometimes encourages people to have less impact instead of more. Following social incentives (1) disincentivizes people from actually evaluating the ideas for themselves and discourages healthy skepticism about whatever the intellectual consensus happens to be. (2) means that EAs are consistently trying to go into poorly-understood, ill-defined areas with poor feedback loops instead of working in established areas where we know how to generate impact or where they have a comparative advantage. (3) means that we tend to value people who do research more than people who do other types of work (e.g. operations, ETG).

> My view is that we should be praising people who’ve thought hard about the relevant issues and happen to have come to different conclusions than other people in EA. We should be praising people who know themselves, know what their skills are, know what they’re motivated to do, and are working on projects that they’re well-suited for. We should be praising people who run events, work a job and donate, or do accounting for an EA org, as well as people who think about abstract philosophy or computer science.

> CEA and others have taken some steps to help address this problem. Last year’s EA Global theme -- Doing Good Together -- was designed to highlight the ideas of comparative advantage, of seeing our individual work in the context of the larger movement and of not becoming a community of 1,000 shitty AI Safety researchers. We worked with 80K to communicate the importance of operations management (https://80000hours.org/articles/operations-management/) and CEA ran a retreat specifically for people interested in ops. We also supported the EA Summit because we felt that it was aiming to address some of these issues.

> Yet, there’s more work to be done. If we want to have a major impact on any cause we need to deploy the resources we have as effectively as possible. That means helping people in the community actually figure out their comparative advantage instead of distorting themselves to fit the Hot New Thing. It also means praising people who have found their comparative advantage whatever that happens to be.

So excited for you!

The EA UK newsletter often includes stories.

From the May newsletter:

- Zeke - Story of a career/mental health failure
- Aaron Gertler - Life in a Day: The film that opened my heart to effective altruism
- Amber Dawn talking to Daniel Wu on being on the EA fringes, healthcare entrepreneurship, and trying out different career paths

@DavidNash can probably send you the back catalogue.

On tagging: how do I tag someone whose username contains a space? I want to be able to tag ‘Amber Dawn’ without tagging ‘Amber’.

Several serious posts get drowned out on April 1st each year. I half intended to write a round-up of these to help them get noticed, but didn’t get around to it before the work week; now I’m requesting that the EA Forum team consider doing this. In future years (assuming your timelines are that long), I would also be in favour of a separate section for April Fools’ posts (like the community section), even though this dampens the humour.

Isn't that a bit self-aggrandising? I prefer "aspiring EA-adjacent".
