lexande

Comments

Forget replaceability? (for ~community projects)

A major case where this is relevant is funding community-building, fundraising, and other "meta" projects. I agree that "just imagine there was a (crude) market in impact certificates, and take the actions you guess you'd take there" is a good strategy, but in that world where are organizations like CEA (or perhaps even GiveWell) getting impact certificates to sell? Perhaps whenever someone starts a project they grant some of the impact equity to their local EA group (which in turn grants some of it to CEA), but if so the fraction granted would probably be small, whereas people arguing for meta often seem to act as if it would be a majority stake.

Don't Be Bycatch

Unfortunately this competes with the importance of interventions failing fast. If it's going to take several years before the expected benefits of an intervention are clearly distinguishable from noise, there is a high risk that you'll waste a lot of time on it before finding out it didn't actually help; you won't be able to experiment with different variants of the intervention to find out which work best; and even if you're confident it will help, you might find it infeasible to maintain motivation when the reward feedback loop is so long.

Some preliminaries and a claim

This request is extremely unreasonable and I am downvoting and replying (despite agreeing with your core claim) specifically to make a point of not giving in to such unreasonable requests, or allowing a culture of making them with impunity to grow. I hope in the future to read posts about your ideas that make your points without such attempts to manipulate readers.

Community vs Network

It seems unlikely that the distribution of 100x-1000x impact people is *exactly* the same between your "network" and "community" groups, and if it's even a little bit biased towards one or the other, the groups would wind up very far from having equal average impact per person. I agree it's not obvious which way such a bias would go. (I do expect the community helps its members have higher impact compared to their personal counterfactuals, but perhaps e.g. people are more likely to join the community if they are disappointed with their current impact levels? Alternatively, maybe the question of which group you put Moskovitz in swamps everything else?) However, assuming the multiplier is close to 1 rather than much higher or lower seems unwarranted, and this seems to be a key question on which the rest of your conclusions more or less depend.
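To see how sensitive the averages are, here's a toy sketch with made-up numbers (assuming impact 1 for most people and 1000 for a rare few; the percentages are purely illustrative, not estimates):

```python
# Toy sensitivity check: average impact per person when most people
# have impact 1 and a rare few have impact 1000 (made-up numbers).
def average_impact(share_1000x, base=1.0, outlier=1000.0):
    return (1 - share_1000x) * base + share_1000x * outlier

community = average_impact(0.010)  # 1.0% are 1000x people -> ~11.0
network = average_impact(0.005)    # 0.5% are 1000x people -> ~6.0
print(community / network)         # ~1.8x gap from a tiny compositional bias
```

Even a half-percentage-point difference in where the outliers sit nearly doubles one group's average relative to the other's.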

Community vs Network

> I’ve made the diagram assuming equal average impact whether someone is in the ‘community’ or ‘network’ but even if you doubled or tripled the average amount of impact you think someone in the community has there would still be more overall impact in the network.

People in EA regularly talk about the most effective community members having 100x or 1000x the impact of a typical EA-adjacent person, with impact following a power law distribution. For example, 80k attempts to measure "impact-adjusted significant plan changes" resulting from their work, where a "1" is a median GWWC pledger (already more commitment than a lot of EA-adjacent people, who are curious students or giving 1% or something, and so might rate more like 0.1). 80k claims credit for dozens of "rated 10" plan changes per year, a handful of "rated 100" per year, and at least one "rated 1000" (see p15 of their 2018 annual report here).

I'm personally skeptical of some of the assumptions about future expected impact that 80k relies on when making these estimates, and some of their "plan changes" are presumably by people who would fall under "network" and not "community" in your taxonomy. (Indeed, on my own career coaching call with them they said they thought their coaching was most likely to be helpful to people new to the EA community, though they think it can provide some value to people more familiar with EA ideas as well.) But it seems very strange for you to anchor on a 1-3x community-vs-network impact multiplier without engaging with core EA orgs' belief that 100x-10000x differences between EA-adjacent people are plausible.
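For a sense of scale on those ratings (again with a made-up group composition, since 80k doesn't publish one):

```python
# Rough arithmetic on 80k's rating scale with an illustrative composition.
# A median GWWC pledger rates 1; an EA-adjacent "network" person maybe 0.1.
adjacent = 0.1

# A community of 1000 median pledgers that includes one "rated 1000" member:
community_avg = (999 * 1 + 1 * 1000) / 1000  # ~2.0 per person

print(community_avg / adjacent)  # ~20x community-vs-network multiplier
```

So even one 1000x member per thousand pushes the per-person multiplier to roughly 20x, an order of magnitude above a 1-3x anchor.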

Thoughts on 80,000 Hours’ research that might help with job-search frustrations

The specific alternatives will vary depending on the path in question and on hard-to-predict things about the future. But if someone spends 5-10 years building career capital to get an operations job at an EA org, and then it turns out that field is extremely crowded, with the vast majority of applicants unable to get such jobs, their alternatives may be limited to operations jobs at ineffective charities or random businesses. That may leave them much worse off (both personally and in terms of impact) than if they'd never encountered advice to go into operations (and had instead followed one of the more common career paths for ambitious graduates, and been able to donate more as a result).

I'm also concerned about broader changes in how we think about priority paths over the coming 5-10 years. A few years ago, 80k strongly recommended going into management consulting, or trying to found a tech startup. Somebody who made multi-year plans and sacrifices based on that advice would find today that 80k now considers what they did to have been of little value.

It's also important to remember that if in 10 years some 80,000 Hours-recommended career path, such as AI policy, is less neglected than it used to be, that is a good thing, and doesn't undermine the people who worked toward it: it's less neglected in that case precisely because more people worked toward it.

80,000 Hours has a responsibility to the people who put their trust in it when making their most important life decisions: to do everything it reasonably can to ensure that its advice does not make them worse off, even if betraying their trust would (considered narrowly/naively) lead to an increase in global utility. Comments like the above, the months- or years-long delays in posting warnings on outdated/unendorsed pages, comments elsewhere in the thread worrying about screening off people whom 80k's advice could help while ignoring the importance of screening off those whom it would hurt, and the lack of attention to backup plans all give me the impression that 80k doesn't really care about the outcomes of the individual people who trust it, and certainly doesn't take its responsibility towards them as seriously as it should. Is this true? Do I need to warn people I care about to avoid relying on 80k for advice and to read its pages only with caution and suspicion?

Thoughts on 80,000 Hours’ research that might help with job-search frustrations

That applies to most of the deprecated pages, but not to the quiz, because its results are based on the database of existing career reviews. The fact that it gives the same results for nearly everybody is a consequence of new reviews having been added to that database since the quiz was written/calibrated. It's not actually possible to get it to show you the results it would have shown you back in 2016, the last time it was at all endorsed.

Thoughts on 80,000 Hours’ research that might help with job-search frustrations

Why don't you just take it down entirely? It's already basically non-functional.

Thoughts on 80,000 Hours’ research that might help with job-search frustrations

These days 80k explicitly advises against trying to build flexible career capital (though I think they're probably wrong about this).

Thoughts on 80,000 Hours’ research that might help with job-search frustrations

Note that the "policy-oriented government job" article is specific to the UK. Some of the arguments about impact may generalize, but the UK civil service has more influence on policy than its counterparts in the US and some other countries, and the more specific information (paths in, etc.) doesn't really generalize at all.
