Project lead of LessWrong 2.0, often helping the EA Forum with various issues. If something is broken on the site, there's a good chance it's my fault (Sorry!).


My mistakes on the path to impact

I don't think any of 80k's career advice has caused much harm compared to the counterfactual of not having given that advice at all, so I feel a bit confused about how to think about this. Even the grossest misrepresentation (e.g. that EtG is the only way to do good) still strikes me as better than the current average experience of a college graduate, which is no guidance at all, with all career advice coming from companies trying to recruit you.

Things CEA is not doing

Thank you for writing this! I think this is quite helpful. 

The ten most-viewed posts of 2020

Yep, 90% of readers on LW and the EA Forum never vote. And 90% of voters never comment. This holds empirically for lots of forums. 

The Folly of "EAs Should"

The "probably" there is just for the case of becoming an AI safety researcher. The argument for why being a doctor is rarely the right choice does not, of course, route only through AI Alignment being important. It routes through a large number of alternative careers that seem more promising, many of which are analyzed and listed on 80k's website. That is what my second paragraph was trying to say.

I think if you take into account all of those alternatives, the "probably" turns into a "very likely" and conditioning on "any decent shot" no longer seems necessary to me. 

The Folly of "EAs Should"

Some specialisations for doctors are very high-earning. If someone were already on the path to becoming a doctor and could still specialise in one of them, that is what I would suggest as an earning-to-give strategy.

Yeah, I do think this is plausible. When I last did a Fermi estimate on this, I found I tended to overestimate the lifetime earnings of doctors because I didn't properly account for the many years of additional education required to become one, which often cost a ton of money and of course displace other potential career paths during that same time. So my current guess is that while being a doctor is definitely high-paying, it's not actually that great for EtG.

The key difference here does seem to be whether you have already finished your education. Once you've finished med school, or maybe even have your own practice, it's pretty likely that being a doctor is the best way for you to earn lots of money. But if you are deciding whether to become a doctor and haven't started med school, I think it's rarely the right choice from an impact perspective.

The Folly of "EAs Should"

I do think anyone who has any decent shot at being an AI Safety researcher should probably stop being a doctor and try doing that instead. Many people don't fit that category, though some of the most prominent doctors in the community who quit their jobs (Ryan Carey and Gregory Lewis) have fit that bill, and I am exceptionally glad that they made that decision.

I don't currently know of a reliable way to actually do a lot of good as a doctor. As such, from an impact perspective I don't see why I should suggest that people continue being doctors. Of course there are outliers, but as career advice goes, it strikes me as one of the most reliably bad decisions I've seen people make. It also seems like a pretty reliably bad choice from a personal perspective, with depression and suicide rates far above the population average.

The Folly of "EAs Should"

Yeah, I do think the selection effects here are substantial.

I do think I can identify multiple other, very similarly popular pieces of advice that turned out badly reasonably frequently and caused people to regret their choices, which is evidence that the selection effects aren't completely overdetermining the outcome.

Concretely, I know of a good number of people who regret taking the GWWC pledge, a good number who regret trying to get an ML PhD, and a good number who regret becoming active in policy. Those pieces of advice are a bit more controversial within the EA Community than the "don't become a doctor" advice, so the selection effects there are less strong, but I do think the selection effects are not strong enough to make reasoning from experience impossible here.

The Folly of "EAs Should"

(I don't know of a practical scenario where either of those turned out to be bad advice, and I know of multiple times when it saved someone from choosing a career that would have been much worse in terms of impact, so I don't think I understand why you think it's bad advice. At least for the people I know, it seems to have been really good advice, at least the doctor part.)

EA Forum feature suggestion thread

Yeah, I like it. Does seem like a good thing to have.

vaidehi_agarwalla's Shortform

I think it's most likely if the LessWrong team decides to run a conference, and then after looking into alternatives for a bit, decides that it's best to just build our own thing. 

I think it's much more likely if LW runs a conference than if CEA runs another conference. Not because I would want to prioritize a LW conference app over an EAG app, but because I expect the first version of it to be pretty janky, and I wouldn't want to inflict that on the poor CEA team unless we were the people who built it directly and knew in which ways it might break.
