alexrjl

@ 80,000 Hours
3015 · Joined Nov 2018

Bio

I work on the 1-on-1 team at 80,000 Hours, talking to people about their careers; the opinions I've shared here (and will share in the future) are my own.

Comments
304

The part about newcomers doesn't reflect my experience FWIW, though my sample size is small. I published a major criticism while a relative newcomer (I knew a handful of EAs, mostly online, was working as a teacher, and certainly felt like I had no idea what I was doing). Though it wasn't the goal of doing so, I think that criticism ended up causing me to gain status, possibly (though it's hard to assess accurately) more status than I think I "deserved" for writing it.

[I no longer feel like a newcomer so this is a cached impression from a couple of years ago and should therefore be taken with a pinch of salt]

I think "cost effective way to fundraise" is probably a stretch, and that this would likely have been better as a shortform, but I wanted to stop in and say the post made me smile, because I think it's a fun example of how you can get a bunch of EV by being risk neutral and thinking outside the box, so thanks for writing it!

In terms of forecasting accuracy on Metaculus, Eli's individual performance is comparable[1] to the community aggregate, despite his having optimised for volume (he's 10th on the heavily volume-weighted leaderboard). I expect that had he pushed less hard for volume, he'd have significantly outperformed the community aggregate even as an individual.[2]

Assuming the other Samotsvety forecasters are comparably good, I'd expect the aggregated forecasts from the group to very comfortably outperform the community aggregate, even if they weren't paying unusually close attention to the questions (which they are).

  1. ^

    Comparing 'score at resolution time', Eli looks slightly worse than the community. Comparing 'score across all times', Eli looks better than the community. Score across all times is a better measure of skill when comparing individuals, but it does disadvantage the community prediction, because at earlier times questions have fewer predictors (a sketch of the two scoring choices follows these footnotes).

  2. ^

    As some independent evidence of this, I comfortably outperform the community aggregate, having tried less hard than Eli to optimise for volume. Eli has beaten me in more than one competition, and I think he's a better forecaster.
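
To make footnote 1 concrete, here is a minimal sketch (in Python) of the two scoring choices. It assumes simple Brier scoring on a single binary question, which may differ from Metaculus's actual scoring rule; the point is only the contrast between "just the final forecast counts" and "each forecast is weighted by how long it stood".

    def brier(p, outcome):
        # Squared-error score for one binary forecast; lower is better.
        return (p - outcome) ** 2

    def score_at_resolution(forecasts, outcome):
        # 'Score at resolution time': only the forecast standing when the
        # question resolves counts. forecasts is a list of (time, P(yes)).
        _, p_final = forecasts[-1]
        return brier(p_final, outcome)

    def score_across_all_times(forecasts, outcome, t_close):
        # 'Score across all times': each forecast is weighted by how long it
        # stood before being updated, so being right early matters. This is
        # what disadvantages the community prediction, which has few
        # predictors at early times.
        times = [t for t, _ in forecasts] + [t_close]
        total = sum(brier(p, outcome) * (t_next - t)
                    for (t, p), t_next in zip(forecasts, times[1:]))
        return total / (t_close - forecasts[0][0])

    # Hypothetical forecast history: (time, P(yes)) pairs on a question
    # that resolved yes at t = 10.
    history = [(0, 0.5), (5, 0.8), (9, 0.95)]
    print(score_at_resolution(history, outcome=1))                 # 0.0025
    print(score_across_all_times(history, outcome=1, t_close=10))  # ~0.141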

I think your comment is a good example (and from the votes it looks like I'm not the only one). You're making a good-faith, sensible argument for a position I don't hold - I think the disagreement karma is a big improvement.

I think your comment deserves an upvote for contributing to the discussion, but I disagree and wanted to indicate that.

I'm guessing here, but I imagine that the source of the downvotes might be that this piece is a specific criticism of one organisation, framed as a more general commentary on hiring. I also suspect that the organisation is guessable (there's lots of quite specific detail, including quotes from job ads), though I haven't guessed.

I suspect that either a general piece about pitfalls to avoid when hiring, or an open criticism of "hirely" (potentially having given them a chance to respond), would be better received.

(I haven't up or down voted, as I haven't dug into the object level claims yet)

Thanks for posting this! Do you have a take on Tarsney's point about uncertainty in model parameters in the paper you cite in your introduction? Quoting from his conclusions (though there's much more discussion earlier):

If we accept expectational utilitarianism, and therefore do not mind premising our choices on minuscule probabilities of astronomical payoffs, then the case for longtermism (specifically, for the persistent-difference strategy of existential risk mitigation) seems robust to the epistemic challenge we have considered (namely, epistemic persistence skepticism). While there are plausible point estimates of the relevant model parameters that favor neartermism, once we account for uncertainty, it takes only a very small credence in combinations of parameter values more favorable to longtermism for EV(L) to exceed EV(N) in our working example. [emphasis mine]

I disagree that the problem here is groupthink, and I think if you look at highly rated posts, you can't reasonably conclude that people who criticise the orthodox position will be reliably downvoted. I think the problem here is that some people vote based on tone and some on content, which means that when something is downvoted different people draw different conclusions about why.

Didn't separate karma for helpfulness and disagreement (frequently used on LessWrong) get implemented on the EA Forum recently? This post feels like the ideal use case for it:

  • There are some controversial comments with weakly positive karma despite lots of votes, where I suspect what's going on is some people are signalling disagreement with downvotes, and others are signalling 'this post constitutes meaningful engagement' with upvotes.
  • There are also some comments where the tone seems to me to be over the line, with varying amounts of karma (from very positive to very negative), from various people.

Were a two-karma system available, I think I would use both [strong upvote, strong disagree] and [strong downvote, strong agree] at least once each.

Will read the article later (and am excited to); commenting now because this seems like the quickest way of getting admin attention to the fact that one of the tags on this post has unfortunately been edited by spammers, including in its name.

This has now been fixed, please ignore this comment :)

Various people have posted concerns about social dynamics below (particularly that there might be an impression, intentional or otherwise, that people opting out were deficient* in some way). I think these concerns are worth taking seriously. I’ve been on the receiving end of mostly negative, anonymous, unsolicited feedback, which was at least in part about things that I was aware of but which were outside my control. This made me feel bad not only because of the negative things the feedback mentioned, but also about not being the sort of person who welcomed feedback like that. I think my intuitive reaction on first reading the post was kind of along those lines, and it stopped me from actually posting the following when I first saw the post, as it felt like it reflected badly on me:

  • I would hate this.
  • I’m trying to get better at responding well to feedback, but I don’t see any remotely possible worlds where I would make enough progress for this to be anything other than awful for me.
  • I feel stressed just thinking about it.

I don’t know if it was a mistake to post (I can imagine the process being really valuable for some people), and I haven’t up- or downvoted. I made myself write this mostly because I expect many people who feel similarly will read this and not want to post, for the same reason I didn’t want to, and seeing the couple of comments from people who did post feelings like this made me feel better about my reaction.

*bad at truth-seeking, not rational enough, too soldier-y, etc.