Early in a career, if you're uncertain about what you're good at, you need to explore your skills and abilities. Choosing quicker, cheaper tests at this stage helps you do that exploration more efficiently.

However, for assessing skill and personal fit in our priority areas, a lot of the advice we give is "major in X in undergrad" or "be able to get into this type of job/grad school." To my mind, these aren't efficient tests: by the time you've gotten to the higher-level classes that truly test your ability to move the field forward or get into the right job or grad school, it's pretty late to pivot. This advice also only applies to EAs currently in college.

Instead, for priority paths, could/should 80,000 Hours and the EA community curate sets of tests, ordered from cheap to expensive, that one can use to rapidly gauge their skills? For instance, for technical AI safety research, I lay out the following hypothetical example (heads up: I'm no AI expert)[1]:

  • Test 1 - Assess how fundamentally strong/flexible your technical thinking is
    • Learn a programming language / fundamentals of programming (ex. MIT's Intro to Python MOOC)
    • Learn Data Structures / Algorithms (ex. Princeton's Algorithms MOOC)
    • Start doing programming puzzles on LeetCode (a sketch of the kind of puzzle I mean is shown after this list)
    • Check:
      • Are you enjoying doing these sorts of computational puzzles?
      • Are you able to solve them in a reasonable amount of time? (~30 min)
      • If no: you may be less likely to be a good fit for technical AI safety research. You may still be able to contribute to the field in other ways, but you might need to adjust your plans.
      • If yes: continue
  • Test 2 - (it would go on from here - for instance, now start doing competitive programming and learn fundamentals of Machine Learning)
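
To make the "programming puzzles" step concrete, here's a minimal sketch of the kind of ~30-minute puzzle I have in mind. The specific problem (the classic "Two Sum") and the solution are just my own illustration, not an official test item; the check is whether you can move past the brute-force nested loop to the hash-map idea in a reasonable amount of time.

```python
# Hypothetical example of a ~30-minute LeetCode-style puzzle ("Two Sum").
# The point of the check: can you get from the O(n^2) brute force
# to the O(n) hash-map solution without too much struggle?

def two_sum(nums, target):
    """Return indices (i, j) of two distinct elements summing to `target`, or None."""
    seen = {}  # value -> index where we first saw it
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return seen[complement], i
        seen[value] = i
    return None

if __name__ == "__main__":
    print(two_sum([2, 7, 11, 15], 9))   # (0, 1)
    print(two_sum([3, 2, 4], 6))        # (1, 2)
    print(two_sum([1, 2, 3], 100))      # None
```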

The advantage of Test 1 is that you've found a way to test the fundamental skill of flexible technical thinking without investing a ton of time just learning accessory information (how vectors/matrices work, how TensorFlow works, etc.). You could arguably figure this out in one summer instead of over many years. The potential downsides are:

  • We'd need to make sure the tests truly serve as a good indicator of the core skill - otherwise we're giving advice that leads to an unacceptable number of false positives and/or false negatives.
  • It can be less motivating to work on proxy problems than to learn material directly related to the topic of interest, which can throw off the accuracy of the tests.
  • We'd have to balance how specific these testing guidelines should be against how flexible they need to stay.

[1] Again, I'm no expert on technical AI research. Feel free to dispute this example if it's inaccurate, but I'd ask you to try to focus on the broad concept of "could a more accurate set of ranked tests exist and actually be useful for EAs?"

Answers

I definitely think this is worth experimenting with to see if we can effectively identify those who should pursue a particular path.

Definitely agree on "should," assuming it's tractable. As for "can": one possible approach is to hunt down the references in Hunter and Schmidt[1], or similar/more recent meta-analyses, disaggregate by career fields that are interesting to EAs, and look at what specific questions are asked in things like "work sample tests" and "structured employment interviews."

Ideally you want questions that are a) predictive, b) relatively uncorrelated with general mental ability[2], and c) reasonable to ask earlier on in someone's studies[3].
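
As a rough illustration of criterion b), here's a minimal sketch (simulated data and made-up effect sizes, purely for illustration) of how you could check whether a candidate cheap test adds predictive power beyond general mental ability, by comparing the R² of a regression on GMA alone against GMA plus the test score:

```python
# Hypothetical sketch: does a candidate "cheap test" predict outcomes
# beyond general mental ability (GMA)?  Simulated data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
gma = rng.normal(size=n)                                 # general mental ability score
test = 0.4 * gma + rng.normal(size=n)                    # cheap-test score, partly driven by GMA
outcome = 0.5 * gma + 0.3 * test + rng.normal(size=n)    # later career performance

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit of y on the given predictors (plus intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("R^2, GMA only:        ", round(r_squared([gma], outcome), 3))
print("R^2, GMA + cheap test:", round(r_squared([gma, test], outcome), 3))
# If the second R^2 is meaningfully higher, the test carries information beyond
# "how smart you are" -- i.e., it speaks to comparative rather than absolute advantage.
```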

One reason to be cynical about this approach is that personnel selection is well-researched and would be economically very lucrative if for-profit companies could figure it out, and yet very good methods do not already exist.

One reason to be optimistic is that if we're trying to help EAs figure out their own skills and comparative advantages, this is less subject to adversarial effects.


[1] https://pdfs.semanticscholar.org/8f5c/b88eed2c3e9bd134b46b14b6103ebf41c93e.pdf

[2] Because if the question just tests how smart you are, it says something about absolute advantage but not comparative advantage.

[3] Otherwise this will ruin the point of cheap tests.

Comments

Really cool idea. If this were possible, would we expect to see big companies using similar tests to recruit undergraduates early, before competitors do?

Agree on the "should" part! As for "can": a potentially valuable side project someone (perhaps myself, with the extra time I'll have on my hands before grad school) might want to try is looking for empirical predictors of success in priority fields. Something along these lines, although unfortunately the linked paper's formula wouldn't be of much use to people who haven't already entered academia.

I am interested in this. It can be very costly and difficult to pivot once you've made commitments on the order of years, such as what to study at university. However, the sheer size of the commitment also has value as a costly signal, which is why society relies on it so much. I think cheap tests like you describe are great to do before embarking on multi-year commitments, along with tracking timing and directionality: i.e., which opportunities might be better taken at another time, how reversible a pivot is, and what keeps my options open. I wish I had figured all that out earlier, ideally in high school, so telling people that early to do cheap tests is probably quite valuable.
