In the last couple of years, I’ve noticed people playing with the idea that one of the things the community most needs is people with identifiably good judgement.

In the 2020 EA Leaders Forum (EALF) survey, respondents were asked which traits they would most like to see in new community members over the next five years, and judgement came out highest by a decent margin.

You can see this data in a new blog post on 80,000 Hours, where I speculate about some of the reasons that judgement is so valued. In brief:

  1. Good judgement seems prized in general.
  2. Good judgment seems even more important when aiming to do good — especially in a longtermist paradigm — due to a lack of feedback and established best practice, which means we have to rely more than average on judgement calls.
  3. The bottlenecks the community currently faces require people with unusually good judgement (e.g. many of our priority paths).

I also try to clarify what good judgement means and how it differs from related concepts like decision-making and intelligence.

Fortunately, it seems possible for people to improve their judgement. In the second half of the post, I summarise some of the best research I’m aware of on how to improve your judgement, distilled into a prioritised list of steps. This is mainly about how to improve forecasting, because that's where we seem to have the best evidence.

Read here

Comments
AGB

How confident are you that the EALF survey respondents were using your relatively narrow definition of judgment, rather than the dictionary definitions which, as you put it, "seem overly broad, making judgment a central trait almost by definition"?

I ask because, scanning the other traits in the survey, they all seem like things that (using common definitions) I consider useful for some or even many roles, but not all of them, whereas judgment as usually defined is useful ~everywhere, making it unsurprising that it comes out on top. At least, that's why I've never paid attention to this particular part of the EALF survey results in the past.

But I appreciate you've probably spoken in person to a number of the EALF people and had a better chance to understand their views, so I'm mostly curious whether you feel those conversations support the idea that the other respondents were thinking of judgment in the narrower way you would use the term.

Hi Alex,

In the survey, good judgement was defined as "weighing complex information and reaching calibrated conclusions", which is the same rough definition I was using in my post.

I'm not sure how many people absorbed this definition and used their own definition instead. From talking to people, my impression is that most use 'judgement' in a narrower sense than the dictionary definitions, but maybe still broader than my definition.

It's maybe also worth saying that my impression that judgement is highly valued isn't just based on the survey - I highlighted that because it's especially easy to communicate. I also have the impression that people often talk about how judgement might be improved, how to assess it, and how to look for it in hiring, and it seems to come up more in EA than in most other areas (with certain types of investing maybe being the exception).

I'm actually confused about what you mean by your definition. I have an impression about what you mean from your post, but if I try to just go off the wording in your definition I get thrown by "calibrated". I naturally want to interpret this as something like "assigns confidence levels to their claims that are calibrated", but that seems ~orthogonal to having the right answer more often, which means it isn't that large a share of what I care about in this space (and I suspect is not all of what you're trying to point to).
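To make "calibrated" concrete, the operational reading I default to is something like: of the claims someone makes with ~80% confidence, roughly 80% should turn out to be true. A minimal, illustrative sketch of that kind of check (made-up numbers, nothing from your post):

```python
# Illustrative calibration check: group claims by stated confidence and
# compare against the observed frequency of being right.
# The forecasts below are made-up examples.
from collections import defaultdict

forecasts = [  # (stated probability, whether the claim turned out true)
    (0.9, True), (0.9, True), (0.9, False),
    (0.8, True), (0.8, True),
    (0.6, True), (0.6, False),
    (0.3, False), (0.3, False), (0.3, True),
]

buckets = defaultdict(list)
for p, outcome in forecasts:
    buckets[p].append(outcome)

for p in sorted(buckets):
    hits = buckets[p]
    print(f"stated {p:.0%}: observed {sum(hits) / len(hits):.0%} over {len(hits)} claims")
```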

Now I'm wondering: does your notion of judgement roughly line up with my notion of meta-level judgement? Or is it broader than that?

For one data point, I filled in the EALF survey and had in mind something pretty close to what I wrote about in the post Ben links to. I don't remember paying much attention to the parenthetical definition -- I expect I read it as a reasonable attempt to gesture towards the thing that we all meant when we said "good judgement" (though on a literal reading it's something much narrower than I think even Ben is talking about).

I think that good judgement in the broad sense is useful ~everywhere, but that:

  • It's still helpful to try to understand it, to know better how to evaluate it or improve at it;
  • For reasons Ben outlines, it's more important for domains where feedback loops are poor;
  • The cluster Ben is talking about gets disproportionately more weight in importance for thinking about strategic directions.

Hi Ben, almost everything you write about in your post is specific to forecasting. Do you think good judgment is primarily about being able to forecast well? Or is that just the only part of good judgment you feel confident advising people on?

People in this discussion may be interested in reading/contributing to my question on how to cleanly delineate the differences between good judgment and forecasting skill.

Hi Khorton, that's true. In the post I say:

Forecasting isn’t exactly the same as good judgement, but seems very closely related – it at least requires ‘weighing up complex information and coming to calibrated conclusions’, though it might require other abilities too. On the other side, I take good judgement to include ‘picking the right questions’, which forecasting doesn’t cover.

So I think they're pretty close.

The other point is that, yes - I think we have some reasonable evidence that calibration and forecasting can be improved (via the things mentioned in the post), but I'm less confident in other ways to improve judgement. I've made some edits to the post to make this clearer.

One other way of improving judgement in general that I do mention, though, is to spend time talking to other people who have good judgement.

Buck

I would be pretty surprised if most of the people from the EALF survey thought that forecasting is "very closely related" to good judgement.

I think I disagree, though that's just my impression. As one piece of evidence, the article I most drew on is by Open Phil and also treats them as very related: https://www.openphilanthropy.org/blog/efforts-improve-accuracy-our-judgments-and-forecasts

Good judgment is obviously broader than the narrow "forecasting" Tetlock is studying. But it seems to me that, other than high-level values questions (e.g. average vs aggregate utilitarianism), it all comes down to prediction skill in some sense, as a necessary consequence of consequentialism. If you can think of something that's part of good judgment and not either part of core values or of prediction in a broad sense, I'd like to hear what specifically it is, because I can't think of anything.


"Ultimately actions are good or based solely based on their consequences" necessarily implies your chosen actions will be better if you can predict outcomes better (all else being equal of course, especially your degree of adherence to the plan).


All this description of skills that are supposedly separate from forecasting, e.g. "picking the right questions", "deciding which kinds of forecasting errors are more acceptable than others", etc., sounds like a failure to rigorously think through what it means to be good at forecasting. Picking the right questions is just Fermi-izing applied at a higher level than the superforecasters are doing it. "Picking the right kinds of errors" really seems to be about planning for robustness in the face of catastrophe, arguing against a sort of straw-man expected value calculation that I don't think an actually good forecaster would be naive enough to make.


Judgment is more about forecasting the consequences of your own actions/the actions you recommend to others, vs. the counterfactual where you/they don't take the action, than computing a single probability for an event you're not influencing. And you will never be able to calibrate it as well as you can calibrate Tetlockian forecasting because the thing you're really interested in is the marginal change between the choice you made and the best other one you could have made, rather than a yes/no outcome. But it's still forecasting.
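As a rough sketch of that framing (my own illustration, with made-up options and numbers), the decision-relevant quantity is the margin between the option you take and the best alternative, not a standalone probability for one event:

```python
# Illustrative sketch: frame each option as a forecast over outcomes, then
# compare the chosen option against the best alternative rather than scoring
# a single yes/no prediction. Options and numbers are made up.

options = {
    # option: list of (probability, outcome value) pairs
    "fund_project_A": [(0.6, 100), (0.4, -20)],
    "fund_project_B": [(0.3, 300), (0.7, 0)],
    "do_nothing":     [(1.0, 0)],
}

def expected_value(forecast):
    return sum(p * v for p, v in forecast)

evs = {name: expected_value(f) for name, f in options.items()}
chosen = "fund_project_A"

# The quantity of interest is the marginal difference between the chosen
# option and the best other option available.
best_alternative = max(v for name, v in evs.items() if name != chosen)
print(f"EV of choice: {evs[chosen]}, margin vs best alternative: {evs[chosen] - best_alternative}")
```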

Oh got it! Sorry I missed it, thanks

I'm doing research around forecasting, and I'd just note:
1) Forecasting seems useful for judgement, but it is very narrow (as currently discussed).
2) It seems quite damning that no other field is currently recommended as an obvious way to improve judgement, but right now not much else comes to mind. There's a lot in academia that seems like it could be good, but right now it's a whole lot of work to learn and the expected benefits aren't particularly clear.

If anyone else reading this has suggestions, please leave them in the comments.

It's maybe also worth noting that my definition of 'judgement' is itself pretty narrow, and narrower than the standard usage. I'm working on a separate piece about 'good thinking' more broadly.

Hey Ben, what makes you think that judgment can be generally improved?


When Owen posted "Good judgement" and its components, I briefly reviewed the literature on transfer of cognitive skills¹.

This makes me think that general training (e.g. calibration and to a lesser extent forecasting) might not translate to an overall improvement in judgment. OTOH, surely, getting skills broadly useful for decision making (e.g. spreadsheets, probabilistic reasoning, clear writing) should be good.


A bit of a tangent. Hanson's Reality TV MBAs is an interesting idea. Gaining experience via being a personal assistant to someone else seems to be beneficial², so maybe this could be scaled up by having a reality TV show. Maybe it is a good idea to invite people with good judgment/research taste to stream some of their working sessions and so on? 

[1]: According to Wikipedia: Near transfer occurs when many elements overlap between the conditions in which the learner obtained the knowledge or skill and the new situation. Far transfer occurs when the new situation is very different from that in which learning occurred.

[2]: Moreover, it is one of 80K's paths that may turn out to be very promising.

Hi Misha,

I do agree there's a worry about how much calibration training or forecasting in one area will transfer to other areas. My best guess is that there's some transfer, but there's not as much evidence about it as I'd like.

I also think of forecasting as more like a subfactor of good judgement, so I'm not claiming there will be a transfer of cognitive skills – rather I'm claiming that if you practice a specific skill (forecasting), you will get better at that skill.

I'd also suggest looking directly at the evidence on whether forecasting can be improved and seeing what you think of it: https://www.openphilanthropy.org/blog/efforts-improve-accuracy-our-judgments-and-forecasts

I like the list of resources you put together; another laconic source of wisdom is "What can someone do now to become a stronger fit for future Open Philanthropy generalist RA openings?".
