Sam Clarke

Joined Aug 2018


Strategy @ GovAI

Views are my own



> Unfortunately, when someone tells you "AI is N years away because XYZ technical reasons," you may think you're updating on the technical reasons, but your brain was actually just using XYZ as excuses to defer to them.

I really like this point. I'm guilty of having done something like this loads myself.

> When someone gives you gears-level evidence, and you update on their opinion because of that, that still constitutes deferring. What you think of as gears-level evidence is nearly always disguised testimonial evidence. At least to some, usually damning, degree. And unless you're unusually socioepistemologically astute, you're just lost to the process.

If it's easy, could you try to put this another way? I'm having trouble making sense of what exactly you mean, and it seems like an important point if true.

Thanks for your comment! I agree that the concept of deference used in this community is somewhat unclear, and a separate comment exchange on this post further convinced me of this. It's interesting to know how the word is used in formal epistemology.

Here is the EA Forum topic entry on epistemic deference. I think it most closely resembles your (c). I agree there's the complicated question of what your priors should be, before you do any deference, which leads to the (b) / (c) distinction.

Thanks for your comment!

> Asking "who do you defer to?" feels like a simplification

Agreed! I'm not going to make any changes to the survey at this stage, but I like the suggestion and if I had more time I'd try to clarify things along these lines.

I like the distinction between deference to people/groups and deference to processes.

> deference to good ideas

[This is a bit of a semantic point, but seems important enough to mention] I think "deference to good ideas" wouldn't count as "deference", in the way that this community has ended up using it. As per the forum topic entry on epistemic deference:

> Epistemic deference is the process of updating one's beliefs in response to what others appear to believe, even if one ignores the reasons for those beliefs or does not find those reasons persuasive. (emphasis mine)

If you find an argument persuasive and incorporate it into your views, I think that doesn't qualify as "deference". Your independent impressions aren't (and in most cases won't be) the views you would have formed in isolation. When forming your independent impressions, you can and should take other people's arguments into account, to the extent that you find them convincing. Deference occurs when you take into account knowledge about what other people believe, and how trustworthy you find them, without engaging with their object-level arguments.

> non-defensible original ideas

A similar point applies to this one, I think.

(All of the above makes me think that the concept of deference is even less clear in the community than I thought it was -- thanks for making me aware of this!)

Cool, makes sense.

> The main way to answer this seems to be getting a non-self-rated measure of research skill change.

Agreed. Asking mentors seems like the easiest thing to do here, in the first instance.

Somewhat related comment: next time, I think it could be better to ask "What percentage of the value of the fellowship came from these different components?"* instead of "What do you think were the most valuable parts of the programme?". This would give a bit more fine-grained data, which could be really important.

E.g. if it's true that most of the value of ERIs comes from networking, this would suggest that people who want to scale ERIs should do pretty different things (e.g. lots of retreats optimised for networking).

*and give them several buckets to select from, e.g. <3%, 3-10%, 10-25%, etc.
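If it's useful, here's a minimal sketch of how bucketed percentage answers like these could be aggregated afterwards, using bucket midpoints to estimate a mean. The bucket edges above 25% and all the response counts are invented for illustration:

```python
# Estimate the mean share of value attributed to a component from
# bucketed percentage responses, via bucket midpoints.
# Bucket edges beyond 25% and the tallies below are made up.

BUCKET_MIDPOINTS = {
    "<3%": 1.5,
    "3-10%": 6.5,
    "10-25%": 17.5,
    "25-50%": 37.5,
    ">50%": 75.0,
}

def estimated_mean(responses):
    """responses: dict mapping bucket label -> number of fellows choosing it."""
    total = sum(responses.values())
    return sum(BUCKET_MIDPOINTS[b] * n for b, n in responses.items()) / total

# Hypothetical tallies for the "networking" component:
networking = {"<3%": 2, "3-10%": 5, "10-25%": 8, "25-50%": 4, ">50%": 1}
print(round(estimated_mean(networking), 1))  # mean share of value, in %
```

Midpoint imputation is crude, but it turns the bucketed answers into numbers you can compare across components, which the "most valuable parts" framing doesn't allow.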

Thanks for putting this together!

I'm surprised by the combination of the following two survey results:

> - Fellows' estimate of how comfortable they would be pursuing a research project remains effectively constant. Many start out very comfortable with research. A few decline.
> - Networking, learning to do research, and becoming a stronger candidate for academic (but not industry) jobs top the list of what participants found most valuable about the programs. (emphasis mine)

That is: on average, fellows claim they learned to do better research, but became no more comfortable pursuing a research project.

Do you think this is mostly explained by most fellows already being pretty comfortable with research?

A scatter plot of comfort against improvement in research skill could be helpful for examining different hypotheses (though this won't be possible with the current data, given how the "greatest value adds" question was phrased).
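For what it's worth, such a plot is quick to produce if per-fellow responses are available. A sketch, assuming invented column names ("comfort_before", "skill_change") and made-up data:

```python
# Hypothetical sketch of the suggested scatter plot.
# Column names and all data points are invented for illustration.
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "comfort_before": [7, 9, 8, 6, 9, 5],   # 1-10 scale, made up
    "skill_change":   [2, 0, 1, 3, -1, 2],  # post minus pre, made up
})

fig, ax = plt.subplots()
ax.scatter(df["comfort_before"], df["skill_change"])
ax.set_xlabel("Comfort pursuing a research project (before)")
ax.set_ylabel("Self-rated change in research skill")
fig.savefig("comfort_vs_skill.png")
```

If the "already comfortable" hypothesis holds, you'd expect the points with high baseline comfort to cluster near zero change.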

Re (1) See When Will AI Exceed Human Performance? Evidence from AI Experts (2016) and the 2022 updated version. These surveys don't ask about x-risk scenarios in detail, but do ask about the overall probability of very bad outcomes and other relevant factors.

Re (1) and (3), you might be interested in various bits of research that GovAI has done on the American public and AI researchers.

You also might want to get in touch with Noemi Dreksler, who is working on surveys at GovAI.

A potentially useful subsection for each perspective could be: evidence that should change your mind about how plausible this perspective is (including things you might observe over the coming years/decades). This would be kinda like the future-looking version of the "historical analogies" subsection.

Another random thought: a bunch of these lessons seem like the kind of things that general writing and research coaching can teach. Maybe summer fellows and similar should be provided with that? (Freeing up time for you/other people in your reference class to play to your comparative advantage.)

(Though some of these lessons are specific to EA research and so seem harder to outsource.)

Love it, thanks for the post!

"Reading 'too much' is possibly the optimal strategy if you're mainly trying to skill up (e.g., through increased domain knowledge), rather than have direct impact now. But also bear in mind that becoming more efficient at direct impact is itself a form of skilling up, and this pushes back toward 'writing early' as the better extreme."

Two thoughts on this section:

  1. Additional (obvious) arguments for writing early: producing stuff builds career capital, and is often a better way to learn than just reading.

  2. I want to disentangle 'aiming for direct impact' and 'writing early'. You can write without optimising hard for direct impact, and I claim that more junior people should do so (on the current margin). There's a failure mode (which I fell into myself) where junior researchers try to solve hugely important problems because they really want to have direct impact, but this leads them to work on problems that are wicked, poorly scoped, or methodologically fraught, which ends with them getting stuck, demoralised, and not producing anything.

Often, I think it's better for junior researchers to still aim to write/produce stuff (because of the arguments above/in your piece), but not to optimise hard for direct impact with that writing: picking problems that are more tractable, even if less important.
