
Epistemic status: …hang on a second.[1]

It’s common to see posts on the EA Forum (this platform) start with “Epistemic status: [something about uncertainty, or time spent on the post].”

This post tries to do three things:[2]

  1. Briefly explain what “Epistemic status” means
  2. Suggest that writers consider hyperlinking the phrase (e.g. to this explainer)
  3. Discuss why people use “epistemic status”

What does “epistemic status” mean?

According to Urban Dictionary:[3]

The epistemic status is a short disclaimer at the top of a post that explains how confident the author is in the contents of the post, how much reputation the author is willing to stake on it, what sorts of tests the thesis has passed.

It should give a reader a sense of how seriously they should take the post.

I think that’s a surprisingly good explanation. Commenters might be able to add to it, in which case I’ll add an elaboration.


A bunch of examples:

  • Epistemic status: Pretty confident. But also, enthusiasm on the verge of partisanship
  • Epistemic Status: I have worked for 1 year in a junior role at a large consulting company. Most security experts have much more knowledge regarding the culture and what matters in information security. My experiences are based on a sample size of five projects, each with different clients. It is therefore quite plausible that consulting in information security is very different from what I experienced. Feedback from a handful of other consultants supports my views.
  • Epistemic status: personal observations, personal preferences extrapolated. Uses one small random sample and one hard data source, but all else is subjective.
  • Epistemic status/effort: I spent only around 5 hours on the work test and around 3 hours later on editing/adapting it, though I happened to have also spent a bunch of time thinking about somewhat related matters previously
  • Epistemic status: uncertain. I removed most weasel words for clarity, but that doesn't mean I'm very confident. Don’t take this too literally. I'd love to see a cluster analysis to see if there's actually a trend, this is a rough guess.
  • Epistemic status (how much you should trust me): Engaging with the Forum is my job, and I ran this by a few people, who all agreed with the argument. One person was surprised that this was an issue. So I’m more confident than usual.
  • Epistemic status: Writing off-the-cuff about issues I haven't thought about in a while - would welcome pushback and feedback
  • Epistemic status: Divine revelation.
  • Epistemic Status: In this post I’m mainly referring to university group community builders. It’s possible that a lot of what I say will still apply to city / country / other groups, but I’m less confident of this. In my problem section, I give some percentage estimates of how much organizers are marketing (defined later) and how much they should be marketing. This is based off of some rough estimates, which I’m not confident in. I’d love to see someone better estimate this or run a survey.
  • Epistemic status: a rambling, sometimes sentimental, sometimes ranting, sometimes informative, sometimes aesthetic autobiographical reflection on ten years trying to do the most good.
  • Note: epistemic confidence is lower here, as not much time was spent looking into these relative to other areas.
  • Epistemic note: I am engaging in highly motivated reasoning and arguing for veg*n.
  • Epistemic Status: I am uncertain about these uncertainties! Most of them are best guesses. I could also be wrong about the inconsistencies I've identified. A lot of these issues could easily be considered bike-shedding. [This post also includes: “Effort: This took about 40 hours to research and write, excluding time spent developing Squiggle.”]

Most of these express a lot of uncertainty. I’d be excited to see more posts that lean into their beliefs — if that’s the real position of the author.

Valuable information to list in an epistemic status

Some things I think are especially useful to include, when relevant: 

  • Biases you might have
    • E.g. “I’m funded by the main organization discussed in this post…” or “I’m arguing that this should be a priority area, but it’s also what I specialized in, so there might be some suspicious convergence.”
    • This can also include reasons the data you’re using might be biased (e.g. if you’re talking about events, but only have experience about certain types of events).
  • The main reasons (the cruxes) you believe what you write, especially if it might not be obvious from the body of the post (e.g. if you reference data or information that’s not actually crucial to your personal belief in the conclusion)
    • E.g. “I list some data in this post, but the strongest factor in me believing the conclusion I describe is my personal experience.” Or “The main reason I wrote this post is because of a conversation I had with a professional. Arguments 3-6 presented here are more like add-ons after I thought about it a little longer.”
    • This is very related to Epistemic Legibility.
  • Your qualifications
    • E.g. “This is not my field of expertise, but I read [this book] about it,” or “I have a Ph.D. in a related field, and have thought about this for at least 80 hours.”
  • The effort you put into this post and into making its claims very precise
    • E.g. “I wrote this up in 30 minutes, might have made mistakes or misstated my actual opinions,” or “I spent 40 hours researching this subject and writing this report.”
  • The number and type of people who gave feedback on the writing or project, and broadly what their feedback was
    • E.g. “I ran a sketch of this argument past 2 friends who are also in my field, and they broadly agreed.” Or “These are consolidated views. X, Y, and Z gave me feedback on this, and I’ve incorporated it.”
If you write an epistemic status for a longer post with different claims, I think it’s also often very useful to have epistemic statuses or “how much I believe this and how much you should trust me” notes on the different sections, as in this post.

Consider hyperlinking the phrase

Epistemic status for this section: I think the harm I describe is real, but I’m not sure it’s actually very big. I’m pretty confident more people should just hyperlink the term, though.

I think the phrase “epistemic status” can be confusing to some readers, and some of what Michael writes in his post, 3 suggestions about jargon in EA, applies here. In particular, while I think adding epistemic statuses is often very useful (see below), newcomers might be disoriented by it, especially at the very top of a post. This seems especially relevant for posts that are aimed at a wider or more general audience. 

One solution: simply hyperlink this post (or some other explanation) when you use “epistemic status.” 

(As a reminder, you don’t need to have an epistemic status note.)

Why have an “epistemic status”? (And are there reasons to not have one?)

Adding a field like this can help readers broadly understand how seriously they should take what’s been written, highlight biases of the writer that readers might not be aware of (but should be), make the post more legible, and clarify the purpose of the post. I think this is especially important if the author is someone whose views might get accepted because they have some status or authority. 

Epistemic statuses can also help us discuss things more collaboratively. If I add “Epistemic status: just figuring things out, really uncertain” at the top of my post, commenters might feel more welcome to point out the flaws in my argument, and might do so more generously than if I had just straightforwardly argued for something incorrect.

And, importantly (and relatedly), epistemic statuses can help us avoid information cascades, in which people collectively arrive at false beliefs by deferring to each other without understanding the true reasons others hold those beliefs. (Here’s a silly toy example: imagine three people, A, B, and C, trying to understand a given subject. C knows that A and B both hold belief X, and, deferring, decides that X is probably right. A and B notice that C also thinks X, and become more confident in X. In reality, B only thinks X because A thinks X. So everyone is depending on A’s belief in X, which could be the result of only one bit of independent information.)

I’m probably missing some reasons for using epistemic statuses in Forum posts. I’d welcome more suggestions in the comments!

Note: an epistemic status communicating uncertainty doesn’t mean that all readers will fully process the uncertainty

One caveat to all of the above is that an epistemic status doesn’t fully remove the danger of over-deferral, and might give false confidence that we’ve addressed it.

So if you’re very uncertain about what you’re claiming, or about specific claims you’re making, it’s worth stating that clearly (and repeatedly) in the body of the post.

As readers, we often less-than-critically accept the conclusions of a post (especially if it’s by someone who has some expertise or status), even if the post has a note at the top disclaiming: “epistemic status — these are just rough thoughts.” If misused, epistemic statuses might give people false confidence that they’ve caveated their writing enough. There’s a related discussion here.

(Note also that deferring isn’t always bad. I just think it’s important to know when and why we’re deferring.)

Further reading

 

  1. ^

     For real though: Epistemic status: this is a post I think could be useful to some people, but I’m not confident I captured the right reasoning for the different things I argue for, and hope the comments will supplement anything I missed or got wrong. A couple of people looked at a draft, and their feedback led to minor edits, but I'm not sure they endorse everything here. 

  2. ^

     This post is not an attempt to get more people to use epistemic statuses. I think they can be useful, but don’t think they’re always necessary or even always helpful.

  3. ^

     Thanks to Lorenzo for discovering this! Some hyperlinks removed.

Comments (2)



Thanks for this. I applaud the effort to make forum posts more legible to newcomers (and everyone else).

I would go even further and eliminate the jargon completely (or preserve it just to link to this post):

  • How sure I am of this: I'm typing this at 2:40 am, so not much

OR

  • How sure I am of this (epistemic status): I'm writing this at 2:40 am, so not much

Thank you for posting this. With respect to the first footnote, I think that even if the post is missing some parts or is slightly miscalibrated, having it on the forum might nonetheless help raise the forum's epistemic standards.

Some considerations: 

  • I find Epistemic Status notes more useful when the author includes the extent to which they researched or thought about something, which you mentioned under “The effort you put into this post and into making its claims very precise.” It might also be useful to state roughly what fraction of the sources cited in the post the author skimmed and what fraction they engaged with deeply.
  • Your point near the end of the post, that we often less-than-critically accept the conclusions of a post (especially if it’s by someone who has some expertise or status), could probably be elaborated on in its own paragraph. Additionally, briefly addressing what readers should take away, or how they should update their beliefs, in the Epistemic Status note might help with this.

In my view, Epistemic Status falls under the category of "general transparency". I highly recommend Reasoning Transparency, which you list in Further Reading, and want to quote what I consider a highly valuable and highly relevant part of its Motivation section:

  • How trustworthy are you?
  • How/why is this post important?
  • Who would you recommend read this post?
  • What are your priors?
  • How much external review has this post undergone?
  • What is your epistemic confidence?
  • What types of support does this post’s claims use?
  • What shortcuts did you take?
  • What are this post’s major inadequacies?
  • How did you update your beliefs after writing this?
  • How should I update my beliefs after reading this?
  • Did you contribute anything notable by creating this post?

In my own writing, I try to employ the following "canned transparency" list, which overlaps with both your list of valuable information to include under Epistemic Status and the above list from Reasoning Transparency. Of course, not all of these will be useful to include in every post; more speculative posts might benefit most from a simple Epistemic Status. The list below is geared more towards reports, reviews, and essays.
 

  • Why does this essay exist? 
    • 1-2 sentences about why there is a need for this writing or why the topic deserves attention
  • Who is this post for?
    • 1 sentence explaining which people / communities the author thinks would benefit most from reading this (e.g., find it interesting, useful for their research, or insightful)
  • How good is this essay?
    • 1-3 sentences about the time spent writing, the robustness of the claims, and the evidence incorporated in this essay, including any shortcuts taken
  • This essay's claims?
    • 1-2 sentences on the types of claims made in the essay 
  • What is my confidence?
    • 1-2 sentences on the confidence you have in the claims you've made 
  • Can you trust me?
    • 1-2 sentences on your qualifications, your track record, and your expertise
  • What are my priors? 
    • 1 sentence on what you believed about the topic before writing this post / essay / review
  • My updated beliefs?
    • 1 sentence on how you updated your beliefs
  • My sense of where you should update?
    • 1 sentence on how you believe others should or might update their beliefs
  • My contribution?
    • 1 sentence on what, if any, contribution you believe you made in the space you are writing in 

This list is more of a blueprint for a standard transparency block at the beginning of substantive posts or reviews, so I am open to edits and other people's thoughts on its comprehensiveness and utility.


 
