
I think “reasoning transparency” (and/or epistemic legibility) is a key value of effective altruism. 

As far as I can tell, the key piece of writing about it is this Open Philanthropy blog post by Luke Muehlhauser, which I’m cross-posting to the Forum with permission. We also have a topic page for "reasoning transparency" — you can see some related posts there.


Published: December 01, 2017 | by Luke Muehlhauser

We at the Open Philanthropy Project value analyses which exhibit strong “reasoning transparency.” This document explains what we mean by “reasoning transparency,” and provides some tips for how to efficiently write documents that exhibit greater reasoning transparency than is standard in many domains.

In short, our top recommendations are to:

  • Open with a linked summary of key takeaways. [more]
  • Throughout a document, indicate which considerations are most important to your key takeaways. [more]
  • Throughout a document, indicate how confident you are in major claims, and what support you have for them. [more]

1 Motivation

When reading an analysis — e.g. a scientific paper, or some other collection of arguments and evidence for some conclusions — we want to know: “How should I update my view in response to this?” In particular, we want to know things like:

  • Has the author presented a fair or biased presentation of evidence and arguments on this topic?
  • How much expertise does the author have in this area?
  • How trustworthy is the author in general? What are their biases and conflicts of interest?
  • What was the research process that led to this analysis? What shortcuts were taken?
  • What rough level of confidence does the author have in each of their substantive claims?
  • What support does the author think they have for each of their substantive claims?
  • What does the author think are the most important takeaways, and what could change the author’s mind about those takeaways?
  • If the analysis includes some data analysis, how were the data collected, which analyses were done, and can I access the data myself?

Many scientific communication norms are aimed at making it easier for a reader to answer questions like these, e.g. norms for ‘related literature’ sections and ‘methods’ sections, open data and code, reporting standards, pre-registration, conflict of interest statements, and so on.

In other ways, typical scientific communication norms lack some aspects of reasoning transparency that we value. For example, many scientific papers say little about roughly how confident the authors are in different claims throughout the paper, or they might cite a series of papers (or even entire books!) in support of specific claims without giving any page numbers.

Below, I (Luke Muehlhauser) offer some tips for how to write analyses that (I suspect, and in my experience) make it easier for the reader to answer the question, “How should I update my views in response to this?”

2 Example of GiveWell charity reviews

I’ll use a GiveWell charity review to illustrate a relatively “extreme” model of reasoning transparency, one that is probably more costly than it’s worth for most analysts. Later, I’ll give some tips for how to improve an analysis’ reasoning transparency without paying as high a cost for it as GiveWell does.

Consider GiveWell’s review of Against Malaria Foundation (AMF). This review…

  • …includes a summary of the most important points of the review, each linked to a longer section that elaborates those points and the evidence for them in some detail.
  • …provides detailed responses to major questions that bear on the likely cost-effectiveness of marginal donations to AMF, e.g. “Are LLINs targeted at people who do not already have them?”, “Do LLINs reach intended destinations?”, “Is there room for more funding?”, and “How generally effective is AMF as an organization?”
  • …provides a summary of the research process GiveWell used to evaluate AMF’s cost-effectiveness.
  • …provides an endnote, link to another section or page, or other indication of reasoning/sources for nearly every substantive claim. There are 125 endnotes, and in general, the endnote provides the support for the corresponding claim, e.g. a quote from a scientific paper, or a link to a series of calculations in a spreadsheet, or a quote from a written summary of an interview with an expert. (There are some claims that do not have such support, but these still tend to clearly signal what the basis for the claim is; e.g. “Given that countries and other funders have some discretion over how funds will be used, it is likely that some portion of AMF’s funding has displaced other funding into other malaria interventions and into other uses.”)
  • …provides a comprehensive table of sources, including archived copies of most sources in case some of the original links break at some point.
  • …includes a list of remaining open questions about AMF’s likely cost-effectiveness, plus comments throughout the report on which claims about AMF GiveWell is more or less confident in, and why.
  • …links to a separate summary of the scientific evidence for the effectiveness of the intervention AMF performs, namely the mass distribution of long-lasting insecticide-treated nets (LLINs), which itself exhibits all the features listed above.
  • …plus much more

3 Most important recommendations

Most analysts and publishers, including the Open Philanthropy Project,[1] don’t (and shouldn’t) invest as much effort as GiveWell does to achieve improved reasoning transparency. How can you improve an analysis’ reasoning transparency cheaply and efficiently? Below are some tips, with examples in the text and in footnotes.

3.1 Open with a linked summary of key takeaways

Many GiveWell and Open Philanthropy Project analyses open with a summary of key takeaways, with links to later sections that elaborate and argue for each of those key takeaways at greater length (examples in footnote[2]). This makes it easy for the reader to understand the key takeaways and examine particular takeaways in more depth.

3.2 Indicate which considerations are most important

Which arguments or pieces of evidence are most critical for your key takeaways? Ideally, this should be made clear early in the document, or at least early in the section discussing each key takeaway.

Some of my earlier Open Philanthropy Project reports don’t do this well. E.g. my carbs-obesity report doesn’t make it clear that the evidence from randomized controlled trials (RCTs) played the largest role in my overall conclusions.

Some examples that do a better job of this include:

  1. My report on behavioral treatments for insomnia makes clear that my conclusions are based almost entirely on RCT evidence, and in particular on (1) the inconclusive or small results from RCTs that “tested [behavioral treatments] against a neutral control at ≥1mo follow-up using objective measures of [total sleep time or sleep efficiency],” (2) the apparent lack of any “high-quality, highly pragmatic” RCTs on the question, and (3) my general reasons for distrusting self-report measures of sleep quality.
  2. After surveying a huge variety of evidence, my report on consciousness and moral patienthood provides an 8-point high-level summary that makes it (somewhat) clear how I’ve integrated those diverse types of evidence into an overall conclusion, and which kinds of evidence play which roles in that conclusion.
  3. The introduction of Holden’s blog post on worldview diversification makes it clear which factors seem to be most important in the case one could make for or against using a worldview diversification strategy, and the rest of the post elaborates each of those points.
  4. Holden’s Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity does the same.

3.3 Indicate how confident you are in major claims, and what support you have for them

For most substantive claims, or at least for “major” claims that are especially critical for your conclusions, try to give some indication of how confident you are in each claim, and what support you think you have for it.

3.3.1 Expressing degrees of confidence

Confidence in a claim can be expressed roughly using words such as “plausible,” “likely,” “unlikely,” “very likely,” and so on. When it’s worth the effort, you might instead express your confidence as a probability or a confidence interval, in part because terms like “plausible” can be interpreted quite differently by different readers.
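For illustration, here is one way an author could pin such terms down explicitly. This is my own sketch (not part of the original post), and the probability ranges below are hypothetical conventions rather than any standard:

```python
# A hypothetical mapping from verbal confidence terms to probability
# ranges. The ranges are illustrative only; different writers and
# readers draw these boundaries differently, which is exactly why
# stating an explicit number can help.
CONFIDENCE_TERMS = {
    "very unlikely": (0.00, 0.10),
    "unlikely":      (0.10, 0.35),
    "plausible":     (0.35, 0.65),
    "likely":        (0.65, 0.90),
    "very likely":   (0.90, 1.00),
}

def annotate(term: str) -> str:
    """Render a verbal term with its (hypothetical) probability range."""
    lo, hi = CONFIDENCE_TERMS[term]
    return f"{term} ({lo:.0%}-{hi:.0%})"

print(annotate("likely"))  # -> likely (65%-90%)
```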

Below are examples that illustrate the diversity of options available for expressing degrees of confidence with varying precision and varying amounts of effort:[3]

  1. “I think there is a nontrivial likelihood (at least 10% with moderate robustness, and at least 1% with high robustness) of transformative AI within the next 20 years” (source). This is a key premise in the case for making potential risks from advanced AI an Open Philanthropy Project priority, so we thought hard about how confident we should be that transformative AI would be created in the next couple decades, and decided to state our confidence using probabilities.
  2. In a table, I reported specific probabilities that various species (e.g. cows, chickens) are phenomenally conscious, because such estimates were a major goal of the investigation. However, I also made clear that my probabilities are hard to justify and may be unstable, using nearby statements such as “I don’t have much reason to believe my judgments about consciousness are well-calibrated” and “I have limited introspective access to the reasons why my brain has produced these probabilities rather than others” and “There are many different kinds of uncertainty, and I’m not sure how to act given uncertainty of this kind.” (The last of these is followed by a footnote with links to sources explaining what I mean by “many kinds of uncertainty.”)
  3. “…my own 70% confidence interval for ‘years to [high-level machine intelligence]’ is something like 10–120 years, though that estimate is unstable and uncertain” (source). This is the conclusion to a report about AI timelines, so it seemed worthwhile to conclude the report with a probabilistic statement about my forecast — in this case, in terms of a 70% confidence interval — but I also indicate that this estimate is “unstable and uncertain” (with a footnote explaining what I mean by that).
  4. “It is widely believed, and seems likely, that regular, high-quality sleep is important for personal performance and well-being, as well as for public safety and other important outcomes” (source). This claim was not a major focus of the report, so I simply said “seems likely,” to indicate that I think the probability of my statement being true is >50%, while also indicating that I haven’t investigated the evidence in detail and haven’t tried to acquire a more precise probabilistic estimate.
  5. “CBT-I appears to be the most commonly-discussed [behavioral treatment for insomnia] in the research literature, and is plausibly the most common [behavioral treatment for insomnia] in clinical practice” (source). I probably could have done 1-3 hours of research and become more confident about whether CBT-I is the most common behavioral treatment for insomnia in clinical practice, but this claim wasn’t central to my report, so instead I just reported my rough impression after reading some literature, and thus reported my level of confidence as “plausible.”
  6. “The rise of bioethics seems to be a case study in the transfer of authority over a domain (medical ethics) from one group (doctors) to another (bioethicists), in large part due to the first group’s relative neglect of that domain” (source). Here, I use the phrase “seems to be” to indicate that I’m fairly uncertain even about this major takeaway, and the context makes clear that this uncertainty is (at least in part) due to the fact that my study of the history of bioethics was fairly quick and shallow.
  7. In my report on behavioral treatments for insomnia, I expressed some key claims in colloquial terms in the main text, but in a footnote provided a precise, probabilistic statement of my claim. For example, I claimed in the main text that “I found ~70 such [systematic reviews], and I think this is a fairly complete list,” and my footnote stated: “To be more precise: I’m 70% confident there are fewer than 5 [systematic reviews] on this topic, that I did not find, published online before October 2015, which include at least 5 [randomized controlled trials] testing the effectiveness of one or more treatments for insomnia.”
  8. “I have raised my best estimate of the chance of a really big storm, like the storied one of 1859, from 0.33% to 0.70% per decade. And I have expanded my 95% confidence interval for this estimate from 0.0–4.0% to 0.0–11.6% per decade” (source). In this case, the author (David Roodman) expressed his confidences precisely, since his estimates are the output of a statistical model. (A minimal sketch of what that looks like follows this list.)
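As a minimal sketch of what it means for an interval to be “the output of a statistical model” (my own toy illustration with made-up numbers, not Roodman’s actual model), one can read a point estimate and a 95% interval directly off simulated draws of the quantity of interest:

```python
# Toy illustration: derive a point estimate and a 95% interval from
# simulated draws of a model parameter. The distribution and numbers
# are invented for this example, not taken from Roodman's model.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical posterior draws of "chance of a big storm per decade"
draws = rng.beta(a=2, b=280, size=100_000)

point = draws.mean()
lo, hi = np.percentile(draws, [2.5, 97.5])  # central 95% interval
print(f"best estimate: {point:.2%} per decade")
print(f"95% interval:  {lo:.2%} to {hi:.2%} per decade")
```

The precision of the reported interval comes for free once the model exists; the remaining judgment call is how much to trust the model itself.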

3.3.2 Indicating kinds of support

Given limited resources, you cannot systematically and carefully examine every argument and piece of evidence relevant to every claim in your analysis. Nor can you explain in detail what kind(s) of support you think you have for every claim you make. Nevertheless, you can quickly give the reader some indication of what kind(s) of support you have for different claims, and you can explain in relatively more detail the kind(s) of support you think you have for some key claims.

Here are some different kinds of support you might have for a claim:

  • another detailed analysis you wrote
  • careful examination of one or more studies you feel qualified to assess
  • careful examination of one or more studies you feel only weakly able to assess
  • shallow skimming of one or more studies you feel qualified to assess
  • shallow skimming of one or more studies you feel only weakly able to assess
  • verifiable facts you can easily provide sources for
  • verifiable facts you can’t easily provide sources for
  • expert opinion you feel comfortable assessing
  • expert opinion you can’t easily assess
  • a vague impression you have based on reading various sources, or talking to various experts, or something else
  • a general intuition you have about how the world works
  • a simple argument that seems robust to you
  • a simple argument that seems questionable to you
  • a complex argument that nevertheless seems strong to you
  • a complex argument that seems questionable to you
  • the claim seems to follow logically from other supported claims plus general background knowledge
  • a source you can’t remember, except that you remember thinking at the time it was a trustworthy source, and you think it would be easy to verify the claim if one tried[4]
  • a combination of any of the above

Below, I give a series of examples for how to indicate different kinds of support for different claims, and I comment briefly on each of them.

Here’s the first example:

“As stated above, my view is based on a large number of undocumented conversations, such that I don’t think it is realistic to aim for being highly convincing in this post. Instead, I have attempted to lay out the general structure of the inputs into my thinking. For further clarification, I will now briefly go over which parts of my argument I believe are well-supported and/or should be uncontroversial, vs. which parts rely crucially on information I haven’t been able to fully share…” [source]

In some cases, you can’t provide much of the reasoning for your view, and it’s most transparent to simply say so.

Next example:

“It is widely believed, and seems likely, that regular, high-quality sleep is important for personal performance and well-being, as well as for public safety and other important outcomes.” [source]

This claim isn’t the focus of the report, so I didn’t review the literature on the topic, and my phrasing makes it clear that I believe this claim merely because it is “widely believed” and “seems likely,” not because I have carefully reviewed the relevant literature.

“[CBT-I] can be delivered on an individual basis or in a group setting, via self-help (with or without phone support), and via computerized delivery (with or without phone support).” [source]

This claim is easily verifiable, e.g. by Googling “computerized CBT-I” or “self-help CBT-I,” so I didn’t bother to explain what kind of support I have for it — other than to say, in a footnote, “I found many or several RCTs testing each of these types of CBT-I.”

“PSG is widely considered the ‘gold standard’ measure of sleep, but it has several disadvantages. It is expensive, complicated to interpret, requires some adaptation by the patient (people aren’t used to sleeping with wires attached to them), and is usually (but not always) administered at a sleep lab rather than at home.” [source]

These claims are substantive but non-controversial, so as support I merely quote (in a footnote) some example narrative reviews which back up my claims.

“Supposedly (I haven’t checked), [actigraphy] correlates well with PSG on at least two key variables…” [source]

Here, I quote from narrative reviews in a footnote, but because the “correlates well with” claim is more substantive and potentially questionable/controversial than the earlier claims about PSG, I flag both my uncertainty, and the fact that I haven’t checked the primary studies, by saying “Supposedly (I haven’t checked)…” This is a good example of a phrasing that helps improve a document’s reasoning transparency, but would rarely be found in e.g. a scientific paper.

“these seven trials were only moderately pragmatic in design.” [source]

To elaborate on this claim, in a footnote I make a prediction about the results of a very specific test for “pragmaticness.” The footnote both makes it clear what I mean by “moderately pragmatic in design,” and provides a way for someone to check whether my statement about the studies is accurate. Again, the idea isn’t that the reader can assume my predictions are well-calibrated, but rather that I’m being clear about what I’m claiming and what kind of support I think I have for it. (In this case, my support is that I skimmed the papers and I came away with an intuition that they wouldn’t score very highly on a certain test of study pragmaticness.)

Also, I couldn’t find anything succinct that clearly explained what I meant by “pragmatic,” and the concept of pragmaticness was fairly important to my overall conclusions in the report, so I took the time to write a separate page on that concept, and linked to that page from this report.

“I have very little sense of how much these things would cost. My guess is that if a better measure of sleep (of the sort I described) can be developed, it could be developed for $2M-$20M. I would guess that the “relatively small” RCTs I suggested might cost $1M-$5M each, whereas I would guess that a large, pragmatic RCT of the sort I described could cost $20M-$50M. But these numbers are just pulled from vague memories of conversations I’ve had with people about how much certain kinds of product development and RCT implementation cost, and my estimates could easily be off by a large factor, and maybe even an order of magnitude.” [source]

Here, I’m very clear about how weak the basis for my cost estimates is.

“both a priori reasoning about self-report measures and empirical reviews of the accuracy of self-report measures (across multiple domains) lead me to be suspicious of self-reported measures of sleep.” [source]

In this case, I hadn’t finished my review of the literature on self-report measures, and I didn’t have time to be highly transparent about the reasoning behind my skepticism of the accuracy of self-report measures, so in a footnote I simply said: “I am still researching the accuracy of self-report measures across multiple domains, and might or might not produce a separate report on the topic. In the meantime, I only have time to point to some of the sources that have informed my preliminary judgments on this question, without further comment or argument at this time. [Long list of sources.] Please keep in mind that this is only a preliminary list of sources: I have not evaluated any of them closely, they may be unrepresentative of the literature on self-report as a whole, and I can imagine having a different impression of the typical accuracy of self-report measures if and when I complete my report on the accuracy of self-report measures. My uncertainty about the eventual outcome of that investigation is accounted for in the predictions I have made in other footnotes in this report…” This is another example of a footnote that improves the reasoning transparency of the report, but is a paragraph you’d be unlikely to read in a journal article.

“I will, for this report, make four key assumptions about the nature of consciousness. It is beyond the scope of this report to survey and engage with the arguments for or against these assumptions; instead, I merely report what my assumptions are, and provide links to the relevant scholarly debates. My purpose here isn’t to contribute to these debates, but merely to explain ‘where I’m coming from.’” [source]

Here, I’m admitting that I didn’t take the time to explain the support I think I have for some key assumptions of the report.

“As far as we know, the vast majority of human cognitive processing is unconscious, including a large amount of fairly complex, ‘sophisticated’ processing. This suggests that consciousness is the result of some particular kinds of information processing, not just any information processing. Assuming a relatively complex account of consciousness, I find it intuitively hard to imagine how (e.g.) the 302 neurons of C. elegans could support cognitive algorithms which instantiate consciousness. However, it is more intuitive to me that the ~100,000 neurons of the Gazami crab might support cognitive algorithms which instantiate consciousness. But I can also imagine it being the case that not even a chimpanzee happens to have the right organization of cognitive processing to have conscious experiences.” [source]

Here, I make it clear that my claims are merely intuitions.

“By the time I began this investigation, I had already found persuasive my four key assumptions about the nature of consciousness: physicalism, functionalism, illusionism, and fuzziness. During this investigation I studied the arguments for and against these views more deeply than I had in the past, and came away more convinced of them than I was before. Perhaps that is because the arguments for these views are stronger than the arguments against them, or perhaps it is because I am roughly just as subject to confirmation bias as nearly all people seem to be (including those who, like me, know about confirmation bias and actively try to mitigate it). In any case: as you consider how to update your own views based on this report, keep in mind that I began this investigation as a physicalist functionalist illusionist who thought consciousness was likely a very fuzzy concept.” [source]

This example makes it clear that the reader shouldn’t conclude that my investigation led me to these four assumptions, but instead that I already had those assumptions before I began.

“Our understanding is that it is not clearly possible to create an advanced artificial intelligence agent that avoids all challenges of this sort. [footnote:] Our reasoning behind this judgment cannot be easily summarized, and is based on reading about the problem and having many informal conversations. Bostrom’s Superintelligence discusses many possible strategies for solving this problem, but identifies substantial potential challenges for essentially all of them, and the interested reader could read the book for more evidence on this point.” [source]

This is another example of simply saying “it would be too costly to summarize our reasoning behind this judgment, which is based on many hours of reading about the topic and discussing it with others.”

Often, a good way to be transparent about the kind of support you think you have for a claim is to summarize the research process that led to the conclusion. Examples:

  1. “Here is a table showing how the animals I ranked compare on these factors (according to my own quick, non-expert judgments)… But let me be clear about my process: I did not decide on some particular combination rule for these four factors, assign values to each factor for each species, and then compute a resulting probability of consciousness for each taxon. Instead, I used my intuitions to generate my probabilities, then reflected on what factors seemed to be affecting my intuitive probabilities, and then filled out this table” (source).
  2. “I spent less than one hour on this rapid review. Given this limitation, I looked only for systematic reviews released by the Cochrane Collaboration…, a good source of reliably high-quality systematic reviews of intervention effectiveness evidence. I also conducted a few Google Scholar keyword searches to see whether I could find compelling articles challenging the Cochrane reviews’ conclusions, but I did not quickly find any such articles” (source).
  3. “I did not conduct any literature searches to produce this report. I have been following the small field of HLMI forecasting closely since 2011, and I felt comfortable that I already knew where to find most of the best recent HLMI forecasting work” (source).
  4. “To investigate the nature of past AI predictions and cycles of optimism and pessimism in the history of the field, I read or skim-read several histories of AI and tracked down the original sources for many published AI predictions so I could read them in context. I also considered how I might have responded to hype or pessimism/criticism about AI at various times in its history, if I had been around at the time and had been trying to make my own predictions about the future of AI… I can’t easily summarize all the evidence I encountered that left me with these impressions, but I have tried to collect many of the important quotes and other data below” (source).
  5. “To find potential case studies on philanthropic field-building, I surveyed our earlier work on the history of philanthropy, skimmed through the many additional case studies collected in The Almanac of American Philanthropy, asked staff for additional suggestions, and drew upon my own knowledge of the history of some fields. My choices about which case studies to look at more closely were based mostly on some combination of (1) the apparent similarity of the case study to our mid-2016 perception of the state of the nascent field of research addressing potential risks from advanced AI (the current focus area of ours where the relevant fields seem most nascent, and where we’re most likely to apply lessons from this investigation in the short term), and (2) the apparent availability and helpfulness of sources covering the history of the case study. I read and/or skimmed the sources listed in the annotated bibliography below, taking notes as I went. I then wrote up my impressions (based on these notes) of how the relevant fields developed, what role (if any) philanthropy seemed to play, and anything else I found interesting. After a fairly thorough look at bioethics, I did quicker and more impressionistic investigations and write-ups on a number of other fields” (source).

For further examples, see the longer research process explanations in David Roodman’s series on the effects of incarceration on crime; my reports on the carbs-obesity hypothesis and behavioral treatments for insomnia; the “our process” sections of most Open Philanthropy Project cause reports and GiveWell top charity reviews and intervention reports; and a variety of other reports.[5]

4 Secondary recommendations

4.1 Provide quotes and page numbers when possible

When citing support for a claim, provide a page number if possible. Better yet, directly quote the most relevant passage, so the reader doesn’t need to track down the source to see what kind of support it provides for the claim.

Especially if your report is published online, there are essentially no space constraints barring you from including dozens or hundreds of potentially lengthy quotes from primary sources in footnotes and elsewhere. E.g. see the many long quotes in the footnotes of my report on consciousness and moral patienthood, or the dozens of quotes in the footnotes of GiveWell intervention reports.

4.2 Provide data and code when possible

Both GiveWell and the Open Philanthropy Project provide underlying data, calculations, and code when possible.

In some cases these supplementary materials are fairly large, as with the 800 MB of data and code for David Roodman’s investigation into the impact of incarceration on crime, or the 11-sheet cost-effectiveness models (v4) for GiveWell’s top charities.

In other cases they can be quite small, for example a 27-item spreadsheet of case studies I considered examining for Some Case Studies in Early Field Growth, or a 14-item spreadsheet of global catastrophic risks.

4.3 Provide archived copies of sources when possible

Both GiveWell and the Open Philanthropy Project provide archived copies of sources when possible,[6] since web links can break over time.
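As one low-cost way to script this (my own sketch, not either organization’s actual workflow), the Internet Archive’s Wayback Machine can be asked to capture a snapshot of a page by requesting web.archive.org/save/ followed by the URL:

```python
# Minimal sketch: request fresh Wayback Machine snapshots for a list
# of cited URLs. The URLs below are placeholders, and this is one
# possible workflow, not GiveWell's or Open Phil's actual tooling.
import requests

SOURCES = [
    "https://example.org/some-cited-paper",
    "https://example.org/another-source",
]

for url in SOURCES:
    # A GET on web.archive.org/save/<url> asks the Wayback Machine to
    # capture the page; archiving can be slow, hence the long timeout.
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    print(url, "->", resp.status_code)
```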

4.4 Provide transcripts or summaries of conversations when possible

For many investigations, interviews with domain experts will be a key source of information alongside written materials. Hence when possible, it will help improve the reasoning transparency of a report if those conversations (or at least the most important ones) can be made available to the reader, either as a transcript or a summary.

But in many cases this is too time-costly to be worth doing, and in many cases a domain expert will only be willing to speak frankly with you anonymously, or will only be willing to be quoted/cited on particular points.


 

  1. ^

    For more on openness and transparency at the Open Philanthropy Project, see What “Open” Means to Us and Update on How We’re Thinking about Openness and Information Sharing.

  2. ^

    Or, if the summary itself doesn’t link to elaborations on the key takeaways/arguments, then the summary is immediately followed by a linked table of contents that does so, as is the case for this document. Open Philanthropy Project examples: every cause report, plus many other reports and blog posts, e.g. How Will Hen Welfare Be Impacted by the Transition to Cage-Free Housing?; 2017 Report on Consciousness and Moral Patienthood; Worldview Diversification; Three Key Issues I’ve Changed My Mind About; Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity; Hits-based Giving; and Initial Grants to Support Corporate Cage-free Reforms. GiveWell examples: every top charity review, most completed intervention reports, plus some other reports and blog posts, e.g. Approaches to Moral Weights: How GiveWell Compares to Other Actors.

  3. ^

    Examples in the main text of this document are typically numbered simply for ease of reference.

  4. ^

     In such a case, you might say something like: “We do not recall our source for this information but believe it would be straightforward to verify.”

  5. ^

    E.g. How Will Hen Welfare Be Impacted by the Transition to Cage-Free Housing? and the “process and findings” document for Ben Soskis’ bibliography for the history of American philanthropy.

  6. ^

    See the table of sources at the bottom of most large reports by GiveWell or the Open Philanthropy Project.

Comments

Thank you, this is an excellent post. This style of transparent writing can often come across as very 'ea' and gets made fun of for its idiosyncrasies, but I think it's a tremendous strength of our community.
 

I think it's sometimes a strength and sometimes a weakness. It's useful for communicating certain kinds of ideas, and not others. Contrary to Lizka, I personally wouldn't want to see it as part of the core values of EA, but just as one available tool.

[anonymous]:

Can you say more about when it's a weakness and what kinds of ideas it's not useful for communicating? 

Disclaimer: I wrote this while tired, not entirely sure it's coherent or relevant

It's less productive for communicating ideas that are not as analytic and reductionist, or more subjective. One type of example would be ones that are more like an ideology, a [theory of something], a comprehensive worldview. In such cases, trying to apply this kind of reductionist approach is bound to miss important facets, or important connections between them, or a meaningful big picture.

Specific questions I think this would be ill-suited for:

  • Should you be altruistic?
  • What gives life meaning?
  • Should the EA movement have a democratic governance structure? (Should it be centralised at all?)
  • Is capitalism the right framework for EA and for society in general?

It should be noted that I'm a mathematician, so for me it usually is a comfortable way of communication. But for less STEM-y or analytical people, who can communicate ideas that I can't, I think this might be limiting.

One benefit of reasoning transparency I've personally appreciated is that it helps the other party get a better sense of how much to update based on claims made.

I also think clear indication of the key cruxes of the stated argument and the level of supporting evidence can help hold us accountable in the claims we make and contribute to reducing miscommunication: how strong of a statement can be justified by the evidence I have? Am I aiming to explain, or persuade? How might my statements be misinterpreted as stronger or weaker than they are? (One example that comes to mind is the Bay of Pigs invasion, which involved a miscommunication between JFK and the Joint Chiefs of Staff around understanding what "fair chance" meant).

It's not clear to me on a quick read that the questions you've listed are worse off under reasoning transparency, or that actions like "clearly indicating key cruxes/level of support you have for the claims that hold your argument together" would lead to more missing facets/important connections/a meaningful big picture.

For example, if I made a claim about whether "capitalism is the right framework for EA/society in general", would you find it less productive to know if I had done Nobel prize-winning research on this topic, if I'd run a single-country survey of 100 people, or if I was speaking just from non-expert personal experience?

If I made a claim about "What gives life meaning", would you find it less productive if I laid out the various assumptions that I am making, or the most important considerations behind my key takeaways?

(Commenting in personal capacity etc)

[anonymous]:

I agree with bruce that the specific questions you mentioned could benefit from some reasoning transparency. In general, I think this is one of the best EA innovations/practices, although I agree that it's only one tool among many and not always best suited to the task at hand.

Here are some potential downsides I see:

  • Not well suited for communicating worldviews, as you said
  • Can effectively reduce exploration
  • Increases costs of writing

I think reasoning transparency is a very important concept that I wouldn't mind EA orgs adopting more, so I was surprised I couldn't find anything about it except for Open Phil's 2017 blog post. Thank you for this!

Edit: Effective Thesis is giving reasoning transparency workshops now!

EDIT: I gave this a mild rewrite about an hour after writing it to make a few points clearer. I notice I already got one strong disagreement. If anyone would like to offer a comment as well, I'm interested in disagreements, particularly around the value of statements of epistemic confidence. Perhaps they serve a purpose that I do not see? I'd like to know, if so.

Hmm. While I agree that it is helpful to include references for factual claims when it is in the author's interest[1], I do not agree that inclusion of those references is necessarily useful to the reader.  

For example, any topic about which the reader is unfamiliar or has a strongly held point of view is also one that likely has opposing points of view. While the reader might be interested in exploring the references for an author's point of view, I think it would make more sense to put the responsibility on the reader to ask for those references than to force the author to presume a reader without knowledge or agreement with the author's point of view. 

Should the author be responsible for offering complete coverage of all arguments and detail their own decisions about what sources to trust or lines of argument to pursue? I think not. It's not practical or needed.

However, what seems to be the narrative here is that, if an author does not supply references, the reader assumes nothing and moves on. After all, the author didn't use good practices and the reader is busy. On the other hand, so the narrative goes, by offering a self-written statement of my epistemic confidence, I am letting you know how much to trust my statements whether I offer extensive citations or not.

The EA approach of putting the need for references on authors up-front (rather than by request) is not a good practice, and neither is deferring to experts or discounting arguments simply because they are not from a recognized or claimed expert.

In the case of a scientific paper, if I am critiquing its claims, then yes, I will go through its citations. But I do so in order to gather information about the larger arguments or evidence for the paper's claims, regardless of the credentials or expertise that the author claims. Furthermore, my rebuttal might include counterarguments with the same or other citations.

I can see this as valuable where I know that I (could) disagree with the author. Obviously, the burden is on me to collect the best evidence for the claims an author makes as well as the best evidence against them. If I want to "steelman" the author, as you folks put it, or refute the author's claims definitively, then I will need to ask for citations from the author and collect additional information as well.

The whole point of providing citations up-front is to allow investigation of the author's claims, but not to provide comfort that the claims are true or that I should trust the author. Furthermore, I don't see any point to offering an epistemic confidence statement because such a statement contributes nothing to my trust in the author. However, EA folks seem to think that proxies for rigor and the reader's epistemic confidence in the author are valid.[2]

With our easy access to:

  • scientific literature
  • think-tank distillations of research or informed opinion 
  • public statements offered by scientists and experts on topics both inside and outside their areas of expertise
  • books (with good bibliographies) written about most topics
  • freely accessible journalism
  • thousands of forum posts on EA topics
  • and other sources

I don't find it necessary to cite sources explicitly in forum posts, unless I particularly want to point readers to those sources. I know they can do the research themselves. If you want to steelman my arguments or make definitive arguments against them, then you should, is my view. It's not my job to help you nor should I pretend that I am by offering you whatever supports my claims.

In some cases, you can’t provide much of the reasoning for your view, and it’s most transparent to simply say so.

Well, whether a person chooses not to offer their reasoning or simply can't, you can conclude that if they don't offer it and you want to know what it is, you should ask for it. After all, if I assert something you find disputable, why not dispute it? And the first step in disputing it is to ask for my reasoning. This is a much better approach than deciding whether you trust my presentation based on my write-up of my epistemic status.

For example, here is a plausible epistemic status for this comment:

  • a confident epistemic status: I spent several hours thinking over this comment's content in particular. I thought it through after browsing more than 30 statements of epistemic status presented by EA folks in the last six months. I have more than a decade of experience judging internet posts and articles on their argumentation quality (not in any professional capacity) and have personally authored about 1500-2000 posts and comments (not tweets) on academic topics in the last 20 years. My background includes a year of training in formal and informal logic, more than a year of linguistics study, and about a year of personal study of AI and semantic networks topics and an additional (approximate) year of self-study of research methods, pragmatics and argumentation, all devoted to assertion deconstruction and development, and most applied in the context of responding to or writing forum/blog posts.

Actually, that epistemic status is accurate, but I don't see it as relevant to my claims here. Plus, how do you know it's true? But suppose you thought it was true and felt reassured that my claims might have merit based on that epistemic status. I think you would be mistaken, but I won't argue it now. Interestingly, it does not appear representative of the sorts of epistemic status that I have actually read on this forum.

Here is an alternative epistemic status statement for my comment that looks like others on this forum:

  • a less-confident epistemic status: I only spent a few minutes thinking over my position before I decided to write it down [implying that I'm brash]. Plus I wrote it when I was really tired. I'm not an acknowledged expert on this topic, so my unqualified statements are not reliable. Also, I don't know of anybody else saying this. Finally, I'm not sure if I can articulate why I think what I do, at least to meet your standards, which seem high on this forum [implying that I'm intimidated]. So I guess I'm not that confident about my position and don't want you to agree with it easily, particularly since you've been using your approach for so long. I'm just trying to stimulate conversation with this comment.

It seems intended to acknowledge and validate potential arguments against it that are:

  • ad hominem: "[I'm brash and] I wrote this when I was really tired"
  • appeals to authority: "I'm not an acknowledged expert on this topic"
  • ad populum: "I don't know of anybody else saying this"
  • sunk cost: "you've been using your approach for so long"

and the discussion of confidence is confusing. After acknowledging that I:

  • am brash and tired
  • didn't spend long formulating my position
  • am not an expert
  • feel intimidated by your high standards
  • wouldn't want you to reverse your position too quickly

I let you know that I'm not that confident in my assertions here. And then I go make them anyway with the stated goal that "I'm just trying to stimulate conversation." Turned into a best practice as it is here, though, I see a different consequence for these statements of epistemic confidence.

What I have learned over my years on various forums (and blogs) includes:

  • an author will offer information that is appealing to readers or that helps the readers see the author in a better light. 

    Either of the epistemic status offerings I gave might convince different people to see me in a better light. The first might convince readers that I am knowledgeable and have strong and well-formed opinions. The second might convince readers that I am honest and humble and would like more information. The second also compliments the reader's high standards.
     
  • readers who are good critical thinkers are aware of fallacies like ad hominem, ad populum, appeals to authority, and sunk cost. They also distrust flattery.

    They know that to get at the truth, you need more information than is typically available in a post or comment. If they have a strong disagreement with an author, they have a forum to contest the author's claims. If the author is really brash, tired, ill-informed, inarticulate, and has no reason for epistemic confidence in his claims, then the way to find out is not to take the author's word for it, but to do your own research and then challenge the author's claims directly. You wouldn't want to encourage the author to validate irrelevant arguments against his claims by asking him for his epistemic confidence and his reasons for it. Even if the author chose on his own to hedge his claims with these irrelevant statements about his confidence, when you decide to steelman or refute the author's argument, you will need different arguments than the four offered by the author (ad hominem, ad populum, appeals to authority, and sunk cost).

 

  1. ^

    I agree that an author should collect their data and offer reliable access to their sources, including quotes and page numbers for citations and archived copies, when that is in the author's interest. In my experience, few people have the patience or time to go through your sources just to confirm your claims. However, if the information is important enough to warrant assessment for rigor, and you as the author care about that assessment, then yeah. Supply as much source material as possible as effectively as possible. And do a good job in the first place, that is, reach the right conclusions with your best researcher cap on. Help yourself out.

  2. ^

    It should seem obvious (this must have come up at least a few times on the forum), but there's a dark side to using citations and sources, and you'll see it from think tanks, governments, and NGOs, and here. The author will present a rigorous-looking but bogus argument. You have to actually examine the sources and go through them to see if they're characterized correctly in the argument or if they are unbiased or representative of expert research in the topic area. Use of citations and availability of sources is not a proxy for correct conclusions or argument rigor, but authors commonly use just the access to sources and citations to help them persuade readers of their correctness and rigor, expecting that no one will follow up.
