[Epistemic status: Was originally going to be a comment, but I decided to make a post because the topic seemed to be of general interest.]

I'm not sure how I feel about the idea that people should familiarize themselves with the literature on a topic before blogging about it. My current hunch is that it's good for people to write blog posts about things even if they aren't familiar with the literature (though it might be even better if they wrote their post, then became familiar with the literature, then edited their post accordingly and published it).

Here are some perspectives I've seen on each side:

  • In machine learning, "bagging" approaches, where we combine the outputs of many classifiers each trained on a different bootstrap resample of the training data, often outperform a single complex model with access to all the data (see the code sketch after this list). This isn't necessarily an argument for the wisdom of the crowd per se, but it seems like an argument for the wisdom of a reasonably informed crowd of people coming at something from a variety of perspectives.

  • When I read about highly creative people (Turing award winner types--this might be more true in computer science than other fields), a recurring theme is the importance of reinventing things from scratch without reading the thoughts other people have already had about a topic, applying ideas from apparently unrelated fields, and more generally bringing a novel perspective to bear.

  • Even absent the benefits of originality, I think reasoning things out for oneself can be a great way to internalize them. ("What I cannot create, I do not understand" - Richard Feynman.) The argument for publishing is less clear in this case, but you could think of a forum discussion as serving a similar purpose as a recitation or discussion section in a university course, where one gets to examine a topic through conversation. You and the people who comment on your post get some social reward in the process of thinking about a topic.
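A minimal sketch of the bagging idea from the first bullet, in Python with scikit-learn (my own illustration, not code from the post; the dataset and hyperparameters are arbitrary): fit many decision trees to bootstrap resamples of the training data, combine them by majority vote, and compare against a single tree trained on all the data.

```python
# My own illustrative sketch (not from the post): bagging = fit many trees to
# bootstrap resamples of the training data and combine them by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: a single tree with access to all the training data.
single = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Ensemble: each tree sees only a bootstrap resample (sampled with replacement).
ensemble_preds = []
for _ in range(50):
    idx = rng.integers(0, len(X_train), size=len(X_train))
    tree = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
    ensemble_preds.append(tree.predict(X_test))

# Majority vote across the ensemble.
bagged_pred = (np.mean(ensemble_preds, axis=0) > 0.5).astype(int)

print("single tree accuracy:", (single.predict(X_test) == y_test).mean())
print("bagged accuracy:     ", (bagged_pred == y_test).mean())
```

On most random seeds the bagged ensemble matches or beats the single tree, which is the variance-reduction effect this bullet is gesturing at.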

However, I'm also worried that if a blog post about a topic is more accessible than a journal paper, it might end up being read by more people and factored into our collective decision-making more than it deserves on a purely epistemic basis. Of course, people are already spending much more time reading Facebook posts than journal papers--probably including people in the EA movement--so it's not clear how to think about the harms here in general. (For example, if reading the EA Forum usually funges against Facebook for people, and EA Forum posts are generally higher quality than Facebook posts, that seems like an argument for more EA Forum posts.)

In any case, you can mitigate these risks by marking your blog post with an epistemic status before publishing. It certainly seems acceptable to me to link to a relevant paper in the comments of a blog post about a topic that has been covered in the literature (e.g. "Author X [disagrees] with you"). But I think it can be even better to additionally summarize the argument that Author X makes in a way that feels personally compelling to you and take the opportunity to re-examine the argument with fresh eyes--see my previous complaints about information cascades in the context of Effective Altruism.

Comments

I agree with Huemer, and liked his post.

Regarding your question, I think one definitely should familiarize oneself with the literature before writing an EA post. While there could be benefits to not having read other people's thoughts, I think they are on average substantially outweighed by the benefits of having read them. Most uninformed posts are not creative, but simply miss key considerations, and therefore fail to contribute.

Posts written by people who haven't familiarized themselves with the literature will probably be quite high-variance: lots of low-value posts, with the chance of the odd novel insight. I fear that time-pressed readers wouldn't want to take part in such a forum, because the ratio of reward to effort would be too poor. I certainly wouldn't find it enjoyable.

In fact, I'd like the average EA Forum post to more thoroughly discuss the existing literature than is the case at present.

(Upvoted)

Maybe it's possible to develop more specific guidelines here. For example, your comment implies that you think it's essential to know all the key considerations. OK... but I don't see why ignorance of known key considerations would prevent someone from pointing out a new key consideration. And if we discourage them from posting, that could be very harmful, because as you say, it's important to know all the key considerations.

In other words, maybe it's worth differentiating between the act of generating intellectual raw material and the act of drawing conclusions.

I see your intuition, but in practice I think most uninformed posts are poor. (In principle, that could be studied empirically.) That's also my experience from academia: lack of knowledge is usually more of a problem than an advantage.

I agree that sometimes outsiders make breakthroughs in scientific fields, but I would guess that even those outsiders are usually relatively well acquainted with the field they're entering; they've normally familiarized themselves with it. So I'm not sure it's right to view those cases as evidence that forum participants shouldn't familiarize themselves with the literature. I'd guess they point more to the value of intellectual diversity and a multiplicity of perspectives.

One possible synthesis comes from Turing award winner Richard Hamming's book The Art of Doing Science and Engineering, which has chapters near the end on Creativity and on Experts. The chapters are somewhat rambly, so I've quoted passages below. My attempt to summarize Hamming's position: having a deep intellectual toolkit is valuable, but experts are often overconfident and resistant to new ideas.

Chapter 25: Creativity

...Do not be too hasty [in refining a problem], as you are likely to put the problem in the conventional form and find only the conventional solution...

...

...Wide acquaintance with various fields of knowledge is thus a help—provided you have the knowledge filed away so it is available when needed, rather than to be found only when led directly to it. This flexible access to pieces of knowledge seems to come from looking at knowledge while you are acquiring it from many different angles, turning over any new idea to see its many sides before filing it away. This implies effort on your part not to take the easy, immediately useful “memorizing the material” path, but prepare your mind for the future.

...

Over the years of watching and working with John Tukey I found many times he recalled the relevant information and I did not, until he pointed it out to me. Clearly his information retrieval system had many more “hooks” than mine did. At least more useful ones! How could this be? Probably because he was more in the habit than I was of turning over new information again and again so his “hooks” for retrieval were more numerous and significantly better than mine were. Hence wishing I could similarly do what he did, I started to mull over new ideas, trying to make significant “hooks” to relevant information so when later I went fishing for an idea I had a better chance of finding an analogy. I can only advise you to do what I tried to do—when you learn something new think of other applications of it—ones which have not arisen in your past but which might in your future. How easy to say, but how hard to do! Yet, what else can I say about how to organize your mind so useful things will be recalled readily at the right time?

...

...Without self-confidence you are not likely to create great, new things. There is a thin line between having enough self-confidence and being over-confident. I suppose the difference is whether you succeed or fail; when you win you are strong willed, and when you lose you are stubborn!...

Chapter 26: Experts

...

In an argument between a specialist and a generalist the expert usually wins by simply: (1) using unintelligible jargon, and (2) citing their specialist results which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts are both necessary, and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry through their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.

...

Experts in looking at something new always bring their expertise with them as well as their particular way of looking at things. Whatever does not fit into their frame of reference is dismissed, not seen, or forced to fit into their beliefs. Thus really new ideas seldom arise from the experts in the field. You can not blame them too much since it is more economical to try the old, successful ways before trying to find new ways of looking and thinking.

All things which are proved to be impossible must obviously rest on some assumptions, and when one or more of these assumptions are not true then the impossibility proof fails—but the expert seldom remembers to carefully inspect the assumptions before making their “impossible” statements. There is an old statement which covers this aspect of the expert. It goes as follows: “If an expert says something can be done he is probably correct, but if he says it is impossible then consider getting another opinion.”

...

...It appears most of the great innovations come from outside the field, and not from the insiders... examples occur in most fields of work, but the text books seldom, if ever, discuss this aspect.

...the expert faces the following dilemma. Outside the field there are a large number of genuine crackpots with their crazy ideas, but among them may also be the crackpot with the new, innovative idea which is going to triumph. What is a rational strategy for the expert to adopt? Most decide they will ignore, as best they can, all crackpots, thus ensuring they will not be part of the new paradigm, if and when it comes.

Those experts who do look for the possible innovative crackpot are likely to spend their lives in the futile pursuit of the elusive, rare crackpot with the right idea, the only idea which really matters in the long run. Obviously the strategy for you to adopt depends on how much you are willing to be merely one of those who served to advance things, vs. the desire to be one of the few who in the long run really matter. I cannot tell you which you should choose; that is your choice. But I do say you should be conscious of making the choice as you pursue your career. Do not just drift along; think of what you want to be and how to get there. Do not automatically reject every crazy idea, the moment you hear of it, especially when it comes from outside the official circle of the insiders—it may be the great new approach which will change the paradigm of the field! But also you cannot afford to pursue every “crackpot” idea you hear about. I have been talking about paradigms of Science, but so far as I know the same applies to most fields of human thought, though I have not investigated them closely. And it probably happens for about the same reasons; the insiders are too sure of themselves, have too much invested in the accepted approaches, and are plain mentally lazy. Think of the history of modern technology you know!

...

...In some respects the expert is the curse of our society with their assurance they know everything, and without the decent humility to consider they might be wrong. Where the question looms so important I suggested to you long ago to use in an argument, “What would you accept as evidence you are wrong?” Ask yourself regularly, “Why do I believe whatever I do”. Especially in the areas where you are so sure you know; the area of the paradigms of your field.

Hamming shares a number of stories from the history of science to support his claims. He also says he has more stories which he didn't include in the chapter, and that he looked for stories which went against his position too.

A couple takeaways:

  • Survivorship bias regarding stories of successful contrarians - most apparent crackpots actually are crackpots.

  • Paradigm shifts - if an apparent crackpot is not actually a crackpot, their idea has the potential to be extremely important. So shutting down all the apparent crackpots could have quite a high cost even if most are full of nonsense. As Jerome Friedman put it regarding the invention of bagging (coincidentally mentioned in the main post):

The first time I saw this-- when would that have been, maybe the mid '90s-- I knew a lot about the bootstrap. Actually, I was a student of Brad Efron, who invented the bootstrap. And Brad and I wrote a book together on the bootstrap in the early '90s. And then when I saw the bag idea from Leo, I thought this looks really crazy. Usually the bootstrap is used to get the idea of standard errors or bias, but Leo wants to use bootstrap to produce a whole bunch of trees and to average them, which sounded really crazy to me. And it was a reminder to me that you see an idea that looks really crazy, it's got a reasonable chance of actually being really good. If things look very familiar, they're not likely to be big steps forward. This was a big step forward, and took me and others a long time to realize that.

However, even if one accepts the premise that apparent crackpots deliver surprisingly high expected value, it's still not obvious how many we want on the Forum!

"Breakthroughs" feel like the wrong thing to hope for from posts written by non-experts. A lot of the LW posts that the community now seems to consider most valuable weren't "breakthroughs". They were more like explaining a thing, such that each individual fact in the explanation was already known, but the synthesis of them into a single coherent explanation that made sense either hadn't previously been done, or had been done only within the context of an academic field buried in inferential distance. Put another way, it seems like it's possible to write good popularizations of a topic without being intimately familiar with the existing literature, if it's the right kind of topic. Though I imagine this wouldn't be much comfort to someone who is pessimistic about the epistemic value of popularizations in general.

The Huemer post kind of just felt like an argument for radical skepticism outside of one's own domain of narrow expertise, with everything that implies.

Ah, but should you familiarize yourself with the literature on familiarizing yourself with the literature before writing an EA Forum post?

Clever :)

However, I'm not sure that post follows its own advice, as it appears to be essentially a collection of anecdotes. And it's possible to marshal anecdotes on both sides, e.g. here is Claude Shannon's take:

...very frequently someone who is quite green to a problem will sometimes come in and look at it and find the solution like that, while you have been laboring for months over it. You’ve got set into some ruts here of mental thinking and someone else comes in and sees it from a fresh viewpoint.

[Edit: I just read that Shannon and Hamming, another person I cited in this thread, apparently shared an office at Bell Labs, so their opinions may not be 100% independent pieces of evidence. They also researched similar topics.]

It seems clear to me that epistemic-status disclaimers don't work for the purpose of mitigating the negative externalities of people saying wrong things, especially wrong things in domains where people naturally tend towards overconfidence (I have in mind anything that has political implications, broadly construed). This follows straightforwardly from the phenomenon of source amnesia, and anecdotally, there doesn't seem to be much correlation between how much, say, Scott Alexander (whom I'm using here because his blog is widely read) hedges in the disclaimer of any given post and how widely that post winds up being cited later on.

Interesting thought, upvoted!

Is there particular evidence for source amnesia you have in mind? The abstract for the first Wikipedia citation says:

Experiment 2 demonstrated that when normal subjects' level of item recall was equivalent to that of amnesics, they exhibited significantly less source amnesia: Normals rarely failed to recollect that a retrieved item derived from either of the two sources, although they often forgot which of the two experimenters was the correct source. The results are discussed in terms of their implications for theories of normal and abnormal memory.

So I guess the question is whether the epistemic status disclaimer falls into the category of source info that people will remember ("an experimenter told me X") or source info that people often forget ("Experimenter A told me X"). (Or whether it even makes sense to analyze epistemic status in the paradigm of source info at all--for example, including an epistemic status could cause readers to think "OK, these are just ideas to play with, not solid facts" when they read the post, and have the memory encoded that way, even if they aren't able to explicitly recall a post's epistemic status. And this might hold true regardless of how widely a post is shared. Like, for all we know, certain posts get shared more because people like playing with new ideas more than they like reading established facts, but they're pretty good at knowing that playing with new ideas is what they're doing.)

I think if you fully buy into the source amnesia idea, that could be considered an argument for posting anything to the EA Forum which is above average quality relative to a typical EA information diet for that topic area--if you really believe this source amnesia thing, people end up taking Facebook posts just as seriously as papers they read on Google Scholar.

Epistemic status: During my psychology undergrad, I did a decent amount of reading on relevant topics, in particular under the broad label of the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017).

From this paper's abstract:

Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning--giving detailed information about the continued influence effect (CIE)--succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning--reminding people that facts are not always properly checked before information is disseminated--was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether. (emphasis added)

This seems to me to suggest some value in including "epistemic status" messages up front, but that doing so doesn't make it totally "safe" to make posts before having familiarised oneself with the literature and checked one's claims.

From memory, this paper reviews research on CIE, and I perceived it to be high-quality and a good intro to the topic.

Here are a couple of other seemingly relevant quotes from papers I read back then:

  • "retractions [of misinformation] are less effective if the misinformation is congruent with a person’s relevant attitudes, in which case the retractions can even backfire [i.e., increase belief in the misinformation]." (source) (see also this source)
  • "we randomly assigned 320 undergraduate participants to read a news article presenting either claims both for/against an autism-vaccine link [a "false balance"], link claims only, no-link claims only or non-health-related information. Participants who read the balanced article were less certain that vaccines are safe, more likely to believe experts were less certain that vaccines are safe and less likely to have their future children vaccinated. Results suggest that balancing conflicting views of the autism-vaccine controversy may lead readers to erroneously infer the state of expert knowledge regarding vaccine safety and negatively impact vaccine intentions." (emphasis added) (source)
    • This seems relevant to norms around "steelmanning" and explaining reasons why one's own view may be inaccurate. Those overall seem like very good norms to me, especially given that EAs typically write about issues where there truly is far less consensus than there is around things like the autism-vaccine "controversy" or climate change. But it does seem those norms could perhaps lead to overweighting of the counterarguments when they're actually very weak, perhaps especially when communicating to wider publics who might read and consider posts less carefully than self-identifying EAs/rationalists would. But that's all my own speculative generalisation of the findings on "falsely balanced" coverage.

I've been considering brushing up on this literature to write a post for the forum on how to balance risks of spreading misinformation/flawed ideas with norms among EAs and rationalists around things like just honestly contributing your views/data points to the general pool and trusting people to update on them only to the appropriate degree. Reactions to this comment will inform whether I decide investing time into that would be worthwhile.

Yeah, I should have known I'd get called out for not citing any sources. I'm honestly not sure I'd particularly believe most studies on this no matter what side they came out on; too many ways they could fail to generalize. I am pretty sure I've seen LW and SSC posts get cited as more authoritative than their epistemic-status disclaimers suggested, and that's most of why I believe this; generalizability isn't a concern here since we're talking about basically the same context. Ironically, though, I can't remember which posts. I'll keep looking for examples.

Another thought is that even if the original post had a weak epistemic status, if the original post becomes popular and gets the chance to receive widespread scrutiny, which it survives, it could be reasonable to believe its "de facto" epistemic status is higher than what's posted at the top. But yes, I guess in that case there's the risk that none of the people who scrutinized it had familiarity with relevant literature that contradicted the post.

Maybe the solution is to hire someone to do lit reviews to carefully examine posts with epistemic status disclaimers that nonetheless became popular and seem decision relevant.

If someone wants to toss off a quick idea with low confidence, it doesn't seem too important to dig deep into literature; anyone who wants to take the idea more seriously can do related research themselves and comment with their results. (Of course, better literature than no literature, but not having that background doesn't seem so bad.)

On the other hand, if someone wants to sink many hours into writing a post about ideas in which they are confident, it seems like a very good idea to be familiar with extant literature.

In particular, if you are trying to argue against expert consensus, or take a firm stance on a controversial issue, you should read very closely about the ideas you want to criticize, and perhaps even seek out an expert who disagrees with you to see how they think. Some of the lowest-value posts I see (all over the internet, not just on the Forum) are those which present a viewpoint along the lines of "experts are generally wrong, I've discovered/uncovered the truth!" but don't seriously engage with why experts believe what they believe.

Readers interested in this post may also like this post on epistemic modesty.

More thoughts re: the wisdom of the crowd: I suppose it works best when each crowd member is in some sense an "unbiased estimator" of the quantity to be estimated. For example, suppose we ask a crowd to estimate the weight of a large object, but only a few "experts" in the crowd know that the object is hollow inside. In this case, the estimate of a randomly chosen expert could beat the average estimate of the rest of the crowd (see the toy simulation below). I'm not sure how to translate this into a more general-purpose recommendation though.
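As a toy illustration of that point (my own sketch, not from the thread; all numbers are arbitrary): if the whole crowd shares a systematic bias, say not knowing the object is hollow, then averaging their guesses reduces noise but leaves the bias intact, so a single unbiased expert can do better.

```python
# My own toy simulation (not from the thread): a crowd with a shared bias vs.
# one unbiased expert. Averaging removes noise but not the shared bias.
import numpy as np

rng = np.random.default_rng(0)
true_weight = 100.0  # the object is hollow, so it's lighter than it looks

# 1,000 crowd members who assume the object is solid: biased high, noisy guesses.
crowd_guesses = rng.normal(loc=150.0, scale=30.0, size=1000)

# One expert who knows it's hollow: unbiased, but individually just as noisy.
expert_guess = rng.normal(loc=true_weight, scale=30.0)

print("error of crowd average:", abs(crowd_guesses.mean() - true_weight))  # stays near the 50 kg bias
print("error of single expert:", abs(expert_guess - true_weight))          # just sampling noise, usually smaller
```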

My guess is that reading a bunch of EA posts is not the thing you really care about if, say, what you care about is people engaging fruitfully on EA topics with people already in the EA movement.

By way of comparison, over on LW I have the impression (that is, I think I have seen this pattern but don't want to go to the trouble of digging up example links) that there are folks trying to engage on the site who claim to have read large chunks of the Sequences but also produce low quality content, and then there are also people who haven't read a lot of the literature who manage to write things that engage well with the site or do well engaging in rationalist discussions in person.

Reading background literature seems like one way that sometimes works to make a person into the kind of person who can engage fruitfully with a community, but I don't think it always works and it's not the thing itself, hence why I think you see such differing views when you look for related thinking on the topic.

I didn't necessarily take "engage with the literature" to refer to reading previous EA posts. That would be helpful in many cases, but doesn't seem realistic until the Forum has a working search engine. However, I would like to see more people who write posts on topics like political science, computer science, international aid, or philosophy do a quick Google scholar search before posting their ideas.

site:forum.effectivealtruism.org on Google has been working OK for me.
