
As a community, we should think more about how to create and improve our collective epistemic institutions. By that, I mean the formalized ways of creating and organizing knowledge in the community, beyond individual blogs and organizations. Examples include online platforms like the EA Forum and Metaculus, events like EA Global and the Leaders Forum, and surveys like the EA Survey and the survey at the Leaders Forum. This strikes me as a neglected form of community-building that might be particularly high-leverage.

The case for building more and better epistemic institutions

Epistemic progress is crucial for the success of this community.

Effective altruism is about finding out how to do the most good and then doing it. For that, epistemic progress is important. Will MacAskill has even referred to effective altruism as a “research project.” Since people in this community have substantially changed their views about how to do the most good over the last ten years, we should expect that we’re still wrong about many things.

Some institutions facilitate or significantly accelerate epistemic progress.

People in this community are probably more aware of the research showing this than most. Ironically, we even recommend working on improving the decision-making of other organizations and communities. Aggregative forecasting is talked about most often, and it seems to have solid evidence behind it. Still, it has limitations: for instance, it cannot help us with conceptual work, with improving our reasoning and arguments directly, or with inherently vague concepts. There is some evidence on other instruments like certain forms of expert elicitation or structured analytic techniques (e.g., devil’s advocate), but the evidence base seems less sound. It might still be worth experimenting with them. Peer review seems to be another valuable institution facilitating epistemic progress. I’m not sure if this has ever been investigated properly, but it has a lot of prima facie plausibility to it.
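To make the aggregation idea concrete, here is a minimal sketch of one common pooling rule, the geometric mean of odds, applied to hypothetical forecasts. The rule and the numbers are illustrative only, not a claim about how Metaculus or any particular platform actually aggregates.

```python
import math

def pool_forecasts(probs):
    """Pool binary-event forecasts via the geometric mean of odds,
    a simple rule that often beats averaging the raw probabilities."""
    odds = [p / (1 - p) for p in probs]
    mean_log_odds = sum(math.log(o) for o in odds) / len(odds)
    pooled_odds = math.exp(mean_log_odds)
    return pooled_odds / (1 + pooled_odds)

# Hypothetical forecasts from five community members on one binary question
forecasts = [0.30, 0.45, 0.55, 0.60, 0.70]
print(round(pool_forecasts(forecasts), 2))  # ~0.52
```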

I don’t want to argue that we already know all the institutions that facilitate epistemic progress but there are at least some that do. If we think this is sufficiently important and there are more such institutions to be designed, experimenting and expanding the research base might be among the most important things we could do.

We are not close to the perfect institutional setup.

I don’t want to overstate the case. We have already built a number of great institutions in this regard, probably better than those of most other communities. Again, forecasting has probably seen the most attention (e.g., Metaculus, Foretold). The other examples I mentioned at the top, however, are also important, and many have improved over the last few years.

Still, I’m confident we can do better. Starting from the evidence base I sketched out above, we might start experimenting with the following institutions:

  • Institutionalizing devil’s advocates: So far, we have had to rely on the initiative and courage of individuals to come forward with criticism of cause areas or certain paradigms within them (e.g., here, here). Perhaps there are ways to incentivize or institutionalize such work even more or even earlier. For instance, we could set up prizes for the best critique of apparently common assumptions or priorities.
  • Expert surveys/elicitation: Grace et al. (2017) did one for AI timelines. The Leaders Forum survey is focused on EA-related questions. If possible, we could experiment with validating the participants or systematizing participant selection in other ways. We could also just explore many more questions this way in order to get a sense of what the most knowledgeable people in a particular cause believe.
  • Peer review: We could simply subject more of our ideas to peer review. The fact that the Global Priorities Institute is doing so is a great step in my opinion. We could also experiment with peer review internal to the community. In addition to regular posts and shortform posts on the EA Forum, we could introduce a research category where posts have to pass the review of people in the field of that particular post (Saulius recently suggested a change along these lines). For what it’s worth, the voting system captures some of the value of this already.

Below I sketch some more ideas for epistemic institutions, which are admittedly more speculative since they have not been investigated as rigorously:

  • Institutionalized adversarial collaborations: Adversarial collaborations are collaborations between people with opposing viewpoints. Scott Alexander has already experimented with a format that might also work for this community specifically (2019, 2018).
  • Literature reviews: It’s hard to keep track of all the advances in a particular field. Literature reviews could address this. The annual AI Alignment Literature Review and Charity Comparison is a good example, and judging from the response to this post every year, we would probably benefit from such work in other areas as well. The LessWrong Review shows how this could work for effective altruism as a field.
  • IPCC-analogues for cause areas: Reports on the scale of the IPCC reports are not feasible or desirable at this point. It could still be very important that we keep track of the state of knowledge in a particular field. What do the experts believe? What is that based on? Where are we most uncertain?
  • Effective altruism wiki: Intuitively, this makes a lot of sense as a means of organizing knowledge of a particular community. Also, if the US Intelligence Community is doing it, it has to be good. I know that there have been attempts at this (e.g., arbital, priority.wiki, EAWiki). Unfortunately, these didn’t catch on as much as would be necessary to create a lot of value. Perhaps there are still ways of pulling this off though. See here and here for recent discussions.

Some of these probably work better for epistemic progress in particular fields or causes. Others work better for organizing or advancing knowledge on global priorities.

We can build or improve such institutions.

This will depend a lot on the specific institution. The fact that we have a running forecasting platform, global conferences, a forum with prizes and an intricate voting system, and all of the other things I listed makes me hopeful. Not everything will work out, but this community seems generally capable of doing such things.

There are still a number of problems we need to overcome. Some will depend on the specific institution, but we can also make some general observations:

  • Some institutions require the time and effort of experts in a particular field. Participating in such institutions might not be the best use of their time. We could find ways of minimizing the needed effort, offer to compensate them, or find other ways of making it worth it for them.
  • As a community, we might suffer from the diffusion of responsibility or authority. Nobody feels called upon or vested with the required authority to set up such institutions. I am not sure to what extent this is the case. Incubators like Charity Entrepreneurship might be able to help here. CEA could also take on an even more active role in shaping such institutions.
  • There might be coordination problems around platforms such as a wiki. It’s only worth it for an individual to participate if enough other people participate. Prizes, participation by respected members, and consistently making the case for the institution might help here.

How important is this compared to other things we could be doing?

Building such institutions is a form of community-building. Arguably, this is one of the most important ways of making a difference since it offers a lot of leverage. It came second in the Leaders Forum survey. It is not the only form of community-building. How does it compare to other things in this area? Below I sketch a few considerations.

Growth

The most common form of community-building is growing the size of the community. Improving institutions strikes me as more neglected but perhaps less tractable. The importance of both depends on the size of the community. On the one hand, coordinating around and debating the merits of such institutions will only become harder with increasing size. Since it’s plausible that they also prevent the drift of the community, they might be especially important to set up early. On the other hand, institutions might only be feasible once the community reaches a certain size. Before that point, they will not be very efficient. For instance, a forecasting platform or wiki with five people does not add a lot of value. Similarly, there might not be enough experts to warrant institutions like literature reviews. Overall, I lean toward thinking additional work on institutions is more valuable at the current margin.

Epistemic norms and folkways

Norms and folkways are less formalized ways of doing things. The difference from institutions is one of degree, but it is still meaningful: applauding others for posting criticism or making probabilistic estimates are expressions of norms or folkways; prizes for in-depth critiques and forecasting platforms are institutions. I find it really hard to compare these since norms and folkways are pretty fuzzy and I’m not sure what dedicated work on them would look like. The most insightful thing I can say is the following: since institutions are less malleable than norms, you only want to set them up once you have become sufficiently certain that they are a good idea. This will differ from institution to institution and speaks in favor of experimentation.

Non-epistemic institutions

These might be institutions to improve preference aggregation (i.e., voting), community retention and coherence, and so on. Since this is a very broad basket of things, making a comparison is difficult. I would definitely welcome more people thinking about this.

Conclusion

Overall, this type of work strikes me as a valuable form of community-building that we currently underinvest in, despite quite a few resources going into both community-building more generally as well as the cause of improving institutional decision-making. It would be great if these two groups could join forces more.

Acknowledgements

Thanks to Tobias Baumann and Jesse Clifton for comments on an earlier draft of this post.

Comments

I really like the general class of improving community epistemics :)

That being said, I feel pretty pessimistic about having dedicated "community builders" come in to create good institutions that would then improve the epistemics of the field: in my experience, most such attempts fail, because they don't actually solve a problem in a way that works for the people in the field (and in addition, they "poison the well", in that it makes it harder for someone else to build an actually-functioning version of the solution, because everyone in the field now expects it to fail and so doesn't buy in to it).

I feel much better about people within the field figuring out ways to improve the epistemics of the community they're in, trialing them out themselves, and if they seem to work well only then attempting to formalize them into an institution.

Take me as an example. I've done a lot of work that could be characterized as "trying to improve the epistemics of a community", such as:

The first five couldn't have been done by a person without the relevant expertise (in AI alignment for the first four, and in EA group organizing for the fifth). If a dedicated community builder were trying to build institutions that would lead to any of these six things happening, I think they might have succeeded, but it probably would have taken multiple years, as opposed to the ~month each it took me. (Here I'm assuming that an institution is "built" once it operates through the effort of people within the field, with no or very little ongoing effort from the person who started the institution.) It's just quite hard to build institutions for a field without significant buy-in from people in the field, and creating that buy-in is hard.

I think people who find the general approach in this post interesting should probably be becoming very knowledgeable about a particular field (both the technical contents of the field, as well as the landscape of people who work on it), and then trying to improve the field from within.

It's also of course fine to think of ideas for better institutions and pitch them to people in the field; what I want to avoid is coming up with a clever idea and then trying to cause it to exist without already having a lot of buy-in from people in the field.

I agree with all of what you say here. Building things for others can often go badly wrong. Thanks for sharing this perspective!

Roam Research is

> starting a fellowship program where we are giving grants to researchers to explore the space of Tools for Thought, Collective Intelligence, Augmenting The Human Intellect.

They recently raised $9M at a $200M seed valuation and previously received two grants from the EA LTFF.

Thanks, interesting.

1) One distinction one might want to make is between better versions of existing institutions and truly novel epistemic institutions. E.g. the Global Priorities Institute and the Future of Humanity Institute are examples of the former - a university research institute isn't a novel institution. Other examples could be better expert surveys (those already exist), better data presentation, etc. My sense is that some people who think about better institutions are too focused on entirely new institutions, while neglecting better versions of existing institutions. Building something entirely novel is often very hard, whereas it's easier to build a new version of an existing institution.

2) One fallacy people who design new institutions often commit is overestimating the amount of work people want to put into their schemes. E.g. suggested new institutions like post-publication peer review and some forms of prediction institutions suffer from the fact that people don't want to invest the time in them that they need. I think that's a key consideration that's often forgotten. This may be a particular problem for certain complex decentralised institutions, which depend on freely operating individuals (i.e. people you don't employ full-time) investing time in your institution, either voluntarily or for profit. Such decentralised institutions can be theoretically attractive, and I think there is a risk that people get nerd-sniped into putting more time into theorising about them than they're worth. By contrast, I'm generally more positive about professional institutions that employ people full-time (e.g. university departments). But obviously each suggestion should be evaluated on its own merits.

3) With regard to "norms and folkways", there is a discussion in economics and the other social sciences about the relative importance of "culture" and (formal) institutions for economic growth and other desirable developments. My view is that culture and norms are often under-rated relative to formal institutions. The EA community has developed a set of epistemic norms and an epistemic culture which is by and large pretty good. In fact, we arguably haven't developed many formal institutions that are as valuable as those norms and that culture. That seems to me a reason to think more about how to foster better norms and a better culture, both within the EA community and outside it.

Re: #2, I've argued for minimal institutions - relying on markets or existing institutions rather than building new ones where possible.

For instance, instead of setting up a new organization to fund a certain type of prize, see if you can pay an insurance company to "insure" the risk of someone winning, as determined by some criteria, and then have them manage the financials. Or, as I'm looking at now for incentivizing vaccine production, offer cheap financing for companies instead of running a new program to choose and order vaccines to get companies to produce them.
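As a rough illustration of the insurance idea above, the up-front cost of such a prize would be roughly the expected payout plus the insurer's loading. All figures in this sketch are invented for the example.

```python
def expected_premium(prize, win_probability, loading=0.2):
    """Rough cost of insuring a prize: expected payout plus the insurer's
    loading (overhead and margin). Illustrative only."""
    return prize * win_probability * (1 + loading)

# Hypothetical: a $100k prize with a 30% chance that someone meets the criteria
print(expected_premium(100_000, 0.30))  # 36000.0 paid up front instead of reserving 100k
```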

Institutions for exchanging information (especially research) also seem helpful to me. For instance, many researchers circulate their work in semi-private Google Docs but only publish some of their work academically or on the Forum. (Sometimes this is because of information hazards, but only rarely.) This makes it harder for new or less well-networked researchers to get up to speed with existing work. It also doesn't scale well as the community grows. It would be great if there were ways to make content public more easily. Wei Dai made a suggestion in this direction, and I bet there are further ways of making this happen.

Explicitly defined publication norms could also be helpful. It's often unclear how one should deal with information hazards, which seems to cause people to err on the side of not publishing their work. Instead, one could set up things like "info hazard peer review" or agree more explicitly on rules in the direction of "for issues around X and Y, or other potential info hazards, ask at least five peers from different orgs on whether to publish" (of course, this needs some more work).

This is a great post and I, like @rohinmshah, feel that simply the introduction of this general class of discussion is of value to the community.

With respect to expert surveys, I am somewhat surprised that there isn't someone in the EA community already pursuing this avenue in earnest. I think that it's firmly within the wheelhouse of the community's larger knowledge-building project to conduct something like the IGM experts panel across a variety of fields. I think, first, that this sort of thing is direly needed in the world at large and could have considerable direct positive effects, but secondly that it could have a number of virtues for the EA community:

  • Improve efficiency of additional research: Knowing what the expert consensus is on a given topic will save some nontrivial percentage of time when starting a literature review, and help researchers contextualize papers that they find over the course of the review. Expert consensus is a good starting place for a lit review, and surveys will save time and reduce uncertainty in that phase.
  • Let EAs know where we stand relative to the expert consensus: when we explore topics like growth as a cause area, we need to be able to (1) have a quick reference to the expert consensus at vital pivots in a conversation (e.g. do structural adjustments work?) and (2) identify with certainty where EA views might depart from the consensus.
  • Provide a basis for argument to policymakers and philanthropists: Appeals to authority are powerful persuasive mechanisms outside the EA community. Being able to fall back on expert consensus on any range of issues can be a powerful obstacle or motivator, depending on the issue. Here's an example: governments around the world continue to locally relitigate conversations about the degree to which electronic voting is safe, desirable, secure, or feasible. Security researchers have a pretty solid consensus on these questions; that consensus should be available to these governments and to those of us who seek to influence them.
  • Demonstrate to those outside the community that EAs are directly linked to the mainstream research community: This is a legitimacy issue. Regardless of whether the EA community ends up being broader or narrower, we are often insisting to some degree on a new way of doing things, and we need to be able to demonstrate to newcomers and outsiders that we are not simply starting from scratch.
  • Establish continued relationships with experts across a variety of fields: Repeated deployment of these expert surveys affords opportunities for contact with experts who can be integrated into projects, sought for advice, or deployed (in the best case scenario) as voices on behalf of sensible policies or interventions.
  • Identify funding opportunities for further research or for novel epistemic avenues like the adversarial collaborations mentioned in the initial post: Expert surveys will reveal areas where there is no consensus. Although consensus can be and sometimes is wrong, areas where there is considerable disagreement seem like obvious avenues for further exploration. Where issues have a direct bearing on human wellbeing, uncovering a relative lack of conclusive research seems like a cause area in and of itself. (One toy way of scoring such disagreement is sketched after this list.)
  • Finally, the question-finding and -constructing process is itself an important activity that requires expert input. Identifying the key questions to ask experts is itself very important research, and can result in constructive engagements with experts and others.
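One hypothetical way to operationalize "areas where there is no consensus" from the list above is to score each survey question by the spread of expert answers, for example with normalized entropy over response categories. This is an illustrative metric and example question, not the IGM panel's actual methodology.

```python
import math
from collections import Counter

def disagreement_score(responses, categories=("agree", "uncertain", "disagree")):
    """Normalized entropy of responses: 0 = full consensus, 1 = maximal disagreement."""
    counts = Counter(responses)
    total = len(responses)
    entropy = -sum(
        (counts[c] / total) * math.log(counts[c] / total)
        for c in categories
        if counts[c] > 0
    )
    return entropy / math.log(len(categories))

# Hypothetical panel responses to "Do structural adjustments work?"
panel = ["agree", "agree", "uncertain", "disagree", "agree", "disagree"]
print(round(disagreement_score(panel), 2))  # ~0.92, i.e. little consensus here
```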

On expert surveys, I would personally like to see more institutionalized surveys of key considerations like these: https://www.stafforini.com/blog/what_i_believe/ One interesting aspect could be to see in which areas agreement / disagreement is largest.

This is great! I find this extremely important, and I agree that we have a lot of room to improve. Thank you for the clear explanation and the great suggestions.

Further ideas:

  1. A global research agenda / roadmap.
  2. Bounties for specific requests.
    1. Perhaps someone can set up a (capped) 1:1 matching for individual requesters.
    2. Better, give established researchers or organizations credit to use for their requests.
  3. A peer review mechanism in the forum (a toy sketch of the badge states appears after this list). A concrete suggestion:
    1. Users submitting a "research post" can request peer review, which is displayed in the post [a big blue "waiting for reviewers"].
    2. Reviewers volunteer to review and present their qualifications (and a statement of no conflict of interest) to a dedicated board consisting of "EA experts", which can approve them to review.
    3. There are strict-ish guidelines on what is expected from a good post, and a guide for reviewers.
    4. The reviewers submit their review anonymously and publicly.
    5. They can accept the post [a big green "peer reviewed"].
    6. They can also ask to fix some errors and improve clarity [a big yellow "in revision"].
    7. They can decide that it is just not good enough or irrelevant [a big red "rejected"].
  4. (The above is problematic in several ways. The reviewer is not randomized, so there is inherent bias. The incentive for reviewing is not clear. It can be tough to be rejected.)
  5. Better norms for linking to previous research and asking for it. Better norms for suitable exposition. These norms don't have to be strict on "non-research" posts.
  6. The forum itself can contain many further innovations (Good luck, JP!):
    1. Polls and embedded prediction tools.
    2. Community editable wiki posts.
    3. Suggested templates.
    4. Automated suggestion for related posts while editing (like in stackexchange).
    5. An EA tag on lesswrong/alignment forum (or vice versa) with which posts can be displayed on both sites (like the LW/AF workflow).
    6. A mechanism for highlighting and commenting like in Medium. (Not sure I like it)
    7. Suggestions that appear (only) to the editor like in google docs.
    8. There is some great stuff already on its way too :)
  7. Regarding a wiki, Viktor Petukhov wrote a post about it with some discussion following it on the post and in private communication.
  8. More research mentorships. Better support for researchers at the start of their path.
  9. Better expository and introductory materials, and guides to the literature.
  10. Better norms and infrastructure for partnering.
  11. A supportive infrastructure to coordinate projects globally, between communities. This can make it easier to set up large-scale, volunteer-led projects for better epistemic institutions. Local communities matter here mainly as a vetting mechanism.
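To make suggestion 3 above a bit more concrete, here is a toy model of the proposed badge states as a small state machine. The state names come from the list, but the transition rules are my own assumptions rather than an existing Forum feature.

```python
from enum import Enum

class ReviewStatus(Enum):
    WAITING_FOR_REVIEWERS = "waiting for reviewers"  # big blue badge
    IN_REVISION = "in revision"                      # big yellow badge
    PEER_REVIEWED = "peer reviewed"                  # big green badge
    REJECTED = "rejected"                            # big red badge

# Allowed transitions in this toy model; a real feature would also need rules
# for reviewer approval, anonymity, and appeals.
TRANSITIONS = {
    ReviewStatus.WAITING_FOR_REVIEWERS: {
        ReviewStatus.PEER_REVIEWED, ReviewStatus.IN_REVISION, ReviewStatus.REJECTED
    },
    ReviewStatus.IN_REVISION: {ReviewStatus.PEER_REVIEWED, ReviewStatus.REJECTED},
}

def advance(current, decision):
    """Apply a reviewer decision if it is a legal transition from the current status."""
    if decision not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current.value!r} to {decision.value!r}")
    return decision

status = ReviewStatus.WAITING_FOR_REVIEWERS
status = advance(status, ReviewStatus.IN_REVISION)    # reviewer asks for fixes
status = advance(status, ReviewStatus.PEER_REVIEWED)  # accepted after revision
print(status.value)  # "peer reviewed"
```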

On 4., in addition to the incentive problem, there's also the problem of matching the right reviewer to each post such that the counterfactual value generated is high enough, which will depend greatly on the post and the reviewer. I think this is harder than the incentives problem. Downsides of not solving the matching problem could be people spending review time that would have been better spent elsewhere, or promising posts that need reviewing getting reviewed by whoever is most incentivized or has time on their hands; people then think the post has already been reviewed, so the price of a second review goes up.

> Effective altruism wiki: Intuitively, this makes a lot of sense as a means of organizing knowledge of a particular community. Also, if the US Intelligence Community is doing it, it has to be good. I know that there have been attempts at this (e.g., arbital, priority.wiki, EAWiki). Unfortunately, these didn’t catch on as much as would be necessary to create a lot of value. Perhaps there are still ways of pulling this off though. See here and here for recent discussions.

In addition to the wikis, there are also EA Concepts and the LessWrong Wiki, which have similar roles.

Two hypotheses for why these encyclopedias didn't catch on so far:

  • Lack of coordination: Existing projects seemed to focus on content but not quality standards, editing/moderation, etc. Projects weren't maintained long-term. It probably wasn't sufficiently clear how new volunteers could best contribute. Resources were split between multiple projects.
  • Perhaps EA is still too small. Most communities with successful wikis have fairly large communities.

Personally, I'd be very excited about a better-coordinated and better-edited EA concepts/wiki. (I know of someone who is planning to work on this.)

Hello! I started eawiki.org just a few weeks ago to try to reinvigorate the concept - will do a proper launch sometime soon.

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

This post contains a well-structured argument for addressing a problem that could be dragging down the overall impact of EA work across many different areas. You could summarize the main point in a way that makes it seem obvious (“EA should try to figure things out in a better way than it does now”), but in doing so, you’d be ignoring the details that make the post great:

  • Pointing out examples of things the community has done that pushed EA in the right direction (e.g. influential criticism, expert surveys) in order to show that we could do even more work along the same lines.
  • Comparing one reasonable proposal (better institutions) to other reasonable proposals (better norms, other types of institution, focusing on growth over institution-building) without arguing too vociferously in favor of the first proposal. I liked the language “I sketch a few considerations,” where some posts might have used “I show how X is superior to Y and Z.”

If you read this post, I also strongly recommend reading the comments! (This applies to the post above as well.)

> Building such institutions is a form of community-building. Arguably, this is one of the most important ways of making a difference since it offers a lot of leverage. It came second in the Leaders Forum survey.

(Not very important.) Hm, which result of the survey do you mean? I can't remember being given that option and can't find it immediately in that post.

I was referring to the option "Building the EA and related communities." If building such institutions is a form of community-building, then this gives some indication of its importance compared to other areas. Now, it might be the case that respondents didn't have this in mind when answering and if they did, they would give it a much lower score.

Makes sense, thanks!

Data point: When reading this post, I interpreted the "It" in "It came second in the Leaders Forum survey" as referring to "Building such institutions" - i.e., more and better epistemic institutions - rather than to "community-building". Which seemed surprising to me, until I checked the survey.

(Btw, interesting post, thanks for writing it.)

Related: the term I've been using lately in thinking about this sort of thing is epistemic public goods, which I think was prompted when I saw Julia Galef tweet about the "epistemic commons".

> There might be coordination problems around platforms such as a wiki. It’s only worth it for an individual to participate if enough other people participate. Prizes, participation by respected members, and consistently making the case for the institution might help here.

One other idea I'm aware of for solving these coordination problems is this Facebook group "for proposing crowdfunding or coordinated actions" by rationalists/EAs.

E.g.:

> Cause Prioritization Wiki
> Goal: Improve https://causeprioritization.org/
> Commitment: Make X major edits within 3 months after the threshold (i.e. creating or significantly improving a page's section)
> Alternative: You can pledge money that will be used to pay editors. I'll assume a rough conversion rate of 15 USD per major edit (I would take care of dispatching the work).
> Threshold: 100 major edits (equivalent) committed
> Extended threshold: I will check with Issa Rice if I can move the wiki to the MediaWiki platform if at least 200 major edits get committed.
> Why it needs coordination: It will only become a Schelling point to read and document information if a sufficient amount of people are already using the platform as such.

I'm not necessarily saying that that's a good solution - I haven't really looked into or thought about it - just sharing that it exists. (I think I'd also be concerned about choosing one wiki to coordinate on without first having a big discussion about which wiki it should be - but I haven't looked into the latest status of such discussions either.)
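For what it's worth, the threshold logic in the quoted pledge amounts to a simple assurance contract. Here is a minimal sketch of how the tallying might work; only the 15 USD conversion rate and the 100-edit threshold come from the pledge, and the pledge amounts below are invented.

```python
USD_PER_MAJOR_EDIT = 15  # conversion rate proposed in the pledge above

def edit_equivalents(edit_pledges, dollar_pledges):
    """Total committed 'major edits', counting money pledges at the proposed rate."""
    return sum(edit_pledges) + sum(dollar_pledges) / USD_PER_MAJOR_EDIT

def threshold_met(edit_pledges, dollar_pledges, threshold=100):
    """The coordinated action only goes ahead once enough edit-equivalents are pledged."""
    return edit_equivalents(edit_pledges, dollar_pledges) >= threshold

# Hypothetical pledges: four editors and two donors
edits = [20, 10, 35, 15]  # 80 edit-equivalents
dollars = [150, 300]      # 450 USD -> 30 edit-equivalents
print(edit_equivalents(edits, dollars))  # 110.0
print(threshold_met(edits, dollars))     # True: the 100-edit threshold is reached
```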
