Caveats: This is mainly a linkpost; the linked posts make the arguments more carefully. Some content comes from my comments on other posts.

I realize I may need to make a stronger case for 'why we should care about academic research and bringing academic feedback and credibility to EA research'. If this doesn't seem apparent, you might consider the below as "to the extent that EA-aligned researchers are seeking this, here's a proposal for how to do it better".

Update Feb/March 2022: LTFF funding received, progress being reported in the collaborative gitbook space -- I will make an updated backlink post soon.

We can help 'slay the journals', make research better, and nudge academics towards considering EA-relevant issues

Lauren's post on bringing EA ideas into large research organizations reminded me that EA organizations and researchers can "make research better, and in doing so, bring academics into our fold". Our shared principles and values, and our lack of ties to traditional systems, can help us break the collective-action problems.

EA researchers and orgs need an alternative to traditional journals

In the Slaying the journals discussion I argue:

Global priorities and EA research organizations are looking for ‘feedback and quality control’, dissemination, and external credibility. We would gain substantial benefits from supporting, and working with [journal-independent peer-evaluation systems], rather than (only) submitting our work to traditional journals. We should also put some direct value on results of open science and open access, and the strong impact we may have in supporting this.

I am eager for us to take concrete steps towards an alternative to the traditional academic journal process. As I argue in the 'unjournal' link, the traditional model

  • lets publishers extract rents and makes research less accessible,
  • inhibits innovation and open science practices (especially dynamic docs),
  • (most substantially) leads to tremendous wasted effort and risk, as it encourages researchers to focus on gamesmanship and often requires us to submit papers to a long sequence of journals with all-or-nothing (0/1) outcomes.

"Plan of action", crucial steps

I set up a space where I propose a Plan of Action HERE in the Gitbook format. I would appreciate your feedback and suggestions.

I think the crucial steps are

  1. Set up an "experimental space", e.g., on PREreview, allowing us to include additional, more quantitative metrics (they have offered this as a possibility), and to focus on content and approaches relevant to EA and global priorities.

  2. Most crucially: get funding, support, and commitments (from GPI, RP, etc.)

  • ... for people to do reviewing, rating, and feedback activities in our space on PREreview
  • ... for 'editorial' people to oversee which research projects are relevant and to assign relevant reviewers

  3. Link arms with Cooper Smout and the "Free our Knowledge" pledges and initiatives like this one as much as possible. Note that this is very close to Cooper's mission, and he has time funded/allotted for this.

Asides and caveats

My 'rated list of tools and partners'

In this Airtable view I give my rough opinion about the value of existing outlets including innovative OA journals, places to host preprints and research projects, and, most importantly IMO, journal-independent peer review and rating tools.

Do we need an actual OA journal?

I don't think setting up an OA journal with an impact factor is necessary. I think "credible quantitative peer review" is enough, and in fact the best mode. But I am also supportive of open-access journals with good feedback/rating models like SciPost. It might be nice to have an EA-relevant place like this. Cooper Smout is more enthusiastic about the idea of starting best-practice OA journals that 'give every acceptable paper a rating' ... see our discussion after my post here.

I recognize that open-access/open science in some fields can raise X-risks

We give a rough outline of the arguments here. But I think it's pretty clear in most cases whether or not this is relevant.

To be a bit glib... Microbiology of diseases: Yes. General AI: Probably. Development Economics: No. Psychology: No.

This is a 'small but big' step

My proposal may not 'fix the biggest problems of research alignment and productivity' (see, e.g., discussions here and here), nor make a tremendous contribution to humanity.

But it would make research somewhat more efficient, transparent, and accessible. It would make the researchers' careers less stressful and less random. They would appreciate us for that.

And it would help EA-aligned researchers and organizations do better, more credible research.

Comments

I guess the main criticisms I've heard of this kind of proposal are:

a) It might cause EA research to be siloed

b) We might let standards slip when papers match our worldview, and produce low-quality research that harms our reputation

c) It'd be easier for people to just publish here rather than engaging in the hard work of publishing in normal journals which is important for building EA's credibility

(Belated) Update: ACX forwarded my grant application to the Long Term Future Fund, who are supporting it. The precise content and details of my grant application are embedded in the collaborative space HERE

Now to make something happen! I'm currently fleshing out the plan described in that application and proceeding on it. Next steps: getting feedback, building a founding committee, setting a concrete agenda of issues to resolve, soliciting interest.

I'll make a linkpost for this news soon.

Thanks for this David! As you know, I agree with you in seeing this as a big problem, so I am definitely keen to read more of your work in this space. I think there is a compelling case that innovation of academic processes is a pressing problem. It may even be a candidate for a cause area.

Thanks. Doing my best to keep this moving forward but I'm a bit constrained as it's not part of my 'main job'. So I'm not sure how actively I'll be updating this, but if I do I'll loop you in.

I really want to be part of a team that is 'just doing this in some reasonable way' and moving things forward. I don't think it has to be very complicated. Essentially, we commit to doing journal-independent review and rating, we get some funding to pay for editorial/reviewer time, and we give it a try, on existing platforms like OSF + PREreview.

Perhaps someone needs to 'take the lead' on this? I'd love to but not sure I have the bandwidth. But maybe I could help make the case to a funder to get this going?