
This was written in consultation with Dylan (JD, BCL), and after a few conversations with two legal scholars and others in this field. 

The Unjournal (unjournal.org, see our ‘in a nutshell’) focuses on impactful research in quantitative social science and economics – see a discussion of our focus in our Gitbook here; see our output at unjournal.pubpub.org.

We have not yet considered legal scholarship: our team is not familiar with it, our funding is limited, and we would need to adjust our evaluation model and approaches. But we think this could be a strong expansion opportunity. Indeed, even though we didn't ask for it, some people have suggested relevant work in this area. We assess this opportunity below and consider some ‘crux’ questions for its success or failure.

We're looking for lawyers, law students, researchers, and practitioners in relevant areas to help us explore, plan, and pilot an approach. 

Express your interest

The case to expand into legal scholarship

Legal research seems to have a concrete impact on global-priority issues, such as animal welfare and AI safety regulations. Legal scholarship doesn’t only inform prioritization and strategy; it directly influences how legislation is written and how courts make decisions.[1] Legal scholarship also shapes students' syllabi and reading, influencing their thinking and practice. Students who hear these ideas may become lawyers and argue those ideas in court (yielding court-made law). They may write essays or blog posts on these ideas that are read by people in power. They may become policy analysts and plant the seed of an idea in a lawmaker’s mind.[2]

Litigation and jurisprudence are adversarial processes that value authoritative expert opinions and logical counter-arguments. Judges read legal scholarship and cite it in their decisions. Rigorous evaluation and feedback from The Unjournal may strengthen and promote legal research in impactful areas, making this more likely.

Where legal research is cited, lawyers and judges can also use Unjournal evaluations to reinforce their case or demonstrate weaknesses on the other side. If The Unjournal's evaluations are visible, credible, and well-written, they may be directly taken up and cited in future decisions.

A 'gap in the market': lack of peer review

In North America, there is a lack of substantive expert review for even the most prestigious and actively cited research. In fact, the top-ranked law journals are not really peer-reviewed: they are run by law students who have only studied law for about two years and have little to no active research experience. There is some guidance from one or more faculty members, but students make most of the filtering decisions.

According to an Assistant Professor of Law, paraphrased:

There are basically two submission windows per year: February and August. You can submit to 50-100 journals at the same time through a general portal, with a small cost per submission. Papers are assigned to the submissions team at each journal (law students), which must work through large piles of papers. They may read the abstract, maybe the introduction. If they like it, they may read further.

Typically, you get an "offer" from a lower-ranked journal, which you can use to get higher-ranked journals to consider making you an offer. The "offer" is key: after you accept an offer from a journal, there may be some further suggestions, but these are generally optional; a paper that gets (and accepts) an offer will nearly always be published in that journal.

At some of the very top journals, once a paper is prioritized and close to getting an offer, the journal will send it out to experts in the relevant field before the offer is finalized. But this is not done consistently or transparently, and it's not clear how common it is. And most work is rejected (perhaps inappropriately) by the student editors before it even gets to this point.

In response to the criticisms of student-edited law reviews, some peer-reviewed legal journals have emerged. These are typically managed by faculty or professional organizations (e.g., Journal of Legal Studies or Law & Society Review). But these don’t seem to be perceived as the top-ranked/highest-status outlets.

This suggests that credible expert evaluation of legal research could provide a light in this relative darkness. If we can get actual legal scholars to publicly assess this research, this could provide a valuable benchmark, establishing a more informative, sophisticated, higher-prestige standard for judging legal scholarship and a more informative career metric. The research with the strongest public evaluations may be more heavily cited and used in legislation and case law.[3] By providing this outlet, we would also have a platform to nudge the field towards a greater focus on high-impact areas and approaches.[4]

Cruxes for this project to succeed 

There’s lots of legal scholarship that we suspect is high impact. Sometimes the scholarship is cited in a major court decision, and the way the justice cites it suggests it was a defining factor. For example, the right to privacy was first articulated in an 1890 article by Samuel Warren and Louis Brandeis. That article has since been cited many times in major decisions, such as Griswold v. Connecticut (1965), which recognized that states cannot make contraception illegal for married couples, since they have a right to marital privacy.

Some resources seem likely to point towards high-impact legal research: 

Here are some specific examples

Do relevant questions persist over time?

Our evaluation process and dissemination are somewhat slow relative to the needs in some areas (although they may be similar to the timeline for standard law reviews).[5] E.g., we may need about six months to prioritize a paper, find evaluators, receive and manage their evaluations, give the authors a chance to respond, and synthesize the results. Will this timeline be too long for our evaluation to still be influential?

Are there ‘big works of legal scholarship’ that have a persistent influence and a longer shelf-life of relevance?

Are there ‘things that can be evaluated meaningfully in a legible way’?

Legal scholarship is not generally assessed using real-world evidence, data, experimentation, statistics, or the scientific method (unlike much of the work The Unjournal covers). According to a law professor we spoke to, there may be less of a "ground truth" in legal scholarship than in economics or other areas. There are also different schools of legal thought (originalism, textualism, etc.), and we would need to take steps to ensure that our evaluations don't merely echo ideological disagreements.

But, we still suspect that meaningful evaluations are possible. 

To illustrate, law articles can be compared against their peers and against existing legislation. Since law is a relatively contained system, a meaningful evaluation looks at how well one author’s idea stands out from the crowd. When an idea stands out, one of three things is likely true:

  1. The idea rests on a mistaken premise. If so, and if most experts can readily see the error, then the piece should receive a poor evaluation;
  2. The idea is completely novel, so its evaluation depends on its logical consistency and robustness against ‘soft’ rules (e.g., culture or expectations); or,
  3. It calls for overturning previous ideas because something meaningful has changed in the real world (e.g., technological developments), so its evaluation depends on how well the author characterizes the issue and on the robustness of the solutions that go with it.

To give an example, Weil (2024) claims that some AI logically falls under the category of ‘abnormally dangerous’ tech, and that classifying it as such is desirable and logical. This is a claim about legal definitions and the logical consequences of legal precedents; evaluators can consider the correctness and consistency of each of these (points 1-2 above). It’s also an empirical claim about the potential harms from AI; evaluators can consider whether this is an accurate characterization, and whether the stated implications are reasonable.

The above is merely one perspective (Dylan's). There’s more to talk about here, and we would love to hear input (including through the CtA below).

Will people engage? 

Will legal scholars participate? Will they join our team? 

We need:  

  1. Expertise and credibility
  2. People to suggest work to evaluate and/or authors to submit their own research
  3. People to help us prioritize among these
  4. Legal scholars to do these evaluations (with compensation). Will law professors do these, either anonymously or signed? Note that they are used to having students do most of this work, so it may be a big ask, even if we offer our typical ~$450 compensation.
  5. (Ideally) authors to respond to the evaluations.  

To make this happen (paraphrasing the Asst. Prof. of Law), "people would have to be convinced that anyone cares." This professor suggested that it would make a big difference if we could convince top legal journals to consider our evaluations in their screening process (mentioned above) and to note this publicly.[6] Given the large volume of submitted work they need to consider, they may find our evaluations and suggestions helpful.

Can we gain credibility in this space?

Crucial: Finding a qualified, motivated legal scholar willing to provide expertise, credibility, and networks, and take on some leadership (with potential compensation/funding).

Call for participants and involvement

What we’re looking for

We want to (1) systematically source and prioritize research for impact and relevance for evaluation, and (2) have the research systematically evaluated for its credibility, usefulness, and general quality. We’re looking for people to help us consider and set up a process, an approach, and some criteria and metrics. Our current approach mainly focuses on ~empirical quantitative economics research; this will surely need some adjustments for the legal scholarship context.

Evaluating the expected impact of legal research may be challenging. The timeliness of the subject matter may make it easier to predict whether a judge or legislator will read a piece, but what they will do with it is harder to predict. The impact of past research gives us some ideas, but we need more.

So, we need help figuring out:

I. Where and how to identify relevant legal research to consider for evaluation;

II. How to prioritize this – i.e., for a given piece of legal research, how to assess its potential for impact so we know whether to prioritize it for evaluation;

III. How to find, choose, communicate with, reward, and manage the work of potential expert evaluators (aka ‘referees’);

IV. How to ask these evaluators to consider, discuss, and rate the research, e.g., on:

  1. Its overall credibility and quality, potentially including comparisons to existing work and existing measures like journal tiers
  2. Aspects of its quality, e.g.:
    1. Logical consistency, completeness, reasoning transparency
    2. Communication
    3. Understanding and incorporation of previous work
    4. Accurate and informed depiction of the real-world context and policy issues
    5. Adherence to law and doctrine

V. How to promote and communicate these evaluations to maximize their impact and strengthen our initiative.

Following up/CtA

If this is something that you think is important, and you have the bandwidth to contribute ~4 hours of your time to work with ~3 other colleagues on it in early 2025, then we encourage you to fill out this expression of interest form.

Express your interest

We’ll aim to get back to all selected candidates within a few weeks. We will be able to provide some compensation as an honorarium (but it will likely be modest, given our current funding constraints).

If you have any outstanding questions, feel free to reach out at contact@unjournal.org

  1. ^

    It also has an indirect impact, through the arguments of advocacy groups.

  2. ^

    As Lord Reed, a revered judge in the common law, once put it: "Keynes famously observed that practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Academic analysis of the kind carried out in this book has an influence on the development of the law whether its practitioners are consciously aware of it or not."

  3. ^

    As already noted, the evaluations themselves may also be cited.

  4. ^

     According to a law professor I (David Reinstein) spoke to at a recent EAG, legal scholarship underemphasizes practical scholarship in general. According to them...

     [Paraphrasing] The scholarship is dichotomized into case analysis vs. theoretical analysis (with some under-appreciated comparative law analysis). However, "the middle 'practical' scholarship isn't being written."

    There's a need for research prioritization in the legal scholarship and X-risk space; this is topical and timely. 

    I suspect this is probably also true in the animal-welfare law space.

  5. ^

    Although their submission-to-decision turnaround time may be faster, research is generally submitted in batches only twice per year.  

  6. ^

    While our ultimate aim is for evaluation to replace academic journals, this could be an important step in the right direction. 


Comments (2)



Update: currently exploring this within a small team. We're trying to gauge interest and capacity for this, as well as propose an approach.

Would still love to get more feedback from ~established legal scholars, lawyers who engage with research, and law students involved with legal journals. 

One question in particular: would legal scholars be interested in doing this public peer-review/rating of work in their area, with modest compensation (~$450), and what would motivate them?

If you fall into this category, you can register your interest and leave thoughts at bit.ly/UJlegalEOI (a quick survey) or DM me if you prefer.

Executive summary: The Unjournal is considering expanding into legal scholarship evaluation, as it could have significant impact by providing expert peer review in a field where top journals lack rigorous evaluation processes, particularly for research affecting global priorities like AI safety and animal welfare.

Key points:

  1. Legal research has direct impact on legislation, court decisions, and policy, but lacks rigorous peer review in top journals (which are student-edited).
  2. Key uncertainty: Whether meaningful evaluation is possible given legal scholarship's less empirical nature and different schools of thought.
  3. Success requires recruiting legal scholars for evaluation (challenging given current norms) and building credibility with top journals.
  4. Project needs help with: identifying relevant research, developing prioritization criteria, managing evaluators, and creating evaluation frameworks.
  5. Actionable next step: Seeking legal experts to contribute ~4 hours in early 2025 to help develop evaluation approach (compensation available).

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
