
This is the significantly improved second edition of CSS, again looking at the presidential primaries for the 2020 US election.

PDF report

Excel spreadsheet

Here I copy the introductory sections for convenience:

Preface

The Candidate Scoring System (CSS) is a method for selecting preferred candidates in elections. It is based on Effective Altruist (EA) ethics and methodology. Of course, opposing political positions are still valid in the EA movement and there is room for respectable disagreement. Other people in the EA movement may have different understandings of the factual impacts of various political actions, and they may have different values regarding the appropriate goals of government. But we approach the central, most important policy question – how to maximize global well-being – by gathering opinions and research from authorities in a wide range of domains, then modeling them together with our own careful judgment to fill in the gaps.

CSS1 was released on March 5, 2019, establishing basic policy preferences and providing tentative scoring of presidential candidates. For CSS2 we have deepened our analysis of policy questions, gathered more information about political candidates, expanded the number of candidates under consideration, added calculations of election probabilities and counterfactuals, and simplified the information into a single report with an accompanying Excel model.

This project is limited by the constraints of time and manpower against the vast breadth, depth and complexity of the problems that it tackles. Therefore, many arguments and evidence will be missing. This does not mean the project is necessarily wrong or biased; it just means we haven't yet included as much content and research as we would like. It is a work in progress and open to input from others. We are uncertain about much of this content, but we minimize hedging language for the sake of readability. If some relevant information is missing, please submit ideas and content to improve the next version – everything here is subject to revision and elaboration.

CSS is an independent volunteer project.

Summary for Voters and Activists

CSS2 makes the following recommendations:

· John Delaney should be supported if there are tractable opportunities to do so, particularly in Iowa.

· Cory Booker should be supported if Delaney’s candidacy is considered intractable.

· Potential Republican challengers to President Trump should be encouraged and supported if a real chance appears, especially John Kasich.

Our recommendations are based on estimates of the expected value of changing the outcomes of the primary races. We approach this question by first estimating the desirability of each candidate as a potential president, yielding presidency scores. We then factor in the nomination and election chances of all their competitors to produce nomination scores representing the difference in the expected election outcome when the candidate wins or loses in the primaries.
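To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. All numbers, candidate names, and the exact aggregation formula are invented for illustration; the report's actual model lives in the accompanying Excel spreadsheet.

```python
# Sketch of the CSS scoring idea (all numbers are made up; "presidency
# score" and "nomination score" follow the report's definitions, but the
# precise formula here is an illustrative assumption, not the real model).

# Hypothetical presidency scores (desirability as president) and
# general-election win probabilities for each primary candidate.
candidates = {
    "A": {"presidency": 70, "win_prob": 0.45},
    "B": {"presidency": 55, "win_prob": 0.50},
    "C": {"presidency": 40, "win_prob": 0.40},
}
# Score of the opposing party's nominee if this party's candidate loses.
opponent_score = 30

def expected_outcome(nominee):
    """Expected election outcome if `nominee` wins the primary."""
    c = candidates[nominee]
    return c["win_prob"] * c["presidency"] + (1 - c["win_prob"]) * opponent_score

def nomination_score(nominee, nomination_probs):
    """Difference between the expected outcome when `nominee` wins the
    primary and the probability-weighted outcome when a competitor wins."""
    others = {k: p for k, p in nomination_probs.items() if k != nominee}
    total = sum(others.values())
    counterfactual = sum(p / total * expected_outcome(k) for k, p in others.items())
    return expected_outcome(nominee) - counterfactual

# Hypothetical chances of each candidate winning the nomination.
nom_probs = {"A": 0.2, "B": 0.5, "C": 0.3}
scores = {name: nomination_score(name, nom_probs) for name in candidates}
```

With these invented numbers, candidate A ends up with a positive nomination score (the expected outcome improves if A rather than a competitor wins the primary) and candidate C a negative one, which is the kind of comparison the report's recommendations rest on.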

Comments (27)



Fantastic report! I love this type of content and can't wait to sink my teeth into it

Incredible report, bravo! Like probably anyone, I don't agree completely with the ratings, but the structure and research helped me think through my own priorities. I was already interested in supporting Delaney, so this motivates me to ask more people to give him a donation to get on the debate stage.

I have some minor suggestions, which I left in this copy.

Beyond that, my only non-minor suggestion is to consider mentioning domestic poverty as a (potential) priority area, even if it ends up not included due to the thresholds. Depending on the poverty line, US poverty contributes to some premature deaths, though I haven't researched what level would be associated with 100,000 per year. Better-designed antipoverty programs could also improve GWP through improved incentives (especially, I'm guessing, with respect to SSDI), though this could be slight.

CSS2 discusses antipoverty programs like UBI and the EITC in the budgeting section, though with a different aim. Yang's UBI isn't fully funded (I've estimated it'd add $1.5T to the annual deficit in a static simulation), and other antipoverty proposals like Harris's LIFT Act also don't include funding proposals, but I'd consider a candidate's emphasis on these proposals predictive of antipoverty action.

Another antipoverty bill considered effective by economists is the American Family Act, which is essentially a child dividend. Cash transfers to families with children improve kids' long-term outcomes (Vox). All 2020 candidates in Congress are cosponsors, and all except Sanders cosponsored its predecessor in the last session. Columbia and Vox have summarized the core antipoverty bills.

Thank you for your stellar work.

Thanks for giving such detailed feedback.

I am now leaning towards separating cash transfers/antipoverty programs away from taxation. When I next put major time into this (I'm not currently, actually) I plan to do that.

I'm always looking for other people's ratings; depending on the nature of the disagreement, I can compromise between multiple ratings for better accuracy.

I like that you've put the effort into creating this, but I'm not fond of the background assumptions here - there seem to be some elements that not all EAs might necessarily share. For instance, one section begins "Intrinsic moral rights do not exist" - that's certainly not what I believe and it seems inconsistent with other sections that talk about the "intrinsic moral weight" of animal populations, etc.

While the fact that you've "shown your work" with the Excel spreadsheet helps people evaluate the same issues with different weights, if someone is interested in areas that you've chosen to exclude it's less apparent how to proceed.

I do appreciate the work you've put into this, though!

For instance, one section begins "Intrinsic moral rights do not exist" - that's certainly not what I believe and it seems inconsistent with other sections that talk about the "intrinsic moral weight" of animal populations, etc.

It's definitely consistent - animals can have interests without having rights, just like humans.

Rights can point in a bunch of different ways depending on the moral inclinations of the reader. And integrating and applying them to policy is a very murky issue. So even if I wanted to investigate that side of things, I would have little ability to provide useful judgments to EAs.

At some point, it would be nice to include full arguments about morality. But that's pretty low on my priorities; I don't expect to add it in the foreseeable future. Those arguments already exist elsewhere.

While the fact that you've "shown your work" with the Excel spreadsheet helps people evaluate the same issues with different weights, if someone is interested in areas that you've chosen to exclude it's less apparent how to proceed.

You can add a column beside the other topics, then insert a new row into the weight table (select three adjacent cells and press insert...). True, it's a little complicated, but I have to build the spreadsheet this way for the sensitivity analysis to work well.
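For readers who find the spreadsheet mechanics opaque, here is a rough Python sketch of what the weight table is doing and what adding a topic amounts to. Topic names, weights, and ratings are all invented for the example; the real spreadsheet's formulas may differ.

```python
# Illustrative sketch (not the actual spreadsheet): a topic-weight table
# where each candidate's overall score is a weight-normalized average,
# and a new topic can be added without breaking the calculation.
# All names and numbers here are made up.

weights = {"Animal welfare": 0.30, "Global poverty": 0.25, "Existential risk": 0.45}

ratings = {
    "Candidate X": {"Animal welfare": 8, "Global poverty": 6, "Existential risk": 7},
}

def overall(candidate):
    """Weight-normalized average of a candidate's topic ratings."""
    w_sum = sum(weights.values())
    return sum(weights[t] * ratings[candidate][t] for t in weights) / w_sum

base = overall("Candidate X")

# Adding a topic (analogous to inserting a column for it and a row in the
# weight table): the normalization keeps the overall score well-defined.
weights["Domestic poverty"] = 0.10
ratings["Candidate X"]["Domestic poverty"] = 5
updated = overall("Candidate X")
```

Because the sum is normalized by the total weight, inserting a new topic automatically rebalances the existing ones, which is also why the sensitivity analysis (varying one weight while holding the others fixed) stays consistent.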

I don't think there's much practical difference between "intrinsic moral interests" and "intrinsic moral rights", but that's not really the point - it's more that I think given such differences in perspective between EAs, I'm not sure that documents like this are great for EA as a movement. I would at least prefer to see them presented less... authoritatively?

OK fine, in CSS3 it now simply says "Absolutist arguments for or against abortion disappear once we focus on well-being."

Like I said, that's not really the point - it also doesn't meaningfully resolve that particular issue, because of course the whole dispute is whose well-being counts, with anti-abortion advocates claiming that human fetuses count and pro-abortion people claiming that human fetuses don't.

I dunno, maybe I'm overly cautious, but I'm not fond of someone publishing a well-made and official-looking "based on EA principles, here's who to vote for" document, since "EA principles" vary quite a bit. I think if EA becomes seen as politically aligned (with either major US party), that would be a huge constraint on our movement's potential.

You said the problem was stating it authoritatively rather than the actual conclusions; I made it sound less authoritative, but now you're saying that the actual conclusions matter. The document has sufficient disclaimers as it is: the preface clearly says EAs could disagree. You don't see GiveWell writing "assuming that poverty is the #1 cause area, which EAs may disagree on" multiple times, and I don't treat politics with special reverence, as if different rules should apply. I think there's something unhealthy and self-reinforcing about tiptoeing around like that. The point here is to advertise a better set of implicit norms, so that maybe people (inside and outside EA) can finally treat political policy as just another question to answer rather than playing meta-games.

the whole dispute is whose well-being counts, with anti-abortion advocates claiming that human fetuses count and pro-abortion people claiming that human fetuses don't.

If I care about total well-being, then of course people who say that some people's well-being doesn't count are going to be wrong. This includes the pro-lifers, who care about the future well-being of a particular fetus but not the future well-being of any potential child (or not as much, at least).

You said the problem was stating it authoritatively rather than the actual conclusions, I made it sound less authoritative but now you're saying that the actual conclusions matter.

Sorry, I perhaps wasn't specific enough in my original reply. The "less authoritative" thing was meant to apply to the entire document, not just this one section - that's why I also said I wasn't sure documents like this are good for EA as a movement.

I think there's something unhealthy and self-reinforcing about tiptoeing around like that. The point here is to advertise a better set of implicit norms, so that maybe people (inside and outside EA) can finally treat political policy as just another question to answer rather than playing meta-games.

Strong disagree. Political policy in practice isn't "just another question to answer" - maybe it should be, but that's not the world we live in - and acting as if it is strikes me as risky.

The "less authoritative" thing was meant to apply to the entire document, not just this one section.

In the preface I state that hedging language is minimized for the sake of readability.

Political policy in practice isn't "just another question to answer".

Neither is poverty alleviation or veganism or anything else in practice.

Neither is poverty alleviation or veganism or anything else in practice.

Again, strong disagree - many things are not politicized and can be answered more directly. One of the main strengths of EA, in my view, is that it isn't just another culture war position (yet?) - consider Robin Hanson's points on "pulling the rope sideways".

Again, strong disagree - many things are not politicized and can be answered more directly.

I think I'm losing track of the point. What does it mean to answer something "more directly"?

consider Robin Hanson's points on "pulling the rope sideways".

I'm not sure how that's relevant here since I'm clearly saying that we're not taking a position on abortion.

Just posting to acknowledge that I've seen this - my full reply will be long enough that I'm probably going to make it a separate post.

Totally a choosing beggar here, but I'd love a TLDR on each major candidate: where they scored highest and lowest. Why is O'Rourke so low?

I have written basic TLDRs for the presidency scores on page 74. Though all I wrote for Beto was:

O’Rourke is inexperienced and has not supported animals as well as some other candidates.

He seemed pretty average in other ways.

I'll point to this section more clearly in the next version.

Delaney's hitting 2% on PredictIt for the first time AFAIK. Did your quasi-endorsement move the markets?

Heh, well at the time of this report he announced that he was giving $2 to charity for every donation he got, to try to qualify for the debates. So maybe it was that. (He still needs donors; it would be useful to donate $1 to his campaign if you have a moment.)

Just donated! For others' convenience, the link is https://go.johndelaney.com/page/content/this-is-about-america/.

Whoa really?

Could you link to the source of this so I could read about the specifics of his promise?

Oh, I see. It's not a 2:1 match, it's $2 flat for each new donor. So donating $1 is sorta gaming Delaney's intention (though adhering to the letter of the deal).

Well, he really is trying to get people to make $1 donations. He's a pretty wealthy guy but he needs 65,000 individual donors in order to be allowed into the debates.

Links aren't working.

Odd. Perhaps your browser doesn't handle OneDrive well? I use Firefox; I opened the links in private mode (not logged in) and can access them. Other people have also accessed them.

Issues may be caused by x32 Firefox or Kaspersky Password Manager: https://support.mozilla.org/en-US/questions/1115599

You can PM me your email address, and I will email the documents to you.
