
(Crossposted from The Impact Purchase)

The first round of the 2015 Impact Purchase had eight submissions, including research, translation, party planning, mentoring, teaching, and money to GiveDirectly. We expected the evaluations would have to be rough, and would like to emphasize that they really were rough: we had to consider lots of things very quickly to get through them in a reasonable time for the scale of the funding. Please forgive us for our inaccuracies, and don't read too much into our choices! This round, we are buying certificates of impact for Ben Kuhn's blog post "Does Donation Matching Work?" and Oliver Habryka's organization of the HPMOR wrap parties.

What does this mean? If everything is working correctly, it suggests that for about $1,200 you can buy an investigation as good as Ben's. And if you can make an investigation as good as Ben's, it suggests you can get $1,200 for it. (Note that these prices should cover more of the costs of the labor than are usually accounted for when paying for altruistic projects. Usually if someone pays me to write an EA blog post, say, I am willing to do it for less than what I consider the value of my time, because I also want the blog post to be written. These prices are designed to be the full price, without this discounting.)

The submissions

Here are all of the submissions so far. Everything not bought in this round can still be bought in the next rounds:

  1. Teaching at SPARC in 2014 (50%), Ben Kuhn
  2. Post "Does Donation Matching Work?" (50%), Ben Kuhn
  3. Inducing the translation of many papers and posts by Bostrom, Yudkowsky and Hanson to Portuguese, as part of IERFH (40%), Diego
  4. A donation of $100 to GiveDirectly, Telofy
  5. Research comparing modafinil and caffeine as cognitive enhancers, including these blog posts (50%), Joao Fabiano
  6. A chapter of a doctoral thesis defending a spin-off version of Eliezer's complexity of value thesis (20%), Joao Fabiano
  7. Organization of Harry Potter and the Methods of Rationality wrap parties, including organization of the Berkeley party and central organization of other parties (50%), Oliver Habryka
  8. Mentoring promising effective altruists (50%), Oliver Habryka

The evaluations

Too hard to evaluate

We decided not to evaluate teaching at SPARC, inducing the translation of papers, or mentoring. Paul's involvement in SPARC made buying teaching there complicated, and it would already have been difficult to separate the teaching from others' work on SPARC. Inducing the translation of papers also seemed too hard to separate from actually translating the papers, without much more access to exactly what happened between the participants. The value of mentoring EAs seemed too hard to assess.

Purchased projects

We evaluated the other five projects, and our preliminary estimates suggested we would buy the two that we ultimately did. We then evaluated those two somewhat more thoroughly. Here are summaries of our evaluations for them.

Ben Kuhn's blog post on donation matching
  1. We estimate that EAs and other related groups move around $500k annually through donation matching. We are thinking of drives run by MIRI, CFAR, GiveDirectly, Charity Science, and Good Ventures, among others.
  2. We think a full and clear understanding of donation matching would improve this by around $6k per year, through such drives being better optimized. We lowered this figure to account for the data being less relevant to some matching drives, and for costs and inefficiencies in spreading the information.
  3. We think this work constitutes around 1/30th of a full and clear understanding of donation matching.
  4. We used a time horizon of three years, though in retrospect it probably should have been longer. This horizon implicitly accounted for some general concerns: the fraction of people who have seen the post shrinking over time, information accruing from other sources, conditions changing, and so on.
  5. We get $6,000/year × 3 years × 1/30 = $600 of stimulated EA donations.
Oliver Habryka's organization of HPMOR wrap parties
  1. We estimate that around 1300 people went to wrap parties (adjusted somewhat for how long they stayed). This was based on examining the list of events and their purported attendances, and a few quick checks for verification.
  2. We estimated Oliver's impact was 1/4 of the impact of the wrap parties. We estimated that the existence of central organization doubled the scale of the events, and we attributed half of that credit to the central organization and half to other local organizers and non-organizational inputs (which also had to scale up).
  3. We estimated that the attendance of an additional person was worth around $15 of stimulated EA donations. This was a guess based on a few different lines of reasoning. We estimated the value of the EA/LW community in stimulated donations, the value of annual growth, the fraction of that growth that comes from outreach (as opposed to improving the EA product, or natural social contact), and the fraction of outreach that came from the wrap parties. We also guessed what fraction of attendees were new, would become more involved in the EA/LW community as a result, and would end up doing more useful things (by our values) as a result of that. We sanity-checked these numbers against the kind of value participants probably got from the celebration individually.
  4. Thus we have 1300 × $15 / 4 = $4,875 of stimulated EA donations, which we rounded up to $5,000. (Both calculations are sketched in code below.)
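Both evaluations reduce to short chains of multiplications. Here is a minimal sketch in Python using the figures above; the function names and default parameters are ours, purely for illustration:

```python
# Minimal sketch of the two Fermi estimates above. The figures come
# from the text; function and parameter names are illustrative only.

def ben_post_value(annual_improvement=6_000, horizon_years=3,
                   share_of_understanding=1 / 30):
    """Ben's post: ~$6k/year of better-optimized matching drives, over a
    3-year horizon, times the post's 1/30 share of a full understanding."""
    return annual_improvement * horizon_years * share_of_understanding

def wrap_party_value(attendees=1300, value_per_attendee=15,
                     olivers_share=1 / 4):
    """Oliver's parties: ~1300 attendees, ~$15 of stimulated EA donations
    each, with 1/4 of the credit going to the central organization."""
    return attendees * value_per_attendee * olivers_share

print(ben_post_value())    # 600.0
print(wrap_party_value())  # 4875.0, rounded up to $5,000
```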

Note that while we evaluated both items in terms of dollars of stimulated EA donations, these numbers don't have much to do with real dollars in the auction—their only relevance is in deciding the ratio of value between different projects. So systematic errors one way or the other won't much matter.

Notes on our experience

Quick estimates

It was tough to evaluate things quickly enough to be worthwhile given how little we were spending, while still being meaningfully accurate. To some extent this is just a problem with funding small, inhomogeneous projects. But we think it will get better in the future for a few reasons, if we or others do more of this kind of thing:

  1. Having a reference class of similar things that are already evaluated makes it much easier to evaluate a new project. You can tell how much to spend on a bottle of ketchup because you have many similar ketchup options which you have already judged to be basically worth buying, and so you mostly just have to judge whether it is worth an extra $0.10 for less sugar or more food dye or whatever. If you had never bought food before and had to figure out from first principles how much a bottle of ketchup would improve your long-term goals, you would have more trouble. Similarly, if we had established going prices for different kinds of research blogging, it would be easier to evaluate Ben's post relative to nearby alternatives.
  2. We will cache many parts of the analysis that come up often (e.g. how much is it worth to attract a new person to the EA movement?), and will only make comparisons between similar activities.
  3. We will get better with practice.

Shared responsibility

We said we would not buy certificates for collaborative projects unless the subset of people applying had been explicitly allocated a share of responsibility for the project. The distinction between collaborative and individual projects turned out to be fairly unclear. No project was creating objects of ultimate value directly; all of these projects are instrumental steps, to be combined with other people's instrumental steps, to make further, bigger instrumental steps. Is a donation to GiveDirectly its own project, or is it part of a collaboration with GiveDirectly and their other donors? Happily, we don't care. We just want to be able to evaluate the thing we are buying. So we were willing to purchase a donation to GiveDirectly from the donor, but not to purchase the output of a cash transfer from a GiveDirectly donor. In some cases it is hard to assess the value of one intermediate step in isolation, and then we will be less likely to purchase it (or will purchase it only at a discount).

Call for more proposals

The next deadline will be April 25. If you have any finished work you'd like to partially sell, please consider applying!

Comments



Great! It's excellent to see how this is progressing.

Are you going to try to stick to evaluating individual projects, or do you want people to try to take credit for their part in a collaborative project now?

You didn't explain in your post your rationale for not purchasing Joao Fabiano's work. For what reasons did you rule it out? Difficulty in evaluation?

We evaluated all of the projects other than the three I specifically mentioned not evaluating. Sorry for not writing up the other evaluations - we just didn't have time. We bought the ones that gave us the most impact per dollar, according to our evaluations (and based on the prices people wanted for their work). So we didn't purchase Joao's work this round because we calculated that it was somewhat less cost-effective than the things we did purchase, given the price. We may still purchase it in a later round.

Thanks for the response. That's great feedback to hear.
