All of Jonas Vollmer's Comments + Replies

University EA Groups Should Form Regional Groups

I commented on a draft of this post. I haven't re-read it in full, so I don't know to what degree my comments were incorporated. Based on a quick glance it seems they weren't, so I thought I'd copy the main comments I left on that draft. My main point is that I think inserting regional groups into the funding landscape would likely worsen rather than improve the funding situation. I still think regional groups seem promising for other reasons.

Some of my comments (copy-paste, quickly written):

[Regarding applying for funding:] At a high level, my guess would

…
How to best address Repetitive Strain Injury (RSI)?

Some further recommendations:

  • Keep using your hands, acknowledge that it may be (partly) psychosomatic, and don't worry too much about it. A friend told me they saw a surgeon for RSI; the surgeon recommended continuing to use the hands as normal and not worrying too much, and that helped in their case.
  • Reduce phone usage: don't use the phone in bed while lying down, and don't play games on the phone.

Get 100s of EA books for your student group

In 80K's The Precipice mailing experiment, 15% of recipients reported reading the book in full after a month, and ~7% of people reported reading at least half.

I'm also aware of some anecdotal cases where books seemed pretty good - e.g., I know of a very promising person who got highly involved with longtermism within a few months primarily based on reading The Precipice.

The South Korea case study is pretty damning, though. I wonder if things would look better if there had been a small number of promising people who help onboard newly interested ones (or wh…

Get 100s of EA books for your student group

To me it sounds like you're underestimating the value of handing out books: I think books are great because you can get someone to engage with EA ideas for ~10 hours, without it taking up any of your precious time.

As you said, I think books can be combined with mailing lists. (If there were a tradeoff, I would estimate they're similarly good: You can either get a ~20% probability of getting someone to engage for ~10h via a book, or a ~5%? (most people don't read newsletters) probability of getting someone to engage for ~40h via a mailing list. And while I'd…
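
As a quick check of that Fermi comparison, here is a minimal sketch; the probabilities and hours are the rough guesses from the comment above, not measured values:

```python
# Expected engagement hours under the comment's rough guesses
book_hours = 0.20 * 10        # ~20% chance of ~10h via a book
newsletter_hours = 0.05 * 40  # ~5% chance of ~40h via a mailing list
print(book_hours, newsletter_hours)  # 2.0 vs 2.0 – roughly equal in expectation
```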

Benjamin_Todd (23d): I think I disagree with those Fermis for engagement time. My prior is that in general, people are happier to watch videos than to read online articles, and they're happier to read online articles than to read books. The total time per year spent reading books is pretty tiny. (E.g., I think all the time spent reading DGB is about 100k hours, which is only ~1yr of the 80k podcast or GiveWell's site.)

I expect that if you sign someone up to a newsletter and give them a book at the same time, they're much more likely to read a bunch of links from the newsletter than they are to read the book. With our newsletter, the open rate is typically 20–30%, and it's usually higher for the first couple of emails someone gets. About 20% of subs read most of the emails, which go out ~3 times per month. The half-life is several years (e.g., 1.5% unsubscribe per month gives you a half-life of over 3yr). I don't think our figures are especially good vs. other newsletters. If you give someone a book, I expect the chance they finish it is under 10%, rather than 20%.

The other point is about follow-up. I think a book with no follow-up might be almost no value. A case study is South Korea. DGB had top-tier media coverage and sold around 30k copies, but I've never heard of any key EAs resulting from that. (Though hopefully if we set up South Korean orgs now we'd have an easier time.) The explanation could be that almost no one becomes a committed EA just from reading – lots of one-on-one discussions are basically necessary. And it takes several years for most people.

There are lots of ways to achieve this follow-up. If a book is given out in the context of a local group, maybe that's enough. But my thinking is that if you sign someone up to a newsletter (or other subscription), you've already (partly) automated the process. As well as sending them more articles, you can send them events, job adverts, invites to one-on-ones etc. I'm confident it's more reliable than hoping they reach out again.

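A quick check of the half-life figure in the reply above (a minimal sketch; the 1.5% monthly churn rate is the number from the comment):

```python
import math

# Half-life of a subscriber base with a 1.5% monthly unsubscribe rate
monthly_churn = 0.015
half_life_months = math.log(2) / -math.log(1 - monthly_churn)
print(half_life_months / 12)  # ≈ 3.8 years, consistent with "over 3yr"
```
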
What are the EA movement's most notable accomplishments?

Strictly speaking, a lot of the examples are outputs or outcomes, not impacts, and some readers may not like that. It could be good to make that more explicit at the top.

I also want to suggest using more imagery, graphs, etc. – more like visual storytelling and less like just a list of bullet points.

TheUtilityMonster (25d): If I define impact as change and outcome as a result, then isn't every occurrence of an impact an outcome? Are you defining those words differently?

Open Phil EA/LT Survey 2020: Introduction & Summary of Takeaways

I think it's really cool that you're making this available publicly, thanks a lot for doing this!

MarkusAnderljung (1mo): Came here to say the same thing :)

Mission Hedgers Want to Hedge Quantity, Not Price

Great points, thanks for raising them!

One potential takeaway could be that we may want to set up, ourselves, the financial products we'd like to use for hedging – e.g., prediction markets for the quantity of oil consumption. (Perhaps FTX would be up for it, though it won't be easy to get liquidity.)

Larks (1mo): Historically it has been hard to get similar products off the ground. Virtually every human has native exposure to housing prices and the overall level of GDP in their country, but for some reason virtually no one is interested in actually trading them. According to Bloomberg, on most days literally zero contracts trade for even the front-month Case-Shiller housing composite future.

It's possible there might be some natural short interest in oil quantity contracts from e.g. pipelines, whose revenue is determined by the volume of oil sent through them? But this would likely be quite local, and I think you would struggle to find interest in the global quantity.

AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA

I'm surprised this comment was downvoted so much. It doesn't seem very nuanced, but there's obviously a lot going wrong with modern capitalism. While free markets have historically been a key driver of the decline of global poverty (see e.g. this and this), I don't think it's wrong to say that longtermists should be thinking about a large-scale economic transition (though it should most likely still involve free markets).

I think a downvoter's view is that:

It packs powerful claims that really need to be unpacked ("unsustainable...massive suffering") together with a backhanded swipe at the community ("actually care...claim to") and extraordinary, vague demands ("large economic transition"), all in a single sentence.

It's hard to be generous, since it's so vague. If you tried to riff some "steelman" off it, you could work in almost any argument critical of capitalism, or even of EA in general, which isn't a good sign.

The forum guidelines suggest I downvote comments when I dislike the effect they have on a conversation. One of the examples the guidelines give is when a comment contains an error or bad reasoning. While I think the reasoning in Ruth's comment is fine, I think the claim that capitalism is unsustainable and causes "massive suffering" is an error. Nor is the claim backed up by any links to supporting evidence that might change my mind. The most likely effect of ruth_schlenker's comment is to distract from Halstead's original comment and inflame the discussion, i.e. have a negative effect on the conversation.

Some quick notes on "effective altruism"

A friend (edit: Ruairi Donnelly) raised the following point, which rings true to me:

If you mention EA in a conversation with people who don't know about it yet, it often derails the conversation in unfruitful ways, such as discussing the person's favorite pet theory/project for changing the world, or discussing whether it's possible to be truly altruistic. It seems 'effective altruism' causes people to ask the wrong questions.

In contrast, concepts like 'consequentialism', 'utilitarianism', 'global priorities', or 'longtermism' seem to lead to more fruitful conversations, and the complexity feels more baked into the framing.

Denise_Melchin's Shortform

I generally agree with most of what you said, including the 3%. I'm mostly writing for that target audience, which I think is probably at least a partial mistake, and seems worth improving.

There also seem to be quite a few exceptions. E.g., the Zurich ballot initiative I was involved in had contributors from a very broad range of backgrounds. I've also seen people from less privileged backgrounds make excellent contributions in operations-related roles, in fundraising, or by welcoming newcomers to the community. I'm sure I'm missing many…

Most research/advocacy charities are not scalable

I have edited all our fund pages to include the following sentence:

Note: We are temporarily unable to display correct fund balances. Please ignore the balance listed below while we are fixing the issue.

Most research/advocacy charities are not scalable

I strongly agree with the premise of this post and really like the analysis, but feel unhappy with the strong focus on physical products. I think we should instead think about a broader set of scalable ways to usefully spend money, including but not limited to physical products. E.g. scholarships aren't a physical product, but large scholarship programs could plausibly scale to >$100 million.

(Perhaps this has been said already; I haven't bothered reading all the comments.)

Stefan_Schubert (1mo): Yes, it has been pointed out; cf.:
https://forum.effectivealtruism.org/posts/Kwj6TENxsNhgSzwvD/most-research-advocacy-charities-are-not-scalable?commentId=mdDxjftDfeZX2AQoZ
https://forum.effectivealtruism.org/posts/Kwj6TENxsNhgSzwvD/most-research-advocacy-charities-are-not-scalable?commentId=xpwxjvimgQe84gcs4

A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good

Yeah, in my model, I just assumed lower returns for simplicity. I don't think this is a crazy assumption – e.g., even if the AI portfolio has higher risk, you might keep your Sharpe ratio constant by reducing your equity exposure. Modelling an increase in risk would have been a bit more complicated, and would have resulted in a similar bottom line.

I don't really understand your model, but if it's correct, presumably the optimal exposure to the AI portfolio would be at least slightly greater than zero. (Though perhaps clearly lower than 100%.)

MichaelDickens (2mo): To be clear, my model is exactly the same as your model, I just changed one of the parameters – I changed the AI portfolio's overall expected return from 4.7% to 1.3%. It's not intuitively obvious to me whether, given the 1.3%-return assumption, the optimal portfolio contains more AI than the global market portfolio. I know how I'd write a program to find the answer, but it's complicated enough that I don't want to do it right now.

(The way you'd do it is to model the correlation between the AI portfolio and the market, and set your assumptions such that the optimal value-neutral portfolio (given the two investments of "AI stocks" and "all other stocks") equals the global market portfolio. Then write a utility function that assigns more utility to money in the short-timelines world, and maximize that function where the independent variable is the % allocation to each portfolio. You can do this with Python's scipy.optimize, or any other similar library.)

EDIT: I wrote a spreadsheet to do this, see this comment [https://forum.effectivealtruism.org/posts/iZp7TtZdFyW8eT5dA/a-generalized-strategy-of-mission-hedging-investing-in-evil?commentId=Xsqah5ejKCEASPkFf]

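A minimal sketch of the procedure described in that reply. All parameter values here are illustrative placeholders, not the spreadsheet's actual inputs, and the correlation-matching step is simplified away by assuming a single market return in both worlds:

```python
import numpy as np
from scipy.optimize import minimize_scalar

p_short = 0.5                          # P(short-timelines world) – assumption
r_ai_short, r_ai_long = 0.20, 0.013    # AI portfolio annual return by world – placeholders
r_mkt = 0.05                           # market return in both worlds – assumption
years = 10
weight_short = 10                      # money assumed 10x more useful under short timelines

def expected_utility(x):
    """Expected log utility of terminal wealth for allocation x to the AI portfolio."""
    w_short = (1 + x * r_ai_short + (1 - x) * r_mkt) ** years
    w_long = (1 + x * r_ai_long + (1 - x) * r_mkt) ** years
    return p_short * weight_short * np.log(w_short) + (1 - p_short) * np.log(w_long)

# Maximize expected utility over the allocation to the AI portfolio
res = minimize_scalar(lambda x: -expected_utility(x), bounds=(0, 1), method="bounded")
print(f"Optimal AI allocation: {res.x:.1%}")
```
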
What would you do if you had half a million dollars?

I think deciding between capital allocators is a great use of the donor lottery, even as a Plan A. You might say something like: "I would probably give to the Long-Term Future Fund, but I'm not totally sure whether they're better than the EA Infrastructure Fund or Longview or something I might come up with myself. So I'll participate in the donor lottery so that if I win, I can take more time to read their reports and see which of them seems best." I think this would be a great decision.

I'd be pretty unhappy if such a donor then felt forced to instead do their…

What would you do if you had half a million dollars?

I really liked this comment. Three additions:

  • I would take a close look at who the grantmakers are and whether their reasoning seems good to you. Because there is significant fungibility and many of these funding pools have broad scopes, I personally expect the competence of the grantmakers to matter at least as much as the specific missions of the funds.
  • I don't think it's quite as clear that the LTFF is better than the EA Infrastructure Fund; I agree with your argument but think this could be counterbalanced by the EA Infrastructure Fund's greater focus on…
Metaculus Questions Suggest Money Will Do More Good in the Future

It's worth pointing out that these questions apply specifically to global health and development; the answers could be very different in other cause areas.

I don't think question 1 provides evidence that money will do more good in the future. It might even suggest the opposite: As you point out, malaria prevention and deworming might run out of room for more funding, and to me this seems more likely than the discovery of a more cost-effective option that is also highly scalable (beyond $30 million per year).

A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good

I took your spreadsheet and made a quick estimate for an AI mission hedging portfolio. You can access it here.

The model assumes:

  • AI companies return 20% annually over the next 10 years in a short-timelines world, but less than the global market portfolio in a long-timelines world,
  • AI companies have expected returns equal to or lower than the global market portfolio (otherwise we're just making a bet on AI),
  • money is 10x more useful in a short-timelines world than in a long-timelines world,
  • logarithmic utility.

In the model, the extra utility from the AI port…

MichaelDickens (2mo): As an extension to this model, I wrote a solver that finds the optimal allocation between the AI portfolio and the global market portfolio. I don't think Google Sheets has a solver, so I wrote it in LibreOffice: link to download [https://mdickens.me/materials/Mission-Hedging.ods]. I don't know if the spreadsheet will work in Excel, but if you don't have LibreOffice, it's free to download [https://www.libreoffice.org/].

I don't see any way to save the solver parameters that I set, so you have to re-create the solver manually. Here's how to do it in LibreOffice:

  1. Go to "Tools" -> "Solver..."
  2. Click "Options" and change Solver Engine to "LibreOffice Swarm Non-Linear Solver"
  3. Set "Target cell" to D32 (the green-colored cell)
  4. Set "By changing cells" to E7 (the blue-colored cell)
  5. Set two limiting conditions: E7 >= 0 and E7 <= 1
  6. Click "Solve"

Given the parameters I set, the optimal allocation is 91.8% to the global market portfolio and 8.2% to the AI portfolio. The parameters were fairly arbitrary, and it's easy to get allocations higher or lower than this.

HaukeHillebrandt (2mo): @Jonas: I think your model is interesting, but if we define transformative AI like Open Phil does [https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence] ("AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution"), and you invest for mission hedging in a diversified portfolio of AI companies (and perhaps other inputs such as hardware), then it seems conceivable to me to have much higher returns – perhaps 100x, as with crypto? This is the basic idea of mission hedging for AI, and in line with my prior, and I think this difference in returns might be why I find the result of your model – that mission hedging wouldn't have a bigger effect – surprising.

MichaelDickens (2mo): Thanks for making this model extension! I believe the most important downside to a mission hedging portfolio is that it's poorly diversified, and thus experiences much more volatility than the global market portfolio. More volatility reduces the geometric return due to volatility drag.

Example case:

  • Stocks follow geometric Brownian motion.
  • The AI portfolio has the same arithmetic mean return as the global market portfolio.
  • Market standard deviation is 15%; AI portfolio standard deviation is 30%.
  • Market geometric mean return is 5%.

In geometric Brownian motion, arithmetic return = geometric return + stdev^2 / 2. Therefore, the geometric mean return of the AI portfolio is 5% + 15%^2/2 - 30%^2/2 = 1.6%. If we still assume a 20% return to AI stocks in the short-timelines scenario, that gives a 1.3% return in the long-timelines scenario. And the annual return thanks to mission hedging is -1.1%. (I'm only about 60% confident that I set up those calculations correctly. When to use arithmetic vs. geometric returns can be confusing.)

Of course, you could also tweak the model to make mission hedging look better. For instance, it's plausible that in the short-timelines world, money is 100x more valuable instead of 10x, in which case mission hedging is equivalent to a 24% higher return even with my more pessimistic assumption for the AI portfolio's return.

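A quick check of that volatility-drag arithmetic (a minimal sketch using the figures from the reply above):

```python
# Geometric Brownian motion: arithmetic return ≈ geometric return + stdev^2 / 2
mkt_geo, mkt_sd, ai_sd = 0.05, 0.15, 0.30
mkt_arith = mkt_geo + mkt_sd**2 / 2  # implied arithmetic market return: 6.125%
ai_geo = mkt_arith - ai_sd**2 / 2    # same arithmetic return, higher volatility
print(f"{ai_geo:.2%}")               # 1.63%, matching the ~1.6% above
```
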
EA Infrastructure Fund: Ask us anything!

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

EA Infrastructure Fund: May 2021 grant recommendations

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

EA Funds has appointed new fund managers

I am very excited to announce that we have appointed Max Daniel as the chairperson at the EA Infrastructure Fund. We have been impressed with the high quality of his grant evaluations, public communications, and proactive thinking on the EAIF's future strategy. I look forward to having Max in this new role!

EA Infrastructure Fund: Ask us anything!

I'm also in favor of EA Funds doing generous back payments for successful projects. In general, I feel interested in setting up prize programs at EA Funds (though it's not a top priority).

One issue is that it's harder to demonstrate to regulators that back payments serve a charitable purpose. However, I'm confident that we can find workarounds for that.

EA Infrastructure Fund: Ask us anything!

> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?

Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too).

Just wanted to flag briefly that I personally disagree with this:

  • I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really w
... (read more)
You can now apply to EA Funds anytime! (LTFF & EAIF only)

(Also agree with Max. Long lead times in academia definitely qualify as a "convincing reason" in my view)

You can now apply to EA Funds anytime! (LTFF & EAIF only)

I wouldn't rule it out, but typically we might say something like: We are interested in principle, but would like to wait for another 6-12 months to see how your project/career/organization develops in the meantime before committing the funding (unless there's a convincing reason why you need the funding immediately).

Max_Daniel (3mo): (The following is just my view, not necessarily the view of other EAIF managers. And I can't speak for the LTFF at all.)

FWIW, I can think of a number of circumstances I'd consider a "convincing reason" in this context – in particular, cases where people know they won't be available for 6-12 months because they want to wrap up some ongoing unrelated commitment, or cases where large lead times are common (e.g., PhD programs and some other things in academia). I think, as with most other aspects of a grant, I'd make decisions on a case-by-case basis that would be somewhat hard to describe by general rules. I imagine I'd generally be fairly open to considering cases where an applicant thinks it would be useful to get a commitment now for funding that would be paid out a few months out, and I would much prefer they just apply as opposed to worrying too much about whether their case for this is "convincing".

Refining improving institutional decision-making as a cause area: results from a scoping survey

I'm excited that there's now more work happening on Effective Institutions / IIDM!

Some questions and constructive criticism that's hopefully useful:

The aim was to gauge the diversity of perspectives in the EA community on what "counts" as IIDM. This helps us understand what the community thinks is important and has the most potential for impact. We hope that the results will shape the rest of our work as a working group and provide a helpful starting point for others as well.

It seems that you're starting out with the assumption that IIDM is a useful…

IanDavidMoss (2mo): Hi Jonas, I can share some personal reflections on this. Please note that the following are better described as hunches and impressions based on my experiences rather than strongly held opinions – I'm hopeful that some of the analysis and knowledge synthesis EIP is doing this year will help us and me take more confident positions in the future.

  1. Re: institutional design/governance specifically, I would guess that this scored highly because of its holistic and highly leveraged nature. Many institutions are strongly shaped and highly constrained by rules and norms that are baked into the way they operate from the very beginning or close to it, which in turn can make other kinds of reforms much more difficult or less likely to succeed. The most common problem I see in this area is not so much bad design as lack of design, i.e., silos and practices that may have made sense at one particular moment for one particular set of stakeholders, but weren't implemented with any larger vision in mind for how everything would need to function together. This is a common failure mode when organizations grow opportunistically rather than intentionally. My sense is that opportunities to make interventions into institutional design and governance are few and far between, but can be tremendously impactful when they do appear. It's generally easiest to make changes to institutional design early in the life of an institution, but because the scale of operations is often smaller and the prospects for success unclear at that point, it's not always obvious to the participants how much downstream impact their decisions during that period can have.

  2. One of the biggest bottlenecks to improved decision-making in institutions is simply the level of priority and attention the issue receives. There tends to be much more focus in institutions on specific policies and strategies than on the process by which those…

Vicky Clayton (3mo): Thanks Jonas. We/I are also really interested in activities that people find promising within this area! The idea with the survey was partly to connect IIDM to categories which exist in other professional communities and academic literatures, to help us understand what are considered promising approaches in those fields and allow us to build on existing knowledge.

Shallow evaluations of longtermist organizations

I actually think it would be cool to have more posts that explicitly discuss which organizations people should go work at (and what might make it a good personal fit for them).

EA Infrastructure Fund: Ask us anything!

If you have to pay fairly (i.e., if you pay one employee $200k/y, you have to pay everyone else with a similar skill level a similar amount), the marginal cost of an employee who earns $200k/y can be >$1m/y. That may still be worth it, but less clearly so.
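
To illustrate the arithmetic with a hypothetical example (the team size and raise amounts are assumptions, not figures from the comment):

```python
# Hypothetical marginal cost of one $200k/y hire under a fair-pay constraint
new_salary = 200_000
peers = 6                 # existing staff at a similar skill level (assumption)
raise_per_peer = 150_000  # raise needed per peer to keep pay fair (assumption)
marginal_cost = new_salary + peers * raise_per_peer
print(f"${marginal_cost:,}/y")  # $1,100,000/y – i.e. >$1m/y
```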

FWIW, I also don't really share the experience that labor supply is elastic above $100k/y, at least when taking into account whether staff have a good attitude, fit into the culture of the organization, etc. I'd be keen to hear more about that.

2018-2019 Long Term Future Fund Grantees: How did they do?

I'd be pretty excited about financially incentivizing people to do more such evaluations. I'm not sure how to set the incentives optimally, though – I really want to avoid any incentives that make it more likely that people say what we want to hear (or that lead others to think that this is what happened, even when it didn't), but I also care a lot about such evaluations being high-quality and having sufficient depth, so I don't want to hand out money for just any kind of evaluation.

Perhaps one way is to pay $2,000 for any evaluation or review that receives >120 Karma on the EA Forum (periodically adjusted for Karma inflation), regardless of what it finds? Of course, this is somewhat gameable, but perhaps it's good enough.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

Yeah, I plan to keep sending the form around in the coming months. Using the EA Forum question feature is a great idea, too. Thank you!

2018-2019 Long Term Future Fund Grantees: How did they do?

Thanks a lot for doing this evaluation! I haven't read it in full yet, but I would like to encourage more people to review and critique EA Funds. As the EA Funds ED, I really appreciate it when others take the time to engage with our work and help us improve.

Is there a useful way to financially incentivise these sorts of independent evaluations? Seems like a potentially good use of fund money.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

"It's hard to find great grants" seems different than "It's hard to find grants we really like".

I would expect that most grantmakers (including ones with different perspectives) would agree with this and would find it hard to spend money in useful ways (e.g., I suspect that Nuño might say something similar if he were running the LTFF, though I'm not sure). So while I think your framing is overall slightly more accurate, I feel like it's okay to phrase it the way I did.

that they're skeptical of funding independent researchers

I don't think this characterization…

xccf (3mo): My model for why there's a big discrepancy between what NIH grantmakers will fund and what Fast Grants recipients want to do is that NIH grantmakers adopt a sort of conservative, paternalistic attitude. I don't think this is unique to NIH grantmakers. For example, in your comment you wrote: […]

The person who applies for a grant knows a lot more about their situation than the grantmaker does: their personal psychology, the nature of their research interests, their fit for various organizations. They seem a lot better equipped to make career decisions for themselves than busy grantmakers.

It seems worth considering the possibility that there are psychological dynamics to grantmaking that are inherent in the nature of the activity. Maybe the NIH has just had more time to slide down this slope than EA Funds has.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

In a typical case, it takes a week to complete due diligence, and up to 31 days for the money to be paid out (because we currently do the payouts in monthly batches). So from decision to "money in the bank account" it takes 1–6 weeks, typically 3.5 weeks. I think the country shouldn't matter too much for this. Because most grantees care more about having a definite decision than the money actually arriving in their bank account, this waiting time seemed fine to us (though we're also looking into ways to cut it short).

That said, if the grantseeker indicates that they need the money urgently, and they submit due diligence promptly, the payout can be expedited and should take just a few days.

BrianTan (3mo): Got it, thanks! Yeah, 3.5 weeks is fine, and it's cool too that the payout can be expedited if needed.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

Thanks, we really hope it will help people like the ones you mentioned!

What should CEEALAR be called?

I like Athena, or Athena Centre!

Linch (3mo): In case you want lesser-known goddesses (e.g. because TOFKACEEALAR wants to branch out), there's also Aletheia [https://en.wikipedia.org/wiki/Aletheia], Hedone [https://en.wikipedia.org/wiki/Hedone], and Harmonia [https://en.wikipedia.org/wiki/Harmonia].

Forget replaceability? (for ~community projects)

In my mind, a significant benefit of impact certificates is that they can feel motivating:

The huge uncertainty about the long-run effects of our actions is a common struggle of community builders and longtermists. Earning to give or working on near-term issues (e.g., corporate farm animal welfare campaigns, or AMF donations) tends to come with a much stronger sense of moral urgency, tighter feedback loops, and a much clearer sense of accomplishment if you actually manage to do something important: 1 million hens are spared from battery cages in country X!…

Forget replaceability? (for ~community projects)

Certificates of impact are the best known proposal for this, although they aren't strictly necessary.

I don't understand the difference between certificates of impact and altruistic equity – they seem like kind of the same thing to me. Is the main difference that certificates of impact are broader, whereas altruistic equity refers to certificates of impact of organizations (rather than individuals, etc.)? Or is the idea that certificates of impact would also come with a market to trade them, whereas altruistic equity wouldn't? Either way, I don't find it u…

EA Infrastructure Fund: Ask us anything!
  • Having an application form that asks some more detailed questions (e.g., path to impact of the project, CV/resume, names of the people involved with the organization applying, confidential information)
  • Having a primary investigator for each grant (who gets additional input from 1-3 other fund managers), rather than having everyone review all grants
  • Using score voting with a threshold (rather than ordering grants by expected impact, then spending however much money we have) – see the sketch after this list
  • Explicitly considering giving applicants more money than they applied for
  • Offering feedb…
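
A minimal sketch of what score voting with a threshold could look like (the scores, threshold, and mechanics here are illustrative assumptions, not the EAIF's actual process):

```python
# Each grant is scored by a primary investigator plus 1-3 other fund managers;
# it is funded if its mean score clears a fixed threshold.
scores = {
    "grant_a": [4, 5, 3],
    "grant_b": [2, 3, 2],
    "grant_c": [5, 4],
}
THRESHOLD = 3.5
funded = [g for g, s in scores.items() if sum(s) / len(s) >= THRESHOLD]
print(funded)  # ['grant_a', 'grant_c']
```
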
EA Infrastructure Fund: Ask us anything!

I include the opportunity cost of the broader community (e.g., the project hires people from the community who'd otherwise be doing more impactful work), but not the opportunity cost of providing the funding. (This is what I meant to express with "someone giving funding to them", though I think it wasn't quite clear.)

EA Infrastructure Fund: Ask us anything!

This isn't what you asked, but out of all the applications that we receive (excluding desk rejections), 5-20% seem ex ante net-negative to me, in the sense that I expect someone giving funding to them to make the world worse. In general, worries about accidental harm do not play a major role in my decisions not to fund projects, and I don't think we're very risk-averse. Instead, a lot of rejections happen because I don't believe the project will have a major positive impact.

Linch (3mo): Are you including opportunity cost in the consideration of net harm?

EA Infrastructure Fund: May 2021 grant recommendations

A further point is donor coordination / moral trade / fair-share giving. Treating it as a tax (as Larks suggests) could often amount to defecting in an iterated prisoner's dilemma between donors who care about different causes. E.g., if the EAIF funded only one org, which raised $0.90 for MIRI, $0.90 for AMF, and $0.90 for GFI for every dollar spent, this approach would lead to it not getting funded, even though co-funding with donors who care about other cause areas would be a substantially better approach.
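
To make the arithmetic of that toy example explicit (a minimal illustration of the hypothetical org described above):

```python
# Per $1 the org spends, it raises $0.90 each for MIRI, AMF, and GFI
raised_per_cause = 0.90
single_donor_view = raised_per_cause     # 0.9 < 1: looks net-negative to one cause
cooperative_view = 3 * raised_per_cause  # 2.7 > 1: clearly positive across causes
print(single_donor_view, cooperative_view)
```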

You might respond that there's no easy way to ver…

Larks (3mo): I'm afraid I don't quite understand why such an org would end up unfunded. Such an organisation is not longtermism-, animal rights-, or global poverty-specific, and hence seems to fall within the natural remit of the Meta/Infrastructure fund. Indeed, according to the goal of the EAIF [https://forum.effectivealtruism.org/posts/KesWktndWZfGcBbHZ/ea-infrastructure-fund-ask-us-anything] it seems like a natural fit: […] Nor would this be disallowed by weeatquince's policy, as no other fund is more appropriate than the EAIF: […]

EA Infrastructure Fund: Ask us anything!

Here's a toy model:

  • A production function roughly along the lines of utility = funding ^ 0.2 * talent ^ 0.6 (this has diminishing returns to funding*talent, but the returns diminish slowly)
  • A default assumption that longtermism will eventually end up with $30-$300B in funding, let's assume $100B

Increasing the funding from $100B to $200B would then increase utility by 15%.
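
Checking that figure (a one-line computation under the toy model above, with talent held fixed):

```python
# utility = funding**0.2 * talent**0.6; doubling funding (talent fixed)
# multiplies utility by 2**0.2
gain = 2 ** 0.2 - 1
print(f"{gain:.1%}")  # 14.9%, i.e. the ~15% stated above
```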

EA Infrastructure Fund: Ask us anything!

I don't think anyone has made any mistakes so far, but they would (in my view) be making a mistake if they didn't allocate more funding this year.

Edit:

you've said elsewhere already indicate you think smaller donors are indeed often making a mistake by not allocating more funding to EAIF and LTFF

Hmm, why do you think this? I don't remember having said that.

MichaelA (3mo): Actually, I now think I was just wrong about that, sorry. I had been going off of vague memories, but when I checked your post history now to try to work out what I was remembering, I realised it may have been my memory playing weird tricks based on your donor lottery post [https://forum.effectivealtruism.org/posts/LNNrqDeAMqHFNKaGa/why-you-should-give-to-a-donor-lottery-this-giving-season], which actually made almost the opposite claim.

Specifically, you say: "For this reason, we believe that a donor lottery is the most effective way for most smaller donors to give the majority of their donations, for those who feel comfortable with it." (Which implies you think that that's a more effective way for most smaller donors to give than giving to the EA Funds right away – rather than after winning a lottery and maybe ultimately deciding to give to the EA Funds.)

I think I may have been kind-of remembering what David Moss said as if it was your view, which is weird, since David was pushing against what you said. I've now struck out that part of my comment.

EA Infrastructure Fund: May 2021 grant recommendations

the extent to which fund managers should be trying to instantiate donors' wishes vs. fund managers allocating the money by their own lights of what's best (i.e., as if it were just their money). I think this is probably a matter of degree, but I lean towards the former

This is a longer discussion, but I lean towards the latter, both because I think this will often lead to better decisions, and because many donors I've talked to actually want the fund managers to spend the money that way (the EA Funds pitch is "defer to experts", and donors want to go all in on…

EA Infrastructure Fund: May 2021 grant recommendations

I think we will probably do two types of post-hoc evaluations:

  1. Specifically aiming to improve our own decision-making in ways that seem most relevant to us, without publishing the results (as they would be quite explicit about which grantees were successful in our view), driven by key uncertainties that we have
  2. Publicly communicating our track record to donors, especially aiming to find and communicate the biggest successes to date

#1 is somewhat high on my priority list (may happen later this year), whereas #2 is further down (probably won't happen this year…

EA Infrastructure Fund: May 2021 grant recommendations

high quality and convincing in whatever conclusions it has

This.

EA Infrastructure Fund: May 2021 grant recommendations

Yeah, the latter is what I meant to say, thanks for clarifying.

weeatquince (3mo): FWIW, I had assumed the former was the case. Thank you for clarifying. I had assumed the former as:

  • it felt like the logical reading of the phrasing of the above
  • my read of the things funded in this round seemed to be that some of them don’t appear to be b OR c (unless b and c are interpreted very broadly)