
Vasco Grilo

5216 karma · Joined Jul 2020 · Working (0-5 years) · Lisbon, Portugal
sites.google.com/view/vascogrilo?usp=sharing

Bio


How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there but you suspect I might be underrating?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering, and part-time or full-time paid work. For paid work, I typically ask for 20 $/h, which is roughly equal to 2 times the global real GDP per capita.

Comments (1192)


We've started working on this [making some applications public], but no promises. My guess is that making rejected applications public is more valuable than accepted ones, eg on Manifund. Note that grantees also have the option to upload their applications (and there are fewer privacy concerns if grantees choose to reveal this information).

Manifund already has quite a good infrastructure for sharing grants. However, have you considered asking applicants to post a public version of their applications on the EA Forum? People who prefer to remain anonymous could use an anonymous account, and anonymise the public version of their grant. At a higher cost, there could be a new class of posts[1] which would mimic some of the features of Manifund, but this is not strictly necessary. The posts with the applications could simply be tagged appropriately (with new tags created for the purpose), and include a standardised section with some key information, like the requested amount of funding, and the status of the grant (which could be changed over time by editing the post).

The idea above is inspired by some thoughts from Hauke Hillebrandt.

  1. ^

    As of now, there are 3 types: normal posts, question posts, and linkposts/crossposts.

Nice discussion, Owen and titotal!

But it doesn't make sense to me to analogise it to a risk in putting up a sail.

I think this depends on the timeframe. Over a longer one, looking at the estimated area destroyable by nuclear weapons, nuclear risk looks like a transition risk (see graph below). In addition, I think the nuclear extinction risk has decreased even more than the destroyable area, since I believe greater wealth has made society more resilient to the effects of nuclear war and nuclear winter. For reference, I estimated the current annual nuclear extinction risk to be 5.93*10^-12.

Hi JP,

Minor point. On the messages page, the screen is currently split into 2 panes, with my past conversations on the left, and the one I am focussing on on the right. I would rather have an option to expand the right pane such that I do not see the conversations pane on the left, or an option to hide the conversations pane.

If the point is donor oversight/evaluation/accountability, then I am hesitant to give the grantmakers too much information ex ante on which grants are very likely/unlikely to get the public writeup treatment.

Great point! I had not thought about that. On the other hand, I assume grantmakers are already spending more time assessing larger grants. So I wonder whether the distribution of grant sizes is sufficiently heavy-tailed for the higher chance of larger grants being selected for longer write-ups to push grantmakers into spending too much time on them.

I think grant size also comes into play on the detail level of the writeup.

Another nice point. I agree the level of detail of the write-up should be proportional to the granted amount.

Caleb and Linch randomly selected grants from each group.

I think your procedure to select the grants was great. However, would it become even better by making the probability of each grant being selected proportional to its size? In theory, donors should care about the impact per dollar (not the impact per grant), which justifies weighting by grant size. This may matter because there is significant variation in grant size. The 5th and 95th percentile amounts granted by LTFF are 2.00 k$ and 169 k$, so, especially if one is picking just a few grants as you did (as opposed to dozens of grants), there is a risk of picking unrepresentatively small grants.
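
Here is a minimal sketch of such size-weighted sampling in Python, with hypothetical grant names and amounts (only the 2.00 k$ and 169 k$ endpoints come from the percentiles above):

```python
import random

random.seed(0)  # for reproducibility

# Hypothetical grants and their sizes in $ (made-up for illustration;
# only the 2.00 k$ and 169 k$ endpoints match the percentiles above).
grants = {
    "grant A": 2_000,
    "grant B": 15_000,
    "grant C": 50_000,
    "grant D": 169_000,
}

# Pick 3 grants with probability proportional to size (with replacement),
# so grant D is 84.5 (= 169/2) times as likely to be drawn as grant A.
selected = random.choices(list(grants), weights=list(grants.values()), k=3)
print(selected)
```

Sampling without replacement would instead require redrawing with renormalised weights, but sampling with replacement is a good approximation when picking just a few grants out of many.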

I'm late to the discussion, but I'm curious how much of the potential value would be unlocked -- at least for modest size / many grants orgs like EA Funds -- if we got a better writeup for a random ~10 percent of grants (with the selection of the ten percent happening after the grant decisions were made).

Great suggestion, Jason! I think that would be over 50 % as valuable as detailed write-ups for all grants.

Actually, the grants which were described in this post on the Long-Term Future Fund (LTFF) and this one on the Effective Altruism Infrastructure Fund (EAIF) were randomly selected after being divided into multiple tiers according to their cost-effectiveness[1]. I think this procedure was great. I would just make the probability of a grant being selected proportional to its size. The 5th and 95th percentile amounts granted are 2.00 k$ and 234 k$, which is a wide range, so it is especially important to make larger grants more likely to be picked if one is just analysing a few grants as opposed to dozens of grants (as was the case for the posts). Otherwise, there is a risk of picking small grants which are not representative of the mean grant.

There is still the question of how detailed the write-ups of the selected grants should be. They are just a few paragraphs in the posts I linked above, which in my mind is not enough to make a case for the value of the grants without many unstated background assumptions.

If the idea is to see the quality of the median grant, not assess individual grants, then a random sample should work ~as well as writing and polishing for dozens and dozens of grants a year.

Nitpick. I think we should care about the quality of the mean (not median) grant weighted by grant size, which justifies picking each grant with a probability proportional to its size (see the short derivation after the footnote).

  1. ^

    I know you are aware of this, since you commented on the post on LTFF, but I am writing this here for readers who did not know about the posts.
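
To spell out the nitpick above, here is a short derivation in my notation (not from the discussion). Let $s_i$ and $q_i$ be the size and cost-effectiveness of grant $i$. Donors should care about the impact per dollar of the whole portfolio,

$$\frac{\sum_i s_i q_i}{\sum_i s_i},$$

which is the mean of the $q_i$ weighted by grant size. If each draw selects grant $i$ with probability $p_i = s_i/\sum_j s_j$, the expected cost-effectiveness of a draw is $\sum_i p_i q_i = \sum_i s_i q_i/\sum_i s_i$, so the simple mean over the sampled grants is an unbiased estimate of the size-weighted mean.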

Hi Elizabeth,

I think mentioning CE may have distracted from the main point I wanted to convey: 1 paragraph or sentence is not enough for the public to assess the cost-effectiveness of a grant.

I think downvoting comments like the above is harmful:

  • It disincentivises people from making honest efforts to express dissenting views, thus contributing towards creating echo chambers.
  • It increases polarisation.
    • I assume people who believe they are unfairly downvoted will tend to unfairly downvote others more.
    • I had initially not upvoted/downvoted the original post, but then felt like I should downvote the post given my perception that the comment above was unfairly downvoted. I do not endorse my initial retaliatory reaction, and have now upvoted the post as a way of trying to counter my bad intuitions.

Thanks for the analysis, Hauke! I strongly upvoted it.

The mean "CCEI's effect of shifting deploy$ to RD&D$" of 5 % you used in UseCarlo is 12.5 (= 0.05/0.004) times the mean of 0.4 % respecting your Guesstimate model. Which one do you stand by? Since you say "CCEI is part of a much smaller coalition of only hundreds of key movers and shakers", the smaller effect of 0.4 % (= 1/250) would be more appropriate assuming the same contribution for each member of such coalition.

I think you had better estimate the expected cost-effectiveness in t/$ instead of $/t:

  • The expected benefits in t are equal to the product of the cost and the expected cost-effectiveness in t/$[1], not to the ratio of the cost to the expected cost-effectiveness in $/t[2].
    • I appreciate the cost-effectiveness you present in your results table was correctly obtained with the 1st of the above methods. However, people could interpret it as referring to the mean cost per benefit, which would not be correct (since E(1/X) is not equal to 1/E(X); see the numerical sketch after the footnotes).
    • In your Guesstimate model, you estimate the expected cost per benefit, which is not directly comparable to the expected benefit per cost that you calculated with UseCarlo.
  • The benefits can often be 0, thus resulting in numerical instabilities in the cost-effectiveness in $/t, although this does not apply to your case.
  1. ^

    E("benefits (t)") = E("cost ($)"*"cost-effectiveness (t/$)") = "cost ($)"*E("cost-effectiveness (t/$)").

  2. ^

    E("benefits (t)") = E("cost ($)"/"cost-effectiveness ($/t)") = "cost ($)"*E(1/"cost-effectiveness ($/t)") != "cost ($)"/E("cost-effectiveness ($/t)").
