mtrazzi


Comments

Norms and features for the Forum

LessWrong has been A/B testing a voting system for "agree/disagree", separate from karma. I would suggest contacting the LW team to learn 1) the results of their experiments and 2) how easy it would be to copy the feature over to the EAF (since the codebases used to be the same).

EA and the current funding situation

Thanks for the thoughtful post. (Cross-posting a comment I made on Nick's recent post.)

My understanding is that people on the EAF were mostly speculating about the rejection rate for the FTX Future Fund's grants and the distribution of $ per grantee. What might have caused the propagation of "free-spending" EA stories:

  • the selection bias at EAG(X) conferences, where there was a high % of grantees
  • the fact that the FTX Future Fund did not (afaik) release their rejection rate publicly
  • other grants made by other orgs happening concurrently (e.g. CEA)

This post helped me clarify my thoughts on this. In particular, I found this sentence useful for shedding light on the rejection rate situation:

 "For example, Future Fund is trying to scale up its giving rapidly, but in the recent open call it rejected over 95% of applications" 

Some clarifications on the Future Fund's approach to grantmaking

My understanding is that people on the EAF were mostly speculating about the rejection rate and the distribution of $ per grantee. What might have caused the propagation of "free-spending" EA stories:

  • the selection bias at EAG(X) conferences, where there was a high % of grantees
  • the fact that FTX did not (afaik) release their rejection rate publicly
  • other grants made by other orgs happening concurrently (e.g. CEA)

I found this sentence in Will's recent post useful for shedding light on the rejection rate situation: "For example, Future Fund is trying to scale up its giving rapidly, but in the recent open call it rejected over 95% of applications".

Peter Wildeford on Forecasting Nuclear Risk and why EA should fund scalable non-profits

Note: the probabilities in the above quotes and in the podcast are the result of armchair forecasting. Please do not quote Peter on this. (I want to give my guests some space to share intuitions about their estimates without having to worry about being extra careful.)

How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe?

To make that question more precise, we're trying to estimate xrisk_{counterfactual world without those people} - xrisk_{our world}, with xrisk_{our world}~1/6 if we stick to The Precipice's estimate.

Let's assume that the x-risk research community completely vanishes right now (including its past outputs and all the research it would have created). It's hard to quantify, but I would personally be at least twice as worried about AI risk as I am right now (I am unsure how much it would affect nuclear / climate change / natural disaster / engineered pandemic and other risks).
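To make the arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The ~1/6 total from The Precipice and the "at least twice as worried about AI risk" factor come from the text above; the AI share of total x-risk and the choice to hold all other risks fixed are placeholder assumptions for illustration only, not estimates.

```python
# Toy Fermi estimate of xrisk_{counterfactual world without those people} - xrisk_{our world}.
# All numbers below are illustrative placeholders, not claims.

total_xrisk = 1 / 6      # The Precipice's overall estimate (from the comment above)
ai_share = 0.10          # assumed AI component of total x-risk (placeholder)
other_share = total_xrisk - ai_share

# Assumption from the comment: without the community (past and future outputs),
# worry about AI risk at least doubles; other risks are treated as unchanged here.
ai_multiplier = 2.0

xrisk_without_community = min(1.0, other_share + ai_share * ai_multiplier)
delta = xrisk_without_community - total_xrisk

print(f"Estimated x-risk reduction attributable to the community: {delta:.3f}")
print(f"Equivalent number of 0.01% (1e-4) chunks: {delta / 1e-4:.0f}")
```

Under these placeholder numbers the community accounts for roughly 0.10 of absolute x-risk, i.e. about a thousand 0.01% chunks, which is what the questions below would then have to apportion between EA funding and everything else.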

Now, how much of the "community" was actually funded by "EA $"? How many of those researchers would not be capable of the same level of output without the funding we currently have? How much of the x-risk reduction is actually due to our impact in the past (e.g. new sub-fields of x-risk research being created, where progress is now (indirectly) being made by people outside the x-risk community) vs. researcher hours today? What fraction of those researchers would still be working on x-risk on the side even if their work weren't fully funded by "EA $"?