I have a few drafts which could use that, send me a message if you feel like doing that.
Could you change the deadline to June 5th? This seems like a potentially good fit for some readers of my forecasting newsletter, which goes out at the beginning of each month.
Yesterday I told some EA friends that I strongly identify as an EA and with the EA project while being fundamentally agnostic about the EA movement/community.
This seems like a useful distinction which puts words to something in the back of my mind, thanks.
Thanks for the detailed answers!
Right, thanks
So as you discuss, this survey suffers from selection bias. At the time, I suggested[1] looking at the SlateStarCodex survey results instead, filtering by self-reported EA affiliation. I can't find the results for 2021 or 2022, but using results from 2020:
## Helpers
formatAsPercent <- function(float) {
  return(sprintf("%0.1f%%", float * 100))
}
## Body
data <- read.csv("2020ssc_public.csv", header = TRUE, stringsAsFactors = FALSE)
data_EAs <- data[data$EAID == "Yes", ]
n <- nrow(data_EAs)
n ## 993 EAs answered the survey.
mental_illnesses <- colna
So improving institutional decision-making doesn't seem like it's a new cause area. You've been working on it for quite a while.
potentially offensive, but they actually say novel or interesting things.
This seems like a factor. But you also have A Preliminary Model of Mission-Correlated Investing, or Josh Morrison's calls for research on an advanced market commitment on vaccines, which are quite good, but not the popular kids this time around.
FWIW, I would be a regular reader of Nuno's monthly (or some other interval) forum digest.
Thanks Nora. Note that you can in fact sign up.
I appreciated this summary
I appreciated the... ruthlessness.
I would also be curious if you can come up with a project you'd be more excited to lead.
how do I go about applying for funding
I would say: come up with a Fermi estimate of where the value proposition is coming from, e.g., from preventing obesity or from being a good investment. Then apply either to the relevant EA Fund or to the Future Fund.
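A Fermi estimate of the kind suggested above can be as simple as multiplying a few explicit factors. Here's a minimal sketch; every number is made up purely for illustration, not taken from any actual analysis:

```python
# Hypothetical Fermi estimate of annual value from, e.g., preventing obesity.
# All numbers below are illustrative guesses, not real data.
people_reached = 10_000
fraction_helped = 0.05          # guess: 5% see a lasting effect
value_per_person_usd = 2_000    # guess: value per person actually helped
annual_value_usd = people_reached * fraction_helped * value_per_person_usd
print(annual_value_usd)  # 1000000
```

The point is less the number than making each factor explicit, so a grantmaker can disagree with one multiplier at a time.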
proper funding
Can you give a range? Also, how much would you be happy to sell a prototype for?
This would be cool to fund as a bet on success, e.g., to give you/your early-stage funders a $10M prize if you "actually solve the replication crisis in social science" (or a much lower amount if you hit your milestones but no transformative change occurs). This would allow larger funders for whom you are less legible to create incentives for others who are more familiar with your work to fund you.
I got around halfway through. Some random comments, no need to respond:
The possibility of try-once steps allows one to reject the existence of hard try-try steps, but instead to suppose very hard try-once steps.
May I ask how you went about writing and formatting this? What format/program did you use to draft it? (my first guess is some kind of markdown) Was it easy to format for EA forum? Did you use any neat tricks for embedding code output (e.g. with rmarkdown)?
Not the OP, but I usually start in markdown and convert it to HTML using pandoc (with the `-r gfm` option). Then I copy the HTML into a Google Doc, share it with people, rework it after their comments, post it to the EA Forum, and then download the HTML from the forum's GraphQL API, which I convert to markdown for my own archives.
Swapcard functionality into the forum
Seems a bit unlikely; I created a market on this here.
See also Gordon Irlam's Back of the Envelope Guide to Philanthropy. Maybe not exactly what you are asking, though it's in the vicinity.
I thought this was excellent. Do you have any thoughts on further ways to "extend the GiveWell model"? For instance, GiveWell could pay organizations that took a risk by incubating or investing in charities that have now reached top or standout status?
The traditional answer is, I think, to join but then to only work there for one year before moving on.
GiveWell's perspective on StrongMinds, for anyone interested.
We analysed data from more than 140,000 participants across 80 studies to show that providing group psychotherapy to people with depression in low- and middle-income countries is around 10 times more cost-effective than providing cash transfers to people living in extreme poverty.
How sure are you of this conclusion? E.g., what rough probability would you assign that if you did a new review in 15 years it would come out the other way? Also, how are you taking into account long-run effects?
Conditional on immortality being achievable, we might also care about the hands in whom it is achieved. And if there isn't much altruistic investment, it might by default fall into the hands of inordinately selfish billionaires.
I'm not planning on doing this, but I would be somewhat curious to see the BOTEC as a multiplication of intervals, not of point estimates, e.g., with Guesstimate.
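For what it's worth, a BOTEC over intervals rather than point estimates can be sketched in a few lines by sampling each factor, in the spirit of Guesstimate. The distributions below are arbitrary placeholders, not the original BOTEC's inputs:

```python
import random

random.seed(0)

def sample_product(n=100_000):
    # Multiply two uncertain factors, each modeled (as an assumption)
    # as a lognormal, then read off an empirical 90% interval.
    samples = []
    for _ in range(n):
        a = random.lognormvariate(0, 0.5)   # placeholder factor, centered near 1
        b = random.lognormvariate(2, 0.5)   # placeholder factor, centered near e^2
        samples.append(a * b)
    samples.sort()
    return samples[int(0.05 * n)], samples[int(0.95 * n)]

low, high = sample_product()
print(low < high)  # True
```

Multiplying intervals this way propagates uncertainty through the product, which a multiplication of point estimates silently discards.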
See My bargain with the EA machine for the converse perspective.
See also: Altruism as a central purpose for the converse perspective.
I agree that the title could have used some work, but I thought that the videos themselves were quite good. Kudos on introducing such a large audience to EA concepts.
Quartz wrote a good piece on it, though I can't remember off the top of my head if it covers that: https://qz.com/2069284/facebook-is-shutting-down-its-experimental-app-forecast/
iirc, the project lead left for a startup.
I thought this was a good idea. I have submitted this as an issue here: https://github.com/ForumMagnum/ForumMagnum/issues/4825
I'm guessing $10M-$50M to something like Impossible Foods, and $50M-$100M to political causes.
I guess this is fine
This is not fine
From SamotsvetyForecasting/optimal-scoring:
This git repository outlines three scoring rules that I believe might serve forecasting platforms better than current alternatives. The motivation behind it is my frustration with scoring rules as used in current forecasting platforms, like Metaculus, Good Judgment Open, Manifold Markets, INFER, and others. In Sempere and Lawsen, we outlined and categorized how current scoring rules go wrong, and I think that the three new scoring rules I propose avoid the pitfalls outlined in that paper.
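The three proposed rules themselves aren't reproduced in this excerpt; for context, a standard proper scoring rule of the kind these platforms build on is the Brier score:

```python
def brier(prob, outcome):
    # Brier score for a binary forecast: squared distance between the
    # stated probability and the realized 0/1 outcome (lower is better).
    return (prob - outcome) ** 2

print(round(brier(0.8, 1), 2))  # 0.04
```

Proper scoring rules like this reward honest probabilities in expectation; the repository's critique concerns how platforms deploy such rules in practice (e.g., incentives around timing and question selection), not this basic property.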
Open Philanthropy’s grants so far, roughly:
This only includes the top 8 areas. “Other areas” refers to grants tagged “Other areas” in OpenPhil’s database. So there are around $47M in known donations missing from that graph. There is also one (I presume fairly large) donation amount missing from OP’s database, to Impossible Foods.
See also as a tweet and on my blog. Thanks to @tmkadamcz for suggesting I use a bar chart.
Strongly upvoted, but this should be its own top-level post.
Would recommend; INFER's team functionality is great.
I would also be extremely curious about how this estimate affects the author's personal decision-making. For instance, are you avoiding major cities? Do you think that the risk is much lower on one side of the Atlantic? Would you advise a thousand people not to travel to London to attend a conference due to the increased risk?
Comparisons are an old and very well-developed area of statistics.
Yeah, but it's not clear to me that discrete choice is a good fit for the kind of thing that I'm trying to do (though I've downloaded a few textbooks, and I'll find out). I agree that UX is important.
A big part of it is even just the inscrutable failure rate of complex early-warning systems composed of software, ground- and space-based sensors, and communications infrastructure.
This list of nuclear close calls has 16 elements. Laplace's law of succession would give a close call a 5.8% chance of resulting in a nuclear detonation. Again per Laplace's law, with 16 close calls in (2022-1953) years, this would imply a (16+1)/(2022-1953+2) = 24% chance of seeing a close call each year. Combining the two forecasts gives us 24% of 5.8%, which is 1.4%/year. But earlier warning syst…
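The arithmetic above can be checked mechanically. The 5.8% close-call-to-detonation figure is taken directly from the comment rather than re-derived:

```python
def laplace(successes, trials):
    # Laplace's rule of succession: (successes + 1) / (trials + 2)
    return (successes + 1) / (trials + 2)

p_close_call_per_year = laplace(16, 2022 - 1953)   # (16+1)/(69+2) ≈ 0.24
p_detonation_given_close_call = 0.058              # figure quoted in the comment
p_detonation_per_year = p_close_call_per_year * p_detonation_given_close_call
print(round(p_detonation_per_year, 3))  # 0.014
```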
On Laplace's law and Knightian uncertainty
Calculating the baseline probability of a nuclear war between the United States and Russia—i.e., the probability pre-Ukraine invasion and absent conventional confrontation—is difficult because no one has ever fought a nuclear war. Nuclear war is not analogous to conventional war, so it is difficult to form even rough comparison classes that would provide a base rate. It is a unique situation that approaches true Knightian uncertainty, muddying attempts to characterize it in terms of probabilistic risk. That said, f…
Wiblin's triggers to bolt away:
Hey, thanks for the review. Perhaps unsurprisingly, we thought you were stronger when talking about the players in the field and the tools they have at their disposal, but weaker when talking about how this cashes out in terms of probabilities, particularly around parameters you consider to have "Knightian uncertainty".
We liked your overview of post-Soviet developments and thought it was a good summary of reasons to be pessimistic. We would also have hoped for an expert overview of reasons for optimism (which an ensemble of experts could provide). For in…
Personally, I'd be very interested in such a blog post.
My sense is that the mathematized version would be much more valuable (for instance, I could incorporate it into my tooling), but also harder to obtain than you might realize.
Well, until recently you sought to be and were in fact very anonymous (e.g., it's easier to claim to be a uni professor than to actually be one).
Given that you were anonymous, you wouldn't have been exposed to enormous legal trouble.
You advertised that you would pay within 24h, but you in fact took more than that, and after doing so paid me from an FTX account (so you are not, e.g., holding your users' funds but instead, I presume, doing some smart, minimally risky, yet still very opaque stuff with users' funds). I'm ok with this, but would prefer tr…
Infinite Ethics 101: Stochastic and Statewise Dominance as a Backup Decision Theory when Expected Values Fail
First posted on nunosempere.com/blog/2022/05/20/infinite-ethics-101, and written after one too many encounters with someone who didn't know what to do with infinite expected values.
In Exceeding expectations: stochastic dominance as a general decision theory, Christian Tarsney presents stochastic dominance (to be defined) as a total replacement for expected value as a decision theory. He wants to argue that one decision is only ratio…
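As a rough illustration of the "to be defined" notion (my own sketch, not Tarsney's formalism): lottery A first-order stochastically dominates lottery B when A's CDF lies at or below B's everywhere, and strictly below it somewhere. For discrete samples of outcomes:

```python
def cdf(samples, x):
    # Empirical CDF: fraction of outcomes at or below x.
    return sum(1 for s in samples if s <= x) / len(samples)

def stochastically_dominates(a, b):
    # First-order stochastic dominance: F_a(x) <= F_b(x) at every point
    # of the joint support, with strict inequality somewhere.
    support = sorted(set(a) | set(b))
    leq = all(cdf(a, x) <= cdf(b, x) for x in support)
    strict = any(cdf(a, x) < cdf(b, x) for x in support)
    return leq and strict

# A pays at least as much as B in every state and strictly more in some,
# so A dominates B.
print(stochastically_dominates([2, 3, 4], [1, 2, 3]))  # True
```

The appeal as a backup decision theory is that this comparison stays well-defined even when expected values are infinite or undefined.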