All of NunoSempere's Comments + Replies

NunoSempere's Shortform

Infinite Ethics 101: Stochastic and Statewise Dominance as a Backup Decision Theory when Expected Values Fail

First posted on nunosempere.com/blog/2022/05/20/infinite-ethics-101, and written after one too many encounters with someone who didn't know what to do with infinite expected values.

In Exceeding expectations: stochastic dominance as a general decision theory, Christian Tarsney presents stochastic dominance (to be defined) as a total replacement for expected value as a decision theory. He wants to argue that one decision is only ratio... (read more)

3Kevin Lacker9h
You could discount utilons - say there is a “meta-utilon” which is a function of utilons, like maybe meta-utilons = log(utilons). And then you could maximize expected meta-utilons rather than expected utilons. Then I think stochastic dominance is equivalent to saying “better for any non-decreasing meta-utilon function”. But you could also pick a single meta-utilon function, and I believe the outcome would at least be consistent. Really you might as well call the meta-utilons “utilons” though. They are just not necessarily additive.
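As a quick numerical illustration of the equivalence Kevin describes (not part of his comment; the lotteries and transforms below are made up for the example), in R:

## Two lotteries over the same utilon payoffs; A piles more probability on high payoffs.
outcomes <- c(1, 10, 100, 1000)
pA <- c(0.1, 0.2, 0.3, 0.4)
pB <- c(0.4, 0.3, 0.2, 0.1)

## First-order stochastic dominance: A's CDF lies weakly below B's everywhere.
all(cumsum(pA) <= cumsum(pB))  # TRUE, so A dominates B

## Expected "meta-utilons" under a few arbitrary non-decreasing transforms of utilons.
transforms <- list(identity = identity, log = log, sqrt = sqrt,
                   bounded = function(u) u / (u + 1))
sapply(transforms, function(f) c(A = sum(pA * f(outcomes)),
                                 B = sum(pB * f(outcomes))))
## A's row is at least as large as B's in every column, consistent with the equivalence above.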
NegativeNuno's Shortform

I have a few drafts which could use that, send me a message if you feel like doing that.

Request for proposals: Help Open Philanthropy quantify biological risk

Could you change the deadline to June 5th? This seems like a potentially good fit for some readers of my forecasting newsletter, which goes out at the beginning of each month.

1djbinder4d
Thanks for pointing this out, but unfortunately we cannot shift the submission deadline.
Against “longtermist” as an identity

Yesterday I told some EA friends that I strongly identify as an EA and with the EA project while being fundamentally agnostic on the EA movement/community

This seems like a useful distinction, which puts words to something in the back of my mind. Thanks.

2Zach Stein-Perlman7d
Yes, the former are basically normative while the latter is largely empirical and I think it's useful to separate them. (And Lizka does something similar in this post.)
2021 EA Mental Health Survey Results

So as you discuss, this survey suffers from selection bias. At the time, I suggested[1] looking at SlateStarCodex results instead, filtering by self-reported EA affiliation. I can't find the results for 2021 or 2022, but using results from 2020:

## Helpers
formatAsPercent <- function (float){
  return(sprintf("%0.1f%%", float * 100))
}

## Body
data <- read.csv("2020ssc_public.csv", header=TRUE, stringsAsFactors = FALSE)
data_EAs <- data[data$EAID == "Yes",] # keep respondents who self-identify as EAs
n <- nrow(data_EAs)
n ## 993 EAs answered the survey.

mental_illnesses <- colna
... (read more)
Big List of Cause Candidates: January 2021–March 2022 update

So improving institutional decision-making doesn't seem like it's a new cause area. You've been working on it for quite a while.

2IanDavidMoss19d
I guess I'm confused then, since there are some others on the list with longer histories in the movement, including cause prioritization research, biosecurity, s-risks, and EA meta. Arguably democracy promotion, lead exposure, and animal-free proteins as well. Maybe it would be helpful to clarify the criteria for inclusion?
EA Forum Lowdown: April 2022

potentially offensive, but they actually say novel or interesting things. 

This seems like a factor. But you also have A Preliminary Model of Mission-Correlated Investing, or Josh Morrison's calls for research on an advanced market commitment on vaccines, which are quite good, but not the popular kids this time around.

EA Forum Lowdown: April 2022

FWIW I would be a regular reader of Nuno's monthly (or some other interval) forum digest

Thanks Nora. Note that you can in fact sign up.

[Needs Funding] I invented a cheap, scalable tool for fighting obesity

I would also be curious if you can come up with a project you'd be more excited to lead.

[Needs Funding] I invented a cheap, scalable tool for fighting obesity

how do I go about applying for funding

I would say, come up with a Fermi estimate of where the value proposition is coming from, e.g. from preventing obesity, from being a good investment, etc. Then apply to either the relevant EA Fund or to the Future Fund.
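To give a sense of what I mean by a Fermi estimate, here is a sketch in R; every number below is a placeholder assumption rather than a claim about this particular project:

## Illustrative back-of-the-envelope for an obesity-prevention tool (placeholder inputs only).
people_reached     <- 1e5    # placeholder: people who end up using the tool
p_prevents_obesity <- 0.01   # placeholder: chance the tool prevents obesity per user
dalys_per_case     <- 2      # placeholder: DALYs averted per prevented case
funding_ask        <- 5e5    # placeholder: dollars requested

dalys_averted <- people_reached * p_prevents_obesity * dalys_per_case
funding_ask / dalys_averted  # cost per DALY averted, to compare against funders' bars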

5NunoSempere21d
I would also be curious if you can come up with a project you'd be more excited to lead.
[Needs Funding] I invented a cheap, scalable tool for fighting obesity

proper funding

Can you give a range? Also, how much would you be happy to sell a prototype for?

Solving the replication crisis (FTX proposal)

This would be cool to fund as a bet on success, e.g., to give you/your early-stage funders a $10M prize if you "actually solve the replication crisis in social science" (or a much lower amount if you hit your milestones but no transformative change occurs). This would allow larger funders for whom you are less legible to create incentives for others who are more familiar with your work to fund you.

Replicating and extending the grabby aliens model

I got around halfway through. Some random comments, no need to respond:

  • What is the highest probability of encountering aliens in the next 1000 years according to reasonable choices one could make in your model?

The possibility of try-once steps allows one to reject the existence of hard try-try steps, but suppose very hard try-once steps.

  • I'm not seeing why this is. Why is that the case?
  • Sometimes you just give a prior, e.g., your prior on d, where I don't really know where it comes from. If it wouldn't take too much time, it might be worth it to quickly mot
... (read more)
3Tristan Cook18d
Thanks for your questions and comments! I really appreciate someone reading through in such detail :-) SIA (with no simulations) gives the nearest and most numerous aliens. My bullish prior (which a priori has 80% credence in us not being alone) with SIA and the assumption that grabby aliens are hiding gives a median of ~2.5 ⋅ 10⁻⁶ for the chance of a grabby civilization reaching us in the next 1000 years. I don't condition on us not having any ICs in our past light cone. When conditioning on not being inside a GC, SIA is pretty confident (~80% certain) that we have at least one IC (origin planet) in our past light cone. When conditioning on not seeing any GCs, SIA thinks ~50% that there's at least one IC in our past light cone. Even if their origin planet is in our light cone, they may already be dead. Thanks for the suggestion, this was definitely an oversight. I'll add in some text to motivate each prior.
  • My prior for d, the sum of delay and fuse steps: by definition it is bounded above by the time until now and bounded below by zero.
  • I set the median to ~0.5 Gy. The median is both to account for the potential delay in the Earth first becoming habitable (since the range of estimates around the first life appearing is ~600 My) and to be roughly in line with estimates for the time that plants took to oxygenate the atmosphere (a potential delay/fuse step).
  • My prior, LogUniform(0.1 Gy, 4.5 Gy), roughly fits these criteria.
  • My prior for w is pretty arbitrarily chosen. Here's a post-hoc (motivated) semi-justification for the prior. Wikipedia discusses ~8 possible factors for Rare Earths. If there are Binomial(n=8, p=0.25) necessary Rare-Earth-like factors for life, each with a LogUniform(0.01, 1) fraction of planets having the property, then my prior on w isn't awfully off.
  • If one thinks that between 0.1 and 1 fraction of all planets have each of the eight factors (and they are independent), something roughly similar
2Daniel_Eth18d
" * I'm not seeing why this is. Why is that the case? " Because if (say) only 1/10^30 stars has a planet with just the right initial conditions to allow for the evolution of intelligent life, then that fully explains the Great Filter, and we don't need to posit that any of the try-try steps are hard (of course, they still could be).
Replicating and extending the grabby aliens model

May I ask how you went about writing and formatting this? What format/program did you use to draft it? (my first guess is some kind of markdown) Was it easy to format for EA forum? Did you use any neat tricks for embedding code output (e.g. with rmarkdown)?

Not the OP, but I usually start in markdown and convert it to HTML using pandoc (with the -r gfm flag). Then I copy the HTML to Google Docs, share it with people, rework it after their comments, post it to the EA Forum, and then download the HTML from the forum's GraphQL API, which I convert to markdown for my own archives.

3Tristan Cook18d
This definitely sounds like a better approach than mine, thanks for sharing! This will be useful for me for any future projects
EAG is over, but don't delete Swapcard

Swapcard functionality into the forum

Seems a bit unlikely; I created a market on this here.

7Ruby20d
I don't mean that I expect EA Forum software to replace Swapcard for EAG itself probably, just that the goal is to provide similar functionality all year round.
Are there any uber-analyses of GiveWell/ACE top charities?

See also Gordon Irlam's Back of the Envelope Guide to Philanthropy. Maybe not exactly what you are asking, though it's in the vicinity.

3Vasco Grilo21d
Thanks for the feedback! I am aware of that guide, but it seems to lack the incorporation of Bayesian reasoning (it is highlighted here [https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/] as an example of an explicit expected value approach).
Market Design Meets Effective Altruism

I thought this was excellent. Do you have any thoughts on further ways to "extend the GiveWell model"? For instance, GiveWell could pay organizations which took a risk incubating or investing in charities which have now reached top or standout status?

1Marshall16d
Thanks for reading and for the support! I like the suggestion of a prize, which could encourage some risk-taking but also orient investment towards an objectively defined goal. Another way to extend the model would be for more and more venture philanthropists to explicitly adopt this strategy of funding projects that push towards GiveWell listing, recognizing that this is their opportunity to "exit" (i.e., move on to funding other promising projects).
1Austin20d
Yup, I think that should be possible. Here's a (very wip) writeup of how this could work: https://manifoldmarkets.notion.site/Charity-Equity-2bc1c7a411b9460b9b7a5707f3667db8 [https://manifoldmarkets.notion.site/Charity-Equity-2bc1c7a411b9460b9b7a5707f3667db8]
Is working for Meta a good or bad option?

The traditional answer is, I think, to join but then to only work there for one year before moving on.

Happier Lives Institute: 2021 Annual Review

We analysed data from more than 140,000 participants across 80 studies to show that providing group psychotherapy to people with depression in low- and middle-income countries is around 10 times more cost-effective than providing cash transfers to people living in extreme poverty

How sure are you of this conclusion? E.g., what rough probability would you assign that if you did a new review in 15 years it would come out the other way? Also, how are you taking into account long-run effects?

Against immortality?

Conditional on immortality being achievable, we might also care about the hands in whom it is achieved. And if there isn't much altruistic investment, it might by default fall into the hands of inordinately selfish billionaires.

Open Philanthropy Shallow Investigation: Civil Conflict Reduction

I'm not planning on doing this, but I would be somewhat curious to see the BOTEC as a multiplication of intervals, not of point estimates, e.g., with Guesstimate.
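To illustrate the kind of thing I have in mind, here is a sketch in R with made-up 90% intervals (not Open Philanthropy's actual numbers):

## Multiply intervals Guesstimate-style by sampling, rather than multiplying point estimates.
## to_lognormal turns a (low, high) 90% confidence interval into lognormal samples.
to_lognormal <- function(low, high, n = 1e5) {
  mu    <- (log(low) + log(high)) / 2
  sigma <- (log(high) - log(low)) / (2 * qnorm(0.95))
  rlnorm(n, meanlog = mu, sdlog = sigma)
}

people_affected   <- to_lognormal(1e5, 1e7)     # placeholder interval
effect_per_person <- to_lognormal(0.001, 0.1)   # placeholder interval
p_success         <- runif(1e5, 0.05, 0.5)      # placeholder interval

impact <- people_affected * effect_per_person * p_success
quantile(impact, c(0.05, 0.5, 0.95))            # an interval as output, not a point estimate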

Why Consider Supporting SEC Climate Disclosures

I follow a conservative economist who likes to rant about how this is a terrible, terrible idea. See here and here.

3Ben Yeoh14d
I link to John in the original post (though that was on his Fed commentary, which is similar). You can probably mitigate some of John's and Hester's (whom I also link to) concerns while still allowing for the data and disclosure part. If you take the steelman version of those arguments, the problem is that investors are not doing enough litigation. They both argue that these disclosures are covered by existing regulation. I have some sympathy for this point, as it is meant to be covered. But often, in reality, it is not. Currently we rely on the auditor's/manager's judgement that this is a material risk. The only way to then get the disclosure would be to litigate and claim these are material risks. The costs part of their argument seems to me overstated, but it is a true trade-off.
Using TikTok to indoctrinate the masses to EA

I agree that the title could have used some work, but I thought that the videos themselves were quite good. Kudos on introducing such a large audience to EA concepts.

HaukeHillebrandt's Shortform

I thought this was a good idea. I have submitted this as an issue here: https://github.com/ForumMagnum/ForumMagnum/issues/4825

NunoSempere's Shortform

I'm guessing $10M-$50M to something like Impossible Foods, and $50M-$100M to political causes.

2Aaron Gertler21d
We publish our giving to political causes just as we publish our other giving (e.g. this ballot initiative [https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/californians-against-pandemics-ballot-initiative] ). As with contractor agreements, we publish investments and include them in our total giving if they are conceptually similar to grants (meaning that investments aren't part of the gap James noted). You can see a list of published investments by searching "investment" in our grants database [https://www.openphilanthropy.org/giving/grants].
Nathan Young's Shortform

I guess this is fine

This is not fine

2Nathan Young25d
I dunno. I thought I'd surface.
NunoSempere's Shortform

Better scoring rules

From SamotsvetyForecasting/optimal-scoring:

This git repository outlines three scoring rules that I believe might serve current forecasting platforms better than current alternatives. The motivation behind it is my frustration with scoring rules as used in current forecasting platforms, like Metaculus, Good Judgment Open, Manifold Markets, INFER, and others. In Sempere and Lawsen, we outlined and categorized how current scoring rules go wrong, and I think that the three new scoring rules I propose avoid the pitfalls outlined in that pape

... (read more)
NunoSempere's Shortform

Open Philanthropy’s allocation by cause area

Open Philanthropy’s grants so far, roughly:

This only includes the top 8 areas. “Other areas” refers to grants tagged “Other areas” in OpenPhil’s database. So there are around $47M in known donations missing from that graph. There is also one (I presume fairly large) donation amount missing from OP’s database, to Impossible Foods.

See also as a tweet and on my blog. Thanks to @tmkadamcz for suggesting I use a bar chart.

6James Ozden1mo
One thing I can never figure out is where the missing Open Phil donations are! According to their own internal comms (e.g. this job advert [https://jobs.ashbyhq.com/openphilanthropy/893a7160-00d1-43fe-8cfe-47bc83e2cb5d] ) they gave away roughly $450 million in 2021. Yet when you look at their grants database, you only find about $350 million, which is a fair bit short. Any idea why this might be? I think it could be something to do with contractor agreements (e.g. they gave $2.8 million to Kurzgesagt [https://www.openphilanthropy.org/giving/grants/kurzgesagt-video-creation-and-translation] and said they don't tend to publish similar contractor agreements like these). Curious to see the breakdown of the other approx. $100 million though!
4RyanCarey1mo
I did a variation on this analysis here: https://github.com/RyanCarey/openphil [https://github.com/RyanCarey/openphil]
2Chris Leong1mo
Any thoughts on why AI keeps expanding then shrinking? Is it due to 2-year grants?
Unsurprising things about the EA movement that surprised me

Strongly upvoted, but this should be its own top-level post.

Launching the INFER Forecasting Tournament for EA uni groups

Would recommend; INFER's team functionality is great.

Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast

I would also be extremely curious about how this estimate affects the author's personal decision-making. For instance, are you avoiding major cities? Do you think that the risk is much lower on one side of the Atlantic? Would you advise for a thousand people to not travel to London to attend a conference due to the increased risk?

1Jhrosenberg15d
Peter says: No, I live in Washington, DC, a few blocks from the White House, and I’m not suggesting evacuation at the moment because I think conventional conflict would precede nuclear conflict. But if we start trading bullets with Russian forces, the odds of nuclear weapons use go up sharply. And, yes, I do believe risk is higher in Europe than in the United States. But for the moment, I’d happily attend a conference in London.
Simple comparison polling to create utility functions

Comparisons are an old and very well-developed area of statistics

Yeah, but it's not clear to me that discrete choice is a good fit for the kind of thing that I'm trying to do (though I've downloaded a few textbooks, and I'll find out). I agree that UX is important.

Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast

A big part of it is even just the inscrutable failure rate of complex early warning systems composed of software, ground/space based sensors and communications infrastructure

This list of nuclear close calls has 16 elements. Laplace's law of succession would give a close call a 5.8% chance of resulting in a nuclear detonation. Again per Laplace's law, with 16 close calls in (2022-1953) years, this would imply a (16+1)/(2022-1953+2) = 24% chance of seeing a close call each year. Combining the two forecasts gives us 24% of 5.8%, which is 1.4%/year. But earlier warning syst... (read more)
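For reference, the arithmetic in R, using the standard (successes + 1)/(trials + 2) form of the rule of succession; the exact figures shift slightly depending on which variant one uses:

laplace <- function(successes, trials) (successes + 1) / (trials + 2)

p_close_call_per_year <- laplace(16, 2022 - 1953)  # ~0.24, as above
p_detonation_per_call <- laplace(0, 16)            # ~0.056, close to the ~5.8% quoted above

p_close_call_per_year * p_detonation_per_call      # ~0.013/year, in the ballpark of the 1.4%/year above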

Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast

On Laplace's law and Knightian uncertainty

Calculating the baseline probability of a nuclear war between the United States and Russia—i.e., the probability pre-Ukraine invasion and absent conventional confrontation—is difficult because no one has ever fought a nuclear war. Nuclear war is not analogous to conventional war, so it is difficult to form even rough comparison classes that would provide a base rate. It is a unique situation that approaches true Knightian uncertainty, muddying attempts to characterize it in terms of probabilistic risk. That said, f

... (read more)
1Jhrosenberg15d
(Below written by Peter in collaboration with Josh.)

It sounds like I have a somewhat different view of Knightian uncertainty, which is fine—I’m not sure that it substantially affects what we’re trying to accomplish. I’ll simply say that, to the extent that Knight saw uncertainty as signifying the absence of “statistics of past experience,” nuclear war strikes me as pretty close to a definitional example.

I think we make the forecasting challenge easier by breaking the problem into pieces, moving us closer to risk. That’s one reason I wanted to add conventional conflict between NATO and Russia as an explicit condition: NATO has a long history of confronting Russia and, by and large, managed to avoid direct combat. By contrast, the extremely limited history of nuclear war does not enable us to validate any particular model of the risk. I fear that the assumptions behind the models you cite may not work out well in practice and would like to see how they perform in a variety of as-similar-as-possible real world forecasts. That said, I am open to these being useful ways to model the risk. Are you aware of attempts to validate these types of methods as applied to forecasting rare events?

On the ignorance prior: I agree that not all complex, debatable issues imply probabilities close to 50-50. However, your forecast will be sensitive to how you define the universe of "possible outcomes" that you see as roughly equally likely from an ignorance prior. Why not define the possible outcomes as: one-off accident, containment on one battlefield in Ukraine, containment in one region in Ukraine, containment in Ukraine, containment in Ukraine and immediately surrounding countries, etc.? Defining the ignorance prior universe in this way could stack the deck in favor of containment and lead to a very low probability of large-scale nuclear war. How can we adjudicate what a naive, unbiased description of the universe of outcomes would be? As I noted, my view of the landscape is d
Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast

Hey, thanks for the review. Perhaps unsurprisingly, we thought you were stronger when talking about the players in the field and the tools they have at their disposal, but weaker when talking about how this cashes out in terms of probabilities, particularly around parameters you consider to have "Knightian uncertainty".

We liked your overview of post-Soviet developments, and thought it was a good overview of reasons to be pessimistic. We would also have hoped for an expert overview of reasons for optimism (which an ensemble of experts could provide). For in... (read more)

1Jhrosenberg15d
Thanks for the reply and the thoughtful analysis, Misha and Nuño, and please accept our apologies for the delayed response. The below was written by Peter in collaboration with Josh.

First, regarding the Rodriguez estimate, I take your point about the geometric mean rather than arithmetic mean and that would move my probability of risk of nuclear war down a bit — thanks for pointing that out. To be honest, I had not dug into the details of the Rodriguez estimate and was attempting to remove your downward adjustment from it due to "new de-escalation methods" since I was not convinced by that point. To give a better independent estimate on this I'd need to dig into the original analysis and do some further thinking of my own. I'm curious: How much of an adjustment were you making based on the "new de-escalation methods" point?

Regarding some of the other points:
  • On "informed and unbiased actors": I agree that if someone were following Rob Wiblin's triggers, they'd have a much higher probability of escape. However, I find the construction of the precise forecasting question somewhat confusing and, from context, had been interpreting it to mean that you were considering the probability that informed and unbiased actors would be able to escape after Russia/NATO nuclear warfare had begun but before London had been hit, which made me pessimistic because that seems like a fairly late trigger for escape. However, it seems that this was not your intention. If you're assuming something closer to Wiblin's triggers before Russia/NATO nuclear warfare begins, I'd expect a greater chance of escape, like you do. I would still have questions about how able/willing such people would be to potentially stay out of London for months at a time (as may be implied by some of Wiblin's triggers) and what fraction of readers would truly follow that protocol, though. As you say, perhaps it makes most sense for people to judge this for themselves, but

Simple comparison polling to create utility functions

My sense is that the mathematized version would be much more valuable (for instance, I could incorporate it into my tooling), but also harder to obtain than you might realize.

3gwern2mo
I dunno if it's that hard. Comparisons are an old and very well-developed area of statistics, if only for use in tournaments, and you can find a ton of papers and code for pairwise comparisons. I have some & an R utility in a similar spirit on my Resorter page [https://www.gwern.net/Resorter]. Compared (ahem) to many problems, it's pretty easy to get started with some Elo or Bradley-Terry-esque system and then work on nailing down your ordinal rankings into more cardinal stuff. This is something where the hard part is the UX/UI and tailoring to use-cases, and too much attention to the statistics may be wankery.
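For what it's worth, a minimal sketch (in R, not gwern's Resorter code) of the kind of starting point he describes, an Elo-style update over pairwise comparisons; the items and comparisons below are hypothetical:

## Toy Elo updater over "which do you prefer?" comparisons.
## comparisons: data.frame with character columns winner and loser.
elo <- function(comparisons, k = 32) {
  items   <- unique(c(comparisons$winner, comparisons$loser))
  ratings <- setNames(rep(1000, length(items)), items)
  for (i in seq_len(nrow(comparisons))) {
    w <- comparisons$winner[i]
    l <- comparisons$loser[i]
    expected_w <- 1 / (1 + 10^((ratings[l] - ratings[w]) / 400))
    ratings[w] <- ratings[w] + k * (1 - expected_w)
    ratings[l] <- ratings[l] - k * (1 - expected_w)
  }
  sort(ratings, decreasing = TRUE)
}

## Hypothetical respondent comparing three interventions pairwise.
elo(data.frame(winner = c("bednets", "bednets", "cash transfers"),
               loser  = c("cash transfers", "vitamin A", "vitamin A"),
               stringsAsFactors = FALSE))

The ordinal rankings that fall out of something like this are the easy part; turning them into the cardinal utility functions the post is after is where the extra work comes in.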
Forecasting Newsletter: February 2022

Well, until recently you sought to be and were in fact very anonymous (e.g., it's easier to claim to be a uni professor than to actually be one). 

Given that you were anonymous, you wouldn't have been exposed to enormous legal trouble.

You advertised that you would pay in 24h, but you in fact took more than that, and after doing so paid me from an FTX account (so you are not, e.g., holding your users' funds but instead, I presume, doing some smart, minimally risky, yet still very opaque stuff with users' funds. I'm ok with this, but would prefer tr... (read more)
