All of NunoSempere's Comments + Replies

As a personal anecdote, I was spending a few hundred bucks per month last year and now I'm spending a few thousand, so the cost I'm paying is definitely rising :)

This feels so so ugly in a good way, congrats on finding an unexplored corner in the space of possible solutions

deliberately, consciously

I think the source you mention is talking about people deceiving themselves

stomach-churning thing of all is that CFAR organized a summer camp for kids where, according to one person who was involved, things were even worse than at CFAR itself

Idk man I think this summary is a few shades more alarming than the post you are taking as evidence.

2
Yarrow Bouchard 🔸
The Facebook comment by a CFAR co-founder that you quoted in that post says (emphasis added): The "usually consciously" is key. In that post, you said you were "a bit freaked out" about an aspect of the kids’ summer camp that was run by CFAR. You also said that the dynamics at the summer camp were "at times a bit more dysfunctional than" CFAR. What’s the disconnect between what you wrote and my summary? "At times a bit more dysfunctional than CFAR" sounds to me like you were saying the camp was worse than CFAR. If that’s not what you were trying to say, what were you trying to say?
2
Tee
Thank you! It's that feel in development we want to have

How should one think about this in the context of, like, IAPS existing?

4
kierangreig🔸
Good question. I think this recent comment addresses it. 

Omg the sign-up flow for this is comically bad XD; still, it only took me 9 mins.

5
Jakub Stencel
You mean the European Union's page? Well, probably nothing has made the EU more talked about, not even decades of peace and cooperation, than their ability to affect the UX of the average internet user. They need to live up to the expectations of their brand.

I think doing it successfully would take too large a chunk of my time, but I was considering it in case Sentinel fails

Some questions which feel alive for me:

  1. Should we expect risks to come from categories and places we can see coming, or from places we would not have anticipated beforehand? What's the proportion of black swans among the largest risks?
  2. How do we incorporate a term for doing good in a way that helps us do more good in the future? Companies can sell stock, whereas nonprofits can't; the more Elon Musk does, the more money he makes, even while pursuing some notion of the good, whereas the more a philanthropist gives away, the less he has. This seems like a strategic disadvantage. This is more of an operational decision than a research question, though.

What can I say, I really like your work and I wish it were more widely known, which would mean that you'd get more resources to continue doing it.

Here is a reading list I made for a new hire at Sentinel. I think it does a good job of capturing the promise of and need for forecasting, as well as its limitations, which I've found over the last few years. Suggestions of items to add are welcome.

From our writings:

Fundaments of Probability Theory

... (read more)

<https://forum.effectivealtruism.org/posts/4DeWPdPeBmJsEGJJn/interview-with-a-drone-expert-on-the-future-of-ai-warfare>

If 3H is false but we act urgently, this false positive is far less bad, as we will have many years (maybe millions) later in which to invest resources for the real hinge of history.

 

But you lose the compounding, particularly if later generations make the same calculus, and so you can't implement something like a Patient Philanthropy Fund. https://www.founderspledge.com/programs/patient-philanthropy-fund
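
To make the compounding point concrete, here is a toy calculation; the 5% real return and 100-year horizon are my own illustrative assumptions, not figures from the Patient Philanthropy Fund:

```python
# Toy illustration of patient-philanthropy compounding.
# The return rate and horizon are assumptions for illustration only.
initial_donation = 1_000_000   # dollars available today
real_return = 0.05             # assumed annual real return on invested funds
years = 100                    # assumed investment horizon

invested_value = initial_donation * (1 + real_return) ** years
multiplier = invested_value / initial_donation

print(f"Spend now:         ${initial_donation:,.0f}")
print(f"Invest {years} years: ${invested_value:,.0f}  (~{multiplier:.0f}x)")
# Waiting wins unless the cost-effectiveness of giving opportunities
# declines faster than invested money compounds.
```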

4
Ozzie Gooen
This feels highly targeted :)  Noted, though! I find it quite difficult to make good technical progress, manage the nonprofit basics, and do marketing/outreach, with a tiny team. (Mainly just me right now). But would like to improve. 

Distribution rules everything around me

 

First time founders are obsessed with product. Second time founders are obsessed with distribution.

 

I see people in and around EA building tooling for forecasting, epistemics, starting projects, etc. They often neglect distribution. This means that they will probably fail, because they will not get enough users to justify the effort that went into their existence.

 

Some solutions for EAs:

  • Build a distribution pipeline for your work. Have a mailing list on Substack. Have a Twitter account. This means that
... (read more)
3
Linch
Link appears to be broken.
1
Mikolaj Kniejski
Can you share any of those tools?

At least an equal level of data efficiency

...

This is the only kind of AI system that could plausibly automate all human labour

Your bar is too high; you can automate all human labour with less data efficiency.

-2
Davidmanheim
AI will hunt down the last remaining human, and with his last dying breath, humanity will end - not with a bang, but with a "you don't really count as AGI"
-1
Yarrow Bouchard 🔸
This apparently isn’t true for autonomous driving[1] and it’s probably even less true in a lot of other domains. If an AI system can’t respond well to novelty, it can’t function in the world because novelty occurs all the time. For example, how can AI automate the labour of scientists, philosophers, and journalists if it can’t understand novel ideas? 1. ^ Edited to add on October 20, 2025 at 12:23pm Eastern: Don’t take my word for it. Andrej Karpathy, an AI researcher formerly at OpenAI who led Tesla’s autonomous driving AI from 2017 to 2022, recently said on a podcast that he doesn’t think fully autonomous driving is nearly solved yet:

No, I think on that post I'm saying something that is more like "what if we were all much more capable", which seems tamer.

I had reason to come back to this comment. Rereading it, I don't think I'm exactly wrong, but I'm not paying enough face, enough respect to the challenges of running an organization, and so the bar that I am setting is in some sense inhuman. These days if I wanted to give similar feedback I would do so in private, and I would make sure it is understood to come from a place of appreciation.

2
Mo Putera
re: the inhuman bar, is that also a reassessment of your views on formidability? 

You are underrating the geographical closeness of China and Taiwan, and overrating the cost of shipping military materiel continuously to a contested area. 

2
Charlie_Guthmann
Do you have any new thoughts on the probabilities/timelines of when he is going to invoke the insurrection act? 

Executive summary for this week's global risks roundup

Top items:

  • Geopolitics: Russian jets entered Estonia’s airspace. An agreement between Pakistan and Saudi Arabia could bring Saudi Arabia under Pakistan’s nuclear umbrella.
  • US Politics: A US comedian’s show was suspended after the FCC chair exerted pressure on his network and on companies that own local TV stations. The Trump administration plans to announce actions targeting left-wing groups.
  • Tech and AI: Open-source AIs were used to design variants of a simple virus genome, eliminating the need for
... (read more)
2
Tristan W
Thanks for the appreciation :) 

I followed up the classic 2019 EA Forum post Aligning Recommender Systems as Cause Area with a podcast with Ivan Vendrov here: https://x.com/NunoSempere/status/1965820448123629986 

Here is an endpoint that takes a Google Doc and turns it into a markdown file, including the comments. https://docs.nunosempere.com. Useful for automation, e.g., I downloaded my browser history, extracted all Google Docs, summarized them, and asked for a summary & blindspots.
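
A minimal sketch of how one might call such an endpoint from a script; the query-parameter name and plain-markdown response below are assumptions for illustration, not documented behaviour of docs.nunosempere.com, so check the site for the actual interface:

```python
# Hypothetical usage sketch; the "url" parameter and plain-text markdown
# response are assumptions, not the endpoint's documented interface.
import requests

def gdoc_to_markdown(doc_url: str) -> str:
    """Fetch a Google Doc (including comments) as markdown via the converter."""
    response = requests.get("https://docs.nunosempere.com", params={"url": doc_url})
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    markdown = gdoc_to_markdown("https://docs.google.com/document/d/EXAMPLE_DOC_ID")
    print(markdown[:500])  # preview the first few hundred characters
```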

2
Aaron Bergman
This is coming in handy, thanks!! Who would win, a $3T company or one guy on the EA Forum? (answer: the latter)
1
EffectiveAdvocate🔸
Can you share how you have set this up? 
4
MichaelDickens
This may end up solving an upcoming problem of mine in which I wrote an org-mode doc, converted it to a Google Doc, made some changes, and might need to convert it to Markdown to publish it.

You might also enjoy https://forum.effectivealtruism.org/s/AbrRsXM2PrCrPShuZ and  https://github.com/NunoSempere/SoGive-CSER-evaluation-public

Here is the executive summary and a few sections of this week's brief on global risks, by my team @ Sentinel

  • Geopolitics: Trump and Putin met in Alaska to discuss the Ukraine war. Forecasters’ estimate of the chance of a ceasefire by October dropped from 27% pre-summit to 9%.
  • Biorisks: The chikungunya virus continues to spread, including in France and the UK.
  • Tech and AI: Meta’s policies explicitly allowed its AI chatbots to “engage a child in conversations that are romantic or sensual.”
  • And more: Three soldiers were killed and four others injured in a d
... (read more)
2
Lukas_Gloor
France has locally acquired cases (so the mosquito already lives there) whereas the UK cases are all linked to travel, I think.

Instead of letting the social fabric collapse, everyone suddenly turns their ire on one person, the victim. Maybe this person is a foreigner, or a contrarian, or just ugly. The transition from individuals to a mob reaches a crescendo. The mob, with one will, murders the victim (or maybe just exiles them).

Maybe this person is a contrarian, but Girard also argues that the scapegoat effect is greater if the person is like any other member of the public, because then it scares the participants more: "it could have been me"

4
Jackson Wagner
IIRC Girard posits kind of a confusing multi-step process that involves something like:

  • People become more and more similar due to mimetic desire, competition, imitation, etc. Ironically, as people become more similar, they become more divided and start more fights (since they increasingly want the same things, I guess). So, tension increases and the situation threatens to break out into some kind of violent anarchy.
  • In order to forestall a messy civil war, people instead fixate on a scapegoat (per Ben's quote above). Everyone exaggerates the different-ness of the scapegoat and gangs up against them, which helps the community feel nice and unified again.

So the scapegoat is indeed different in some way (different religion, ethnicity, political faction, whatever). And if you ask anybody at the time, it'll be the massive #1 culture-war issue that the scapegoated group are all heathens who butter their bread with the butter side down, while we righteous upstanding citizens butter our bread with the butter side up. But objectively, the actual difference between the two groups is very small, and indeed the scapegoat process is perhaps more effective the smaller the actual objective difference is. (One is reminded of some of Stalin's purges, where true believers in the cause of communism were sent to the gulag for what strike us today as minor doctrinal differences. Or the long history of bitter religious schisms over nigh-incomprehensible theological disputes.)
3
Linch
Isn't Girard's claim just pretty empirically testable? AFAIK like at no point in any of human history has >50% of the human population been Christians, so we can just look at non-Christian societies and see whether this sacrificial model (as practiced by eg the Aztecs, or at Salem) looked more like the exception, or like the rule. Like what's the historicity of his claims?  (My impression fwiw is that it's obviously false if you ask anthropologists or Roman scholars or China scholars or Egypt scholars, but I'm not an expert here)
2
NunoSempere
Banger article otherwise tho

Just addressing part of your comment, I think additional non-death badness seems similar for Sudan and Ukraine.

I think if you focus on conflicts it's just smaller than other conflicts:

  • Russia/Ukraine war: ~150-250K killed on the Russian side, 50K to 100K killed on the Ukrainian side; source
  • Sudanese civil war: "Likely significantly more than 150,000 total killed[44] More than 700,000 children with acute malnutrition[45] 8,856,313 internally displaced 3,506,383 refugees[46]" per Wikipedia. Also at risk of famine

Here is a related tweet.

I agree that with the risk of famine losses rise to, potentially, 2.1M. But the whole population of Gaza being killed seems very unlike... (read more)

2
NunoSempere
FWIW my team, Sentinel, is indeed tracking Gaza in our weekly briefs.
7
Ben Stevenson
  It's odd to say this when you don't give a comparable casualty figure for Gaza, which would be 77,000 to 109,000 for May 2025, and when you estimate that, with a famine, casualties could reach 2,100,000.
30
huw

(I might delete this post later if it derails the thread, I’m not sure how useful or constructive it is—please let me know!)

I think this inadvertently highlights why an ‘EA’ (utilitarian) framing might downplay the badness of the conflict. I think the badness falls into five buckets (opinions follow; and not claiming that you do or don’t agree with these):

  1. The direct number of deaths and ongoing suffering is high.
  2. The systematic bombing and killing of civilians, coupled with the eradicationist rhetoric of the Israeli government, almost certainly constitut
... (read more)

I think the Sudanese civil war is a relevant comparison. I'd take the typical EA point to be something like:

"If Western diplomats spent as much time as they've (ineffectively) spent trying to avert famine / improve aid in Gaza as ending the war in Sudan, it seems like there would have been much more progress -- fewer dead, starving, maimed, irreparably emotionally harmed."

Or one level deeper, there are probably conflicts that are not yet happening that we could decrease the likelihood of and that are probably even more neglected. I'm thinking of Ethiopia a... (read more)

FWIW the latest estimate I heard from Gaza was 100,000 dead (many of whom haven't been reported by Hamas) (sorry for the paywall), which is on the same order of magnitude - and as opposed to the Ukraine war, most of them aren't combatants. It's up to you what to make of that.

My team at Sentinel produces a weekly brief on global risks. Here are the executive summary and forecasts for this week:

Key items this week are:

  • Economy and trade: Trump announced new tariffs. US GDP growth is driven by AI capex spending while labor slumps. In response to an unfavorable labor report, Trump fired the commissioner of the US Bureau of Labor Statistics.
  • Geopolitics: UK, France and Canada announced their intention to recognize a Palestinian state. Hunger is widespread in Gaza and might meet the technical definition of famine later this year, whic
... (read more)
4
ElliotTep
For anyone wondering whether to subscribe, I’ve been subscribed for a month and it’s an excellent newsletter. Once a week email covering things happening in the news with forecasts, reasoning, and aiming to cover what actually matters. It’s great. 

We're looking into this as part of Sentinel. I agree it looks unlikely. Manifold has it at 3%, though I think it's even lower than that because of the low return over ~2 years https://manifold.markets/embed/NuñoSempere/will-anthropic-go-bankrupt-or-be-di
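
As a rough illustration of the low-return point (my own numbers, not Manifold's actual fee or return structure): correcting an overpriced low-probability market means locking up capital for the ~2 years until resolution at a return below what traders could get elsewhere, so the price tends not to get pushed down further:

```python
# Back-of-the-envelope arithmetic; the "true" probability and horizon are
# illustrative assumptions, not Manifold data.
market_prob = 0.03        # market price for "Anthropic goes bankrupt / is dissolved"
true_prob = 0.01          # hypothetical lower probability a skeptical trader believes
years_to_resolution = 2

cost_per_no_share = 1 - market_prob          # pay 0.97 now
expected_payout = 1 - true_prob              # expect 0.99 back at resolution
total_return = expected_payout / cost_per_no_share - 1
annualized = (1 + total_return) ** (1 / years_to_resolution) - 1

print(f"Expected total return from betting NO: {total_return:.2%}")  # ~2.1%
print(f"Annualized:                            {annualized:.2%}")    # ~1.0%
# With returns this low, there is little incentive to trade the price down,
# so the 3% market price likely overstates the underlying probability.
```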

2
Hauke Hillebrandt
Also, in some secondary markets Anthropic is trading at record highs (e.g. https://notice.co/c/anthropic )

There is an additional set of beliefs which EAs share, which is something like: EA institutions are a good means to pursue those goals, their leaders worth deferring to, their forum worth using, the brand worth expanding, the community worth recruiting people into. I think this is important; otherwise it risks stolen valor, e.g., counting Bill Gates as "an EA", and it risks eliding the ways in which the community is very dysfunctional.

I personally had a small crisis when I came to believe that the Center for Effective Altruism wasn't cost-effective. I wr... (read more)

I don't get that sense. Compared to most other movements I feel that people involved in EA are less likely to think those things (blind deferral/spread the brand/trust institutions)

people reporting very low P(doom) numbers just don't understand the AI alignment problem

... or disagree with the frame.

Another selection effect is that people who are more interested in xrisk will tend to differentially participate in these forecasts.

https://forum.effectivealtruism.org/posts/EG9xDM8YRz4JN4wMN/samotsvety-s-ai-risk-forecasts might be of interest.

Some related thoughts: https://nunosempere.com/blog/2024/12/04/grain-of-truth-memo/

Also related: pivoting is fairly common among VC-backed companies (but much less common among nonprofits, maybe since the relationship with funders is capacity-constrained and predicated on a particular theory of impact)

I see that you are getting some downvotes, and while I disagree with this comment in particular I'm glad that you are there in the arena strongly pushing your vision of the good :)

My team has published estimates for precursors of xrisk and has weekly updates that usually contain forecasts. Could be of interest, not sure if that's the kind of thing you are asking about.

1
different Sam
It's close to that - as risks go up and down, what adjacent possible new innovations could make those risks go down (or up!) further? What do you see in your updates that could move solutions between "possible" and "done"? Are there any public assessments of what can be better in the world? e.g. your asteroid estimate is 0.02%/decade, but NASA DART shows that when one shows up we can redirect it, if we've paid enough attention to see the asteroid coming in enough time (it's a bit more complicated than that, but not much). Humanity has gone from asteroids being an inevitable (~100%) extinction event over a long enough time horizon to being largely "solved" in scientific terms (i.e. 0% if we systematically look and have a spare DART mission in a cupboard somewhere, which has an engineering cost of $x). Does anyone look at that transition for more risks? Volcanoes or aliens aren't in that category, but AMR etc. appears on some risk lists (not in your scope). Is there another risk that was catastrophic last decade which can become mitigated next decade? So, in your team's case, does anyone take your forecasts/updates, put them next to tech/innovation/change assessments, and ask "someone couldn't have done a DART mission five years ago, but now this thing is possible..."?

Phil Trammell has some good work on this topic, here and here.

Therefore, a patient philanthropist can typically do more good (from her perspective) by investing for the sake of future spending than by spending immediately. She should only begin spending under two circumstances. First, she should spend once she, and any other patient funders in an area she wishes to support, have grown wealthy enough relative to the area’s impatient funders that even the impatient are spending at less than the patient-optimal rate for the collective (patient plus impatien

... (read more)
1
BejeweledFisherman
Interesting! Thank you very much for the references. 
  1. Look into logarithmic utility of money; there is some rich literature here (see the sketch after this list)
  2. For an altruistic actor, money becomes more linear again, but I don't have a quick reference here.
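
To illustrate point 1, here is a small numeric sketch (my own numbers, not drawn from that literature) of how concave log utility makes actuarially fair insurance attractive and actuarially fair lotteries unattractive:

```python
# Illustrative numbers only; both the insurance and the lottery are actuarially fair.
import math

wealth = 100_000

# Insurance: 1% chance of a $50,000 loss vs. paying a $500 fair premium.
u_uninsured = 0.99 * math.log(wealth) + 0.01 * math.log(wealth - 50_000)
u_insured = math.log(wealth - 500)
print(f"Expected log utility, uninsured: {u_uninsured:.5f}")
print(f"Expected log utility, insured:   {u_insured:.5f}")   # higher: insurance is worth buying

# Lottery: a $100 ticket paying $1,000,000 with probability 1/10,000 (fair odds).
u_no_ticket = math.log(wealth)
u_ticket = (1 - 1e-4) * math.log(wealth - 100) + 1e-4 * math.log(wealth - 100 + 1_000_000)
print(f"Expected log utility, no ticket: {u_no_ticket:.5f}")
print(f"Expected log utility, ticket:    {u_ticket:.5f}")    # lower: a fair lottery reduces expected log utility
```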
1
Clara Torres Latorre 🔸
1. Thank you for pointing out log utility, I am aware of this model (and also other utility functions). Any reasonable utility function is concave (diminishing returns), which can explain insurance to some extent but not lotteries. 2. I could imagine that, for an altruistic actor, altruistic utility becomes "more linear" if it's a linear combination of the utility functions of the recipients of help. This might be defensible, but it is not obvious to me unless that actor is utilitarian, at least in their altruistic actions.

Substack's value (or a blog + newsletter mailchimp/listmonk/buttondown/etc.) is also that the writer owns the mailing list, and so it's easy to disintermediate the platform.

I agree that emphasizing the virtues of federalism is good in general and also in this particular case :)

Agree that the framing is imperfect but as you say your point is a tangent

It's a great question. I looked into it a few weeks ago:

In the current political praxis of the United States, there are various workarounds that the government can pursue in order to take illegal or unconstitutional actions. Examples of such actions that previous administrations have taken include keeping prisoners in Guantanamo Bay in order to delay habeas corpus petitions, conditioning highway funding to force state compliance, and torturing the interstate commerce clause of the Constitution to regulate local matters. The point is that the US has some co

... (read more)
1
Holly Elmore ⏸️ 🔸
That has nothing to do with this? The federal government has a right to pass laws at the federal level to preempt state-level laws. There’s a procedural problem with this one (it violates the Byrd Rule), but we are asking people to tell their Senators they oppose it because of the content.
5
David Mathers🔸
A bit of a tangent in the current context, but I have slight issues with your framing here: mechanisms that prevent the federal government from telling the state governments what to do are not necessarily mechanisms that protect individual citizens, although they could be. But equally, if the federal government is more inclined to protect the rights of individual citizens than the state government is, then they are the opposite. And sometimes framing it in terms of individual rights is just the wrong way to think about it: i.e. if the federal government wants some economic regulation and the state government doesn't, and the regulation has complex costs and benefits that work out well for some citizens and badly for others, then "is it the feds or the state government protecting citizens' rights" might not be a particularly helpful framing. This isn't just abstract: historically in the South, it was often the feds who wanted to protect Black citizens and the state governments who wanted to avoid this under the banner of states' rights.