
Noah Birnbaum

Junior @ University of Chicago
691 karma · Pursuing an undergraduate degree

Bio


I am a rising junior at the University of Chicago (co-president of UChicago EA and founder of Rationality Group). I am mostly interested in philosophy (particularly metaethics, formal epistemology, and decision theory), economics, and entrepreneurship. 

I also have a Substack where I post about philosophy (ethics, epistemology, EA, and other stuff). Find it here: https://substack.com/@irrationalitycommunity?utm_source=user-menu. 

Reach out to me via email at dnbirnbaum@uchicago.edu.

How others can help me

If anyone has opportunities to do effective research in the philosophy space (or to apply philosophy to real-world problems and related fields), or any more entrepreneurial opportunities, I would love to hear about them. Feel free to DM me!

How I can help others

I can help with philosophy stuff (maybe?) and organizing school clubs (maybe?)

Comments
73

Interesting. A few thoughts:

Beyond strengthening the case for non-existential risks, if Sisyphus risk is substantial it also weakens arguments that place extreme weight on reducing existential risk at a specific time. Part of the case for the Time of Perils rests on our comparative advantage in reducing risk right now, and that advantage is diluted if civilization plausibly gets multiple runs.

One additional Sisyphean mechanism worth flagging is resource exhaustion: collapsing before reaching renewable resource self-sufficiency could permanently worsen later runs. This probably requires the setback to happen much later, or a large share of resources to have been consumed beforehand, but it's worth flagging.

A caveat on donation timing: even if post-AGI x-risk declines slowly, aligned AGI plausibly generates enormous resources, so standard patient-philanthropy arguments may still apply. And if we assume those resources are lost in a collapse, the same would likely apply to resources saved in advance.

Finally, the plausible setbacks all seem to hinge on something like the loss of knowledge. Other worries (e.g. a Butlerian backlash) tend to rely on path-dependent successes: historically contingent timing, unusually alignable models, or specific public perceptions that don't automatically replicate and seem hard to recover conditional on a setback. If those successes aren't mostly luck-based and the relevant knowledge survives, a post-setback society could plausibly re-instantiate the same mechanisms, making Sisyphus risk primarily an epistemic problem rather than, say, a governance problem.

I think the COVID case usefully illustrates a broader issue with how “EA/rationalist prediction success” narratives are often deployed.

That said, this is exactly why I’d like to see similar audits applied to other domains where prediction success is often asserted, but rarely with much nuance. In particular: crypto, prediction markets, LVT, and more recently GPT-3 / scaling-based AI progress. I wasn’t closely following these discussions at the time, so I’m genuinely uncertain about (i) what was actually claimed ex ante, (ii) how specific those claims were, and (iii) how distinctive they were relative to non-EA communities.

This matters to me for two reasons.

First, many of these claims are invoked rhetorically rather than analytically. “EAs predicted X” is often treated as a unitary credential, when in reality predictive success varies a lot by domain, level of abstraction, and comparison class. Without disaggregation, it’s hard to tell whether we’re looking at genuine epistemic advantage, selective memory, or post-hoc narrative construction.

Second, these track-record arguments are sometimes used—explicitly or implicitly—to bolster the case for concern about AI risks. If the evidential support here rests on past forecasting success, then the strength of that support depends on how well those earlier cases actually hold up under scrutiny. If the success was mostly at the level of identifying broad structural risks (e.g. incentives, tail risks, coordination failures), that’s a very different kind of evidence than being right about timelines, concrete outcomes, or specific mechanisms.


I can’t join this Sunday (finals season whoo!), but this is a really good idea. I’d love to see more initiatives like this to encourage writing on the Forum—especially during themed weeks.

Also, I’m always down to do (probably remote) co-working sessions with people who want to write Forum posts.

Strongly agreed. Organizing a group is probably one of the best things one could do for both their present and future impact. 

I (and many others) would be happy to get on a call with, or otherwise help, anyone willing to take over (I have a bunch of experience from organizing the UChicago group)! DM me here to take me up on that.

Many questions in this space rely on assumptions about whether insect lives are positive or negative, though I haven't seen this discussed much explicitly (mostly just in conversations). Is there not much more that can be done to learn about this beyond what has already been done?

If so, expected-value estimates for insect welfare initiatives aimed at increasing or decreasing insect populations would need to be discounted by 1-2 orders of magnitude to account for massive sign uncertainty (say, only a 5% edge toward their lives being positive), which is weird. It also wouldn't be very robust, in that whether we do something good or bad hinges on a quite fragile probability (especially a problem for many non-hedonistic utilitarian views).
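
To make the rough arithmetic behind the 1-2 OOM discount explicit, here is a minimal sketch assuming a symmetric payoff of magnitude M and a hypothetical 55% (or 50.5%) credence that the intervention's sign is right:

```latex
% Expected value under sign uncertainty: credence p that the effect is positive,
% symmetric payoff of magnitude M (both numbers are hypothetical).
E[V] = p \cdot (+M) + (1 - p) \cdot (-M) = (2p - 1)\,M
% p = 0.55   =>  E[V] = 0.10 M   (roughly one OOM below the sign-certain case)
% p = 0.505  =>  E[V] = 0.01 M   (roughly two OOMs below)
```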

Do you have further takes here, Vasco? 

I would look out for Rethink Priorities' digital consciousness model (and other work they are doing here), which should be coming out soon-ish. I don't think they would call it definitive in any sense, but it could be helpful here.

I think a major way this could be wrong is if you think we could get lots of digital minds within a few decades and that early research/public engagement can have an outsized impact on shaping the conversation. That would make digital minds way more important, I think.

I'm also generally pretty interested in people doing more digital minds cross-cause prio (I'm working on a piece now)! 

  • Re the new 2024 Rethink Cause Prio survey: "The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views. [“Defer to experts”]" 3% strongly agree, 18% somewhat agree, 35% somewhat disagree, 15% strongly disagree.
    • This seems pretty bad to me, especially for a group that frames itself as valuing intellectual humility and as recognizing that intellectual movements (the base rate) are so often wrong.
    • (Charitable interpretation) It's also just the case that EAs tend to hold lots of contrarian views because they're trying to maximize the expected value of information (often justified with something like: "usually contrarians are wrong, but when they are right, they provide more valuable information than the average person who just agrees").
      • If this is the case, though, I fear that some of us are confusing the norm of being contrarian for instrumental reasons with being contrarian for "being correct" reasons.

Tho lmk if you disagree. 

Some questions that might be cruxy and important for money allocation: 

Because there is some evidence that superforecaster aggregation might underperform on AI capabilities questions, how should epistemic weight be distributed between generalist forecasters, domain experts, and algorithmic prediction models? What evidence exists, or could be gathered, about their relative track records? (I sketch below, after these questions, what explicit weighting could look like.)

Are there better ways to do AI safety cost-effectiveness analysis (CEA)? What are they?

Is there productive work to be done in inter-cause comparison among new potential cause areas (e.g. digital minds, space governance)? What types of assumptions do these rely on? I ask because it seems like people typically go into these fields because "woah, those numbers are really big," but that sort of reasoning applies to lots of those fields and doesn't tell you very much about resource distribution.

What are the reputational effects for EA (for people inside and outside the movement) of going (more) all in on certain causes and then being wrong (e.g. if AI is and continues to be a bubble)? Should this change how much EA should go in on things? Under what assumptions?
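
On the first question, here is a minimal sketch of what explicitly "distributing epistemic weight" could look like, using a weighted log-odds pool across the three source types; all probabilities and weights are hypothetical placeholders, not recommendations:

```python
import numpy as np

# Hypothetical probabilities for some AI capability milestone, one per source type.
# These numbers are placeholders, not actual forecasts.
probs = np.array([
    0.20,  # generalist superforecasters
    0.45,  # domain experts
    0.35,  # algorithmic prediction model
])

# Hypothetical epistemic weights (summing to 1); in practice these would ideally be
# derived from relative track records, which is exactly the open question.
weights = np.array([0.3, 0.4, 0.3])

# Weighted log-odds (geometric-mean-of-odds) pool.
log_odds = np.log(probs / (1 - probs))
pooled = 1 / (1 + np.exp(-weights @ log_odds))

print(f"Pooled probability: {pooled:.3f}")
```

The evidence question then becomes: what data would justify moving these weights away from uniform, and how much?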

This is great, and people should do this for more cause areas! 
