All of Davidmanheim's Comments + Replies

Strong agree that, absent new approaches, the tailwind isn't enough - but it's unclear that pretraining scaling doesn't have further to go, and current approaches using synthetic data and RL training to enhance one-shot performance seem to have significant room left for improvement.

I also don't know how much room there is left until we hit genius-level AGI or beyond, and at that point, even if we hit a wall, more scaling isn't required, as the timeline basically ends.

the extinction scenario that Eliezer Yudkowsky has described. His scenario depends on the premise that AI systems could quickly develop advanced molecular nanotechnology capable of matching or even surpassing the sophistication of biological systems.

 

But that's not the claim he makes!

To quote:

The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. 

Mostly agree. I've been involved in local orgs a bit more than most people in EA: I grew up in a house where my parents were often serving terms on different synagogue and school boards, and my wife has continued her family's similar tradition. So I strongly agree that passionate alignment changes things - but even that rarely leads to boards setting the strategic direction.

I think a large part of this is that strategy is hard, as you note, and it's very high context for orgs. I still wonder about who is best placed to track priority drift, and about ho... (read more)

My board isn't the reason for the lack of clarity - and it certainly is my job to set the direction. I don't think any of them are particularly dissatisfied with the way I've set the org's agenda. But my conclusion is that I disagree somewhat with Holden's post that partly guided me in the past couple years, in that it's more situational, and there are additional useful roles for the board.

I'd find a breakdown informative, since the distribution both between different frontier firms and between safety and not seems really critical, at least in my view of the net impacts of a program. (Of course, none of this tells us counterfactual impact, which might be moving people on net either way.)

Biggest unanswered but I think critical question: 

What proportion are working for frontier labs (not "for profit" generally, but the ones creating the risks), in which roles (how many are in capabilities work now?), and at which labs?

3
Chris Clay🔸
Absolutely! My estimate in the summer was 4.5% (around 30 fellows), but this was excluding people at frontier labs who were explicitly on the safety teams. If specifics are important I'd be more than happy to revisit!

I don't think it's that much of a sacrifice.

I don't understand how this is an argument applicable to anyone other than yourself; other people clearly feel differently.

I also think that for many, the only difference in practice would be slightly lower savings for retirement.

If that is something they care or worry about, it's a difference - adding the word "only" doesn't change that!

4
Lorenzo Buonanno🔸
Yes, this post is very much "why I donate" and definitely not "why everyone should donate". Most people are also not atheists, are much poorer (see gwwc.org/hrai), value their wellbeing hundreds of times more than the wellbeing of others, and don't view spending money as voting on how the global economy allocates its resources, so all the other paragraphs in this post would also not apply. In that paragraph I mention what I perceive the effect of extra spending vs extra donating to be on others, because that informs why I personally donate; I could have phrased it better.

I've run very successful group brainstorming sessions with experts just in order to require them to actually think about a topic enough to realize what seems obvious to me. Getting people to talk through what the next decade of AI progress will look like didn't make them experts, or even get them to the basic level I could have presented in a 15-minute talk - but it gives me a chance to push them beyond their cached thoughts, without them rejecting views they see as extreme, since they are the ones thinking them!

But EA should scale, because its ideas are good, and this leaves it in a much more tricky situation.

I'll just note that when the original conversation started, I addressed this in a few parts.

To summarize, I think that yes, EA should be enormous, but it should not be a global community, and it needs to grapple with how the current community works, and figure out how to avoid ideological conformity.

There's also an important question about which EA causes are differentially more or less likely to be funded. If you think Pause AI is good, Anthropic's IPO probably won't help. If you think mechanistic interpretability is valuable, it might help to fund more training in relevant areas, but you should expect an influx of funding soon. And if you think animal welfare is important, funding new high-risk startups that can take advantage of a wave of funding in a year may be an especially promising bet.

1
GV 🔸
Yes. Another question is the geographical direction of the (potential) giving. I suppose we should expect a strong focus on US-centric actions, which might be very suboptimal. Surely relying on funds will help coordinate intelligently. Therefore, one approach to preparing for the influx of many new donors could be to expand the EA Funds teams to facilitate grantmaking (afaik, they're quite overworked anyway).

I still don't think that works out, given energy cost of transmission and distance.

This could either be a new resource or an extension of an existing one. I expect that improving an existing resource would be faster and require lower maintenance.

My suggestion would be to improve the AI Governance section of aisafety.info


cc: @melissasamworth / @Søren Elverlin / @plex 

...but interstellar communication is incredibly unlikely to succeed - they are far away, we don't know in which direction, and the required energy is incredibly large.
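To put a rough number on the energy point, here is a minimal sketch, assuming an isotropic 1 GW transmitter (both numbers are hypothetical choices for illustration, not from the comment); a tightly aimed beam would do far better, but only if we knew where to point it.

```python
import math

# Inverse-square falloff of an isotropic signal sent toward Andromeda.
LY_M = 9.461e15                       # metres per light-year
d = 2.5e6 * LY_M                      # ~2.5 million light-years to Andromeda
P_TX = 1e9                            # assumed transmitter power: 1 GW (hypothetical)

flux = P_TX / (4 * math.pi * d**2)    # W/m^2 arriving at the target
print(f"flux at Andromeda: {flux:.1e} W/m^2")                # ~1e-37 W/m^2

# Even a collector the size of a 1 AU disc ("Dyson-sphere scale")
# intercepts only a tiny amount of power.
AU_M = 1.496e11
collected = flux * math.pi * AU_M**2
print(f"power collected by a 1 AU disc: {collected:.1e} W")  # ~1e-14 W
```

Under these assumptions the received power is vanishingly small, which is the sense in which the required transmit energy becomes enormous.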

3
turchin
That is why we can target Andromeda - the distance is large enough that they haven't arrived yet, and we can focus on many stars simultaneously and hope that aliens have very large receivers - maybe Dyson-sphere-size. Also, the main point is to affect the local superintelligence's expected utility.

To possibly strengthen the argument made, I'll point out that moving already-effective money to a more effective cause or donation has a smaller counterfactual impact, because they are already looking at the question and could easily come to the conclusion on their own. Moving money in a "normie" foundation, on the other hand, can have knock-on effects of getting them to think more about impact at all, and change their trajectory.

We're also more likely to be incorrect, and to influence money in the wrong direction, if we're advising people who already take an effectiveness-based approach! I think full-time, specialized impact evaluators are the best resource we have to improve our answers to these questions over time, but they're fallible people working on complicated questions, and they certainly occasionally come to less-optimal decisions than other smart people working from the same principles and premises. By contrast, a "normie" foundation landing on a more cost-effective answer than the impact-focused evaluators is probably rare, as it would be something of an accident.

I meant that I don't think it's obvious that most people in EA working on this would agree. 

I do think it's obvious that most people overall would agree, though most would either not agree, or be unsure, that a simulation matters at all. It's even very unclear how to count person-experiences overall, as Johnston's Personite paper argues (https://www.jstor.org/stable/26631215), and I'll also point to the general double-counting problem (https://link.springer.com/article/10.1007/s11098-020-01428-9) and suggest that it could apply.

2
AnonymousTurtle
Interesting. Could you point to anyone in EA who does not agree with the additive view and works in this field?

I need to write a far longer response to that paper, but I'll briefly respond (and flag to @Christian Tarsney) that I think my biggest crux is that I think they picked weak objections to causal domain restriction, and that far better objections apply. Secondarily, for axiological weights, the response about egalitarian views leading to rejection of different axiological weights seems to be begging the question, and the next part ignores the fact that any acceptable response to causal domain restriction also addresses the issue of large background populations.

I recently discussed this on Twitter with @Jessica_Taylor, and think that there's a weird claim involved that collapses into either believing that distance changes moral importance, or that thicker wires in a computer increase its moral weight. (Similar to the cutting-dominos-in-half example in that post, or the thicker pencil, but less contrived.) Alternatively, it confuses the question by claiming that identical beings at time t_0 are morally different because they differ at time t_n - which is a completely different claim!

I think the many worlds interp... (read more)

I don't think that's at all obvious, though it could be true.

2
AnonymousTurtle
I agree with you, as do most people outside of EA, but I believe almost everyone in EA working on these topics disagrees

That's a fair point, and I agree that it leads to a very different universe.

At that point, however (assuming we embrace moral realism and an absolute moral value of some non-subjective definition of qualia, which seems incoherent), it also seems to lead to a functionally unsolvable coordination problem for maximization across galaxies.

a PhD applicant could ask their prospective supervisor’s current grad students what it’s like to work with the supervisor. Yet, at least when I was applying to grad school, this was not very common. 


I often advise doing this, albeit slightly differently - talk to their recently graduated former PhD students, who have a better perspective on what the process led to and how valuable it was in retrospect. I think similar advice plausibly applies in corresponding cases - talk to people who used to work somewhere, instead of current employees.

if the value of welfare scales something-like-linearly


I think this is a critically underappreciated crux! Even accepting the other parts, it's far from obvious that the intuitive approach of scaling value linearly in the near term and locally is indefinitely correct far out of distribution; simulating the same wonderful experience a billion times certainly isn't a billion times greater than simulating it once.

1
tobycrisford 🔸
It sounds like MichaelDickens' reply is probably right, that we don't need to consider identical experiences in order for this argument to go through. But the question of whether identical copies of the same experience have any additional value is a really interesting one. I used to feel very confident that they have no value at all. I'm now a lot more uncertain, after realising that this view seems to be in tension with the many worlds interpretation of quantum mechanics: https://www.lesswrong.com/posts/bzSfwMmuexfyrGR6o/the-ethics-of-copying-conscious-states-and-the-many-worlds 
3
MichaelDickens
I disagree but I don't think this is really a crux. The ideal future could involve filling the universe with beings who have extremely good experiences compared to humans (and do not resemble humans at all) but their experiences are still very diverse. And, this is sort of an unanswered question about how qualia work, but my guess is that for combinatoric reasons, you could fill the accessible universe with (say) 10^40 beings who all have different experiences where the worst experience out of all of them is only a bit worse than the best.
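A minimal sketch of the combinatorial point, under the purely hypothetical assumption that an experience can be described by some number of independent binary features:

```python
import math

# If an experience is characterised by n independent binary features,
# there are 2**n distinct possible experiences.
n_features = 140                        # hypothetical number of features
print(2 ** n_features > 10 ** 40)       # True: 2^140 is about 1.4e42

# Only ~133 such features are needed to exceed 10^40 distinct experiences.
print(math.ceil(math.log2(10 ** 40)))   # 133
```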
2
AnonymousTurtle
  My sense is that most people in EA working on these topics disagree.
1
Matrice Jacobine🔸🏳️‍⚧️
I think there are pretty good reasons to expect any reasonable axiology to be additive.

You can't know with certainty, but any decision you make is based on some implicit guesses. This seems to be pretending that the uncertainty precludes doing introspection or analysis - as if making bets, as you put it, must be done blindly.

2
Yarrow Bouchard 🔸
No, irreducible uncertainty is not all-or-nothing. Obviously a person should do introspection and analysis when making important decisions. 

Strongly both agree and disagree - it's incredibly valuable to have savings, it should definitely be prioritized, and despite being smart, it's not a donation!

So if you choose to save instead of fulfilling your full pledge, I think that's a reasonable decision, though I'd certainly endorse trying to find other places to save money instead. But given that, don't claim it's charitable; say you're making a compromise. (Moral imperfection is normal and acceptable, if not inevitable. Trying to justify such compromises as actually being fully morally justified, in my view, is neither OK nor ever necessary.)

4
Benevolent_Rain
I like the idea of just accepting it as moral imperfection rather than rationalizing it as charity — thanks for challenging me! One benefit of framing it as imperfection is that it helps normalize moral imperfection, which might actually be net positive for the most dedicated altruists, since it could help prevent burnout or other mental strain. Still, I'm not completely decided. I'm unclear about cases where someone needs to use their runway:
A. They might have chosen not to build runway and instead donated effectively, and then later, when needing runway, received career transition funding from an effective donor.
B. Alternatively, they could have built runway and, when needing it, avoided submitting a funding request for career transition and instead used their own funds — probably more cost-effective overall, since it reduces admin costs for both the person and the grantmakers.

Yeah, now that I'm doing payroll donations I have not been recording the data. I guess it would be good to fill in the data, for EDIT: GWWC's records?

2
Vasco Grilo🔸
Hi David. Yes, I think GWWC would appreciate that! I guess you meant GWWC's records, not GiveWell's.

The way I managed this in the past was by having a separate bank account for charity and splitting my income when I was paid, then making donation decisions later - often at year end, or when there was a counterfactual match, etc.

Understood, and reasonable. The problem is that I'm uncomfortable with "the most good" as the goal anyway, as I explained a few years ago: https://forum.effectivealtruism.org/posts/f9NpDx65zY6Qk9ofe/doing-good-best-isn-t-the-ea-ideal

So moving from 'doing good better' to 'do the most good' seems explicitly worse on dimensions I care about, even if it performs better on approval.

I would be careful with this - it might be an improvement, but are we sure that optimizing short-term messaging success is the right way to promote ideas that are meant to be important long-term conceptual changes in how people approach life and charity?

Lots of other factors matter, and optimizing one dimension, especially using short term approval, implicitly minimizes other important dimensions of the message. Also, as a partial contrast to this point, see "You get about five words."

Thanks David.

I certainly agree that we should be careful to make sure that we don't over-optimise short-term appeal at the cost of other things that matter (e.g. long-term engagement, accuracy and fidelity of the message, etc.). I don't think we're calling for people to only consider this dimension: we explicitly say that we "think that people should assess particular cases on the basis of all the details relevant to the particular case in question."

That said, I think that there are many cases where those other dimensions won't, in fact, be diminished by s... (read more)

Strongly agree based on my experiences talking to political operatives, in case additional correlated small n anecdata is helpful.

"Without causing inflation" seems hard to support based on this study, given the short timeframe and large external effects which aren't being controlled for.

That said, it seems very plausible that the localized economic impact of more cash wouldn't drive large price changes if the economy is integrated with other regions; critical inputs such as grain prices are driven by global markets more than by local demand. And the surveyed markets shown are mostly for global goods.

You're right, they made the problem easier with geofencing, but the data from Waymo isn't ambiguous, and despite your previous investigations, it is now published: https://storage.googleapis.com/waymo-uploads/files/documents/safety/Safety%20Impact%20Crash%20Type%20Manuscript.pdf

This example makes it clear that the approach works to automate significant human labor, with some investment, without solving AGI.

1
Yarrow Bouchard 🔸
I'll have to look at that safety report later and see what the responses are to it. At a glance, this seems to be a bigger and more rigorous disclosure than what I've seen previously and Waymo has taken the extra step of publishing in a journal.

[Edit, added on October 20, 2025 at 12:40pm Eastern: There are probably going to be limitations with any safety data and we shouldn't expect perfection, nor should that get in the way of us lauding companies for being more open with their safety data. However, just one thing to think about: if autonomous vehicles are geofenced to safer areas but they're being compared to humans driving in all areas, ranging from the safest to the most dangerous, then this isn't a strict apples-to-apples comparison.]

However, I'm not ready to jump to any conclusions just yet because it was a similar report by Waymo (not published in a journal, however) that I paid someone with a PhD in a relevant field to help me analyze and, despite Waymo's report initially looking promising and interesting to me, that person's conclusion was that there was not enough data to actually make a determination one way or the other whether Waymo's autonomous vehicles were actually safer than the average human driver. I was coming at that report from the perspective of wanting it to show that Waymo's vehicles were safer than human drivers (although I didn't tell the person with the PhD that because I didn't want to bias them). I was disappointed that the result was inconclusive.

If it turns out Waymo's autonomous vehicles are indeed safer than the average human driver, I would celebrate that. Sadly, however, it would not really make me feel more than marginally more optimistic about the near-term prospects of autonomous vehicle technology for widespread commercialization.

The bigger problem for this overall argument about autonomous vehicles (that they show data efficiency or the ability to deal with novelty isn't important) is that safety is only one compon

For autonomous driving, current approaches which "can't deal with novelty" are already far safer than human drivers.

2
Yarrow Bouchard 🔸
Safety is only one component of overall driving competence. A parked car is 100% safe. Even if it is true that autonomous cars are safer than human drivers, they aren't as competent as human drivers overall.

Incidentally, I'm pretty familiar with the autonomous driving industry and I've spent countless hours looking into such claims. I even once paid someone with a PhD in a relevant field to help me analyze some data to try to come to a conclusion. (The result was there wasn't enough data to draw a conclusion.) What I've found is that autonomous driving companies are incredibly secretive about the data they keep on safety and other kinds of driving performance. They have aggressive PR and marketing, but they won't actually publish the data that will allow third-parties to independently audit how safe their AI vehicles are.

Besides just not having the data, there are the additional complications of 1) aggressive geofencing to artificially constrain the problem and make it easier (just like a parked car is 100% safe, a car slowly circling a closed track would also be almost 100% safe) and 2) humans in the loop, either physically inside the car or remotely.[1]

The most important thing to know is that you can't trust these companies' PR and marketing. Autonomous vehicle companies will be happy to say their cars are superhuman right up until the day they announce they're shutting down. It's like Soviet propagandists saying communism is going great in 1988. But also, no, you can't look at their economic data.

1. ^ Edited on October 20, 2025 at 12:35pm Eastern to add: See the footnote added to my comment above for Andrej Karpathy's recent comments on this.

AI will hunt down the last remaining human, and with his last dying breath, humanity will end - not with a bang, but with a "you don't really count as AGI"

Thank you for this - as someone who lives with my wife and kids on the other side of the world from the "optimal" place to live, around the corner from the grandparents and cousins, I very much appreciate people raising the flag for this being an acceptable choice in the community.

That said, I think there's another aspect that is worth flagging: the implicit expectation that the commitment to EA is utilitarian, and so you won't have your own priorities other than the minimum needed to keep yourself happy and motivated, or if not, at least the (mista... (read more)

Hi David, if I've understood you correctly, I agree that a reason to return home is for other priorities that have nothing to do with impact. I personally did not return home for the extra happiness or motivation required to stay productive, but because I valued these other things intrinsically, which Julia articulates better here: https://forum.effectivealtruism.org/posts/zu28unKfTHoxRWpGn/you-have-more-than-one-goal-and-that-s-fine

I'll point to my dated but still relevant counterpoint: the way that EA has been built is worrying, and EA as a global community that functions as a high-trust collaborative society is bad. This conclusion was tentative at the time, and I think has been embraced to a very limited extent since then - but the concerns seem not to be noted in your post.

One application of this line of reasoning here, as @Holly Elmore ⏸️ 🔸 has said more than once, including here, is that being friends and part of a single community seems to have dampened people's ability to... (read more)

...it occurs to me that it's worrying in very different directions if FRED changes what or how they report. If they stop reporting or data collection is halted for political reasons, I'd expect that we either pick an arbiter to make the call, or agree to call the bet off.

2
Vasco Grilo🔸
I see. By default, I think the bet should resolve as specified, "as reported by the Federal Reserve Economic Data (FRED)". However, I would be happy to do as you suggested if we agree there was a significant change in the reliability or definition of the unemployment rate reported by FRED.

I've updated substantially towards this view - the practical issues with renting CPUs make them far less of a fungible commodity than I was assuming, and as you pointed out, contra my understanding, there are effective restrictions on Chinese companies getting their hands on large amounts of compute.

Thanks for this response - I am not an expert on chip production, and your response on fabrication is clearly better informed than mine.

However, "Policy changes in 2025 could start affecting Chinese AI models in 2027 (for chips) and around 2030 (for SME) already."

I now agree with this - and I was told in other comments that I didn't sufficiently distinguish between these two, so thanks for clarifying that. But 2030 for starting to help get more chips assumes long timelines, and the people you cite with 2029-2030 timelines expect things to be playing out already by then, so starting to get more chips at that point seems irrelevant in those worlds.

Edit to add: First, I really liked your post yesterday, which responded to some of this.

I think the technical barriers to developing EUV photolithography from scratch are far higher than anything needed to extract, refine, or transport oil.

I think the technical barriers are higher today, but not so high that intense Chinese investment can't dent them over the course of a decade. SMEE is investing in laser-induced discharge plasma tech, with rumored trial production as soon as the end of this year. SMIC is using DUV more efficiently for (lower-yie... (read more)

6
Erich_Grunewald 🔸
On timelines, I think it's worth separating out export controls on different items:
* Controls on AI chips themselves start having effects on AI systems within a year or so probably (say 6-12 months to procure and install the chips, and 6-18 months to develop/train/post-train a model with them), or even sooner for deployment/inference, i.e. 1-2 years or so.
* Controls on semiconductor manufacturing equipment (SME) take longer to have an impact as you say, but I think not that long. SMIC (and therefore future Ascend GPUs) is clearly limited by the 2019 ban on EUV photolithography, and I would say this was apparent as early as 2023. So I think SME controls instituted now would start having an effect on chip production in the late 2020s already, and on AI systems 1-2 years after that.

Most other relevant products (e.g., HBM and EDA software) probably fall between those two in terms of how quickly controls affect downstream AI systems. So that means policy changes in 2025 could start affecting Chinese AI models in 2027 (for chips) and around 2030 (for SME) already, which seems relevant to even short-timeline worlds. For example, Daniel Kokotajlo's median for superhuman coders is now 2029, and IIUC Eli Lifland's median is in the (early?) 2030s. But I would go further to say that export controls now can substantially affect compute access well into the 2030s or even the 2040s.

You write that I won't have time to go into great detail here, but I have researched this a fair amount and I think you are too bullish on Chinese leading-edge chip fabrication. To be clear, China can and will certainly produce AI chips, and these are decent AI chips. But they will likely produce those chips less cost-efficiently and at lower volumes due to having worse equipment, and they will have worse performance than TSMC-fabbed chips due to using older-generation processes. The lack of EUV machines, which will likely last at least another five years and plausibly well into the 2030s, al
6
Erich_Grunewald 🔸
On the oil analogy, it seems from that you think ownership of compute does not substantially influence who will have or control the most powerful AI systems? I disagree; I think it will impact both AI developers and also companies relying on access to AI models.

First, AI developers -- export controls put the Chinese AI industry as a whole at a compute disadvantage, which we see in the fact that they train less compute-intensive models, for a few reasons:
* It is generally unappealing for major AI developers to merely rent GPUs they don't own, as a result of which they often build their own data centers (xAI, Google) or rely on partnerships for exclusive access (OpenAI, Anthropic). I think the main reasons for this are cost, (un)certainty, and greater control over the cluster set-up.
* Chinese companies cannot build their own data centers with export-controlled chips without smuggling, and cannot embark on these partnerships with American hyperscalers. If they want to use cutting-edge GPUs, they must either rely on smuggling (which means higher prices and smaller quantities), or renting from foreign cloud providers.
* The US likely could, if and when it wanted to, deny access of compute via the cloud to Chinese customers, at least large-scale use and at least for the large hyperscalers. So for Chinese AI developers to rely on foreign cloud compute gives the US a lot of leverage. (There are some questions around how feasible it is to circumvent KYC checks, and especially whether the US can effectively ensure these checks are done well in third countries, but I think the US could deny China most of the world's rentable cloud compute in this way.)
* Chinese privacy law makes it harder for Chinese AI developers to use foreign cloud compute, at least for some use cases. I'm not sure exactly how strong this effect is, but it seems non-negligible.
* For deployment/inference, you may want to have your compute located close to your users, as that reduces latency.
*

To be clear, I think that this bet is pretty close to fair according to my views about the likely future of the US economy - I don't have a large edge in my estimate, and I'm making the bet mostly to have my (fairly and unusually pessimistic) views clearly on the record, for myself and for others.

I would rather see people make bets that they think are very profitable (relative to the size of the bet).

There's this idea that betting on your beliefs is epistemically virtuous, which sometimes leads people to be so eager to bet that they make bets at odds that are roughly neutral EV for them. But I think the social epistemic advantages of betting mostly depend on both parties trying to make bets where they think they have a significant EV edge, so sacrificing your EV to get some sort of bet made is also sacrificing the epistemic spillover benefits of the bet.
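A minimal sketch of the expected-value point (an illustration only; the stakes and probabilities here are hypothetical, not the terms of the actual bet):

```python
def bet_ev(p_win: float, amount_won: float, amount_lost: float) -> float:
    """Expected profit of taking a bet, under the bettor's own credence."""
    return p_win * amount_won - (1 - p_win) * amount_lost

# A roughly neutral bet: credence 1/3 on the event, taken at 2:1 odds.
print(bet_ev(1/3, 2.0, 1.0))   # 0.0 -- no edge, so taking it reveals little

# A bet with a real edge: same odds, but credence 1/2.
print(bet_ev(1/2, 2.0, 1.0))   # 0.5 -- positive EV, so taking it is informative
```

The act of betting carries information about beliefs mainly when the odds differ meaningfully from the bettor's own probability.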

It's partly shorter timelines, which we're seeing start to play out, and partly underlying pessimism on US economic policy under Trump, and the increasing odds of a recession. 

The US economy has stalled, and the only reason this isn't obvious in the stock market is the AI companies - so my weak general model is that either the AI companies continue to do better, which at least weakly implies job displacement, or they don't, and there's a market crash and a need for stimulus and inflation. In that situation, or even with a continued status quo maybe... (read more)

4
NickLaing
Yeah I agree there's a decent chance of a recession unrelated to AI, but I'm not sure I would pick a 1 in 3 chance of an 8 no prevent unemployment creating recession though. The bet to me though doesn't seem too unbalanced even disregarding AI. I agree with Thomas though that 1 in 4 seems even more midpointy
9
Davidmanheim
To be clear, I think that this bet is pretty close to fair according to my views about the likely future of the US economy - I don't have a large edge in my estimate, and I'm making the bet mostly to have my (fairly and unusually pessimistic) views clearly on the record, for myself and for others.

First, I was convinced, separately, that chip production location matters more than I presumed here because chips are not commodities in an important way I neglected - the security of a chip isn't really verifiable post-hoc, and worse, the differential insecurity of chips to US versus Chinese backdoors means that companies based in different locations will have different preferences for which risks to tolerate. (On the other hand, I think you're wrong in saying that "the chip supply chain has unique characteristics [compared to oil,] with extreme manufactu... (read more)

9
Erich_Grunewald 🔸
I think the technical barriers to developing EUV photolithography from scratch are far higher than anything needed to extract, refine, or transport oil. I also think the market concentration is far higher in the AI chip design and semiconductor industries. There's no oil equivalent to TSMC's ~90% leading-edge logic chip, NVIDIA's ~90% data center GPU, or ASML's 100% EUVL machine market shares.

Are you sure? I would guess that the chip supply chain used by NVIDIA has more investment than the Chinese counterpart. For example, according to a SEMI report, China will spend $38bn on semiconductor manufacturing equipment in 2025, whereas the US + Taiwan + South Korea + Japan is set to spend a combined ~$70bn. I would guess it looks directionally similar for R&D investment, though the difference may be smaller there.

I was under the impression the AI chip design process is more like 1.5-2 years, and a fab is built in 2-3 years in Taiwan or 4 years for the Arizona fab. It sounds like you think differently? Whatever it is, I would guess it's roughly similar across the industry, including in China. That seems like, if my numbers are right, it leaves enough room for policy now to influence the relative compute distribution of nations 5-6 years from now.

As a meta-comment, it's important to note that a huge proportion of the disagreement in the comments here is about what "engage deeply" means.

If that means these are cruxes that must be decided upon, the claim is clearly true that we must engage with them - because they are certainly cruxes.
If it means people must individually spend time on doing so, it is clearly false, because people can rationally choose not to engage and use some heuristic, or to defer to experts, which is rational[1].

  1. ^

    In worlds where computation and consideration are not free. Using certain te

... (read more)

Deference to authority is itself a philosophical contention which has been discussed and debated (in that case, in comparison to voting as a method.)

6
David Mathers🔸
Sure, but it's an extreme view that it's never ok to outsource epistemic work to other people. 
Davidmanheim
2
0
0
80% disagree

It is possible to rationally prioritise between causes without engaging deeply on philosophical issues

As laid out at length in my 2023 post, no, it is not. For a single quote: "all of axiology, which includes both aesthetics and ethics, and large parts of metaphysics... are central to any discussion of how to pursue cause-neutrality, but are often, in fact, nearly always, ignored by the community."

As to the idea that one can defer to others in place of engaging deeply, this is philosophically debated, and while rational in the decision theoretic sense, it i... (read more)

Definitely seems reasonable, but it would ideally need to be done somewhere high prestige.

Convince funders to invest in building those plans, to sketch out futures and treaties that could work robustly to stop the likely coming nightmare of default AGI/ASI futures.

3
Peter
Would be curious to hear your thoughts on this as one strategy for eliciting better plans 

The key takeaway, which has been argued for by myself and others, should be to promote investment in clear plans for what post-warning-shot AI governance looks like. Unfortunately, despite the huge contingent value, there is very little good work on the topic.

3
LuisEUrtubey
In a scenario like that, it's also important to prevent something similar to what happened to the State Department's Future of Iraq plans.
3
Peter
Do you have ideas about how we could get better plans?

The description is about punishment for dissent from non-influential EAs, but the title is about influential members. (And I'd vote differently depending on which is intended.)
