the extinction scenario that Eliezer Yudkowsky has described. His scenario depends on the premise that AI systems could quickly develop advanced molecular nanotechnology capable of matching or even surpassing the sophistication of biological systems.
But that's not the claim he makes!
To quote:
The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.
Mostly agree. I've been more involved in local orgs than most people in EA - I grew up in a house where my parents were often serving terms on different synagogue and school boards, and my wife has continued her family's similar tradition - so I strongly agree that passionate alignment changes things. But even that rarely leads to boards setting the strategic direction.
I think a large part of this is that strategy is hard, as you note, and it's very high-context for orgs. I still wonder about who is best placed to track priority drift, and about ho...
My board isn't the reason for the lack of clarity - and it certainly is my job to set the direction. I don't think any of them are particularly dissatisfied with the way I've set the org's agenda. But my conclusion is that I disagree somewhat with Holden's post, which partly guided me over the past couple of years: the right answer is more situational, and there are additional useful roles for the board.
I'd find a breakdown informative, since the distribution - both across different frontier firms and between safety and non-safety roles - seems really critical, at least in my view of the net impacts of a program. (Of course, none of this tells us counterfactual impact, which might be moving people on net either way.)
I don't think it's that much of a sacrifice.
I don't understand how this is an argument applicable to anyone other than yourself; other people clearly feel differently.
I also think that for many, the only difference in practice would be slightly lower savings for retirement.
If that is something they care or worry about, it's a difference - adding the word "only" doesn't change that!
I've run very successful group brainstorming sessions with experts just to require them to actually think about a topic enough to realize what seems obvious to me. Getting people to talk through what the next decade of AI progress will look like didn't make them experts, or even get them to the basic level I could have presented in a 15-minute talk - but it gave me a chance to push them beyond their cached thoughts, without them rejecting views they'd see as extreme, since they are the ones thinking them!
But EA should scale, because its ideas are good, and this leaves it in a much more tricky situation.
I'll just note that when the original conversation started, I addressed this in a few parts.
To summarize, I think that yes, EA should be enormous, but it should not be a global community, and it needs to grapple with how the current community works, and figure out how to avoid ideological conformity.
There's also an important question about which EA causes are differentially more or less likely to be funded. If you think Pause AI is good, Anthropic's IPO probably won't help. If you think mechanistic interpretability is valuable, it might help to fund more training in relevant areas, but you should expect an influx of funding soon. And if you think animal welfare is important, funding new high-risk startups that can take advantage of a wave of funding in a year may be an especially promising bet.
This could either be a new resource or an extension of an existing one. I expect that improving an existing resource would be faster and require less maintenance.
My suggestion would be to improve the AI Governance section of aisafety.info.
cc: @melissasamworth / @Søren Elverlin / @plex
To possibly strengthen the argument made, I'll point out that moving already-effective money to a more effective cause or donation has a smaller counterfactual impact, because those donors are already looking at the question and could easily come to the same conclusion on their own. Moving money in a "normie" foundation, on the other hand, can have the knock-on effect of getting them to think about impact at all, and changing their trajectory.
We're also more likely to be incorrect, and to influence money in the wrong direction, if we're advising people who already take an effectiveness-based approach! I think full-time, specialized impact evaluators are the best resource we have for improving our answers to these questions over time, but they're fallible people working on complicated questions, and they will certainly sometimes come to less-optimal decisions than other smart people working from the same principles and premises. By contrast, a "normie" foundation landing on a more cost-effective answer than the impact-focused evaluators is probably rare, as it would be something of an accident.
I meant that I don't think it's obvious that most people in EA working on this would agree.
I do think it's obvious that most people overall would agree, though most would either disagree that a simulation matters at all, or be unsure. It's even unclear how to count person-experiences overall, as Johnston's Personite paper argues: https://www.jstor.org/stable/26631215 - and I'll also point to the general double-counting problem (https://link.springer.com/article/10.1007/s11098-020-01428-9) and suggest that it could apply.
I need to write a far longer response to that paper, but I'll briefly respond (and flag to @Christian Tarsney) that my biggest crux is that I think they picked weak objections to causal domain restriction, and that far better objections apply. Secondarily, for axiological weights, the response about egalitarian views leading to rejection of different axiological weights seems to beg the question, and the next part ignores the fact that any acceptable response to causal domain restriction also addresses the issue of large background populations.
I recently discussed this on Twitter with @Jessica_Taylor, and think that there's a weird claim involved that collapses into either believing that distance changes moral importance, or that thicker wires in a computer increase its moral weight. (Similar to the cutting-dominos-in-half example in that post, or the thicker pencil, but less contrived.) Alternatively, it confuses the question by claiming that identical beings at time t_0 are morally different because they differ at time t_n - which is a completely different claim!
I think the many worlds interp...
That's a fair point, and I agree that it leads to a very different universe.
At that point, however (assuming we embrace moral realism and an absolute moral value of some non-subjective definition of qualia, which seems incoherent), it also seems to lead to a functionally unsolvable coordination problem for maximization across galaxies.
a PhD applicant could ask their prospective supervisor’s current grad students what it’s like to work with the supervisor. Yet, at least when I was applying to grad school, this was not very common.
I often advise doing this, albeit slightly differently - talk to their recently graduated former PhD students, who have a better perspective on what the process led to and how valuable it was in retrospect. I think similar advice plausibly applies in corresponding cases - talk to people who used to work somewhere, instead of current employees.
if the value of welfare scales something-like-linearly
I think this is a critically underappreciated crux! Even accepting the other parts, it's far from obvious that the intuitive approach of scaling value linearly in the near term and locally is indefinitely correct far out-of-distribution; simulating the same wonderful experience a billion times certainly isn't a billion times greater than simulating it once.
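To make the contrast concrete with a toy formula of my own (not from the original post), let $v$ be the value of one instance and $\delta$ a hypothetical rate at which duplicate experiences contribute less:

$$V_{\text{linear}}(n) = n\,v \qquad \text{vs.} \qquad V_{\text{diminishing}}(n) = v\,\frac{1-(1-\delta)^n}{\delta} \le \frac{v}{\delta}.$$

Locally and in the near term the two are nearly indistinguishable (for small $n$, both are roughly $n\,v$), but at a billion copies the second is bounded by $v/\delta$ rather than growing without limit - which is exactly the kind of divergence that only shows up far out-of-distribution.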
Strongly both agree and disagree - it's incredibly valuable to have savings, and it should definitely be prioritized, but despite being smart, it's not a donation!
So if you choose to save instead of fulfilling your full pledge, I think that's a reasonable decision, though I'd certainly endorse trying to find other places to save money instead. But given that, don't claim it's charitable; say you're making a compromise. (Moral imperfection is normal and acceptable, if not inevitable. Trying to justify such compromises as actually being fully morally justified, in my view, is neither OK nor ever necessary.)
Yeah, now that I'm doing payroll donations I have not been recording the data. I guess it would be good to fill in the data, for EDIT: GWWC's records?
Understood, and reasonable. The problem is that I'm uncomfortable with "the most good" as the goal anyway, as I explained a few years ago: https://forum.effectivealtruism.org/posts/f9NpDx65zY6Qk9ofe/doing-good-best-isn-t-the-ea-ideal
So moving from 'doing good better' to 'do the most good' seems explicitly worse on dimensions I care about, even if it performs better on approval.
I would be careful with this - it might be an improvement, but are we sure that optimizing short-term messaging success is the right way to promote ideas that are meant to be important long-term conceptual changes in how people approach life and charity?
Lots of other factors matter, and optimizing one dimension, especially using short-term approval, implicitly deprioritizes other important dimensions of the message. Also, as a partial contrast to this point, see "You get about five words."
Thanks David.
I certainly agree that we should be careful to make sure that we don't over-optimise short-term appeal at the cost of other things that matter (e.g. long-term engagement, accuracy and fidelity of the message, etc.). I don't think we're calling for people to only consider this dimension: we explicitly say that we "think that people should assess particular cases on the basis of all the details relevant to the particular case in question."
That said, I think that there are many cases where those other dimensions won't, in fact, be diminished by s...
"Without causing inflation" seems hard to support based on this study, given the short timeframe and large external effects which aren't being controlled for.
That said, it seems very plausible that the localized economic impact of more cash wouldn't drive large price changes if the economy were integrated with other regions; the prices of critical inputs such as grain are driven by global markets more than by local demand. And the surveyed markets shown are mostly for globally traded goods.
You're right that they made the problem easier with geofencing, but the data from Waymo isn't ambiguous and, despite your previous investigations, is now published: https://storage.googleapis.com/waymo-uploads/files/documents/safety/Safety%20Impact%20Crash%20Type%20Manuscript.pdf
This example makes it clear that the approach works to automate significant human labor, with some investment, without solving AGI.
Thank you for this - as someone who lives with my wife and kids on the other side of the world from the "optimal" place to live, around the corner from the grandparents and cousins, I very much appreciate people flagging that this is an acceptable choice in the community.
That said, I think there's another aspect worth flagging: the implicit expectation that the commitment to EA is utilitarian, and so you won't have your own priorities other than the minimum needed to keep yourself happy and motivated, or if not, at least the (mista...
Hi David, if I've understood you correctly, I agree that a reason to return home can be other priorities that have nothing to do with impact. I personally did not return home for the extra happiness or motivation required to stay productive, but because I valued these other things intrinsically, as Julia articulates better here: https://forum.effectivealtruism.org/posts/zu28unKfTHoxRWpGn/you-have-more-than-one-goal-and-that-s-fine
I'll point to my dated but still relevant counterpoint: the way that EA has been built is worrying, and EA as a global community that functions as a high-trust collaborative society is bad. This conclusion was tentative at the time, and I think has been embraced to a very limited extent since then - but the concerns seem not to be noted in your post.
One application of this line of reasoning, as @Holly Elmore ⏸️ 🔸 has said more than once, including here, is that being friends and part of a single community seems to have dampened people's ability to...
I've updated substantially towards this view - the practical issues with renting GPUs make them far less of a fungible commodity than I was assuming, and as you pointed out, contra my understanding, there are effective restrictions on Chinese companies getting their hands on large amounts of compute.
Thanks for this response - I am not an expert on chip production, and your response on fabrication is clearly better informed than mine.
However, "Policy changes in 2025 could start affecting Chinese AI models in 2027 (for chips) and around 2030 (for SME) already."
I now agree with this - and I was told in other comments that I didn't sufficiently distinguish between these two, so thanks for clarifying that. But 2030 for starting to help get more chips is a long timeline, and the people you cite with 2029-2030 timelines expect things to already be playing out by then, so starting to get more chips at that point seems irrelevant in those worlds.
Edit to add: First, I really liked your post yesterday, which responded to some of this.
I think the technical barriers to developing EUV photolithography from scratch are far higher than anything needed to extract, refine, or transport oil.
I think the technical barriers are higher today, but not so high that intense Chinese investment can't dent them over the course of a decade. SMEE is investing in laser-induced discharge plasma tech, with rumored trial production as soon as the end of this year. SMIC is using DUV more efficiently for (lower-yie...
I would rather see people make bets that they think are very profitable (relative to the size of the bet).
There's this idea that betting on your beliefs is epistemically virtuous, which sometimes leads people to be so eager to bet that they make bets at odds that are roughly neutral EV for them. But I think the social epistemic advantages of betting mostly depend on both parties trying to make bets where they think they have a significant EV edge, so sacrificing your EV to get some sort of bet made is also sacrificing the epistemic spillover benefits of the bet.
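As a toy illustration with numbers I'm making up (not from the exchange above): say Alice puts 70% on an event and Bob puts 40%, and they bet $100 against $100 at even odds. By their own probabilities,

$$\mathbb{E}[\text{Alice}] = 0.7(100) - 0.3(100) = 40, \qquad \mathbb{E}[\text{Bob}] = 0.6(100) - 0.4(100) = 20,$$

so both sides expect to profit, and the bet actually tells you something about their confidence. If Alice is so eager to get a bet made that she offers 7:3 odds (her $70 against Bob's $30), her expected value falls to $0.7(30) - 0.3(70) = 0$ - she has signalled willingness to bet, but is no longer revealing any real edge.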
It's partly shorter timelines, which we're seeing start to play out, and partly underlying pessimism about US economic policy under Trump and the increasing odds of a recession.
The US economy has stalled, and the only reason this isn't obvious in the stock market is the AI companies - so my weak general model is that either the AI companies continue to do better, which at least weakly implies job displacement, or they don't, and there's a market crash, a need for stimulus, and inflation. In that situation, or even with a continued status quo maybe...
First, I was convinced, separately, that chip production location matters more than I presumed here, because chips are not commodities in an important way I neglected - the security of a chip isn't really verifiable post hoc, and worse, the differential vulnerability of chips to US versus Chinese backdoors means that companies based in different locations will have different preferences for which risks to tolerate. (On the other hand, I think you're wrong in saying that "the chip supply chain has unique characteristics [compared to oil,] with extreme manufactu...
As a meta-comment, it seems important to note that a huge proportion of the disagreement in the comments here is about what "engage deeply" means.
If it means that these are cruxes that must be decided upon, the claim that we must engage with them is clearly true - because they certainly are cruxes.
If it means that people must individually spend time doing so, it is clearly false, because people can rationally choose not to engage and to use some heuristic, or to defer to experts, which is rational[1].
[1] In worlds where computation and consideration are not free. Using certain te
Deference to authority is itself a contested philosophical position, which has been discussed and debated (in that case, in comparison to voting as a method).
It is possible to rationally prioritise between causes without engaging deeply on philosophical issues
As laid out at length in my 2023 post, no, it is not. To pick a single quote: "all of axiology, which includes both aesthetics and ethics, and large parts of metaphysics... are central to any discussion of how to pursue cause-neutrality, but are often, in fact, nearly always, ignored by the community."
As to the idea that one can defer to others in place of engaging deeply, this is philosophically debated, and while rational in the decision-theoretic sense, it i...
Strong agree that, absent new approaches, the tailwind isn't enough - but it seems unclear that pretraining scaling doesn't have further to go, and current approaches with synthetic data and training via RL to enhance one-shot performance seem to have significant room left for improvement.
I also don't know how much room is left until we hit genius-level AGI or beyond, and at that point, even if we hit a wall, more scaling isn't required, as the timeline basically ends.