I live for a high disagree-to-upvote ratio
Hmm. Not a super well-thought-out take here, but it seems to me that Situational Awareness's biggest crux is whether an arms race dynamic would develop between the U.S. and China, and Aschenbrenner lays out a few specific ways in which that might happen.
I don't see any evidence of such an arms race taking place. China doesn't have any frontier labs, only labs that distill other models. It hasn't yet produced a capable chip and seems at least a few years to half a decade away from one (much slower than Aschenbrenner's predictions). It hasn't waged a state-sponsored cyberattack to steal model weights or algorithmic secrets, though I suppose you could argue that distillation is cheaper and easier in the short term?
In fact, given the ease of distillation and the proliferation of open-source models, it might be more reasonable to argue that such an arms race may not even occur, because it will be cheap and easy to access intelligence.
One reason this is important is that AOC is very likely to run for president in 2028, and she has so far been quite judicious about which policies she chooses to publicly support and endorse.
Either this is an attempt to test the waters on AI regulation, to see whether it will become part of her platform, or she is already convinced it will be. If she runs, she will be in a position to leverage this policy to push other Democratic presidential candidates toward similar measures (or a rhetorical anti-AI framing). The other most likely candidate for president is Gavin Newsom, in whose state most of the leading AI companies are headquartered.
What would you say to a potential attendee who has a legitimate interest in reprogenetics' emancipatory capacity but is concerned that the conference will be taken over by discussions of human biodiversity? This concern seems especially salient given that two of the featured speakers, Jonathan Anomaly and Steve Hsu, have pretty clearly endorsed HBD, or at least, given the ambiguities in their statements, have never explicitly disavowed it.
Would you be interested in screening out certain problematic attendees, or in explicitly disavowing human biodiversity on the conference website, in order to create an environment that welcomes open discussion of reprogenetics?
One other thing that feels missing from these comments is that a more mature field has a bunch of other interesting discussion points. If all the philosophical questions in EA GHD were one day solved, we could still have invigorating debates about how to develop and manage interventions, about who the payer should be, and so on.
So I'm not sure this is all just a dearth of topics to discuss. Perhaps the nuance is that this forum tends to prefer the more philosophical or intellectual discussions, and those aren't generally the kinds of debates most GHD practitioners I know are having?
To me wellbeing is the most exciting topic in EA GHD at the moment, because with some serious engagement from the kinds of players attending that workshop, it has the greatest potential to credibly upend the currently accepted wisdom in EA GHD. There are a lot of questions that you and others have been chipping away at for some time that many people assume are either solved or unlikely to yield field-altering results, and I think that impression is wrong!
Average income of CS graduates relative to average US individual income at the midpoint between now and HL-AGI
I don't think it's going to change much. Supply might rise slightly as AI tools make it easier for people to write code, but writing code ≠ developing software. Demand might fall slightly at first as existing firms find productivity improvements and markets demand cuts, but the demand for more software is still nearly infinite.
A rush of new, cheap entry-level programmers from the Global South in the 2000s–2010s didn't noticeably depress wages.
I'm not an economist, though, so I'm probably not qualified to have a good opinion here; I'm speaking as a professional software engineer with deep familiarity with these tools.
One thing I didn't expand on in that thread is some uncertainty I have around the condition 'You think your sacrificed money is best spent on the non-profit you are working for'.
For these reasons I haven't counted my sacrifice toward a GWWC pledge so far, but I'm uncertain about that.
This is really nice; I like it a lot. Millenarianism feels all too easy to reach for in AI risk. As you note, there is a subtle self-satisfaction in predicting the end of the world, and we have to be careful not to use it as a crutch. In the worlds where we succeed, it will have been important that we did so pro-socially, so that the world after has any chance of being worth living in.