I live for a high disagree-to-upvote ratio
What would you say to a potential attendee who has a legitimate interest in reprogenetics’ emancipatory capacity, but is concerned that the conference will be taken over by discussions of human biodiversity, especially given that two of the featured speakers, Jonathan Anomaly and Steve Hsu, have both pretty clearly endorsed HBD, or at least, given the ambiguities in their statements, never explicitly disavowed it?
Would you be interested in screening out certain problematic attendees, or in explicitly disavowing human biodiversity on the conference website, in order to create an environment welcoming of open discussion of reprogenetics?
One other thing that feels missing from these comments is that a more mature field has a bunch of other interesting discussion points. If all the philosophical questions in EA GHD were solved one day, we could still have invigorating debates about how to develop and manage interventions, about who the payer should be, and so on.
So I’m not sure this is all just a dearth of topics to discuss. Perhaps the nuance is that this forum tends to prefer those more philosophical or intellectual discussions, and those aren’t generally the kinds of debates most GHD practitioners I know are having?
To me wellbeing is the most exciting topic in EA GHD at the moment, because with some serious engagement from the kinds of players attending that workshop, it has the greatest potential to credibly upend the currently accepted wisdom in EA GHD. There are a lot of questions that you and others have been chipping away at for some time that many people assume are either solved or unlikely to yield field-altering results, and I think that impression is wrong!
Average income of CS graduates relative to average US individual income at the midpoint between now and HL-AGI
I don’t think it’s going to change much. Wages might dip slightly on the supply side as AI tools make it easier for people to write code, but writing code ≠ developing software. Demand for engineers might dip slightly at first as existing firms find productivity improvements and markets demand cuts, but the appetite for more software is still nearly infinite.
A rush of new, cheap entry-level programmers from the Global South in the 2000s–2010s didn’t really depress wages at all.
I’m not an economist, though, so I’m probably not qualified to have a strong opinion here. I’m speaking as a professional software engineer with deep familiarity with these tools.
One thing I didn’t expand on in that thread is some uncertainty I have around ‘You think your sacrificed money is best spent on the non-profit you are working for’.
For these reasons I haven’t considered my sacrifice as a GWWC pledge so far, but I’m uncertain about it.
Given EA's goals, I'd argue it's okay to hold them to a high standard.
I would go further, and say that given CEA’s specific history and promises of change around sexual harassment[1], we should hold them to an even higher standard than that.
CEA was and is a member organisation of EV UK, and the findings partially concerned CEA’s Community Health Team ↩︎
One reason this is important is because AOC is very likely to run for president in 2028, and has so far been quite judicious about which policies she chooses to publicly support and endorse.
Either this is an attempt to test the waters on AI regulation, to see whether it will become part of her platform, or she is already convinced that it will be. If she runs, she will be in a position to leverage this policy to push other Democratic presidential candidates toward adopting similar measures (or a rhetorical anti-AI framing). The other most likely candidate for president is Gavin Newsom, in whose state most of the leading AI companies are headquartered.