All of richard_ngo's Comments + Replies

I used to not actually believe in heavy-tailed impact. On some level I thought that early rationalists (and to a lesser extent EAs) had "gotten lucky" in being way more right than academic consensus about AI progress. And I thought on some gut level that e.g. Thiel and Musk and so on kept getting lucky, because I didn't want to picture a world in which they were actually just skillful enough to keep succeeding (due to various psychological blockers).

Now, thanks to dealing with a bunch of those blockers, I have internalized to a much greater extent that you... (read more)

2
Tristan W
Still working my way through the talk and post mentioned, so pardon the tardiness, but does that mean you expect the highest-quality talent will naturally find its way to the field? I suppose I see a tension between "outreach only to the best" and generally walking away from outreach. E.g. do the fellowships seem like a reasonable bet to you now that they're super competitive and raising their bar, or are they still too general in scope, and should we instead be doing something like running an exclusive side event at NeurIPS? Put more succinctly: should we be raising the bar for the quality of talent reached, or working to pivot outreach to those who already show strong signs of success in relevant fields? Helpful updates though, thanks for taking the time to share them.

My story is: Elon changing the twitter censorship policies was a big driver of a chunk of Silicon Valley getting behind Trump—separate from Elon himself promoting Trump, and separate from Elon becoming a part of the Trump team.

And I think anyone who bought Twitter could have done that.

If anything, being Elon probably made it harder, because he then had to face advertiser boycotts.

Agree/disagree?

2
Cullen 🔸
Ah, interesting, not exactly the case that I thought you were making. I more or less agree with the claim that "Elon changing the twitter censorship policies was a big driver of a chunk of Silicon Valley getting behind Trump," but probably assign it lower explanatory power than you do (especially compared to nearby explanatory factors, like Elon crushing internal resistance and employee power at Twitter). But I disagree with the claim that anyone who bought Twitter could have done that, because I think that Elon's preexisting sources of power and influence significantly improved his ability to drive and shape the emergence of the Tech Right. I also don't think that the Tech Right would have as much power in the Trump admin if not for Elon promoting Trump and joining the administration. So a different Twitter CEO who also created the Tech Right would have created a much less powerful force.

To be clear, my example wasn't "I'm trying to talk to people in the south about racism". It's more like, "I'm trying to talk to people in the south about animal welfare, and in doing so, I bring up examples of people in the South being racist."

Yeah I got that. Let me flesh out an analogy a little more:

Suppose you want to pitch people in the south about animal welfare. And you have a hypothesis for why people in the south don't care much about animal welfare, which is that they tend to have smaller circles of moral concern than people in the north. Here are two... (read more)

5
Ozzie Gooen
Thanks for continuing to engage! I really wasn't expecting this to go so long. I appreciate that you are engaging on the meta-level, and also that you are keeping more controversial claims separate for now.

On the thought experiment of the people in the South, it sounds like we might well have some crux[1] here. I suspect it would be strained to discuss it much further. We'd need to get more and more detailed on the thought experiment, and my guess is that this would turn into a much longer debate. Some quick things:

This is a sort of sentence I find frustrating. It feels very motte-and-bailey - like on one hand, I expect you to make a narrow point popular on some parts of MAGA Twitter/X, then on the other, I expect you to say, "Well, actually, Trump got 51% of the popular vote, so the important stuff is actually a majority opinion." I'm pretty sure that very few specific points I would have a lot of trouble with are actually substantially endorsed by half of America. Sure, there are ways to phrase things very carefully such that versions of them can technically be seen as being endorsed, but I get suspicious quickly.

The weasel phrases here of "to some extent" and "approximately", and even the vague phrases "memeplex" and "endorsed", also strike me as very imprecise. As I think about it, I'm pretty sure I could claim that that sentence could hold, with a bit of clever reasoning, for almost every claim I could imagine someone making on either side.

To be clear, I'm fine with someone straightforwardly writing good arguments in favor of much of MAGA[2]. One of my main issues with this piece is that it's not claiming to be that; it feels like you're trying to sneakily (intentionally or unintentionally) make this about MAGA.

I'm not sure what to make of the wording of "the optimal number of people raising and defending MAGA ideas in EA and AI safety is clearly not zero." I mean, to me, the more potentially inflammatory content is, the more I'd want to make sure it's

a lot of your framing matches incredibly well with what I see as current right-wing talking points

Occam's razor says that this is because I'm right-wing (in the MAGA sense not just the libertarian sense).

It seems like you're downweighting this hypothesis primarily because you personally have so much trouble with MAGA thinkers, to the point where you struggle to understand why I'd sincerely hold this position. Would you say that's a fair summary? If so, hopefully some forthcoming writings of mine will help bridge this gap.

It seems like the other reason you'r... (read more)

3
Ozzie Gooen
If you're referring to the part where I said I wasn't sure if you were faking it - I'd agree. From my standpoint, it seems like you've shifted to hold beliefs that both seem highly suspicious and highly convenient - this starts to raise the hypothesis that you're doing it, at least partially, strategically.

(I relatedly think that a lot of similar posturing is happening on both sides of the political aisle. But I generally expect that the politicians and power figures are primarily doing this for strategic interests, while the news consumers are much more likely to actually genuinely believe it. I'd suspect that others here would think similarly of me, if it were the case that we had a hard-left administration and I suddenly changed my tune to be very in line with that.)

Again, this seems silly to me. For one thing, while I don't always trust people's publicly-stated political viewpoints, I trust their stated reasons for doing these sorts of things even less. I could imagine that your statement is what it honestly feels like to you, but this just raises a bunch of alarm bells for me. Basically, if I'm trying to imagine someone coming up with a convincing reason to be highly and unnecessarily (from what I can tell) provocative, I'd expect them to raise some pretty wacky reasons for it. I'd guess that the answer is often simpler, like, "I find that trolling just brings with it more attention, and this is useful for me," or "I like bringing in provocative beliefs that I have, wherever I can, even if it hurts an essay about a very different topic. I do this because I care a great deal about spreading these specific beliefs. One convenient thing here is that I get to sell readers on an essay about X, but really, I'll use this as an opportunity to talk about Y instead."

Here, I just don't see how it helps. Maybe it attracts MAGA readers. But for the key points that aren't MAGA-aligned, I'd expect that this would just get less genuine attention, not more. To me it sounds

Thanks for the comment.

I think you probably should think of Silicon Valley as "the place" for politics. A bunch of Silicon Valley people just took over the Republican party, and even the leading Democrats these days are Californians (Kamala, Newsom, Pelosi) or tech-adjacent (Yglesias, Klein).

Also I am working on basically the same thing as Jan describes, though I think coalitional agency is a better name for it. (I even have a post on my opposition to bayesianism.)

9
Matrice Jacobine🔸🏳️‍⚧️
I think you're interpreting as ascendancy what is mostly just Silicon Valley realigning to the Republican Party (which is more of a return to the norm both historically and for US industrial lobbies in general). None of the Democrats you cite are exactly rising stars right now.

Good questions. I have been pretty impressed with:

  • Balaji, whose tweets about the dismantling of the American Empire (e.g. here, here) are the best geopolitical analysis I've seen of what's going wrong with the current Trump administration
  • NS Lyons, e.g. here.
  • Some of Samo Burja's concepts (e.g. live vs dead players) have proven much more useful than I expected when I heard about them a few years ago.

I think there are probably a bunch of other frameworks that have as much or more explanatory power as my two-factor model (e.g. Henrich's concept of WEIRDness, Sco... (read more)

Thanks for the thoughtful comment! Yeah the OpenAI board thing was the single biggest thing that shook me out of complacency and made me start doing sociopolitical thinking. (Elon's takeover of twitter was probably the second—it's crazy that you can get that much power for $44 billion.)

I do think I have a pretty clear story now of what happened there, and maybe will write about it explicitly going forward. But for now I've written about it implicitly here (and of course in the cooperative AI safety strategies post).

6
Cullen 🔸
I think this is pretty significantly understating the true cost. Or put differently, I don't think it's good to model this as an easily replicable type of transaction. I don't think that if, say, some more boring multibillionaire did the same thing, they could achieve anywhere close to the same effect. It seems like the Twitter deal mainly worked for him, as a political figure, because it leveraged existing idiosyncratic strengths that he had, like his existing reputation and social media following. But to get to the point where he had those traits, he needed to be crazy successful in other ways. So the true cost is not $44 billion, but more like: be the world's richest person, who is also charismatic in a bunch of different ways, have an extremely dedicated online base of support from consumers and investors, have a reputation for being a great tech visionary, and then spend $44B.

No central place for all the sources but the one you asked about is: https://www.sebjenseb.net/p/how-profitable-is-embryo-selection

9
gwern
FWIW, I am skeptical of the interpretation you put on that graph. I considered the same issue in my original analysis, but went with Strenze 2007's take that there doesn't seem to be a much steeper IQ/income slope, and kept returns constant. Seb objects that the datapoints have various problems and that's why he redoes an analysis with NLS to get his number, which is reasonable for his use, but he specifically disclaims projecting the slope out (which would make ES more profitable than it seems) like you do.

And it's not hard to see why the slope difference might be real but irrelevant: people in NLSY79 have different life cycles than in NLSY97, and you would expect a steeper slope (but the same final total lifetime income / IQ correlation) from simply lengthening years of education - you don't have much income as a college student! The longer you spend in college, the higher the slope has to be when you finally get out and can start your remunerative career. (This is similar to those charts which show that Millennials are tragically impoverished and doing far worse than their Boomer parents... as long as you cut off the chart at the right time and don't extend it to the present.) I'm also not aware of any dramatic surge in the college-degree wage premium, which is hard to reconcile with the idea that IQ points are becoming drastically more valuable.

So quite aside from any Inside View takes about how the current AI trends look like they're going to savage holders of mere IQ, your slope take is on shaky, easily confounded grounds, while the Outside View is that for a century or more, the returns to IQ have been remarkably constant and have not suddenly been increasing by many %.
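
To make that life-cycle confound concrete, here is a minimal simulation sketch (all numbers and functional forms are invented for illustration; this is not taken from gwern's or Seb's analyses). Both cohorts get identical lifetime returns to IQ by construction, but in the second cohort higher-IQ people spend longer in school, so a cross-sectional regression of annual income on IQ at age 30 shows a steeper slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
iq = rng.normal(100, 15, n)

# Identical lifetime returns to IQ in both cohorts (by construction):
# one IQ standard deviation adds $20k of lifetime earnings.
lifetime = 1_000_000 + 20_000 * (iq - 100) / 15

CAREER_END = 65

def annual_income_at(age, years_of_education):
    """Flat earnings profile: the lifetime total spread over post-school years."""
    start = 18 + years_of_education           # zero income while still in school
    per_year = lifetime / (CAREER_END - start)
    return np.where(age >= start, per_year, 0.0)

# Cohort A: everyone does 4 years of post-18 education.
# Cohort B: education length rises with IQ (the life-cycle shift).
edu_a = np.full(n, 4.0)
edu_b = np.clip(4.0 + 2.0 * (iq - 100) / 15, 0.0, 11.0)

for label, edu in [("cohort A", edu_a), ("cohort B", edu_b)]:
    slope = np.polyfit(iq, annual_income_at(30, edu), 1)[0]
    print(f"{label}: annual-income-vs-IQ slope at age 30 = ${slope:,.0f} per IQ point")
```

The second cohort's slope comes out several times steeper even though lifetime income per IQ point is identical in both, which is exactly the pattern that could make the later survey look like returns to IQ had surged relative to the earlier one.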

Ah, gotcha. Yepp, that's a fair point, and worth me being more careful about in the future.

I do think we differ a bit on how disagreeable we think advocacy should be, though. For example, I recently retweeted this criticism of Abundance, which is basically saying that they overly optimized for it to land with those who hear it.

And in general I think it's worth losing a bunch of listeners in order to convey things more deeply to the ones who remain (because if my own models of movement failure have been informed by environmentalism etc, it's hard to talk ar... (read more)

7
Ozzie Gooen
My quick take: I think that at a high level you make some good points. I also think it's probably a good thing for some people who care about AI safety to appear to the current right as ideologically aligned with them. At the same time, a lot of your framing matches incredibly well with what I see as current right-wing talking points.

"And in general I think it's worth losing a bunch of listeners in order to convey things more deeply to the ones who remain" -> This comes across as absurd to me. I'm all for some people holding uncomfortable or difficult positions. But when those positions sound exactly like the kind of thing that would gain favor with a certain party, I have a very tough time thinking that the author is simultaneously optimizing for "conveying things deeply".

Personally, I find a lot of the framing irrelevant, distracting, and problematic. As an example, if I were talking to a right-wing audience, I wouldn't focus on examples of racism in the South if equally good examples in other domains would do. I'd expect that such work would get in the way of good discussion on the mutual areas where myself and the audience would more easily agree.

Honestly, I have had a decent hypothesis that you are consciously doing all of this just in order to gain favor with some people on the right. I could see a lot of arguments people could make for this. But that hypothesis makes more sense for the Twitter stuff than here. Either way, it does make it difficult for me to know how to engage. On one hand, I am very uncomfortable with and highly disagree with a lot of MAGA thinking, including some of the frames you reference (which seem to fit that vibe), if you do honestly believe this stuff. Or on the other, you're actively lying about what you believe, on a critical topic, in an important set of public debates.

Anyway, this does feel like a pity of a situation. I think a lot of your work is quite good, and in theory, what read to me like the MAGA-aligned parts do

him being one of the few safetyists on the political right to not have capitulated to accelerationism-because-of-China (as most recently even Elon did).

Thanks for noticing this. I have a blog post coming out soon criticizing this exact capitulation.

every time he tries to talk about object-level politics it feels like going into the bizarro universe and I would flip the polarity of the signs of all of it

I am torn between writing more about politics to clarify, and writing less about politics to focus on other stuff. I think I will compromise by trying to wr... (read more)

I worry that your bounties are mostly just you paying people to say things you already believe about those topics

This is a fair complaint and roughly the reason I haven't put out the actual bounties yet—because I'm worried that they're a bit too skewed. I'm planning to think through this more carefully before I do; okay to DM you some questions?

I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already

It is not true that all people with these sorts of concerns only care pr

... (read more)

Thanks for the feedback!

FWIW a bunch of the polemical elements were deliberate. My sense is something like: "All of these points are kinda well-known, but somehow people don't... join the dots together? Like, they think of each of them as unfortunate accidents, when they actually demonstrate that the movement itself is deeply broken."

There's a kind of viewpoint flip from being like "yeah I keep hearing about individual cases that sure seem bad but probably they'll do better next time" to "oh man, this is systemic". And I don't really know how to induce the... (read more)

1
Davik
The systemic issue is capital accumulation. It neatly explains everything in your examples that's actually happening and not a hallucination.  You're right that EA needed this guidance on sociological and political issues... Ten years ago. This was a discussion that was had, and rejected, early on. Now everything EA did is meaningless as capital destroys the world economy and all institutions of collective action.
7
jackva
I think we are misunderstanding each other a bit. I am in no way trying to imply that you shouldn't be mad about environmentalism's failings -- in fact, I am mad about them on a daily basis. I think if being mad about environmentalism's failings is the main point, then what Ezra Klein and Derek Thompson are currently doing with Abundance is a good example of communicating many of your criticisms in a way optimized to land with those who need to hear it. My point was merely that by framing the example in such extreme terms it will lose a lot of people, despite being only very tangentially related to the main points you are trying to make. Maybe that's okay, but it didn't seem like your goal overall was to make a point about environmentalism, so losing people on an example that is stated in such an extreme fashion did not seem worth it to me.

yeah there was a tender offer, openai does them every year or two

This post convinced me to sell $200,000 more OpenAI shares than I would otherwise have, in order to have more money available to donate rapidly. Thanks!

2
Will Howard🔹
Out of interest, did there happen to be a tender offer running since this post came out or is there some other way you can sell the shares?

Thanks for sharing this, it does seem good to have transparency into this stuff.

My gut reaction was "huh, I'm surprised about how large a proportion of these people (maybe 30-50%, depending on how you count it) I don't recall substantially interacting with" (where by "interaction" I include reading their writings).

To be clear, I'm not trying to imply that it should be higher; that any particular mistakes are being made; or that these people should have interacted with me. It just felt surprising (given how long I've been floating around EA) and worth noting as a datapoint. (Though one reason to take this with a grain of salt is that I do forget names and faces pretty easily.)

Thanks! I think this note explains the gap:

Note that we have a few attendees at this year’s event who are specialists in one of our focus areas rather than leaders of an EA meta organization or team (though some attendees are both).

We were not trying to optimise the attendee list for connectedness or historical engagement with the community, but rather for who can contribute to making progress on our core themes: brand and funding. When you see what roles these attendees have, I think it's fairly evident why we invited them, given this lens.

I'll also note that... (read more)

My point is not that the current EA forum would censor topics that were actually important early EA conversations, because EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections, just as if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you're saying human lives don't matter very much!), the ineffectiveness of developmen... (read more)

Narrowing in even further on the example you gave, as an illustration: I just had an uncomfortable conversation about age of consent laws literally yesterday with an old friend of mine. Specifically, my friend was advocating that the most important driver of crime is poverty, and I was arguing that it's cultural acceptance of crime. I pointed to age of consent laws varying widely across different countries as evidence that there are some cultures which accept behavior that most westerners think of as deeply immoral (and indeed criminal).

Picturing some resp... (read more)

Do you have an example of the kind of early EA conversation which you think was really important in coming up with core EA tenets, and which might be frowned upon or censored on the forum now? I'm still super dubious about whether leaving out a small number of specific topics really leaves much value on the table.

And I really think conversations can be had in more sensitive ways. In the case of the original banned post, just as good a philosophical conversation could be had without explicitly talking about killing people. The conversation already was bein... (read more)

Ty for the reply; a jumble of responses below.

I think there are better places to have these often awkward, fraught conversations.

You are literally talking about the sort of conversations that created EA. If people don't have these conversations on the forum (the single best way to create common knowledge in the EA community), then it will be much harder to course-correct places where fundamental ideas are mistaken. I think your comment proceeds from the implicit assumption that we're broadly right about stuff, and mostly just need to keep our heads down a... (read more)

8
huw
Just to narrow in on a single point—I have found the 'EA fundamentally depends on uncomfortable conversations' point to be a bit unnuanced in the past. It seems like we could be more productive by delineating which kinds of discomfort we want to defend—for example, most people here don't want to have uncomfortable conversations about age of consent laws (thankfully), but do want to have them about factory farming. When I think about the founding myths of EA, most of them seem to revolve around the discomfort of applying utilitarianism in practice, or on how far we should expand our moral circles. I think EA would've broadly survived intact by lightly moderating other kinds of discomfort (or it may have even expanded). I'm not keen to take a stance on whether this post should or shouldn't be allowed on the forum, but I am curious to hear if and where you would draw this line :)

I appreciate the thought that went into this. I also think that using rate-limits as a tool, instead of bans, is in general a good idea. I continue to strongly disagree with the decisions on a few points:

  1. I still think including the "materials that may be easily perceived as such" clause has a chilling effect.
  2. I also remember someone's comment that the things you're calling "norms" are actually rules, and it's a little disingenuous to not call them that; I continue to agree with this.
  3. The fact that you're not even willing to quote the parts of the post that w
... (read more)
7
NickLaing
I've weak karma downvoted and disagreed with this, then hit the "insightful" button. Definitely made me think and learn. I agree that this is a really tricky question, and some of those philosophical conversations (including this one) are important and should happen, but I don't think this particular EA forum is the best place for them, for a few reasons.

1) I think there are better places to have these often awkward, fraught conversations. I think they are often better had in person, where you can connect, preface, soften and easily retract. I recently got into a mini online-tiff, when a wise onlooker noted: "Online discussions can turn that way with a few misinterpretations creating a doom loop that wouldn't happen with a handshake and a drink." Or alternatively perhaps in a more academic/narrow forum where people have similar discussion norms and understandings. This forum has a particularly wide range of users, from nerds to philosophers to practitioners to managers to donors, so there's a very wide range of norms and understandings.

2) There's potential reputational damage for all the people doing great EA work across the spectrum here. These kinds of discussions could lead to more hit-pieces and reduced funding. It would be a pity if the AI apocalypse hit us because of funding cuts due to these discussions. (OK, now I'm strawmanning a bit :D)

3) The forum might be an entry-point for some people into EA things. I don't think it's a good idea for these discussions to be the first thing someone looking into EA sees on the internet.

4) It might be a bit of a strawman to say our "discourse will forever be at the mercy of whoever is most hostile to you, or whoever is craziest." I think people hostile to EA don't like many things said here on the forum, but we aren't forever at the mercy of them and we keep talking. I think there are just a few particular topics which give people more ammunition for public take-downs, and there is wisdom in sometimes avo

On point 4:

I'm pretty sure we could come up with various individuals and groups of people that some users of this forum would prefer not to exist. There's no clear and unbiased way to decide which of those individuals and groups could be the target of "philosophical questions" about the desirability of murdering them and which could not. Unless we're going to allow the question as applied to any individual or group (which I think is untenable for numerous reasons), the line has to be drawn somewhere. Would it be ethical to get rid of this meddlesome priest... (read more)

This moderation policy seems absurd. The post in question was clearly asking purely hypothetical questions, and wasn't even advocating for any particular answer to the question. May as well ban users for asking whether it's moral to push a man off a bridge to stop a trolley, or ban Peter Singer for his thought experiments about infanticide.

Perhaps dstudiocode has misbehaved in other ways, but this announcement focuses on something that should be clearly within the bounds of acceptable discourse. (In particular, the standard of "content that could be interp... (read more)

4
JP Addison🔸
Another comment from me: I don’t like my mod message, and I apologize for it. I was rushed and used some templated language that I knew damn well at the time that I wasn’t excited about putting my name behind. I nevertheless did and bear the responsibility. That’s all from me for now. The mods who weren’t involved in the original decision will come in and reconsider the ban, pursuant to the appeal.
-5
JP Addison🔸

That is not the post in question. We removed the post that prompted the ban.

I accept that I should talk about "Trump and the Republican party". But conversely, when we talk about the Democratic party, we should also include the institutions over which it has disproportionate influence—including most mainstream media outlets, the FBI (which pushed for censorship of one of the biggest anti-Biden stories in the lead-up to the 2020 election—EDIT: I no longer endorse this phrasing, it seems like the FBI's conversations with tech companies were fairly vague on this matter), the teams responsible for censorship at most major tech compani... (read more)

4
Tobias Häberli
You probably know much more about U.S. politics than I do, so I can't engage deeply on whether these things are really happening or how unusual they might be. However, I suspect that much of what you're attributing to the Democratic party is actually due to a broader trend of U.S. elites becoming more left-leaning and Democrat-voting. Even if I agreed that this shift was bad for democracy, I'm not sure how voting for Trump would fix it in the long run. A Trump presidency would likely push elites even further toward left-leaning politics.

One more point: in Scott's blog post he talks about the "big lie" of Trump: that the election was stolen. I do worry that this is a key point of polarization, where either you fully believe that the election was stolen and the Democrats are evil, or you fully believe that Trump was trying to seize dictatorial power.

But reality is often much more complicated. My current best guess is that there wasn't any centrally-coordinated plan to steal the election, but that the central Democrat party:

  1. Systematically turned a blind eye to thousands of people who shouldn
... (read more)
7
LintzA
I think it's plausible that Dems turned a blind eye to some of this and that led to a few thousand extra votes here and there. US elections (and elections in general) always have issues like this, and AFAIK there's no reason to believe they played any larger or more important role in 2020 than in any other election. In fact, given the amount of highly-motivated scrutiny applied to the 2020 election, I suspect it was cleaner than most previous elections.

Even had Trump received any credible evidence of unusual tampering (you'd think he'd have laid it out by now if he had), his actions were beyond the pale. His own Attorney General refused to recognize any signs of fraud. He tried to cajole anyone he could into not certifying the results in any state or district he could, despite no real evidence of wrongdoing. His scheme to create alternate slates of electors was an out-and-out attempt at election fraud. There's no world in which that was intended to be representative of ground truth.

This article spells out a bunch of Trump's actions around the 2020 election. I'm curious what you think of it.

To be fair (kinda) to Trump, I think he really may have thought the election was stolen. He seems extremely capable of deluding himself about things like that. E.g. he just said that, if Jesus were counting the vote, he would win California easily.

My hot take is that having a president who is actively trying to delude himself and his followers into believing 2020 was stolen (and that 2024 will be stolen) is bad, and that it displays a weakness of character & epistemics that should be disqualifying. It should, e.g., make us question his ability to act reasonably in a crisis situation or when presented with a complicated new risk like AI.

Without expressing any views on which allegations against the two major sides are true, it's clear to me that relatively few people in the US are particularly interested in what we might call nonpartisan electoral truthseeking: making it easy, convenient, and secure for all those (and only those) legally eligible to vote, without unlawful foreign interference or illegal disinformation (like false robocalls about poll location).

I think this pales in comparison to Trump’s willingness to silence critics (e.g. via hush money and threats).

If you believe that Trump has done a bunch of things wrong, the Democrats have done very little wrong, and the people prosecuting Trump are just following normal process in doing so, then yes, these threats are worrying.

But if you believe that the charges against Trump were in fact trumped-up, e.g. because Democrats have done similarly bad things without being charged, then most of Trump's statements look reasonable. E.g. this testimony about Biden s... (read more)

-2
richard_ngo
One more point: in Scott's blog post he talks about the "big lie" of Trump: that the election was stolen. I do worry that this is a key point of polarization, where either you fully believe that the election was stolen and the Democrats are evil, or you fully believe that Trump was trying to seize dictatorial power.

But reality is often much more complicated. My current best guess is that there wasn't any centrally-coordinated plan to steal the election, but that the central Democrat party:

  1. Systematically turned a blind eye to thousands of people who shouldn't have been voting (like illegal immigrants) actually voting (in some cases because Democrat voter registration pushes deliberately didn't track this distinction).
  2. Blocked reasonable election integrity measures that would have prevented this (like voter ID), primarily in a cynical + self-interested way.

On priors I think this probably didn't swing the election, but given how small the winning margins were in swing states, it wouldn't be crazy if it did. From this perspective I think it reflects badly on Trump that he tried to do unconstitutional things to stay in power, but not nearly as badly as most Democrats think.

(Some intuitions informing this position: I think if there had been clear smoking guns of centrally-coordinated election fraud, then Trump would have won some of his legal challenges, and we'd have found out about it since then. But it does seem like a bunch of non-citizens are registered to vote in various states (e.g. here, here), and I don't think this is a coincidence, given that it's so beneficial for Dems + Dems have so consistently blocked voter ID laws. Conversely, I do also expect that red states are being overzealous in removing people from voter rolls for things like changing their address. Basically it all seems like a shitshow, and not one which looks great for Trump, but not disqualifying either IMO, especially because in general I expect to update away from the mainstream

(This comment focuses on object-level arguments about Trump vs Kamala; I left another comment focused on meta-level considerations.)

Three broad arguments for why it's plausibly better if Trump wins than if Kamala does:

  1. I basically see this election as a choice between a man who's willing to subvert democracy, and a party that is willing to subvert democracy—e.g. via massively biased media coverage, lawfare against opponents, and coordinated social media censorship (I've seen particularly egregious examples on Reddit, but I expect that Facebook and Instagram
... (read more)
9
Fermi–Dirac Distribution
Which things that Democrats have done are as bad as the following actions from the GOP ticket?

  • Calling for the Constitution to be suspended
  • Saying the President should ignore the Supreme Court
  • Saying that Pence shouldn't have certified the 2020 election results
  • Planning to use the military for domestic law enforcement
  • Calling for journalists to be jailed
  • Ending a 220-year tradition of peaceful transfers of power and spending considerable and relentless effort attempting to overturn an election

It's plausible to worry that if Trump wins in 2024, and then a Democrat wins the 2028 election, Vance will simply not certify the election results until states send illegitimate Republican electors, which Republican members of the House would then have the opportunity of choosing.[1] This isn't a conspiracy theory, it's what Vance said on TV that he would've done in 2020. So we could be in a situation in 4 years in which only one party is allowed to win major elections. I believe the technical term for this is "dictatorship."

I believe the examples of undemocratic activity by Democrats that you've listed in your comment pale in comparison to those actions and statements. But even if they don't, it's unclear why they're relevant to your argument, when Republicans have done approximately all of the things you listed. For example: Have you read Breitbart or watched One America News Network? Can you name one media company whose staff is largely right-wing which produces better and less biased content than the NYT? If not, why does "massively biased media coverage" count against Democrats but not Republicans?

Do you actually expect Trump to be better on that front? As the WP reported, "In public, Trump has vowed to appoint a special prosecutor to “go after” President Biden and his family. [...] In private, Trump has told advisers and friends in recent months that he wants the Justice Department to investigate onetime officials and allies who have become crit
8
Tobias Häberli
Regarding point 1: You're framing the situation as a choice between 'Trump, who is willing to subvert democracy' and 'the Democratic Party, who is willing to subvert democracy'. This framing implicitly acknowledges that Harris is not (especially) willing to subvert democracy. It's very plausible to believe that both the Democratic Party and the Republican Party are roughly equally willing to subvert democracy, especially given the significant influence Trump has on the Republican Party. It then becomes a choice between Trump and the Republican Party, who are both willing to subvert democracy, vs. the Democratic Party, who are willing to subvert democracy, and Harris, who is not. In this comparison, Harris's apparent commitment to democratic norms becomes the deciding factor in how you evaluate the overall democraticness of the choices.

[This is part 1, I will get to foreign policy and AI-specific questions hopefully soon] 

I don't think it's fair to put an attempt to overthrow an election on par with biased media coverage (seems like both sides do this about equally, maybe conservative media is worse?) or dumping on opposition candidates (not great but also typical of both parties for many decades AFAIK). Scott Aaronson lays out some general concerns well here.

Trump incited a violent coup/insurrection attempt to prevent the 2020 election from being certified as well as other extremel... (read more)

(This comment focuses on meta-level issues; I left another comment with object-level disagreements.)

The EA case for Trump was heavily downvoted, with commenters arguing that e.g. "a lot of your arguments are extremely one-sided in that they ignore very obvious counterarguments and fail to make the relevant comparisons on the same issue."

This post is effectively an EA case for Kamala, but less even-handed—e.g. because it:

  1. Is framed not just as a case for Kamala, but as a case for action (which, I think, requires a significantly higher bar than just believ
... (read more)

Generally feels like it's primarily talking to an audience who already agrees that Trump is bad, and just needs to be persuaded about how bad he is

This is true to some extent. I did not write this thinking it would be ‘the EA case for Kamala’ in response to Hammond’s piece. I also was wary about adding length to an already too-long piece so didn’t go into detail on various counterpoints to Kamala.

Is framed not just as a case for Kamala, but as a case for action (which, I think, requires a significantly higher bar than just believing that it'd be better

... (read more)
2
Gil
Setting aside the substantive issues about how accurate this post is vs. the other one, I'll admit I'm very uncertain on how much we should avoid talking about partisan politics in AI forums, and how much it politicizes the debate vs. clarifies the stakes in ways that help us act more strategically.
9
Jason
A few observations on that from someone who did not vote on the Trump post or this one:

  1. This seems to rely on an assumption that the commenters on the prior post had the same motivations as one might assign to the broader voter pool. It's certainly possible, but hardly certain.
  2. It's impossible to completely divorce oneself from object-level views when deciding whether a post has failed to address or acknowledge sufficiently important considerations in the opposite direction. Yet such a failure is (and I think, has to be) a valid reason to downvote. It's reasonable to me that a voter would find the missing issues in the Trump piece sufficiently important, and view the issues you identify for Harris as having much less significance for a number of reasons.
  3. Partisan political posts are disfavored for various reasons, including some that your comment mentions. I think it's fine for voters to maintain higher voting standards for such posts. Moreover, it feels easier for those posts to be net-negative because they are closer to zero-sum in nature; John's candidate winning the election means Jane's candidate losing. "It would be better for this post not to be on the Forum" is a plausible reason to downvote. Those factors make downvoting for strong disagreement more plausible than on non-political posts. This is especially true insofar as the voter thinks the resulting discussion will sound like ten thousand other political debates and contribute little if at all to finding truth.
  4. Finally, there are good reasons for people to be less willing to leave object-level comments on posts like this one or the Trump one. First, arguing about politics is exhausting and usually unfruitful. Second, it risks derailing the Forum into a discussion of topics rather removed from effective altruism (e.g., were the various criminal charges against Trump and lawsuits against Musk legit? how biased is the mainstream US media?)

Anyone know what post Dustin was referring to? EDIT: as per a DM, probably this one.

I recently had a very interesting conversation about master morality and slave morality, inspired by the recent AstralCodexTen posts.

The position I eventually landed on was:

  1. Empirically, it seems like the world is not improved the most by people whose primary motivation is helping others, but rather by people whose primary motivation is achieving something amazing. If this is true, that's a strong argument against slave morality.
  2. The defensibility of morality as the pursuit of greatness depends on how sophisticated our cultural conceptions of greatness are.
... (read more)
5
Noah Birnbaum
Very interesting points. Here are a few other things to think about:

  1. I think there are very few people whose primary motivation is helping others, so we shouldn't empirically expect them to be doing the most good, because they represent a very small portion of the population. This is especially true if you think (which I do) that the vast majority of people who do good are 1) (consciously or unconsciously) signaling for social status or 2) not doing good very effectively (the people who are are a much smaller subgroup, because doing non-effective good is easy). It would be very surprising, however, if those who try to do good effectively aren't doing much better than those who aren't, as individuals, on average (though feel free to throw some stats that will change my mind!).
  2. I'm very skeptical that "the defensibility of morality as the pursuit of greatness depends on how sophisticated our cultural conceptions of greatness are." Could you give more reason for why you think this?
  3. I'm skeptical that 1) searching for equanimity is truly the best thing and 2) that we have good and tractable methods of achieving it. Perhaps people would be better off being more Buddhist on the margin, but, to me, it seems like (thoughtfully!) getting the heavy positive tail-end results while being really careful and thoughtful about negatives leads to a much better-off society.

Let me know what you think!

Empirically, it seems like the world is not improved the most by people whose primary motivation is helping others, but rather by people whose primary motivation is achieving something amazing. If this is true, that's a strong argument against slave morality.

This seems very wrong to me on a historical basis. When I think of the individuals who have done the most good for the world, I think of people who made medical advances like the smallpox vaccine, scientists who discovered new technologies like electricity, and social movements like abolitionism tha... (read more)

My take is that most of the points raised here are second-order points, and actually the biggest issue in this election is how democratic the future of America will be. But having said that, it's not clear which side is overall better on this front:

  1. The strongest case for Trump is that the Democrat establishment is systematically deceiving the American people (e.g. via the years-long cover-up of Biden's mental state, strong partisan bias in mainstream media, and extensive censorship campaigns), engaging in lawfare against political opponents (e.g. against E
... (read more)
3
Linch
You don't have to just trust media portrayals or ex-advisor testimonies to know how authoritarian Trump is; much of this stuff is publicly available online and you can just look at primary sources. For example, the full Trump call to the Georgia Secretary of State is uploaded to Youtube (I linked to the part where he started yelling, but honestly the entire call, which you can listen to, is imo pretty damning). Trump afterwards tweeted that Raffensperger was unwilling or unable to answer his questions, which is interesting because Raffensperger patiently responded to every allegation over a call lasting a whole hour.
3
Charlie_Guthmann
The bureaucracy point seems potentially reasonable to me, although it's hard to say if that exactly equates to less/more democratic or just a worse domestic situation. The cover-up of Biden's mental state is "highly undemocratic"? That would not be in the top 1000 least democratic things Trump/Republicans have done in the last 8 years.

I remain in favor of people doing work on evals, and in favor of funding talented people to work on evals. The main intervention I'd like to make here is to inform how those people work on evals, so that it's more productive. I think that should happen not on the level of grants but on the level of how they choose to conduct the research.

This seems like the wrong meta-level orientation to me. A meta-level orientation that seems better to me is something like "Truth and transparency have strong global benefits, but often don't happen enough because they're locally aversive. So assume that sharing information is useful even when you're not concretely sure how it'll help, and assume by default that power structures (including boards, social networks, etc) are creating negative externalities insofar as they erect barriers to you sharing information".

The specific tradeoff between causing drama ... (read more)

7
Owen Cotton-Barratt
Thanks, this felt clarifying (and an important general point). I think I'm now at "Well I'd maybe rather share my information with an investigator who would take responsibility for working out what's worth sharing publicly and what's extraneous detail; but absent that, speaking seems preferable to not-speaking. So I'll wait a little to see whether the momentum in this thread turns into anything, but if it's looking like not I'll probably just share something."

I disagree FWIW. I think that the political activation of Silicon Valley is the sort of thing which could reshape American politics, and that twitter is a leading indicator.

4
Ryan Greenblatt
I don't disagree with this statement, but also think the original comment is reading into twitter way too much.

you can infer that people who don't take AI risk seriously are somewhat likely to lack important forms of competence

This seems true, but I'd also say that the people who do take AI risk seriously also typically lack different important forms of competence. I don't think this is coincidental; instead I'd say that there's (usually) a tradeoff between "good at taking very abstract ideas seriously" and "good at operating in complex fast-moving environments". The former typically requires a sort of thinking-first orientation to the world, the latter an action-f... (read more)

If Hassan had said that more recently or I was convinced he still thought that, then I would agree he should not be invited to Manifest.

My claim is that the Manifest organizers should have the right to invite him even if he'd said that more recently. But appreciate you giving your perspective, since I did ask for that (just clarifying the "agree" part).

Having said that, given that there is a very clear non-genocidal reading, I do not think it is a clear example of hate speech in quite the same sense as Hanania's animals remark

I have some object-level views... (read more)

Why doesn't this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)?

Of course this is all a spectrum, but I don't believe this implication, in part because I expect that impact is often heavy-tailed. You do something really well first and foremost by finding the people who are naturally inclined towards being some of the best in the world at it. If a community that was really good at power struggles tried to get much better at truth-seeki... (read more)
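
As a toy illustration of what "heavy-tailed" buys you here (a sketch with arbitrary distributions and parameters, purely to build intuition, not a model of actual impact): compare how much of the total comes from the top 1% of people under a heavy-tailed versus a thin-tailed distribution of individual impact.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumption: individual impact is lognormally distributed (heavy-tailed).
# The thin-tailed baseline is a normal distribution with the same mean.
heavy = rng.lognormal(mean=0.0, sigma=2.0, size=n)
thin = np.clip(rng.normal(loc=heavy.mean(), scale=1.0, size=n), 0.0, None)

for name, impact in [("heavy-tailed (lognormal)", heavy), ("thin-tailed (normal)", thin)]:
    top_1pct_share = np.sort(impact)[-n // 100:].sum() / impact.sum()
    print(f"{name}: top 1% of people account for {top_1pct_share:.0%} of the total")
```

With these made-up parameters, the top 1% account for roughly a third of the total under the lognormal, versus barely over 1% under the thin-tailed baseline, which is the sense in which finding the few best-suited people dominates marginally upskilling everyone else.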

I broadly endorse Jeff's comment above. To put it another way, though: I think many (but not all) of the arguments from the Kolmogorov complicity essay apply whether the statements which are taboo to question are true or false. As per the quote at the top of the essay:

"A good scientist, in other words, does not merely ignore conventional wisdom, but makes a special effort to break it. Scientists go looking for trouble."

That is: good scientists will try to break a wide range of conventional wisdom. When the conventional wisdom is true, then they will fail. ... (read more)

The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.

I agree that extreme truth-seeking can be counterproductive. But in most worlds I don't think that EA's impact comes from arguing for highly controversial ideas; and I'm not advocating for extreme truth-seeking like, say, hosting public debates on the most c... (read more)

The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.

Thanks for the clarification. Why doesn't this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)? I feel like maybe you're adopti... (read more)

One person I was thinking about when I wrote the post was Mehdi Hasan. According to Wikipedia:

During a sermon delivered in 2009, quoting a verse of the Quran, Hasan used the terms "cattle" and "people of no intelligence" to describe non-believers. In another sermon, he used the term "animals" to describe non-Muslims.

Mehdi has spoken several times at the Oxford Union and also in a recent public debate on antisemitism, so clearly he's not beyond the pale for many.

I personally also think that the "from the river to the sea" chant is pretty analogous to, say,... (read more)

2
David Mathers🔸
Hassan: Those comments were indeed egregious, but they were not about Jews specifically. Indeed, much more recently (although still a while ago) Hassan has harshly criticised antisemitism in the British Muslim community. I can't link on my phone, but google "the sorry truth is that the virus of antisemitism has infected the British Muslim community". I grant that this was comparably egregious to what Hanania said (I do think it is slightly less bad to attack literally everyone outside your small community than to target a vulnerable minority, but I wouldn't rest much on that.*) If Hassan had said that more recently or I was convinced he still thought that, then I would agree he should not be invited to Manifest. But it's not actually an example of prejudice against Jews specifically, except to the extent that Jews are also not Muslim.

Tlaib: Well, I wouldn't use that phrase, and I'm inclined to say using it is antisemitic, yes, because at the very least it creates an ambiguity about whether you mean it in the genocidal way. Having said that, given that there is a very clear non-genocidal reading, I do not think it is a clear example of hate speech in quite the same sense as Hanania's animals remark. I'd also say that my strength of feeling against Hanania is influenced by the fact that he was an out-and-out white nationalist for years, and that he remains hostile to the civil rights act that ended Jim Crow and democratised the South. If you can show me that Tlaib is or was a Hamas supporter, then yes, I'd say her saying "from the river to the sea" is at least as bad as Hanania's animals comment. (Worse inherently, since that would make it a call for violence and genocide/ethnic cleansing. But I do think Palestinians are subject to forces that make resisting bigotry harder vis-a-vis Israelis, than it is for white Americans to resist white nationalism.)

*For example, I think the NYT should have fired Sarah Jeong even though her racist comments were "only" about whites,

I wasn't at Manifest, though I was at LessOnline beforehand. I strongly oppose attempts to police the attendee lists that conference organizers decide on. I think this type of policing makes it much harder to have a truth-seeking community. I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparat... (read more)

I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparative advantage will need to be truth-seeking.

I'm actually not sure about this logic. Can you expand on why EA having insufficient skill to "navigate power dynamics around AI" implies "our comparative advantage will need to be truth-seeking... (read more)

5
David Mathers🔸
To put Garrison's comment a bit more bluntly, I challenge you to name one left-winger who might feasibly be invited to speak at Manifest and has said anything about Jews as a group comparable to Hanania saying "these people are animals" about Black people*. That's not a dogwhistle, or a remark that reflects stereotypes, or applies double standards, or cheering for one side in a violent conflict because you think they're the aggressors; it's just explicit, open racism.

*(The claim made below that Hanania really meant woke people, not Black people, strains credulity. He was talking about not just the lawyer who prosecuted a man for violence towards a black man who was harassing people on the subway, but also the harasser himself. The full quote was "these people are animals, whether harassing people in the subway or walking around in suits". There is no reason to think the harasser was woke or shared any other characteristic with the lawyer except being Black. And your prior on "man who was a neo-Nazi for years, and never apologised till he got caught, actually meant the racist reading when he said something that sounded racist" should be high.)

What prominent left-wing thinkers have exhibited antisemitism recently?

3
Arthropod
Could you elaborate on why you’re so quick to associate racism with truthseekingness? You’re at least the third person to do so in this discussion and I think this demands an explanation. What’s the relationship between the two? Have you investigated racist assertions and concluded they are truthful? You could say that lack of censorship, even of false ideas, is important for truth seeking in a community. But I don’t think you’d agree with a policy to allow everyone to say what they think is true without social consequences. Suppose a community of people are fixated on the intelligence of your children specifically, and they think that your children are genetically dumb. They post about this often on Twitter/X, and endorse eugenic policies to prevent future people from being like your children in particular. How would you feel about one of those people being a top billed guest to a conference? Would you approve of it because it demonstrates a strong commitment to truthseekingness?

Eh, I personally think of some things in the top 10 as "nowhere near" the most important issues, because of how heavy-tailed cause prioritization tends to be.

4
Habryka [Deactivated]
Yeah, I was thinking about that as well. Seems plausible for something to be top 5-10 and also "nowhere near".

When you're weighing existential risks (or other things which steer human civilization on a large scale) against each other, effects are always going to be denominated in a very large number of lives. And this is what OP said they were doing: "a major consideration here is the use of AI to mitigate other x-risks". So I don't think the headline numbers are very useful here (especially because we could make them far far higher by counting future lives).

2
Vasco Grilo🔸
Thanks for the comment, Richard. I used to prefer focussing on tail risk, but I now think expected deaths are a better metric.

It follows from alignment/control/misuse/coordination not being (close to) solved.

"AGIs will be helping us on a lot of tasks", "collusion is hard" and "people will get more scared over time" aren't anywhere close to overcoming it imo.

These are what I mean by the vague intuitions.

I think it should be possible to formalise it, even.

Nobody has come anywhere near doing this satisfactorily. The most obvious explanation is that they can't.

The issue is that both sides of the debate lack gears-level arguments. The ones you give in this post (like "all the doom flows through the tiniest crack in our defence") are more like vague intuitions; equally, on the other side, there are vague intuitions like "AGIs will be helping us on a lot of tasks" and "collusion is hard" and "people will get more scared over time" and so on. 

2
Greg_Colbourn ⏸️
I'd say it's more than a vague intuition. It follows from alignment/control/misuse/coordination not being (close to) solved and ASI being much more powerful than humanity. I think it should be possible to formalise it, even. "AGIs will be helping us on a lot of tasks", "collusion is hard" and "people will get more scared over time" aren't anywhere close to overcoming it imo.

Last time there was an explicitly hostile media campaign against EA, the reaction was not to do anything, and the result is that Émile P. Torres has a large media presence,[1] launched the term TESCREAL to some success, and EA-critical thoughts became a lot more public and harsh in certain left-ish academic circles.

You say this as if there were ways to respond which would have prevented this. I'm not sure these exist, and in general I think "ignore it" is a really really solid heuristic in an era where conflict drives clicks.

6
AnonymousTurtle
.

I think responding in a way that is calm, boring, and factual will help. It's not going to get Émile to publicly recant anything. The goal is just for people who find Émile's stuff to see that there's another side to the story. They aren't going to publicly say "yo Émile I think there might be another side to the story". But fewer of them will signal boost their writings on the theory that "EAs have nothing to say in their own defense, therefore they are guilty". Also, I think people often interpret silence as a contemptuous response, and that can be enraging in itself.

7
JWS 🔸
I agree it's a solid heuristic, but heuristics aren't foolproof and it's important to be able to realise where they're not working. I remembered your tweet about choosing intellectual opponents wisely, because I think it'd be useful to show where we disagree on this: 1. Choosing opponents is sometimes not up to you. As an analogy, it only takes one party throwing punches for there to be a physical fight. When debates start to have significant consequences socially and politically, it's worth considering that letting hostile ideas spread unchallenged may work out badly in the future. 2. I'm not sure it's clear that "the silent majority can often already see their mistakes" in this case. I don't think this is a minor view on EA. I think a lot of people are sympathetic to Torres' point of view, and a significant part of that is (in my opinion) because there wasn't a lot of pushback when they started making these claims in major outlets. On my first comment, I agree that I don't think much could have been done to stop Émile turning against EA,[1] but I absolutely don't think it was inevitable that they would have had such a wide impact. They made the Bulletin of the Atomic Scientists! They're partnered with Timnit, who has large influence and sympathy in the AI space! People who could have been potential allies in a coalition basically think our movement is evil.[2] They get sympathetically cited in academic criticisms of EA.  Was some pushback going to happen? Yes, but I don't think inevitably at this scale. I do think more could have been done to actually push back on their claims that went over the line in terms of hostility and accuracy, and I think that could have led to a better climate at this critical juncture for AI discussions and policy, where we need to build coalitions with communities who don't fully agree with us. My concern is that this new wave of criticism and attack on OpenPhil might not simply fade away but could instead cement an anti-EA narrative that could... (read more)

@Linch, see the article I linked above, which identifies a bunch of specific bottlenecks where lobbying and/or targeted funding could have been really useful. I didn't know about these when I wrote my comment above, but I claim prediction points for having a high-level heuristic that led to the right conclusion anyway.

4
Linch
Do you want to discuss this in a higher-bandwidth channel at some point? E.g. next time we're at an EA social or something, have an organized chat with a moderator and access to a shared monitor? I feel like we're not engaging with each other's arguments as much in this setting, but we can maybe clarify things better in a higher-bandwidth setting. (No worries if you don't want to do it; it's not like global health is either of our day jobs.)

The article I linked above has changed my mind back again. Apparently the RTS,S vaccine has been in clinical trials since 1997. So the failure here wasn't just an abstract lack of belief in technology: the technology literally already existed the whole time that the EA movement (or anyone who's been in this space for less than two decades) has been thinking about it.

An article on why we didn't get a vaccine sooner: https://worksinprogress.co/issue/why-we-didnt-get-a-malaria-vaccine-sooner

This seems like significant evidence for the tractability of speeding things up. E.g. a single (unjustified) decision by the WHO in 2015 delayed the vaccine by almost a decade, four years of which were spent in fundraising. It seems very plausible that even 2015 EA could have sped things up by multiple years in expectation, either by lobbying against the original decision or by funding the follow-up trial.

Great comment, thank you :) This changed my mind.


This is a good point. The two other examples which seem salient to me:

  1. Deutsch's brand of techno-optimism (which comes through particularly clearly when he tries to reason about the future of AI by saying things like "AIs will be people, therefore...").
  2. Yudkowsky on misalignment.

Ah, I see. I think the two arguments I'd give here are:

  1. Founding 1DaySooner for malaria 5-10 years earlier is high-EV and plausibly very cheap, and there are probably another half-dozen things in this reference class.
  2. We'd need to know much more about the specific interventions in that reference class to confidently judge that we made a mistake. But IMO if everyone in 2015-EA had explicitly agreed "vaccines will plausibly dramatically slash malaria rates within 10 years" then I do think we'd have done much more work to evaluate that reference class. Not having done that work can be an ex-ante mistake even if it turns out it wasn't an ex-post mistake.

Hmm, your comment doesn't really resonate with me. I don't think it's really about being monomaniacal. I think the (in hindsight) correct thought process here would be something like:

"Over the next 20 or 50 years, it's very likely that the biggest lever in the space of malaria will be some kind of technological breakthrough. Therefore we should prioritize investigating the hypothesis that there's some way of speeding up this biggest lever."

I don't think you need this "move heaven and earth" philosophy to do that reasoning; I don't think you need to focus o... (read more)

2
Linch
Yeah, so basically I contest that this alone will actually have higher EV in the malaria case; apologies if my comment wasn't clear enough.