My story is: Elon changing Twitter's censorship policies was a big driver of a chunk of Silicon Valley getting behind Trump—separate from Elon himself promoting Trump, and separate from Elon becoming a part of the Trump team.
And I think anyone who bought Twitter could have done that.
If anything, being Elon probably made it harder, because he then had to face advertiser boycotts.
Agree/disagree?
To be clear, my example wasn't "I'm trying to talk to people in the south about racism." It's more like: "I'm trying to talk to people in the south about animal welfare, and in doing so, I bring up examples of Southern people being racist."
Yeah I got that. Let me flesh out an analogy a little more:
Suppose you want to pitch people in the south about animal welfare. And you have a hypothesis for why people in the south don't care much about animal welfare, which is that they tend to have smaller circles of moral concern than people in the north. Here are two...
a lot of your framing matches incredibly well with what I see as current right-wing talking points
Occam's razor says that this is because I'm right-wing (in the MAGA sense not just the libertarian sense).
It seems like you're downweighting this hypothesis primarily because you personally have so much trouble with MAGA thinkers, to the point where you struggle to understand why I'd sincerely hold this position. Would you say that's a fair summary? If so hopefully some forthcoming writings of mine will help bridge this gap.
It seems like the other reason you'r...
Thanks for the comment.
I think you probably should think of Silicon Valley as "the place" for politics. A bunch of Silicon Valley people just took over the Republican party, and even the leading Democrats these days are Californians (Kamala, Newsom, Pelosi) or tech-adjacent (Yglesias, Klein).
Also I am working on basically the same thing as Jan describes, though I think coalitional agency is a better name for it. (I even have a post on my opposition to bayesianism.)
Good questions. I have been pretty impressed with:
I think there are probably a bunch of other frameworks that have as much or more explanatory power as my two-factor model (e.g. Henrich's concept of WEIRDness, Sco...
Thanks for the thoughtful comment! Yeah the OpenAI board thing was the single biggest thing that shook me out of complacency and made me start doing sociopolitical thinking. (Elon's takeover of Twitter was probably the second—it's crazy that you can get that much power for $44 billion.)
I do think I have a pretty clear story now of what happened there, and maybe will write about it explicitly going forward. But for now I've written about it implicitly here (and of course in the cooperative AI safety strategies post).
Ah, gotcha. Yepp, that's a fair point, and worth me being more careful about in the future.
I do think we differ a bit on how disagreeable we think advocacy should be, though. For example, I recently retweeted this criticism of Abundance, which is basically saying that they overly optimized for it to land with those who hear it.
And in general I think it's worth losing a bunch of listeners in order to convey things more deeply to the ones who remain (because if my own models of movement failure have been informed by environmentalism etc, it's hard to talk ar...
him being one of the few safetyists on the political right to not have capitulated to accelerationism-because-of-China (as most recently even Elon did).
Thanks for noticing this. I have a blog post coming out soon criticizing this exact capitulation.
every time he tries to talk about object-level politics it feels like going into the bizarro universe and I would flip the polarity of the signs of all of it
I am torn between writing more about politics to clarify, and writing less about politics to focus on other stuff. I think I will compromise by trying to wr...
I worry that your bounties are mostly just you paying people to say things you already believe about those topics
This is a fair complaint and roughly the reason I haven't put out the actual bounties yet—because I'm worried that they're a bit too skewed. I'm planning to think through this more carefully before I do; okay to DM you some questions?
...I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already
It is not true that all people with these sorts of concerns only care pr
Thanks for the feedback!
FWIW a bunch of the polemical elements were deliberate. My sense is something like: "All of these points are kinda well-known, but somehow people don't... join the dots together? Like, they think of each of them as unfortunate accidents, when they actually demonstrate that the movement itself is deeply broken."
There's a kind of viewpoint flip from being like "yeah I keep hearing about individual cases that sure seem bad but probably they'll do better next time" to "oh man, this is systemic". And I don't really know how to induce the...
Thanks for sharing this, it does seem good to have transparency into this stuff.
My gut reaction was "huh, I'm surprised about how large a proportion of these people (maybe 30-50%, depending on how you count it) I don't recall substantially interacting with" (where by "interaction" I include reading their writings).
To be clear, I'm not trying to imply that it should be higher; that any particular mistakes are being made; or that these people should have interacted with me. It just felt surprising (given how long I've been floating around EA) and worth noting as a datapoint. (Though one reason to take this with a grain of salt is that I do forget names and faces pretty easily.)
Thanks! I think this note explains the gap:
Note that we have a few attendees at this year’s event who are specialists in one of our focus areas rather than leaders of an EA meta organization or team (though some attendees are both).
We were not trying to optimise the attendee list for connectedness or historical engagement with the community, but rather for who can contribute to making progress on our core themes: brand and funding. When you see what roles these attendees have, I think it's fairly evident why we invited them, given this lens.
I'll also note that...
My point is not that the current EA forum would censor topics that were actually important early EA conversations, because EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections, just as if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you're saying human lives don't matter very much!), the ineffectiveness of developmen...
Narrowing in even further on the example you gave, as an illustration: I just had an uncomfortable conversation about age of consent laws literally yesterday with an old friend of mine. Specifically, my friend was advocating that the most important driver of crime is poverty, and I was arguing that it's cultural acceptance of crime. I pointed to age of consent laws varying widely across different countries as evidence that there are some cultures which accept behavior that most westerners think of as deeply immoral (and indeed criminal).
Picturing some resp...
Do you have an example of the kind of early EA conversation that you think was really important in arriving at core EA tenets, but that might be frowned upon or censored on the forum now? I'm still super dubious about whether leaving out a small number of specific topics really leaves much value on the table.
And I really think conversations can be had in more sensitive ways. In the case of the original banned post, just as good a philosophical conversation could be had without explicitly talking about killing people. The conversation already was bein...
Ty for the reply; a jumble of responses below.
I think there are better places to have these often awkward, fraught conversations.
You are literally talking about the sort of conversations that created EA. If people don't have these conversations on the forum (the single best way to create common knowledge in the EA community), then it will be much harder to course-correct places where fundamental ideas are mistaken. I think your comment proceeds from the implicit assumption that we're broadly right about stuff, and mostly just need to keep our heads down a...
I appreciate the thought that went into this. I also think that using rate-limits as a tool, instead of bans, is in general a good idea. I continue to strongly disagree with the decisions on a few points:
On point 4:
I'm pretty sure we could come up with various individuals and groups of people that some users of this forum would prefer not to exist. There's no clear and unbiased way to decide which of those individuals and groups could be the target of "philosophical questions" about the desirability of murdering them and which could not. Unless we're going to allow the question as applied to any individual or group (which I think is untenable for numerous reasons), the line has to be drawn somewhere. Would it be ethical to get rid of this meddlesome priest...
This moderation policy seems absurd. The post in question was clearly asking purely hypothetical questions, and wasn't even advocating for any particular answer to the question. May as well ban users for asking whether it's moral to push a man off a bridge to stop a trolley, or ban Peter Singer for his thought experiments about infanticide.
Perhaps dstudiocode has misbehaved in other ways, but this announcement focuses on something that should be clearly within the bounds of acceptable discourse. (In particular, the standard of "content that could be interp...
I accept that I should talk about "Trump and the Republican party". But conversely, when we talk about the Democratic party, we should also include the institutions over which it has disproportionate influence—including most mainstream media outlets, the FBI (which pushed for censorship of one of the biggest anti-Biden stories in the lead-up to the 2020 election—EDIT: I no longer endorse this phrasing, it seems like the FBI's conversations with tech companies were fairly vague on this matter), the teams responsible for censorship at most major tech compani...
One more point: in Scott's blog post he talks about the "big lie" of Trump: that the election was stolen. I do worry that this is a key point of polarization, where either you fully believe that the election was stolen and the Democrats are evil, or you fully believe that Trump was trying to seize dictatorial power.
But reality is often much more complicated. My current best guess is that there wasn't any centrally-coordinated plan to steal the election, but that the central Democratic party:
Without expressing any views on which allegations against the two major sides are true, it's clear to me that relatively few people in the US are particularly interested in what we might call nonpartisan electoral truthseeking: making it easy, convenient, and secure for all those (and only those) legally eligible to vote, without unlawful foreign interference or illegal disinformation (like false robocalls about poll location).
I think this pales in comparison to Trump’s willingness to silence critics (e.g. via hush money and threats).
If you believe that Trump has done a bunch of things wrong, the Democrats have done very little wrong, and the people prosecuting Trump are just following normal process in doing so, then yes these threats are worrying.
But if you believe that the charges against Trump were in fact trumped-up, e.g. because Democrats have done similarly bad things without being charged, then most of Trump's statements look reasonable. E.g. this testimony about Biden s...
(This comment focuses on object-level arguments about Trump vs Kamala; I left another comment focused on meta-level considerations.)
Three broad arguments for why it's plausibly better if Trump wins than if Kamala does:
[This is part 1, I will get to foreign policy and AI-specific questions hopefully soon]
I don't think it's fair to put an attempt to overthrow an election on par with biased media coverage (seems like both sides do this about equally, maybe conservative media is worse?) or dumping on opposition candidates (not great but also typical of both parties for many decades AFAIK). Scott Aaronson lays out some general concerns well here.
Trump incited a violent coup/insurrection attempt to prevent the 2020 election from being certified as well as other extremel...
(This comment focuses on meta-level issues; I left another comment with object-level disagreements.)
The EA case for Trump was heavily downvoted, with commenters arguing that e.g. "a lot of your arguments are extremely one-sided in that they ignore very obvious counterarguments and fail to make the relevant comparisons on the same issue."
This post is effectively an EA case for Kamala, but less even-handed—e.g. because it:
Generally feels like it's primarily talking to an audience who already agrees that Trump is bad, and just needs to be persuaded about how bad he is
This is true to some extent. I did not write this thinking it would be ‘the EA case for Kamala’ in response to Hammond’s piece. I also was wary about adding length to an already too-long piece so didn’t go into detail on various counterpoints to Kamala.
...Is framed not just as a case for Kamala, but as a case for action (which, I think, requires a significantly higher bar than just believing that it'd be better
I recently had a very interesting conversation about master morality and slave morality, inspired by the recent AstralCodexTen posts.
The position I eventually landed on was:
Empirically, it seems like the world is not improved the most by people whose primary motivation is helping others, but rather by people whose primary motivation is achieving something amazing. If this is true, that's a strong argument against slave morality.
This seems very wrong to me on a historical basis. When I think of the individuals who have done the most good for the world, I think of people who made medical advances like the smallpox vaccine, scientists who discovered new technologies like electricity, and social movements like abolitionism tha...
My take is that most of the points raised here are second-order points, and actually the biggest issue in this election is how democratic the future of America will be. But having said that, it's not clear which side is overall better on this front:
I remain in favor of people doing work on evals, and in favor of funding talented people to work on evals. The main intervention I'd like to make here is to inform how those people work on evals, so that it's more productive. I think that should happen not on the level of grants but on the level of how they choose to conduct the research.
This seems like the wrong meta-level orientation to me. A meta-level orientation that seems better to me is something like "Truth and transparency have strong global benefits, but often don't happen enough because they're locally aversive. So assume that sharing information is useful even when you're not concretely sure how it'll help, and assume by default that power structures (including boards, social networks, etc) are creating negative externalities insofar as they erect barriers to you sharing information".
The specific tradeoff between causing drama ...
you can infer that people who don't take AI risk seriously are somewhat likely to lack important forms of competence
This seems true, but I'd also say that the people who do take AI risk seriously also typically lack different important forms of competence. I don't think this is coincidental; instead I'd say that there's (usually) a tradeoff between "good at taking very abstract ideas seriously" and "good at operating in complex fast-moving environments". The former typically requires a sort of thinking-first orientation to the world, the latter an action-f...
If Hasan had said that more recently or I was convinced he still thought that, then I would agree he should not be invited to Manifest.
My claim is that the Manifest organizers should have the right to invite him even if he'd said that more recently. But appreciate you giving your perspective, since I did ask for that (just clarifying the "agree" part).
Having said that, given that there is a very clear non-genocidal reading, I do not think it is a clear example of hate speech in quite the same sense as Hanania's animals remark
I have some object-level views...
Why doesn't this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)?
Of course this is all a spectrum, but I don't believe this implication, in part because I expect that impact is often heavy-tailed. You do something really well first and foremost by finding the people who are naturally inclined towards being some of the best in the world at it. If a community that was really good at power struggles tried to get much better at truth-seeki...
I broadly endorse Jeff's comment above. To put it another way, though: I think many (but not all) of the arguments from the Kolmogorov complicity essay apply whether the statements which are taboo to question are true or false. As per the quote at the top of the essay:
"A good scientist, in other words, does not merely ignore conventional wisdom, but makes a special effort to break it. Scientists go looking for trouble."
That is: good scientists will try to break a wide range of conventional wisdom. When the conventional wisdom is true, then they will fail. ...
The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.
I agree that extreme truth-seeking can be counterproductive. But in most worlds I don't think that EA's impact comes from arguing for highly controversial ideas; and I'm not advocating for extreme truth-seeking like, say, hosting public debates on the most c...
The main alternative to truth-seeking is influence-seeking. EA has had some success at influence-seeking, but as AI becomes the locus of increasingly intense power struggles, retaining that influence will become more difficult, and it will tend to accrue to those who are most skilled at power struggles.
Thanks for the clarification. Why doesn't this imply that EA should get better at power struggles (e.g. by putting more resources into learning/practicing/analyzing corporate politics, PR, lobbying, protests, and the like)? I feel like maybe you're adopti...
One person I was thinking about when I wrote the post was Mehdi Hasan. According to Wikipedia:
During a sermon delivered in 2009, quoting a verse of the Quran, Hasan used the terms "cattle" and "people of no intelligence" to describe non-believers. In another sermon, he used the term "animals" to describe non-Muslims.
Mehdi has spoken several times at the Oxford Union and also in a recent public debate on antisemitism, so clearly he's not beyond the pale for many.
I personally also think that the "from the river to the sea" chant is pretty analogous to, say,...
I wasn't at Manifest, though I was at LessOnline beforehand. I strongly oppose attempts to police the attendee lists that conference organizers decide on. I think this type of policing makes it much harder to have a truth-seeking community. I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparat...
I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparative advantage will need to be truth-seeking.
I'm actually not sure about this logic. Can you expand on why EA having insufficient skill to "navigate power dynamics around AI" implies "our comparative advantage will need to be truth-seeking...
When you're weighing existential risks (or other things which steer human civilization on a large scale) against each other, effects are always going to be denominated in a very large number of lives. And this is what OP said they were doing: "a major consideration here is the use of AI to mitigate other x-risks". So I don't think the headline numbers are very useful here (especially because we could make them far far higher by counting future lives).
It follows from alignment/control/misuse/coordination not being (close to) solved.
"AGIs will be helping us on a lot of tasks", "collusion is hard" and "people will get more scared over time" aren't anywhere close to overcoming it imo.
These are what I mean by the vague intuitions.
I think it should be possible to formalise it, even
Nobody has come anywhere near doing this satisfactorily. The most obvious explanation is that they can't.
The issue is that both sides of the debate lack gears-level arguments. The ones you give in this post (like "all the doom flows through the tiniest crack in our defence") are more like vague intuitions; equally, on the other side, there are vague intuitions like "AGIs will be helping us on a lot of tasks" and "collusion is hard" and "people will get more scared over time" and so on.
Last time there was an explicitly hostile media campaign against EA the reaction was not to do anything, and the result is that Émile P. Torres has a large media presence,[1] launched the term TESCREAL to some success, and EA-critical thoughts became a lot more public and harsh in certain left-ish academic circles.
You say this as if there were ways to respond which would have prevented this. I'm not sure these exist, and in general I think "ignore it" is a really really solid heuristic in an era where conflict drives clicks.
I think responding in a way that is calm, boring, and factual will help. It's not going to get Émile to publicly recant anything. The goal is just for people who find Émile's stuff to see that there's another side to the story. They aren't going to publicly say "yo Émile I think there might be another side to the story". But fewer of them will signal boost their writings on the theory that "EAs have nothing to say in their own defense, therefore they are guilty". Also, I think people often interpret silence as a contemptuous response, and that can be enraging in itself.
@Linch, see the article I linked above, which identifies a bunch of specific bottlenecks where lobbying and/or targeted funding could have been really useful. I didn't know about these when I wrote my comment above, but I claim prediction points for having a high-level heuristic that led to the right conclusion anyway.
The article I linked above has changed my mind back again. Apparently the RTS,S vaccine has been in clinical trials since 1997. So the failure here wasn't just an abstract lack of belief in technology: the technology literally already existed the whole time that the EA movement (or anyone who's been in this space for less than two decades) has been thinking about it.
An article on why we didn't get a vaccine sooner: https://worksinprogress.co/issue/why-we-didnt-get-a-malaria-vaccine-sooner
This seems like significant evidence for the tractability of speeding things up. E.g. a single (unjustified) decision by the WHO in 2015 delayed the vaccine by almost a decade, four years of which were spent in fundraising. It seems very plausible that even 2015 EA could have sped things up by multiple years in expectation either lobbying against the original decision, or funding the follow-up trial.
Ah, I see. I think the two arguments I'd give here:
Hmm, your comment doesn't really resonate with me. I don't think it's really about being monomaniacal. I think the (in hindsight) correct thought process here would be something like:
"Over the next 20 or 50 years, it's very likely that the biggest lever in the space of malaria will be some kind of technological breakthrough. Therefore we should prioritize investigating the hypothesis that there's some way of speeding up this biggest lever."
I don't think you need this "move heaven and earth" philosophy to do that reasoning; I don't think you need to focus o...
I used to not actually believe in heavy-tailed impact. On some level I thought that early rationalists (and to a lesser extent EAs) had "gotten lucky" in being way more right than academic consensus about AI progress. And I thought on some gut level that e.g. Thiel and Musk and so on kept getting lucky, because I didn't want to picture a world in which they were actually just skillful enough to keep succeeding (due to various psychological blockers).
Now, thanks to dealing with a bunch of those blockers, I have internalized to a much greater extent that you...