I object to calling funding two public defenders "strictly dominating" being one yourself; while public defender isn't an especially high-variance role with respect to performance compared to e.g. federal public policy, it doesn't seem that crazy that a really talented and dedicated public defender could be more impactful than the 2 or 3 marginal PDs they'd fund while earning to give.
The shape of my updates has been something like:
Q2 2023: Woah, looks like the AI Act might have a lot more stuff aimed at the future AI systems I'm most worried about than I thought! Making that go well now seems a lot more important than it did when it looked like it would mostly be focused on pre-foundation model AI. I hope this passes!
Q3 2023: As I learn more about this, it seems like a lot of the value is going to come from the implementation process, since it seems like the same text in the actual Act could wind up either specifically requiring things...
The text of the Act is mostly determined, but it delegates tons of very important detail to standard-setting organizations and implementation bodies at the member-state level.
(Cross-posting from LW)
Thanks for these thoughts! I agree that advocacy and communications is an important part of the story here, and I'm glad for you to have added some detail on that with your comment. I’m also sympathetic to the claim that serious thought about “ambitious comms/advocacy” is especially neglected within the community, though I think it’s far from clear that the effort that went into the policy research that identified these solutions or work on the ground in Brussels should have been shifted at the margin to the kinds of public communica...
It uses the language of "models that present systemic risks" rather than "very capable," but otherwise, a decent summary, bot.
(I began working for OP on the AI governance team in June. I'm commenting in a personal capacity based on my own observations; other team members may disagree with me.)
OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo
FWIW I really don’t think OP is in the business of preserving the status quo. People who work on AI at OP have a range of opinions on just about every issue, but I don't think any of us feel good about the status quo! People (including non-grantees) often ask us for our thoug...
Nitpick: I would be sad if people ruled themselves out for e.g. being "20th percentile conscientiousness" since in my impression the popular tests for OCEAN are very sensitive to what implicit reference class the test-taker is using.
For example, I took one a year ago and got third percentile conscientiousness, which seems pretty unlikely to be true given my abilities to e.g. hold down a grantmaking job, get decent grades in grad school, successfully run 50-person retreats, etc. I think the explanation is basically that this is how I respond to "I am ...
Yeah this is a good point; fwiw I was pointing at "<30th percentile conscientiousness" as a problem that I have, as someone who is often late to meetings by more than 1-2 minutes (including twice today). My guess is that my (actual, not perceived) level of conscientiousness is pretty detrimental to LTFF fund chair work, while yours should be fine? I also think "Harvard Law student" is just a very wacky reference class re: conscientiousness; most people probably come from a less skewed sample than yours.
Reposting my LW comment here:
Just want to plug Josh Greene's great book Moral Tribes here (disclosure: he's my former boss). Moral Tribes basically makes the same argument in different/more words: we evolved moral instincts that usually serve us pretty well, and the tricky part is realizing when we're in a situation that requires us to pull out the heavy-duty philosophical machinery.
Huh, it really doesn't read that way to me. Both are pretty clear causal paths to "the policy and general coordination we get are better/worse as a result."
Most of these have the downside of not giving the accused the chance to respond and thereby giving the community the chance to evaluate both the criticism and the response (which as I wrote recently isn't necessarily a dominant consideration, but it is an upside of the public writeup).
Fwiw, seems like the positive performance is more censored in expectation than the negative performance: while a case that CH handled poorly could either be widely discussed or never heard about again, I'm struggling to think of how we'd all hear about a case that they handled well, since part of handling it well likely involves the thing not escalating into a big deal and respecting people's requests for anonymity and privacy.
It does seem like a big drawback that the accused don't know the details of the accusations, but it also seems like there are obvio...
Thanks for writing this up!
I hope to write a post about this at some point, but since you raise some of these arguments, I think the most important cruxes for a pause are:
Agree, basically any policy job seems to start teaching you important stuff about institutional politics and process and the culture of the whole political system!
Though I should also add this important-seeming nuance I gathered from a pretty senior policy person who said basically: "I don't like the mindset of, get anywhere in the government and climb the ladder and wait for your time to save the day; people should be thinking of it as proactively learning as much as possible about their corner of the government-world, and ideally sharing that information with others."
A suggestion for how people might develop this expertise from ~scratch, in a way that should be pretty adaptable to e.g. an undergraduate or grad-level course, or independent research (a much better/stronger version of things I've done in the past, which involved lots of talking and take-developing but not much detail or publication, both of which I think are really important):
Honestly, my biggest recommendation would be just getting a job in policy! You'll get to see what "everyone knows", where the gaps are, and you'll have access to a lot more experts and information to help you upskill faster if you're motivated.
You might not be able to get a job on the topic you think is most impactful but any related job will give you access to better information to learn faster, and make it easier to get your next, even-more-relevant policy job.
In my experience, getting a policy job is relatively uncorrelated with knowing a lot about a specific topic, so I think people should aim for this early. You can also see whether you actually LIKE policy jobs and are good at them before you spend too much time!
A technique I've found useful in making complex decisions where you gather lots of evidence over time -- for example, deciding what to do after your graduation, or whether to change jobs, etc., where you talk to lots of different people and weigh lots of considerations -- is to make a spreadsheet of all the arguments you hear, each with a score for how much it supports each decision.
For example, this summer, I was considering the options of "take the Open Phil job," "go to law school," and "finish the master's." I put each of these options in columns. Then...
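The spreadsheet technique described above can be sketched in code. This is a hypothetical illustration, not the author's actual spreadsheet: the arguments and scores below are made-up placeholders, and the only real structure taken from the comment is "options as columns, arguments as rows, a score per cell."

```python
# Sketch of the decision spreadsheet: rows are arguments gathered over time,
# columns are options, and each cell scores how much that argument supports
# that option. Scores here are illustrative placeholders only.

options = ["Open Phil job", "law school", "finish master's"]

arguments = {
    # argument -> score per option (higher = more support for that option)
    "career capital":     {"Open Phil job": 3, "law school": 2, "finish master's": 1},
    "direct impact":      {"Open Phil job": 3, "law school": 1, "finish master's": 0},
    "keeps options open": {"Open Phil job": 1, "law school": 3, "finish master's": 2},
}

# Sum each option's column to see how the evidence stacks up overall.
totals = {opt: sum(scores[opt] for scores in arguments.values()) for opt in options}
best = max(totals, key=totals.get)

print(totals)
print("Leading option:", best)
```

As new arguments come in from conversations, you just append a row and the totals update, which is the main appeal of the technique: the bookkeeping is trivial, and the hard work stays in assigning honest scores.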
As of August 24, 2023, I no longer endorse this post for a few reasons.
Was going to make a very similar comment. Also, even if "someone else in Boston could have" done the things, their labor would have funged from something else; organizer time/talent is a scarce resource, and adding to that pool is really valuable.
Yep, all sounds right to me re: not deferring too much and thinking through cause prioritization yourself, and then also that the portfolio is too broad, though these are kind of in tension.
To answer your question, I'm not sure I update that much on having changed my mind, since I think if people did listen to me and do AISTR this would have been a better use of time even for a governance career than basically anything besides AI governance work (and of course there's a distribution within each of those categories for how useful a given project is; lots of technical projects would've been more useful than the median governance project).
I just want to say I really like this style of non-judgmental anthropology and think it gives an accurate-in-my-experience range of what people are thinking and feeling in the Bay, for better and for worse.
Also: one thing that I sort of expected to come up and didn't see, except indirectly in a few vignettes, is just how much of one's life in the Bay Area rationalist/EA scene is comprised of work, of AI, and/or of EA. Part of this is just that I've only ever lived in the Bay for up to ~6 weeks at a time and was brought there by work, and if I lived there p...
That is a useful post, thanks. It changes my mind somewhat about EA's overall reputational damage, but I still think the FTX crisis exploded the self-narrative of ascendancy (both in money and influence), and the prospects have worsened for attracting allies, especially in adversarial environments like politics.
I agree that we're now in a third wave, but I think this post is missing an essential aspect of the new wave, which is that EA's reputation has taken a massive hit. EA doesn't just have less money because of SBF; it has less trust and prestige, less optimism about becoming a mass movement (or even a mass-elite movement), and fewer potential allies because of SBF, Bostrom's email/apology, and the Time article.
For that reason, I'd put the date of the third wave around the 10th of November 2022, when it became clear that FTX was not only experiencing a "liqui...
Thanks! You may be interested in my recent post with Emma which found that FTX does not seem to have greatly affected EA's public image.
I sense this is true internally but not externally. I don't really feel like our reputation has changed much in general.
Maybe among US legislators? I don't know.
I spent some time last summer looking into the "other countries" idea: if we'd like to slow down both Chinese AI timelines without speeding up US timelines, what if we tried to get countries that aren't the US (or the UK, since DeepMind is there) to accept more STEM talent from China? TLDR:
I don't know how we got to whether we should update about longtermism being "bad." As far as I'm concerned, this is a conversation about whether Eric Schmidt counts as a longtermist by virtue of being focused on existential risk from AI.
It seems to me like you're saying: "the vast majority of longtermists are focused on existential risks from AI; therefore, people like Eric Schmidt who are focused on existential risks from AI are accurately described as longtermists."
When stated that simply, this is an obvious logical error (in the form of "most squares are rectangles, so this rectangle named Eric Schmidt must be a square"). I'm curious if I'm missing something about your argument.
Thanks for this post, Luise! I've been meaning to write up some of my own burnout-type experiences but probably don't want to make a full post about it, so instead I'll just comment the most important thing from it here, which is:
Burnout does not always take the form of reluctance to work or lacking energy. In my case, it was a much more active resentment of my work — work that I had deeply enjoyed even a month or two earlier — and a general souring of my attitude towards basically all projects I was working on or considering. While I was sometimes still, ...
The posts linked in support of "prominent longtermists have declared the view that longtermism basically boils down to x-risk" do not actually advocate this view. In fact, they argue that longtermism is unnecessary in order to justify worrying about x-risk, which is evidence for the proposition you're arguing against, i.e. you cannot conclude someone is a longtermist because they're worried about x-risk.
As of May 27, 2023, I no longer endorse this post. I think most early-career EAs should be trying to contribute to AI governance, either via research, policy work, advocacy, or other stuff, with the exception of people with outlier alignment research talent or very strong personal fits for other things.
I do stand by a lot of the reasoning, though, especially of the form "pick the most important thing and only rule it out if you have good reasons to think personal fit differences outweigh the large differences in importance." The main thing that changed was that governance and policy stuff now seems more tractable than I thought, while alignment research seems less tractable.
Edited the top of the post to reflect this.
I love the point about the dangers of "can't go wrong" style reasoning. I think we're used to giving advice like this to friends when they're stressing out about a choice that is relatively low-stakes, even like "which of these all-pretty-decent jobs [in not-super-high-impact areas] should I take." It's true that for the person getting the advice, all the jobs would probably be fine, but the intuition doesn't work when the stakes for others are very high. Impact is likely so heavy-tailed that even if you're doing a job at the 99th percentile of your option...
Correct that CBAI does not have plans to run a research fellowship this summer (though we might do one again in the winter), but we are tentatively planning on running a short workshop this summer that I think will at least slightly ease this bottleneck by connecting people worried about AI safety to the US AI risks policy community in DC - stay tuned (and email me at trevor [at] cbai [dot] ai if you'd want to be notified when we open applications).
Pretty great discussion at the end of the implications of AI becoming more salient among the public!
A couple more entries from the last couple days: Kelsey Piper's great Ezra Klein podcast appearance, and, less important but closer to home, The Crimson's positive coverage of HAIST.
Rereading your post, it does make sense now that you were thinking of safety teams at the big labs, but both the title about "selling out" and point #3 about "capabilities people" versus "safety people" made me think you had working on capabilities in mind.
If you think it's "fair game to tell them that you think their approach will be ineffective or that they should consider switching to a role at another organization to avoid causing accidental harm," then I'm confused about the framing of the post as being "please don't criticize EAs who 'sell out'," sin...
(Realizing that it would be hypocritical for me not to say this, so I'll add: if you're working on capabilities at an AGI lab, I do think you're probably making us less safe and could do a lot of good by switching to, well, nearly anything else, but especially safety research.)
Setting aside the questions of the impacts of working at these companies, it seems to me like this post prioritizes the warmth and collegiality of the EA community over the effects that our actions could have on the entire rest of the planet in a way that makes me feel pretty nervous. If we're trying in good faith to do the most good, and someone takes a job we think is harmful, it seems like the question should be "how can I express my beliefs in a way that is likely to be heard, to find truth, and not to alienate the person?" rather than "is it polite to...
Also seems important to note: EA investing should have not only different expectations about the world but also different goals about the point of finance. We are relatively less interested in moving money from worlds where we're rich to worlds where we're poor (the point of risk aversion) and more interested in moving it from worlds where it's less useful to where it's more useful.
Concretely: if it turns out we're wrong about AI, this is a huge negative update on the usefulness of money to buy impact, since influencing this century's development generally...
Agree with Xavier's comment that people should consider reversing the advice, but generally confused/worried that this post is getting downvoted (13 karma on 18 votes as of this writing). In general, I want the forum to be a place where bold, truth-seeking claims about how to do more good get attention. My model of people downvoting this is that they are worried that this will make people work harder despite this being suboptimal. I think that people can make these evaluations well for themselves, and that it's good to present people with information and a...
I agree that the forum should be a place for bold, truth-seeking claims about how to do more good. However, I think recommending people try taking stimulants is quite different to recommending people try working longer hours. The downside risks of harm are higher and more serious, and more difficult for readers to interpret for themselves. I don't think this post is well argued, but the part on stimulants is particularly weak.
Heavily cosigned (as someone who has worked with some of Nick's friends whom he got into EA, not as someone who's done a particularly great job of this myself). I encourage readers of this post to think of EA-sympathetic and/or very talented friends of theirs and find a time to chat about how they could get involved!
As your fellow Cantabrigian I have some sympathies for this argument. But I'm confused about some parts of it and disagree with others:
I think it is a heuristic rather than a pure value. My point in my conversation with Josh was to disentangle these two things — see Footnote 1! I probably should be more clear that these examples are Move 1 in a two-move case for longtermism: first, show that the normative "don't care about future people" thing leads to conclusions you wouldn't endorse, then argue about the empirical disagreement about our ability to benefit future people that actually lies at the heart of the issue.
Borrowing this from some 80k episode of yore, but it seems like another big (but surmountable) problem with neglectedness is which resources count as going towards the problem. Is existential risk from climate change neglected? At first blush, no, hundreds of billions of dollars are going toward climate every year. But how many of these are actually going towards the tail risks, and how much should we downweight them for ineffectiveness, and so on.
Great summary, thanks for posting! Quick question about this line:
> surely the major problem in our actual world is that consequentialist behavior leads to poor self-interested outcomes. The implicit view in economics is that it’s society’s job to align selfish interests with public interests
Do you mean the reverse — self-interested behavior leads to poor consequentialist outcomes?
Right, I'm just pointing out that the health/income tradeoff is a very important input that affects all of their funding recommendations.
I'm not familiar with the Global Burden of Disease report, but if Open Phil and GiveWell are using it to inform health/income tradeoffs it seems like it would play a pretty big role in their grantmaking (since the bar for funding is set by being a certain multiple more effective than GiveDirectly!) [edit: also, I just realized that my comment above looked like I was saying "mostly yes" to the question of "is this true, as an empirical matter?" I agree this is misleading. I meant that Linch's second sentence was mostly true; edited to reflect that.]
Your understanding is mostly correct. But I often mention this (genuinely very cool) corrective study to the types of political believers described in this post, and they've really liked it too: https://www.givewell.org/research/incubation-grants/IDinsight-beneficiary-preferences-march-2019 [edit: initially this comment began with "mostly yes" which I meant as a response to the second sentence but looked like a response to the first, so I changed it to "your understanding is mostly correct."]
I am generally not that familiar with the creating-more-persons arguments beyond what I've said so far, so it's possible I'm about to say something that the person-affecting-viewers have a good rebuttal for, but to me the basic problem with "only caring about people who will definitely exist" is that nobody will definitely exist. We care about the effects of people born in 2024 because there's a very high chance that lots of people will be born then, but it's possible that an asteroid, comet, gamma ray burst, pandemic, rogue AI, or some other threat could ...
I appreciate the intention of keeping argumentative standards on the forum high, but I think this misses the mark. (Edit: I want this comment's tone to come off less as "your criticism is wrong" and more like "you're probably right that this isn't great philosophy; I'm just trying to do a different thing.")
I don't claim to be presenting the strongest case for person-affecting views, and I acknowledge in the post that non-presentist person-affecting views don't have these problems. As I wrote, I have repeatedly encountered these views "in the wild" and am p...
I anticipate the response being "climate change is already causing suffering now," which is true, even though the same people would agree that the worst effects are decades in the future and mostly borne by future generations.
I basically just sidestep these issues in the post except for alluding to the "transitivity problems" with views that are neutral to the creation of people whose experiences are good. That is, the question of whether future people matter and whether more future people is better than fewer are indeed distinct, so these examples do not fully justify longtermism or total utilitarianism.
Borrowing this point from Joe Carlsmith: I do think that like, my own existence has been pretty good, and I feel some gratitude towards the people who took actions to make it m...
Further evidence that it's not an applause light: vast majority of respondents to this Twitter poll about the subject say the jokes have not crossed a line: https://twitter.com/nathanpmyoung/status/1557811908555804674?s=21&t=wdOUd5m4d_ZqiJ-12TLM9Q
Interesting! I actually wrote a piece on "the ethics of 'selling out'" in The Crimson almost 6 years ago (jeez) that was somewhat more explicit in its EA justification, and I'm curious what you make of those arguments.
I think randomly selected Harvard students (among those who have the option to do so) deciding to take high-paying jobs and donate double-digit percentages of their salary to places like GiveWell is very likely better for the world than the random-ish other things they might have done, and for that reason I strongly support this op-ed. But I ...