levin

Bio

Leading Harvard EA and the Harvard-MIT Project on Existential Risk, and occasionally doing AI governance research.

Comments
62

Wait, why have we not tried to buy ea.org?

As your fellow Cantabrigian, I have some sympathy for this argument. But I'm confused about some parts of it and disagree with others:

  • "EA hub should be on the east coast" is one kind of claim. "People starting new EA projects, orgs, and events should do so on the east coast" is a different one. They'd be giving up the very valuable benefits of living near the densest concentrations of other orgs, especially funders. You're right that the reasons for Oxford and the Bay being the two hubs are largely historical rather than practical, but that's the nature of Schelling points; it might have been better to have started in the East Coast (or somewhere temperate, cheap, cosmopolitan, and globally centrally located like Barcelona), but how are we going to all coordinate to move there? The options that come to mind (Open Phil, FTX, CEA, and/or others move there, or coordinate to do so together?) seem very costly — on the order of weeks or months of the entire organization's time.
  • On the commonly held view that AI is by far the most important cause area, it's fine that the Bay is an EA hub even though the tech industry is its only non-Schelling-point reason to be one.
  • For better or worse, Berkeley is also a hub for community-building now; tons of student organizers spent this summer there. Again, they go there for the recursive common-knowledge reason that other people will also be going there, so there'd have to be some (costly?) coordinated shift probably driven by a major org.
  • It seems slightly like cheating to count all those universities (or indeed all those cities) as part of the same hub. Oxford and London are far closer to each other than any of Boston, DC, and NYC are. It seems like a place can only count as a hub if it would be physically easy for any two people living there to meet every week, and Boston, NYC, and DC are not close enough to qualify. Pointing out the cause-area networks that each of these cities has, and cumulatively counting them against the Bay "merely" having the AI industry, makes it seem more likely than it is that the entire East Coast could achieve the kind of Schelling status that Berkeley has. (Notably, the Bay Area EA community is overwhelmingly concentrated in Berkeley specifically, which supports the idea that physical proximity is very important.)
  • Generally I really like the East Coast lifestyle (insofar as it differs from the Bay's) and am still figuring out how to articulate why. Maybe it's that people are a little more ironic. Maybe it's that having to Uber basically everywhere in the Bay is dystopian. That said, lots of EAs like the outdoors, and the East Coast is much worse than the Bay Area for hiking and the like.
  • One thing I like about Boston relative to the Bay is the relatively horizontal social/professional structure: it feels like the Bay has a pretty clear status pyramid and a pretty clear line around who's in the elite circle (access to the top workspaces), while things are looser and chiller in Boston. But this seems to result from the Bay being a major hub and Boston being less of one. E.g., once a certain office space opens in Cambridge, I expect some of these dynamics to emerge here, and if Boston became as booming as Berkeley, I think a pyramid would likely become more apparent as well. (Sad.)

I think it is a heuristic rather than a pure value. My point in my conversation with Josh was to disentangle these two things (see Footnote 1!). I probably should have been clearer that these examples are Move 1 in a two-move case for longtermism: first, show that the normative "don't care about future people" position leads to conclusions you wouldn't endorse; then, engage with the empirical disagreement over our ability to benefit future people, which actually lies at the heart of the issue.

Borrowing this from some 80k episode of yore, but it seems like another big (though surmountable) problem with neglectedness is deciding which resources count as going toward the problem. Is existential risk from climate change neglected? At first blush, no: hundreds of billions of dollars go toward climate every year. But how much of that money actually goes toward the tail risks, how much should we downweight it for ineffectiveness, and so on?

Great summary, thanks for posting! Quick question about this line:

"surely the major problem in our actual world is that consequentialist behavior leads to poor self-interested outcomes. The implicit view in economics is that it’s society’s job to align selfish interests with public interests"

Do you mean the reverse — self-interested behavior leads to poor consequentialist outcomes?

Right, I'm just pointing out that the health/income tradeoff is a very important input that affects all of their funding recommendations.

I'm not familiar with the Global Burden of Disease report, but if Open Phil and GiveWell are using it to inform health/income tradeoffs, it seems like it would play a pretty big role in their grantmaking (since the bar for funding is set by being a certain multiple more effective than GiveDirectly!). [edit: also, I just realized that my comment above looked like I was saying "mostly yes" to the question of "is this true, as an empirical matter?" I agree this is misleading. I meant that Linch's second sentence was mostly true; edited to reflect that.]

Your understanding is mostly correct. But I often mention this (genuinely very cool) corrective study to the types of political believers described in this post, and they've really liked it too: https://www.givewell.org/research/incubation-grants/IDinsight-beneficiary-preferences-march-2019 [edit: initially this comment began with "mostly yes" which I meant as a response to the second sentence but looked like a response to the first, so I changed it to "your understanding is mostly correct."]

I am generally not that familiar with the creating-more-persons arguments beyond what I've said so far, so it's possible I'm about to say something that people who hold person-affecting views have a good rebuttal for. But to me, the basic problem with "only caring about people who will definitely exist" is that nobody will definitely exist. We care about the effects of our actions on people born in 2024 because there's a very high chance that lots of people will be born then, but it's possible that an asteroid, comet, gamma-ray burst, pandemic, rogue AI, or some other threat could wipe us out by then. We're only, say, 99.9% sure these people will be born, but this doesn't stop us from caring about them.

As we get further and further into the future, we get less confident that there will be people around to benefit or be harmed by our actions, and this seems like a perfectly good reason to discount these effects.

And if we're okay with doing that across time, it seems like we should similarly be okay with doing it within a single time. The UN projects a global population of 8.5 billion by 2030, but this is again not a guarantee. Maybe there's a 98% chance that 8 billion people will exist then, an 80% chance that another 300 million will exist, a 50% chance that another 200 million will exist (getting us to a median of 8.5 billion), a 20% chance for 200 million more, and a 2% chance that there will be another billion after that. I think it would be odd to count everybody who has a 50.01% chance of existing and nobody who's at 49.99%. Instead, we should take both as having a ~50% chance of being around to be benefited/harmed by our actions and do the moral accounting accordingly.

Then, as you get further into the future, the error bars get a lot wider and you wind up starting to count people who exist in only, say, 0.1% of scenarios. This is less intuitive, but I think it makes more sense to count their interests as 0.1% as important as those of people who definitely exist today, just as we count the interests of people born in 2024 as 99.9% as important, rather than drawing the line somewhere and saying we shouldn't consider them at all.
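To make that accounting concrete, here's a minimal sketch in Python using the illustrative numbers above; the probabilities and cohort sizes are made up for the example, not actual forecasts.

```python
# Probability-weighted "moral accounting" for possible future people.
# Each cohort is (probability of existing in 2030, number of people in it).
# These figures are purely illustrative, echoing the made-up ones above.
cohorts = [
    (0.98, 8_000_000_000),  # near-certain baseline population
    (0.80, 300_000_000),    # fairly likely additional people
    (0.50, 200_000_000),    # coin-flip cohort (takes the median to ~8.5B)
    (0.20, 200_000_000),    # unlikely additional cohort
    (0.02, 1_000_000_000),  # tail scenario
]

# Rather than counting only the cohorts above some 50% cutoff, weight each
# cohort's interests by its probability of existing at all.
expected_people = sum(p * n for p, n in cohorts)
print(f"Probability-weighted 2030 population: {expected_people / 1e9:.2f} billion")

# The same move extends further into the future: a person who exists in only a
# 0.1%-likely scenario gets weight 0.001 rather than being dropped entirely.
```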

The question of whether these people born in 0.1% of future worlds are made better off by existing (provided that they have net-positive experiences) rather than not existing just returns us to my first reply to your comment: I don't have super robust philosophical arguments but I have those intuitions.

I appreciate the intention of keeping argumentative standards on the forum high, but I think this misses the mark. (Edit: I want this comment's tone to come off less as "your criticism is wrong" and more as "you're probably right that this isn't great philosophy; I'm just trying to do a different thing.")

I don't claim to be presenting the strongest case for person-affecting views, and I acknowledge in the post that non-presentist person-affecting views don't have these problems. As I wrote, I have repeatedly encountered these views "in the wild" and am presenting this as a handbook for pumping the relevant intuitions, not as a philosophical treatise that shows the truth of the total view. The point of the post is to help people share their intuitions with skeptics, not to persuade moral philosophers.

In general, I'm confused by the standard of arguing against the strongest possible version of a view rather than the view people actually have and express. If someone said "I'm going to buy Home Depot because my horoscope said I will find my greatest treasure in the home," my response wouldn't be "I'll ignore that and argue against the strongest possible case for buying Home Depot stock"; it would be to argue that astrology is not a good way of making investment decisions. I'm also not sure where you're seeing the post "assuming a position is true." My methodology here is to present a case and see what conclusions we'd have to draw if the position weren't true. Utilitarians do in fact have to explain either why the organ harvesting is actually fine or why utilitarianism doesn't actually justify it, so it seems fine to ask those who hold presentist person-affecting views to either bite the bullet or explain why I'm wrong about the implication.

Finally, for what it's worth, I did initially include a response: Émile Torres's response to a version of Case 2. I decided that including Torres's response (which was literally to "shrug," because if utilitarians get to not care about the repugnant conclusion, then they get to ignore these cases) would not have enriched the post, and indeed might have seemed combative and uncharitable towards the view. (This response, along with the subsequent discussion arguing that "western ethics is fundamentally flawed," leads me to think the post wouldn't benefit much from trying to steelman the opposition. Maybe western ethics is fundamentally flawed, but I'm not trying to wade into that debate in this post.)
