All of tlevin's Comments + Replies

Interesting! I actually wrote a piece on "the ethics of 'selling out'" in The Crimson almost 6 years ago (jeez) that was somewhat more explicit in its EA justification, and I'm curious what you make of those arguments.

I think randomly selected Harvard students (among those who have the option to do so) deciding to take high-paying jobs and donate double-digit percentages of their salary to places like GiveWell is very likely better for the world than the random-ish other things they might have done, and for that reason I strongly support this op-ed. But I ... (read more)

6
chanden
2d
Wow that's awesome. Great to connect with a Crimson alum!! Your article is great — it covers a lot of bases, ones that I wish I had gotten the chance to talk about in my op-ed. The original version was a lot heavier on the EA lingo: it discussed 80,000 Hours explicitly, didn't make such a strong claim that "selling out" was the best strategy, etc., but I decided that a straightforward & focused approach to the problem would be most useful. I don't think I'd truly say selling out is the "best" thing to do for everyone (which is the language my article uses), and that's for reasons others have laid out in this comment section. But I do think it's a useful nudge. I've gotten a lot of reactions like "Wow, these stats are really eye-opening," and "That's a cool way to think about selling out," which was, honestly, the intention, so I'm glad it's played out that way. It seems hard to EA-pill everyone from the outset. We all got here in small steps, not with everything thrust at us all at once. I'm hopeful that it's at the very least a good start for a few people :)

I object to calling funding two public defenders "strictly dominating" being one yourself; while public defender isn't an especially high-variance role with respect to performance compared to e.g. federal public policy, it doesn't seem that crazy that a really talented and dedicated public defender could be more impactful than the 2 or 3 marginal PDs they'd fund while earning to give.

1
Mjreard
2d
Yes, in general it's good to remember that people are far from 1:1 substitutes for each other for a given job title. I think the "1 into 2" reasoning is a decent intuition pump for how wide the option space becomes when you think laterally, though, and that lateral thinking of course shouldn't stop at earning to give. A minor, not fully endorsed object-level point: I think people who do ~one-on-one service work like (most) doctors and lawyers are much less likely to 10x the median than e.g. software engineers. With rare exceptions, their work just isn't that scalable, and in many cases output is a linear return to effort. I think this might be especially true in public defense, where you sort of wear prosecutors down over a volume of cases.

The shape of my updates has been something like:

Q2 2023: Woah, looks like the AI Act might have a lot more stuff aimed at the future AI systems I'm most worried about than I thought! Making that go well now seems a lot more important than it did when it looked like it would mostly be focused on pre-foundation model AI. I hope this passes!

Q3 2023: As I learn more about this, it seems like a lot of the value is going to come from the implementation process, since it seems like the same text in the actual Act could wind up either specifically requiring things... (read more)

The text of the Act is mostly determined, but it delegates tons of very important detail to standard-setting organizations and implementation bodies at the member-state level.

1
constructive
3mo
And your update is that this process will be more globally impactful than you initially expected? Would be curious to learn why.

(Cross-posting from LW)

Thanks for these thoughts! I agree that advocacy and communications is an important part of the story here, and I'm glad for you to have added some detail on that with your comment. I’m also sympathetic to the claim that serious thought about “ambitious comms/advocacy” is especially neglected within the community, though I think it’s far from clear that the effort that went into the policy research that identified these solutions or work on the ground in Brussels should have been shifted at the margin to the kinds of public communica... (read more)

3
Akash
3mo
I appreciate the comment, though I think there's a lack of specificity that makes it hard to figure out where we agree/disagree (or more generally what you believe). If you want to engage further, here are some things I'd be excited to hear from you:

  • What are a few specific comms/advocacy opportunities you're excited about//have funded?
  • What are a few specific comms/advocacy opportunities you view as net negative//have actively decided not to fund?
  • What are a few examples of hypothetical comms/advocacy opportunities you've been excited about?
  • What do you think about e.g. Max Tegmark/FLI, Andrea Miotti/Control AI, The Future Society, the Center for AI Policy, Holly Elmore, PauseAI, and other specific individuals or groups that are engaging in AI comms or advocacy?

I think if you (and others at OP) are interested in receiving more critiques or overall feedback on your approach, one thing that would be helpful is writing up your current models/reasoning on comms/advocacy topics. In the absence of this, people simply notice that OP doesn't seem to be funding some of the main existing examples of comms/advocacy efforts, but they don't really know why, and they don't really know what kinds of comms/advocacy efforts you'd be excited about.

It uses the language of "models that present systemic risks" rather than "very capable," but otherwise, a decent summary, bot.

(I began working for OP on the AI governance team in June. I'm commenting in a personal capacity based on my own observations; other team members may disagree with me.)

OpenPhil sometimes uses its influence to put pressure on orgs to not do things that would disrupt the status quo

FWIW I really don’t think OP is in the business of preserving the status quo.  People who work on AI at OP have a range of opinions on just about every issue, but I don't think any of us feel good about the status quo! People (including non-grantees) often ask us for our thoug... (read more)

Nitpick: I would be sad if people ruled themselves out for e.g. being "20th percentile conscientiousness" since in my impression the popular tests for OCEAN are very sensitive to what implicit reference class the test-taker is using. 

For example, I took one a year ago and got third percentile conscientiousness, which seems pretty unlikely to be true given my abilities to e.g. hold down a grantmaking job, get decent grades in grad school, successfully run 50-person retreats, etc. I think the explanation is basically that this is how I respond to "I am ... (read more)

Yeah, this is a good point; fwiw I was pointing at "<30th percentile conscientiousness" as a problem that I have, as someone who is often late to meetings by more than 1-2 minutes (including twice today). My guess is that my (actual, not perceived) level of conscientiousness is pretty detrimental to LTFF fund chair work, while yours should be fine? I also think "Harvard Law student" is just a very wacky reference class re: conscientiousness; most people probably come from a less skewed sample than yours.

Reposting my LW comment here:

Just want to plug Josh Greene's great book Moral Tribes here (disclosure: he's my former boss). Moral Tribes basically makes the same argument in different/more words: we evolved moral instincts that usually serve us pretty well, and the tricky part is realizing when we're in a situation that requires us to pull out the heavy-duty philosophical machinery.

Huh, it really doesn't read that way to me. Both are pretty clear causal paths to "the policy and general coordination we get are better/worse as a result."

5
Holly_Elmore
6mo
That too, but there was a clear indication that 1 would be fun and invigorating and 2 would be depressing.

Most of these have the downside of not giving the accused the chance to respond, and thereby not giving the community the chance to evaluate both the criticism and the response (which, as I wrote recently, isn't necessarily a dominant consideration, but it is an upside of the public writeup).

2
Linch
6mo
I agree that what you said is a consideration, though I'm not sure it's an upside. E.g., I wasted a lot more time/sleep on this topic than I would have if I'd learned about it elsewhere and triaged accordingly, and I wouldn't be surprised if other members of the public did as well.

Fwiw, seems like the positive performance is more censored in expectation than the negative performance: while a case that CH handled poorly could either be widely discussed or never heard about again, I'm struggling to think of how we'd all hear about a case that they handled well, since part of handling it well likely involves the thing not escalating into a big deal and respecting people's requests for anonymity and privacy.

It does seem like a big drawback that the accused don't know the details of the accusations, but it also seems like there are obvio... (read more)

Thanks for writing this up!

I hope to write a post about this at some point, but since you raise some of these arguments, I think the most important cruxes for a pause are:

  1. It seems like in many people's models, the reason the "snap back" is problematic is that the productivity of safety research is much higher when capabilities are close to the danger zone, both because the AIs that we're using to do safety research are better and because the AIs that we're doing the safety research on are more similar to the ones in the danger zone. If the "snap back" redu
... (read more)
3
Akash
6mo
@tlevin I would be interested in you writing up this post, though I'd be even more interested in hearing your thoughts on the regulatory proposal Thomas is proposing. Note that both of your points seem to be arguing against a pause, whereas my impression is that Thomas's post focuses more on implementing a national regulatory body. (I read Thomas's post as basically saying like "eh, I know there's an AI pause debate going on, but actually this pause stuff is not as important as getting good policies. Specifically, we should have a federal agency that does licensing for frontier AI systems, hardware monitoring for advanced chips, and tracking of risks. If there's an AI-related emergency or evidence of imminent danger, then the agency can activate emergency powers to swiftly respond." I think the "snap-back" point and the "long-term supply curve of compute" point seem most relevant to a "should we pause?" debate, but they seem less relevant to Thomas's regulatory body proposal. Let me know if you think I'm missing something, though!)
2
Greg_Colbourn
6mo
Any realistic pause would only be lifted once there is a consensus on a potential solution to x-safety (or at least, say, full solutions to all jailbreaks, mechanistic interpretability and alignment up to the (frozen) frontier). If compute limits are in place during the pause, they can gradually be ratcheted up, with evals performed on models trained at each step, to avoid any such sudden snap back.

Agree, basically any policy job seems to start teaching you important stuff about institutional politics and process and the culture of the whole political system!

Though I should also add this important-seeming nuance I gathered from a pretty senior policy person who said basically: "I don't like the mindset of, get anywhere in the government and climb the ladder and wait for your time to save the day; people should be thinking of it as proactively learning as much as possible about their corner of the government-world, and ideally sharing that information with others."

Suggestion for how people go about developing this expertise from ~scratch, in a way that should be pretty adaptable to e.g. the context of an undergraduate or grad-level course, or independent research (a much better/stronger version of things I've done in the past, which involved lots of talking and take-developing but not a lot of detail and publication, which I think are both really important):

  1. Figure out who, both within the EA world and not, would know at least a fair amount about this topic -- maybe they just would be able to explain why it's useful
... (read more)

Honestly, my biggest recommendation would be just getting a job in policy! You'll get to see what "everyone knows", where the gaps are, and you'll have access to a lot more experts and information to help you upskill faster if you're motivated.

You might not be able to get a job on the topic you think is most impactful but any related job will give you access to better information to learn faster, and make it easier to get your next, even-more-relevant policy job.

In my experience, getting a policy job is relatively uncorrelated with knowing a lot about a specific topic, so I think people should aim for this early. You can also see if you actually LIKE policy jobs and are good at them before you spend too much time!

A technique I've found useful in making complex decisions where you gather lots of evidence over time -- for example, deciding what to do after your graduation, or whether to change jobs, etc., where you talk to lots of different people and weigh lots of considerations -- is to make a spreadsheet of all the arguments you hear, each with a score for how much it supports each decision.

For example, this summer, I was considering the options of "take the Open Phil job," "go to law school," and "finish the master's." I put each of these options in columns. Then... (read more)
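To illustrate the mechanics of this kind of scoring spreadsheet, here is a minimal sketch in Python; the options, arguments, and scores below are hypothetical placeholders (not the author's actual numbers), and a SUM over each option's column in a real spreadsheet would produce the same tallies.

```python
# Minimal sketch of the argument-scoring approach described above.
# Each entry pairs an argument with a score per option; higher = stronger support.
arguments = [
    ("Builds policy career capital", {"Open Phil job": 3, "Law school": 2, "Finish master's": 1}),
    ("Near-term direct impact",      {"Open Phil job": 3, "Law school": 0, "Finish master's": 1}),
    ("Keeps future options open",    {"Open Phil job": 1, "Law school": 3, "Finish master's": 2}),
]

# Sum each option's scores across all arguments (the spreadsheet's column totals).
totals = {}
for _, scores in arguments:
    for option, score in scores.items():
        totals[option] = totals.get(option, 0) + score

# Print options from highest to lowest total.
for option, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{option}: {total}")
```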

As of August 24, 2023, I no longer endorse this post for a few reasons.

  1. I think university groups should primarily be focused on encouraging people to learn a lot of things and becoming a venue/community for people to try to become excellent at things that the world really needs, and this will mostly look like creating exciting and welcoming environments for co-working and discussion on campus. In part this is driven by the things that I think made HAIST successful, and in part it's driven by thinking there's some merit to the unfavorable link to this post
... (read more)

Was going to make a very similar comment. Also, even if "someone else in Boston could have" done the things, their labor would have funged from something else; organizer time/talent is a scarce resource, and adding to that pool is really valuable.

Yep, all sounds right to me re: not deferring too much and thinking through cause prioritization yourself, and then also that the portfolio is too broad, though these are kind of in tension.

To answer your question, I'm not sure I update that much on having changed my mind, since I think that if people had listened to me and done AISTR, this would have been a better use of their time, even for a governance career, than basically anything besides AI governance work (and of course there's a distribution within each of those categories for how useful a given project is; lots of technical projects would've been more useful than the median governance project).

I just want to say I really like this style of non-judgmental anthropology and think it gives an accurate-in-my-experience range of what people are thinking and feeling in the Bay, for better and for worse.

Also: one thing that I sort of expected to come up and didn't see, except indirectly in a few vignettes, is just how much of one's life in the Bay Area rationalist/EA scene is comprised of work, of AI, and/or of EA. Part of this is just that I've only ever lived in the Bay for up to ~6 weeks at a time and was brought there by work, and if I lived there p... (read more)

That is a useful post, thanks. It changes my mind somewhat about EA's overall reputational damage, but I still think the FTX crisis exploded the self-narrative of ascendancy (both in money and influence), and the prospects have worsened for attracting allies, especially in adversarial environments like politics.

5
Ben_West
9mo
Yep, FTX's collapse definitely seems bad for EA!
2
Nathan Young
9mo
Is this your own experience, something you are confident of or something you guess? If the first two I might move towards you more.
42
tlevin
9mo

I agree that we're now in a third wave, but I think this post is missing an essential aspect of the new wave, which is that EA's reputation has taken a massive hit. EA doesn't just have less money because of SBF; it has less trust and prestige, less optimism about becoming a mass movement (or even a mass-elite movement), and fewer potential allies because of SBF, Bostrom's email/apology, and the Time article.

For that reason, I'd put the date of the third wave around the 10th of November 2022, when it became clear that FTX was not only experiencing a "liqui... (read more)

Thanks! You may be interested in my recent post with Emma which found that FTX does not seem to have greatly affected EA's public image.

I sense this is true internally but not externally. I don't really feel like our reputation has changed much in general.

Maybe among US legislators? I don't know.

I spent some time last summer looking into the "other countries" idea: if we'd like to slow down Chinese AI timelines without speeding up US timelines, what if we tried to get countries that aren't the US (or the UK, since DeepMind is there) to accept more STEM talent from China? TLDR:

  • There are very few countries at the intersection of "has enough of an AI industry and general hospitality to Chinese immigrants (e.g., low xenophobia, widely spoken languages) that they'd be interested in moving" + "doesn't have so much of an AI industry that this would
... (read more)

I don't know how we got to whether we should update about longtermism being "bad." As far as I'm concerned, this is a conversation about whether Eric Schmidt counts as a longtermist by virtue of being focused on existential risk from AI.

It seems to me like you're saying: "the vast majority of longtermists are focused on existential risks from AI; therefore, people like Eric Schmidt who are focused on existential risks from AI are accurately described as longtermists."

When stated that simply, this is an obvious logical error (in the form of "most squares are rectangles, so this rectangle named Eric Schmidt must be a square"). I'm curious if I'm missing something about your argument.

18
tlevin
10mo

Thanks for this post, Luise! I've been meaning to write up some of my own burnout-type experiences but probably don't want to make a full post about it, so instead I'll just comment the most important thing from it here, which is:

Burnout does not always take the form of reluctance to work or lacking energy. In my case, it was a much more active resentment of my work — work that I had deeply enjoyed even a month or two earlier — and a general souring of my attitude towards basically all projects I was working on or considering. While I was sometimes still, ... (read more)

1
Luise
10mo
Thanks a lot, I think it's really valuable to have your experience written up!

The posts linked in support of "prominent longtermists have declared the view that longtermism basically boils down to x-risk" do not actually advocate this view. In fact, they argue that longtermism is unnecessary in order to justify worrying about x-risk, which is evidence for the proposition you're arguing against, i.e. you cannot conclude someone is a longtermist because they're worried about x-risk.

2
Arepo
10mo
Are you claiming that if (they think and we agree that) longtermism is 80+% concerned with AI safety work and AI safety work turns out to be bad, we shouldn't update that longtermism is bad? The first claim seems to be exactly what they think.

Scott: You could argue that he means 'socially promote good norms on the assumption that the singularity will lock in much of society's then-standard morality', but 'shape them by trying to make AI human-compatible' seems a much more plausible reading of the last sentence to me, given context of both longtermism.

Neel: He identifies as a not-longtermist (mea culpa), but presumably considers longtermism the source of these as 'the core action relevant points of EA', since they certainly didn't come from the global poverty or animal welfare wings.

Also, at EAG London, Toby Ord estimated there were 'less than 10' people in the world working full time on general longtermism (as opposed to AI or biotech) - whereas the number of people who'd consider themselves longtermist is surely in the thousands.

As of May 27, 2023, I no longer endorse this post. I think most early-career EAs should be trying to contribute to AI governance, either via research, policy work, advocacy, or other stuff, with the exception of people with outlier alignment research talent or very strong personal fits for other things.

I do stand by a lot of the reasoning, though, especially of the form "pick the most important thing and only rule it out if you have good reasons to think personal fit differences outweigh the large differences in importance." The main thing that changed was that governance and policy stuff now seems more tractable than I thought, while alignment research seems less tractable.

Edited the top of the post to reflect this.

3
CEvans
8mo
I think this post is interesting, while being quite unsure what my actual take is on the correctness of this updated version. I think I am worried about community epistemics in a world where we encourage people to defer on what the most important thing is. It seems like there are a bunch of other plausible candidates for where the best marginal value-add is even if you buy AI x-risk arguments, e.g. s-risks, animal welfare, digital sentience, space governance, etc. I am excited about most young EAs thinking about these issues for themselves.

How much do you weight the outside-view consideration here, that you suggested a large shift in the EA community's resource allocation and then changed your mind a year later, which indicates the exact kind of uncertainty that motivates more diverse portfolios?

I think your point that people underrate problem importance relative to personal fit on the current margin seems true, though; and tangentially, my guess is the overall EA cause portfolio (both for financial and human capital allocation) is too large.

I love the point about the dangers of "can't go wrong" style reasoning. I think we're used to giving advice like this to friends when they're stressing out about a choice that is relatively low-stakes, even like "which of these all-pretty-decent jobs [in not-super-high-impact areas] should I take." It's true that for the person getting the advice, all the jobs would probably be fine, but the intuition doesn't work when the stakes for others are very high. Impact is likely so heavy-tailed that even if you're doing a job at the 99th percentile of your option... (read more)

Correct that CBAI does not have plans to run a research fellowship this summer (though we might do one again in the winter), but we are tentatively planning on running a short workshop this summer that I think will at least slightly ease this bottleneck by connecting people worried about AI safety to the US AI risks policy community in DC - stay tuned (and email me at trevor [at] cbai [dot] ai if you'd want to be notified when we open applications).

Pretty great discussion at the end of the implications of AI becoming more salient among the public!

Rereading your post, it does make sense now that you were thinking of safety teams at the big labs, but both the title about "selling out" and point #3 about "capabilities people" versus "safety people" made me think you had working on capabilities in mind.

If you think it's "fair game to tell them that you think their approach will be ineffective or that they should consider switching to a role at another organization to avoid causing accidental harm," then I'm confused about the framing of the post as being "please don't criticize EAs who 'sell out'," sin... (read more)

2
BrownHairedEevee
1y
Yes! I realize that "capabilities people" was not a good choice of words. It's a shorthand based on phrases I've heard people use at events.

(Realizing that it would be hypocritical for me not to say this, so I'll add: if you're working on capabilities at an AGI lab, I do think you're probably making us less safe and could do a lot of good by switching to, well, nearly anything else, but especially safety research.)

Setting aside the questions of the impacts of working at these companies, it seems to me like this post prioritizes the warmth and collegiality of the EA community over the effects that our actions could have on the entire rest of the planet in a way that makes me feel pretty nervous. If we're trying in good faith to do the most good, and someone takes a job we think is harmful, it seems like the question should be "how can I express my beliefs in a way that is likely to be heard, to find truth, and not to alienate the person?" rather than "is it polite to... (read more)

Also seems important to note: EA investing should have not only different expectations about the world but also different goals about the point of finance. We are relatively less interested in moving money from worlds where we're rich to worlds where we're poor (the point of risk aversion) and more interested in moving it from worlds where it's less useful to where it's more useful.

Concretely: if it turns out we're wrong about AI, this is a huge negative update on the usefulness of money to buy impact, since influencing this century's development generally... (read more)

Agree with Xavier's comment that people should consider reversing the advice, but generally confused/worried that this post is getting downvoted (13 karma on 18 votes as of this writing). In general, I want the forum to be a place where bold, truth-seeking claims about how to do more good get attention. My model of people downvoting this is that they are worried that this will make people work harder despite this being suboptimal. I think that people can make these evaluations well for themselves, and that it's good to present people with information and a... (read more)

I agree that the forum should be a place for bold, truth-seeking claims about how to do more good. However, I think recommending people try taking stimulants is quite different to recommending people try working longer hours. The downside risks of harm are higher and more serious, and more difficult for readers to interpret for themselves. I don't think this post is well argued, but the part on stimulants is particularly weak.

Heavily cosigned (as someone who has worked with some of Nick's friends whom he got into EA, not as someone who's done a particularly great job of this myself). I encourage readers of this post to think of EA-sympathetic and/or very talented friends of theirs and find a time to chat about how they could get involved!

Wait, why have we not tried to buy ea.org?

1
Robi Rahman
2y
I heard CEA offered them $10k and they refused to sell it.

As your fellow Cantabrigian I have some sympathies for this  argument. But I'm confused about some parts of it and disagree with others:

  • "EA hub should be on the east coast" is one kind of claim. "People starting new EA projects, orgs, and events should do so on the east coast" is a different one. They'd be giving up the very valuable benefits of living near the densest concentrations of other orgs, especially funders. You're right that the reasons for Oxford and the Bay being the two hubs are largely historical rather than practical, but that's the na
... (read more)
3
Eli Rose
2y
Hmm yeah, I went East Coast --> Bay and I somewhat miss the irony.

I think it is a heuristic rather than a pure value. My point in my conversation with Josh was to disentangle these two things — see Footnote 1! I probably should be more clear that these examples are Move 1 in a two-move case for longtermism: first, show that the normative "don't care about future people" thing leads to conclusions you wouldn't endorse, then argue about the empirical disagreement about our ability to benefit future people that actually lies at the heart of the issue.

1
Jack R
2y
I think I understood that's what you were doing at the time of writing, and mostly my comment was about bullets 2-5. E.g. yes "don't care about future people at all" leads to conclusions you wouldn't endorse, but what about discounting future people with some discount rate? I think this is what the common-sense intuition does, and maybe this should be thought of as a "pure value" rather than a heuristic. I wouldn't really know how to answer that question though, maybe it's dissolvable and/or confused.

Borrowing this from some 80k episode of yore, but it seems like another big (but surmountable) problem with neglectedness is which resources count as going towards the problem. Is existential risk from climate change neglected? At first blush, no, hundreds of billions of dollars are going toward climate every year. But how many of these are actually going towards the tail risks, and how much should we downweight them for ineffectiveness, and so on.

Great summary, thanks for posting! Quick question about this line:

surely the major problem in our actual world is that consequentialist behavior leads to poor self-interested outcomes. The implicit view in economics is that it’s society’s job to align selfish interests with public interests

Do you mean the reverse — self-interested behavior leads to poor consequentialist outcomes?

Right, I'm just pointing out that the health/income tradeoff is a very important input that affects all of their funding recommendations.

I'm not familiar with the Global Burden of Disease report, but if Open Phil and GiveWell are using it to inform health/income tradeoffs it seems like it would play a pretty big role in their grantmaking (since the bar for funding is set by being a certain multiple more effective than GiveDirectly!) [edit: also, I just realized that my comment above looked like I was saying "mostly yes" to the question of "is this true, as an empirical matter?" I agree this is misleading. I meant that Linch's second sentence was mostly true; edited to reflect that.]

2
Karthik Tadepalli
2y
Again, it informs only how they trade off health and income. The main point of DALY/QALYs is to measure health effects. And in that regard, EA grantmakers use off-the-shelf estimates of QALYs rather than calculating them. Even if they were to calculate them, the IDinsight study does not have anything in it that would be used to calculate QALYs, it focuses solely on income vs health tradeoffs.

Your understanding is mostly correct. But I often mention this (genuinely very cool) corrective study to the types of political believers described in this post, and they've really liked it too: https://www.givewell.org/research/incubation-grants/IDinsight-beneficiary-preferences-march-2019 [edit: initially this comment began with "mostly yes" which I meant as a response to the second sentence but looked like a response to the first, so I changed it to "your understanding is mostly correct."]

6
Karthik Tadepalli
2y
That seems a bit misleading since the IDinsight study, while excellent, is not actually the basis for QALY estimates as used in e.g. the Global Burden of Disease report. My understanding is that it informs the way GiveWell and Open Philanthropy trade off health vs income, but nothing more than that.

I am generally not that familiar with the creating-more-persons arguments beyond what I've said so far, so it's possible I'm about to say something that the person-affecting-viewers have a good rebuttal for, but to me the basic problem with "only caring about people who will definitely exist" is that nobody will definitely exist. We care about the effects of people born in 2024 because there's a very high chance that lots of people will be born then, but it's possible that an asteroid, comet, gamma ray burst, pandemic, rogue AI, or some other threat could ... (read more)

1
Noah Scales
2y
Thank you for the thorough answer. To me it's a practical matter. Do I believe or not that some set of people will exist?

To motivate that thinking, consider the possibility that ghosts exist, and that their interests deserve account. I consider its probability non-zero because I can imagine plausible scenarios in which ghosts will exist, especially ones in which science invents them. However, I don't factor those ghosts into my ethical calculations with any discount rate. Then there are travelers from parallel universes, again, a potentially huge population with nonzero probability of existing (or appearing) in future. They don't get a discount rate either; in fact I don't consider them at all.

As far as large numbers of future people in the far future, that future is not on the path that humanity walks right now. It's still plausible, but I don't believe in it. So no discount rate for trillions of future people. And, if I do believe in those trillions, still no discount rate. Instead, those people are actual future people having full moral status.

Lukas Gloor's description of contractualism and minimal morality that is mentioned in a comment on your post appeals to me, and is similar to my intuitions about morality in context, but I am not sure my views on deciding altruistic value of actions match Gloor's views.

I have a few technical requirements before I will accept that I affect other people, currently alive or not. Also, I only see those effects as present to future, not present to past. For example, I won't concern myself with the moral impacts of a cheeseburger, no matter what suffering was caused by the production of it, unless I somehow caused that production. However, I will concern myself with what suffering my eating of that burger will cause (not could cause, will cause) in future. And I am accountable for what I caused after I ate cheeseburgers before.

Anyway, belief in a future is a binary thing to me. When I don't know what the future h

I appreciate the intention of keeping argumentative standards on the forum high, but I think this misses the mark. (Edit: I want this comment's tone to come off less as "your criticism is wrong" and more like "you're probably right that this isn't great philosophy; I'm just trying to do a different thing.")

I don't claim to be presenting the strongest case for person-affecting views, and I acknowledge in the post that non-presentist person-affecting views don't have these problems. As I wrote, I have repeatedly encountered these views "in the wild" and am p... (read more)

I anticipate the response being "climate change is already causing suffering now," which is true, even though the same people would agree that the worst effects are decades in the future and mostly borne by future generations.

I basically just sidestep these issues in the post except for alluding to the "transitivity problems" with views that are neutral to the creation of people whose experiences are good. That is, the question of whether future people matter and whether more future people is better than fewer are indeed distinct, so these examples do not fully justify longtermism or total utilitarianism.

Borrowing this point from Joe Carlsmith: I do think that like, my own existence has been pretty good, and I feel some gratitude towards the people who took actions to make it m... (read more)

4
Noah Scales
2y
About MacAskill's Longtermism

Levin, let me reassure you that, regardless of how far in the future they exist, future people that I believe will exist do have moral status to me, or should. However, I see no reason to find more humans alive in the far future to be morally preferable to fewer humans alive in the far future above a population number in the lower millions. Am I wrong to suspect that MacAskill's idea of longtermism includes that a far future containing more people is morally preferable to a far future containing fewer people?

A listing of context-aware vs money-pump conditions

The money pump seems to demonstrate that maximizing moral value inside a particular person-affecting theory of moral value (one that is indifferent toward the existence of nonconceived future people) harms one's own interests.

In context, I am indifferent to the moral status of nonconceived future people that I do not believe will ever exist. However, in the money pump, there is no distinction between people that could someday exist versus will someday exist.

In context, making people is morally dangerous. However, in the money pump, it is morally neutral.

In context, increasing the welfare of an individual is not purely altruistic (for example, wrt everyone else). However, in the money pump, it is purely altruistic.

In context, the harm of preventing conception of additional life is only what it causes those who will live, just like in the money pump.

The resource that you linked on transitivity problems includes a tree of valuable links for me to explore. The conceptual background information should be interesting, thank you.

About moral status meaning outside the context of existent beings

Levin, what are the nonconceived humans (for example, humans that you believe are never conceived) that do not have moral status in your ethical calculations? Are there any conditions in which you do not believe that future beings will exist but you give them moral status anyway?

Further evidence that it's not an applause light: the vast majority of respondents to this Twitter poll about the subject say the jokes have not crossed a line: https://twitter.com/nathanpmyoung/status/1557811908555804674?s=21&t=wdOUd5m4d_ZqiJ-12TLM9Q
