Chi


Why I am probably not a longtermist

Again, I haven't actually read this, but this article discusses intransitivity in asymmetric person-affecting views, i.e., I think, in the language you used: the value of pleasure is contingent in the sense that creating new lives with pleasure has no value, but the disvalue of pain is not contingent in this way. I think you should be able to directly apply that to the other objective-list theories you discuss, not just hedonistic (pleasure-pain) ones.

An alternative way to deal with intransitivity is to say that not existing and any life are incomparable. This gives you the unfortunate situation that you can't straightforwardly compare different worlds with different population sizes. I don't know enough about the literature to say how people deal with this. I think there's a longer piece of work in progress that tries to make this version work and that also tries to make "creating new suffering people is bad" work at the same time.

I think some people probably do think the two are comparable but reject that some lives are better than neutral. I expect that's rarer though?

Noticing the skulls, longtermism edition

person-affecting view of ethics, which longtermists reject

I'm a longtermist and I don't reject (asymmetric) person(-moment-)affecting views, at least not those that think necessary ≠ only present people. I would be very hard-pressed to give a clean formalization of necessary people though. I think it's bad if effective altruists think longtermism can only be justified with astronomical waste-style arguments and not at all if someone has person-affecting intuitions. (Staying in a broadly utilitarian framework. There are, of course, also obligation-to-ancestor-type justifications for longtermism or similar.) The person-affecting part of me just pushes me in the direction of caring more about trajectory change than extinction risk.

Since I could only ever give very handwavey defenses of person-affecting views and even handwavier explanations of my overall moral views: here's a paper by someone who, AFAICT, is at least sympathetic to longtermism and discusses asymmetric person-affecting views. (I have to admit I never got around to reading the paper.) (Obviously, writing a paper on an asymmetric person-affecting view also doesn't necessarily mean that the author doesn't actually reject person-affecting views.)

How would you run the Petrov Day game?

Big fan of what you describe at the end, or something similar.

It's still not great, and it would still be hard to distinguish the people who opted-in and received the codes but decided not to use them from the people who just decided to not receive their codes in the first place

Not sure whether you mean it's hard from the technical side to track who received their code and who didn't (which would be surprising) or whether you mean distinguishing between people who opted out and people who opted in but decided not to see the code. If the latter: Any downside to just making it clear in the email that not receiving your code is treated as opting out? People who don't read the email text should presumably not count anyway.

On the trust-building point, and to add to the voices in favor of making it opt-in: I like many aspects of this game, including the fact that doing the right thing is at least plausibly "do nothing and don't tell anyone you've done/you're doing the right thing." But currently, the combination of there being no opt-in/opt-out and the lack of anonymity doesn't really make it feel like a trust-building exercise to me. It feels more like "Don't push the button because people will seriously hate you if you do" and also "people will also get angry if you push the button because of an honest mistake, so it's probably best to just protect yourself from information for a day" (see last year - although maybe people were more upset about the wording of some of the messages the person who pushed sent, rather than about being tricked into pushing itself?), which isn't great. So, I think the lack of opt-in/opt-out makes lots of people upset and ruins the original purpose of this event IMO, and everyone is unhappy.

Honoring Petrov Day on the EA Forum: 2021

edit: Feature already exists, thanks Ruby!

Another feature request: Is it possible to make other people's predictions invisible by default and then reveal them if you'd like? (Similar to how blacked-out spoilers work, which you can hover over to see the text.)

I wanted to add a prediction but then noticed that I had heavily anchored on the previous responses and didn't end up doing it.

Is effective altruism growing? An update on the stock of funding vs. people

edit: no longer relevant, since the OP has been edited. (Thanks!)

Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10.

(emphasis mine)

This would also mean that if you have a 10% chance of succeeding, then the expected value of the path is $300,000–$2 million (and the value of information will be very high if you can determine your fit within a couple of years).

Just to clarify, that's the EV of the path per year, right?

The funding overhang also created bottlenecks for people able to staff projects, and to work in supporting roles. [...]

I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million.

I assume this is also per year?


Clarifying because I think numbers like this are likely to be quoted or vaguely remembered in the future, and it's easy to miss the per-year part.

Is effective altruism growing? An update on the stock of funding vs. people

And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.

+1

I think I often hear longtermists discuss funding in EA and use the $22 billion number from Open Philanthropy. And I think people often make the implicit mental move of thinking that's also the money dedicated to longtermism, even though my understanding is very much that not all of it is available to longtermism.

anoni's Shortform

1.

1.1.: You might want to have a look at a group of positions in population ethics called person-affecting views, some of which include future people and some of which don't. The ones that do often don't care about increasing or decreasing the number of people in the future, but about improving the lives of future people who will exist anyway. That's compatible with longtermism - not all longtermism is about extinction risk. (See trajectory change and s-risks.)

1.2.: No, we don't just care about humans. In fact, I think it's quite likely that most of the value or disvalue will come from non-human minds. (Though I'm thinking digital minds rather than animals.) But we can't influence how the future will go if we're not around, and many x-risk scenarios would be quite bad full stop and not just bad for humans.

1.3.: You might want to have a look at cluelessness (the EA Forum and the GPI website should have links) or the recent 80,000 Hours podcast with Alexander Berger. Predicting the future and how we can influence it is definitely extremely hard, but I don't think we're decisively in a bad enough position that we can, in good conscience, just throw our hands up and conclude there's definitely nothing to be done here.

 

2.

2.1 + 2.2.: Don't really want to write anything on this right now

2.3.: Definite no. It just argues that trade-offs must be made, and some bads are worse even than current suffering. Or rather: the amount of bad we can avert is greater even than if we focus on current suffering.

2.4: Don't understand what you're getting at.

 

3.

3.1.: Can't parse the question.

3.2.: I think many longtermists struggle with this. Michelle Hutchinson wrote a post on the EA Forum recently on what still keeps her motivated. You can find it by searching her name on the EA Forum.

3.3.: No. Longtermism per se doesn't say anything about how much to personally sacrifice. You can believe in longtermism + think that you should give away your last penny and work every waking hour in a job you don't like. You can be a non-longtermist and think you should live a comfortable, expensive life because that's what's most sustainable. Some leanings on this question might correlate with whether you're a longtermist or not, but in principle, the question is orthogonal.

 

Sorry if the tone is brash; if so, that's unintentional. I appreciate that you're thinking about this. (Also, I'm writing this as sleep procrastination, and my guilt is driving my typing speed; I tend to be really slow otherwise.)

COVID: How did we do? How can we know?

On Human Challenge Trials (HCTs):

Disclaimer: I have been completely out of the loop on Covid-19 stuff for over a year, am definitely not an expert on these things (anymore), and am definitely speaking for myself and not 1Day Sooner (which is more bullish on HCTs).

I worked for 1Day Sooner last year as one of the main people investigating the feasibility and usefulness of HCTs for the pandemic. At least back then (March 2020), we estimated that it would optimistically take 8 months to complete the preparations for an HCT (so not even the HCT itself). Most of this time would be used for manufacturing and approving the challenge virus, and for dose-finding studies. (You give people some of the virus and check whether it's enough to induce the disease, then repeat with a higher dose, etc.)

I think in a better world, you could probably speed up the approval of the challenge virus and massively parallelize dose-finding to make it less lengthy. Not sure how many months that gets you down to, but the 2.5 months for preparation plus the actual HCT that you assume seem overly optimistic to me. I still think HCTs should have been prepared, but I'm not sure how much speed that would have actually gained us. More details here, in the section "PREPARATORY STEPS NEEDED FOR HUMAN CHALLENGE TRIALS" (free access).

There was also some discussion of challenge trials with natural infection (you put people together with infectious people who have Covid-19), which might get around this? But I don't know what came out of that (I think it wasn't pursued further?). Not sure how logistically feasible that actually is. (I think it would at least be more difficult politically than a normal HCT.)

Don't think this changes the general thrust of your post, but wanted to push back on this part of it.

(There's some chance I missed follow-up work, perhaps even by 1Day Sooner itself, that corrects these numbers, in which case I stand embarrassed :) )

An animated introduction to longtermism (feat. Robert Miles)

Note: this is mostly about your earlier videos. I think this one was better done, so maybe my points are redundant. Posting this here because the writer has expressed some unhappiness with the reception so far. I watched the other videos some weeks ago and didn't rewatch them for this comment. I also didn't watch the bitcoin one.

First off, I think trying out EA content on YouTube is really cool (in the sense of potentially high value), really scary, and because of this really cool (in the sense of "cool of you to do this"). Kudos for that. I think this could be really good and valuable if you incorporate feedback and improve over time.

Some reasons why I was/am skeptical of the channel when I watched the videos:

  • For the 4 videos before this one, I didn't see how they were going to help make the world better. (I can tell some hypothetical stories for 3 of them, but I don't think they achieved that goal because of some of the things later in this comment.)
  • I found the title of the Halo effect one aversive. I'm personally fine with a lot of internet meme humour, but I also know some EAs who actually take offense at the Virgin vs. Chad meme. I think for something so outward-facing, I want to avoid controversy where it's unnecessary. (And to be clear: not avoid it where it's necessary.) It also just feels clickbaity.
  • Watching the videos, I just didn't feel like I could trust the content. If I didn't know some of the content already, it would be really hard for me to tell from the videos whether the content was legitimate science or Buzzfeed-level rigour. For example, I really didn't know how to treat the information in the cringe one and basically decided to ignore it. This is not to say that the content wasn't checked and legitimate, just that it's not obvious from the videos. Note that this wasn't true for the longtermism one.
  • I found the perceived jump in topic in the cringe video aversive, and it reinforced my impression that the videos weren't very rigorous/truthseeking/honest. I was overall kind of confused by that video.
  • I think the above (and the titles) matter because of the kind of crowd you want to attract and retain with your videos.
  • I think the artistic style is a fine choice on its own, but it also contributes to this impression. I don't think it would be a problem if it weren't combined with the other things.

In general, the kind of questions I would ask myself, and the reason why I think all of the above are a concern are:

  1. Which kind of people does this video attract?
  2. Which of these people will get involved/in contact with EA because of these videos?
  3. Do we want these people to be involved in the EA project?
  4. Which kind of people does this video turn off?
  5. Which of these people will be turned off of EA in general because of these videos?
  6. Do we want these people to be involved in the EA project?

I'm somewhat concerned that the answer for too many people would be "no" for 3 and "yes" for 6. Obviously there will always be some "no" for 3 and some "yes" for 6, especially for such a broad medium as YouTube, and balancing this is really difficult. (And it's always easier to take the skeptical stance.) But I think I would like to see more done to tip the balance a bit.

Maybe one thing that's both a good indicator and important in its own right is the kind of community that forms in the comment section. I've so far been moderately positively surprised by the comment section on the longtermism video and how you're handling it, so maybe this is evidence that my concerns are misplaced. It still seems like something worth paying attention to. (Not claiming I'm telling you anything new.)

I'm not sure what your plans and goals are, but I would probably prioritise getting the overall tone and community of the channel right before trying to scale your audience.

 

Some comments on this video:

  • I thought it was much better in all the regards I mentioned above.
  • There were still some things I felt slightly uneasy about, but they were much, much smaller, and might be idiosyncratic taste or really-into-philosophy-or-even-specific-philosophical-positions type things. I might also have just noticed them in the context of your other videos and might have been fine with them otherwise. I feel much less confident that they are actually bad. Examples:
    • I felt somewhat unhappy with your presentation of person-affecting views, mostly because there are versions that don't only value people presently alive. (Actually, I'm pretty confused about this. I thought your video explicitly acknowledged that, but then sounded different later. I didn't go back to check, so feel free to discard this if it's inaccurate.) Note that I sympathise a lot with person-affecting views, so I might just be biased and feeling attacked.
    • I feel a bit unhappy that trajectory-change wasn't really discussed.
    • I felt somewhat uneasy about the "but what if I tell you that even this is nothing compared to what impact you could have" part when transitioning from speeding up technological progress to extinction risk reduction. It kind of felt Buzzfeed-y again, but it's plausible I only noticed it because I had the context of your other videos. On the more substantive side, I'm not familiar with the discussion around this at all, but I can imagine that whether speeding up growth or preventing extinction risk is more important is an open question to some of the researchers involved? Really don't know though.

 

Again, I think it is really cool and potentially highly valuable that you're doing this, and I have a lot of respect for how you've handled feedback so far. I don't want to discourage you from producing further videos; I just want to give an idea of what some people might be concerned about and why there's not more enthusiasm for your channel so far. As I said, I think this video is definitely a step in the right direction IMO, and I find that encouraging.

 

edit: I've just seen the comment you left on Aaron Gertler's comment about engagement. Maybe this is a crux.

A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Hm, I'm a bit unhappy with the framing of symptoms vs. root causes, and I'm skeptical about whether it captures a real thing (when it comes to mental health and drugs vs. therapy). I'm worried that drawing the distinction between the two contributes to the problems alexrjl pointed out.

Note: I have no clinical expertise and am just spitballing. E.g., I understand the following trajectory as archetypical of what others might call "aha! First a patch and then root causes":

[Low energy --> takes antidepressants --> then has enough energy to do therapy & changes thought patterns etc. --> becomes better long-term and afterwards doesn't need antidepressants anymore]

But even if somebody had a trajectory like this, I'm not convinced that the thought patterns should count as the root cause rather than, e.g., the physiological imbalances that gave these kinds of thought patterns a rich feeding ground in the first place (which were addressed by the antidepressants, and which perhaps had to be addressed first before long-term improvement was possible). This makes me think that even if there is some matter of fact here, it's not particularly meaningful.

(This seems even more true to me for things like ADHD, where I'm not even sure what the root causes would be, but that wasn't central to the OP.)

I think you might plausibly have a different and coherent conception of the root causes vs. symptoms distinction, but I'm wary of using that distinction anyway because "root causes" is pretty normatively loaded, and people have all kinds of associations with it. (I'd still be curious to hear your conceptualisation if you have one.)

I care much less about/have no particular thoughts on this distinction in non-mental-health cases, which were the focus of the OP.

+1 to appreciating the OP, and I'll probably try out some of the things suggested!
