Again, I haven't actually read this, but this article discusses intransitivity in asymmetric person-affecting views, i.e., I think in the language you used: the value of pleasure is contingent in the sense that creating new lives with pleasure has no value, but the disvalue of pain is not contingent in this way. I think you should be able to apply that directly to the other objective-list theories you discuss, not just hedonistic (pleasure-pain) ones.
An alternative way to deal with intransitivity is to say that not existing and any life are incomparable. This leaves you in the unfortunate situation that you can't straightforwardly compare different worlds with different population sizes. I don't know the literature well enough to say how people deal with this. I think there's a long work in progress that's trying to make this version work, and that also tries to make "creating new suffering people is bad" work at the same time.
I think some people probably do think they are comparable but reject that some lives are better than neutral. I expect that's rarer, though?
person-affecting view of ethics, which longtermists reject
I'm a longtermist and I don't reject (asymmetric) person(-moment-)affecting views, at least not those on which necessary people ≠ only present people. I would be very hard-pressed to give a clean formalization of "necessary people," though. I think it's bad if effective altruists believe longtermism can only be justified by astronomical-waste-style arguments and not at all on person-affecting intuitions. (Staying in a broadly utilitarian framework; there are, of course, also obligation-to-ancestors-type justifications for longtermism and the like.) The person-affecting part of me just pushes me toward caring more about trajectory change than extinction risk.
Since I could only ever give very handwavey defenses of person-affecting views, and even more handwavey explanations of my overall moral views: here's a paper by someone who, AFAICT, is at least sympathetic to longtermism and discusses asymmetric person-affecting views. (I have to admit I never got around to reading the paper.) (Writing a paper on an asymmetric person-affecting view obviously also doesn't necessarily mean that the author doesn't actually reject person-affecting views.)
Big fan of what you describe at the end, or something similar.
It's still not great, and it would still be hard to distinguish the people who opted-in and received the codes but decided not to use them from the people who just decided to not receive their codes in the first place
Not sure whether you mean it's hard on the technical side to track who received their code and who didn't (which would be surprising), or whether you mean distinguishing between people who opted out and people who opted in but decided not to see the code. If the latter: is there any downside to just making it clear in the email that not retrieving your code is treated as opting out? People who don't read the email text presumably shouldn't count anyway.
On trust-building, and adding to the voices in favor of making it opt-in: I like many aspects of this game, including the fact that doing the right thing is at least plausibly "do nothing and don't tell anyone you've done/are doing the right thing." But currently, the combination of no opt-in/opt-out and the lack of anonymity doesn't really make it feel like a trust-building exercise to me. It feels more like "don't push the button because people will seriously hate you if you do," and also "people will get angry even if you push the button by honest mistake, so it's probably best to just protect yourself from information for a day" (see last year, although maybe people were more upset about the wording in some of the messages the person who pushed sent than about being tricked into pushing itself?), which isn't great. So I think the lack of opt-in/out makes lots of people upset, and it ruins the original purpose of this event IMO, and everyone ends up unhappy.
edit: Feature already exists, thanks Ruby!
Another feature request: Is it possible to make other people's predictions invisible by default and then reveal them if you'd like? (Similar to how blacked-out spoilers work, which you can hover over to see the text.)
I wanted to add a prediction but then noticed that I heavily anchored on the previous responses and didn't end up doing it.
edit: no longer relevant since OP has been edited since. (Thanks!)
Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10.
This would also mean that if you have a 10% chance of succeeding, then the expected value of the path is $300,000–$2 million (and the value of information will be very high if you can determine your fit within a couple of years).
Just to clarify, that's the EV of the path per year, right?
The funding overhang also created bottlenecks for people able to staff projects, and to work in supporting roles. [...]
I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million.
I assume this is also per year?
Clarifying because I think numbers like this are likely to be quoted/vaguely remembered in the future, and it's easy to miss the per year part.
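To make the per-year reading concrete, here's the arithmetic I take the quoted figures to rest on (my reconstruction, not stated explicitly in the OP): a good fit for one of these roles is valued at roughly $3M–$20M of donations per year, and a 10% chance of success then gives $300k–$2M of expected value per year.

```python
def expected_value_per_year(value_if_good_fit_per_year, p_success):
    """Expected annual value of pursuing the path: probability of
    being a good fit times the annual value if you are one."""
    return p_success * value_if_good_fit_per_year

# Assumed inputs reconstructed from the quoted post:
low = expected_value_per_year(3_000_000, 0.10)    # lower bound
high = expected_value_per_year(20_000_000, 0.10)  # upper bound
```

This matches the quoted $300,000–$2 million range only if both the input valuations and the resulting expected value are understood as per-year figures, which is the point of the clarification.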
And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.
I often hear longtermists discussing funding in EA cite the $22 billion number from Open Philanthropy, and I think people often make the implicit mental move of treating that as the money dedicated to longtermism, even though my understanding is that it's very much not all available to longtermism.
1.1.: You might want to have a look at a group of positions in population ethics called person-affecting views, some of which include future people and some of which don't. The ones that do often don't care about increasing/decreasing the number of people in the future, but about improving the lives of the future people who will exist anyway. That's compatible with longtermism; not all longtermism is about extinction risk. (See trajectory change and s-risks.)
1.2.: No, we don't just care about humans. In fact, I think it's quite likely that most of the value or disvalue will come from non-human minds. (Though I'm thinking digital minds rather than animals.) But we can't influence how the future will go if we're not around, and many x-risk scenarios would be quite bad full stop and not just bad for humans.
1.3.: You might want to have a look at cluelessness (the EA Forum and the GPI website should have links) or the recent 80,000 Hours podcast with Alexander Berger. Predicting the future and how we can influence it is definitely extremely hard, but I don't think we're in such a bad position that we can, in good conscience, just throw our hands up and conclude there's definitely nothing to be done here.
2.1 + 2.2.: Don't really want to write anything on this right now
2.3.: Definite no. It just argues that trade-offs must be made, and that some bads are worse even than current suffering. Or rather: the amount of bad we can avert is greater even than if we focus on current suffering.
2.4: Don't understand what you're getting at.
3.1.: Can't parse the question
3.2.: I think many longtermists struggle with this. Michelle Hutchinson recently wrote a post on the EA Forum about what still keeps her motivated; you can find it by searching her name on the forum.
3.3.: No. Longtermism per se doesn't say anything about how much to personally sacrifice. You can be a longtermist and think you should give away your last penny and work every waking hour at a job you don't like. You can reject longtermism and think you should live a comfortable, expensive life because that's what's most sustainable. Leanings on this question might correlate with whether you're a longtermist, but in principle the questions are orthogonal.
Sorry if the tone is brash; if so, that's unintentional (I tend to be really slow otherwise), and I appreciate that you're thinking about this. (Also, I'm writing this as sleep procrastination, and my guilt is driving my typing speed.)
On Human Challenge Trials (HCTs):
Disclaimer: I have been completely out of the loop on Covid-19 stuff for over a year, am definitely not an expert on these things (anymore), and am definitely speaking for myself and not 1Day Sooner (which is more bullish on HCTs).
I worked for 1Day Sooner last year as one of the main people investigating the feasibility and usefulness of HCTs for the pandemic. At least back then (March 2020), we estimated that it would optimistically take 8 months just to complete the preparations for an HCT (so not even the HCT itself). Most of this time would be needed for manufacturing and approving the challenge virus, and for dose-finding studies. (You give people some of the virus and check whether it's enough to induce the disease, then repeat with a higher dose, etc.)
I think in a better world, you could probably speed up approval of the challenge virus and massively parallelize dose-finding to make it less lengthy. I'm not sure how many months that gets you down to, but the 2.5 months you assume for preparation plus the actual HCT seems overly optimistic to me. I still think HCTs should have been prepared, but I'm not sure how much speed that would actually have gained us. More details here in the section "PREPARATORY STEPS NEEDED FOR HUMAN CHALLENGE TRIALS" (free access).
There was also some discussion of challenge trials with natural infection (you put people together with infectious people who have Covid-19), which might get around this? But I don't know what came of that (I think it wasn't pursued further?), and I'm not sure how logistically feasible it actually is. (I think it would at least be more difficult politically than a normal HCT.)
Don't think this changes the general thrust of your post, but wanted to push back on this part of it.
(There's some chance I missed followup work, perhaps even by 1Day Sooner itself, that corrects these numbers, in which case I stand embarrassed :) )
Note: this is mostly about your earlier videos. I think this one was better done, so maybe my points are redundant. Posting this here because the writer has expressed some unhappiness with reception so far. I've watched the other videos some weeks ago and didn't rewatch them for this comment. I also didn't watch the bitcoin one.
First off, I think trying out EA content on YouTube is really cool (in the sense of potentially high value), really scary, and because of this really cool (in the sense of "of you to do this"). Kudos for that. I think this could be really good and valuable if you incorporate feedback and improve over time.
Some reasons why I was/am skeptical of the channel when I watched the videos:
In general, the kind of questions I would ask myself, and the reason why I think all of the above are a concern are:
I'm somewhat concerned that the answer for too many people would be "no" for 3 and "yes" for 6. Obviously there will always be some "no" for 3 and some "yes" for 6, especially for such a broad medium as YouTube, and balancing this is really difficult. (And it's always easier to take the skeptical stance.) But I think I would like to see more to tip the balance a bit.
Maybe one thing that's both a good indicator and important in its own right is the kind of community that forms in the comment section. I've so far been moderately positively surprised by the comment section on the longtermism video and how you're handling it, so maybe this is evidence that my concerns are misplaced. It still seems like something worth paying attention to. (Not claiming I'm telling you anything new.)
I'm not sure what your plans and goals are, but I would probably prioritise getting the overall tone and community of the channel right before trying to scale your audience.
Some comments on this video:
Again, I think it is really cool and potentially highly valuable that you're doing this, and I have a lot of respect for how you've handled feedback so far. I don't want to discourage you from producing further videos; I just want to give an idea of what some people might be concerned about, and why there isn't more enthusiasm for your channel so far. As I said, I think this video is definitely a step in (what I consider) the right direction, and I find that encouraging.
edit: Just seen the comment you left on Aaron Gertler's comment about engagement. Maybe this is a crux.
Hm, I'm a bit unhappy with the framing of symptoms vs. root causes, and I'm skeptical about whether it captures a real thing (when it comes to mental health and drugs vs. therapy). I'm worried that drawing the distinction between the two contributes to the problems alexrjl pointed out.
Note: I have no clinical expertise and am just spitballing. E.g., I understand the following trajectory as archetypical of what others might call "aha! First a patch, then root causes":
[Low energy --> takes antidepressants --> then has enough energy to do therapy & changes thought patterns etc. --> becomes better long-term and afterwards doesn't need antidepressants anymore]
But even if somebody had a trajectory like this, I'm not convinced that the thought patterns should count as the root cause rather than, e.g., the physiological imbalances that gave these kinds of thought patterns a rich feeding ground in the first place (which were addressed by the antidepressants, and perhaps had to be addressed first before long-term improvement was possible). This makes me think that even if there is some matter of fact here, it's not particularly meaningful.
(This seems even more true to me for things like ADHD, where I'm not even sure what the root causes would be, but that wasn't central to the OP.)
I think you might plausibly have a different, coherent conception of the root-causes-vs.-symptoms thing, but I'm wary of using that distinction anyway, because "root causes" is pretty normatively connotated and people bring all kinds of associations to it. (I'd still be curious to hear your conceptualisation if you have one.)
I care much less/have no particular thoughts on this distinction in non-mental-health cases, which were the focus of OP.
+1 to appreciating the OP, and I'll probably try out some of the things suggested!