Chi


Is effective altruism growing? An update on the stock of funding vs. people

edit: no longer relevant, since the OP has since been edited. (Thanks!)

Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10.

(emphasis mine)

This would also mean that if you have a 10% chance of succeeding, then the expected value of the path is $300,000–$2 million (and the value of information will be very high if you can determine your fit within a couple of years).

Just to clarify, that's the EV of the path per year, right?

The funding overhang also created bottlenecks for people able to staff projects, and to work in supporting roles. [...]

I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million.

I assume this is also per year?


Clarifying because I think numbers like this are likely to be quoted or vaguely remembered in the future, and it's easy to miss the 'per year' part.

Is effective altruism growing? An update on the stock of funding vs. people

And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.

+1

I think I often hear longtermists discussing funding in EA use the $22 billion figure from Open Philanthropy. And I think people often make the implicit mental move of treating that as the money dedicated to longtermism, even though my understanding is very much that not all of it is available to longtermism.

anoni's Shortform

1.

1.1.: You might want to have a look at a group of positions in population ethics called person-affecting views, some of which include future people and some of which don't. The ones that do often don't care about increasing or decreasing the number of people in the future, but about improving the lives of the future people who will exist anyway. That's compatible with longtermism - not all longtermism is about extinction risk. (See trajectory change and s-risk.)

1.2.: No, we don't just care about humans. In fact, I think it's quite likely that most of the value or disvalue will come from non-human minds. (Though I'm thinking digital minds rather than animals.) But we can't influence how the future will go if we're not around, and many x-risk scenarios would be quite bad full stop and not just bad for humans.

1.3.: You might want to have a look at cluelessness (the EA Forum and the GPI website should have links) or the recent 80,000 Hours podcast with Alexander Berger. Predicting the future and how we can influence it is definitely extremely hard, but I don't think we're decisively in a bad enough position that we can, in good conscience, just throw up our hands and conclude there's definitely nothing to be done here.

 

2.

2.1 + 2.2.: Don't really want to write anything on this right now

2.3.: Definite no. It just argues that trade-offs must be made, and that some bads are even worse than current suffering. Or rather: the amount of bad we can avert is even greater than what we could avert by focusing on current suffering.

2.4.: Don't understand what you're getting at.

 

3.

3.1.: Can't parse the question.

3.2.: I think many longtermists struggle with this. Michelle Hutchinson recently wrote a post on the EA Forum about what still keeps her motivated. You can find it by searching her name on the EA Forum.

3.3.: No. Longtermism per se doesn't say anything about how much to personally sacrifice. You can believe in longtermism + think that you should give away your last penny and work every waking hour in a job you don't like. You can also not be a longtermist and think you should live a comfortable, expensive life because that's what's most sustainable. Some leanings on this question might correlate with whether you're a longtermist or not, but in principle, the question is orthogonal.

 

Sorry if the tone is brash; if so, that's unintentional. I'd normally take much longer over something like this, but I appreciate that you're thinking about this. (Also, I'm writing this as sleep procrastination, and my guilt is driving my typing speed.)

COVID: How did we do? How can we know?

On Human Challenge Trials (HCTs):

Disclaimer: I have been completely unplugged from Covid-19 stuff for over a year, am definitely not an expert on these things (anymore), and am definitely speaking for myself and not for 1Day Sooner (which is more bullish on HCTs).

I worked for 1Day Sooner last year as one of the main people investigating the feasibility and usefulness of HCTs for the pandemic. At least back then (March 2020), we estimated that it would optimistically take 8 months to complete the preparations for an HCT (so not even the HCT itself). Most of this time would be needed for manufacturing and approving the challenge virus, and for dose-finding studies. (You give people some of the virus and check whether it's enough to induce the disease, then repeat with a higher dose, etc.)

I think in a better world, you could probably speed up the approval of the challenge virus and massively parallelize dose-finding to make it less lengthy. I'm not sure how many months that gets you down to, but the 2.5 months you assume for preparation plus the actual HCT seems overly optimistic to me. I still think HCTs should have been prepared, but I'm not sure how much speed that would actually have gained us. More details are here, in the section "PREPARATORY STEPS NEEDED FOR HUMAN CHALLENGE TRIALS" (free access).

There was also some discussion of challenge trials with natural infection (you put people together with infectious people who have Covid-19), which might get around this? But I don't know what came of that (I think it wasn't pursued further?), and I'm not sure how logistically feasible it actually is. (I think it would at least be more difficult politically than a normal HCT.)

Don't think this changes the general thrust of your post, but wanted to push back on this part of it.

(There's some chance I missed followup work, perhaps even by 1Day Sooner itself, that corrects these numbers, in which case I stand embarrassed :) )

An animated introduction to longtermism (feat. Robert Miles)

Note: this is mostly about your earlier videos. I think this one was better done, so maybe my points are redundant. I'm posting this here because the writer has expressed some unhappiness with the reception so far. I watched the other videos some weeks ago and didn't rewatch them for this comment. I also didn't watch the Bitcoin one.

First off, I think trying out EA content on YouTube is really cool (in the sense of potentially high value), really scary, and because of this really cool (in the sense of "cool of you to do this"). Kudos for that. I think this could be really good and valuable if you incorporate feedback and improve over time.

Some reasons why I was/am skeptical of the channel when I watched the videos:

  • For the 4 videos before this one, I didn't see how they were going to help make the world better. (I can tell some hypothetical stories for 3 of them, but I don't think they achieved that goal because of some of the things later in this comment.)
  • I found the title of the Halo effect one aversive. I'm personally fine with a lot of internet meme humour, but I also know some EAs who actually take offense at the Virgin vs. Chad meme. I think for something so outward-facing, I want to avoid controversy where it's unnecessary. (And to be clear: not avoid it where it's necessary.) It also just feels click-baity.
  • Watching the videos, I just didn't feel like I could trust the content. If I didn't know some of the content already, it would be really hard for me to tell from the video whether the content was legitimate science or buzzfeed-level rigour. For example, I really didn't know how to treat the information in the cringe one and basically decided to ignore it. This is not to say that the content wasn't checked and legitimate, just that it's not obvious from the videos. Note that this wasn't true for the longtermism one.
  • I found the perceived jump in topic in the cringe video aversive, and it reinforced my impression that the videos weren't very rigorous/truth-seeking/honest. I was overall kind of confused by that video.
  • I think the above (and the titles) matter because of the kind of crowd you want to attract and retain with your videos.
  • I think the artistic choice is fine on its own, but it also contributes to the impression above. I don't think it would be a problem if it weren't combined with the other things.

In general, the kind of questions I would ask myself, and the reason why I think all of the above are a concern are:

  1. Which kind of people does this video attract?
  2. Which of these people will get involved/in contact with EA because of these videos?
  3. Do we want these people to be involved in the EA project?
  4. Which kind of people does this video turn off?
  5. Which of these people will be turned off of EA in general because of these videos?
  6. Do we want these people to be involved in the EA project?

I'm somewhat concerned that the answer for too many people would be "no" for 3 and "yes" for 6. Obviously there will always be some "no" for 3 and some "yes" for 6, especially for such a broad medium as YouTube, and balancing this is really difficult. (And it's always easier to take the skeptical stance.) But I think I would like to see more to tip the balance a bit.

Maybe one thing that's both a good indicator and important in its own right is the kind of community that forms in the comment section. I've so far been moderately positively surprised by the comment section on the longtermism video and how you're handling it, so maybe this is evidence that my concerns are misplaced. It still seems like something worth paying attention to. (Not claiming I'm telling you anything new.)

I'm not sure what your plans and goals are, but I would probably prioritise getting the overall tone and community of the channel right before trying to scale your audience.

 

Some comments on this video:

  • I thought it was much better in all the regards I mentioned above.
  • There were still some things I felt slightly uneasy about, but they were much, much smaller, and might be idiosyncratic taste or really-into-philosophy-or-even-specific-philosophical-positions type things. I might also only have noticed them in the context of your other videos, and might have been fine with them otherwise. I feel much less confident that they are actually bad. Examples:
    • I felt somewhat unhappy with your presentation of person-affecting views, mostly because there are versions that don't only value people presently alive. (Actually, I'm pretty confused about this. I thought your video explicitly acknowledged that, but then sounded different later. I didn't go back to check again, so feel free to discard this if it's inaccurate.) Note that I sympathise a lot with person-affecting views, so might just be biased and feel attacked.
    • I feel a bit unhappy that trajectory-change wasn't really discussed.
    • I felt somewhat uneasy about the "but what if I tell you that even this is nothing compared to what impact you could have" part when transitioning from speeding up technological progress to extinction risk reduction. It kind of felt buzzfeedy again, but it's plausible I only noticed because I had the context of your other videos. On the more substantive side, I'm not familiar with the discussion around this at all, but I can imagine that whether speeding up growth or preventing extinction risk is more important is an open question to some of the researchers involved? Really don't know though.

 

Again, I think it is really cool and potentially highly valuable that you're doing this, and I have a lot of respect for how you've handled feedback so far. I don't want to discourage you from producing further videos; I just want to give an idea of what some people might be concerned about and why there's not more enthusiasm for your channel so far. As I said, I think this video is definitely a step in (what I consider) the right direction, and I find that encouraging.

 

edit: I've just seen the comment you left on Aaron Gertler's comment about engagement. Maybe this is a crux.

A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it

Hm, I'm a bit unhappy with the framing of symptoms vs. root causes, and I'm skeptical about whether it captures a real thing (when it comes to mental health and drugs vs. therapy). I'm worried that drawing a distinction between the two contributes to the problems alexrjl pointed out.

Note: I have no clinical expertise and am just spitballing. E.g., I understand the following trajectory as archetypal of what others might call "aha! First a patch and then root causes":

[Low energy --> takes antidepressants --> then has enough energy to do therapy and changes thought patterns etc. --> becomes better long-term and afterwards doesn't need antidepressants anymore]

But even if somebody had a trajectory like this, I'm not convinced that the thought patterns should count as the root cause rather than, e.g., the physiological imbalances that gave these kinds of thought patterns a rich feeding ground in the first place (which were addressed by the antidepressants, and perhaps had to be addressed before long-term improvement was possible). This makes me think that even if there is some matter of fact here, it's not particularly meaningful.

(This seems even more true to me for things like ADHD, where I'm not even sure what the root causes would be, though that wasn't central to the OP.)

I think you might plausibly have a different and coherent conception of the root-causes-vs.-symptoms thing, but I'm wary of using that distinction anyway, because "root causes" carries strong normative connotations and people have all kinds of associations with it. (I'd still be curious to hear your conceptualisation if you have one.)

I care much less about/have no particular thoughts on this distinction in non-mental-health cases, which were the focus of the OP.

+1 to appreciating the OP, and I'll probably try out some of the things suggested!

How much do you (actually) work?

Hah! A random, somewhat fun personal anecdote: I think tracking actually helped me a bit with that. When I first started tracking, I was pretty neurotic about doing it super exactly. Having to change my Toggl so frequently, plus seeing the '2 minutes of supposed work X' at the end of the day when looking at my Toggl, was so embarrassing that I improved a bit over time. Now I'm either better at switching less often and less neurotic about tracking, or only the latter. It also makes me feel worse about following some distraction if I know my time is currently being tracked as something else.

Concerns with ACE's Recent Behavior

I might be a little bit less worried about the time delay of the response. I'd be surprised if fewer than say 80% of the people who would say they find this very concerning won't end up also reading the response from ACE.

FWIW, depending on the definition of 'very concerning', I wouldn't find this surprising. I think people often read things, vaguely update, know that there's another side of the story that they don't know, have the thing they read become a lot less salient, happen not to see the follow-up because they don't check the forum much, and end up having an updated opinion (e.g. about ACE in this case) much later without really remembering why.

(E.g., I find myself very often saying things like "oh, there was this EA post that vaguely said X, and maybe you should be concerned about Y because of this, although I don't know how exactly this ended up" when others talk about some X-or-Y-related topic, esp. when the post is a bit older. My model of others is that they then don't go and check, but some of them go on to say "Oh, I think there's a post that vaguely says X, and maybe you should be concerned about Y because of this, but I didn't read it, so don't take me too seriously", etc., and this post sounds like something this could happen with.)

Maybe I'm just particularly epistemically unvirtuous and underestimate others. Maybe for the people who don't end up looking it up, but just carry around this knowingly-shifty-somewhat-updated impression, the information just isn't very decision-relevant and it doesn't matter much. But I generally think that information I got with lots of epistemic disclaimers, and that has lots of disclaimers attached in my head, does influence me quite a bit, and writing this makes me think I should just stop saying dubious things.

Launching a new resource: 'Effective Altruism: An Introduction'

And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I'd like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what "arcs" they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.

 

I agree that that's how I want the eventual decision to be made. I'm not sure what exactly the intended message of this paragraph was, but at least one reading is that you want to discourage comments like Brian's, or otherwise discourage extensive discussion of the contents of the podcast list. In case anyone reads it that way, I strongly disagree.

This has some flavor of 'X at EA organisation Y has probably thought about this for much longer than me/works on this professionally, so I'll defer to them', which I think EAs generally say/think/do too often. It's very easy to miss things even when you've worked on something for a while (esp. if it's more in the range of some months than many years), and outsiders can often actually contribute something important. I think this is already surprisingly often the case with research, and much more so with something like an intro resource, where people's reactions are explicitly part of what you're optimizing for. (Obviously what we care about are new people's reactions, but I still think that people-within-EA reactions are pretty informative for that. And either way, people within EA are clearly stakeholders in what 80,000 Hours does.)

As with everything, there's some risk of the opposite ('not expecting enough of professionals?'), but I think EA is currently too far toward the deferential end (at least within EA; I could imagine it's the opposite with experts outside of EA).

Meta: Rereading your comment, I think it's more likely that it was meant either as a message to 80,000 Hours about how you want them to make their decision eventually, or as something else entirely, but I think it's good to leave thoughts on possible interpretations of what people write.

What material should we cross-post for the Forum's archives?
Answer by Chi, Apr 15, 2021
  • Some stuff from Paul Christiano's 'The sideways view'

In addition to everything that Pablo said (esp. the Tomasik stuff because AFAICT none of his stuff is on the forum?)

Load More