Recent Discussion

Introduction

Several recent popular posts (here, here, and here) have made the case that existential risks (x-risks) should be introduced without appealing to longtermism or the idea that future people have moral value. They tend to argue or imply that prioritizing x-risks would still be justified even without caring about future people. I felt intuitively skeptical of this claim[1] and decided to stress-test it.

In this post, I:

  1. Argue that prioritizing x-risks over near-term interventions and global catastrophic risks may require caring about future people. (More)
  2. Disambiguate connotations of "longtermism", and suggest a strategy for introducing the priority of existential risks. (More)
  3. Review and respond to previous articles, which mostly argued that longtermism isn't necessary for prioritizing existential risks. (More)

Prioritizing x-risks may require caring about future people

I’ll do some rough analyses on...

Probably a very small share of the targeted animals, and maybe none, exist at the time a project is started or a donation is made. Farmed chickens, for example, live only 40 days to 2 years, and the animals that benefit would normally be ones born and raised into different systems, rather than animals already alive at the time of reform whose conditions are changed. Producers aren't going to move live egg-laying hens out of cages into cage-free systems to keep farming them; it's the next generation of hens that will simply never be farmed in cages at all.

Many anima...

1 · elifland · 18m
Thanks for clarifying, and apologies for making an incorrect assumption about your assessment of tractability. I've edited your tl;dr into the post and added a link to this comment.
9 · Lucas Lewit-Mendes · 4h
Thanks for this really well-written post; I particularly like how you clarified the different connotations of longtermism, and also the summary table of cost-effectiveness. One thing to note is that an X-risk event would wipe out not only humans, but also the billions of factory-farmed animals. Taking animal suffering into account would dramatically worsen the cost-effectiveness of X-risk work from a neartermist point of view. I think this implies longtermism is necessary to justify working on X-risk (at least until factory farming is phased out).
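
To make the direction of this effect concrete, here is a rough back-of-the-envelope sketch in Python. Every number is a hypothetical placeholder (the welfare weights especially), not an estimate from the post or this comment:

```python
# All figures below are hypothetical placeholders, for illustration only.
humans = 8e9            # people alive today (rough)
farmed_animals = 30e9   # farmed land animals alive at any time (rough assumption)
human_weight = 1.0      # welfare weight per human life saved (normalized)
animal_weight = -0.05   # hypothetical net-negative welfare weight per farmed animal

# Neartermist value of preventing an extinction event, ignoring animals:
value_humans_only = humans * human_weight                                 # 8.0e9

# Same calculation once net-negative farmed-animal lives are counted:
value_with_animals = value_humans_only + farmed_animals * animal_weight  # 6.5e9

print(value_humans_only, value_with_animals)
```

Under these made-up weights, counting farmed animals cuts the neartermist value of extinction prevention by roughly a fifth; more pessimistic animal-welfare assumptions would cut it further.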
1 · Charlie Dougherty · 34m
OK! Can you give an example of "making things better" that might make a culture exceptional here? I think most societies would be upset if you suggested that they didn't care about future generations, even if they were not in fact very good at acting on it.

Just a very broad definition of 'helping' – for example, warning about droughts via hunger stones (https://en.m.wikipedia.org/wiki/Hunger_stone).

4 · freedomandutility · 1h
Sorry, I did mean descendants. This is quite interesting, though!

I'm trying to decide whether to travel to EAG DC from Europe, and it would help to know who's attending. My hypothesis is that many AI safety folks (the area I'm currently most interested in) will have attended EAG SF and won't come to DC, and that DC will be more useful for policy folks. Is this roughly correct? I've only ever attended EAG London, so any assessment would help.

I am, and I'm interested in policy, more specifically biosecurity and nuclear issues. And yes, based on my discussions with folks in SF, EAG DC will have many more policy people than EAG SF. Let's connect: https://www.linkedin.com/in/nickmoulios/


These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.

Job listings that these organizations highlighted are at the top of this post. Some of the jobs have extremely pressing deadlines (including jobs whose applications close today). 

You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Note that there's also an "org update" tag, where you can find more news and updates that are not part of this consolidated series. The organizations are in alphabetical order, starting with F this week.[1]

Job Listings

See also: Who's hiring? May-September 2022. The jobs below will also appear in the upcoming EA Newsletter.

Applications due soon

80,000 Hours

...

Summary

I have consolidated publicly available grants data from EA organizations into a spreadsheet, which I intend to update periodically[1]. Totals pictured below.

 

Figure 1: publicly available grants by recipient category
Figure 2: publicly available grants by source

(edit: swapped color palette to make graphs easier to read)

Observations

  • $2.6Bn in grants on record since 2012, about 63% of which went to Global Health.
  • With the addition of FTX and impressive fundraising by GiveWell, Animal Welfare looks even more neglected in relative terms—effective animal charities will likely receive something like 5% of EA funding in 2022, the smallest figure since 2015 by a wide margin.
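
For readers who want to slice the data themselves, here is a minimal pandas sketch of how totals like those in the figures could be reproduced from a CSV export of the spreadsheet. The file name and column names ("source", "recipient_category", "amount_usd") are assumptions for illustration; the actual sheet may use different ones:

```python
import pandas as pd

# Hypothetical CSV export of the grants spreadsheet.
grants = pd.read_csv("ea_grants.csv")

# Figure 1: totals by recipient category, plus each category's share of the total.
by_category = grants.groupby("recipient_category")["amount_usd"].sum()
print(by_category.sort_values(ascending=False))
print((by_category / by_category.sum()).round(3))

# Figure 2: totals by funding source.
by_source = grants.groupby("source")["amount_usd"].sum()
print(by_source.sort_values(ascending=False))
```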

Notes on the data

NB: This is just one observer's tally of public data. Sources are cited in the spreadsheet; I am happy to correct any errors as they are pointed...

Me too, same for other areas as well!

Kind of bad that we didn't have this overview before; it seems very basic to have! So thanks for doing it.

2 · MatthewDahlhausen · 1h
I find this website helpful for picking colorblind-friendly color schemes: https://colorbrewer2.org/
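
As a quick illustration of this suggestion: matplotlib bundles many of the colorbrewer2.org schemes as named colormaps (e.g. "Dark2", "Set2", "Paired"), so a palette picked on that site can often be applied directly. This is just a sketch with made-up categories and values:

```python
import matplotlib.pyplot as plt

# "Dark2" is one of the ColorBrewer qualitative schemes bundled with matplotlib.
palette = plt.get_cmap("Dark2").colors

categories = ["A", "B", "C", "D"]   # made-up placeholder categories
values = [4, 3, 2, 1]               # made-up placeholder values

plt.bar(categories, values, color=palette[: len(categories)])
plt.ylabel("Value (arbitrary units)")
plt.title("Example bar chart using a ColorBrewer palette")
plt.show()
```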
2 · david_reinstein · 2h
Fair point, but it still may be worth joining forces or coordinating with Hamish.

I have previously encountered EAs whose beliefs about EA communication seem jaded to me. These are either "Trying to make EA seem less weird is an unimportant distraction, and we shouldn't concern ourselves with it" or "Sounding weird is an inherent property of EA/EA cause areas, and making it seem less weird is not tractable, or at least not without compromising important aspects of the movement." I would like to challenge both of these views.

“Trying to make EA seem less weird is unimportant”

As Peter Wildeford explains in this LessWrong post:

People take weird opinions less seriously. The absurdity heuristic is a real bias that people -- even you -- have. If an idea sounds weird to you, you're less likely to try and believe it, even

...

I liked this comment!

In particular, I think the people who are good at "not making EA seem weird" (while still communicating all the things that matter – I agree with the points Rohin is making in the thread) are also (often) the ones who have a deeper (or more "authentic") understanding of the content.

There are counterexamples, but consider, for illustration, that Yudkowsky's argument style and the topics he focuses on would seem a whole lot weirder if he wasn't skilled at explaining complex issues. So, understanding what you talk about doesn't always mak...

2 · Rohin Shah · 11h
Idk, what are you trying to do with your illegible message? If you're trying to get people to do technical research, then you probably just got them to work on a different version of the problem that isn't the one that actually mattered. You'd probably be better off targeting a smaller number of people with a legible message. If you're trying to get public support for some specific regulation, then yes by all means go ahead with the illegible message (though I'd probably say the same thing even given longer timelines; you just don't get enough attention to convey the legible message). TL;DR: Seems to depend on the action / theory of change more than timelines.

Summary

We are EAs who have done or are about to do the Schwarzman Scholars program; Saad just graduated in 2022 and Kevin is an incoming Schwarzman Scholar starting the program in fall 2022. While we feel that this post likely gives a good sense of what one could gain from the program, individuals' experiences have varied widely. In particular, the experience and potential value of doing this program might look very different if you are from China. 

We have written this post with input and feedback from a range of Schwarzman alumni. Special thanks to Jason Zhou, Deborah Tien, Miro Pluckebaum, and John Petrie for comments on earlier drafts. Any errors that remain are ours. 

  • Schwarzman Scholars is a fully-funded, 1-year Master’s in Global Affairs and Leadership at Tsinghua
...

Thanks for this post! Schwarzman seems especially promising for folks interested in policy, where a grad degree is often needed and where China expertise is valued.

I think it's worth emphasizing that these degrees only take one year. This is a BIG advantage relative to e.g. law school, an MBA, and even many/most MPP programs. If you think education  (particularly non-STEM grad school) is mostly about signaling rather than learning, then the opportunity cost of an extra one or two years of schooling is really significant. Schwarzman looks like a great way to get a shiny grad credential in a very reasonable amount of time. 

2 · Jordan_Schneider · 1h
Also, this might be a useful thing to throw into the relevant links section: https://www.chinatalk.media/p/china-policy-an-early-career-guide
7 · Jordan_Schneider · 1h
Hey folks -- I wrote a similar review/advice article aimed at the Yenching Scholarship that you might find interesting: https://jorschneider.com/2020/11/16/thoughts-on-yenching-academy/

Yenching Academy (https://yenchingacademy.pku.edu.cn/) and the Schwarzman Scholars program (http://schwarzmanscholars.org/) comprise China's attempt to set up a Rhodes/Marshall-style master's degree. Both are fully funded master's degrees made up mostly of non-Chinese students. I was a Yenching Scholar in its third cohort, from 2017-2019. What follows are some of my reflections on the experience and advice for applicants considering these programs.

Application process (from 2016… there have been two deans since I applied)

  • There was a big emphasis on the 'why China' essay, so be sure to explain what role you expect China to have in your future professional life and why Yenching will help you achieve those goals.
  • Of late, I hear more emphasis has gone to demonstrated interest in China through language study and past academic work. That said, there are generally a few people in each cohort who haven't studied Mandarin (though this is easier to pull off if you're from a region where Chinese instruction is less accessible, like South Asia or Africa).

Academics

  • My classes at Yenching were significantly less demanding than advanced undergrad courses at top schools in the US, with many not going much deeper than what you'd learn in a lecture or seminar aimed at freshmen. Yenching suffers from a 'principal contradiction': it wants students with diverse academic interests, but it has to teach in English at a Chinese university. There are only so many courses they can offer to a program with just 120 students (most of whom don't have Chinese strong enough to take graduate courses in PKU's other schools), so the courses f...

I am noticing a lot more (up)voting than I remember seeing in the recent past. 

6 · Answer by Stefan_Schubert · 1h
This post (https://forum.effectivealtruism.org/posts/oankJ9oFZujCcQ6Jo/product-managers-the-ea-forum-needs-you) is from some time ago.

Thanks for this!

The second part of my comment here is relevant for this thread's theme – it explains my position a bit better.