The Center on Long-Term Risk (CLR) will be running its second-ever intro fellowship on risks of astronomical suffering (s-risks), intended to help effective altruists learn more about which s-risks we consider most important and how to reduce them.

Logistics

The fellowship is six weeks long and involves a time commitment of about 3-5 hours per week. It will likely take place from early January to February. The exact dates will be chosen based on the participants’ availability.

The fellowship will cover what we currently consider to be the most important sources of s-risk (TAI conflict, risks from malevolent actors).

Fellowship participants will be divided into small cohorts. Each week will cover a new topic. Participants will explore relevant background materials in their own time, and then have the opportunity to discuss the topic with each other and with CLR staff during a one-hour Zoom meeting. For the final week, each cohort will choose from a list of preselected topics to learn about, giving participants the ability to tailor the material in a way that’s most useful for them.

In addition to group discussions, participants will attend talks by s-risk researchers and will have the option to schedule personalized one-on-one career calls with us. CLR researchers will also join fellowship meetings on topics related to their work, to answer questions and help facilitate discussion.

Target audience

We think this event will be useful for you if:

  • You are interested in s-risks and are open to making this cause a priority for your career; and
  • You have not interacted extensively with CLR (staff) yet, e.g., you have talked with us for less than 10 hours.

If you’re interested in applying for our Summer Research Fellowship in the future, this fellowship is a good opportunity to learn more about our work and to strengthen a future application through a better understanding of what we do and how you could contribute.

There might be more idiosyncratic reasons to apply; the criteria above are intended as a guide rather than strict requirements.

Application details

You can apply to participate in the fellowship by filling out this form. The deadline is December 7, 2023, at 11:59 p.m. Pacific Time. In some cases, we might ask applicants to do a short interview (10-15 minutes).

We expect to make final application decisions by December 21, 2023.

If you have any questions about the program or are uncertain whether to apply, you can comment on this post, or reach out to tristan.cook@longtermrisk.org.

Comments



The fellowship will cover what we currently consider to be the most important sources of s-risk (TAI conflict, risks from malevolent actors).

Any reason CLR believes that to be the case specifically? For instance, it's argued on this page that botched alignment attempts/partially aligned AIs ("near miss") and unforeseen instrumental drives of an unaligned AI are the two likeliest AGI-related s-risks, with malevolent actors (deliberately suffering-aligned AI) currently a lesser concern. I guess TAI conflict could fall under the second category, as a risk derived from instrumental goals.

Thanks for asking — you can read more about these two sources of s-risk in Section 3.2 of our new intro to s-risks article. (We also discuss "near miss" there, but our current best guess is that such scenarios are significantly less likely than other s-risks of comparable scale.)
