Recent Discussion

Key takeaways

  • Broad student outreach methods appear heavy-tailed in both fellows converted and time-effectiveness, largely because the methods differ so much in scale.
  • If you have the opportunity, use a university newsletter to reach many students.

Caveat: This is just n=22 from one student group’s intro fellowship (and my first forum post). Consider EA Groups Resources or the Global Challenges Project if you need advice for student outreach.

Background

This semester EA Munich facilitated an in-person intro fellowship for the first time as part of the nationwide intro fellowship, which Evander Hammer and Moritz von Knebel organized. Twenty-two fellows participated in Munich, making the cohort the largest in Germany. I think Munich offers an untapped opportunity for community-building (our local group is searching for a full-time community builder – just DM me!) since there are two top...

Cross-posted from my blog.

Because of the generosity of a few billionaires, the effective altruism movement has recently come into a lot of money. The total amount of capital committed to the movement varies day to day with the crypto markets on which Sam Bankman-Fried’s net worth is based. But the sum was recently estimated at $46 billion.[1]

The movement has been trying to figure out how quickly it should give away this money. There are lots of fascinating questions you have to resolve before you can decide on a disbursement schedule.[2] An especially interesting one is: how many future billionaires will be effective altruists? If a new Sam Bankman-Fried or Dustin Moskovitz joins the movement every few years, then the argument for allocating money now becomes much...

TL;DR: Research and discourse on AGI timelines aren't as helpful as they may at first appear, and a lot of the low-hanging fruit (i.e. motivating AGI-this-century as a serious possibility) has already been plucked. 

Introduction

A very common subject of discussion among EAs is “AGI timelines.” Roughly, AGI timelines, as a research or discussion topic, refer to the time that it will take before very general AI systems meriting the moniker “AGI” are built, deployed, etc. (one could flesh this definition out and poke at it in various ways, but I don’t think the details matter much for my thesis here—see “What this post isn’t about” below). After giving some context and scoping, I argue below that while important in absolute terms, improving the quality of AGI timelines isn’t...


Crossposted on Less Wrong.

0. Preface

Summary: AI alignment needs many great people working on research, engineering, governance, social science, security, operations, and more. I think AI alignment-focused university groups are uniquely underexploited sources of that future talent. Here are my plans for running such a group at Stanford, broadly involving mass outreach to a wide range of students, efficient fit-testing and funneling to a core group of promising students, and robust pipelining of those students into a connected network of impactful opportunities.

Confidence: Moderate. This theory of change is heavily inspired by similar plans from Michael Chen (Effective Altruism at Georgia Tech) and Alexander Davies (Harvard AI Safety Team) as well as EA field-building learnings from Stanford Effective Altruism. While parts of this have been tested in those communities,...

Gabriel Mukobi · 1h
Thanks Chris! Yeah, I think that's definitely a potential downside—I imagine we'll definitely meet at least a couple of people smack in the middle of the quarter who go "Wow, that sounds cool, and also I never heard about this before" (this has happened with other clubs at Stanford). I think the ways to mitigate failure due to this effect are:

* Outreach early (when students still read their emails) and widely (to hopefully get the invitation message to everyone who would consider joining).
* Don't turn down students we find later; instead, offer a lot of encouragement and a follow-up plan to join the reading group next term (10-week quarters for us).
  * This has the downside of latency in that we're potentially delaying an impactful career, though maybe this isn't significant compared to the latency of taking enough courses/doing enough research to graduate.
  * Semester schools could also offer the reading group twice per ~15-week semester, though that sounds a bit intense.
* For promising students waiting for the next term, still offer them 1-on-1s, suggest things for them to read/work on independently, invite them to special reading group events like visits from guest alignment professionals, and maybe invite them to some core-only events.
* Alternatively, if we find a student who has already read a bit, we could consider having them join a reading group cohort part-way through the year.
aogara · 3h
The organizers of such a group are presumably working towards careers in AI safety themselves. What do you think about the opportunity cost of their time? To bring more people into the field, this strategy seems to delay the progress of the currently most advanced students within the AI safety pipeline. Broad awareness of AI risk among potentially useful individuals should absolutely be higher, but it doesn’t seem like the #1 bottleneck compared to developing people from “interested” to “useful contributor”. If somebody is on that cusp themselves, should they focus on personal development or outreach? Trevor Levin and Ben Todd had an interesting discussion of toy models on this question here: https://forum.effectivealtruism.org/posts/ycCBeG5SfApC3mcPQ/even-more-early-career-eas-should-try-ai-safety-technical?commentId=tLMQtbY3am3mzB3Yk#comments

Good points! This reminds me of the recent Community Builders Spend Too Much Time Community Building post. Here are some thoughts about this issue:

  1. Field-building and up-skilling don't have to be orthogonal. I'm hopeful that much of an organizer's time in such a group would involve doing the same things general members going through the system would be doing, like facilitating reading group discussions or working on AI alignment research projects. As the Too Much Time post suggests, maybe just doing the cool learning stuff is a grea
... (read more)

Just now I looked out my very picturesque window and saw a massive cloud jutting upward like a mountain peak, framed by some trees and the sky high above… I whispered an audible “Wow”; it was truly majestic and beautiful, like a great white mountain full of bright spots and dark crags.

Wonder.

Will AI ever look up and say Wow?

If it did, would it then wonder where the wonder came from?

Presumably it would know the cloud was a collection of water droplets forming and preparing to fall back to earth, but would it know the how and why of the cloud’s existence? Would it then express wonder about nature’s beautiful cycle of water going up and down and feeding the earth?

What is AI going to think about art?

AI...

this is why we're building an AI to make humans kinder to each other

Motivational

The Motivational tag covers posts that focus on inspiring us to do good (either a specific kind of good, or whatever we already want to work on).

*Please RSVP using the form linked above*

Come meet other people interested in exploring their best options for doing good. A homemade vegan dinner will be served.

This event is open to all members of the EA community. Feel free to invite a friend (please ask them to register so we know how much food to prepare!)

Logistics
- Please register (and ask anyone you invite to register) and submit your proof of vaccination using the form here:
https://forms.gle/tspp7FtVmaesnuZS9
- All attendees must be at least double-vaccinated against COVID. (We are ending the mask requirement since Omicron has receded and we hadn't particularly followed it in the first place.) Dinner will be outside unless precluded by bad weather. You can read the other COVID precautions we will take here:
https://bit.ly/3h6D6Fi

Getting to / finding the Burrow: https://bit.ly/3h6D6Fi

Bathroom available inside the house.
Street parking is free after work hours.

(For those familiar, this is inspired by the Boston EA dinners that often take place at Julia Wise's house)

Registration Form:
https://forms.gle/tspp7FtVmaesnuZS9

Many thanks for feedback and insight from Kelly Anthis, Tobias Baumann, Jan Brauner, Max Carpendale, Sasha Cooper, Sandro Del Rivo, Michael Dello-Iacovo, Michael Dickens, Anthony DiGiovanni, Marius Hobbhahn, Ali Ladak, Simon Knutsson, Greg Lewis, Kelly McNamara, John Mori, Thomas Moynihan, Caleb Ontiveros, Sean Richardson, Zachary Rudolph, Manny Rutinel, Stefan Schubert, Michael St. Jules, Nell Watson, Peter Wildeford, and Miranda Zhang. This essay is in part an early draft of an upcoming book chapter on the topic, and I will add the citation here when it is available.

Our lives are not our own. From womb to tomb, we are bound to others, past and present. And by each crime and every kindness, we birth our future. ⸻ Cloud Atlas (2012)

Summary

The prioritization of extinction risk reduction depends on an...

I downvoted this comment. While I think this discussion is important to have, I do not think that a post about longtermism should be turned into a referendum on Jacy's conduct. I think it would be better to have this discussion on a separate post or the open thread.

anonymous_ea · 11h
From Jacy (https://forum.effectivealtruism.org/posts/ZbdNFuEP2zWN5w2Yx/ryancarey-s-shortform?commentId=oxodp9BzigZ5qgEHg):
Jacy · 13h
Hi Khorton, I wouldn't describe it as stepping back into the community, and I don't plan on doing that, regardless of this issue, unless you consider occasional posts and presentations or socializing with my EA friends as such. This post on the EV of the future was just particularly suited for the EA Forum (e.g., previous posts on it), and it's been 3 years since I published that public apology and have done everything asked of me by the concerned parties (around 4 years since I was made aware of the concerns, and I know of no concerns about my behavior since then). I'm not planning to comment more here. This is in my opinion a terrible place to have these conversations, as Dony pointed out as well.

Summary

Background

The FTX Foundation’s Future Fund publicly launched in late February. We're a philanthropic fund that makes grants and investments to improve humanity's long-term prospects. For information about some of the areas we've been funding, see our Areas of Interest page.

This is our first public update on the Future Fund’s grantmaking. The purpose of this post is to give an update on what we’ve done and what we're learning about the funding models we're testing. (It does not cover a range of other FTX Foundation activities.)

We’ve also published a new grants page and regrants page with our public grants so far.

Our focus on testing funding models

We are trying to learn as much as we can about how to deploy funding at scale to improve humanity’s long-term prospects. Our...

quinn · 7h
Is there a way to access a list of regrantors, maybe indexed by problem area? Any reason I can't just query "show me the email address of every FTX regrantor who is interested in epistemic institutions" for instance?

My guesses:

  1. Regranting is intended as a way to let people with local knowledge apply it to directing funds. This is different from just deputizing grantmakers.

  2. If you made the list public, I'd expect the regrantors to be overwhelmed by people seeking grants, and generally find it pretty frustrating. For example, would your next step be to send emails to each of those addresses? ;)
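
Purely as an illustration of the kind of indexed lookup quinn describes above, here is a minimal sketch assuming a hypothetical public dataset of regrantors; the `Regrantor` record, names, emails, and areas below are all invented, and (per guess 2) no such public index actually exists:

```python
# Hypothetical sketch only: there is no public FTX regrantor dataset.
# Illustrates querying a small in-memory list of regrantor records by problem area.
from dataclasses import dataclass


@dataclass
class Regrantor:
    name: str
    email: str
    areas: list[str]  # problem areas the regrantor is interested in


# Entirely fictional placeholder data.
regrantors = [
    Regrantor("Example Person A", "a@example.org", ["epistemic institutions", "biosecurity"]),
    Regrantor("Example Person B", "b@example.org", ["AI safety"]),
]


def emails_by_area(area: str) -> list[str]:
    """Return the contact emails of regrantors who list the given problem area."""
    return [r.email for r in regrantors if area in r.areas]


print(emails_by_area("epistemic institutions"))  # -> ['a@example.org']
```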