Key takeaways
Caveat: This is just n=22 from one student group’s intro fellowship (and my first forum post). Consider EA Groups Resources or the Global Challenges Project if you need advice for student outreach.
Background
This semester EA Munich facilitated an in-person intro fellowship for the first time, as part of the nationwide intro fellowship organized by Evander Hammer and Moritz von Knebel. Twenty-two fellows participated in Munich, making the cohort the largest in Germany. I think Munich offers an untapped opportunity for community-building (our local group is searching for a full-time community builder – just DM me!) since there are two top...
Cross-posted from my blog.
Because of the generosity of a few billionaires, the effective altruism movement has recently come into a lot of money. The total amount of capital committed to the movement varies day to day with the crypto markets on which Sam Bankman-Fried’s net worth is based. But the sum was recently estimated at $46 billion.[1]
The movement has been trying to figure out how quickly it should give away this money. There are lots of fascinating questions you have to resolve before you can decide on a disbursement schedule.[2] An especially interesting one is: how many future billionaires will be effective altruists? If a new Sam Bankman-Fried or Dustin Moskovitz joins the movement every few years, then the argument for allocating money now becomes much...
TL;DR: Research and discourse on AGI timelines aren't as helpful as they may at first appear, and a lot of the low-hanging fruit (i.e. motivating AGI-this-century as a serious possibility) has already been plucked.
A very common subject of discussion among EAs is “AGI timelines.” Roughly, AGI timelines, as a research or discussion topic, refer to how long it will take before very general AI systems meriting the moniker “AGI” are built, deployed, etc. (one could flesh this definition out and poke at it in various ways, but I don’t think the details matter much for my thesis here; see “What this post isn’t about” below). After giving some context and scoping, I argue below that while important in absolute terms, improving the quality of AGI timelines isn’t...
Crossposted on Less Wrong.
Summary: AI alignment needs many great people working on research, engineering, governance, social science, security, operations, and more. I think AI alignment-focused university groups are uniquely underexploited sources of that future talent. Here are my plans for running such a group at Stanford, broadly involving mass outreach to a wide range of students, efficient fit-testing and funneling to a core group of promising students, and robust pipelining of those students into a connected network of impactful opportunities.
Confidence: Moderate. This theory of change is heavily inspired by similar plans from Michael Chen (Effective Altruism at Georgia Tech) and Alexander Davies (Harvard AI Safety Team) as well as EA field-building learnings from Stanford Effective Altruism. While parts of this have been tested in those communities,...
Good points! This reminds me of the recent Community Builders Spend Too Much Time Community Building post. Here are some thoughts about this issue:
Just now I looked out my very picturesque window and saw a massive cloud jutting upward like a mountain peak, framed by some trees and the sky high above… I whispered an audible “Wow”; it was truly majestic and beautiful, like a great white mountain full of bright spots and dark crags.
Wonder.
Will AI ever look up and say Wow?
If it did would it then wonder where the wonder came from?
Presumably it would know the cloud was a collection of water droplets forming and preparing to fall back to earth, but would it know the how and why of the cloud’s existence? Would it then express wonder about nature’s beautiful cycle of water going up and down and feeding the earth?
What is AI going to think about art?
AI...
This is why we're building an AI to make humans kinder to each other.
The Motivational tag covers posts that focus on inspiring us to do good (either a specific kind of good, or whatever we already want to work on).
*Please RSVP using the form linked above*
Come meet other people interested in their best options for doing good. A homemade vegan dinner will be served.
This event is open to all members of the EA community. Feel free to invite a friend (please ask them to register so we know how much food to prepare!)
Logistics
- Please register (and ask anyone you invite to register) and submit your proof of vaccination using the form here:
https://forms.gle/tspp7FtVmaesnuZS9
- All attendees must be at least double-vaccinated against COVID. (We are ending the mask requirement since Omicron has receded, and we hadn't particularly followed it in the first place.) Dinner will be outside unless precluded by bad weather. You can read the other COVID precautions we will take here:
https://bit.ly/3h6D6Fi
Getting to / finding the Burrow: https://bit.ly/3h6D6Fi
Bathroom available inside the house
Street parking is free after work hours.
(For those familiar, this is inspired by the Boston EA dinners that often take place at Julia Wise's house)
Registration Form:
https://forms.gle/tspp7FtVmaesnuZS9
Many thanks for feedback and insight from Kelly Anthis, Tobias Baumann, Jan Brauner, Max Carpendale, Sasha Cooper, Sandro Del Rivo, Michael Dello-Iacovo, Michael Dickens, Anthony DiGiovanni, Marius Hobbhahn, Ali Ladak, Simon Knutsson, Greg Lewis, Kelly McNamara, John Mori, Thomas Moynihan, Caleb Ontiveros, Sean Richardson, Zachary Rudolph, Manny Rutinel, Stefan Schubert, Michael St. Jules, Nell Watson, Peter Wildeford, and Miranda Zhang. This essay is in part an early draft of an upcoming book chapter on the topic, and I will add the citation here when it is available.
Our lives are not our own. From womb to tomb, we are bound to others, past and present. And by each crime and every kindness, we birth our future. ⸻ Cloud Atlas (2012)
The prioritization of extinction risk reduction depends on an...
I downvoted this comment. While I think this discussion is important to have, I do not think that a post about longtermism should be turned into a referendum on Jacy's conduct. I think it would be better to have this discussion on a separate post or the open thread.
The FTX Foundation’s Future Fund publicly launched in late February. We're a philanthropic fund that makes grants and investments to improve humanity's long-term prospects. For information about some of the areas we've been funding, see our Areas of Interest page.
This is our first public update on the Future Fund’s grantmaking. The purpose of this post is to give an update on what we’ve done and what we're learning about the funding models we're testing. (It does not cover a range of other FTX Foundation activities.)
We’ve also published a new grants page and regrants page with our public grants so far.
We are trying to learn as much as we can about how to deploy funding at scale to improve humanity’s long-term prospects. Our...
My guesses:
Regranting is intended as a way to let people with local knowledge apply that knowledge to directing funds. This is different from just deputizing grantmakers.
If you made the list public, I'd expect the regranters to be overwhelmed by people seeking grants, and to generally find it pretty frustrating. For example, would your next step be to send emails to each of those addresses? ;)