
A few sharp-eyed readers noticed my imminent departure from CEA in our last quarterly report. Gold stars all around!

My last day as our content specialist — and thus, my last day helping to run the Forum — is December 10th. The other moderators will continue to handle the basics, and we’re in the process of hiring my replacement. (Let me know if anyone comes to mind!)

Memories

Managing this place was fun. It wasn’t always fun, but — on the whole, a good time. 

I’ve enjoyed giving feedback to a few hundred people, organizing some interesting AMAs, running a writing contest, building up the Digest, hosting workshops for EA groups around the world, and deleting a truly staggering number of comments advertising escort services (I’ll spare you the link).

More broadly, I’ve felt a continual sense of admiration for everyone who cares about the Forum and tries to make it better — by reading, voting, posting, crossposting, commenting, tagging, Wiki-editing, bug-reporting, and/or moderating. Collectively, you’ve put in tens of thousands of hours of work to develop our strange, complicated, unique website, with scant compensation besides karma.

(Now that I’m leaving, it’s time to be honest — despite the rumors, our karma isn’t the kind that gets you a better afterlife.)

Thank you for everything you’ve done to make this job what it was.

What’s next?

In January, I’ll join Open Philanthropy as their communications officer, working to help their researchers publish more of their work.

I’ll also be joining Effective Giving Quest as their first partnered streamer. Wish me luck: moderating this place sometimes felt like herding cats, but it’s nothing compared to Twitch chat.

My Forum comments will be less frequent, but probably spicier.

Comments



I hope others will join me in saying: thank you for your years serving as the friendly voice of the Forum, and best of luck at Open Philanthropy!

My Forum comments will be less frequent, but probably spicier.

Looking forward to this.

Only fitting that it's Thanksgiving today - I am so grateful for all that you've done with the Forum. Your presence was part of what made me feel welcome here and I think you've done an incredible job building this place up. : )

Also - your new plans sound so cool. Despite my sadness to see you leave your moderator role, it's overwhelmed by sheer excitement for your future! 

Echoing everyone else, thank you for all your hard work.

I do not exaggerate when I say you are the best forum moderator I have ever seen. I am really impressed with your availability, creativity and kindness. You have driven the culture of this website to a whole new level, and inspired me and I bet many others to write better content.

Good luck at OpenPhil!

I think you've done so much to make the Forum what it is today (>4x bigger than what it was when you joined, and perhaps as importantly a really vibrant community with great content). You've been a welcoming commenter, an empathetic moderator, and a sharp-pencilled editor. 

I'll miss you at our next ultimate frisbee game, but I'm so excited to read what you write with Open Phil! (Oh, and looking forward to the spicy comments too.)

Empathetic, welcoming — and a lively, wise writer. Even Aaron's commenting guidelines (↓) are warm and concise. 

Your guiding voice will be missed here, Aaron. We look forward to cheering your EGQ and OP work next year!

Thanks, Jared!

To clarify: The guidelines on this post are the default guidelines for all posts, which I think were written by someone on the LessWrong team before the Forum existed.

Ah, well - you can see how even the agreeable Forum norms you aren't directly responsible for are enhanced by association with you!

Hope your final 1.5 Forum-wrangling weeks are smooth ones.

Congrats, Aaron! Thanks for your good work improving and growing the forum.

Thank you for all your encouragement over the past few years for students and newer community members to post on the forum, and for actually making it easier and less scary to do so. I definitely would not have felt anywhere near as comfortable getting started without your encouragement and post editing offers. I've replaced Facebook binging with EA Forum binging since I both enjoyed it so much and found it really valuable for my learning. You will be missed, and incredibly hard to replace. Thank you for all your hard work!


🐐

Thanks, Aaron, for making the Forum both a pleasant and happening place to be! My investment in the forum grew a lot during your time here, and I imagine you must have done a lot behind the scenes to make this forum an enjoyable place!

Very excited to see your future career trajectory, both at Open Phil and with Effective Giving Quest!

Thanks so much for looking after possibly my favorite place on the internet!

Thank you so much for all your work managing the EA Forum. You’ve done an excellent job, and I’m sure that you’ll do many and varied good things at OpenPhil.

Also, we’re so excited to have you join Effective Giving Quest as our first partnered streamer! I’m really looking forward to what we can accomplish in the gaming space for effective altruism. (c:

Congrats, Aaron, and thank you for your innumerable contributions to the Forum and broader community.

Congrats Aaron. Is there a plan for replacing you as the singular and dedicated face of the EA Forum? You will be missed here, but unlocking more OP content into the public sphere sounds well worth it.

The person who takes over for me directly might also be really dedicated to running the site. But we're trying to keep the position flexible to account for the interests of our strongest candidates, so it's possible they'll end up more focused on other aspects of CEA content. If that were to happen, I wouldn't be surprised if we hired for a "head of Forum" role in the near-to-medium future (but no guarantees).

Thanks for all you've done for the forum, Aaron! It was a challenging assignment to say the least. And a personal thank you for your feedback on some of my not-so-short essays! Best of luck on your new path. I'll be cheering you on.

Now that I’m leaving, it’s time to be honest — despite the rumors, our karma isn’t the kind that gets you a better afterlife.

That's precisely what you'd say if it was used as a proxy for deserving a better life, but you didn't want people to Goodhart-game it.


Seriously: congratulations for the job done, thank you so much for it, and I'm eager to see what you'll do in EGQ and beyond.

Thanks for all you've done Aaron. It means a lot. Thanks for responding, for reading drafts, for organising AMAs, along with all the other things you've done. 

Thank you, Aaron, you have been a force for good here. Smart hire by Open Phil!

Thanks so much for all of the work you've put into this forum and good luck with your new job!

Dear Aaron, 

   I don't think we share the same definition of the word "Retirement". 

Sincerely,

 -average humans everywhere

P.S. Thank you for all your help with the one serious EA Forum article I wrote. Despite your and Hauke's heroic efforts, it was still terrible. Related: did you pay 26 people to vote for it out of sympathy?

Thanks for all the hard work, Aaron. You did a great job! I am also excited to hear about your new position and look forward to seeing the work you produce :)

Yes! So happy for you Aaron!

Thank you for all of your hard work, Aaron!

Congrats on the new job, and thank you for your outstanding service. 🥳

Just wanted to add my voice to the chorus of appreciation for your work, Aaron. I have an RSS feed of the forum and read at least the title of just about every post, and it is really valuable for me. Great work and best of luck in your next endeavors!
