Cross-posted from A Nice Place To Live.

Recently, I've been pitching the idea of an EA blogging carnival where each month, somebody picks a topic and then everybody blogs about it.

As today is the first of the month, I'm going to announce the December topic for Figuring Good Out*, the new EA blogging carnival. If you'd like to participate, just write a post on the topic and mention that you'd like to include it in the carnival for this month.

The topic for December is "Blind spots." You can take that in any direction that you like. For example, what is something altruistically important that you know about that you think most EAs don't know about? Is there a cause, organization, insight, or productivity tip that EAs are ignoring?

On December 31, I'll write a post linking to all the submissions. If you'd like to volunteer to host the January carnival, let me know below; we can start a queue. As host, you can choose any topic you like, but keep in mind that some topics will get more responses than others. If you intend to contribute to this month's carnival, I recommend commenting below.


*The title "Figuring Good Out" is negotiable if everybody really hates it, but it works for me.

Comments (14)



The title "Figuring Good Out" is negotiable if everybody really hates it, but it works for me.

I'd prefer the slightly similar "Figuring Out Good". I don't know which one is more grammatical.

I prefer Bitton's, because otherwise it seems like "good" is modifying "figuring out".

Cool topic, but I'm still trying to figure out what EA is about! I have a feeling that I'll be able to articulate a blind spot eventually.

While I don't think I would actually write a whole post for this, I might have a couple of quick ideas to throw into a comments section. I'd suggest explicitly asking for comments and half-formed ideas in the summary post, and seeing if it produces anything interesting.

I think it's probably worth creating a Google Group for the EA blogging carnival and adding each person listed under the 'individuals' heading in the list of EA blogs. Then the topic of each month's carnival can be announced there (in addition to here, assuming that's the plan), which would ensure that every potential contributor is notified. Anyone who wants to opt out of these notifications can easily do so by leaving the group.

I can't find most people's email addresses, but the group is here for people to join.

I'm not much of a blogger, but I'll give it a go.

I love what you're doing so far, Giles. I'm excited to read what you come up with!

Note: I didn't actually give this a go.

So my post is up: Investing in Yourself

If you write a post, link to it in this thread so it gets noticed.

Also, let me know if you want to choose next month's topic.

When should we aim to have these written by?

Any time before the end of the month. I was going to take a crack at mine today with the hope of inspiring other people to submit posts.
