
A lot is happening. How do you keep on top of things?

For example:

  1. What's your process for finding out about important developments?
  2. What information sources do you find most useful?
  3. What are some problems with your current process?

I'll share my answers in a comment below.


Motivation: I've noticed cases where AI safety professionals—including leaders in the field—find out about important facts/papers/people/orgs months or years later than they would have wished to. I'm wondering if there are things I could do to help. 

If you'd like to talk about this, please send me an email or suggest a time to call.



6 Answers

Every time Zvi posts something, it covers everything (or almost everything) important I've seen up to that point.

https://thezvi.substack.com/

Also in audio:

https://open.spotify.com/show/4lG9lA11ycJqMWCD6QrRO9?si=a2a321e254b64ee9

I don't know what your bar is for how much time/focus you want to spend on this, but Zvi covers most things above a reasonable bar.

 

The main thing I'm missing is a way to learn what the good AI coding tools are. For example, I enjoyed this post:

https://www.lesswrong.com/posts/CYYBW8QCMK722GDpz/how-much-i-m-paying-for-ai-productivity-software-and-the

1. My current process

I check a couple of sources most days, at random times during the afternoon or evening. I usually do this on my phone, during breaks or when I'm otherwise AFK. My phone and laptop are configured to block most of these sources during the morning (LeechBlock and AppBlock).

When I find something I want to engage with at length, I usually put it into my "Reading inbox" note in Obsidian, or into my weekly todo list if it's above the bar.

I check my reading inbox on evenings and weekends, and also during "open" blocks that I sometimes schedule as part of my work week. 

I read about 1/5 of the items that get into my reading inbox, either on my laptop or iPad. I read and annotate using PDF Expert, take notes in Obsidian, and use Mochi for flashcards. My reading inbox—and all my articles, highlights and notes—are synced between my laptop and my iPad.


2. Most useful sources

(~Daily)

  • AI News (usually just to the end of the "Twitter recap" section). 
  • Private Slack and Signal groups.
  • Twitter (usually just the home screen, sometimes my lists).
  • Marginal Revolution.
  • LessWrong and EA Forum (via the 30+ karma podcast feeds; I rarely check the homepages).

(~Weekly)

  • Newsletters: Zvi, CAIS.
  • Podcasts: The Cognitive Revolution, AXRP, Machine Learning Street Talk, Dwarkesh.

3. Problems

I've not given the top of the funnel—the checking sources bit—much thought. In particular, I've never sat down for an afternoon to ask questions like "why, exactly, do I follow AI news?", "what are the main ways this is valuable (and disvaluable)?" and "how could I make it easy to do this better?". There's probably a bunch of low-hanging fruit here.

Twitter is... Twitter. I currently check the "For you" home screen every day (via web browser, not the app). At least once a week I'm very glad that I checked—because I found something useful that I plausibly wouldn't have found otherwise. But I wish I had an easy way to see just the best AI content. In the past I tried to set something up with Twitter lists and TweetDeck (now "X Pro"), but nothing has stuck. So I spend most of my time on the "For you" screen, training the algorithm with "not interested" reports, an aggressive follow/unfollow/block policy, and liberal use of the "mute words" function. I'm sure I can do better...

My newsletter inbox is a mess. I filter newsletters into a separate folder, so that they don't distract me when I process my regular email. But I'm subscribed to way too many newsletters, many of which aren't focussed on AI, so when I do open the "Newsletters" folder, it's overwhelming. I don't reliably read the sources that I flagged above, even though I consider them fairly essential reading (and would prefer them to many of the things I do, in fact, read).

I addictively over-consume podcasts, at the cost of "shower time" (diffuse/daydream mode) or higher-quality rest. 

I don't make the most of LLMs. I have various ideas for how LLMs could improve my information discovery and engagement, but on my current setup—especially on mobile—the affordances for using LLMs are poor.
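For what it's worth, even before bringing LLMs in, a crude first pass is easy to script. A minimal sketch (the items, keywords, and scoring here are illustrative, not my actual setup) that triages feed items by keyword before anything reaches an LLM or a reading inbox:

```python
# Minimal sketch: keyword-based triage of already-fetched feed items.
# Each item is a dict with "title" and "summary" keys; keywords are
# illustrative placeholders for whatever topics you care about.

def triage(items, keywords, threshold=1):
    """Return items matching at least `threshold` keywords, best first."""
    picked = []
    for item in items:
        text = (item.get("title", "") + " " + item.get("summary", "")).lower()
        score = sum(1 for kw in keywords if kw.lower() in text)
        if score >= threshold:
            picked.append({**item, "score": score})
    # Highest-scoring items first, so the top of the list is worth a look.
    return sorted(picked, key=lambda it: it["score"], reverse=True)

if __name__ == "__main__":
    items = [
        {"title": "New interpretability paper", "summary": "sparse autoencoders"},
        {"title": "Celebrity gossip", "summary": "nothing relevant here"},
    ]
    print(triage(items, ["interpretability", "autoencoders"]))
```

The obvious next step would be to pipe the surviving items to an LLM for summarisation or relevance-ranking, but even this dumb filter cuts the volume a lot.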

I miss things that I'd really like to know about. I very rarely miss a "big story", but I'd guess I miss several things that I'd really like to know about each week, given my particular interests.

I find out about many things I don't need to know about.

I could go on...

Transformer Weekly is great: https://www.transformernews.ai/

It summarises the week's AI news every Friday.

We recently added a Stay Informed page to AISafety.com which lists our top recommended information sources for staying up to date with AI/AI safety. I suggest checking it out for more ideas on podcasts, newsletters, etc. to follow.

Tiny comment: you have ImportAI twice in the list.

I published my answer here: https://lovkush.substack.com/p/how-i-keep-up-with-ai-safety. I share the same problems as peterhartree.

Lovkush 🔸: I found that out today! Need to update my recommendation.