
See companion question here.

As a researcher at a nonprofit, I often have trouble focusing and executing on the most important tasks. I think a bunch of things about being in an org help: ops support, having a manager and others who source contracts for research problems that are relevant for improving specific EA decisions, weekly check-ins with a manager, collaboratively set milestones, daily Slack updates, having a Slack where I share ideas, reviews of my early drafts and other docs by coworkers, having coworkers to challenge and debate theories of change, and I'm sure a number of other things I'm missing.

Presumably both I individually and my org institutionally are still falling far short of what we could do, but it seems like being in an institution is helpful for me. Of course, not everybody benefits from being in an institution, and I'm sure many people are much more productive without one.

So I'm broadly interested in how independent researchers (or semi-independent researchers, like grad students with very absentee advisors) manage to maintain (or improve) things like motivation, general productivity, and setting really good/impactful research goals.



5 Answers

Prioritize ruthlessly. Very few ideas can even be examined, let alone pursued.

Productivity + meta: Learn to be an effective Red Team, and use this ability on your own ideas and plans. 

Motivation: Find a way to remind yourself about what you care about (and if needed, why you care about it). This could manifest in any way that works for you. A post-it could be useful. A calendar notification. A standing meeting with colleagues where you do a moment of reflection (a technique that I've seen used to great effect at the Human Diagnosis Project). A list of recitations embedded among TODO list items (my personal technique).

Can you share that list?

Ben_Harack
Most of these are pithy statements that serve as reminders of much more complicated and nuanced ideas. This is a mix of recitation types, only some of which are explicitly related to motivation. I've summarized, rephrased, and expanded most of these for clarity, and cut entire sections that are too esoteric. Also, something I'd love to try, but haven't, is putting some of these into a spaced repetition practice (I use Anki), since I've heard surprisingly positive things about how well that works.

1. Be ruthlessly efficient today
2. <Specific reminder about a habit that I'm seeking to break>
3. Brainstorm, then execute
4. If you don't have a plan for it, it isn't going to happen.
5. A long list of things that you want to do is no excuse for not doing any of them.
6. Make an extraordinary effort.
7. <Reminders about particular physical/emotional needs that are not adequately covered by existing habits>
8. Remember the spheres of control: Total control. Some control. No control. For more info, see here: https://www.precisionnutrition.com/wp-content/uploads/2019/09/Sphere-of-control-FF.pdf
9. Every problem is an opportunity
10. What you do today is important because you are exchanging a day of your life for it. (might be from Heartsill Wilson)
11. Think about what isn't being said, but needs to be.
12. Get results
13. Life is finite; pursue your cares.
14. The opposite of play is not work. The opposite of play is depression. (paraphrased from Brian Sutton-Smith)
15. Move gently
16. The weighted version of the "shortest processing time" scheduling algorithm is close to optimal on all metrics. (from "Algorithms to Live By")
17. Exponential backoff for relationships: finite investment, infinite patience. (from "Algorithms to Live By")
18. Doing things right vs doing the right thing.
19. 10-10-10. (Reference to the technique of thinking about how a decision would be viewed 10 minutes, 10 months, and 10 years in the future. Modify at your discretion.)
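To make item 16 concrete, here is a minimal sketch (mine, not part of the original answer) of the weighted shortest-processing-time rule: order tasks by importance divided by expected duration, highest ratio first, which minimizes the weighted sum of completion times. The task names, durations, and weights are made up for illustration.

```python
# Minimal sketch of weighted shortest processing time (WSPT):
# schedule tasks in decreasing order of weight / expected duration.
# Task names and numbers below are hypothetical.

tasks = [
    ("write grant report", 5.0, 3),   # (name, hours, importance weight)
    ("answer quick email", 0.2, 1),
    ("review draft",       2.0, 4),
]

# Highest weight-to-duration ratio goes first; this minimizes the
# weighted sum of completion times across the task list.
schedule = sorted(tasks, key=lambda t: t[2] / t[1], reverse=True)

for name, hours, weight in schedule:
    print(f"{name}: {hours}h, weight {weight}")
```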

Allocate some time to "meta", like studying habit formation and self-management. For starters I might recommend Atomic Habits and some of Cal Newport's work.

I'm not an independent researcher, so this advice is probably less trustworthy than others', but I am currently on somewhat of an independent research stint to work on ELK, and have been annoyed at motivation being hard to conjure sometimes. 

I've been thinking about what causes motivation (e.g. reflecting on various anecdata from my life), and I've also just begun tracking my time practically to the minute, in the hope that this will cause me to reflect on the sequence of stimuli, actions, and feelings I have throughout the day/week, such that I can deduce any tractable levers on my own motivation. It seems too early to tell whether the time tracking will be fruitful in the end -- we will see.

An example of how "reflecting on the sequence of stimuli, actions, and feelings" could be helpful: today, I hypothesized that I was much more productive on two recent plane rides than I usually am because I was away from people and restricted in action/motion. So I tried getting on a train. I noticed I didn't want to work on ELK because I was anxious, and hypothesized that my brain still wanted to pay attention to the things I had been doing before getting on the train, and that, maybe, an additional reason I am productive on planes is that security lines give me time to reset my brain. I then tried resting for 20 minutes to see if my anxiety would go away. Unfortunately it didn't, though I then went on to think of more testable hypotheses and decided to lower my caffeine dosage (I had had about 120mg that morning, and caffeine seems to cause anxiety).
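As an illustration of the minute-level time tracking described above (my own sketch, not the answerer's actual setup), one simple approach is to append timestamped entries (activity, preceding stimulus, and a quick motivation rating) to a CSV for later review. The file name, fields, and example values below are hypothetical.

```python
# Minimal sketch of minute-level time tracking for later reflection.
# File name, fields, and example values are hypothetical.
import csv
from datetime import datetime

LOG_FILE = "time_log.csv"

def log_entry(activity, stimulus, motivation_1_to_5):
    """Append one row: timestamp, current activity, preceding stimulus, motivation rating."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now().isoformat(timespec="minutes"),
            activity,
            stimulus,
            motivation_1_to_5,
        ])

# Example: record a context switch when boarding a train.
log_entry("ELK write-up", "boarded train", 2)
```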

 
