
TL;DR

Your single best lever for impact right now is probably the influence you have on your future self. Leveraging this should be one of your top priorities, and you should think carefully about how your other plans interact with this consideration, especially your future motivation.

If I were reading this post, I could imagine getting some fuzzies and a jolt of motivation, closing the tab, and never revisiting it to act on the things that seemed worthwhile while reading. But I expect most of the impact of this post to come from people taking action. If, after reading, you feel motivated to invest in and protect your future motivation, I encourage you to think about what actions you can take to achieve this and actually make a plan for getting started. I’ve linked a template at the end with questions I’d find helpful for channeling my short-term motivation into action - maybe you’d find them helpful too!

Intro

Most likely, your biggest direct impact will not come in the immediate future. Getting to the position where you’re producing as much impact as you’re capable of requires investing in yourself (gaining knowledge, learning skills, building connections, etc.), but also sustaining and building motivation.

Most upskilling advice focuses on the former, but the latter is potentially even more critical. If you lose your motivation to improve the world and stop trying, that’s probably an existential threat to most of your future impact!

Treating future you like they’re a different person makes this reality more salient, and makes planning around it more natural. The prompting question becomes:

“How can I ensure that whoever future me is, who by default might not share my values or motivations, achieves their potential to improve the world as much as they can?”

In this post, I’ll focus on one component of an answer to this prompt: “Make sure they actually care about doing so!”

I sometimes get the impression that people think they don’t need to contribute to their motivation bank because moral realism, indomitable willpower, short timelines, or some other X factor is so overwhelmingly powerful that of course they’ll always pursue their goals. I think this is very rarely correct! Most people who start engaging with EA don’t end up acting on it, and value drift is real even for those working directly at an EA org.

Here are two general heuristics for how to positively influence the motivation of future you, as well as some concrete suggestions and caveats to consider.

1. Strategically build motivation

If you’re unsure whether something is worthwhile from a direct-impact perspective, but you think it would contribute positively to your longer-term motivation, make sure to weigh that in the decision calculus! Once you factor this consideration in, I think the following categories of activities start looking extra appealing. I think all four categories are essential. I’ve ordered the examples within each category by how strongly I’d recommend them.

1.1 Activities that will build your moral conviction toward improving the world

1.2 Activities that increase your sense of community

1.3 Activities that create habits or accountability

  • Sign up for EA newsletters (find more here)
  • (Trial) Pledge to donate a percentage of your income to effective charities
  • Set up your environment to replace some of your unendorsed social media engagement with the EA Forum (add a widget to your home screen!) or the 80,000 Hours Podcast
  • Communicate your priorities about impact on your LinkedIn bio or elsewhere
  • Make your ambitious impact-oriented plans public somewhere

I quite like this video on willpower and the Atomic Habits advice of making hard things less hard.

1.4 Activities that create positive associations with the principles and virtues you care about

  • Find and follow EA & EA-adjacent writers whose posts you enjoy reading
  • Properly celebrate your wins, and record them somewhere you can reference
  • Do fun things with people who share your values! (game nights, pickleball, etc.)
  • Occasionally directly optimize for fuzzies

Caveat #1: Sometimes, changes in your values or priorities are good!

Your future self will likely have more knowledge and experience than you, and maybe they’ll have good reasons for changing course that you can’t foresee.

I mostly think you should be looking to address the failure modes where your care and motivation to create a radically better future either just fizzles out or is violently shut down by your other preferences acting in self-defense (see below).

So rather than just reducing the odds that future you changes values or priorities in general, maybe focus on reducing the odds that they change for reasons that, if fully understood by current you, you would not find compelling or endorsable.

Caveat #2: Setting yourself up for disillusionment is bad

To be clear, I don’t think you should try to influence your future self via activities that might qualify as unhealthy or by trying to trick yourself.

I don’t think you should do things like selectively engage only with pro-EA content, automatically dismiss criticisms, or refuse to have meta-ethical conversations because they might persuade you away from utilitarianism and that would be bad for your motivation (though maybe Epistemic Learned Helplessness is worth thinking about?).

Even if you don’t buy the principled case for why this matters, I think this behavior is a door best left closed for impact reasons too:

  • Eventually, you’ll likely “wake up” to reality, and in the post-disillusionment “everything must go” mental flushing of anything associated with “the bad,” the very principles you sought to grow might very well get cast out.
  • The social second-order effects of this behavior seem quite negative - consider someone new being introduced to EA by a person who is dogmatically closed-minded about anything that might threaten their motivation.
  • There actually might be good reasons at some point to drop various values or update your priorities, and you should be responsive to these considerations. Engaging in mild self-deception, even if you think it’s just in a contained environment of supporting your motivation, will likely lead to breakout harms you won’t anticipate or notice. This is “the default story of not fighting for truth. You think the consequences are minimal, but you can’t know because the entire problem is that information is being suppressed.”

In fact, I think that as you build your motivation and care for the principles and ideas you endorse, it’s best to actively preempt any potential disillusionment you might experience. Find the hardest-hitting criticisms and engage with them directly, so you can thoughtfully choose the (metaphorical) general of the army you’ll be fighting for.

At a high level, doing anything really wack to create motivation seems really bad from a personal, communal, epistemic, and impact perspective. It also seems unnecessary! Truthseeking is the ground in which other principles grow, so just stick with the honest stuff and feel confident, grounded, and able to act with integrity when planning for your future self.

2. Be careful when spending motivation

This altruism sharpens altruism post suggests another thing you can do to build energy for altruistic behavior: do altruistic things!

However, the pursuit of impact can occasionally come at a high personal cost, which might directly reduce your likelihood of pursuing altruistic behavior in the future. In the “parliament of your mind,” the “let’s be altruistic” party probably won’t ever hold a permanent majority. To achieve its goals, it needs to form coalitions with your other preferences and make trades. Even if it temporarily manages to strong-arm a totalizing and demanding allocation of your resources towards impact, it may eventually be ousted in a coup by the enraged mob of your other values and preferences. In other words, burnout is bad and sustainability is good!

Here are some scenarios where producing short-term impact and preserving long-term motivation can come into conflict. In general, I think people should be more careful and intentional in navigating this tradeoff:

2.1 Activities that you find actively painful

  • Forcing yourself to repeatedly think about things you find deeply uncomfortable
  • Doomscrolling low-effort criticisms on e.g. Twitter so you can always “be in the know”
  • Spending way too much time mindlessly putting up posters for your EA group
  • Hosting or attending events you don’t enjoy

2.2 Activities you’ll grow to strongly dislike/feel resentful about

  • Forcing yourself to invest lots of energy into submitting applications for roles you’ll probably get rejected for
  • Volunteering
  • Working for low pay or uncertain job stability
  • Realigning your career plan in a way that painfully diverges from your preferences and fit
  • Working hard for little / no recognition

2.3 Activities that ask you to suppress other intrinsic preferences you have

  • Not having kids even though you sincerely want them
  • Skipping your friend group vacation plans so you can be frugal & donate
  • Turning down a slice of your grandmother’s birthday cake so you can be vegan

I think the framings in Pain is not the unit of Effort are really helpful here – “If it hurts, you're probably doing it wrong” and “You're not trying your best if you're not happy.”

Caveat #3: Sometimes, it’s worth it

I want to be clear that I’m not arguing that if something feels hard, you therefore shouldn’t do it.

For one, things often aren’t as hard as we expect, and sometimes your altruistic motivation will counterintuitively go up when you pursue them! I often worry that people rule out options by citing one of the above reasons (or “personal fit”) when really the idea just feels ughy or sits outside their personal Overton window - once you actually try, it’s often surprisingly doable. How much of a motivational cost would you really be paying? How much impact would you really be leaving on the table?

Secondly, sometimes a thing does actually require a high motivational cost, but doing it anyway is the highest impact choice! If you’re thinking proactively, you can often find ways to compromise that will decrease your impact in the short term but are more sustainable (and therefore higher impact overall). But if there’s a big deadline, or your timelines are short, or if you’re facing a golden opportunity, it could be worth it to make a large trade between motivation and impact.

Ok, but… motivation towards what exactly?

If you’re not careful, over time your values might be blindly reshaped in some subtle but powerful way that abandons impact for some weaker proxy. I don’t think you want to build allegiance to, or overly tie your motivation to, specific causes. Having positive associations with the EA community is probably instrumentally helpful, but it’s not a goal in itself. I think the target of your motivational investing should ultimately be something like the general principles of lowercase effective altruism.

I think the four principles outlined in this post (scope sensitivity, scout mindset, impartiality, and recognition of tradeoffs), or the four outlined in this post (it’s important to help others, people are equal, helping more is better than helping less, and our resources are limited), are great starting places.

You want your motivation aimed at improving the world to be resilient. If it’s flowing from a specific cause, person, institution, community, etc., then it’s vulnerable. If it’s built on a well-grounded internal care for the world’s present and future beings, then your future motivation is less likely to get upended by drama or scandal, and you’ll be more able to update your priorities in the face of new evidence (rather than staying wedded to the first cause area you married).

You might imagine that in a few years, you’re really ambitiously pursuing impact and your passion for world betterment is at an all-time high. What helped get you there? How did you sustain personal motivation in the face of [insert bad thing that happened to the community, cause area, thought leader, or institution]? How did you overcome the societal drag of complacency and normalcy? How did you stay aligned with impartial, altruistic, effective world improvement and keep your gaze steady even when other shiny opportunities showed up? How did you adapt when your plans failed, when what was most important changed, when your peers didn’t believe in you, when it really seemed like there was no way forward?

Then, thinking prospectively, how might you achieve these things, and what behaviors, systems, plans, and thought patterns can you establish now to improve the chances you make it all the way there?

“What if I still lose my motivation or my values still drift?”

Sometimes when this happens, it’s good. Maybe you made a bet to spend a bunch of motivation on a high-impact opportunity, and it was a worthwhile price. But in general, I’m saddened to hear of people’s internal fire and passion for impact dying out. I think there are things you can try to reduce this likelihood, but it’s also just true that future you is ultimately their own person, who will have their own experiences, values, and preferences.

I’m saddened because I do think the world is awful, the world is much better, and the world can be much better. Talking in terms of abstract “impact” is a convenient shorthand, and in philosophically exploratory conversations I think a willingness to wholeheartedly test the edges of your frameworks is really valuable and is made easier by this abstraction. Still, I also believe keeping sober sight of what’s really at stake is of utmost importance.

Thousands of children die every day for no good reason. Trillions of animals spend much of their lives in agony and die painful deaths for no good reason. The future could be huge and deeply meaningful, but if we’re not careful, we could squander everything.

The world needs people who care and take action to solve these problems. If you care and are working towards solving these problems, then for the sake of those whose well-being depends on your continued efforts, don’t neglect to invest in and protect the motivation of future you!

Exercise Template: Make a plan for your future motivation
Comments (5)



I really enjoyed reading this and think it would make excellent extra reading for intro fellows - especially because it breaks down the “If it hurts, you're probably doing it wrong” thing so well. Thanks for writing! :)

Thank you! I like the framing and also having a template for planning concrete next steps, especially at the end of the year. I’ve shared the post in my local group and can imagine continuing to recommend it in future.

“I’ve linked a template at the end with questions I’d find helpful for channeling my short-term motivation into action - maybe you’d find them helpful too!”


This is awesome, great job! 

Just a minor correction: The author of "The world is much better. The world is awful. The world can be much better." is Max Roser, not Mark Roser. 

Executive summary: To maximize long-term impact, treat your future self as a different person whose motivation needs to be actively cultivated and protected through strategic activities and careful management of motivational costs.

Key points:

  1. Build motivation through four key categories: strengthening moral conviction, increasing community engagement, creating habits/accountability, and fostering positive associations with EA principles.
  2. Balance short-term impact against long-term motivation costs - avoid activities that are painfully demanding or suppress core preferences unless truly necessary.
  3. Focus motivation on fundamental principles (like scope sensitivity and impartiality) rather than specific causes or communities to build resilience.
  4. Value drift isn't always bad - future self may have valid reasons to change priorities, but protect against unmotivated fizzling out.
  5. Actively preempt disillusionment by engaging with criticism honestly rather than using self-deception or dogmatism to maintain motivation.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
