Life is a response to being


Part II

Over the course of my twenties, I'd accumulated thousands of hastily jotted notes. Thoughts that had grown in the gaps between work and play, which I'd managed to save before they were lost off the cliff of forgetting.

Thoughts like, "I'm starting to suspect it's possible to be robustly happy on purpose..." or "Inadequate tools for consensus keeps coming up as a kind of problem behind all problems I care about".

Upon arrival, I didn't fully understand why I'd felt so strongly that I had to go to a cabin in the woods to be alone for a month. But I knew it had something to do with these notes. They were all the errant Frisbees my mind had wanted to chase when it should have been working, or shopping, or trying to sleep. Thoughts that couldn't be fully explored for lack of space.

Fruits of a wayward youth

I spent my entire first week in the cabin working through each text file A to Z. From amsterdam-trip-reflections.txt all the way through to xmas-gifts-2021.txt.

Many felt irrelevant to who I was now.

Others felt revelatory.

I began a fresh note (threads.txt) to tease out the myriad themes, inquiries, and projects tangled across these vast tracts of text. From ~500 text files I pulled ~200 "threads", and wrote the title of each down in pen on an index card.

Everything I'd been passionate about in the last 7 years. Every time a book, or a podcast, or someone's tipsy rant had landed in my brain and flowered into violent spirals of thought. It was all here, finally down in one place.

Personal and collective flourishing, how we could better make sense of our shared condition, why is community so scarce if we all crave it, escaping the meaning crisis, x-risk, governance, prediction markets, crypto, effective altruism, dating, what's up with the free energy principle, memetics, sense-making, dungeons and dragons, gender, origins of life, suffering, living in new york, progress studies, dance, game b, virtue, vocation, factory farming, intentional community, the qualia research institute, charter cities apollo longevity vTaiwan attachment the meta crisis powerlifting axiology circling functional programming awakening biotech aesthetics metamorphosis *mind explodes*

For days I couldn't bear to look at it. I played video games, went for walks, and sat in the woods filling my copy of Walden with underlines (feeling enormously grateful to the EAs I met passing through Berlin, who insisted I buy this weird American book).

Eventually, I felt brave enough to sit down with some tea and frown at it all. Slowly, over the course of a morning, the reality of my situation began to sink in.

For eons, matter and information and who knows what else had been up to something incomprehensibly complicated.
And now, a ways into the whole affair, somehow, apparently, I'm here.
I'm looking out from behind these eyes.
I have these hands.
And now it's my turn to be alive.
To face the same question posed to every ancestor who came before me...

What the actual fuck?

What's going on?

How could it be that this is happening?

How is it that I am??

Who authorised this?

Is it good?

Am I meant to respond to this in some way!?

Every time I'd had some space to slow down a little, this was the question looming over my shoulder. The question that forever makes me feel I need more time, more space, more room to step all the way back. To ask:

How do I want to respond?

How do I want to respond to being human? To waking up a couple billion years into whatever it is that's going on with the fact that reality is unfolding moment to moment, and just plain refuses to stop even though it's obviously absurd.

I realised this is why I'd felt so strongly I needed a month of space just to stop and think. Why I'd spent a week filling a kitchen table with chopped up index cards and bad handwriting.

That I am, and must respond to that fact, is a profoundly vexing puzzle. It was in the context of that puzzle that these threads made sense. They were hints.

This hint came from that drive in 2019, listening to Dave Chalmers explain The Hard Problem of Consciousness. This one here from the book Red Plenty, that one there from a conversation in Mexico. Every time, there's this feeling...

I've no idea what's going on with the fact that I'm alive, but there's something about this idea that seems relevant in some way to what's going on... relevant to the puzzle of what a response commensurate to existing could possibly entail.

Beyond any particular project, or goal, or career trajectory, this is what I want most for my life. To have responded productively to reality as it was presented to me.

I started calling this project I was undertaking "Apollo". After the god of light, logic, and long journeys into the unknown (spawning, obviously, apollo.txt).

Undertaking your own Apollo project is a personal long reflection. It's a response to forgetting, to the cheeky puzzle of existence staring us in the face every second of every day, and to all the hints we chance upon as we engage with reality.

It's a radical act of sensemaking by which you attempt to terraform the landscape over which your time flows. Trying to bend the course of your days just a little more towards your best guess as to what might really matter.

My purpose in going to Walden Pond was not to live cheaply or to live dearly there, but to transact some private business with the fewest obstacles; to be hindered from accomplishing which for want of a little common sense, a little enterprise and business talent, appeared not so sad as foolish.

- Walden

Apollo is going out to where there's no light so as to spy the north star, so you may return home with some manner of almanac to guide your way.

This sequence is my own.

Coming up next: Space
