Nov 10: Funding strategy week
Nov 17: Marginal funding week
Nov 24: Donation election
Dec 8: Why I donate week – a week to share the personal stories and reasons behind our donations.
Dec 15: Donation celebration
Such a lovely post, Nick! It made me chuckle, and I really appreciate the eclectic sources and how you highlight uncertainties and nuances among Christians.
Makes sense. I should just flag that my consumption is significantly larger than my past spending of 4.50 k€/year suggests, because I live with my family. In case anyone is wondering, the speaker is Jesus in Mark 12:41–44. I see the bet as an investment with high returns; I am planning to count the donations I make as a result of winning the bet.
Hey man. From my perspective I'm at least as impressed by small earners who give high percentages, although obviously there are good utility arguments against this being the most important thing. I'll let a wiser person explain why ;) "Sitting across from the offering box, he was observing how the crowd tossed money in for the collection. Many of the rich were making large contributions. One poor widow came up and put in two small coins—a measly two cents. Jesus called his disciples over and said, 'The truth is that this poor widow gave more to the collection than all the others put together. All the others gave what they'll never miss; she gave extravagantly what she couldn't afford—she gave her all.'" And I would count that 10 k€ bet, why not?
Thanks for the mention, Nick! However, I have not donated much in absolute or relative terms. I have been keeping my assets (money in the bank plus global stocks) equal to 6 times the global real gross domestic product (GDP) per capita, and have only spent 4.50 k€/year (excluding donations) since I started working. Yet I have only donated 13.9 k€ (excluding a transfer of 10 k€ to PauseAI made as part of a bet), 10.7 % of my past net earnings.

Quick takes

They should call ALLFED's research "The Recipice".
Lizka · 2d
When thinking about the impacts of AI, I've found it useful to distinguish between different reasons why automation in some area might be slow. In brief:

1. Raw performance issues
2. Trust bottlenecks
3. Intrinsic premiums for "the human factor"
4. Adoption lag
5. Motivated/active protectionism towards humans

I'm posting this mainly because I've wanted to link to this a few times now when discussing questions like "how should we update on the shape of AI diffusion based on...?". Not sure how helpful it will be on its own!

---

In a bit more detail:

(1) Raw performance issues

There's a task that I want an AI system to do. An AI system might be able to do it in the future, but the ones we have today just can't do it. For instance:

* AI systems still struggle to stay coherent over long contexts, so it's often hard to use an AI system to build out a massive codebase without human help, or to write a very consistent detective novel.
* Or I want an employee who's more independent; we can get aligned on some goals and they will push them forward, coming up with novel pathways, etc.
* (Other capability gaps: creativity/novelty, need for complicated physical labor, …)

A subclass here might be performance issues that are downstream of "interface mismatch".[1] These are cases where AI might be good enough at the fundamental task we're thinking of (e.g. summarizing content, or similar), but where the systems surrounding that task, or the interface through which we're running it (which are trivial for humans), are a very poor fit for existing AI, and AI systems struggle to work around that.[2] (E.g. if the core concept is presented via a diagram, or the task requires computer use.) In some other cases, we might separately consider whether the AI system has the right affordances at all.

This is what we often think about when we think about the AI tech tree / AI capabilities. The others are often important, though:

(2) Veri
Here are some quick takes on what you can do if you want to contribute to AI safety or governance (they may generalise, but no guarantees). Paraphrased from a longer talk I gave, transcript here.

* First, there's still tons of alpha left in having good takes.
  * (Matt Reardon originally said this to me and I was like, "what, no way", but now I think he was right and this is still true – thanks Matt!)
  * You might be surprised, because there are many people doing AI safety and governance work, but I think there's still plenty of demand for good takes, and you can distinguish yourself professionally by being a reliable source of them.
* But how do you have good takes?
  * I think the thing you do to form good takes, oversimplifying only slightly, is you read Learning by Writing and go "yes, that's how I should orient to the reading and writing that I do", then you do that a bunch of times with your reading and writing on AI safety and governance work, then you share your writing somewhere, have lots of conversations with people about it, change your mind, and learn more. That's how you have good takes.
* What to read?
  * Start with the basics (e.g. BlueDot's courses, other reading lists), then work from there on what's interesting × important.
* Write in public.
  * Usually, if you haven't got evidence of your takes being excellent, it's not that useful to just generally voice them. I think having takes and backing them up with some evidence, or saying things like "I read this thing, here's my summary, here's what I think", is useful. But it's kind of hard to get readers to care if you're just like "I'm some guy, here are my takes."
* Some especially useful kinds of writing
  * To get people to care about your takes, you could do useful kinds of writing first, like:
    * Explaining important concepts
      * E.g., evals awareness, non-LLM architectures (should I care? why?), AI control, best arguments for/against sho
I have the impression that the most effective interventions, especially in global health and poverty, are usually temporary, in the sense that you need to keep reinvesting regularly, usually because the intervention provides a consumable good. For example, malaria chemoprevention needs to be provided yearly. In contrast, solutions that seem more permanent in the long term (e.g. a hypothetical malaria vaccination, or building infrastructure) are typically much less cost-effective on the margin because of their high upfront cost. How do we balance pure marginal effectiveness against eventually moving towards more permanent solutions? Could it be that by overly optimising for marginal cost-effectiveness we are missing a better 'global maximum' in the utility landscape, and that we would first need to descend from the current 'local maximum' to reach it?
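One way to make this trade-off concrete is to compare the discounted cost of a recurring intervention with the one-off cost of a permanent one over some time horizon. Below is a minimal sketch in Python; all the figures (costs, discount rate, horizon) are hypothetical placeholders, not real charity data:

```python
# Hypothetical comparison: recurring intervention vs. one-off "permanent" fix.
# All numbers are illustrative assumptions, not real cost-effectiveness data.

def npv_recurring_cost(annual_cost: float, rate: float, years: int) -> float:
    """Present value of paying `annual_cost` at the end of each year for `years` years."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

annual_recurring = 7.0     # e.g. $/person/year for a consumable intervention
one_off_permanent = 120.0  # e.g. $/person for a durable fix to the same problem
discount_rate = 0.04       # annual discount rate
horizon = 40               # years over which both avert the same annual harm

recurring = npv_recurring_cost(annual_recurring, discount_rate, horizon)
print(f"Recurring, discounted over {horizon} years: ${recurring:.0f}/person")
print(f"One-off permanent cost:                     ${one_off_permanent:.0f}/person")

# Break-even horizon: the permanent option only wins if its benefits
# persist at least this many years.
break_even = next(
    n for n in range(1, 500)
    if npv_recurring_cost(annual_recurring, discount_rate, n) >= one_off_permanent
)
print(f"Permanent option is cheaper once benefits last >= {break_even} years")
```

With these made-up numbers the permanent option only comes out ahead if its benefits persist for roughly three decades, which is one way of seeing why it can look worse on the margin even if it is the better long-run target.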
Longtermism is a spectacular intellectual failure. It’s been eight years and there are zero good ideas for longtermist interventions other than those that long predate longtermism. What does longtermism recommend we actually do differently? Absolutely nothing.