Nov 10
Funding strategy week
Nov 17
Marginal funding week
Nov 24
Donation election
Dec 8
Why I donate week
A week to share the personal stories and reasons behind our donations.
Dec 15
Donation celebration
Conscious Meaning

We share every moment with trillions of other conscious beings. Some are much like us; others experience the world very differently: creatures without a language to structure their thoughts, some who see broader spectrums of light, others who might experience the world in comparative slow motion. Each conscious moment immediately slips into the past, largely unobserved and forgotten. They fall through time like snow to become frozen in the past, always to have happened just as they did. Each conscious moment is transient and one small part of a vast whole, so one could see any individual as meaningless and insignificant. But every conscious moment is imbued with meaning: happiness that need not justify itself, and pains that consume any desire but to escape them.

As individuals, we are not responsible for the state of the world. You did not choose to create disease, poverty and mental illness. You can't control nature, and you can't control the society around you.

Many schools of philosophy disagree on exactly what our moral obligations to others are. Given this disagreement, we could default to radical scepticism: the view that all attempts to decide the right way to live are mere pretension. We could ignore the plight of others and live our lives solely for our own immediate gratification. In doing so, we would feed into many of the callous systems set in place for our own comfort: the destruction of our planet, the exploitation of labour and the abject horror of factory farming.

To me, it is clear that the default path of apathy is a tacit endorsement of the suffering and inequity in the world. To live life well, we need to take responsibility for what we can control: ourselves. You can control you; we are responsible for our intentions and actions. How you respond to the suffering and injustices of the world is on you.

Ovarian lottery

However, our place in the world is not. Much of our lives is largely outside our control,
Thanks for asking, JD! It is also good to know Nick played a role in your interest! I would like to see more research informing how to i) increase the welfare of soil animals, and ii) compare hedonistic welfare across species. Rethink Priorities (RP) has a research agenda covering the latter. I am planning to donate $3k over the next few months to a project on the welfare of springtails, mites, or nematodes. It is not public, but it will most likely start next year. I hope there will be more related projects in the future. People interested in funding research informing how to increase the welfare of soil animals are welcome to fill in this very short form. My last substantial donations went to the Arthropoda Foundation. Here is their case for funding. As I commented there, I would like them to focus more on soil animals. They have so far only made grants targeting farmed arthropods. However, I still think funding Arthropoda is the best publicly available opportunity to increase the welfare of soil animals.
(I've been lurking your posts, inspired partially by Nick's high opinion of you, partially by my interest in helping animals as effectively as possible!)
Thank you for your generosity! What animals are you donating to these days? Or are you building up a kind of DAF-like instrument for disbursing later?

Quick takes
They should call ALLFED's research "The Recipice".
I live in Australia, and am interested in donating to the fundraising efforts of MIRI and Lightcone Infrastructure, to the tune of $2,000 USD for MIRI and $1,000 USD for Lightcone. Neither of these is tax-advantaged for me. Lightcone is tax-advantaged in the US, and MIRI is tax-advantaged in a few countries according to their website. Anyone want to make a trade, where I donate the money to a tax-advantaged charity in Australia that you would otherwise donate to, and you make these donations? As I understand it, anything in Effective Altruism Australia would work. Since my marginal tax rate is expected to be about 1/3rd this year, I'm open to matching up to 4.5k USD for this instead of 3k, which will cost me about 3k in the long run. This will not funge against my existing 10% donations to global health; it's on top of them, so you get all that sweet, sweet counterfactual impact. If he's still around, @Mitchell Laughlin🔸 can confirm that I've successfully done a match like this before, across a longer timeframe and larger total amount.
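A minimal sketch of the arithmetic behind the match above, assuming (as in the post) a flat marginal tax rate of roughly 1/3; the rate and amounts are taken from the post, not verified:

```python
# Out-of-pocket cost of a tax-deductible donation, assuming a
# flat marginal tax rate (a simplification of real tax brackets).

def net_cost(donation_usd: float, marginal_tax_rate: float) -> float:
    """Cost to the donor after claiming the tax deduction."""
    return donation_usd * (1 - marginal_tax_rate)

# Original ask: $2,000 (MIRI) + $1,000 (Lightcone) = $3,000,
# with no deduction available in Australia, so the full $3,000 is paid.
undeducted = 2000 + 1000

# Routing up to $4,500 through a tax-advantaged Australian charity
# instead costs about the same $3,000 after the ~1/3 deduction.
deducted = net_cost(4500, 1 / 3)

print(undeducted)       # 3000
print(round(deducted))  # 3000
```

This is why a 4.5k match "costs about 3k in the long run": the deduction refunds roughly a third of the routed donation.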
The mental health EA cause space should explore more experimental, scalable interventions, such as promoting anti-inflammatory diets at school/college cafeterias to reduce depression in young people, or using lighting design to reduce seasonal depression. What I've seen of this cause area so far seems focused on psychotherapy in low-income countries. I feel like we're missing some more out-of-the-box interventions here. Does anyone know of any relevant work along these lines? 
Lizka
3d
When thinking about the impacts of AI, I've found it useful to distinguish between different reasons why automation in some area might be slow. In brief:

1. raw performance issues
2. trust bottlenecks
3. intrinsic premiums for "the human factor"
4. adoption lag
5. motivated/active protectionism towards humans

I'm posting this mainly because I've wanted to link to this a few times now when discussing questions like "how should we update on the shape of AI diffusion based on...?". Not sure how helpful it will be on its own!

----------------------------------------

In a bit more detail:

(1) Raw performance issues

There's a task that I want an AI system to do. An AI system might be able to do it in the future, but the ones we have today just can't. For instance:

* AI systems still struggle to stay coherent over long contexts, so it's often hard to use an AI system to build out a massive codebase without human help, or to write a very consistent detective novel.
* Or I want an employee who's more independent; we can get aligned on some goals and they will push them forward, coming up with novel pathways, etc.
* (Other capability gaps: creativity/novelty, need for complicated physical labor, …)

A subclass here might be performance issues that are downstream of "interface mismatch".[1] These are cases where AI might be good enough at the fundamental task we're thinking of (e.g. summarizing content, or similar), but where the systems that surround that task, or the interface through which we're running it (trivial for humans), are a very poor fit for existing AI, and AI systems struggle to get around that.[2] (E.g. the core concept is presented via a diagram, or the task requires computer use.) In some other cases, we might separately consider whether the AI system has the right affordances at all.

This is what we often think about when we think about the AI tech tree / AI capabilities. The others are often important, though:

(2) Veri
Here's some quick takes on what you can do if you want to contribute to AI safety or governance (they may generalise, but no guarantees). Paraphrased from a longer talk I gave, transcript here.

* First, there's still tons of alpha left in having good takes.
  * (Matt Reardon originally said this to me and I was like, "what, no way", but now I think he was right and this is still true – thanks Matt!)
  * You might be surprised, because there are many people doing AI safety and governance work, but I think there's still plenty of demand for good takes, and you can distinguish yourself professionally by being a reliable source of them.
* But how do you have good takes?
  * I think the way you form good takes, oversimplifying only slightly, is: you read Learning by Writing and go "yes, that's how I should orient to the reading and writing that I do", then you do that a bunch of times with your reading and writing on AI safety and governance work, and then you share your writing somewhere, have lots of conversations with people about it, change your mind and learn more. That's how you have good takes.
* What to read?
  * Start with the basics (e.g. BlueDot's courses, other reading lists), then work from there on what's interesting x important.
* Write in public
  * Usually, if you haven't got evidence of your takes being excellent, it's not that useful to just generally voice your takes. I think having takes and backing them up with some evidence, or saying things like "I read this thing, here's my summary, here's what I think", is useful. But it's kind of hard to get readers to care if you're just like "I'm some guy, here are my takes."
* Some especially useful kinds of writing
  * In order to get people to care about your takes, you could do useful kinds of writing first, like:
    * Explaining important concepts
      * E.g., evals awareness, non-LLM architectures (should I care? why?), AI control, best arguments for/against sho