Tobias Dänzer

Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety)

Also see Wikipedia on affective forecasting and projection bias. (h/t the book Stumbling on Happiness; note that I'm not sure whether this field of study was hit by the replication crisis)

The Unweaving of a Beautiful Thing

Also:

I would not a miss a second of the beauty

-> would not miss

Cheerfully

There can never be too many essays recommending that EAs not push themselves past their breaking point. This essay may not be the most potent take on the concept, but since some essays on optimization-at-all-costs are bound to rank among the most-upvoted EA essays, there should be essays like this one to counterbalance them. For instance, this essay is an expanded take on the concept, but is too new to be eligible for this year's EA Review.

Which are better, short-term or long-term organizations?

That's a perfectly fine attitude to have! In that case I would likely advise donating to short-term charities rather than long-term ones, which are more speculative. I don't have as much experience with the former myself, and so have to defer to e.g. GiveWell's recommended charities and the like.

Also, if you discover in a few years that you're more or less risk-averse than you'd thought, you can still reconsider where to donate.

Finally, if you care about getting as much "bang for your buck" for your EA donations, keep an eye out for roughly yearly recurring donation-matching events like this current one by Double Up Drive (though in that case it's not entirely clear to me whether they match donations from outside the US, or to what extent these donation matches can be considered counterfactual).

Which are better, short-term or long-term organizations?

Asking which is "better" seems like a false question, a type error.

The "gamble" framing works better. Donating to a short-term charity is equivalent to taking a very-high-probability bet for a comparatively small (though not negligible) impact (e.g. distributing malaria nets to prevent malaria infections), which gives your donation moderate expected value (EV). Conversely, long-term charities are more speculative but also more ambitious, so donating to them is like taking a low-probability bet for a comparatively bigger payoff (e.g. averting extinction).
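To make the gamble framing concrete, here is a toy expected-value comparison. All probabilities and payoffs below are made-up placeholders for illustration, not estimates from the comment:

```python
# Toy comparison of two donation "bets" under the EV framing.
# All numbers are illustrative placeholders, not real estimates.

def expected_value(probability: float, payoff: float) -> float:
    """EV of a bet that succeeds with `probability` and pays `payoff`."""
    return probability * payoff

# Short-term charity: near-certain bet, moderate payoff.
short_term_ev = expected_value(probability=0.95, payoff=100)

# Long-term charity: speculative bet, much larger payoff if it succeeds.
long_term_ev = expected_value(probability=0.01, payoff=100_000)

print(f"short-term EV ≈ {short_term_ev}")
print(f"long-term EV ≈ {long_term_ev}")
```

With these (hypothetical) numbers the long-shot bet has roughly 10x the EV, which is the sense in which a risk-tolerant donor might prefer it even though it usually "fails".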

From my perspective, if one is sufficiently risk-tolerant, some of the best long-term charities seem to offer better EV than the very best short-term charities. But I've formed that impression by reading about these topics for years, taking some pretty weird arguments seriously, etc. And I care about EV rather than certain success, so I'm fine with making long-shot bets that may not pay off. In that sense, I treat donating like I would investing.

In your case, it might help to ask yourself what you expect your donations to achieve, and how risk-averse or risk-tolerant you want to be in donating: do you consider a donation more like an investment in some speculative altruistic outcome (which might mean emphasizing EV), or more like a purchase of a certain altruistic outcome (which might mean emphasizing high probability)?

If a purchase doesn't pay off, it feels like you've wasted your money; whereas if an investment doesn't pay off, that's just bad luck (assuming your assessment at the time, that the investment was high-EV, was correct).

Effective Altruism: The First Decade (Forum Review)

What's the copyright situation here? If a crosspost is just a link to a post outside the EA forum, that's one thing; but if it involves copying the entire text to the forum, crossposting presumably requires the permission of the author.

Make a $100 donation into $200 (or more)

I'm interested in directing more matched donations towards MIRI via donation trading with others who would not counterfactually donate to MIRI by themselves. As for myself, I'm from the Less Wrong cluster and so mostly care about x-risk stuff and meta stuff and would e.g. not counterfactually donate to animal charities.

After the trade, we could share screenshots of the confirmation email received after donating, to confirm that the trade has taken place.

If you're interested, send me a message. I expect the remaining matching funds to deplete within the next 24-48 hours, so there isn't much time to coordinate. (EDIT: Given that the fundraiser has at least twice added additional funds, I would no longer consider this as particularly urgent.)

PS: I have donated to GiveWell, Our World in Data, Centre for Effective Altruism, and MIRI, so can't get those donation-matched anymore. EDIT: Link to my donations.

Make a $100 donation into $200 (or more)

To spare others the hassle: while donating by bank is supposedly free of fees, the add-a-bank process asks for a "bank routing number", which as far as I can tell is a US-only concept. So if you're from outside the US, you'll have to donate by another method instead.

Make a $100 donation into $200 (or more)

Thanks for posting this. Have donated. For 3 donations, I got 3×$100 matched, plus $25 from the referral link, plus 3×$10 for ostensibly "sharing" donations, plus $3 from "liking" donations, i.e. $358 in matching funds for $300 donated, which comes to ~119% in matching, minus maybe 2.3% in credit card fees.
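As a quick sanity check of the arithmetic above (all figures taken from the comment itself):

```python
# Sanity check of the donation-matching arithmetic from the comment above.
donated = 3 * 100                      # three $100 donations
matched = 3 * 100 + 25 + 3 * 10 + 3    # matches + referral + "shares" + "likes"
match_rate_pct = matched / donated * 100

print(matched)                # 358
print(round(match_rate_pct))  # 119
```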

Half-assing it with everything you've got

@Aaron_Gertler: FYI, you've rehosted Nate's Replacing Guilt series here on the EA forum. I figured this would be useful when I wanted to read the essays but the Minding Our Way website seemed to be down (unsure if temporary or permanent).

Unfortunately, the images embedded in the posts were not rehosted on the EA forum but still hotlink to the Minding Our Way website. This is suboptimal, since now the images are also down.

When one posts on the Less Wrong forum, images are (re?)hosted on its own content delivery network. IIRC the EA Forum runs on the same software, so maybe there's a way to re-host things here? I don't know the technical details, but the LW mod team might.