Nov 12 - 18
Marginal Funding Week
A week for organisations to explain what they would do with marginal funding.
Dec 3 - 16
Donation Election
A crowd-sourced pot of funds will be distributed amongst three charities based on your votes. Continue donation election conversations here.
$23,947 raised
Intermission
Dec 16 - 22
Pledge Highlight
A week to post about your experience with pledging, and to discuss the value of pledging.
Dec 23 - 31
Donation Celebration
When the donation celebration starts, you’ll be able to add a heart to the banner showing that you’ve done your annual donations.

Quick takes

Hi, Pepijn here, co-founder of the Tien Procent Club (a Dutch org that promotes effective giving) and a highly irregular reader of this forum. Here's my quick take: I think there's an opportunity to make much better EA videos simply by interviewing founders of the most effective non-profits. The medium is the message, and video lends itself perfectly to conveying emotion. There seems to be a lot of room left to produce entertaining, exciting, high-information-density videos on effective non-profits.

Explanation: I know there are some explainers about EA concepts out there, but their main problem is that they're cerebral and often quite abstract. Compare this to how moving, concrete and personal S3 videos are. Check the EA-adjacent episode about Make Sunsets, who are trialling ultra-cheap atmospheric sulphur dioxide injection, and you'll see what I mean. All S3 videos are about deep-tech startups and their passionate, quirky, high-energy founders. These videos were made (at least the first episodes) by a single person, Jason Carman. They excel in: 1) bringing across the infectious energy of startup founders, 2) showcasing ideas and pathways that make the future something to be excited about, and 3) being entertaining because of their high density of information and novel concepts (check the one with Casey Handmer). No need to reinvent the wheel here. A single talented individual could start doing this, simply copying and slightly modifying the S3 style.
Some of my thoughts on funding.

It's giving season, and I want to finally get around to publishing some of my thoughts and experiences around funding. I haven't written anything yet because I feel like I am mostly just revisiting painful experiences and will end up writing some angry rant. I have ideas for how things could be better, so hopefully this can lead to positive change, not just more complaining. All my experiences are in AI Safety.

On timing: certainty is more important than speed. The total decision time matters less than the overdue time. Expecting a decision in 30 days and getting it in 35 is worse than expecting it in 90 days and getting it in 85. Grantmakers publishing statistics about timing expectations makes things worse: if the mean or median response time is N days and it has now been N+5 days, is it appropriate for me to send a follow-up email to check on the status? Technically it's not late yet; it could come tomorrow, or in N more days. Imagine if the Uber app showed you the global mean wait time for the last 12 months and there was no map to track your driver's arrival. "It doesn't have to reduce the waiting time, it just has to reduce the uncertainty" - Rory Sutherland

When I talk with people about their expectations and experiences, what I hear from people in Berkeley is at times very different from what I hear from those outside of Berkeley.

After I posted my announcement about shutting down AISS and my comment on the LTFF update, several people reached out to me about their experiences. Some I already knew well, some I had met, and others I didn't know before. Some of them had received funding a couple of times, but their negative experiences led them to not reapply and to walk away from their work or the ecosystem entirely. At least one mentioned having a draft post about their experience that they did not feel comfortable publishing. There was definitely a point for me where I had already given up but just not realised it. I had already run out of fundi
After following the Ukraine war closely for almost three years, I naturally also watch China's potential for military expansionism. Whereas past leaders of China talked about "forceful if necessary" reunification with Taiwan, Xi Jinping seems like a much more aggressive person, one who would actually do it, especially since the U.S. is frankly showing so much weakness in Ukraine.

I know this isn't how EAs are used to thinking, but you have to start from the way dictators think. Xi, much like Putin, seems to idolize the excesses of his country's communist past, and is a conservative gambler: he will take a gamble if the odds seem sufficiently in his favor. Putin badly miscalculated his odds in Ukraine, but Russia's GDP and population were $1.843 trillion and 145 million, versus $17.8 trillion and 1.4 billion for China. At the same time, Taiwan is much less populous than Ukraine, and its would-be defenders in the USA/EU/Japan are weaker naval powers than China (and would have to operate over a longer range). Last but not least, China is the factory of the world; if they decide they want to pursue world domination militarily, they can probably do that fairly well while simultaneously selling us vital goods at suddenly-inflated prices. So when I hear that China has ramped up nuclear weapon production, I immediately read it as a nod toward Taiwan.

If we don't want an invasion of Taiwan, what do we do? Liberals have a habit of magical thinking in military matters: talking of diplomacy, complaining about U.S. "war mongers", and running protests with "No Nukes" signs. But the invasion of Taiwan has nothing to do with the U.S.; Xi simply *wants* Taiwan and has the power to take it. If he makes that decision, no words will stop him. So the Free World has no role to play here other than (1) to deter and (2) to optionally help out Taiwan if Xi invades anyway. Not all deterrents are military, of course; China and the USA will surely do huge economic damage to each other if China
I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job myself?

In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:

* Working in AIS also promotes the growth of AIS. It would be a mistake to only consider the second-order effects of a job when you're forced to by the lack of first-order effects.
* OK, but focusing on org growth full-time seems surely better for org growth than having it be a side effect of the main thing you're doing.
* One way to think about this is to compare two strategies for improving talent at a target org: "try to find people to move into roles in the org, as part of cultivating a whole talent pipeline into the org and related orgs", versus "put all of your full-time effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be a better strategy?
* I think this is the same intuition that makes pyramid schemes seem appealing (something like: "surely I can recruit at least 2 people into the scheme, and surely they can recruit more people, and surely the norm is that you recruit a tonne of people"). It's really only by looking at the mathematics of the population as a whole that you can see it can't possibly work, and that most people in the scheme must necessarily recruit exactly zero people ever (see the sketch after this list).
* Maybe a pyramid scheme is the extreme of "what if literally everyone in EA work
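The population-level claim in the pyramid-scheme bullet can be made concrete. Below is a minimal sketch (my own illustration, not from the post): in any recruitment tree of n members, exactly n - 1 recruitment events occur, so the mean number of recruits per member is (n - 1)/n < 1; and if everyone who recruits at all brings in at least two people, then at most (n - 1)/2 members recruit anyone, so most members recruit exactly zero.

```python
# A minimal sketch of the pyramid-scheme arithmetic referenced above.
# Assumption (mine, not the poster's): membership forms a tree in which
# every member except the founder was recruited by exactly one other member.
import random

def recruit_counts(n_members: int, seed: int = 0) -> list[int]:
    """Build a random recruitment tree and return how many people
    each member personally recruited."""
    rng = random.Random(seed)
    counts = [0] * n_members
    for newcomer in range(1, n_members):      # member 0 is the founder
        recruiter = rng.randrange(newcomer)   # recruited by some earlier member
        counts[recruiter] += 1
    return counts

counts = recruit_counts(100_000)
n = len(counts)
print(sum(counts) / n)                  # mean recruits per member: (n - 1)/n, always < 1
print(sum(c == 0 for c in counts) / n)  # fraction who recruit nobody: ~0.5 here

# The bound is structural, not a simulation artifact: there are exactly n - 1
# recruitments in any such tree, so if every active recruiter brings in >= 2
# people, at most (n - 1)/2 members recruit anyone at all.
```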
Equal Hands — 2 Month Update

Equal Hands is an experiment in democratizing effective giving. Donors simulate pooling their resources together and voting on how to distribute them across cause areas. All votes count equally, independent of someone's ability to give. You can learn more about it here, and sign up to learn more or join here. If you sign up before December 16th, you can participate in our current round.

As of December 7th, 2024 at 11:00pm Eastern time, 12 donors have pledged $2,915, meaning the marginal $25 donor will move ~$226 in expectation to their preferred cause areas (sketched below). In Equal Hands' first 2 months, 22 donors participated and collectively gave $7,495.01 democratically to impactful charities. Including pledges for its third month, those numbers will likely increase to at least 24 donors and $10,410.01.

Across the first two months, the gifts made by cause area, and their pseudo-counterfactual effect (i.e. compared to people giving their own money in line with their own votes rather than following the democratic outcome), were:

* Animal welfare: $3,133.35, a decrease of $1,662.15
* Global health: $1,694.85, a decrease of $54.15
* Global catastrophic risks: $2,093.91, an increase of $1,520.16
* EA community building: $319.38, an increase of $179.63
* Climate change: $253.52, an increase of $16.52

Interestingly, the primary impact has been money being reallocated from animal welfare to global catastrophic risks. From the very little data that we have, this appears to be primarily because animal welfare-motivated donors are much more likely to pledge large amounts to democratic giving, while GCR-motivated donors are more likely to sign up (or are a larger population in general) but tend to give smaller amounts.

* I'm not sure why exactly this is! The motivation should be the same regardless of cause area for small donors: in expectation, the average vote has moved over $200 to each donor's preferred causes across both of the first two months, so I w
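As a sanity check on the ~$226 figure quoted above, here is a minimal sketch. It assumes (my reading; the post does not spell this out) that each of the n voters directs an equal 1/n share of the pooled total, so a 13th donor adding $25 to the $2,915 pool directs (2,915 + 25) / 13 ≈ $226 in expectation.

```python
# A minimal sketch of the marginal-donor expectation, assuming each of the
# n voters directs an equal 1/n share of the pooled total (my assumption).
def marginal_influence(pool: float, donors: int, pledge: float) -> float:
    """Expected dollars a new donor directs by joining the pool."""
    return (pool + pledge) / (donors + 1)

print(marginal_influence(2_915, 12, 25))  # ~226.15, matching the ~$226 above
```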