Hi Bella, thank you for writing this up! Are you willing to share more granular performance data for the different marketing efforts to help other orgs estimate the expected cost and performance of paid advertising?
Hello! One point that seems important to make: "People in the space" being skeptical of a startup idea, or even being confident it's a bad idea, is not good evidence that it's a bad idea.
Whilst we can expect subject matter experts to be skeptical of ideas that turn out to be bad, we can also expect them to be skeptical of a lot of ideas that turn out to be good!
This is true of many extremely successful for-profit start-ups (it's mentioned in Y-Combinator lectures a lot) and of non-profits as well, including many of CE's most successful incubated char...
It's great that you've taken the time to write this up even though the conclusion was not to recommend.
Also, as Karen Levy pointed out in her ep on the 80k podcast, adoption by LMIC governments effectively means the taxpayers of these countries are the ones who pay. Sustained service by an internationally funded charity represents a desirable wealth transfer to LMICs. A better outcome than the charity graduating from being funded by EAs to being funded by LMIC governments would be for it to graduate to being funded by big funders with cheap counterfactuals, like USAID.
(To play devil’s advocate to myself: If government adoption means capacity building in LMIC healthcare...
“There's no life bad enough for us to try to actively extinguish it when the subject itself can't express a will for that” - holding this view while also thinking that it’s good to prevent the existence of factory farmed chickens would need some explaining IMO.
Also, the claim that Michael’s line of reasoning is “weird and bad” seems to imply that it being “weird” should count against it in some way, just as it being “bad” should count against it. But why/how exactly? After all, from most people’s perspective caring about shrimp at all is weird.
Hello, member of the incubation program team here! There has been no change in our thinking on the optimal number of co-founders. This is a rare scenario where 3 makes sense :) The reasons it made sense in this case are idiosyncratic to the individuals involved and their career plans, so I won’t speak to that here, but I’m sure they’d be happy to explain the context 1:1 if you’re interested!
Fantastic work! It’s awesome to see a national EA chapter taking on such an ambitious project and having the follow through to make it happen.
I just want to clarify what you mean when you say “most of our research is empirical and quasi-experimental designed (and RCTs when possible) based on the outputs of each nonprofits”: I assume this means that your research uses existing empirical (preferably quasi-experimental, or better yet RCT) evidence. You don’t mean you’re actually conducting or funding any primary research, right? I ask because that would be insanely cheap and fast.
Manifold is a lot lower than I expected given it's a tech platform that presumably requires a bunch of dev hours!
Nice stuff! In particular I think “Finalists will also get to talk to other incubatees from that cohort about what it was like to work with your future co-founder” is an excellent feature.
Good stuff Jona! I agree on all fronts.
Re: #2, at Charity Entrepreneurship for example, we should have ToCs for our Incubation Program, Grantmaking Program and Research Training Program, but we don't yet. We have a fairly polished one for the Incubation Program, and a few different ones drafted for the new Research Program we're planning, but we haven't written one down for our Grantmaking Program, so here I am again not practicing what I preach. Looks like we have work to do :)
I'm broadly in favour of automation and against jobs for jobs' sake, so I agree with this post :)
However I do think that we need to invest heavily in making sure that the transition to a jobless or low-job society goes well. Currently, many people's identity and self-worth is tied up in their jobs... having a job is a prerequisite for getting a romantic partner in a lot of the world etc. I'd like to see more ideas about how to manage this transition.
Meanwhile, small quibble: I don't agree that thinking is uniquely human (what about non-human animals, and in the future, digital minds?)
~$120,000 (sans benefits). It varies greatly by role and location. You can get a sense for roughly what a given role might pay by looking at our job postings. As mentioned elsethread, these salaries are aimed at not being huge sacrifices for tech workers living in expensive American cities, while also not being egregiously luxurious in lower-salary places like Oxford. I imagine some engineers might look at that number and think it’s low compared to their expectations, and some non-engineer Brits might think it’s quite high. I encourage you t...
I don’t think someone being young should be weighted highly in the assessment of their capacity to give good grants. I also think it’s important to remember that the majority of philanthropists come to have the power to give out grants due to success in the for-profit world and/or through good fortune, neither of which are necessarily correlated with being well positioned to give good grants. As a result, I don’t think the bar that Rachel needs to meet is so high that we should think that it’s unlikely that her being chosen as a regranter is based on merit.
That being said, the optics aren’t great so I understand where the original commenter is coming from.
Would banning exports of cages be a net positive for animals, or would it make transitioning to cage-free in high-income countries so much more expensive (with developing countries still able to buy new cages) that it would be net negative for animals?
I wonder whether Animal Policy International should consider bundling bans of equipment that would be used to produce animal products that don’t meet local standards in with the import bans they’re campaigning for.
Hi Mark, I found your perspective really interesting. Your critiques make a lot of sense, but I’m unclear on what using the mixed methods you mention would look like in practice. Is there anywhere you can point us to in order to learn more about the approaches to deciding what interventions to prioritise that you advocate for?
Thank you! I totally agree. There is something to be said for taking a weekend to step back and think about EA topics outside the specific things you think about day to day. I get the sense that some people feel pressured to book as many 1-1s as possible and many of these end up being low value.
You’re right, and so it is a top priority! Others can say more as to the current hypotheses on how to do so.
I would use this! I go back and forth on whether I should give money to beggars. Whilst I think the answer to this question depends on the specific location and context, this app would make the “but I should rather give that discretionary money to an effective charity” option a lot more realistic.
The idea makes a lot of sense, but my guess is that the circumstance where the cost is driven by the intervention itself isn’t that common: In the context of charities, we’re thinking about applying RCTs to test whether an intervention works. Generally the intervention is happening anyway. The cost of RCTs then doesn’t come from applying the intervention to the treatment group - it comes from establishing the experimental conditions where you have a randomised group of participants and the ability to collect data on them.
Very pleased to see this! I'd love to see more focus from EA orgs (and others of course) on the fundamentals of being an effective nonprofit (e.g. having a strong, well-evidenced theory of change, and using M&E to test the weakest links in that theory of change and measure impact).
In particular, on theory of change, I'd like to add the following impassioned rant:
A non-profit’s theory of change is analogous to a business model in the for-profit world. Just as you wouldn’t found a company without a clear business model (and nobody would fund you), ...
I think OP’s idea is not to get longtermists to switch back, but to insulate neartermists from the harms that one might argue come from sharing a broader movement name with the longtermist movement.
Thanks so much for this interesting post - this framing of wellbeing had never occurred to me before. On the first example you use to explain why you find the capabilities framing to be more intuitive than a preference framing: can't we square your intuition that the second child's wellbeing is better with preference satisfaction by noting that people often have a preference to have the option to do things they don't currently prefer? I think this preference comes from (a) the fact that preferences can change, so the option value is instrumentally useful, ...
Thanks for the thoughtful question Joel!
I'll take this question in three parts:
(1) Why not just give the money to strong existing foundations whose values match your own?
Got it. FTX wasn't Y-combinator incubated right? (A quick google doesn't seem to suggest it was). Not that that nullifies your point - I'm just clarifying
That's correct. YCombinator is a convenience sample for rapid-growth companies. I would find it helpful if people repeated this for other samples; the next one on my mind currently is Sequoia-backed companies.
It’s interesting to me, because many entrepreneurs like myself get into entrepreneurship with (we sincerely believe) a goal of making the world a better place. Some are seemingly frauds. It is good to read this, to gain perspective on what not to do.
Exciting stuff! Looking forward to seeing what you come up with. I agree that the movement has not been systematic enough on cause prioritisation.
One thing I'm curious about... where do you draw the line on:
(a) Where one cause ends and the other begins / how to group causes:
For example, aren't fungal diseases, nuclear war and asteroids all sub-causes of global health, in that we only (or at least mainly) care about them insofar as they threaten global health? AI safety is the same (except that in addition to mattering because it threatens health, it a...
Defenders of objective list theories might object to the previous two monistic theories on the grounds that they are naively simplistic in holding that well-being can be reduced to a single element: life is far more complicated than that (Fletcher, 2013).
I don't see how this objection makes sense. A desire (or preference) account of wellbeing effectively means that wellbeing is about maximising a very long, potentially infinite, list of values. It's objective list theory that over-simplifies wellbeing by reducing it to a handful of values.
+1 to this. I've been struggling to figure out what seems wrong with every account of wellbeing and every form of utilitarianism I'd come across so far, and the answer was the lack of this account of wellbeing.
Preference utilitarianism, in which a ubiquitous preference is to have quality subjective experiences, and where the quality of subjective experience is understood in terms of tranquilism, is by far the most accurate-seeming account of wellbeing I've come across so far.
Hi there! Is there anywhere you can direct me to that makes the case that constant replacement occurs? In what sense do we stop existing and get replaced by a new person each moment? What is your reason for believing this? This is stated in the post but not justified anywhere. Apologies if I have missed it somewhere. I also tried googling 'constant replacement', 'constant replacement self', 'constant replacement identity' etc. and couldn't find more on this.
Thank you for your response! Makes sense. I'm not 100% convinced on the last point, but a few of your articles and 80k podcast appearances have definitely shifted me from thinking that E2G is unambiguously the best way for me to maximise the amount of near-term suffering I can abate, to thinking that direct work is a real contender. So thanks!!
The link to "Why do so few EAs and Rationalists have children?" is broken and I can't find it online but am keen to read it. Does anyone know where to find it? Thanks
Hi there!
I'm a bit confused about the claim that the bottleneck is ways to deploy funding rather than funding itself.
In global poverty and health cause areas for example, there are highly scalable EA-endorsed interventions like insecticide treated bed nets, deworming and cash transfers, and there are still plenty of people with malaria, children to deworm, and folks below the poverty line who could receive cash transfers. As far as I'm aware, AMF, Deworm the World / SCI and GiveDirectly could deploy more funds, and to the extent that they neede...
Thanks for the prompt response! DM'd you