All of Aidan Alexander's Comments + Replies

Hi Bella, thank you for writing this up! Are you willing to share more granular performance data for the different marketing efforts to help other orgs estimate the expected cost and performance of paid advertising?

9
Bella
24d
Hey Aidan! I'm not sure — I didn't do this in this post/didn't have any plans to, mostly because I'm unsure how much our experiences would generalise to different contexts. Performance of our ads within the same channel can vary by up to a couple orders of magnitude, so I'm just not sure how helpful it'd be for others. That said, if you're considering a specific project, I'd probably be happy for me or one of my team to chat to you about it based on our experience?

Hello! One point that seems important to make: "People in the space" being skeptical of a startup idea, or even being confident it's a bad idea, is not good evidence that it's a bad idea.

Whilst we can expect subject matter experts to be skeptical of ideas that turn out to be bad, we can also expect them to be skeptical of a lot of ideas that turn out to be good! 

This is true of many extremely successful for-profit start-ups (it's mentioned in Y-Combinator lectures a lot) and of non-profits as well, including many of CE's most successful incubated char... (read more)

9
mildlyanonymous
1mo
I think this oversimplifies something a lot more complex, and I’m surprised it’s a justification you use for this. Of course on some level what you’re saying is correct in many cases. But imagine you recommend a global health charity to be launched. GiveWell says “you’re misinterpreting some critical evidence, and this isn’t as impactful as you think”. Charities on the ground say “this will impact our existing work, so try doing it this other way”. You launch the intervention anyway. The founders immediately get the same feedback, including from trying the intervention, then pivot to coordinating more and aligning with external experts. This seems much more analogous to what happens in the animal space, and it seems absolutely like a good indicator when people are skeptical. Charities aren’t for-profits, which exist in a vacuum of their own profitability. They are part of a broader ecosystem.

Great that you’ve taken the time to write this up even though the conclusion was not to recommend

Also, as Karen Levy pointed out in her ep on the 80k podcast, adoption by LMIC governments effectively means the taxpayers of these countries are the ones to pay. Sustained service by an internationally funded charity represents a desirable wealth transfer to LMICs. Better than the charity graduating from being funded by EAs to being funded by LMIC governments would be for it to graduate to being funded by big funders with cheap counterfactuals like USAID.

(To play devil’s advocate to myself: If government adoption means capacity building in LMIC healthcare... (read more)

2
freedomandutility
4mo
I don't think adoption by LMIC governments removes the desirable wealth transfer to LMICs. I think most of the wealth transfers to LMICs will continue via other NGOs. CGD have some interesting work making the case that governments should focus on prioritising the most cost-effective health services, and donors, whose funding is less reliable, should focus on additional, less cost-effective stuff - https://www.cgdev.org/blog/putting-aid-its-place-new-compact-financing-health-services
5
Ben Williamson
5mo
I definitely agree with and share the concerns over government adoption as a silver bullet of sorts on the charity's side. Outsourcing all the costs when the government's money and resources are more counterfactually precious than the charity's is not the way we want things to go! Our aim is closer to your last sentence: government adoption to leverage the cost-savings from delivery through their existing systems of training/data collection/material distribution, with MHI continuing to pay for the costs involved that the government wouldn't incur anyway.

“There's no life bad enough for us to try to actively extinguish it when the subject itself can't express a will for that” - holding this view while also thinking that it’s good to prevent the existence of factory farmed chickens would need some explaining IMO.

Also, the claim that Michael’s line of reasoning is “weird and bad” seems to imply that it being “weird” should count against it in some way, just as it being “bad” should count against it. But why/how exactly? After all, from most people’s perspective caring about shrimp at all is weird.

4
HenryStanley
6mo
Agreed that this seems nonsensical on its face.

Hello, member of the incubation program team here! There has been no change in our thinking on the optimal number of co-founders. This is a rare scenario where 3 makes sense :) The reasons it made sense in this case are idiosyncratic to the individuals involved and their career plans, so I won’t speak to that here, but I’m sure they’d be happy to explain the context 1:1 if you’re interested!

Fantastic work! It’s awesome to see a national EA chapter taking on such an ambitious project and having the follow through to make it happen.

I just want to clarify what you mean when you say “most of our research is empirical and quasi-experimental designed (and RCTs when possible) based on the outputs of each nonprofits”: I assume this means that your research uses existing empirical (preferably quasi-experimental or, better yet, RCT) evidence. You don’t mean you’re actually conducting or funding any primary research, right? I ask because that would be insanely cheap and fast

2
Yonatan Schoen
8mo
Good point, thanks for bringing this up. As you've mentioned, given the time constraints of the program, it was unrealistic for all data collection to occur within the program timeframe. So in most cases, we relied on high-quality evidence from prior research conducted by the nonprofits. So yes, the cost per nonprofit I mentioned doesn't include data collection in most cases (which can be VERY costly, specifically when doing RCTs). Some assessments did leverage experiments that had started before and continued during the program. And in select cases, we assisted nonprofits with parts of their data collection and experiment implementation through facilitation.

Manifold is a lot lower than I expected given it's a tech platform that presumably requires a bunch of dev hours!

Nice stuff! In particular I think “Finalists will also get to talk to other incubatees from that cohort about what it was like to work with your future co-founder” is an excellent feature.

Good stuff Jona! I agree on all fronts. 

Re: #2, at Charity Entrepreneurship for example, we should have ToCs for our Incubation Program, Grantmaking Program and Research Training Program, but we don't yet have all three. We have a fairly polished one for the Incubation Program, and a few different ones drafted for the new Research Program we're planning, but we haven't written one down for our Grantmaking Program, so here I am again not practicing what I preach. Looks like we have work to do :) 

I'm broadly in favour of automation and against jobs for jobs' sake, so I agree with this post :)

However, I do think that we need to invest heavily in making sure that the transition to a jobless or low-job society goes well. Currently, many people's identity and self-worth are tied up in their jobs... having a job is a prerequisite for getting a romantic partner in a lot of the world, etc. I'd like to see more ideas about how to manage this transition. 

Meanwhile, small quibble: I don't agree that thinking is uniquely human (what about non-human animals, and in the future, digital minds?)

~$120,000 (sans benefits). It varies greatly by role and location. You can get a sense for roughly what a given role might pay by looking at our job postings. As mentioned elsethread, these salaries are aimed at not being huge sacrifices for tech workers living in expensive american cities, while also not being egregiously luxurious in lower salary places like Oxford. I imagine some engineers might look at that number and think it’s low compared to their expectations, and some non-engineer Brits might think it’s quite high. I encourage you t... (read more)

2
pseudonym
10mo
The salary of the content specialist is here. It sounds like the 1.5 million does not include $ on the forum team or facilitators - what does the total spending/average salary look like if you include these groups?

I don’t think someone being young should be weighted highly in the assessment of their capacity to give good grants. I also think it’s important to remember that the majority of philanthropists come to have the power to give out grants due to success in the for-profit world and/or through good fortune, neither of which are necessarily correlated with being well positioned to give good grants. As a result, I don’t think the bar that Rachel needs to meet is so high that we should think that it’s unlikely that her being chosen as a regranter is based on merit.

That being said, the optics aren’t great so I understand where the original commenter is coming from.

Would banning exports of cages be a net positive for animals, or would it make transitioning to cage-free in high income countries so much more expensive, with developing countries still able to buy new cages, such that it would be negative for animals?

I wonder whether Animal Policy International should consider bundling bans of equipment that would be used to produce animal products that don’t meet local standards in with the import bans they’re campaigning for.

6
Jason
10mo
Or buyback programs, in some cases, under which the cages would be recycled or destroyed.

We've decided as a society to grandfather a lot of stuff that isn't up to current standards for the rest of the stuff's useful life. For instance, I'm generally allowed to continue using older buildings that don't meet accessibility standards . . . but if I significantly update the building or build a new one, I have to meet current standards. Grandfathering is often relatively uncontroversial, as it can be justified on both fairness and rule-utilitarian grounds. In a circumstance where you don't want to grandfather, there's going to be a deadweight loss someone has to bear. I'd characterize allowing cage export as the equivalent of partial grandfathering -- the prior owners recoup a portion, but only a portion, of the remaining value of their capital investment.

I don't know much about intercontinental shipping or customs, but I imagine the companies selling the cages after the ban are making significantly less than is being charged for them in Africa. Farmers are a powerful lobby in many countries, and farming is often a low-margin business. Moreover, to the extent individual farmers would be bearing the deadweight loss, they are often in lots of debt and are generally sympathetic to the general public. So a ban on export is likely to be politically difficult and/or require a longer transition period.

If all that is correct, it might be better in some cases to couple an export ban with a publicly-funded buyback program that paid as much as the previous owners could have counterfactually received from the third-party market. This is only true if you think the export ban would have a counterfactual impact on the number of cages in use (which may depend on possible alternative locations and transit costs).

Hi Mark, I found your perspective really interesting. Your critiques make a lot of sense, but I’m unclear on what using the mixed methods you mention would look like in practice. Is there anywhere you can point us to in order to learn more about the approaches to deciding what interventions to prioritise that you advocate for?

Thank you! I totally agree. There is something to be said for taking a weekend to step back and think about EA topics outside the specific things you think about day to day. I get the sense that some people feel pressured to book as many 1-1s as possible and many of these end up being low value.

You’re right, and so it is a top priority! Others can say more as to the current hypotheses on how to do so

I would use this! I go back and forth on whether I should give money to beggars. Whilst I think the answer to this question depends on the specific location and context, this app would make the “but I should rather give that discretionary money to an effective charity” option a lot more realistic.

The idea makes a lot of sense, but my guess is that the circumstance where the cost is driven by the intervention itself isn’t that common: In the context of charities, we’re thinking about applying RCTs to test whether an intervention works. Generally the intervention is happening anyway. The cost of RCTs then doesn’t come from applying the intervention to the treatment group - it comes from establishing the experimental conditions where you have a randomised group of participants and the ability to collect data on them.

1
Rory Fenton
1y
Hey Aidan-- that's a good point. I think it will probably apply to different extents for different cases, but probably not to all cases. Some scenarios I can imagine:

1. A charity uses its own funds to run an RCT of a program it already runs at scale:
   * In this case, you are right that treatment is happening "anyway" and in a sense the $ saved in having a smaller treatment group will just end up being spent on more "treatment", just not in the RCT.
   * Even in this case I think the charity would prefer to fund its intervention in a non-RCT context: providing an intervention in an RCT context is inherently costlier than doing it under more normal circumstances. For example, if you are delivering assets, your trucks have to drive past control villages to get to treatment ones, increasing delivery costs.
   * That's pretty small though; I agree that otherwise the intervention is basically "already happening" and the effective savings are smaller than implied in my post.
   * That said, if the charity has good reason to think their intervention works and so spending more on treatment is "good", the value of the RCT in the first place seems lower to me.
2. A charity uses its own funds to run an RCT of a trial program it doesn't operate at scale:
   * In this case, the charity is running the RCT because it isn't sure the intervention is a good one.
   * Reducing the RCT treatment group frees up funds for the charity to spend on the programs that it does know work, with overall higher EV.
3. A donor wants to fund RCTs to generate more evidence:
   * The donor is funding the RCT because they aren't sure the intervention works.
   * Keeping RCT costs lower means they can fund more RCTs, or more proven interventions.
4. A charity applies for donor funds for an RCT of a new program:
   * In this case, the cheaper study is more likely to get funded, so the larger control/smaller treatment is a better option for the charity.

Overall, I think cases 2/3/4 benefit
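The unequal-allocation idea in this exchange has a standard formalisation that the thread doesn't spell out: for a difference-in-means estimate with equal outcome variance in both arms, the variance-minimising split of a fixed budget follows the square-root rule, n_t / n_c = sqrt(c_c / c_t). A minimal sketch (the function name and dollar figures are illustrative assumptions, not numbers from the thread):

```python
import math

def optimal_allocation(cost_treatment, cost_control, budget):
    """Split a fixed budget between treatment and control arms.

    Assuming equal outcome variance in both arms, the variance-minimising
    ratio of sample sizes is n_t / n_c = sqrt(c_c / c_t) (square-root rule).
    Costs are per participant; budget is the total, in the same currency.
    """
    ratio = math.sqrt(cost_control / cost_treatment)  # n_t per unit of n_c
    # Solve n_c * (c_c + ratio * c_t) = budget for n_c
    n_control = budget / (cost_control + ratio * cost_treatment)
    n_treatment = ratio * n_control
    return round(n_treatment), round(n_control)

# Hypothetical numbers: treatment costs $120/participant (delivery plus
# data collection), control costs $20 (data collection only).
print(optimal_allocation(cost_treatment=120, cost_control=20, budget=50_000))
# → (296, 725)
```

With treatment six times costlier per participant than control, the rule enrols roughly 2.4 control participants per treated one, rather than the 1:1 split of a naive design.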

I'm getting a 404 error at that link

Very pleased to see this! I'd love to see more focus from EA orgs (and others of course) on the fundamentals of being an effective nonprofit (e.g. having a strong, well-evidenced theory of change, and using M&E to test the weakest links in that theory of change and measure impact).

In particular, on theory of change, I'd like to add the following impassioned rant: 

A non-profit’s theory of change is analogous to a business model in the for-profit world. Just as you wouldn’t found a company without a clear business model (and nobody would fund you), ... (read more)

I think OP’s idea is not to get longtermists to switch back, but to insulate neartermists from the harms that one might argue come from sharing a broader movement name with the longtermist movement.

Thanks so much for this interesting post - this framing of wellbeing had never occurred to me before. On the first example you use to explain why you find the capabilities framing to be more intuitive than a preference framing: can't we square your intuition that the second child's wellbeing is better with preference satisfaction by noting that people often have a preference to have the option to do things they don't currently prefer? I think this preference comes from (a) the fact that preferences can change, so the option value is instrumentally useful, ... (read more)

8
ryancbriggs
1y
Glad you found this interesting, and you have my sympathies as another walking phone writer. A few people have raised similar questions about preference structures, so I can give perhaps a sharper example that addresses this specific point. I left a lot out of my original post in the interest of brevity, so I'm happy to expand more in the comments.

Probably the sharpest example I can give of a place where a capability approach separates from preference satisfaction is the issue of adaptive preferences. This is an extended discussion, but the gist of it is that it seems not so hard to come up with situations where the people in a given situation do not seem upset by some x even though upon reflection (or with full/better information), they might well be upset. There is ample space for this in the capability approach, but there is not in subjective preference satisfaction. This point is similar in spirit to my women in the 1970s example, and similar to where I noted in the text that "Using subjective measures to allocate aid means that targeting will depend in part on people’s ability to imagine a better future (and thus feel dissatisfaction with the present)." The chapter linked above gives lots of nice examples and has good discussion.

If you want a quick example: consider a case where women are unhappy because they lack the right to vote. In the capability approach, this can only be addressed in one way, which is to expand their capability to vote. In preference satisfaction or happiness approaches, one could also do that, or one could shape the information environment so that women no longer care about this, and this would fix the problem of "I have an unmet preference to vote" and "I'm unhappy because I can't vote." I prefer how the capability approach handles this. The downside to the way the capability approach handles this is that even if the women were happy about not voting the capability approach would still say "they lack the capability to vote" and wou

Thanks for the thoughtful question Joel! 

I'll take this question in three parts:

(1) Why not just give the money to strong existing foundations whose values match your own?

  • Greater funder diversity is a good thing. 
    It contributes to worldview diversification within EA, and reduces the chance that important cause areas go unfunded (more on this below).
  • Existing foundations don't have the capacity to do everything. 
    The world is big - there are a lot of problems to solve, a lot of potential solutions to consider funding, and a lot that needs to be
... (read more)

Got it. FTX wasn't Y-combinator incubated right? (A quick google doesn't seem to suggest it was). Not that that nullifies your point - I'm just clarifying

That's correct. YCombinator is a convenience sample for rapid-growth companies. I would find it helpful if people repeated this for other samples; the next one on my mind currently is Sequoia-backed companies.

I’m probably being daft.. but what does this have to do with effective altruism?

My thought was: base rates for FTX-style disasters?

It’s interesting to me, because many entrepreneurs like myself get into entrepreneurship with (we sincerely believe) a goal of making the world a better place. Some are seemingly frauds. It is good to read this, to gain perspective on what not to do.

Partial Identification, rest assured I downvoted because your comment is low quality

Exciting stuff! Looking forward to seeing what you come up with. I agree that the movement has not been systematic enough on cause prioritisation. 

One thing I'm curious about.. where do you draw the line on:

(a) Where one cause ends and the other begins / how to group causes:

For example, aren't fungal diseases, nuclear war and asteroids all sub-causes of global health, in that we only (or at least mainly) care about them insofar as they threaten global health? AI safety is the same (except that in addition to mattering because it threatens health, it a... (read more)

1
Joel Tan
2y
(a) It's definitely fairly arbitrary, but the way I find it useful to think about it is that causes are problems, and you can break them down into:
* High-level cause area: The broadest possible classification, like (i) problems that primarily affect humans in the here and now; (ii) problems that affect non-human animals; (iii) problems that primarily affect humans in the long run; and (iv) meta problems to do with EA itself.
* Cause area: High-level cause domains (e.g. neartermist human problems) can then be broken down into various intermediate-level cause areas (e.g. global disease and poverty -> global health -> communicable diseases -> vector-borne diseases -> mosquito-borne diseases) until they reach the narrowest, individual cause level.
* Cause: At the bottom, we have problems that are defined in the most narrow way possible (e.g. malaria).

In terms of what level cause prioritization research should focus on - I'm not sure if there's an optimal level to always focus on. On the one hand, going narrow makes the actual research easier; on the other, you increase the amount of time needed to explore the search space, and also risk missing out on cross-cause solutions (e.g. vaccines for fungal diseases in general and not just, say, candidiasis).

(b) I think Michael Plant's thesis had a good framing of the issue, and at the risk of summarizing his work poorly, I think the main point is that if causes are problems then interventions are solutions, and since we ultimately care about solving problems in a way that does the most good, we can't really do cause prioritization research without also doing intervention evaluation. The real challenge is identifying which solutions are the most effective, since at the shallow research stage we don't have the time to look into everything. I can't say I have a good answer to this challenge, but in practice I would just briefly research what causes there are, and choose what superficially seems like the most effective. On t

Defenders of objective list theories might object to the previous two monistic theories on the grounds that they are naively simplistic in holding that well-being can be reduced to a single element: life is far more complicated than that (Fletcher, 2013). 

I don't see how this objection makes sense. A desire (or preference) account of wellbeing effectively means that wellbeing is about maximising a very long, potentially infinite, list of values. It's objective list theory that over-simplifies wellbeing by reducing it to a handful of values. 

3
finm
2y
That's a good point. It is the case that preferences can be about an indefinite number of things. But I suppose there is still a sense in which a preference satisfaction account is monistic, namely in essentially valuing only the satisfaction of preferences (whatever they are about); and there is no equivalent sense in which objective list theories (with more than one item) are monistic. Also note that objective list theories can contain something like the satisfaction of preferences, and as such can be at least as complex and ecumenical as preference satisfaction views. 

+1 to this. I've been struggling to figure out what seems wrong with every account of wellbeing and every form of utilitarianism I'd come across so far, and the answer was the lack of this account of wellbeing.

Preference utilitarianism, in which a ubiquitous preference is to have quality subjective experiences, and in which the quality of subjective experience is understood in terms of tranquilism, is by far the most accurate-seeming account of wellbeing I've come across so far

Hi there! Is there anywhere you can direct me to that makes the case that constant replacement occurs? In what sense do we stop existing and get replaced by a new person each moment? What is your reason for believing this? This is stated in the post but not justified anywhere. Apologies if I have missed it somewhere. I also tried googling 'constant replacement', 'constant replacement self', 'constant replacement identity' etc. and couldn't find more on this.

2
Holden Karnofsky
2y
I didn't make a claim that constant replacement occurs "empirically." As far as I can tell, it's not possible to empirically test whether it does or not. I think we are left deciding whether we choose to think of ourselves as being constantly replaced, or not - either choice won't contradict any empirical observations. My post was pointing out that if one does choose to think of things that way, a lot of other paradoxes seem to go away.

Thank you for your response! Makes sense. I'm not 100% convinced on the last point, but a few of your articles and 80k podcast appearances have definitely shifted me from thinking that E2G is unambiguously the best way for me to maximise the amount of near-term suffering I can abate, to thinking that direct work is a real contender. So thanks!!

The link to "Why do so few EAs and Rationalists have children?" is broken and I can't find it online but am keen to read it. Does anyone know where to find it? Thanks

2
Milan_Griffes
1y
I pulled it down for a while, and just reposted it. 
4
Luke Freeman
3y
Wayback machine to the rescue: http://web.archive.org/web/20210422032939/https://forum.effectivealtruism.org/posts/8G3p8rLb3cYP4fSSk/why-do-so-few-eas-and-rationalists-have-children 

Hi there! 

I'm a bit confused about the claim that the bottleneck is ways to deploy funding rather than funding itself. 

In global poverty and health cause areas for example, there are highly scalable EA-endorsed interventions like insecticide treated bed nets, deworming and cash transfers, and there are still plenty of people with malaria, children to deworm, and folks below the poverty line who could receive cash transfers. As far as I'm aware, AMF, Deworm the World / SCI and GiveDirectly could deploy more funds, and to the extent that they neede... (read more)

5
Benjamin_Todd
3y
Hi Aidan, the short answer is that global poverty seems the most funding constrained of the EA causes. The skill bottlenecks are most severe in longtermism and meta - e.g. at the top of the 'implications' section I said:

That said, I still think global poverty is 'talent constrained' in the sense that:
* If you can design something that's several-fold more cost-effective than GiveDirectly and moderately scalable, you have a good shot of getting a lot of funding. Global poverty is only highly funding constrained at the GiveDirectly level of cost-effectiveness.
* I think people can often have a greater impact on global poverty via research, working at top non-profits, advocacy, policy etc. rather than via earning to give.