Wiki Contributions


An evaluation of Mind Ease, an anti-anxiety app

I'm really pleased to see this: I have been wondering how one would do an EA-minded evaluation of the cost-effectiveness of a start-up, one that compares it head to head with interventions like AMF. I'm particularly pleased to see an analysis of a mental health product.*

I only have one comment. You say:

The promise of mobileHealth (mHealth) is that at scale apps often have ‘zero marginal cost’ per user (much less than $12.50) and so plausibly are very cost-effective

It doesn't seem quite right that tech products have zero marginal cost. Shouldn't one include the cost of acquiring (and supporting?) a user, e.g. through advertising? This cost would need to be lower than $12.50 per user, given your other assumptions. I have no idea what typical user acquisition costs are, or whether $12.50 is high or low.
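To make the worry concrete, here's a minimal back-of-the-envelope sketch. All the function names and dollar figures (other than the $12.50 benchmark mentioned above) are hypothetical assumptions for illustration, not figures from the post:

```python
# Hypothetical break-even check: for the app to be cost-competitive with
# a $12.50-per-user benchmark, its *full* marginal cost per user --
# acquisition plus support, not just serving the software -- must come
# in below that benchmark. All example numbers below are made up.

def marginal_cost_per_user(acquisition_cost: float,
                           support_cost: float) -> float:
    """Total marginal cost of serving one additional user."""
    return acquisition_cost + support_cost

def beats_benchmark(acquisition_cost: float,
                    support_cost: float,
                    benchmark: float = 12.50) -> bool:
    """True if the app's marginal cost undercuts the comparison benchmark."""
    return marginal_cost_per_user(acquisition_cost, support_cost) < benchmark

# Illustrative scenarios (invented numbers):
print(beats_benchmark(acquisition_cost=3.00, support_cost=1.50))   # cheap ads
print(beats_benchmark(acquisition_cost=15.00, support_cost=2.00))  # pricey ads
```

The point of the sketch is just that "zero marginal cost" stops being true once acquisition spend is counted: the second scenario fails the benchmark even though the software itself costs nothing extra to distribute.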

*(Semi-obligatory disclaimer: Peter Brietbart, MindEase's CEO, is the chair of the board of trustees for HLI, the organisation I run)

Can money buy happiness? A review of new data

Uhh... that shouldn't happen from just re-plotting the same data. In fact, how is it that in the original graph, there is an increase from $400,000 to $620,000, but in the new linear axis graph, there is a decrease?

So, there was a discrepancy between the data provided for the paper and the graph in the paper itself. The graph plotted above used the data provided.  I'm not sure what else to say without contacting the journal itself.

this seems to imply that rich people shouldn't get more money because it barely makes a difference, but this also applies to poor people as well, casting doubt on whether we should bother giving money away.

I don't follow this. The claim is that money makes less of a difference than one might expect, not that it makes no difference. Obviously, there are reasons for and against working at, say, Goldman Sachs besides the salary. It does follow that, if your receiving money makes less of a difference than you would expect, then your giving it to other people, and their receiving it, will also make a smaller-than-anticipated difference. But, of course, you could do something else with your money that could be more effective than giving it away as cash - bednets, deworming, therapy, etc.

US bill limiting patient philanthropy?

I also know almost nothing about US tax law. Call me a cynic but it seems plausible that lots (nearly all?) of the people putting their money into foundations and not spending it are doing so for tax reasons, rather than because they have a sincere concern for the longterm future.

As a communications point, this does make me wonder whether longtermist philanthropists who hypothetically campaigned for such a 'loophole' to remain open would, by extension, be seen as unscrupulous tax dodgers.

Can "pride" be used as a subjective measure like "happiness"?

So, if you look at OECD (2013, Annex A) there's a few example questions about subjective well-being. The eudaimonic questions are sort of in your area (see p 251), e.g. "I lead a purposeful and meaningful life", and "I am confident and capable in the activities that are important to me".

You might also be interested in Kahneman's(?) distinctions between decision, remembered, and experienced utility. It sounds like your question taps into "how will I, on reflection, feel about this decision?" and that you're sampling your intuitions about how you judge your life.

[Podcast] Suggest a question for Jeffrey Sachs

He may well have been asked this before, but I'd want to know what, if anything, he thinks would be lost by replacing the SDGs - at least insofar as they apply to current humans - with a measure of happiness.

Also, if/how he thinks about intergenerational trade-offs.

EA Infrastructure Fund: May 2021 grant recommendations

Just a half-formed thought on how something could be "meta but not longtermist", because I thought that was a conceptually interesting issue to unpick.

I suppose one could distinguish between meaning "meta" as (1) does non-object level work or (2) benefits more than one value-bearer group, where the classic, not-quite-mutually-exclusive three options for value-bearer groups are (1) near-term humans, (2) animals, and (3) far future lives.

If one is thinking the former way, something is meta to the degree it does non-object-level vs object-level work (I'm not going to define these), regardless of what domain it works in. In this sense, 'meta' and (e.g.) 'longtermist' are independent: you could be one, the other, both, or neither. Hence, if you did non-object-level work that wasn't focused on the long term, you would be meta but not longtermist (although it might be more natural to say "meta and not longtermist", as there is no tension between them).

If one is thinking the latter way, one might say that an org is less "meta", and more "non-meta", the greater the fraction of its resources it intentionally spends to benefit just one value-bearer group. Here "meta" and "non-meta" are mutually exclusive and a matter of degree. A "non-meta" org is one that spends, say, more than 50% of its resources aimed at one group. The upshot is that, on this framework, Animal Advocacy Careers and 80k are not meta, whereas, say, GWWC is meta. Thinking this way, something is meta but not longtermist if it primarily focuses on non-longtermist work.

(In both cases, we will run into familiar issues about making precise what an agent 'focuses on' or 'intends'.)

EA Infrastructure Fund: May 2021 grant recommendations

In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them

Thanks for this reply, which I found reassuring. 

FWIW, I think this example is pretty unrealistic, as I don't think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination

Okay, this is interesting and helpful to know. I'm trying to put my finger on the source of what seems to be a perspectival difference, and I wonder if it relates to the extent to which fund managers should be trying to carry out donors' wishes vs allocating the money by their own lights of what's best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former, not least because of long-term concerns about reputation, integrity, and people simply taking their money elsewhere.

To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.

I suspect you would agree with this in principle: you wouldn't want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. the Animal Welfare Fund recommending something that only benefitted humans, even if they thought it was more cost-effective than something animal-focused.

However, I imagine you would disagree that this is a problem in practice, because donors expect there to be some overlap between funds and, in any case, fund managers will not recommend things wildly outside their fund's remit. (I am not claiming this is a problem in practice; my concern is that it may become one, and I want to avoid that.)

I haven't thought lots about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive - this gives donors greater choice and minimises worries about permissible fund allocation. 

EA Infrastructure Fund: May 2021 grant recommendations

Hello Michelle. Thanks for replying, but I was hoping you would engage more with the substance of my question - your comment doesn't really give me any more information than I already had about what to expect.

Let me try again with a more specific case. Suppose you are choosing between projects A and B - perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF - the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good.

What would you do? I can't think of any other information you would need.

FWIW, I think you must pick A. I think we can assume donors expect the funds not to overlap - otherwise, why even have different ones? - and that they don't want their money to go to another fund's area - otherwise, that's where they would have put it. Hence, picking B would be tantamount to a breach of trust.

(By the same token, if I give you £50, ask you to put it in the collection box for a guide dog charity, and you agree, I don't think you should send the money to AMF, even if you think AMF is better. If you decide you want to spend my money somewhere else from what we agreed to, you should tell me and offer to return the money.)

My current impressions on career choice for longtermists

Thanks for writing this up! I found the overall perspective very helpful, as well as lots of the specifics, particularly (1) what it means to be on track and (2) the emphasis on the importance of 'personal fit' for an aptitude (vs the view there being a single best thing).

Two comments. First, I'm a bit surprised that you characterised this as being about career choice for longtermists. It seems the first five aptitudes are just as relevant for non-longtermist do-gooding, while the last two - software engineering and information security - are more specific to longtermism. Hence, this could have been framed as your impressions on career choice for effective altruists, in which you set out the first five aptitudes and say they apply broadly, then note the two more that are particular to longtermism.

In the spirit of being a vocal customer, I would have preferred this framing. I am enthusiastic about effective altruism, but ambivalent about longtermism - I'm glad some people focus on it, but it's not what I prioritise - and I found the narrower framing somewhat unwelcoming, as if non-longtermists aren't worth considering. (Cf. if you had said this was career advice for women even though gender was only pertinent to a few parts.)

Second, one aptitude that did seem conspicuous by its absence was for-profit entrepreneurship - the section on the "entrepreneur" aptitude only referred to setting up longtermist organisations. After all, the Open Philanthropy Project, along with much of the rest of the effective altruist world, only exists because people became very wealthy and then gave their money away. I'm wondering if you think it is sufficiently easy to persuade (prospectively) wealthy people of effective altruism(/longtermism) that becoming wealthy isn't something community members should focus on; I have some sympathy with this view, but note you didn't state it here. 

EA Infrastructure Fund: May 2021 grant recommendations

Yes, I read that and raised this issue privately with Jonas.
