MichaelPlant

Comments

Low-Hanging (Monetary) Fruit for Wealthy EAs

Ordinary wealthy people don't care as much about getting more money because they already have a lot of it. So we should expect to be able to find overlooked methods for rich people to get richer

I'm not sure what you mean by 'ordinary' wealthy people (as opposed to 'altruistic' wealthy people?), but I'd be pretty surprised if there were overlooked methods. In my experience, (ordinary) wealthy types spend lots of time talking to other wealthy types and swapping notes on how best to do the most with their money. Because they have more money, it can also make sense for them to hire people, eg tax accountants, to help them keep it. In short, I reject the premise that most wealthy people are just not trying to make themselves wealthier and that there are therefore $20 bills on the sidewalk for wealthy people who really care about helping others.

I'm not convinced by the examples either. I assume the point about Sam Bankman-Fried is that his business didn't require as much external investment, rather than that (no offense to him) he has remarkable negotiating skills which basically all other entrepreneurs lack.

Prioritization Research for Advancing Wisdom and Intelligence

The implicit framing of this post is that, if individuals just got smarter, everything would work out much better. Which is true to some extent. But I'm concerned this perspective overlooks something important, namely that it's very often clear what should be done for the common good, yet society doesn't organise itself to do those things because many individuals don't want to - for discussion, see the recent 80k podcast on institutional economics and corruption. So I'd like to see a bit more emphasis on collective decision-making, rather than just on individuals getting smarter.

How valuable are external reviews?

Thanks for clarifying! I wonder if it would be even better if the review were done by people outside the EA community. Maybe the sympathy of belonging to the same social group, and shared distinctive assumptions (assuming they exist), make people less likely to spot errors? This is pretty speculative, but it wouldn't surprise me.

I can't immediately remember where I've seen this discussed before, but a concern I've heard raised is that it's quite hard to find people who (1) know enough about what you're doing to evaluate your work but (2) are not already in the EA world.

I see, interesting! This might be a silly idea, but what do you think about setting up a competition with a cash prize of a few thousand dollars for the person who spots an important mistake? If you managed to attract the attention of a lot of PhD students in the relevant area, you might really get a lot of competent people trying hard to find your mistakes.

Hmm. Well, I think you'd have to be quite a big and well-funded organisation to do that. Setting up and running a competition would take a lot of management time, and it wouldn't obviously be that useful (in terms of the value of information, such a competition is more valuable the worse you think your research is). I can see organisations quite reasonably concluding this wouldn't be a good staff priority vs other things. I'd be interested to know if this has happened elsewhere and how impactful it has been.

>> Maybe that would be weird for some people. I would be surprised though if the majority of people wouldn't interpret a positive expert review as a signal that your research is trustworthy (even if it's not actually a signal because you chose and paid that expert).

That's right, though people who were suspicious of your research would be unlikely to have much confidence in the assessment of someone you paid.

How valuable are external reviews?

I think my argument here holds for any other similar organisation. 
 

Gotcha

>> does it count as an independent, in-depth, expert review?

I mean, how long is a piece of string? :) The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn't necessarily delve into those or see if they had been interpreted correctly. 

Re making things public, that's a bit trickier than it sounds. Usually I'd leave a bunch of comments in a google doc as I went, which wouldn't be that easy for a reader to follow. You could ask someone to write a prose evaluation - basically like an academic journal review report - but that's quite a lot more effort and not something I've been asked to do.

At HLI, we have asked external academics to do that for us for a couple of pieces of work, and we recognise it's quite a big ask vs just leaving gdoc comments. The people we asked were gracious enough to do it, but they were basically doing us a favour, and it's not something we could keep doing (at least with those individuals). I guess one could make the reviews public - we've offered to share ours with donors, but none have asked to see them - but there's something a bit weird about it: it's like you're sending the message "you shouldn't take our word for it, but there's this academic who we've chosen and paid to evaluate us - take their word for it".

How valuable are external reviews?

I'm slightly confused by the framing here. You only mention Founders Pledge, which, to me, implies you think Founders Pledge don't get external reviews but other EA orgs do.

This doesn't seem right, because Founders Pledge do ask others for reviews: they've asked me/my team at HLI to review several of their reports (StrongMinds, Action for Happiness, psychedelics), which we've been happy to do, although we didn't necessarily get into the weeds. I assume they do this for their other reports, and this is what I expect other EA orgs do too.

Presenting: 2021 Incubated Charities (Charity Entrepreneurship)

Very well done to the incubatees! I wish you the best of luck. Two questions.

For Training For Good, did you consider teaching professional skills, eg management, to those in EA orgs? I ask rather self-interestedly and because that was conspicuous by its absence.

For CAPS, could you explain what the cost-effectiveness analysis was that led to that benefit:cost ratio? I couldn't immediately see anything explaining that on the website; sorry if I missed it!

We’re discontinuing the standout charity designation

I'll post Catherine's reply and then raise a couple of issues:
 

Thanks for your question. You’re right that we model GiveDirectly as the least cost-effective top charity on our list, and we prioritize directing funds to other top charities (e.g. through the Maximum Impact Fund). GiveDirectly is the benchmark against which we compare the cost-effectiveness of other opportunities we might fund.

As we write in the post above, standout charities were defined as those that “support programs that may be extremely cost-effective and are evidence-backed” but “we do not feel as confident in the impact of these organizations as we do in our top charities.”

Our level of confidence, rather than their estimated cost-effectiveness, is the key difference between our recommendation of GiveDirectly and standout charities.

We consider the evidence of GiveDirectly’s impact to be exceptionally strong. We’re not sure that our standout charities were less cost-effective than GiveDirectly (in fact, as we wrote, some may be extremely cost-effective), but we felt less confident in making that assessment, based on the more limited evidence in support of their impact, as well as our more limited engagement with them.

 

I don't see a justification here for keeping GiveDirectly on the list. Okay, there are charities GiveWell is 'confident' in and those it isn't, and GiveDirectly, like the other top picks, is in the first category. But this still raises the question of why to recommend GiveDirectly at all. Indeed, it's arguably more puzzling: if you think there's basically no chance A is better than B, why advocate for A? At least if you think A might be better than B, you might defend recommending A on the grounds that there's a chance: that is, someone who believes X, Y, Z might sensibly believe it's better.

The other thing that puzzles me about this response is its seemingly non-standard approach to expected value reasoning. Suppose you can do G, which has a 100% chance of doing one 'unit' of good, or H, which has a 50% chance of doing 3 'units' of good. I say you should pick H because, in expectation, it's better, even though you're not sure it will be better. 
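To spell the arithmetic out:

```latex
\mathbb{E}[G] = 1.0 \times 1 = 1 \text{ unit}, \qquad \mathbb{E}[H] = 0.5 \times 3 = 1.5 \text{ units},
```

so H is better in expectation despite the risk of achieving nothing.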

Where might having less evidence fit into this?

One approach to dealing with different levels of evidence is to discount the 'naive' expected value of the intervention, that is, the one you get from taking the evidence at face value. Why and by how much should you discount your 'naive' estimate? Well, you reduce it to what you expect you would conclude its actual expected value was if you had better information. For instance, suppose one intervention has RCTs with much smaller samples, and you know that effect sizes tend to go down when interventions use larger samples (they are harder to implement at scale, etc.). Hence, you're justified in discounting it for that reason and roughly to that extent. Once you've done this, you have the 'sophisticated' expected values. Then you do the thing with the higher 'sophisticated' expected value.
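Here's a minimal sketch of that procedure; the option names, numbers, and discount factors below are made up for illustration, not anyone's actual estimates:

```python
# Illustrative sketch only: rank options by 'sophisticated' expected value,
# i.e. the naive (face-value) expected value discounted for weaker evidence.
# All names, numbers, and discounts are hypothetical.

def sophisticated_ev(naive_ev: float, discount: float) -> float:
    """Shrink the face-value estimate towards what you expect better
    evidence (e.g. larger RCTs) would show; discount is between 0 and 1."""
    return naive_ev * discount

# (name, naive EV in 'units' of good, evidence discount)
options = [
    ("well-evidenced intervention", 1.0, 1.0),    # large RCTs; little shrinkage expected
    ("weakly-evidenced intervention", 3.0, 0.5),  # small RCTs; expect effects to shrink at scale
]

# Recommend whichever option has the higher sophisticated expected value.
best = max(options, key=lambda o: sophisticated_ev(o[1], o[2]))
print(best[0], sophisticated_ev(best[1], best[2]))
```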

Hence, I don't see why lower ('naive') cost-effectiveness should stop someone from recommending something.

We’re discontinuing the standout charity designation

This line of reasoning seems sensible to me. However, it does raise the following question: will GiveWell also stop recommending GiveDirectly, given that, by your own cost-effectiveness numbers, it's 10-20x less cost-effective than basically all your other recommendations? And, if not, why not?

I can understand the importance of having some variety of options to recommend to donors, which necessitates recommending some things that are worse than others, but 10x worse seems to leave quite a lot of value on the table. Hence, I'd be curious to hear the rationale.

Has Life Gotten Better?

Thanks for this! I don't see anything here that disagrees with my claim. I said the claim can't literally be true, even though lots of people treat it as if it were. Going from no income to $400/year also involves an infinity of doublings.

A better claim might be "given you have enough income to subsist, doubling your income causes a fixed increase in happiness". Fine, but note that's not literally the claim "doubling your income causes a fixed increase in happiness". My hope is that showing the logarithmic model isn't literally true will push us towards a more realistic model of the relationship between happiness and income.
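One way to write that amended claim down (a sketch in my own notation):

```latex
H(y) = a + b \,\log_2\!\left(\tfrac{y}{y_{\min}}\right) \quad \text{for } y \ge y_{\min},
```

where y_min is a subsistence income and b is the fixed happiness gain per doubling; the model is simply silent below y_min.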

Has Life Gotten Better?

Namely: very rough estimates suggest that we are now 100x-1000x richer than in the past, and our lives are in the range [good-ok], but generally not pure bliss or anything close to it. If we extend reasonable estimations for the effect of material circumstances on wellbeing (i.e. doubling of wealth increases satisfaction by 1 point on a 10 point scale), we should then expect past humans to have been miserable.

I don't think we should expect past humans to have been miserable. One of the key findings in the happiness literature is the so-called Easterlin Paradox, which is that (1) richer people are happier than poorer people at any given time, but (2) in aggregate, happiness doesn't increase over time as people get richer. This is usually explained by some combination of adaptation and social comparison effects.

It's also worth noting that the claim "each doubling of income increases happiness by a fixed amount (eg 1 point on a 10-point scale)" can't literally be true. If it were, anyone with any income would have maximum happiness, because there is an infinity of doublings between zero income and, well, any level of income. Research does, however, find this result is basically true across the range of incomes that people actually have.
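Formally, the literal claim corresponds to a logarithmic model (notation mine):

```latex
H(y) = a + b \log_2(y), \qquad H(y) - H(y_0) = b \,\log_2\!\left(\tfrac{y}{y_0}\right) \to \infty \ \text{ as } y_0 \to 0^{+},
```

so the implied gain from (near-)zero income to any positive income is unbounded, which is why the relationship can only hold over a realistic range of incomes.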
