If you click preview episode on that link you get the full episode. I also get the whole thing on my podcast feed (PocketCasts, not Spotify). Perhaps it's a Spotify issue?
Sorry, I edited while you posted. I see the US at 1.44% × $27tn ≈ $400bn, which is the vast majority of global charitable giving once I add in the rest of the countries Wikipedia lists and interpolate by region for other biggish economies
Our friends estimate the cost at about $258 billion dollars to end extreme poverty for a year, and point out that this is a small portion of yearly philanthropic spending or [...]
Is that true?
Just ballparking this based on fractions of GDP given to charitable organisations (big overestimate imo), I get global giving at ~$500bn/year. So I don't believe this is true.
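For transparency, here's roughly the shape of that ballpark (a sketch only - the GDP figures and giving percentages below are illustrative stand-ins, not the actual Wikipedia numbers):

```python
# Sketch of the giving ballpark above. GDP (in $tn) and giving (% of GDP)
# are illustrative stand-ins, not the actual Wikipedia figures.
giving_share_of_gdp = {
    "US": (27.0, 1.44),                            # ~1.44% of ~$27tn -> ~$390bn
    "UK": (3.1, 0.5),
    "other big economies (interpolated)": (70.0, 0.14),
}

total_bn = sum(gdp_tn * 1000 * pct / 100
               for gdp_tn, pct in giving_share_of_gdp.values())
print(f"Global giving ballpark: ~${total_bn:.0f}bn/year")  # lands near ~$500bn
```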
Now this is not… great, and certainly quite different from the data by tenthkrige. I'm pretty sure this isn't a bug in my implementation or due to the switch from odds to log-odds, but a deeper problem with the method of rounding for perturbation.
It's not particularly my place to discuss this, but when I replicated his plots I also got very different results, and since then he shared his code with me and I discovered a bug in it.
Basically it simulates the possible outcomes of all the other bets you have open.
How can I do that without knowing my probabilities for all the other bets? (Or have I missed something on how it works?)
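For what it's worth, here's a minimal sketch of what I understand the simulation to be doing - note that, per my question, it only works if you plug in *some* probability for each other open bet (your own credence, or a market-implied one):

```python
import random

# Minimal sketch: simulate the joint outcome of all other open bets.
# Each bet is (payout_if_win, stake, p_win); p_win is the assumed
# probability, which has to come from somewhere (you, or the market).
other_bets = [(2.0, 1.0, 0.6), (5.0, 1.0, 0.25), (1.5, 1.0, 0.8)]

def simulate_bankrolls(bankroll: float, n_sims: int = 10_000) -> list[float]:
    outcomes = []
    for _ in range(n_sims):
        wealth = bankroll
        for payout, stake, p_win in other_bets:
            wealth += (payout - stake) if random.random() < p_win else -stake
        outcomes.append(wealth)
    return outcomes

sims = simulate_bankrolls(bankroll=10.0)
print(sum(sims) / len(sims))  # mean simulated bankroll across outcomes
```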
Less concave = more risk tolerant, no?
Argh, yes. I meant more concave.
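(For the record, the textbook CRRA formulation pins down the sign convention:)

```latex
% CRRA utility: higher \gamma = more concave u = less risk tolerant,
% so "less concave = more risk tolerant" has the sign right.
u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad
\gamma = -\frac{c\,u''(c)}{u'(c)}, \qquad
\text{relative risk tolerance} = \frac{1}{\gamma}
```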
The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don't know if it's more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?
No, it doesn't make sense. "We don't know the curvature, ergo it could be anything" is not convincing. What you seem to think is "concrete" seems entirely arbitrary to me.
As Michael Dickens notes, and as I say in the introduction, I think the post argues on balance against adopting as much financial risk tolerance as existing EA discourse tends to recommend.
I appreciate you think that, and I agree that Michael has said he agrees, but I don't understand why either of you thinks that. I went point-by-point through your conclusion and it seems clear to me the balance is on more risk-taking. I don't see another way to be convinced other than putting the arguments you put forward into buckets, weighting them, and adding them up...
but these arguments are not as strong as people claim, so we shouldn't say EAs should have high risk tolerance
I don't get the same impression from reading the post, especially in light of the conclusions, which even without my adjustments seem in favour of taking more risk.
Can you elaborate on why you believe this? Are you talking specifically about global poverty interventions, or (EA-relevant) philanthropy in general? (I can see the case for global poverty[1], I'm not so sure about other causes.)
I was mostly thinking global poverty and health, yes. I think it's still probably true for other EA-relevant philanthropy, but I don't think I can claim that here.
...I'm also not clear on why you believe this, can you explain? (FWIW the claim in the parenthetical is probably false: on a cross-section across countries, GDP growth and equ
Just in the first few lines, we have a nitpick about the grammar of the title
I actually think this is substantially more than a nitpick. I doubt people are reading the whole of a 61(!) minute article and spotting that the article doesn't support the title.
I'll grant the second point: I found critiquing this article extremely difficult and frustrating due to its structure. I think the EA Forum would be much better if people wrote shorter articles, and it disappoints me that people seem to upvote without reading
I think the article does support the title. By my read, the post is arguing:
(I read "much" in the title as a synonym for "high", I personally would have used the word "high" but I have no problem with the title.)
I agree that short posts are generally better; I did find this post long (I reviewed it before publication and it took me abo...
Wow - this is a long post, and it's difficult for me to point out exactly which bits I disagree with and which bits I agree with given its structure. I'm honestly surprised it's so popular.
I also don't really understand the title. "Against much financial risk tolerance".
Starting with your conclusions:
...Justifying a riskier portfolio
- The “small fish in a big pond + idiosyncratic
We have a much flatter utility curve for consumption (ignoring world-state) vs individual investors (using GiveWell's #s, or cause variety). [Strong]
Can you elaborate on why you believe this? Are you talking specifically about global poverty interventions, or (EA-relevant) philanthropy in general? (I can see the case for global poverty[1], I'm not so sure about other causes.)
...We have a much lower correlation between our utility and risk asset returns. (Typically equities are correlated with developed market economies and not natural disasters) [Strong]
Tyler Cowen on the effect of AGI on real rates:
...In standard models, a big dose of AI boosts productivity, which in turn boosts the return on capital, which then raises real interest rates.
I am less convinced. For one thing, I believe most of the gains from truly fundamental innovations are not captured by capital. Was Gutenberg a billionaire? The more fundamental the innovation, the more the import of the core idea can spread to many things and to many sectors.
Furthermore, over the centuries real rates of return seem to be falling, even th
There's a third reason, which I expect is the biggest contributor: the number of readers of the post/comment.
I summarised a little how various organisations in the EA space aggregate QALYs over time here.
What I've been unable to find anywhere in the literature is how many QALYs a typical human life equates to. If I save a newborn from dying, is that worth 70 QALYs (~global life expectancy), 50 QALYs (not all of life is lived in good health), or some other value?
I think this post by Open Phil is probably related to what you're asking for, and I would also recommend the GiveWell post on the same topic.
I think this is still generally seen as a bit of an open ques...
How do you square:
The order was: I learned about one situation from a third party, then learned the situation described in TIME, then learned of another situation because I asked the woman on a hunch, then learned the last case from Owen.
with
No other women raised complaints about him to me, but I learned (in some cases from him) of a couple of other situations where his interactions with women in EA were questionable.
Emphasis mine. (Highlighting that your first statement implies he informed you of multiple cases, while this statement implies he only informed you of one.)
In the first case, I initially heard about the situation from a third party, but nearly all the information I knew came from Owen. (I asked the woman if she had concerns about the situation that she wanted to discuss, and I didn’t hear back.)
Please could someone put together a slightly more fleshed-out timeline of who knew what and when? The best I can tell is:
On Feb 3 I heard from Owen, I discussed the situation with Nicole, I informed Owen I'd be telling the boards, and I told the boards. I told Chana the following morning.
I know I'm probably being dense here, but would it be possible for you to share what the other possibilities are?
Edit: I guess there's "The person doesn't have the role, but we are bound by some kind of confidentiality we agreed when removing them from post"
Just bumping this in case you've forgotten. At the moment there only seem to be two possibilities: 1/ you forgot about this comment or 2/ the person does still have a role "picking out promising students" as Peter asked. I'm currently assuming it's 2, and I imagine other people are too.
We are working actively on this, but it is going to take more time. As a general point (not trying to comment on this situation in particular), those are not the only two possibilities, and I think it's really crucial to be able to hold on to that in contexts where there are issues of legality, confidentiality and lots of imperfect information flow.
Edit note: I at first had "local point" instead of "general point", which I meant in a mathy way, like the local logic of the situation point rather than speaking to any of the context, but looking back I don't think that was very clear so I've edited to clarify my meaning.
IIRC, there is access to the histogram, which tells you how many people predicted each percentage. I then sampled k predictors from that distribution.
"k predictors" is the number of samples I was looking at
">N predictors" was the total number of people who predicted on a given question
What does SD stand for? Usually I would expect standard deviation?
Yes, that's exactly right. The HLI methodology consists of pooling together a bunch of different studies' effect sizes (measured in standard deviations) and then converting those standard deviations into WELLBYs (by multiplying by a number ~2).
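As a toy version of that conversion (the pooled effect size and the exact multiplier are placeholders, not HLI's actual figures):

```python
# Toy version of the SD -> WELLBY conversion described above.
pooled_effect_sd = 0.5   # pooled effect size across studies, in SDs (placeholder)
sd_to_wellby = 2.0       # the "~2" conversion factor mentioned above

print(f"{pooled_effect_sd} SDs -> ~{pooled_effect_sd * sd_to_wellby} WELLBYs per person-year")
```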
No bet from me on the Ozler tria
Fair enough - I'm open to betting on this with anyone* fwiw.

*anyone who hasn't already seen the results or been involved in the trial, ofc
Any intervention is extremely sensitive to implementation details, whether deworming or nets or psychotherapy.
Yes, I'm sorry if my comment appeared to dismiss this fact as I do strongly agree with this.
Maybe some interventions are easier to implement than others, and there might be more variance in the effectiveness of psychotherapy compared with net distribution (although I doubt that, I would guess less variance than nets) but all are very sensitive to implementation details.
This is pretty much my point
...I'd be intereste
My analysis of StrongMinds is based on a meta-analysis of 39 RCTs of group psychotherapy in low-income countries. I didn't rely on StrongMinds' own evidence alone; I incorporated the broader evidence base from other similar interventions too. This strikes me, in a Bayesian sense, as the sensible thing to do.
I agree, but as we have already discussed offline, I disagree with some of the steps in your meta-analyses, and think we should be using effect sizes smaller than the ones you have arrived at. I certainly didn't mean to claim in my post that...
I had seen both of those, but I didn't read either of them as commitments that HLI thinks that the neutral point is between 0 and 5.
I would guess some combination of:
I don't really have a strong opinion on any of these - macro is really hard and really uncertain. To quote a friend of mine:
...one thing is that if AGI looks something like robin hanson's EM scenario, you really don't want to owe money to anyone
or if other
I'm really confused about where any of those numbers come from for using futures. (But yes, the expected return with low leverage is not spectacular for a 2% move in rates.)
I want to suggest a bunch of caution against shorting bonds (or tips).
- The 30yr yield is 3.5%, so you make -3.5% per year from that.
- You earn the cash rate on the capital freed up from the shorts, which is 3.8% in interactive brokers.
- If you're right that the real interest rate will rise 2% over 20 years, and average duration is 20 years, then you make +40% over 20 years – roughly 2% per year.
- If you buy an ETF, maybe you lose 0.4% in fees.
So you end up with a +1.9% expected return per year.
I think the calculation you've done here is -3.5% + 3.8% + 2% - 0.4%
Th...
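For what it's worth, here's that arithmetic laid out, taking the quoted numbers at face value:

```python
# The expected-return arithmetic above, all in % per year.
short_bond_yield = -3.5    # pay away the 30yr yield on the short
cash_on_collateral = 3.8   # cash rate earned on freed-up capital (the IB figure)
rate_rise_pnl = 2.0        # +40% over 20 years if real rates rise 2% -> ~2%/yr
etf_fees = -0.4            # fee drag if implemented via an ETF

total = short_bond_yield + cash_on_collateral + rate_rise_pnl + etf_fees
print(f"{total:+.1f}%/yr")  # +1.9%/yr
```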
A 60:40 portfolio has an effective duration of ~40 years, where most of that duration comes from equities.
I'm not really sure how you get that? The duration on the bond portion is going to be ~7-10y, which would imply a ~60y duration for equities - which I think is wrong.
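To spell out where that implied figure comes from (assuming an 8.5y bond duration, the midpoint of the 7-10y range):

```python
# Back out the equity duration implied by the 60:40 claim above.
portfolio_duration = 40.0   # the claimed effective duration
w_equity, w_bond = 0.60, 0.40
bond_duration = 8.5         # assumed midpoint of the 7-10y range

implied_equity_duration = (portfolio_duration - w_bond * bond_duration) / w_equity
print(f"~{implied_equity_duration:.0f}y")  # ~61y, which seems too high
```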
...My understanding is that an important part of the reasoning for a focus on avoiding bonds is that an increase in GDP growth driven by AI is clearly negative for bonds, but has an ambiguous effect on equities (plus commodities and real estate), so overall you should hold more equities (/growth a
The highest neutral point we think is plausible is 5/10 on a 0 to 10 wellbeing scale, but we mentioned that some philosophical views would stake a claim to the feasibility of 10/10.
If you can point me to somewhere on the HLI website I can cite I will update this.
...I think you can fill out the missing cells for HLI by taking the average age of death, which for malaria is ~2 for under-5s and ~46 for over-5s. Assuming a life expectancy of 70 (what we've assumed previously for malaria deaths), that'd imply a moral weight of under-5s = (70 - 2) * (-1, 4)
Whenever I see charts like this in a financial context I twitch. We have 30 years of data for UK real rates, less for other issuers. There are ~2 non-overlapping UK data points on your second chart, where I can count at least 15(?) plotted points.
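To illustrate the overlap problem with toy numbers (not the actual chart data):

```python
# 30 years of data with a 15-year window: a rolling chart shows ~16
# points, but only 2 of them are statistically independent.
years_of_data = 30
window_years = 15

rolling_points = years_of_data - window_years + 1    # what the chart displays
independent_points = years_of_data // window_years   # what you actually have
print(rolling_points, independent_points)            # 16 vs 2
```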
...To expand a little on "this seems implausible": I feel like there is probably a mistake somewhere in the notion that anyone involved thinks that <doubling income as having a 1.3 WELLBY effect and severe depression as having a 1.3 WELLBY effect.>
The mistake might be in your interpretation of HLI's document (it does look like the 1.3 figure is a small part of some more complicated calculation regarding the economic impacts of AMF and their effect on well being, rather than intended as a headline finding about the cash to well
that would be very important as it would mean that SoGive moral weights fail some basic sanity checks
I would recommend my post here. My opinion is - yes - SoGive's moral weights do fail a basic sanity check.
1 year of averted depression is 4 income doublings
1 additional year of life (using GW life-expectancies for over 5s) is 1.95 income doublings.
i.e. SoGive would think depression is worse than death. Maybe this isn't quite a "sanity check", but I doubt many people have that moral view.
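Spelled out, using the SoGive benchmarks quoted further down (the ~51y life expectancy is backed out from the 1.95 figure and is my assumption about the GiveWell number used):

```python
# Sanity-check arithmetic using SoGive's Gold Standard benchmarks (GBP).
cost_per_life_saved = 5_000
cost_per_income_doubling = 50     # one year
cost_per_year_depression = 200    # one year of severe depression averted
life_expectancy_over_5s = 51.3    # assumed GiveWell figure, backed out from 1.95

depression_year = cost_per_year_depression / cost_per_income_doubling               # 4.0
life_year = cost_per_life_saved / cost_per_income_doubling / life_expectancy_over_5s  # ~1.95
print(depression_year, round(life_year, 2))  # a depression-year outweighs a life-year
```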
I do think all this is a somewhat separate discussion from the GWWC list
I...
I have a simple answer to this: no, it isn't.
I don't understand how that's possible. If you put 3x the weight on StrongMinds' cost-effectiveness vis-à-vis other charities, changing this must move the needle on cost-effectiveness more than anything else. It's possible to me it could have been "well into the range of gold-standard" and now it's "just gold-standard" or "silver-standard". However, if something is silver-standard, I can't see any way in which your cost-effectiveness being adjusted down by 1/3rd doesn't massively shift your rating.
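To make the intuition concrete, a toy weighted-average model (the weights and GiveDirectly multiples are made up for illustration):

```python
# Toy model: why a 1/3 downward adjustment to the dominant evidence line
# should move the headline number. All figures are made up.
w_strongminds, w_other = 0.75, 0.25      # "3x the weight" on StrongMinds' own evidence
sm_estimate, other_estimate = 9.0, 6.0   # in multiples of GiveDirectly

before = w_strongminds * sm_estimate + w_other * other_estimate
after = w_strongminds * (sm_estimate * 2 / 3) + w_other * other_estimate
print(before, after)  # 8.25 -> 6.0: a big shift in the bottom line
```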
...I'd say that the
I agree - and I started out trying to list all their approaches, but it very quickly becomes intractable in the table format. I have edited to show the full range, although I'm not sure if it's more or less helpful than before. Hopefully it does show how counter-intuitive their model can be.
I might be being a bit dim here (I don't have the time this week to do a good job of this), but I think of all the orgs evaluating StrongMinds, SoGive's moral weights are the most likely to find in favour of StrongMinds. Given that, I wonder what you'd expect to rate them at if you altered your moral weights to be more in line with FP and HLI?
...SoGive’s Gold Standard Benchmarks are:
- £5,000 per life saved
- £50 to double someone’s consumption (spending) for one year
- £200 to avert one year of severe depression
- £5 to avert the suffering of one chicken who is living in
Out of interest, what do your probabilities correspond to in terms of the outcome from the Ozler RCT? (Or is your uncertainty more in terms of what you might find when re-evaluating the entire framework?)
since we base our assessment mostly on HLI's work, and since we draw different conclusions from HLI's work than you think are reasonable, we should reassess StrongMinds on that basis. Is that right?
I'm not sure exactly what you've done, so it's hard for me to comment precisely. I'm just struggling to see how you can be confident in a "6x as effective as GD" conclusion.
what does a distribution of your beliefs over cost-effectiveness for StrongMinds look like?
So there are two sides to this:
I agree that before my post GWWC hadn't done anything wrong.
At this point I think that GWWC should be able to see that their current process for labelling top-rated charities is not optimal and they should be changing it. Once they do that I would fully expect that label to disappear.
I'm disappointed that they don't seem to agree with me, and seem to think that no immediate action is required. Obviously that says more about my powers of persuasion than them though, and I expect once they get back to work tomorrow and they actually look in more detail they c...
Hi Simon,
I'm back to work and able to reply with a bit more detail now (though also time-constrained as we have a lot of other important work to do this new year :)).
I still do not think any (immediate) action on our part is required. Let me lay out the reasons why:
(1) Our full process and criteria are explained here. As you seem to agree from your comment above, we need clear and simple rules for what is and what isn't included (incl. because we have a very small team and need to prioritize). Currently a very brief summary of these rules/the process...
As a GWWC member who often donates through the GWWC platform, I think it is great that they take a very broad brush and have lots of charities that people might see as top on the platform. I think if their list got too small they would not be able to usefully serve the GWWC donor community (or other donors) as well.
I agree, and I'm not advocating removing StrongMinds from the platform, just removing the label "Top-rated". Some examples of charities on the platform which are not top-rated include: GiveDirectly, SCI, Deworm the World, Happier Lives Institute, ...
Claude's Summary:
Here are a few key points summarizing Will MacAskill's thoughts on the FTX collapse and its impact on effective altruism (EA):
- He believes Sam Bankman-Fried did not engage in a calculated, rational fraud motivated by EA principles or long-termist considerations. Rather, it seems to have stemmed from hubris, incompetence and failure to have proper risk controls as FTX rapidly grew.
- The fraud and collapse has been hugely damaging to EA's public perception and morale within the community. However, the core ideas of using reason and evidence to