Which effective altruism projects look disingenuous?

I'm totally not a mod, but I thought I'd highlight the "Is it true? Is it necessary? Is it kind?" test. I think it's right in general, but especially important here. The Forum team seems to have listed basically this too: "Writing that is accurate, kind, and relevant to the discussion at hand."

I'm also excited to highlight another piece of their guidance: "When you disagree with someone, approach it with curiosity: try to work out why they think what they think, and what you can learn from each other." On this:

  • Figuring out what someone thinks usually involves talking to them. If posting here is the first time someone has heard of your concern, that might not be a very good way of resolving the disagreement.
  • Most people running a project in the community are basically trying to do good. It sounds obvious, but having a pretty strong prior on disagreements being in good faith seems wise here.
The Risk of Concentrating Wealth in a Single Asset

I think this is the best intro to investing for altruists that I've seen published. The investment concepts it covers are the most important ones, and the application to altruists seems right.

(For context: I used to work as a trader, which is somewhat but not very relevant, and have thought about this kind of thing a bit.)

GiveDirectly plans a cash transfer response to COVID-19 in US

I would guess that the decision of which GiveDirectly programme to support† is dominated by the principle you noted, of

the dollar going further overseas.

Maybe GiveDirectly will, in this case, be able to serve people in the US who are in comparable need to people in extreme poverty. That seems unlikely to me, but it seems like the main thing to figure out. I think your 'criteria' question is most relevant to checking this.

† Of course, I think the most important decision tends to be deciding which problem you aim to help solve, which would precede the question of whether and which cash transfers to fund.

GiveDirectly plans a cash transfer response to COVID-19 in US

The donation page and mailing list update loosely suggest that donations are project-specific by default. Likewise, GiveWell says:

GiveDirectly has told us that donations driven by GiveWell's recommendation are used for standard cash transfers (other than some grant funding from Good Ventures and cases where donors have specified a different use of the funds).

(See the donation page for what the alternatives to standard cash transfers are.)

If funding for different GiveDirectly projects is sufficiently separate, your donation would pretty much just increase the budgets of the programmes you wish to support, perhaps especially if you give via GiveWell. If I were considering giving to GiveDirectly, I would want to look into this a bit more.

A small observation about the value of having kids

For the record, I wouldn't describe having children to 'impart positive values and competence to their descendants' as a 'common thought' in effective altruism, at least any time recently.

I've been involved in the community in London for three years and in Berkeley for a year, and don't recall ever having an in-person conversation about having children to promote values etc. I've seen it discussed maybe twice on the internet over those years.

--

Additionally: This seems like an ok state of affairs to me. Having children is a huge commitment (a significant fraction of a life's work). Having children is also a major part of many people's life goals (worth the huge commitment). Compared to those factors, it seems kind of implausible even in the best case that the effects you mention would be decisive.

Then: If one can determine a priori that these effects will rarely affect the decision of whether to have children, the value of information as discussed in this piece is small.

Assumptions about the far future and cause priority

In the '2% RGDP growth' view, the plateau is already here, since exponential RGDP growth probably implies subexponential utility growth. (I reckon this is a good example of the confusion caused by using 'plateau' to mean 'subexponential' :) )
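
(To spell that out with a quick sketch, assuming for illustration a log-utility model u = log(RGDP): if RGDP grows at a constant rate g, then

\[ \text{RGDP}(t) = Y_0 e^{g t} \quad\Longrightarrow\quad u(t) = \log \text{RGDP}(t) = \log Y_0 + g t , \]

so utility grows only linearly in time, which is subexponential.)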

In the 'accelerating view', it seems that whether there is exponential utility growth in the long term comes down to the same intuitions about whether things keep accelerating forever that are discussed in other threads.

Assumptions about the far future and cause priority

Thanks!

In my understanding, [a confident focus on extinction risk] relies crucially on the assumption that the utility of the future cannot have exponential growth in the long term

I wanted to say thanks for spelling that out. It seems that this implicitly underlies some important disagreements. By contrast, I think this addition is somewhat counterproductive:

and will instead essentially reach a plateau.

The idea of a plateau brings to mind images of sub-linear growth, but all that is required is sub-exponential growth, a much weaker claim. For example, quadratic growth is sub-exponential, yet it never levels off. I think the word 'plateau' will cause confusion.

I also appreciated that the piece is consistently accurate. As I wrote this comment, there were several times when I considered writing some response, then saw that the piece already had a caveat for exactly the problem I was going to point out, or a footnote explaining what I was confused about.

A particular kind of accuracy is representing the views of others well. I don't think the piece is always as charitable as it could be, but details like footnote 15 make it much easier to understand what exactly other people's views are. Also, the simple absence of gross mischaracterisations of other people's views made this piece much more useful to me than many critiques.

Here are a few thoughts on how the model or framing could be more useful:

'Growth rate'

The concept of a 'growth rate' seems useful in many contexts. However, applying the concept to a long-run process locks the model of the process into the framework of an exponential curve, because only exponential curves have a meaningful long-run growth rate (as defined in this piece). The position that utility will grow like an exponential is just one of many possibilities. As such, it seems preferable to simply talk directly in terms of the shape of long-run utility.
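
To make this concrete, here is a sketch on one standard definition (the piece's exact definition may differ): if the growth rate of utility converges to a positive constant, the trajectory is forced to be exponential up to subexponential corrections,

\[ \lim_{t\to\infty} \frac{u'(t)}{u(t)} = g > 0 \quad\Longrightarrow\quad u(t) = e^{(g + o(1)) t} , \]

whereas polynomial or logistic trajectories simply get assigned a long-run growth rate of zero. Asking for 'the growth rate' therefore collapses every sub-exponential shape into a single bucket.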

Model decomposition

When discussing the shape of long-run utility, it might be easier to decompose total utility into population size and utility per capita. In particular, the 'utility = log(GDP)' model is really 'in a perfectly equal world, utility per capita = log(GDP per capita)'. That is, in a perfectly equal world, utility = population size × log(GDP per capita).[1]

For example, this resolves the objection that

if we duplicate our world and create an identical copy of it, I would find it bizarre if our utility function only increases by a constant amount, and find it more reasonable if it is multiplied by some factor.

The proposed duplication doubles population size while keeping utility per capita fixed, so it is a doubling[2] of utility in a model of this form, as expected.
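
As a sanity check, here is that calculation in the decomposed model, writing N for population and y for GDP per capita:

\[ U = N \log y \;\xrightarrow{\;N \to 2N,\; y \text{ fixed}\;}\; U' = 2N \log y = 2U , \]

so duplication doubles utility, while raising income per capita with population fixed still only adds logarithmically.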

More broadly, I suspect that the feasibility of ways to gain at-least-exponentially greater resources over time (analogous to population size, e.g. baby-universes, reversible computation[3]) and ways to use those resources at-least-exponentially better (analogous to utility per capita, no known proposals?) might be debated quite separately.

How things relate to utility

Where I disagreed or thought the piece was less clear, it was usually because something seemed at risk of being confused for utility. For example, explosive growth in 'the space of possible patterns of matter we can potentially explore' is used as an argument for possible greater-than-exponential growth in utility, but the connection between these two things seems tenuous. Sharpening the argument there could make it more convincing.

More broadly, I can imagine any concrete proposal for how utility per capita might be able to rise exponentially over very long timescales being much more compelling for taking the idea seriously. For example, if the Christiano reversible computation piece Max Daniel links to turns out to be accurate, that naively seems more compelling.

Switching costs

My take is that these parts don't get at the heart of any disagreements.

It already seems fairly common that, when faced with two approaches which look optimal under different answers to intractable questions, Effective Altruism-related teams and communities take both approaches simultaneously. For example, this is ongoing at the level of cause prioritisation and in how the AI alignment community works on multiple agendas simultaneously. It seems that the true disagreements are mostly around whether or not growth interventions are sufficiently plausible to add to the portfolio, rather than whether diversification can be valuable.

The piece also ties some concerns about community health to switching costs. I particularly agree that we would not want to lose informed critics. However, similarly to the above, I don't think this is a real point of disagreement. The piece discusses together the risk of being 'surrounded by people who think that what I intend to do is of negligible importance' and the risk of people 'being reminded that their work is of negligible importance'. This conflates what people believe with how they treat those around them, which seem to me largely independent problems. It seems fairly clear that we should attempt to form accurate beliefs about what is best, and simultaneously be kind and supportive to other people trying to help others using evidence and reason.

---

[1] The standard log model is surely wrong but the point stands with any decomposition into population size multiplied by a function of GDP per capita.

[2] I think the part about creating identical copies is not the main point of the thought experiment and would be better separated out (by stipulating that a very similar but not identical population is created). However, I guess that in the case that we are actually creating identical people we can handle how much extra moral relevance we think this creates through the population factor.

[3] I guess it might be worth making super clear that these are hypothetical examples rather than things for which I have views on whether they are real.

Updated Climate Change Problem Profile

I'm curious to know what you think the difference is. Both problems require greenhouse gas emissions to be halted.

I agree that both mainline and extreme scenarios are helped by reducing greenhouse gas emissions, but there are other things one can do about climate change, and the most effective actions might turn out to be things which are specific to either mainline or extreme risks. To take examples from that link:

  • Developing drought-resistant crops could mitigate some of the worst effects of mainline scenarios, but might help little in extreme scenarios.
  • Attempting to artificially reverse climate change may be a last resort for extreme scenarios, but may be too risky to be worthwhile for mainline scenarios.

For the avoidance of doubt, I think that my point about mainline and extreme risks appealing to different worldviews is sufficient reason to separate the analyses even if the interventions ended up looking similar.

if you have two problems who require $100 or $200 of total funding to solve completely, if they both have $50 of funding today, they are not equally neglected

Yep, you could use the word 'neglected' that way, but I stand by my comment that if you do that without also modifying your definition of 'scale' or 'solvability', the three factors no longer add up to a cost-effectiveness heuristic. That is, if you formalise what you mean by neglectedness and insert it into the formula here without changing anything else, the formula will no longer cancel out to 'good done / extra person or $'.
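
To spell out the cancellation, here is a sketch using roughly the 80,000 Hours definitions (my paraphrase, not their exact wording):

\[
\underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{scale}}
\times
\underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{solvability}}
\times
\underbrace{\frac{\%\text{ increase in resources}}{\text{extra person or \$}}}_{\text{neglectedness}}
=
\frac{\text{good done}}{\text{extra person or \$}} .
\]

Swap the neglectedness factor for something like 'current funding relative to total funding required' and the chain no longer telescopes.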

Updated Climate Change Problem Profile

Thanks for this. I found it interesting to think about. Here are my main comments.

Mainline and extreme risks

I think it would be better to analyse mainline risks and extreme risks separately.

  • Depending on whether or not you put substantial weight on future people, one type of risk may be much more important than the other. The extreme risks appear to pose a much larger existential threat than the mainline risks, so if you value future generations the extreme risks may be much more important to focus on. The opposite may be true for people who apply high pure time discounting. (Relatedly, on most worldviews, the 'scale' factor will be materially different between the two.)
  • The ideal responses to mainline and extreme risks appear to be different. (Relatedly, the 'solvability' of those responses may differ, as might the amount of resources that are already committed to the relevant kinds of responses.)
  • The methodologies which are useful are different. Efforts to understand and act on mainline risks are amenable to standard economic approaches, while extreme risk analysis requires substantive judgements about empirical and model uncertainty.

Overall, putting the two together seems to make the analysis less clear, not more.

More expensive things are worse

Firstly, this approach will undervalue capital intensive causes where the total required investment is so large that $10 billion/year may still represent underfunding. A better model would be one which examined the total funding required to solve the problem compared to the current level of funding.

The framework 80k uses is designed to add up to a cost-effectiveness heuristic. Adjusting this by giving more expensive things higher neglectedness scores in effect takes the 'cost' out of the 'cost-effectiveness analysis'. Using a completely different framework would be fine, but making this adjustment alone causes one to depart from any notion of good done per effort put in.

If I put more time into this, I would focus on the solvability part

In your initial comments on solvability, you give a concrete set of interventions which you say would cost approximately a certain amount and achieve approximately a certain amount. If someone were to analyse these in detail, this could be the basis for a cost-effectiveness calculation. Of course, I don't know enough to say whether the Project Drawdown analysis you reference is accurate. Other people looking into this might want to focus on that since it seems crucial to the bottom line.

Qualified 'need's

When you just say 'we need to do Y', this seems to be sort of assuming the conclusion, adding little to my understanding of which actions produce the most impact. For example:

All of these solutions, and more, need to be rolled out as quickly as possible at global scale.

I found it much more helpful when you said 'if we want to achieve X, we need to do Y', improving my understanding of what actions lead to what effects. For example:

the latest UN press release (2019-10-23) states that nations need to increase their targets fivefold to meet the goal of limiting warming to 1.5C.

This part of my comment might sound like a nitpick, but I think attention to this kind of thing can make for better analysis and better communication.

All personal views only, of course.
