richard_ngo

Former AI safety research engineer, now PhD student in philosophy of ML at Cambridge. I'm originally from New Zealand but have lived in the UK for 6 years, where I did my undergrad and master's degrees (in Computer Science, Philosophy, and Machine Learning). Blog: thinkingcomplete.blogspot.com

Sequences

EA Archives Reading List

Comments

Some quick notes on "effective altruism"

Well, my default opinion is that we should keep things as they are; I don't find the arguments against "effective altruism" particularly persuasive, and name changes at this scale are pretty costly.

Insofar as people want to keep their identities small, there are already a bunch of other terms they can use - like longtermist, or environmentalist, or animal rights advocate. So it seems like the point of having a term like EA on top of that is to identify a community. And saying "I'm part of the effective altruism community" softens the term a bit.

around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists"

This seems like the most important point to think about; relatedly, I remember being surprised when I interned at FHI and learned how many people there don't identify as effective altruists. It seems indicative of some underlying problem, which is worth investigating directly. As a first step, it'd be good to hear more from people who have reservations about identifying as an effective altruist. I've just made a top-level question about it, plus an anonymous version - if that describes you, I'd be interested to see your responses!

Some quick notes on "effective altruism"

I think the "global priorities" label fails to escape several of the problems that Jonas argued the EA brand has. In particular, it sounds arrogant for someone to say that they're trying to figure out global priorities. If I heard of a global priorities forum or conference, I'd expect it to have pretty strong links with the people actually responsible for implementing global decisions; if it were actually just organised by a bunch of students, then they'd seem pretty self-aggrandizing.

The "priorities" part may also suggest to others that they're not a priority. I expect "the global priorities movement has decided that X is not a priority" seems just as unpleasant to people pursuing X as "the effective altruism movement has decided that X is not effective".

Lastly, "effective altruism" to me suggests both figuring out what to do, and then doing it. Whereas "global priorities" only has connotations of the former.

Proposed Longtermist Flag

What would you think about the same flag with the sun removed?

Might make it look a little unbalanced, but I kinda like that - longtermism is itself unbalanced in its focus on the future.

Some preliminaries and a claim

I didn't phrase this as clearly as I should have, but it seems to me that there are two separate issues here: firstly whether group X's views are correct, and secondly whether group X uses a methodology that is tightly coupled to reality (in the sense of having tight feedback loops, or making clear predictions, or drawing on a lot of empirical evidence).

I interpret your critique of EA roughly as the claim that a lack of a tight methodological coupling to reality leads to a lack of correctness. My critique of the posts you linked is also that they lack tight methodological coupling to reality, in particular because they rely on high-level abstractions. I'm not confident about whether this means that they're actually wrong, but it still seems like a problem.

Some preliminaries and a claim

I claim that the Effective Altruism and Bay Area Rationality communities have collectively decided that they do not need to participate in tight feedback loops with reality in order to have a huge, positive impact.

I am somewhat sympathetic to this complaint. However, I also think that many of the posts you linked are themselves phrased in terms of very high-level abstractions which aren't closely coupled to reality, and in some ways exacerbate the sort of epistemic problems they discuss. So I'd rather like to see a more careful version of these critiques.

Contact with reality

Yes, I think I still have these concerns; if I'd had extreme cognitive biases all along, then I would want them removed even if doing so didn't improve my understanding of the world. It feels similar to if you told me that I'd lived my whole life in a (pleasant) dreamlike fog, and I had the opportunity to wake up. Perhaps this is the same instinct that motivates meditation? I'm not sure.

Contact with reality

This post is great, and I think it frames the idea very well.

My only disagreement is with the following part of the scenario you give:

Every time you try to think things through, the machine will cause you to make mistakes of reasoning that you won’t notice: indeed, you’ve already been making lots of these. You’re hopelessly confused on a basic level, and you’ll stay that way for the rest of your life.

The inclusion of this seems unhelpful to me, because it makes me wonder about the extent to which a version of me whose internal thought processes are systematically manipulated is really the same person (in the sense that I care about). Insofar as the ways I think and reason are part of my personality and identity, I have reasons to not want them changed which go beyond wanting my beliefs to be accurate.

As you identify, it may still be necessary to interfere with my beliefs for the purposes of maintaining social fictions, but this could plausibly require only minor distortions; losing control of my mind in the way you describe above seems quite different from merely having false beliefs.

Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

I'd say scope sensitive ethics is a reinvention of EA.

This doesn't seem quite right, because ethical theories and movements/ideologies are two different types of things. If you mean to say that scope-sensitive ethics is a reinvention of the ethical intuitions which inspired EA, then I'm happy to agree; but the whole point of coining the term is to separate the ethical position from other empirical/methodological/community connotations that EA currently possesses, and which to me also seem like "core ideas" of EA.

Clarifying the core of Effective Altruism

Thanks for the kind words and feedback! Some responses:

I wonder if there are examples?

The sort of examples which come to mind are things like new religions, startups, or cults - all of which make heavy demands on early participants, and thereby foster a strong group bond and sense of shared identity which allows them greater long-term success.

since the antecedent "if you want to contribute to the common good" is so minimal, ben's def feels kind of near-normative to me

Consider someone who only cares about the lives of people in their own town. Do they want to contribute to the common good? In one sense yes, because the good of the town is a part of the common good. But in another sense no; they care about something different from the common good, which just happens to partially overlap with it.

Using the first definition, "if you want to contribute to the common good" is too minimal to imply that not pursuing effective altruism is a mistake.

Using the second definition, "if you want to contribute to the common good" is too demanding - because many people care about individual components of the common good (e.g. human flourishing) without being totally on board with "welfare from an impartial perspective".

I think I disagree about the maximising point. Basically I read your proposed definition as near-maximising, because when you iterate on 'contributing much more' over and over again you get a maximum or a near-maximum.

Yeah, I agree that it's tricky to dodge maximalism. I give some more intuitions for what I'm trying to do in this post. On the 2nd worry: I think we're much more radically uncertain about the (ex ante) best option available to us out of the space of all possible actions, than we are radically uncertain about a direct comparison between current options vs a new proposed option which might do "much more" good. On the 3rd worry: we should still encourage people not to let their personal preferences stand in the way of doing much more good. But this is consistent with (for example) people spending 20% of their charity budget in less effective ways. (I'm implicitly thinking of "much more" in relative terms, not absolute - so a 25% increase is not "much more" good.)

AMA: Ajeya Cotra, researcher at Open Phil

An extension of Daniel's bonus question:

If I condition on your report being wrong in an important way (either in its numerical predictions, or via conceptual flaws) and think about how we might figure that out today, it seems like two salient possibilities are inside-view arguments and outside-view arguments.

The former are things like "this explicit assumption in your model is wrong". E.g. I count my concern about the infeasibility of building AGI using algorithms available in 2020 as an inside-view argument.

The latter are arguments that, based on the general difficulty of forecasting the future, there's probably some upcoming paradigm shift or crucial consideration which will have a big effect on your conclusions (even if nobody currently knows what it will be).

Are you more worried about the inside-view arguments of current ML researchers, or outside-view arguments?
