Hm. I think I agree with the point you're making, but not the language it's expressed in? I notice that your suggestion is a change in endorsed moral principles, but you make an instrumental argument, not a moral one. To me, the core of the issue is here:

If EA becomes a very big movement, I predict that individuals on the fringes of the movement will commit theft with the goal to donate more to charity and violence against individuals and organisations who pose x-risks.

This seems to me more a matter of high-fidelity communication than a matter of which philosophical principles we endorse. The idea of ethical injunctions is extremely important, but it is not currently treated as a central EA pillar idea. I would be very wary of EA self-modifying into a movement that explicitly rejects utilitarianism on the grounds that this will lead to better utilitarian outcomes.

if the AI is scheming against us, reading those posts won’t be very helpful to it, because those ideas have evidently already failed.

Pulling this sentence out for emphasis because it seems like the crux to me.

saving money while searching for the maximum seems bad

In the sense of "maximizing" you're using here, I agree entirely with this post. Aiming for the very best option according to a particular model and pushing solely on that as hard as you can will expose you to Goodhart problems, diminishing returns, model violations, etc. 

However, I think the sense of "maximizing" used in the post you're responding to, and more broadly in EA when people talk about a "maximizing ethics", is quite different. I understand it to mean something more like "doing the most good possible" - not aiming to clear a certain threshold, or trading off with other ethical or non-ethical priorities. It's a philosophical commitment that says "even if you've already saved a hundred lives, it's just as ethically important to save one more. You're not done."

It's possible that a commitment to a maximizing philosophy can lead people to adopt a mindset like the one you describe in this post - to the extent that's true, I don't disagree at all that they're making a mistake. But I think there may be a terminological mismatch here that will lead to illusory disagreements.

I like the idea of having the place in the name, but I think we can keep that while also making the name cool/fun?

Personally I wouldn't be opposed to calling EA spaces "constellations" in general, and just calling this one the "Harvard Constellation" or something. This is mostly because I think Constellation is an extraordinarily good name - it's when a bunch of stars get together to create something bigger that'll shine light into the darkness :)

Alternatively, "Harvard Hub" is both easy and very punchy.

I'm broadly on board with the points made here, but I would prefer to frame this as an addition to the pitch playbook, not a tweak to "the pitch".

Different people do need to hear different things. Some people probably do have the intuition that we should care about future people, and would react negatively to something like MacAskill's bottle example. But personally, I find that lots of people do react to longtermism with something like "why worry about the future when there are so many problems now?", and I think the bottle example might be a helpful intuition pump for those people.

The more I think about EA pitches the more I wonder if anyone has just done focus group testing or something...

yup, sounds like we're on the same page - I think I steelmanned a little too hard. I agree that the people making these criticisms probably do in fact think that being shot by robots or something would be bad.

I propose we Taboo the phrase "most important", since it's quite vague. The claim I read Karnofsky as making, phrased more precisely, is something like:

In approximately this century, it seems likely that humanity will be exposed to a high level of X-risk, while also developing technology capable of eliminating almost all known X-risks.

This is the Precipice view of things - we're in a brief dangerous bottleneck, after which it seems like things will be much safer. I agree it takes a leap to forecast that no further X-risks will arise in the trillions of years post-Precipice.

Based on your post, I'm guessing that your use of "important" is something more about availability of choice, wildness, value, and maybe a twist where the present is always the most important by definition. I don't think Karnofsky would argue that the current century is the "most important" in any of these senses of the word.

Is there still disagreement after this Taboo?

tldr: I think this argument is in danger of begging the question - rejecting criticisms that implicitly just say "EA isn't that important" by asserting "EA is important!"

There’s an analogy I think is instructive here

I think the fireman analogy is really fun, but I do have a problem with it. It's built around the mapping "fire = EA cause areas", and gets almost all of its mileage out of the implicit assumption that fires are important and need to be put out.

This is why the first class of critics in the analogy look reasonable, and the second class look ridiculous. The first go along with the assumption that fires are important; the second reject it (but all of this is implicit!).

I think criticism of the form "fires aren't actually as important as you think they are" is a valid, if extremely foundational, criticism. It's not vacuous. If someone has found the True Theory of Ethics and it says the most important thing is "live right" or "have diverse friends" or "don't be cringe", then I would want to know that!

I do wish they'd express it as the dramatic ethical claim it really is, though - not in this vague way that makes an unimportant criticism, implies it's important through tone, and only indirectly hints at the real value claim behind it.

Agree that the impactfulness of working on better government is an important claim, and one you don't provide much evidence for. In the interest of avoiding an asymmetric burden of proof, I want to note that I personally don't have strong evidence against this claim either. I would love to see it further investigated and/or tried out more.

All else equal, I definitely like the idea of popularizing some sort of longtermist sentiment. I'm still unsure about the usefulness - I have some doubts about the proposed paths to impact. Personally, I think a world with a mass-appeal version of longtermism would be a lot more pleasant for me to live in, but not necessarily much better off on the metrics that matter.

  • Climate is a very democratically legitimate issue. It's discussed all the time, lots of people are very passionate about it, and it can probably move some pretty hefty voting blocs. But investing the amount of energy it would take to get low-key longtermism to the same level of democratic legitimacy as climate, just to get the same returns from government that the climate folks are getting, seems like a pretty abysmal trade. That said, I don't really know what the counterfactual looks like, so it's hard to compare how worthwhile the mass attention really is.
  • Widening the talent pool seems most plausible, but the model here is a bit fuzzy to me. Very few people work on one of their top-three-world-issues, but EA is currently very small, so doing this would probably bring in a serious influx of people wanting to do direct work. But if this dominates the value of the proposal, is there a reason it wouldn't be better/cheaper/faster to do more targeted outreach instead of aiming for Mass Appeal? I guess it depends on how easy or expensive it is to target the folks who really do want to work on one of their top-three-world-issues.
  • I think the benefit of "making longtermist causes easier to explain" is mostly subsumed by the other two arguments? I can't think of any path-to-impact for this that doesn't run through marginal pushes towards either government action or direct work.

Also, quick flag that the slogan "creating a better future for our grandchildren" reads a bit nationalist to me - maybe because of some unpleasant similarity to the 14 words.
