
AnonymousQualy

240 karma · Joined Jan 2023

Bio

Enthusiastic utilitarian and moral realist.  I made this anonymous account to talk about the controversial stuff.

Comments (25)

What makes the best solution the longtermists breaking off, instead of everyone else breaking off?

I more or less agree with this post that (1) longtermism is dominant, (2) longtermism is a bad cause area, and (3) longtermism is bad for PR reasons.  But I don't think we can divorce EA from a cause area a majority of its members (and associated organizations!) find compelling.  Even if we could, the PR damage that's already been caused wouldn't go away.

So it seems more realistic for exclusively near-termist EAs to try to carve out a separate space for ourselves.  Obviously that's a huge logistical task.  I don't really expect it to be successful.  But I rate its chances of success higher than cutting longtermism out of EA.

Disclaimer: this is very much just me spitballing.  And I only know about the U.S.  And I ditched the tractability requirement (see below).

Reasonably high confidence that these are high-impact (though I don't claim to be an expert):

  • Unconditional cash transfers (weaker form: child tax credit (big effects on child poverty))
  • Liberalizing land-use restrictions (like zoning )
  • Land value tax (this op-ed  based on this research was making waves today)
  • Prison reform (I understand this used to be an EA cause area, but I don't hear about it much anymore?)
  • Pandemic resistance (pretty sure there are EAs working on this right now?)
  • Welfare reform (this is a massive category that deserves to be broken down more, but in the interest of my not pouring too much work into this post I'll just say: housing vouchers are way too difficult to get and too many programs use work requirements that don't do much good)
  • Proportional representation (or, a weaker but perhaps more achievable electoral reform: ranked choice voting)
  • Liberalizing immigration
  • It felt wrong not to include something healthcare-related here, but health policy is an area I don't know much about.  As I understand it, the highest-impact reforms in the U.S. would involve both moving in a single-payer direction and supply-side interventions to decrease the costs of care and drugs (see patent idea below)

These policies are also interesting to think about, though I have lower confidence in their impact:

  • Sectoral bargaining
  • Occupational licensing reform (too many jobs require too many hoops to jump through)
  • Patent reform (could we get the same innovation with prizes? I don't know but it seems worth considering)
  • Public school funding (LARGE CAVEAT: There are clear benefits to higher-quality teachers, but it's less clear how much you'd need to pay high-quality teachers to get them into the disadvantaged classrooms where they're needed)

I ditched the "tractability" requirement here because I'm not sure how to think about it.  A lot of great policies already have people working on them but still aren't getting enough attention.  Could EA move the needle? Idk, maybe (though frankly, I think our current brand is probably too politically toxic).  But then there's also the fact that once any good policy starts getting attention, it triggers political resistance.

Also, I'm all but certain that policies in low-income countries have way more impact than here in the U.S.  I just don't know enough to speak to those.

my personal consumption decisions just have such a tiny effect compared to my career/donation decisions that it feels like I shouldn’t pay much attention to their direct consequences

This isn't an argument against veganism; it's an argument against prioritizing  veganism as a cause area.  And EA isn't prioritizing veganism as far as I can tell?

 I worry it could be really easy for EA to become a community where people rationalize doing bad things on account of the fact that those things are just a little bit bad compared to all the good things they do.

I really don't want EA to become that.

This might just be an object-level disagreement about where EA's main positive impact is likely to come from, on our respective models of the world. E.g., if you think EA mainly has a positive impact via increasing donations to GiveDirectly, then I buy that EA's current idea pipeline might be a lot weirder than optimal for that.

This is something of an argument for not including such different cause areas under the same banner.

Strong upvote.

Three additional arguments in favor of (marginally!!!!) greater social norm enforcement:

(1)

A movement can only optimize for one thing at a time.  EA should be optimizing for doing the most good.

That means sometimes, EA will need to acquiesce to social norms against behaviors that - even if fine in isolation - pose too great a risk of damaging EA's reputation and, through it, EA's ability to do the most good.

This is trivially true; I think people just disagree about where the line should be drawn.  But I'm honestly not sure we're drawing any lines right now, which seems suboptimal.

(2)

Punishing norm violations can be more efficient than litigating every issue in full (this is in part why humans evolved punishment norms in the first place).

And sometimes, enforcing social norms may not just be more efficient; it may be more likely to reach a good outcome.  For example, when the benefits of a norm are diffuse across many people and gradual, but the costs are concentrated and immediate, a collective action problem arises: the beneficiaries have little incentive to litigate the issue, while those hurt have a large incentive.  Note how this interacts with point (1): reputational damage to EA at large is highly diffuse.

To strengthen this point, social norms often pass down knowledge that benefits adherents without their ever realizing it.  Humans aren't good at getting the best outcomes from our individual reasoning; we're good at collective learning.

(3)

 There are a lot more people in the world interested in norm violation than in doing the most good.  Therefore, we should expect that a movement too tolerant of weirdness will create too high a ratio of norm-violators to helpful EAs (this is the witch hunt point made in the OP).

I agree!  When I say "wing" I mean something akin to "AI risk" or "global poverty" - i.e., an EA cause area that specific people are working on.

I agree!  Greater leniency across cultural divides is good and necessary.

But I also think that:

(1) That doesn't apply to the Bostrom letter

(2) There are certain areas where we might think our cultural norms are better than many alternatives; in these situations, it would make sense to tell the person from the alternate culture about our norm and try to persuade them to abide by it (including through social pressure).   I'm pretty comfortable with the idea that there's a tradeoff between cultural inclusion and maintaining good norms, and that the optimal balance between the two will be different for different norms.

Agreed.

I'm no cultural conservative, but norms are important social tools we shouldn't expect to entirely discard.  Anthropologist Joe Henrich's writing really opened my eyes to how norms pass down complex knowledge that would be inefficient for an individual to try to learn on their own.

I wholeheartedly agree that EA must remain welcoming to neurodiverse people.  Part of how we do that is being graceful and forgiving toward people who inadvertently violate social norms in pursuit of EA goals.

But I worry this specific comment overstates its case by (1) leaving out both the "inadvertent" part and the "in pursuit of EA goals" part, which implies that we ought to be fine with gratuitous norm violation, and (2)  incorporating political bias.  You say:

If we impose standard woke cancel culture norms on everybody in EA, we will drive away [neurodiverse people]. Politically correct people love to Aspy-shame.  They will seek out the worst things a neurodiverse person has ever said, and weaponize it to destroy their reputation, so that their psychological traits and values are allowed no voice in public discourse.

I don't want to speak for anyone with autism.  However, as best I can tell, this is not at all a universal view.  I know multiple people who thrive in lefty spaces despite seeming (to me at least) like high decouplers.  So it seems more plausible to me that this isn't narrowly true about high decouplers in "woke" spaces; it's broadly true about high decouplers in communities whose political/ethical beliefs the decoupler does not share.

I also think that, even for a high decoupler (which I consider myself to be, though as far as I know I'm not on the autism spectrum), the really big taboos - like race and intelligence - are usually obvious, as is the fact that you're supposed to be careful when talking about them.  The text of Bostrom's email demonstrates he knows exactly which taboos he's violating.

I also think we should be careful not to mistake correlation for causation, when looking at EA's success and the traits of many of its members.  For example, you say:

[if we punish social norm violation] we will drive away everybody with the kinds of psychological traits that created EA, that helped it flourish, and that made it successful

There are valuable EA founders/popularizers who seem pretty adept at navigating taboos.  For example, every interview I've seen with Will MacAskill involves him reframing counterintuitive ethics to fit with the average person's moral intuitions.  This seems to have been really effective at popularizing EA!

I agree that there are benefits from decoupling.  But there are clear utilitarian downsides too.  Contextualizing a statement is often necessary to anticipate its social welfare implications.  Contextualizing therefore seems necessary to EA.

Finally, I want to offer a note of sympathy.  While I don't think I'm autistic, I do frequently find myself at odds with mainstream social norms.  I prefer more direct styles of communication than most people.  I'm a hardcore utilitarian.  Many of the left-wing shibboleths common among my graduate school classmates I find annoying, wrong, and even harmful.  For all these reasons, I share your feeling that EA is an "oasis."  In fact, it's the only community I'm a part of that reaffirms my deepest beliefs about ethics in a clear way.

But ultimately, I think EA should not optimize to be that sort of reaffirming space for me.   EA's goal is wellbeing maximization, and anything other than wellbeing maximization will sometimes - even if only rarely - have to be compromised.

Lying to meet goals != contextualizing

It's hard for me to follow what you're trying to communicate.  Are you saying that high contextualizers don't/can't apply their morals universally while high decouplers can?  I don't see any reason to believe that.   Are you saying that decouplers are more honest?  I also don't see any reason to believe that.
