
CC'd to lesswrong.com/shortform

Positive and negative longtermism

I'm not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.

In this shortform, I'm going to take a polarity approach: I'm going to push each pole to its extreme, probably beyond positions that are actually held, because I think median longtermism, or the longtermism described in The Precipice, is a kind of average of the two.

Negative longtermism is saying "let's not let some bad stuff happen", namely extinction. It wants to preserve. If nothing gets better for the poor or the animals or the astronauts, but we dodge extinction and revolution-erasing subextinction events, that's a win for negative longtermism.

In positive longtermism, such a scenario is considered a loss. From an opportunity cost perspective, the failure to erase suffering or to bring agency and prosperity to 1e1000 comets and planets hurts literally as badly as extinction.

Negative longtermism is a vision of what shouldn't happen. Positive longtermism is a vision of what should happen.

My model of Ord says we should lean at least 75% toward positive longtermism, but I don't think he's an extremist. I'm uncertain whether my model of Ord would even subscribe to this positive/negative axis in the first place.

What does this axis mean? I wrote a little about this earlier this year. I think figuring out what projects you're working on and who you're teaming up with strongly depends on how you feel about negative vs. positive longtermism. The two dispositions toward myopic coalitions are "do" and "don't". I won't attempt to claim which disposition is more rational or desirable, but I will explore each branch.

When Alice wants future X and Bob wants future Y, but if they don't defeat the adversary Adam they will be stuck with future 0 (containing great disvalue), Alice and Bob may set aside their differences and choose whether or not to form a myopic coalition to defeat Adam.

  • Form myopic coalitions. A trivial case where you would expect Alice and Bob to tend toward this disposition is if X and Y are similar. However, if X and Y are very different, Alice and Bob must each believe that defeating Adam completely hinges on their teamwork in order to tend toward this disposition, unless they're in a high-trust situation where they can each credibly signal that they won't try to get a head start on the X vs. Y battle until 0 is completely ruled out. (A toy expected-value sketch follows this list.)
  • Don't form myopic coalitions. A low trust environment where Alice and Bob each fully expect the other to try to get a head start on X vs. Y during the fight against 0 would tend toward the disposition of not forming myopic coalitions. This could lead to great disvalue if a project against Adam can only work via a team of Alice and Bob.
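
To make the "hinges on teamwork" condition concrete, here is a toy sketch in Python. All of the numbers (payoffs, probabilities) and the function name are made up for illustration; the point is just that when defeating Adam is unlikely without teamwork, the coalition is worth it for Alice even if she expects Bob to get a head start on X vs. Y.

```python
# A toy sketch of the coalition decision, with made-up numbers: Alice values
# future X at 100 and future Y at 20, and future 0 is worth -1000 to her.
# The key variable is how much defeating Adam hinges on teamwork
# (p_team vs. p_solo).

def alice_expected_value(coalition: bool, p_team: float, p_solo: float,
                         p_x_given_no_zero: float) -> float:
    """Alice's expected value of forming (or not forming) the myopic coalition."""
    value_x, value_y, value_zero = 100, 20, -1000
    p_defeat_adam = p_team if coalition else p_solo
    ev_if_adam_defeated = (p_x_given_no_zero * value_x
                           + (1 - p_x_given_no_zero) * value_y)
    return p_defeat_adam * ev_if_adam_defeated + (1 - p_defeat_adam) * value_zero

# If defeating Adam really hinges on teamwork (p_solo far below p_team), the
# coalition wins for Alice even if she expects Bob to get a head start on
# X vs. Y (i.e., a lower chance of X conditional on avoiding 0).
print(alice_expected_value(True,  p_team=0.9, p_solo=0.3, p_x_given_no_zero=0.4))  # -53.2
print(alice_expected_value(False, p_team=0.9, p_solo=0.3, p_x_given_no_zero=0.8))  # -674.8
```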

An example of such a low-trust environment is, if you'll excuse political compass jargon, reading bottom-lefts online debating among themselves the merits of working with top-lefts on projects against capitalism. The argument for coalition is that capitalism is a formidable foe and they could use as much teamwork as possible; the argument against coalition is historical backstabbing and pogroms when top-lefts take power and betray the bottom-lefts.

For a silly example, consider an insurrection against broccoli. The ice cream faction can form a coalition with the pizzatarians if they do some sort of value trade that builds trust, like the ice cream faction eating some pizza and the pizzatarians eating some ice cream. Indeed, the viciousness of the fight after broccoli is abolished may have nothing to do with the solidarity between the two groups under broccoli's rule. It may or may not be the case that the ice cream faction and the pizzatarians can come to an agreement about how best to increase value in a post-broccoli world. Civil war may follow revolution, or not.

Now, while I don't support the long reflection (TLDR: I think a collapse of diversity sufficient to permit a long reflection would be a tremendous failure), I think elements of positive longtermism are crucial for things to improve for the poor or the animals or the astronauts. I think positive longtermism could outperform negative longtermism when it comes to finding synergies between the extinction prevention community and the suffering-focused ethics community.

However, I would be very upset if I turned around in a couple of years and positive longtermists were, like, the premier face of longtermism. The reason for this is that once you admit positive goals, you have to deal with everybody's political aesthetics, like a philosophy professor's preference for a long reflection, or an engineer's preference for moar spaaaace, or a conservative's preference for retvrn to pastorality, or a liberal's preference for intercultural averaging. A negative goal like "don't kill literally everyone" largely lacks this problem. Yes, I would change my mind about this: if 20% of global defense expenditure were targeted at defending against extinction-level or revolution-erasing events, then the neglectedness calculus would lead us to focus the comparatively small EA community on positive longtermism.

The takeaway from this shortform should be that quinn thinks negative longtermism is better for forming projects and teams.

CW death

I'm imagining myself having a 6+ figure net worth at some point in a few years, and I don't know anything about how wills work. 

Do EAs have hit-by-a-bus contingency plans for their net worths? 

Is there something easy we can do to reduce the friction of the following process: ask five EAs with trustworthy beliefs and values to form a grantmaking panel in the event of my death. This grantmaking panel could meet for thirty minutes and make a weight allocation decision in the Giving What We Can app, or they could accept applications and run it that way, or they could make an investment decision that interprets my net worth as seed money for an ongoing fund; it would be up to them.

I'm assuming this is completely possible in principle: I solicit those five EAs, who have no responsibilities or obligations as long as I'm alive; if they agree, I get a lawyer to write up a will that describes everything.

If one EA has done this, the "template contract" would be available for other EAs to reuse. Would it be worth lowering the friction of making this happen?

Related idea: I can hardcode a weight assignment for the Giving What We Can app into my will; surely a non-EA will-writing lawyer could wrap their head around this quickly. But is there a way to avoid soliciting the lawyer every time I want to update my weights, as my beliefs and values change while I'm alive?

On the face of it, the second idea sounds lower friction and almost as valuable as the first for most individuals.

Why have I heard about Tyson investing in lab-grown meat, but I haven't heard about big oil investing in renewables?

Tyson's basic insight here is not to identify as "an animal agriculture company". Instead, they identify as "a feeding people company". (Which happens to align with doing the right thing, conveniently!)

It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying "we're an oil company" when they could instead be going around saying "we're a powering-stuff company"? Being a powering-stuff company means you have fuel source indifference!

I mean if you look at all the money they had to spend on disinformation and lobbying, isn't it insultingly obvious to say "just invest that money into renewable research and markets instead"?

Is there dialogue on this? Also, have any members of "big oil" in fact done what I'm suggesting, and I just didn't hear about it?

CC'd to lesswrong shortform

This happens quite widely to my knowledge and I've heard about it a lot (but I'm heavily involved in the climate movement, so that makes sense). Examples:

  • BP started referring to themselves as "Beyond Petroleum" rather than "British Petroleum" over 20 years ago.
  • A report by Greenpeace found that, on average across a few "big oil" businesses, 63% of their advertising was classed as "greenwashing", while only approx. 1% of their total portfolios were renewable energy investments.
  • Guardian article covering analysis by Client Earth who are suing big oil companies for greenwashing
  • A lawsuit by Client Earth got BP to retract some greenwashing adverts for being misleading
  • Examples of oil companies promoting renewables
  • Another article on marketing spending to clean up the Big Oil image

Post idea: based on interviews, profile scenarios from software security (exploit discovery, responsible disclosure, coordination of patching, etc.) and try to analyze them with an aim toward understanding what good infohazard protocols would look like.

(I have a contact who was involved with a big patch; if someone else wants to tackle this, reach out for a warm intro!)

Don't Look Up might be one of the best mainstream movies for the xrisk movement. Eliezer said it's too on the nose to bear/warrant actually watching. I fully expect to write a review for the EA Forum and LessWrong about xrisk movement building.

We need a name for the following heuristic. I think of it as one of those "tribal knowledge" things that gets passed on like an oral tradition without being citeable, in the sense of not being part of a literature. If you come up with a name, I'll certainly credit you in a top-level post!

I heard it from Abram Demski at AISU'21. 

Suppose you're either going to end up in world A or world B, and you're uncertain about which one it's going to be. Suppose you can pull lever $L_A$, which will be worth 100 if you end up in world A, or you can pull lever $L_B$, which will be worth 100 if you end up in world B. The heuristic is that if you pull $L_A$ but end up in world B, you do not want to have created disvalue; in other words, your intervention conditional on the belief that you'll end up in world A should not screw you over in timelines where you end up in world B.

This can be fully mathematized by saying "if most of your probability mass is on ending up in world A, then obviously you'd pick a lever $L$ such that $V_A(L)$ is very high; just also make sure that $V_B(L) \geq 0$, or that it creates an acceptably small amount of disvalue", where $V_A(L)$ is read "the value of pulling lever $L$ if you end up in world A".
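
As a minimal sketch of that constraint, here's some Python. The lever names, payoffs, and disvalue threshold are all made up for illustration; the point is just that you filter out levers whose downside in the world you didn't bet on is unacceptable, and only then maximize expected value.

```python
# A minimal sketch of the heuristic, with made-up lever names, payoffs, and
# threshold: among candidate levers, filter out any lever that would create
# unacceptable disvalue in the world you didn't bet on, then maximize
# expected value over what's left.

ACCEPTABLE_DISVALUE = -10  # the worst payoff we tolerate in the "wrong" world

levers = {
    # lever name: (value if world A obtains, value if world B obtains)
    "L_A": (100, -50),       # great in A, but screws you over in B
    "L_A_robust": (80, 0),   # a bit worse in A, harmless in B
    "L_B": (-5, 100),
}

def choose_lever(p_world_a: float) -> str:
    """Pick the admissible lever with the highest expected value."""
    admissible = {
        name: (v_a, v_b)
        for name, (v_a, v_b) in levers.items()
        if min(v_a, v_b) >= ACCEPTABLE_DISVALUE
    }
    return max(
        admissible,
        key=lambda name: (p_world_a * admissible[name][0]
                          + (1 - p_world_a) * admissible[name][1]),
    )

# Even with 90% credence in world A, the naive bet "L_A" is ruled out because
# of its downside in B; the robust lever wins instead.
print(choose_lever(0.9))  # -> "L_A_robust"
```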

Is there an econ major or geek out there who would like to 

  1. accelerate my lit review as I evaluate potential startup ideas in prediction markets and IIDM by writing paper summaries
  2. occasionally tutor me in microeconomics and game theory and similar fun things 

something like 5 hours/week, at something like $20-40/hr

(EA Forum DMs / quinnd@tutanota.com / disc @quinn#9100) 

I'm aware that there are contractor-coordinating services for each of these asks; I just think it'd be really awesome to have one person do both, keep the money in the community, and maybe meet a future collaborator!

What's the latest on moral circle expansion and political circle expansion? 

  • Were slaves excluded from the moral circle in ancient Greece or the US antebellum South, and how does this relate to their exclusion from the political circle?
  • If AIs could suffer, is recognizing that capacity a slippery slope toward giving AIs the right to vote? 
  • Can moral patients be political subjects, or must political subjects be moral agents? If there was some tipping point or avalanche of moral concern for chickens, that wouldn't imply arguments for political representation of chickens, right? 
  • Consider pre-suffrage women, or contemporary children: they seem fully admitted into the moral circle, but only barely admitted to the political circle. 
  • A critique of MCE is that history is not one march from worse to better (smaller to larger); there are in fact false starts, moments of retrograde, etc. Is PCE the same, but even more so?

If I must make a really bad first approximation, I would say a rubber band is attached to the moral circle, and on the other end of the rubber band is the political circle, so when the moral circle expands it drags the political circle along with it on a delay, modulo some metaphorical tension and inertia. This rubber band model seems informative in the slave case, but uselessly wrong in the chickens case, and it points to some, I think, very real possibilities in the AI case.
