Stem cell slowdown and AI timelines
My knowledge of Christians and stem cell research in the US is very limited, but my understanding is that they accomplished a real slowdown.
Has anyone looked to that movement for lessons about AI?
Did anybody from that movement take a "change it from the inside" or "build clout by boosting stem cell capabilities so you can later spend that clout on stem cell alignment" approach?
CC'd to lesswrong.com/shortform
Positive and negative longtermism
I'm not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.
In this shortform, I'm going to take a polarity approach: I'm going to push each pole to its extreme, probably beyond positions that are actually held, because I think median longtermism, or the longtermism described in The Precipice, is a kind of average of the two.
Negative longtermism is saying "let's not let some bad stuff happen", namely extinction. It wants to preserve. If nothing gets better for the poor or the animals or the astronauts, but we dodge extinction and revolution-erasing subextinction events, that's a win for negative longtermism.
In positive longtermism, such a scenario is considered a loss. From an opportunity cost perspective, the failure to erase suffering or to bring agency and prosperity to 1e1000 comets and planets hurts literally as much as extinction.

Negative longtermism is a vision of what shouldn't happen. Positive longtermism is a vision of what should happen.

My model of Ord says we should lean at least 75% toward positive longtermism, but I don't think he's an extremist. I'm uncertain whether my model of Ord would even subscribe to the framing of this positive/negative axis.

What does this axis mean? I wrote a little about this earlier this year. I think figuring out which projects you're working on and who you're teaming up with depends strongly on how you feel about negative vs. positive longtermism. The two dispositions toward myopic coalitions are "do" and "don't". I won't attempt to claim which disposition is more rational or desirable, but I'll explore each branch.

When Alice wants future X and Bob wants future Y, but they will be stuck with future 0 (containing great disvalue) if they don't defeat the adversary Adam, Alice and Bob may set aside their differences and choose to form a myopic coalition to defeat Adam, or not.

Form myopic coalitions. A trivial case where you would expect Alice and Bob to tend toward this disposition is when X and Y are similar. However, if X and Y are very different, Alice and Bob must each believe that defeating Adam completely hinges on their teamwork in order to tend toward this disposition, unless they're in a high-trust situation where each can credibly signal that they won't try to get a head start on the X vs. Y battle until 0 is completely ruled out.

Don't form myopic coalitions. A low-trust environment, where Alice and Bob each fully expect the other to try to get a head start on X vs. Y during the fight against 0, would tend toward the disposition of not forming myopic coalitions. This could lead to great disvalue if a project against Adam can only work via a team of Alice and Bob.

An example of such a low-trust environment is, if you'll excuse political compass jargon, reading bottom-lefts online debating internally the merits of working with top-lefts on projects against capitalism. The argument for coalition is that capitalism is a formidable foe and they could use as much teamwork as possible; the argument against coalition is historical backstabbing and pogroms when top-lefts take power and betray the bottom-lefts.
For a silly example, consider an insurrection against broccoli. The ice cream faction can coalition with the pizzatarians if they do some sort of value trade that builds trust, like the ice cream faction eating some pizza and the pizzatarians eating some ice cream. Indeed, the viciousness of the fight after broccoli is abolished may have nothing to do with the solidarity between the two groups under broccoli's rule. It may or may not be the case that the ice cream faction and the pizzatarians can come to an agreement about how best to increase value in a post-broccoli world. Civil war may follow revolution, or not.
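To put toy numbers on Alice's side of that decision (everything below is made up for illustration; it's just the expected-value skeleton of the "completely hinges on their teamwork" condition):

```python
def alice_expected_value(p_defeat_adam, p_x_wins_after, value_x=100, value_0=0):
    """Alice's EV: defeat Adam with some probability, then the X-vs-Y contest plays out.
    All parameters are placeholder numbers, not claims about any real coalition."""
    return p_defeat_adam * p_x_wins_after * value_x + (1 - p_defeat_adam) * value_0

# Made-up numbers: teaming up makes beating Adam much more likely, but in a
# low-trust world Alice expects Bob's head start to cost her in the X-vs-Y fight.
coalition_ev = alice_expected_value(p_defeat_adam=0.9, p_x_wins_after=0.4)
solo_ev      = alice_expected_value(p_defeat_adam=0.3, p_x_wins_after=0.8)
print(coalition_ev, solo_ev)  # roughly 36 vs 24: coalition wins despite low trust
```

With these made-up numbers, coalitioning wins even in a low-trust world, because the boost to beating Adam outweighs the expected head start Bob gets; shrink that boost and the ordering flips.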
Now, while I don't support long reflection (TLDR: I think a collapse of diversity sufficient to permit a long reflection would be a tremendous failure), I think elements of positive longtermism are crucial for things to improve for the poor or the animals or the astronauts. I think positive longtermism could outperform negative longtermism when it comes to finding synergies between the extinction prevention community and the suffering-focused ethics community. However, I would be very upset if I turned around in a couple of years and positive longtermists were, like, the premier face of longtermism. The reason is that once you admit positive goals, you have to deal with everybody's political aesthetics, like a philosophy professor's preference for a long reflection, or an engineer's preference for moar spaaaace, or a conservative's preference for retvrn to pastorality, or a liberal's preference for intercultural averaging. A negative goal like "don't kill literally everyone" largely avoids this problem. That said, I would change my mind about this if 20% of global defense expenditure were targeted at defending against extinction-level or revolution-erasing events; then the neglectedness calculus would lead us to focus the comparatively smaller EA community on positive longtermism.
The takeaway from this shortform should be that quinn thinks negative longtermism is better for forming projects and teams.
In negative longtermism, we sometimes invoke the concept of existential security (which I'll abbreviate to xsec): the idea that at some point the future is freed from xrisk, or that we have in some sense abolished the risk of extinction.
One premise for the current post is that, in a veil of ignorance sense, affluent and smart humans alive in the 21st century have duties/responsibilities/obligations (unless they're simply not altruistic at all) derived from Most Important Century arguments.
I think it's tempting to say that the duty -- the ask -- is to obtain existential security. But I think this is wildly too hard, and I'd like to propose a somewhat different framing.
Xsec is a delusion
I don't think this goal is remotely obtainable. Rather, I think the law of mad science implies that either we'll obtain a commensurate rate of increase in vigilance or we'll die. "Security" implies that we (i.e. our descendants) can relax at some point (as the minimum IQ it takes to kill everyone drops further and further). I think this is delusional, and Bostrom says as much in the Vulnerable World Hypothesis (VWH).
I think the idea that we'd obtain xsec is unnecessarily utopian, and very misleading.
Instead of xsec summed over the whole future, zero in on the next 1-3 generations, and pour your trust into induction
Obtaining xsec seems like something you don't just do for your grandkids, or for the 22nd century, but for all the centuries in the future.
I think this is too tall an order. Instead of trying something that's too hard and that we're sure to fail at, we should initialize a class or order of protectors who zero in on getting their first 1-3 successor generations to make it.
In math/computing, we reason about infinite structures (like the whole numbers) by asking what we know about "the base case" (i.e., zero) and by asking what we know about constructions assuming we already know stuff about the ingredients to those constructors (i.e., we would like what we know about n to be transformed into knowledge about n+1). This is the way I'm thinking about how we can sort of obtain xsec, just not all at once. There are no actions we can take to obtain xsec for the 25th century, but if every generation 1. protects their own kids, grandkids, and great-grandkids, and 2. trains and incubates a protector order from among the peers of their kids, grandkids, and great-grandkids, then overall the 25th century is existentially secure.
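As a toy sketch of that framing (purely illustrative; the `step_holds` predicate and the generation numbering are made up):

```python
# Toy illustration of the induction framing (not a claim about the real world):
# if generation 0 is protected, and every protected generation trains a protector
# order that protects its successor, then any later generation is protected.

def generation_is_protected(n: int, base_protected: bool, step_holds) -> bool:
    """Generation n is protected iff the base case holds and the inductive
    step (k protected -> k+1 protected) holds for every k < n."""
    protected = base_protected                    # base case: our own kids/grandkids
    for k in range(n):
        protected = protected and step_holds(k)   # inductive step at generation k
    return protected

# Hypothetical assumption: each generation successfully incubates its successor
# protector order (i.e., the step always holds).
always_holds = lambda k: True

# "The 25th century" as, say, generation 12 from now (toy numbering).
print(generation_is_protected(12, base_protected=True, step_holds=always_holds))  # True
```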
Yes, the realities of value drift make it really hard to simply trust induction to work. But I think it's a much better bet than searching for actions you can take to directly impact arbitrary centuries.
I think when scifi like Dune or Foundation reasoned about this, there was a sort of intergenerational lock-in: people are born into the order, they have destinies and fates and so on; whereas I think in real life people can opt in and opt out of it. (But I think the 0 IQ approach to this is to just have kids of your own and indoctrinate them, which may or may not even work.)
But overall, I think the argument that accumulating cultural wisdom among cosmopolitans, altruists, whomever is the best lever we have right now is very reasonable (especially if you take seriously the idea that we're in the alchemy era of longtermism).
open problems in the law of mad science
The law of mad science (LOMS) states that the minimum IQ needed to destroy the world drops by x points every y years.
My sense from talking to my friend in biorisk and honing my views of algorithms and the GPU market is that it is wise to heed this worldview. It's sort of like the vulnerable world hypothesis (Bostrom 2019), but a bit stronger. VWH just asks "what if nukes but cost a dollar and fit in your pocket?", whereas LOMS goes all the way to "the price and size of nukes is in fact dropping".
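Here's what even a toy quantitative version looks like (the starting threshold, step size, dropping time, and the normal-IQ assumption are all placeholders I made up, not part of any stated version of the LOMS):

```python
from math import erfc, sqrt

def min_iq_to_destroy_world(t_years: float, iq0: float = 180.0,
                            x: float = 1.0, y: float = 10.0) -> float:
    """Toy LOMS: the threshold starts at iq0 and drops x points every y years.
    iq0, x, and y are placeholder numbers."""
    return iq0 - x * (t_years / y)

def fraction_above(iq_threshold: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Fraction of a Normal(mean, sd) population above the threshold."""
    return 0.5 * erfc((iq_threshold - mean) / (sd * sqrt(2)))

for t in (0, 100, 300, 600):
    threshold = min_iq_to_destroy_world(t)
    print(t, round(threshold, 1), f"{fraction_above(threshold):.2e}")
```

Even with these gentle made-up parameters, the fraction of people over the threshold grows by orders of magnitude over a few centuries.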
I also think that the LOMS is vague and imprecise.
I'm basically confused about a few obvious considerations that arise when you begin to take the LOMS seriously:
1. Are x (step size) and y (dropping time) fixed from empiricism to extinction? This is about as plausible as P = NP; obviously Alhazen (or an xrisk community contemporaneous with Alhazen) didn't have to deal with the same step size and dropping time as Shannon (or an xrisk community contemporaneous with Shannon), but it needs to be argued.
2. With or without a proof of 1's falseness, what are step size and dropping time a function of? What are changes in step size and dropping time a function of?
3. Assuming my intuition that the answer to 2 is mostly economic growth, what is a moral way to reason about the tradeoffs between lifting people out of poverty and making the LOMS worse? Does the LOMS invite the xrisk community to join the degrowth movement?
4. Is the LOMS sensitive to population size, or to the relative consumption of different proportions of the population?
5. For fun, can you write a coherent scifi about a civilization that abolished the LOMS somehow? (This seems to be what Ord's gesture at "existential security" entails.) How about merely reversing its direction, or mere mitigation?
6. My first guess was that empiricism is the minimal civilizational capability that a planet-lifeform pair has to acquire before the LOMS kicks in. Is this true? Does it, in fact, kick in earlier or later? Is a statement of the form "the region between an industrial revolution and an information or atomic age is the pareto frontier of the prosperity/security tradeoff" on the table in any way?
While I'm not 100% sure there will be actionable insights downstream of these open problems, it's plausibly worth researching.
As far as I know, this is the original attribution.
We need an in-depth post on moral circle expansion (MCE), minoritarianism, and winning. I expect EA's MCE projects to be less popular than anti-abortion is in the US (37% say abortion ought to be illegal in all or most cases, while, for one example, veganism is at 6%). I guess the specifics of how the anti-abortion movement operated may be too in the weeds of contingent and peculiar pseudodemocracy (winning elections with less than half of the votes, securing judges, and so on), but it seems like we don't want to miss out on studying this. There may be insights.
While many EAs would (I think rightly) consider the anti-abortion people colleagues as MCE activists, some EAs may also (I think debatably) admire Republicans for their ruthless, shrewd, occasionally thuggish commitment to winning. Regarding the latter, I would hope to hear out a case for principles over policy preference, keeping our hands clean, refusing to compromise our integrity, and so on. I'm about 50:50 on where I'd expect to fall personally on the playing-fair-and-nice stuff. I guess it's a question of how much Republicans expect to suffer from the externalities of thuggishness, if we want to use them to reason about the price we're willing to put on our integrity.
Moreover, I think this "colleagues as MCE activists" stuff is under-discussed. When you steelman the anti-abortion movement, you assume that they understand multiplication as well as we do, and are making a difficult and unhappy tradeoff about the QALYs lost to abortions needed by pregnancies gone wrong, or to unclean black-market abortions, or what have you. I may feel like I oppose the anti-abortion people on multiplicationist/consequentialist grounds (I also just don't think reducing the incidence of disvaluable things by outlawing them is a reasonable lever), but things get interesting when I model them as understanding the tradeoffs they're making.
(To be clear, this isn't "EA writer, culturally coded as a Democrat for whatever college/lgbt/atheist reasons, is using a derogatory word like 'thuggish' to describe the outgroup". I'm alluding to empirical claims about how the structure of the government interacts with population density to create minority rule, and making a moral judgment about the norm-dissolving they fell back on when Obama appointed a judge.)
"(I also just don't think reducing incidence of disvaluable things by outlawing them is a reasonable lever)"

This is a pretty strong stance to take! Most people believe that it is reasonable to ban at least some disvaluable things, like theft, murder, fraud, etc., in an attempt to reduce their incidence. Even libertarians who oppose the existence of the state altogether generally think it will be replaced by some private alternative system which will effectively ban these things.
right, yeah, I think it's a fairly common conclusion regarding a reference class like drugs and sex work, but for a reference class like murder and theft it's a much rarer (harder to defend) stance.
I don't know if it's on topic for the forum to dive into all of my credences over all the claims and hypotheses involved here, I just wanted to briefly leak a personal opinion or inclination in OP.
Jamie Harris at Sentience Institute authored a report on "Social Movement Lessons From the US Anti-Abortion Movement" that may be of interest.
perfect, thanks!
CW: death
I'm imagining myself having a 6+ figure net worth at some point in a few years, and I don't know anything about how wills work.
Do EAs have hit-by-a-bus contingency plans for their net worths?
Is there something easy we can do to reduce the friction of the following process: ask five EAs with trustworthy beliefs and values to form a grantmaking panel in the event of my death. This grantmaking panel could meet for thirty minutes and make a weight allocation decision on the Giving What We Can app, or they could accept applications and run it that way, or they could make an investment decision that interprets my net worth as seed money for an ongoing fund; it would be up to them.
I'm assuming this is completely possible in principle: I solicit those five EAs, who have no responsibilities or obligations as long as I'm alive; if they agree, I get a lawyer to write up a will that describes everything.
If one EA has done this, the "template contract" would be available to other EAs to repeat it. Would it be worth lowering the friction of making this happen?
Related idea: I can hardcode a weight assignment for the Giving What We Can app into my will; surely a non-EA will-writing lawyer could wrap their head around this quickly. But is there a way to avoid soliciting the lawyer every time I want to update my weights in response to my beliefs and values changing while I'm alive?
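For concreteness, the kind of weight assignment I have in mind is just something like this (the fund names and numbers are placeholders):

```python
# Placeholder bequest allocation; fund names and weights are purely illustrative.
bequest_weights = {
    "global health fund": 0.4,
    "animal welfare fund": 0.3,
    "xrisk fund": 0.3,
}

def normalize(weights: dict) -> dict:
    """Rescale weights to sum to 1, so any single entry can be edited freely."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

assert abs(sum(normalize(bequest_weights).values()) - 1.0) < 1e-9
```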
On the face of it, the second idea sounds lower friction and almost as valuable as the first for most individuals.
Why have I heard about Tyson investing in lab-grown meat, but I haven't heard about big oil investing in renewables?
Tyson's basic insight here is not to identify as "an animal agriculture company". Instead, they identify as "a feeding people company". (Which happens to align with doing the right thing, conveniently!)
It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying "we're an oil company", when they could instead be going around saying "we're a powering-stuff company"? Being a powering-stuff company means you have fuel-source indifference!
I mean if you look at all the money they had to spend on disinformation and lobbying, isn't it insultingly obvious to say "just invest that money into renewable research and markets instead"?
Is there dialogue on this? Also, have any members of "big oil" in fact done what I'm suggesting, and I just didn't hear about it?
CC'd to lesswrong shortform
This happens quite widely to my knowledge and I've heard about it a lot (but I'm heavily involved in the climate movement so that makes sense). Examples:

BP started referring to themselves as "Beyond Petroleum" rather than "British Petroleum" over 20 years ago.

A report by Greenpeace found that, on average amongst a few "big oil" businesses, 63% of their advertising was classed as "greenwashing" while only approx. 1% of their total portfolios were renewable energy investments.

A Guardian article covering analysis by Client Earth, who are suing big oil companies for greenwashing.

A lawsuit by Client Earth got BP to retract some greenwashing adverts for being misleading.

https://www.lesswrong.com/posts/kq8CZzcPKQtCzbGxg/quinn-s-shortform?commentId=yLG8yWWHhuTKLbdZA seems like an "I just didn't hear about it" kind of thing
Another CCing of something I said on discord to shortform
If I were in comms at Big EA, I think I'd just say "EAs are people who like to multiply stuff" and call it a day.
I think the principle that is both 1. as small as possible and 2. shared as widely among EAs as possible is just "multiplication is morally and epistemically sound".
It just seems to me like the most upstream thing.
That's the post.
cool projects for evaluators
Find a Nobel prizewinner and come up with a more accurate distribution of Shapley points.
The Norman Borlaug biography (the one by Leon Hesser) really drove home for me that, in this case, there was a whole squad behind the Nobel Prize, but only one guy got the prize. Tons of people moved through the Rockefeller Foundation and institutions in Mexico to lay the groundwork for the Green Revolution; Borlaug was the real deal, but history should also appreciate his colleagues.
It'd be awesome if evaluators could study high-impact projects and come up with Shapley point allocations. It'd really outperform the simple prizes approach.
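As a toy illustration of what a Shapley point allocation even is (the players and the "impact" numbers below are entirely made up, not an actual evaluation of the Green Revolution):

```python
from itertools import permutations

def shapley_values(players, value):
    """Brute-force Shapley values: average each player's marginal contribution
    over all orderings in which the coalition could have been assembled."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: totals[p] / len(orderings) for p in players}

# Hypothetical characteristic function for a Green Revolution style project:
# the breeder alone achieves a little, but breeder + funder + host institutions
# together achieve a lot. All numbers are made up for illustration.
impact = {
    frozenset(): 0,
    frozenset({"breeder"}): 10,
    frozenset({"funder"}): 0,
    frozenset({"host"}): 0,
    frozenset({"breeder", "funder"}): 40,
    frozenset({"breeder", "host"}): 35,
    frozenset({"funder", "host"}): 5,
    frozenset({"breeder", "funder", "host"}): 100,
}

print(shapley_values(["breeder", "funder", "host"], lambda s: impact[s]))
```

Each player's score is their marginal contribution averaged over every order in which the team could have been assembled, so the credit sums exactly to the value of the full team.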
Thanks to the discord squad (EA Corner) who helped with this.
Casual, not-resolvable-by-bet prediction:
Basically, EA is going to splinter into "trying to preserve permanent counterculture" and "institutionalizing".
I wrote yesterday about "the borg property", that we shift like the sands in response to arguments and evidence, which amounts to assimilating critics into our throngs.
As a premise, there exists a basic march of subcultures from counterculture to institution: abolitionists went from wildly unpopular to champions of commonsense morality over the course of some hundreds of years; I think feminism is reasonably institutionalized now but had countercultural roots, let's say 150 years. Drugs from weed to hallucinogens have counterculture roots, and are still a little counterculture, but may not always be. BLM has gotten way more popular over the last 10 years.
But the borg property seems to imply that we'll not ossify (into, to begin a metaphor-torturing sequence: rocks) enough to follow that march, not entirely. Rocks turn into sand via erosion; we should expect bottlenecks to reverse erosion (sand turning into rocks), i.e. the constant shifting of the dunes with the wind.
Consequentialist cosmopolitans, rats, people who like to multiply stuff, whoever else, may have to rebrand if institutionalized EA got too hegemonic, and I've heard a claim that this is already happening in the "rats who aren't EAs" scene in the Bay: there are ambitious rats who think the Ivy League & Congress strategy is a huge turn-off.
Of interest is the idea that we may live in a world where "serious careerists who agree with leadership about PR are the only people allowed in the Moskovitz, Tuna, SBF ecosystems"; perhaps this is a cue from the Koch or Thiel ecosystems (perhaps not: I don't really know how they operate). Now, the core branding of EA may align itself with that careerism ecosystem, or it may align itself with higher-variance stuff. I'm uncertain what will happen; I only expect splintering, not any particular proposition about who lands where.
Expected and obligate citation.
Ok, maybe a little resolvable by bet
A manifold market could look like "will there exist charities founded and/or staffed by people who were high-engagement EAs for a number of years before starting these projects, but are not endorsed by EA's billionaires". This may capture part of it.
post idea: based on interviews, profile scenarios from software (exploit discovery, responsible disclosure, coordination of patching, etc.) and try to analyze them with an aim toward understanding what good infohazard protocols would look like.
(I have a contact who was involved with a big patch, if someone else wants to tackle this reach out for a warm intro!)
Don't Look Up might be one of the best mainstream movies for the xrisk movement. Eliezer said it's too on the nose to bear/warrant actually watching. I fully expect to write a review about xrisk movement building for the EA Forum and LessWrong.
One brief point against Left EA: solidarity is not altruism.
low-effort shortform: do ping back to here if you steal these ideas for a more effortful post
It has been said in numerous places that leftism and effective altruism owe each other some relationship, stemming from common goals and so on. In this shortform, I will sketch one way in which this is misguided.
I will be ignoring cultural/social effects, like bad epistemics, because I think bad epistemics are a contingent rather than necessary feature of the left.
Solidarity appeals to skin-in-the-game. Class awareness is good for teaming up with your colleague to bargain for higher wages, but it's literally orthogonal to cosmopolitanism/impartiality. Two objections are mutual aid and some form of "no, actually, leftism is cosmopolitanism".

Under mutual aid, at least as it was taught at the Philly Food Not Bombs chapter back in my sordid past, we observe the hungry working alongside the fed to feed even more of the hungry: you can coalition across the hierarchical barrier between charitable action and skin in the game, or reject the barrier flatly. While this lesson works great for meals or needle exchanges, I'm skeptical about how well it generalizes even to global poverty, to say nothing of animals or the unborn.

The other objection, that leftism actually is cosmopolitan, only really makes sense to the thought-leaders of leftism and is dissonant with theories of change that involve changing ordinary peoples' minds (which is most theories of change). A common pattern for leftist intellectuals is "we have to free the whole world from the shackles of capitalism; working-class consciousness shows people that they can fight to improve their lot" (or some flavor of "think global, act local"). It is always the intellectual who's thinking about that highfalutin improving of the lot of others, while the pleb rank and file is only asked to advocate for themselves. Instead, EAs should be honest: we do not fight via skin in the game, we fight via caring about others, and EA thought leaders and EA rank and file should be on the same page about this. This is elitist only to the staunchest horizontalist. (However, while I think we defer to standpoint epistemology only sparingly, for good reason, it's very plausible that it has its moments to shine, and plausible that we currently don't do standpoint epistemology enough, but that's getting a bit afield.)
idea: taboo "community building", say "capacity building" instead.
https://en.wikipedia.org/wiki/Capacity_building
Why?
We need a name for the following heuristic, I think. I think of it as one of those "tribal knowledge" things that gets passed on like an oral tradition without being citeable, in the sense of being part of a literature. If you come up with a name I'll certainly credit you in a top-level post!
I heard it from Abram Demski at AISU'21.
Suppose you're either going to end up in world A or world B, and you're uncertain about which one it's going to be. Suppose you can pull lever LA, which will be worth 100 if you end up in world A, or you can pull lever LB, which will be worth 100 if you end up in world B. The heuristic is that if you pull LA but end up in world B, you do not want to have created disvalue; in other words, your intervention conditional on the belief that you'll end up in world A should not screw you over in timelines where you end up in world B.
This can be fully mathematized by saying "if most of your probability mass is on ending up in world A, then obviously you'd pick a lever L such that V(L|A) is very high; just also make sure that V(L|B) >= 0, or that it creates an acceptably small amount of disvalue", where V(L|A) is read "the value of pulling lever L if you end up in world A".
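A minimal sketch of that rule in code (the lever names and payoff numbers are placeholders):

```python
def pick_lever(levers, p_world_a, min_value_in_b=0.0):
    """Among levers that don't create (more than acceptably small) disvalue in
    world B, pick the one with the highest expected value.
    Each lever is a tuple (name, value_if_A, value_if_B)."""
    admissible = [l for l in levers if l[2] >= min_value_in_b]
    if not admissible:
        return None  # every option risks screwing over the B timelines
    return max(admissible,
               key=lambda l: p_world_a * l[1] + (1 - p_world_a) * l[2])

levers = [
    ("LA",      100, -40),  # great if A, actively harmful if B
    ("LA_safe",  80,   0),  # nearly as good if A, harmless if B
    ("LB",        0, 100),
]
print(pick_lever(levers, p_world_a=0.8))  # -> ("LA_safe", 80, 0)
```

With 80% of the probability mass on A, the raw expected-value maximizer would happily take the lever that's harmful in B; the V(L|B) constraint is what rules it out.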
Is there an econ major or geek out there who would like to
something like 5 hours / week, something like $20-40 /hr
(EA Forum DMs / quinnd@tutanota.com / disc @quinn#9100)
I'm aware that there are contractor-coordinating services for each of these asks, I just think it'd be really awesome to have one person to do both and to keep the money in the community, maybe meet a future collaborator!
What's the latest on moral circle expansion (MCE) and political circle expansion (PCE)?

Were slaves excluded from the moral circle in ancient Greece or the US antebellum south, and how does this relate to their exclusion from the political circle?

If AIs could suffer, is recognizing that capacity a slippery slope toward giving AIs the right to vote?

Can moral patients be political subjects, or must political subjects be moral agents? If there were some tipping point or avalanche of moral concern for chickens, that wouldn't imply arguments for political representation of chickens, right?

Consider pre-suffrage women, or contemporary children: they seem fully admitted into the moral circle, but only barely admitted to the political circle.

A critique of MCE is that history is not one march of worse to better (smaller to larger); there are in fact false starts, moments of retrograde, etc. Is PCE the same, but even more so?
If I must make a really bad first approximation, I would say a rubber band is attached to the moral circle, and on the other end of the rubber band is the political circle, so when the moral circle expands it drags the political circle along with it on a delay, modulo some metaphorical tension and inertia. This rubber band model seems informative in the slave case, but uselessly wrong in the chickens case, and it points to some, I think, very real possibilities in the AI case.
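If I were forced to caricature the rubber band in code (the lag and tension parameters are made up; this is a toy, not a model of history):

```python
def simulate_rubber_band(moral_circle, delay=3, tension=0.3):
    """Toy rubber-band model: each step, the political circle closes a fraction
    (`tension`) of the gap to where the moral circle was `delay` steps ago."""
    political = [moral_circle[0]]
    for t in range(1, len(moral_circle)):
        target = moral_circle[max(0, t - delay)]
        gap = target - political[-1]
        political.append(political[-1] + tension * gap)
    return political

# Moral circle expands in a couple of jumps (arbitrary units of "who counts").
moral = [1] * 5 + [2] * 10 + [3] * 10
print([round(p, 2) for p in simulate_rubber_band(moral)])
```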