This is a special post for quick takes by Eevee🔹. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Epistemic status: preliminary take, likely not considering many factors.

I'm starting to think that economic development and animal welfare go hand in hand. Since the end of the COVID pandemic, the plant-based meat industry has declined in large part because consumers' disposable incomes declined (at least in developed countries). It's good that GFI and others are trying to achieve price parity with conventional meat. However, finding ways to increase disposable incomes (or equivalently, reduce the cost of living) will likely accelerate the adoption of meat substitutes, even if price parity isn't reached.

the plant-based meat industry has declined in large part because consumers' disposable incomes declined (at least in developed countries)


Do you have a source for this? Median real disposable income is growing in the US, as is meat consumption; people are buying more and more meat as they get richer, even in developed countries: https://www.vox.com/future-perfect/386374/grocery-store-meat-purchasing

5
trammell
My understanding is that the consumption of essentially all animal products seems to increase in income at the country level across the observed range, whether or not you control for various things. See the regression table on slide 7 and the graph of "implied elasticity on income" on slide 8 here. I'm not seeing the paper itself online anywhere, but maybe reach out to Gustav if you're interested.
4
Tym 🔸
Status: recollection of past reading on meat consumption elasticity a while ago, plus some Claude fact-checking. AFAIK, at least in many developing economies (which collectively hold at least 70% of the human population), an increase in disposable incomes leads to an increase in meat consumption. I think the net effect in developed countries is the same: plant-based meat consumption goes up, but simultaneously the lower-income members of society eat more meat. Most of this increase in meat consumption relies on the cheapest meat, factory-farmed chicken in particular, so I'm not sure I agree on the symbiosis here. However, Sonnet 3.5 says that insect consumption broadly decreases with economic development, so a weaker version of your claim could be closer to the truth.

I've heard from women I know in this community that they are often shunted into low-level or community-building roles rather than object-level leadership roles. Does anyone else have knowledge about and/or experience with this?

6
harfe
Could you expand a bit on what this would look like? How are they being "shunted", and what kind of roles are low-level roles? (E.g. your claim could be that the average male EA CS student is much less likely to hear "You should change from AI safety to community-building" than female EA CS students.)
4
Chris Leong
Ironically, I think one of the best ways to address this is more movement building. Lots of groups provide professional training to their movement builders and more of this (in terms of AI/AI safety knowledge) would reduce the chance that someone who could and wants to do technical work gets stuck in a community building role.

I'm concerned about the new terms of service for Giving What We Can, which will go into effect after August 31, 2024:

6.3 Feedback. If you provide us with any feedback or suggestions about the GWWC Sites or GWWC’s business (the “Feedback”), GWWC may use the Feedback without obligation to you, and you irrevocably assign to GWWC all right, title, and interest in and to the Feedback. (emphasis added)

This is a significant departure from the Effective Ventures' TOS (GWWC is spinning out of EV), which has users grant EV an unlimited but non-exclusive license to use feedback or suggestions they send, while retaining the right to do anything with it themselves. I've previously talked to GWWC staff about my ideas to help people give effectively, like a donation decision worksheet that I made. If this provision goes into effect, it would deter me from sharing my suggestions with GWWC in the future because I would risk losing the right to disseminate or continue developing those ideas or materials myself.

Thank you for raising this!

After your email last week, we agreed to edit that section and copy EV's terms on Feedback. I've just changed the text on the website.

We only removed the part about "all Feedback we request from you will be collected on an anonymous basis", as we might want to collect non-anonymous feedback in the future.

If anyone else has any feedback, make sure to also send us an email (like Eevee did) as we might miss things on the EA Forum.

9
Matt_Sharp
Strong upvote for bothering to read the terms and conditions!

A hack to multiply your donations by up to 102%

Disclaimer: I'm a former PayPal employee. The following statements are my opinion alone and do not reflect PayPal's views. Also, this information is accurate as of 2024-10-14 and may become outdated in the future.

More donors should consider using PayPal Giving Fund to donate to charities. To do so, go to this page, search for the charity you want, and donate through the charity's page with your PayPal account. (For example, this is GiveDirectly's page.)

PayPal covers all processing fees on charitable donations made through their giving website, so you don't have to worry about the charity losing money to credit card fees. If you use a credit card that gives you 1.5 or 2% cash back (or 1.5-2x points) on all purchases, your net donation will be multiplied by ~102%. I don't know of any credit cards that offer elevated rewards for charitable donations as a category (like many do for restaurants, groceries, etc.), so you most likely can't do better than a 2% card for donations (unless you donate stocks).
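To see where the ~102% figure comes from, here's a hypothetical sketch; the fee and cash-back rates below are illustrative assumptions, not actual PayPal or card terms.

```python
# Hypothetical illustration of the donation multiplier: the charity
# receives (1 - fee) of the amount charged, while cash back refunds part
# of your out-of-pocket cost. All rates are example values, not real terms.

def net_donation_multiplier(processing_fee_rate: float, cashback_rate: float) -> float:
    """Fraction of your net out-of-pocket cost that reaches the charity."""
    return (1 - processing_fee_rate) / (1 - cashback_rate)

# Typical card-processing fee (~2.9%), no cash back:
print(round(net_donation_multiplier(0.029, 0.0), 3))   # ≈ 0.971

# Fee covered by the platform, 2% cash-back card:
print(round(net_donation_multiplier(0.0, 0.02), 3))    # ≈ 1.02
```

With the platform covering fees, a 2% card turns a $98 net cost into a $100 donation, which is roughly the 102% multiplier mentioned above.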

For political donations, platforms like ActBlue and Anedot charge the same processing fees to organizations regardless of what payment metho... (read more)

4
Jeff Kaufman 🔸
Thanks for the reminder! I used to do this before EA Giving Tuesday and should probably start doing it again.
2
Eevee🔹
Fwiw, there are ways to get more than 2% cash back:

  • Citi Double Cash and Citi Rewards+: you get 10% points back when redeeming points with the Rewards+ card, so if you "pool" the reward accounts together you can get effectively ~2.22% back on donations made with the Double Cash.
  • A number of credit cards give unlimited 3-4% cash back on all purchases, but there's usually a catch.

Not sure who to alert to this, but: when filling out the EA Organization Survey, I noticed that one of the fields asks for a date in DD/MM/YYYY format. As an American this tripped me up and I accidentally tried to enter a date in MM/DD/YYYY format because I am more used to seeing it.

I suggest using the ISO 8601 (YYYY-MM-DD) format on forms that are used internationally to prevent confusion, or spelling out the month (e.g. "1 December 2023" or "December 1, 2023").

4
Lorenzo Buonanno🔸
I think it's probably best to alert whoever sent you the survey; I wouldn't rely on them noticing quick takes on the EA Forum

Asking for a friend - there's no dress code for EAG, right?

5
Cillian_
I reached out to the events team and they sent me this link :)
5
alex lawsen
I've seen people wear a very wide range of things at the EAGs I've been to.

Are there currently any safety-conscious people on the OpenAI Board?

The current board is:

  • Bret Taylor (chair): Co-created Google Maps, ex-Meta CTO, ex-Twitter Chairperson, current co-founder of Sierra (AI company)
  • Larry Summers: Ex U.S. Treasury Secretary, Ex Harvard president
  • Adam D'Angelo: Co-founder, CEO Quora
  • Dr. Sue Desmond-Hellmann: Ex-director P&G, Meta, Bill & Melinda Gates; Ex-chancellor UCSF. Pfizer board member
  • Nicole Seligman: Ex-Sony exec, Paramount board member
  • Fidji Simo: CEO & Chair Instacart, Ex-Meta VP
  • Sam Altman
  • Also, Microsoft are allowed to observe board meetings

The only people here who even have rumours of being safety-conscious (AFAIK) are Adam D'Angelo, who allegedly played a role in kickstarting last year's board incident, and Sam, who has contradicted a great deal of his rhetoric with his actions. God knows why Larry Summers is there (to give it an air of professionalism?); the rest seem to me like your typical professional board members (i.e. unlikely to understand OpenAI's unique charter & structure). In my opinion, any hope of restraint from this board or OpenAI's current leadership is misplaced.

Okay, so one thing I don't get about "common sense ethics" discourse in EA is, which common sense ethical norms prevail? Different people even in the same society have different attitudes about what's common sense.

For example, pretty much everyone agrees that theft and fraud in the service of a good cause - as in the FTX case - is immoral. But what about cases where the governing norms are ambiguous or changing? For example, in the United States, it's considered customary to tip at restaurants and for deliveries, but there isn't much consensus on when and how much to tip, especially with digital point-of-sale systems encouraging people to tip in more situations. (Just as an example of how conceptions of "common sense ethics" can differ: I just learned that apparently, you're supposed to tip the courier before you get a delivery now, otherwise they might refuse to take your order at all. I've grown up believing that you're supposed to tip after you get service, but many drivers expect you to tip beforehand.) You're never required to tip as a condition of service, so what if you just never tipped and always donated the equivalent amount to highly effective charities instead? That sou... (read more)

Crazy idea: A vegan hot dog eating contest

2
Tobias Häberli

Content warning: Israel/Palestine

Has there been research on what interventions are effective at facilitating dialogue between social groups in conflict?

I remember an article about how during the last Israel-Gaza flare-up, Israelis and Palestinians were using the audio chatroom app Clubhouse to share their experiences and perspectives. This was portrayed as a phenomenon that increased dialogue and empathy between the two groups. But how effective was it? Could it generalize to other ethnic/religious conflicts around the world?

Although focused on civil conflicts, Lauren Gilbert's shallow investigation explores some possible interventions in this space, including:

  • Disarmament, Demobilization, and Reintegration (DDR) Programs 
  • Community-Driven Development
  • Cognitive Behavioral Therapy
  • Cash Transfers and/or Job Training
  • Alternative Dispute Resolution (ADR)
  • Contact Interventions and Mass Media
  • Investigative Journalism
  • Mediation and Diplomacy
8
Julia_Wise🔸
Copenhagen Consensus has some older work on what might be cost-effective for preventing armed conflicts, like this paper.
4
EdoArad
Joshua Greene recently came to Israel to explore extending their work on bridging the Republican-Democrat divide in the US to the Israel-Palestine conflict. A 2020 video here.
2
Jamie_Harris
There's psychological research finding that both "extended contact" interventions and interventions that "encourage participants to rethink group boundaries or to prioritize common identities shared with specific outgroups" can reduce prejudice, so I can imagine the Clubhouse stuff working (and being cheap + scalable). https://forum.effectivealtruism.org/posts/re6FsKPgbFgZ5QeJj/effective-strategies-for-changing-public-opinion-a#Prejudice_reduction_strategies

Crazy idea: When charities apply for funding from foundations, they should be required to list 3-5 other charities they think should receive funding. Then, the grantmaker can run a statistical analysis to find orgs that are mentioned a lot and haven't applied before, reach out to those charities, and encourage them to apply. This way, the foundation can get a more diverse pool of applicants by learning about charities outside their network.

3
Marjolein Oostrom
Great idea!

Maybe EA philanthropists should invest more conservatively, actually

The pros and cons of unusually high risk tolerance in EA philanthropy have been discussed a lot, e.g. here. One factor that may weigh in favor of higher risk aversion is that nonprofits benefit from a stable stream of donations, rather than one that goes up and down a lot with the general economy. This is for a few reasons:

  • Funding stability in a cause area makes it easier for employees to advance their careers because they can count on stable employment. It also makes it easier for nonp
... (read more)
4
Jason
These are good arguments for providing stable levels of funding per year, but there are often ways to further that goal without materially dialing back the riskiness of one's investments (probable exception: crypto, because the swings can be so wild and because other EA donors may be disproportionately in crypto). One classic approach is to set a budget based on a rolling average of the value of one's investments -- for universities, that is often a rolling three-year average, but it apparently goes back much further than that at Yale using a weighted-average approach. And EA philanthropists probably have more flexibility on this point than universities, whose use of endowments is often constrained by applicable law related to endowment spending.

April Fools' Day is in 11 days! Get yer jokes ready 🎶

I think we separate causes and interventions into "neartermist" and "longtermist" causes too much.

Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes[1], I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective. This is the opposite failure mode of surprising and suspicious convergence - instead of assuming (or rationalizing) that the spaces of interventions that are... (read more)

"Quality-adjusted civilization years"

We should be able to compare global catastrophic risks in terms of the amount of time they make global civilization significantly worse and how much worse it gets. We might call this measure "quality-adjusted civilization years" (QACYs), or the quality-adjusted amount of civilization time that is lost.

For example, let's say that the COVID-19 pandemic reduces the quality of civilization by 50% for 2 years. Then the QACY burden of COVID-19 is 0.5 × 2 = 1 QACY.

Another example: suppose climate change will reduce the quality of civilization by 80% for 200 years, and then things will return to normal. Then the total QACY burden of climate change over the long term will be 0.8 × 200 = 160 QACYs.

In the limit, an existential catastrophe would have a near-infinite QACY burden.
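The arithmetic in the examples above is just the quality reduction multiplied by its duration; a minimal sketch:

```python
def qacy_burden(quality_reduction: float, years: float) -> float:
    """Quality-adjusted civilization years lost: reduction x duration."""
    return quality_reduction * years

print(qacy_burden(0.5, 2))     # COVID-19 example: ~1 QACY
print(qacy_burden(0.8, 200))   # climate change example: ~160 QACYs
```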

I think we need to be careful when we talk about AI and automation not to commit the lump of labor fallacy. When we say that a certain fraction of economically valuable work will be automated at any given time, or that this fraction will increase, we shouldn't implicitly assume that the total amount of work being done in the economy is constant. Historically, automation has increased the size of the economy, thereby creating more work to be done, whether by humans or by machines; we should expect the same to happen in the future. (Note that this doesn't exclude the possibility of increasingly general AI systems performing almost all economically valuable work. This could very well happen even as the total amount of work available skyrockets.)

3
Hauke Hillebrandt
Also see a recent paper finding no evidence for the automation hypothesis: http://www.overcomingbias.com/2019/12/automation-so-far-business-as-usual.html

Utility of money is not always logarithmic

EA discussions often assume that the utility of money is logarithmic, but while this is a convenient simplification, it's not always the case. Logarithmic utility is a special case of isoelastic utility, a.k.a. power utility, in which the elasticity of marginal utility is η = 1. But η can be higher or lower. The most general form of isoelastic utility is the following:

u(c) = (c^(1−η) − 1) / (1 − η) for η ≠ 1, and u(c) = ln(c) for η = 1

Some special cases:

  • When η = 0, we get linear utility, or u(c) = c − 1.
  • When η = 1/2, we get the square root utility function, u(c) = 2(√c − 1).
  • When η = 1, we get the familiar logarithmic utility function, u(c) = ln(c).
  • For any η > 1, the utility function asymptotically approaches a constant as c approaches infinity. When η = 2, we get the utility function u(c) = 1 − 1/c.
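A quick numeric check of these cases, assuming the standard normalized isoelastic form u(c) = (c^(1−η) − 1)/(1−η), with u(c) = ln(c) at η = 1:

```python
import math

def isoelastic_utility(c: float, eta: float) -> float:
    """Isoelastic (power) utility of consumption c with elasticity eta."""
    if eta == 1:
        return math.log(c)                    # logarithmic special case
    return (c ** (1 - eta) - 1) / (1 - eta)

def marginal_utility_ratio(k: float, eta: float) -> float:
    """How much an extra dollar is worth at k-times-baseline consumption."""
    return k ** (-eta)

print(isoelastic_utility(4, 0.5))        # square-root case: 2*(sqrt(4)-1) = 2.0
print(marginal_utility_ratio(10, 1))     # 0.1 under log utility
print(marginal_utility_ratio(10, 2))     # 0.01 under eta = 2
```

Note how the marginal value of an extra dollar falls off faster under η = 2 than under log utility, which is the whole point of the comparison.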

η tells us how sharply marginal utility drops off with increasing consumption: if a person already has k times as much money as the baseline, then giving them an extra dollar is worth k^(−η) times as much. Empirical studies have found that η for most people is between 1 and 2. So if the average GiveDirect... (read more)

1
Charlie_Guthmann
see: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1096202
1
Marcel D
The ratio of (jargon+equations):complexity in this shortform seems very high. Wouldn't it be substantially easier to write and read to just use terms and examples like "a project might have a stair-step or high-threshold function: unless the project gets enough money, it provides no return on investment"? Or am I missing something in all the equations (which I must admit I don't understand)?
8
Eevee🔹
I'm basically saying that the logarithmic utility function, which is where we get the idea that doubling one's income from any starting point raises their happiness by the same amount, is a special case of a broader class of utility functions, in which marginal utility can decline faster or slower than in the logarithmic utility function.
4
Larks
  All of the maths here assumes smooth utility returns to money; there are no step functions or threshold effects. Rather, it discusses different possible curvatures.
1
Marcel D
I wasn't trying to imply that was the only possibility; I was just highlighting step/threshold functions as an example of how the utility of money is not always logarithmic. In short, I just think that if the goal of the post is to dispute that simplification, it doesn't need to be so jargon/equation heavy, and if one of the goals of the post is to also discuss different possible curvatures, it would probably help to draw a rough diagram that can be more easily understood.
7
Charles He
My fan fiction about what is going on in this thread:

A good guess is that "log utility" is being used by EAs for historical reasons (e.g. GiveWell's work) and is influenced by economics, where log is used a lot because it is extremely convenient. Economists don't literally believe people have log utility in income; it just makes equations work to show certain ideas.

It's possible that log utility actually is a really good approximation of welfare and income. But sometimes ideas or notions get codified/canonized inappropriately and accidentally, and math can cause this.

With the context above, my read is that evelynciara is trying to show that income might be even more important to poor people than believed. She's doing this in a sophisticated and agreeable way, by slightly extending the math. So her equations aren't a distraction or unnecessarily mathematical; it's exactly the opposite: she's protecting against math's undue influence.
1
Marcel D
I was hoping for a more dramatic and artistic interpretation of this thread, but I’ll accept what’s been given. In the end, I think there are three main audiences to this short form: 1. People like me who read the first sentence, think “I agree,” and then are baffled by the rest of the post. 2. People who read the first sentence, are confused (or think they disagree), then are baffled by the rest of the post. 3. People who read the first sentence, think “I agree,” are not baffled by the rest of the post and say “Yep, that’s a valid way of framing it.” In contrast, I don’t think there is a large group of people in category 4. Read the first sentence, think “I disagree,” then understand the rest of the post. But do correct me if I’m wrong!
2
Charles He
Well, I don't agree with this perspective and its premise. I guess my view is that it doesn't seem compatible for what I perceive as the informal, personal character of shortform (like, "live and let live") which is specifically designed to have different norms than posts.   I won't continue this thread because it feels like I'm supplanting or speaking for the OP.

testing - I renamed my shortform page

Nonprofit idea: YIMBY for energy

YIMBY groups in the United States (like YIMBY Action) systematically advocate for housing developments as well as rezonings and other policies to create more housing in cities. YIMBYism is an explicit counter-strategy to the NIMBY groups that oppose housing development; however, NIMBYism affects energy developments as well - everything from solar farms to nuclear power plants to power lines - and is thus an obstacle to the clean energy transition.

There should be groups that systematically advocate for energy projects (which are mostly in rural areas), borrowing the tactics of the YIMBY movement. Currently, when developers propose an energy project, they do an advertising campaign to persuade local residents of the benefits of the development, but there is often opposition as well.

I thought YIMBYs were generally pretty in favor of this already? (Though not generally as high a priority for them as housing.) My guess is it would be easier to push the already existing YIMBY movement to focus on energy more, as opposed to creating a new movement from scratch.

2
Eevee🔹
Yeah, I think that might be easier too. But YIMBY groups focus on housing in cities whereas most utility-scale energy developments are probably in suburbs or rural areas.
3
Daniel_Eth
Hmm, culturally YIMBYism seems much harder to do in suburbs/rural areas. I wouldn't be too surprised if the easiest ToC here is to pass YIMBY-energy policies on the state level, with most of the support coming from urbanites.  But sure, still probably worth trying.
2
Eevee🔹
Yeah, good point. Advocating for individual projects or rezonings is so time-consuming, even in the urban housing context.

I think an EA career fair would be a good idea. It could have EA orgs as well as non-EA orgs that are relevant to EAs (for gaining career capital or earning to give)

9
Kirsten
EA Global normally has an EA career fair, or something similar

One thing the EA community should try doing is multinational op-ed writing contests. The focus would be op-eds advocating for actions or policies that are important, neglected, and tractable (although the op-eds themselves don't have to mention EA); and by design, op-eds could be submitted from anywhere in the world. To make judging easier, op-eds could be required to be in a single language, but op-ed contests in multiple languages could be run in parallel (such as English, Spanish, French, and Arabic, each of which is an official language in at least 20 countries).

This would have two benefits for the EA community:

  • It would be a cheap way to spread EA-aligned ideas in multiple countries. Also, the people writing the op-eds would know more about the political climates of the countries for which they are publishing them than the organizers of the contest would, and we can encourage them to tailor their messaging accordingly.
  • It would also be a way to measure countries' receptiveness to EA ideas. For example, if there were multiple submissions about immigration policy, we could use them to compare the receptiveness of different countries to immigration reforms that would increase global well-being.
6
freedomandutility
I think this is a great idea. A related idea I had is a competition for "intro to EA" pitches because I don't currently feel like I can send my friends a link to a pitch that I'm satisfied with. A simple version could literally just be an EA forum post where everyone comments an "intro to EA" pitch under a certain word limit, and other people upvote / downvote. A fancier version could have a cash prize, narrowing down entries through EA forum voting, and then testing the top 5 through online surveys.  I think in a more general sense, we should create markets to incentivise and select persuasive writing on EA issues aimed at the public.
2
muskaan
That’s a great idea! I’ve been trying to find a good intro to EA talk for a while, and I recently came across the EA for Christians YouTube video about intro to EA; though it’s kinda leaning towards the religious angle, it seemed like a pretty good intro for a novice. Would love to hear your thoughts about that. Here’s the link: https://youtu.be/Unt9iHFH5-E

Possible outline for a 2-3 part documentary adaptation of The Precipice:

Part 1: Introduction & Natural Risks

  • Introduce the idea that we are in a time of unprecedented existential risk, but that the future could be very good (Introduction and Chapter 1)
  • Discuss natural risks (Chapter 3)
  • Argue that man-made risks are greater and use this to lead to the next episode (Chapter 3)

Part 2: Human-Made Risks

  • Well-known anthropogenic risks - nuclear war, climate change, other environmental damage (Chapter 4)
  • Emerging technological risks - pandemics, AI, dystopia (Chapter 5)
  • Existential risk and security factors (Chapter 6)

Part 3: What We Can Do

  • Discuss actions society can take to minimize its existential risk (Chapter 7)

What this leaves out:

  • Chapter 2 - mostly a discussion of the moral arguments for x-risk's importance. Can assume that the audience will already care about x-risk at a less sophisticated level, and focus on making the case that x-risk is high and we sort of know what to do about it.
  • The discussion of joint probabilities of x-risks in Chapter 6 - too technical for a general audience

Another way to do it would be to do an episode on each type of risk and what can be done about it, for ... (read more)

An idea I liked from Owen Cotton-Barratt's new interview on the 80K podcast: Defense in depth

If S, M, or L is any small, medium, or large catastrophe and X is human extinction, then the probability of human extinction is

P(X) = P(S) × P(M | S) × P(L | M) × P(X | L)

So halving any one of these factors (the probability of small disasters, the probability of a small disaster escalating into a medium-sized one, etc.) would halve the probability of human extinction.
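As a rough numeric illustration (the layer probabilities below are made up, not estimates): because the probability of extinction is a product of per-layer probabilities, halving any single factor halves the final result.

```python
def p_extinction(p_s: float, p_m_given_s: float,
                 p_l_given_m: float, p_x_given_l: float) -> float:
    """P(X) = P(S) * P(M|S) * P(L|M) * P(X|L): each layer must be breached."""
    return p_s * p_m_given_s * p_l_given_m * p_x_given_l

baseline = p_extinction(0.5, 0.1, 0.1, 0.1)
halved = p_extinction(0.25, 0.1, 0.1, 0.1)   # halve P(S) only
print(halved / baseline)                      # 0.5
```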

On the difference between x-risks and x-risk factors

I suspect there isn't much of a meaningful difference between "x-risks" and "x-risk factors," for two reasons:

  1. We can treat them the same in terms of probability theory. For example, if X is an "x-risk" and F is a "risk factor" for X, then P(X | F) > P(X | ¬F). But we can also say that P(F | X) > P(F | ¬X), because both statements are equivalent to P(X ∧ F) > P(X) · P(F). We can similarly speak of the total probability of an x-risk factor because of the law of total probability (e.g. P(F) = P(F | X) P(X) + P(F | ¬X) P(¬X)) like we can with an x-risk.
  2. Concretely, something can be both an x-risk and a risk factor. Climate change is often cited as an example: it could cause an existential catastrophe directly by making all of Earth unable to support complex societies, or indirectly by increasing humanity's vulnerability to other risks. Pandemics might also be an example, as a pandemic could either directly cause the collapse of civilization or expose humanity to other risks.
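The probability-theory point can be sanity-checked numerically; the joint probabilities below are made-up values for illustration only.

```python
# Joint distribution over an x-risk X and a factor F (made-up numbers):
# P(X and F), P(X and not F), P(F and not X), P(neither).
p_xf, p_x_only, p_f_only, p_neither = 0.05, 0.05, 0.20, 0.70

p_x = p_xf + p_x_only   # P(X) = 0.10
p_f = p_xf + p_f_only   # P(F) = 0.25

p_x_given_f = p_xf / p_f                # 0.2
p_x_given_not_f = p_x_only / (1 - p_f)  # ~0.067
p_f_given_x = p_xf / p_x                # 0.5
p_f_given_not_x = p_f_only / (1 - p_x)  # ~0.222

# The equivalent statements of "F is a risk factor for X" all agree:
print(p_x_given_f > p_x_given_not_f)   # True
print(p_f_given_x > p_f_given_not_x)   # True
print(p_xf > p_x * p_f)                # True (0.05 > 0.025)
```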

I think the difference is that x-risks are events that directly cause an existential catastrophe, such as exti... (read more)

I think your comment (and particularly the first point) has much more to do with the difficulty of defining causality than with x-risks.

It seems natural to talk about force causing the mass to accelerate: when I push a sofa, I cause it to start moving. But Newtonian mechanics can't capture causality, basically because the equality sign in F = ma lacks direction. Similarly, it's hard to capture causality in probability spaces.

Following Pearl, I have come to think that causality arises from the manipulator/manipulated distinction.

So I think it's fair to speak about factors only with relation to some framing:

  • If you are focusing on bio policy, you are likely to take great-power conflict as an external factor.
  • Similarly, if you are focusing on preventing nuclear war between India and Pakistan, you are likely to take bioterrorism as an external factor.

Usually, there are multiple external factors in your x-risk modeling. The most salient and undesirable ones are important enough to care about (and give a name to).

Calling bio-risks an x-factor makes sense formally; but doesn't make sense pragmatically because bio-risks are very salient (in our community) on their own because they are a canonica... (read more)

Status: Fresh argument I just came up with. I welcome any feedback!

Allowing the U.S. Social Security Trust Fund to invest in stocks like any other national pension fund would enable the U.S. public to capture some of the profits from AGI-driven economic growth.

Currently, and uniquely among national pension funds, Social Security is only allowed to invest its reserves in non-marketable Treasury securities, which are very low-risk but also provide a low return on investment relative to the stock market. By contrast, the Government Pension Fund of Norway (als... (read more)

3
Larks
It might be worthwhile reading about historical attempts to semi-privatize social security, which would have essentially created an opt-in version of your proposal, since individual people could then choose whether to have their share of the pot in bonds or stocks.

I think partnering with local science museums to run events on EA topics could be a great way to get EA-related ideas out to the public.

1
Ramiro
That's a pretty cool idea

Tentative thoughts on "problem stickiness"

When it comes to comparing non-longtermist problems from a longtermist perspective, I find it useful to evaluate them based on their "stickiness": the rate at which they will grow or shrink over time.

A problem's stickiness is its annual growth rate. So a problem has positive stickiness if it is growing, and negative stickiness if it is shrinking. For long-term planning, we care about a problem's expected stickiness: the annual rate at which we think it will grow or shrink. Over the long term - i.e. time frames of 50 years or more - we want to focus on problems that we expect to grow over time without our intervention, instead of problems that will go away on their own.

For example, global poverty has negative stickiness because the poverty rate has declined over the last 200 years. I believe its stickiness will continue to be negative, barring a global catastrophe like climate change or World War III.

On the other hand, farm animal suffering has not gone away over time; in fact, it has gotten worse, as a growing number of people around the world are eating meat and dairy. This trend will continue at least until alternative proteins become com

... (read more)
1
Charlie_Guthmann
Do you know if anyone else has written more about this? 

UK prime minister Rishi Sunak got some blowback for meeting with Elon Musk on Sky News to talk about existential AI safety, and that clip made it into this BritMonkey video criticizing the state of British politics. Starting at 1:10:57:

...the Prime Minister of the United Kingdom interviewing the richest man in the world, talking about AI in the context of the James Cameron Terminator films. I can barely believe I'm saying all of this.

Episodes 5 and 6 of Netflix's 3 Body Problem seem to have longtermist and utilitarian themes (content warning: spoiler alert)

  • In episode 5 ("Judgment Day"), Thomas Wade leads a secret mission to retrieve a hard drive on a ship in order to learn more about the San-Ti who are going to arrive on Earth in 400 years. The plan involves using an array of nanofibers to tear the ship to shreds as it passes through the Panama Canal, killing everyone on board. Dr. Auggie Salazar (who invented the nanofibers) is uncomfortable with this plan, but Wade justifies it in th
... (read more)
4
quinn
I loved Liu's trilogy because it makes longtermism seem commonsensical.