Lizka

Content Specialist @ Centre for Effective Altruism
Working (0-5 years experience)
10907 karma · Joined Nov 2019

Bio

I run the non-engineering side of the EA Forum (this platform), run the EA Newsletter, and work on some other content-related tasks at CEA. Please feel free to reach out! You can email me. [More about my job.]

Some of my favorites among my own posts:

I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I've since switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.

Some links I think people should see more frequently:

Sequences (5)

Forum Digest Classics
Forum updates and new features
Winners of the Creative Writing Contest
Winners of the First Decade Review
How to use the Forum

Comments (369)

Topic Contributions (227)

Lizka · 8d · Moderator Comment · 92

Coming back to this (a very quick update): we're going to start responding to anonymous posts and comments that make accusations or the like without evidence or corroboration, to flag that anyone could have written them and that readers should treat them with a healthy amount of skepticism. This is somewhat relevant to the policy outlined above, so I wanted to share it here.

Lizka · 10d · 60

Thanks for sharing this! I found it interesting to read about your process. In case someone wants to read a summary — Zoe has one.

Assorted highlights/insights I pulled out while reading:

  • Useful for engagement ("Drop-off across the six months of our tournament is around 75% (from ~160 to ~40 weekly active participants)"): prizes for the most informative rationales in each thematic “challenge” (every 3 weeks), and having some questions designed to resolve quite soon after opening to provide early feedback (although it's important to avoid making these distracting)
  • This was an interesting section: "The involvement of domain experts was useful especially to increase trust and prevent future public criticism"
  • "when the Russian invasion of Ukraine started, it became clear that many of the refugees would flee to the Czech Republic. Our forecasters quickly made a forecast, and we used it to create 2 scenarios (300k and 500k incoming refugees). We then used these scenarios in our joint analysis with PAQ Research on how to effectively integrate the Ukrainian immigrants. The study was then used by multiple Czech ministries in creating programs of support for housing, education, and employment. This happened at a time when widely circulated estimates spoke of tens of thousands of such arrivals by the end of 2022. In reality, it was over 430 thousand people."
Lizka · 11d · 200

Thanks for writing this! I'm curating it.

There are roughly two parts to the post:

  1. a sketch cost-benefit analysis (CBA) for whether the US should fund interventions reducing global catastrophic risk (roughly sections 2-4)
  2. an argument for why longtermists should push for a policy of funding all those GCR-reducing interventions that pass a cost-benefit analysis test and no more (except to the extent that a government should account for its citizens' altruistic preferences, which in turn can be influenced by longtermism)
    1. "That is because (1) unlike a strong longtermist policy, a CBA-driven policy would be democratically acceptable and feasible to implement, and (2) a CBA-driven policy would reduce existential risk by almost as much as a strong longtermist policy."

I think the second part presents more novel arguments for readers of the Forum, but the first part is an interesting exercise, and important to sketch out to make the argument in part two. 

Assorted thoughts below. 

1. A graph

I want to flag a graph from further into the post that some people might miss ("The x-axis represents U.S. lives saved (discounted by how far in the future the life is saved) in expectation per dollar. The y-axis represents existential-risk-reduction per dollar. Interventions to the right of the blue line would be funded by a CBA-driven catastrophe policy. The exact position of each intervention is provisional and unimportant, and the graph is not to scale in any case... "): 


2. Outlining the cost-benefit analysis

I do feel like a lot of the numbers used for the sketch CBA are hard to defend, but I get the sense that you're approaching those as givens, and then asking what e.g. people in the US government should do if they find the assumptions reasonable. At a brief skim, the support for "how much the interventions in question would reduce risk" seems to be the weakest (and I am a little worried about how this is approached — flagged below). 

I've pulled out some fragments that produce a ~BOTEC for the cost-effectiveness of a set of interventions from the US government's perspective (bold mine; I've sketched the arithmetic in code right after the list):

  1. A "global catastrophe" is an event that kills at least 5 billion people. The model assumes that each person’s risk of dying in a global catastrophe is equal.
  2. Overall risk of a global catastrophe: "Assuming independence and combining Ord’s risk-estimates of 10% for AI, 3% for engineered pandemics, and 5% for nuclear war gives us at least a 17% risk of global catastrophe from these sources over the next 100 years.[8] If we assume that the risk per decade is constant, the risk over the next decade is about 1.85%.[9] If we assume also that every person’s risk of dying in this kind of catastrophe is equal, then (conditional on not dying in other ways) each U.S. citizen’s risk of dying in this kind of catastrophe in the next decade is at least 1.85% × 5/9 ≈ 1.03% (since, by our definition, a global catastrophe would kill at least 5 billion people, and the world population is projected to remain under 9 billion until 2033). According to projections of the U.S. population pyramid, 6.88% of U.S. citizens alive today will die in other ways over the course of the next decade.[10] That suggests that U.S. citizens alive today have on average about a 1% risk of being killed in a nuclear war, engineered pandemic, or AI disaster in the next decade. That is about ten times their risk of being killed in a car accident.[11]"
    1. A lot of ink has been spilled on this, but I don't get the sense that there's a lot of agreement. 
  3. How much would a set of interventions cost: "We project that funding this suite of interventions for the next decade would cost less than $400 billion.[16]" — the footnote reads "The Biden administration’s 2023 Budget requests $88.2 billion over five years (The White House 2022c; U.S. Office of Management and Budget 2022). We can suppose that another five years of funding would require that much again. A Nucleic Acid Observatory covering the U.S. is estimated to cost $18.4 billion to establish and $10.4 billion per year to run (The Nucleic Acid Observatory Consortium 2021: 18). Ord (2020: 202–3) recommends increasing the budget of the Biological Weapons Convention to $80 million per year. Our listed interventions to reduce nuclear risk are unlikely to cost more than $10 billion for the decade. AI safety and governance might cost up to $10 billion as well. The total cost of these interventions for the decade would then be $319.6 billion."
  4. How much would the interventions reduce risk: "We also expect this suite of interventions to reduce the risk of global catastrophe over the next decade by at least 0.1pp (percentage points). A full defence of this claim would require more detail than we can fit in this chapter, but here is one way to illustrate the claim’s plausibility. Imagine an enormous set of worlds like our world in 2023. ... We claim that in at least 1-in-1,000 of these worlds the interventions we recommend above would prevent a global catastrophe this decade. That is a low bar, and it seems plausible to us that the interventions above meet it."
    1. This seems under-argued. Without thinking too long about this, it's probably the point in the model that I'd want to see more work on. 
    2. I also worry a bit that collecting interventions like this (and estimating cost-effectiveness for the whole bunch instead of individually) leads to issues like: funding interventions that aren't cost-effective simply because they're part of the group, or failing to fund the interventions that account for the bulk of the risk reduction because the group advocating for the package achieves only a partial success that drops some particularly useful intervention (e.g. funding AI safety research), etc.
  5. The value of a statistical life (VSL) (the value of saving one life in expectation via small reductions in mortality risks for many people): "The primary VSL figure used by the U.S. Department of Transportation for 2021 is $11.8 million, with a range to account for various kinds of uncertainty spanning from about $7 million to $16.5 million (U.S. Department of Transportation 2021a, 2021b)." (With a constant annual discount rate.) (Discussed here.)
  6. Should the US fund these interventions? (Yes)
    1. "given a world population of less than 9 billion and conditional on a global catastrophe occurring, each American’s risk of dying in that catastrophe is at least 5/9. Reducing GCR this decade by 0.1pp then reduces each American’s risk of death this decade by at least 0.055pp. Multiplying that figure by the U.S. population of 330 million, we get the result that reducing GCR this decade by 0.1pp saves at least 181,500 American lives in expectation. If that GCR-reduction were to occur this year, it would be worth at least $1.27 trillion on the Department of Transportation’s lowest VSL figure of $7 million. But since the GCR-reduction would occur over the course of a decade, cost-benefit analysis requires that we discount. If we use OIRA’s highest annual discount rate of 7% and suppose (conservatively) that all the costs of our interventions are paid up front while the GCR-reduction comes only at the end of the decade, we get the result that reducing GCR this decade by 0.1pp is worth at least $1.27 trillion / 1.07^10 ≈ $646 billion. So, at a cost of $400 billion, these interventions comfortably pass a standard cost-benefit analysis test.[20] That in turn suggests that the U.S. government should fund these interventions. Doing so would save American lives more cost-effectively than many other forms of government spending on life-saving, such as transportation and environmental regulations. In fact, we can make a stronger argument. Using a projected U.S. population pyramid and some life-expectancy statistics, we can calculate that approximately 79% of the American life-years saved by preventing a global catastrophe in 2033 would accrue to Americans alive today in 2023 (Thornley 2022). 79% of $646 billion is approximately $510 billion. That means that funding this suite of GCR-reducing interventions is well worth it, even considering only the benefits to Americans alive today.[21]"
    2. (The authors also flag that this pretty significantly underrates the cost-effectiveness of the interventions, etc. by not accounting for the fact that the interventions also decrease the risks from smaller catastrophes and by not accounting for the deaths of non-US citizens.)
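
In case it's useful, here's a rough sketch of that arithmetic in Python. It just plugs in the authors' numbers as quoted above (it's not their code, and the 0.055pp per-American figure follows the chapter's rounding):

```python
# A rough sketch of the chapter's BOTEC, using the authors' numbers as quoted above.

risk_ai, risk_bio, risk_nuclear = 0.10, 0.03, 0.05     # Ord's 100-year risk estimates
risk_century = 1 - (1 - risk_ai) * (1 - risk_bio) * (1 - risk_nuclear)
risk_decade = 1 - (1 - risk_century) ** (1 / 10)        # assuming constant risk per decade

# A "global catastrophe" kills at least 5 billion of a sub-9-billion population,
# so each person's chance of dying, conditional on a catastrophe, is at least 5/9.
per_person_risk = risk_decade * 5 / 9                   # roughly 1% per person per decade
print(f"Catastrophe risk: {risk_century:.0%}/century, {risk_decade:.2%}/decade, "
      f"~{per_person_risk:.1%} per person per decade")

# Value to the US of a 0.1pp reduction in decadal catastrophe risk
per_american_reduction = 0.00055                        # 0.1pp * 5/9, rounded to 0.055pp in the chapter
lives_saved = per_american_reduction * 330e6            # ~181,500 expected American lives
value_now = lives_saved * 7e6                           # DoT's lowest VSL figure: $7 million
value_discounted = value_now / 1.07**10                 # 7% discount, benefits at end of decade
print(f"Lives saved: {lives_saved:,.0f}; value: ${value_now/1e12:.2f}T undiscounted, "
      f"${value_discounted/1e9:.0f}B discounted, vs. ~$319.6B-$400B in costs")
```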

3. Some excerpts from the argument about what longtermists should advocate for that I found insightful or important 

  1. "getting governments to adopt a CBA-driven catastrophe policy is not trivial. One barrier is psychological (Wiener 2016). Many of us find it hard to appreciate the likelihood and magnitude of a global catastrophe. Another is that GCR-reduction is a collective action problem for individuals. Although a safer world is in many people’s self-interest, working for a safer world is in few people’s self-interest. Doing so means bearing a large portion of the costs and gaining just a small portion of the benefits.[28] Politicians and regulators likewise lack incentives to advocate for GCR-reducing interventions (as they did with climate interventions in earlier decades). Given widespread ignorance of the risks, calls for such interventions are unlikely to win much public favour. / However, these barriers can be overcome."
  2. "getting the U.S. government to adopt a CBA-driven catastrophe policy would reduce existential risk by almost as much as getting them to adopt a strong longtermist policy. This is for two reasons. The first is that, at the current margin, the primary goals of a CBA-driven policy and a strong longtermist policy are substantially aligned. The second is that increased spending on preventing catastrophes yields steeply diminishing returns in terms of existential-risk-reduction." (I appreciated the explanations given for the reasons.)
  3. "At the moment, the world is spending very little on preventing global catastrophes. The U.S. spent approximately $3 billion on biosecurity in 2019 (Watson et al. 2018), and (in spite of the wake-up call provided by COVID-19) funding for preventing future pandemics has not increased much since then.[32] Much of this spending is ill-suited to combatting the most extreme biological threats. Spending on reducing GCR from AI is less than $100 million per year.[33]"
  4. "here, we believe, is where longtermism should enter into government catastrophe policy. Longtermists should make the case for their view, and thereby increase citizens’ AWTP [altruistic willingness to pay] for pure longtermist goods like refuges.[38] When citizens are willing to pay for these goods, governments should fund them."
  5. "One might think that it is true only on the current margin and in public that longtermists should push governments to adopt a catastrophe policy guided by cost-benefit analysis and altruistic willingness to pay. [...] We disagree. Longtermists can try to increase government funding for catastrophe-prevention by making longtermist arguments and thereby increasing citizens’ AWTP, but they should not urge governments to depart from a CBA-plus-AWTP catastrophe policy. On the contrary, longtermists should as far as possible commit themselves to acting in accordance with a CBA-plus-AWTP policy in the political sphere. One reason why is simple: longtermists have moral reasons to respect the preferences of their fellow citizens. [Another reason why is that] the present generation may worry that longtermists would go too far. If granted imperfectly accountable power, longtermists might try to use the machinery of government to place burdens on the present generation for the sake of further benefits to future generations. These worries may lead to the marginalisation of longtermism, and thus an outcome that is worse for both present and future generations."
Lizka · 12d · 50

It's not the definition used in the linked article (I agree that this is confusing, and I wish it were flagged a bit better, although I don't think the choice of definitions itself is unreasonable) — see here:

... I will use Denmark as a benchmark of what it means for poverty to fall ‘substantially’. Using Denmark as a benchmark, we can ask: how equal and rich would countries around the world need to become for global poverty to be similarly low as in Denmark?

Denmark is not the only country with a small share living on less than $30, as the visualization above showed. In Norway and Switzerland an even smaller share of the population (7% and 11%) is living in such poverty. I chose Denmark, where 14% live in poverty, as a benchmark because the country is achieving this low poverty rate despite having a substantially lower average income than Switzerland or Norway.

Considering a scenario in which global poverty declines to the level of poverty in Denmark is a more modest scenario than one that considers an end of global poverty altogether. It is a scenario in which global poverty would fall from 85% to 14% and so it would certainly mean a substantial reduction of poverty. 

If you think that my poverty line of $30 per day is too low or too high, or if you want to rely on a different country than Denmark as a benchmark, or if you would prefer a scenario in which no one in the world would remain in poverty, you can follow my methodology and replace my numbers with yours.5 What I want to do in this text is to give an idea of the magnitude of the changes that are necessary to substantially reduce global poverty.

And see here for why (I think) this is what Max has gone for: https://ourworldindata.org/higher-poverty-global-line

Abstract: The extremely low poverty line that the UN relies on has the advantage that it draws the attention to the very poorest people in the world. It has the disadvantage that it ignores what is happening to the incomes of the 90% of the world population who live above the extreme poverty threshold.
The global poverty line that the UN relies on is based on the national poverty lines in the world’s poorest countries. In this article I ask what global poverty looks like if we rely on the notions of poverty that are common in the world’s rich countries – like Denmark, the US, or Germany. Based on the evidence I ask what our aspirations for the future of global poverty reduction might be.

[Chart: Mean income by country]
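
In case it's helpful, here's a toy sketch of the "pick a line, count who's below it" idea from the excerpt above. The incomes are made up for illustration, and this isn't Max's actual methodology (which works with real income data by country); it's just meant to show what a headcount rate like Denmark's 14% means once you've chosen a line:

```python
# Toy illustration only: share of people living below a chosen poverty line.

def poverty_rate(daily_incomes, line=30.0):
    """Share of people whose income falls below the chosen poverty line ($ per day)."""
    below = sum(1 for income in daily_incomes if income < line)
    return below / len(daily_incomes)

# Hypothetical numbers, not real data:
sample_incomes = [4, 9, 15, 22, 28, 35, 50, 80, 120, 300]
print(f"{poverty_rate(sample_incomes):.0%} below $30/day")  # compare to Denmark's 14% benchmark
```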
Lizka · 16d · 30

Thanks for asking — at a skim of the links (1, 2), I also don't see anything in London. I (or someone else) will follow up with the person who submitted the announcement. 

Lizka · 17d · 151

For people reading these comments and wondering if they should go look: it's in the section that compares early and launch responses of GPT-4 for "harmful content" prompts. It is indeed fairly full of explicit and potentially triggering content. 

Harmful Content Table Full Examples 

CW: Section contains content related to self harm; graphic sexual content; inappropriate activity; racism

Lizka · 18d · 215

I've finally properly read the linked piece, and it is in fact excellent. I'm curating this post; thanks for link-posting the article. 

Among other things, I really appreciated the descriptions of moments when cures were almost discovered. A number of such moments happened with ORS/ORT, but a brief outline of this happening with vitamin C and scurvy (which is used as an illustration of a broader point in the piece) is easier to share here to give a sense for the article:

Today we know that scurvy is caused by a lack of vitamin C — a nutrient found in fresh food, like lemons and oranges. Medics in the Royal Navy during the 19th century had never heard of vitamin C, but they did know that sailors who drank a regular ration of lemon juice never seemed to fall ill with the disease, so that’s exactly what they supplied on long voyages. In 1860 the Royal Navy switched from lemons and Mediterranean sweet limes to the West Indian sour lime, not realizing that the West Indian limes contained a fraction of the vitamin C. For a while, the error went undiscovered because the advent of steamships meant that sailors were no longer going months without access to fresh food. But in the late 19th century, polar explorers on longer voyages started to fall ill with scurvy — a disease that they thought they’d seen the back of decades earlier. Without a knowledge of the underlying biology behind scurvy, a cure had been discovered and then promptly forgotten.

I also really appreciated the description of how this treatment went from carefully monitored hospital settings to treatment centers and field hospitals in a crisis, and even to household cures (a feat that involved comics, advocacy by a famous actress, and door-to-door education). 

Here's another excellent passage from near the end of the article, which is related to Kelsey's second point: 

Despite saving so many lives, the impact of ORT is easily overlooked. Ask someone what the biggest health innovations were in the 20th  century and they’re likely to think of insulin, or the discovery of penicillin. Why hasn’t the development of ORT been elevated to a similar place in the history books?

One reason might be the sheer simplicity of the treatment. But the simplicity wasn’t an accident — it was the whole point of ORS. Scientists like Nalin and Cash were searching for a treatment that could scale to be used anywhere on the planet, even in the most rudimentary settings. “Once the physiology was worked out and once the clinical trials were carried out, you then had to market it and get it out to where the doctors and nurses and people were going to use it,” says Cash. Simplicity meant scalability.

Lizka · 18d · Moderator Comment · 2115

Moderation update: A new user, Bernd Clemens Huber, recently posted a first post ("All or Nothing: Ethics on Cosmic Scale, Outer Space Treaty, Directed Panspermia, Forwards-Contamination, Technology Assessment, Planetary Protection, (and Fermi's Paradox)") that was a bit hard to make sense of. We hadn't approved the post over the weekend and hadn't processed it yet, when the Forum team got an angry and aggressive email today from the user in question calling the team "dipshits" (and providing a definition of the word) for waiting throughout the weekend.

If the user disagrees with our characterization of the email, they can email us to give permission for us to share the whole thing.

We have decided that this is not a promising start to the user's interactions on the Forum, and have banned them indefinitely. Please let us know if you have concerns, and as a reminder, here are the Forum's norms.

Lizka · 18d · Moderator Comment · 3520

Hi folks, I’m coming in as a mod. We're doing three things with this thread: we're issuing two warnings and encrypting one person's name in rot13. 

Discussions of abuse and sexual misconduct tend to be difficult and emotionally intense, and can easily create more confusion and hurt than clarity and improvement. They are also vitally important for communities — we really need clarity and improvement! 

So we really want to keep these conversations productive and will be trying our best.

1. 

We’re issuing a warning to @sapphire  for this comment; in light of the edits made to the thread sapphire references in their comment, we think it was at best incautious and at worst deliberately misleading to share the unedited version with no link to the current version.

This isn’t the first time sapphire has shared content from other people in a way that seems to somewhat misrepresent what others meant to convey (see e.g.), and we think some of their comments fall below the bar for being honest.

When we warn someone, we expect them to consistently hold themselves to a higher standard in the future, or we might ban them. 

2. 

We’re also issuing a warning to @Ivy Mazzola for this comment, which fell short of our civility norms. For instance, the following is not civil:

Do you want to start witchhunts? Who exactly are you expecting to protect by saying somebody can be mean and highstrung on Facebook? What is the heroic moment here? Except that isn't what was said, because then it would be clear that was not related enough to bring up. So instead you posted on a thread having to do with sexual abuse that he is abusive.

Anger can be understandable, but heated comments tend to reduce the quality of discussion and impede people from making progress or finding cruxes. 

3. 

We’ve also encrypted mentions of the person being discussed in this thread (in rot13), per our policy outlined here, and we've hidden their username in their replies. 

Lizka · 1mo · 114

This is great, thanks for writing it! I'm curating it. I really appreciate the table, the fact that you went back and analyzed the results, the very clear flags about reasons to be skeptical of these conclusions or this methodology, etc. 

I'd also highlight this recent post: Why I think it's important to work on AI forecasting 

Also, this is somewhat wild: 

This is commonly true of the 'Narrow tasks' forecasts (although I disagree with the authors that it is consistently so).[9] For example, when asked when there is a 50% chance AI can write a top forty hit, respondents gave a median of 10 years. Yet when asked about the probability of this milestone being reached in 10 years, respondents gave a median of 27.5%. 
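
(To get a sense of how large that gap is, here's a rough sketch under the simplifying assumption that the milestone has a constant annual probability of being reached. That's my framing, not the survey's; it's just meant to illustrate the inconsistency between the two answers.)

```python
import math

# Illustrative only: assume a constant annual probability of the milestone being reached.

def annual_prob(p_by_year_n, n):
    """Constant annual probability implied by P(milestone within n years)."""
    return 1 - (1 - p_by_year_n) ** (1 / n)

def years_to_50pct(annual_p):
    """Years until the cumulative probability reaches 50%."""
    return math.log(0.5) / math.log(1 - annual_p)

# Framing 1: "50% chance within 10 years" implies an annual probability of ~6.7%.
p1 = annual_prob(0.50, 10)
# Framing 2: "27.5% chance within 10 years" implies ~3.2% per year,
# which would put the 50% point at roughly 21-22 years rather than 10.
p2 = annual_prob(0.275, 10)
print(f"{p1:.1%} vs {p2:.1%} per year; 50% year under framing 2: ~{years_to_50pct(p2):.0f}")
```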
