Lizka

Content Specialist @ Centre for Effective Altruism
Working (0-5 years experience)
9259 · Joined Nov 2019

Bio

I run the non-engineering side of the EA Forum (this platform), run the EA Newsletter, and work on some other content-related tasks at CEA. Please feel free to reach out! You can email me. [More about my job.]

Some of my favorite posts of my own:

I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I've since switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.

Some links I think people should see more frequently:

Sequences (5)

Forum Digest Classics
Forum updates and new features
Winners of the Creative Writing Contest
Winners of the First Decade Review
How to use the Forum

Comments (328)

Thanks for these flags about the newcomer experience, both. I agree that these are important considerations.

[Writing just for myself, not my employer or even my team. I am working on the Forum, and that's probably hard to separate from my views on this topic — but this is a quickly-written comment, not something that I got feedback on from the rest of the team, etc.]

I can see how all of this can feel related to the discussion about "bad epistemics" or a claim that the community as a whole is overly navel-gazing, etc. Thanks for flagging that you're concerned about this. 

To be clear, though, one of the issues here (and the use of the term "bike-shedding") is more specific than those broader discussions. I think that, given whatever it is the community cares about (without opining on whether that prioritization is "correct"), the issues described in the post will still appear. 

Take the example of the Forum itself as a topic that's relevant to building EA and a topic of interest to the EA community. 

Within that broad topic, some sub-topics will get more attention than others for reasons that don't track how much the community actually values them (in ~total). Suppose there are two discussions that could (and potentially should) happen: a discussion about the fonts on the site, and a discussion about how to improve fact-checking (or how to improve the Forum experience for newcomers, or how to nurture a culture that welcomes criticism, or something like that). I'd claim that the latter (sub)topic(s) is likely more important to discuss and get right than the former, but because it's harder — and harder to participate in than a discussion about the font, something everyone interacts with all the time — it might get less attention. 

Moreover, posts that are more like "I dislike the font, do you?" will often get more engagement than posts like "the font is bad for people with dyslexia, based on these 5 studies — here are some suggestions and some reasons to doubt the studies," because (likely) fewer people will feel like they can weigh in on the latter kind of post. This is where bike-shedding comes in. I think we can probably do better, but it'll require a bit of testing and tweaking.

[Writing just for myself, not my employer or even my team. I am working on the Forum, and that's probably hard to separate from my views on this topic — but this is a quickly-written comment, not something that I got feedback on from the rest of the team, etc.]

Thanks for this comment, Amber! 

I'll try to engage with the other things that you said, but I just want to clarify a specific claim first. You write: 

I guess a question underlying all of this is 'what is karma for?' An implication of this post seems to be that karma should reflect quality, or how serious people think the issues are, all things considered.

I actually do not believe this. I think the primary/key point of karma is ordering the Frontpage & providing a signal of what to read (and ordering other pages, like when you're exploring posts on a given topic). We don't need to use only karma for ordering the Frontpage — and I really wish that more people used topic filters to customize their Frontpages, etc. — but I do think that's a really important function of karma. This means that karma needs to reflect usefulness-of-reading-something to a certain extent. This post is about correcting one type of issue that arises given this use. 

Note that we also correct in other ways. The Frontpage isn't just a list of posts from all time sorted by (inflation-adjusted) karma, largely because people find it useful to read newer content (although not always); we also have topic tags, etc. 
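(To make this concrete: below is a purely hypothetical sketch I'm adding for illustration, not the Forum's actual ranking code, of how ordering could combine karma with recency and a reader's topic filters.)

```python
# Hypothetical sketch only, not the Forum's real algorithm. It just illustrates
# the idea that karma is one input to frontpage ordering, alongside recency and
# the reader's topic filters.
from datetime import datetime, timezone

def frontpage_order(posts, hidden_topics=(), decay_hours=48.0):
    """Sort posts by karma discounted by age, hiding filtered topics."""
    now = datetime.now(timezone.utc)

    def score(post):
        age_hours = (now - post["posted_at"]).total_seconds() / 3600
        # Older posts get discounted, so a newer post with equal karma ranks higher.
        return post["karma"] / (1 + age_hours / decay_hours)

    visible = [p for p in posts if not set(p["topics"]) & set(hidden_topics)]
    return sorted(visible, key=score, reverse=True)
```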

So I don't directly care about whether a post that's 1000x more net useful than another post has 1000x (or even simply more) karma; I just want people to see the posts that will be most useful for them to engage with. (I think some people care quite a bit about karma correlating strongly with the impact of posts, and don't think this is unreasonable as a desire, but I personally don’t think it’s that important. I do think there are other purposes to karma, like being a feedback mechanism to the authors, a sign of appreciation, etc.)

[Writing just for myself, not my employer or even my team. I am working on the Forum, and that's probably hard to separate from my views on this topic — but this is a quickly-written comment, not something that I got feedback on from the rest of the team, etc.]

Thanks for this comment — I agree that tag filtering/following is underused, and we're working on some things that we hope will make it a bit more intuitive and obvious. I like a lot of your suggestions. 

Great, thank you! I appreciate this response, it made sense and cleared some things up for me. 

Re:

Yeah, I'm with you on being told to exercise. I'm guessing you like this because you're being told to do it, but you know that you have the option to refuse.

I think you might be right, and this is just something like the power of defaults (rather than choices being taken away). Having good defaults is good. 

(Also, I'm curating the post; I think more people should see it. Thanks again for sharing!) 

I really appreciate this post, thanks for sharing it (and welcome to the Forum)! 

Some aspects I want to highlight: 

  1. The project — trying to translate the known (or assumed) harms from child marriage into the metrics used by related projects that might work on the issue — seems really valuable
  2. Noticing that a key assumption falls through and sharing this is great. I'd love to see more of this
  3. The post also outlines some learnings from the experience
    1. Write out key assumptions and test them / look for things that disprove them
    2. Avoid trusting consensus
    3. Get accountability / find someone to report to
  4. I also like that there isn't the sense that this is the last word on whether working on child marriage is a promising cause area or not — this is an in-progress write-up (see "updated positions and next steps") and doesn't shy away from that fact
  5. And there's an "if you find this interesting, you may also like" section! I'm curious if you've seen: 
    1. Giving What We Can's report from 2014 on this issue? (And the associated page, which also seems pretty outdated.)
    2. Introducing Lafiya Nigeria and the Women's health and welfare and Family planning topic pages. 

Quick notes on the model — I'd be interested in your answers to some questions in the comments (Jeff's, this one that asks in part about the relationship between economic growth (and growth-supporting work) and this issue, etc.). 

  • I skimmed this report on some programs, and in case anyone is interested, it seems: 
    • "In each study country, we tested four approaches: 1) community sensitization to address social norms, 2) provision of school supplies to encourage retention in school, 3) a conditional asset transfer to girls and their families, and 4) one study area that included all the approaches."
  • I'm immediately a bit worried that estimating the impact of these programs gets messier if, e.g., one of the harms of child marriage that you track is a loss in education (or in nutrition, or something) — as presumably the school supplies program also just directly supports education (so there's potentially some double-counting).
    • (I'm also wondering if, assuming that education delays marriage, more effective education-support programs, like iron supplementation, are just the way to go here.)
      • In general, it seems like there might be a bit of circularity (or, alternatively, loss of information) if we do something like: "ok, these interventions, which we evaluate on a given factor — how much they delay (child) marriage — are effective to [this degree] at achieving the particular thing we're measuring, which we think is important for [a number of factors]."

I made a sketch to try to explain my worry about the models (and some alternative approaches I've seen) — it's a very rough sketch, but I'd be curious for takes. 

Thanks for posting this! I do think lots of people in EA take a more measuring-happiness/preference-satisfaction approach, and it's really useful to offer alternatives that are popular elsewhere. 

My notes and questions on the post:

Here's how I understand the main framework of the "capability approach," based mostly on this post, the linked Tweet, and some related resources (including SEP and ChatGPT):[1] 

  • "Freedom to achieve [well-being]" is the main thing that matters from a moral perspective.
    • (This post then implies that we should focus on increasing people's freedom to achieve well-being / we should maximize (value-weighted) capabilities.)
  • "Well-being" breaks down into functionings (stuff you can be or do, like jogging or being a parent) and capabilities (the ability to realize a functioning: to take some options — choices)
    • Examples of capabilities: having the option of becoming a parent, having the option of not having children, having the option of jogging, having the option of not jogging, etc. Note: if you live in a country where you're allowed to jog, but there are no safe places to jog, you do not actually have the capability to jog.
    • Not all functionings/capabilities are equal: we shouldn't naively list options and count them. (So e.g. the ability to spin and clap 756 times is not the same as the option to have children, jog, or practice a religion.) My understanding is that the capability approach doesn't dictate a specific approach to comparing different capabilities, and the post argues that this is a complexity that is just a fact of life that we should accept and pragmatically move forward with: 
      • "Yes, it’s true that people would rank capability sets differently and that they’re very high dimensional, but that’s because life is actually like this. We should not see this and run away to the safety of clean (and surely wrong) simple indices. Instead, we should try to find ways of dealing with this chaos that are approximately right."

In particular, even if it turns out that someone is content not jogging, them having the ability to jog is still better than them not having this ability. 

My understanding of the core arguments of the post, with some questions or concerns I have (corrections or clarifications very much appreciated!): 

  1. What the "capability approach" is — see above.
  2. Why this approach is good
    1. It generally aligns with our intuitions about what is good. 
      1. I view this as both a genuine positive, and also as slightly iffy as an argument — I think it's good to ground an approach in intuitions like "it's good for a woman to choose whether to walk at night even if she might not want to", but when we get into things like comparing potential areas of work, I worry about us picking approaches that satisfy intuitions that might be wrong. See e.g. Don’t Balk at Animal-friendly Results, if I remember that argument correctly, or just consider various philanthropic efforts that focus on helping people locally even if they're harder to help and in better conditions than people who are farther away — I think this is generally justified with things like "it's important to help people locally," which to me seems like over-fitting on intuitions.
      2. At the same time, the point about women being happier than men in the 1970s in the US seems compelling. Similarly, I agree that I don't personally maximize anything like my own well-being — I'm also "a confused mess of priorities." 
    2. It's safer to maximize capabilities than it is to maximize well-being (directly), which both means that it's safer to use the capabilities approach and is a signal that the capabilities approach is "pointing us in the right direction." 
      1. A potentially related point that I didn't see explicitly: this approach also seems safer given our uncertainty about what people value/what matters. This is also related to 2d. 
    3. This approach is less dependent on things like people's ability to imagine a better situation for themselves. 
    4. This approach is more agnostic about what people choose to do with their capabilities, which matters because we're diverse and don't really know that much about the people we're trying to help. 
      1. This seems right, but I'm worried that once you add the value-weighting for the capabilities, you're imposing your biases and your views on what matters in a similar way to other approaches to trying to compare different states of the world. 
      2. So it seems possible that this approach either is not very useful, saying "we need to maximize value-weighted capabilities, but we can't choose the value-weightings" (see this comment, which makes sense to me), or transforms back into a generic approach like the ones more commonly used in EA — deciding that there are good states and trying to get beings into those states (healthy, happy, etc.). [See 3bi for a counterpoint, though.]
  3. Some downsides of the approach (as listed by the post)
    1. It uses individuals as the unit of analysis and assumes that people know best what they want, and if you dislike that, you won't like the approach. [SEE COMMENT THREAD...]
      1. I just don't really see this as a downside.
    2. "A second downside is that the number of sets of capabilities is incredibly large, and the value that we would assign to each capability set likely varies quite a bit, making it difficult to cleanly measure what we might optimize for in an EA context."
      1. The post argues that we can accept this complexity and move forward pragmatically in a better way than going with clean-but-wrong indices. It lists three examples (two indices and one approach of tracking individual dimensions) that "start with the theory of the capability approach but then make pragmatic concessions in order to try to be approximately right." These mostly track things that seem like common requirements for many other capabilities, like health/being alive, resources, education, etc. 
  4. The influence of the capability approach

Three follow-up confusions/uncertainties/questions (beyond the ones embedded in the summary above): 

  1. Did I miss important points, or get something wrong above? 
  2. If we just claim that people value having freedoms (or freedoms that will help them achieve well-being), is this structurally similar to preference satisfaction?
  3. The motivation for the approach makes intuitive sense to me, but I'm confused about how this works with various things I've heard about how choices are sometimes bad. (Wiki page I found after a quick search, which seemed relevant after a skim.) (I would buy that a lot of what I've heard is stuff that's failed in replications, though.)
    1. Sometimes I actually really want to be told, "we're going jogging tonight," instead of being asked, "So, what do you want to do?"
    2. My guess is that these choices are different, and there's something like a meta-freedom to choose when my choice gets taken away? But it's all pretty muddled. 
[1] I don't have a philosophy background, or much knowledge of philosophy! 

Hi! Just flagging that I've marked this post as a "Personal Blog" post, based on the Forum's policy on politics.

(This means those who've opted in to seeing "Personal Blog" posts on the Frontpage will see it there, while others should only see it in Recent Discussion, on the All Posts page, and on the relevant topic/tag pages.) 

Hi! The process for curation is outlined here. In short, some people can suggest curation, and I currently make the final calls. 

You can also see a list of other posts that have been curated (you can get to the list by clicking on the star next to a curated post's title). 

Thanks for writing this! I'm curating it. 

Some things I really appreciate about the post: 

  1. The claim (paraphrased), "it is pretty easy to get AI safety messaging wrong, but there are some useful things to communicate about AI safety" seems important (and right — I've also seen examples of people accidentally spreading the idea that "AI will be powerful"). I also think lots of people in the EA community should hear it — a good number of people are in fact working on "spreading the ideas of AI safety" (see a related topic page).
  2. It's very nice to have more content on things that ~everyone can help with. 
    1. "practically everyone can help with spreading messages at least some, via things like talking to friends; writing explanations of your own that will appeal to particular people; and, yes, posting to Facebook and Twitter and all of that. [...] I’d guess it can be a big deal: many extremely important AI-related ideas are understood by vanishingly small numbers of people, and a bit more awareness could snowball. Especially because these topics often feel too “weird” for people to feel comfortable talking about them! Engaging in credible, reasonable ways could contribute to an overall background sense that it’s OK to take these ideas seriously."
  3. The lists of kinds of messages that are risky vs. helpful are useful: 
    1. Risky (presumably not an exhaustive list!): 
      1. messages that generically emphasize the importance and potential imminence of powerful AI systems
      2. messages that emphasize that AI could be risky/dangerous to the world, without much effort to fill in how, or with an emphasis on easy-to-understand risks (where one risk of this kind of messaging is, "If people have a bad model of how and why AI could be risky/dangerous (missing key risks and difficulties), they might be too quick to later say things like “Oh, turns out this danger is less bad than I thought, let’s go full speed ahead!”")
    2. Helpful + right (This list is presumably also not exhaustive. I should also say that I'm least optimistic about iii (sort of) and v.)
      1. [S] We should worry about conflict between misaligned AI and all humans
      2. [S] AIs could behave deceptively, so “evidence of safety” might be misleading
      3. [S] AI projects should establish and demonstrate safety (and potentially comply with safety standards) before deploying powerful systems
      4. [S] Alignment research is prosocial and great
      5. [S] It might be important for companies (and other institutions) to act in unusual ways
      6. [S] We’re not ready for this

One question/disagreement/clarification I have about the statement, "I’m not excited about blasting around hyper-simplified messages." 

  • The word "simplified" is a bit vague; I think I disagree with some interpretations of the sentence. I agree that "it’s generally not good enough to spread the most broad/relatable/easy-to-agree-to version of each key idea," but I think in some cases, "simplifying" could be really useful for spreading more accurate messages. In particular, "simplifying" could mean something like "dumbing down somewhat indiscriminately" — which is bad/risky — or it could mean something like "shortening and focusing on the key points, making technical points accessible to a more general audience, etc." — something like distillation. The latter approach seems really useful here, in part because it might help overcome a big problem in AI safety messaging: that a lot of the key points about risk are difficult to understand, and that important texts are technical. This means that it's easy to be shown cool demos of new AI systems, but not as easy to understand the arguments that explain why progress in AI might be dangerous. (So people trying to make the case in favor of safety might resort to deferring to experts, get the messages wrong in ways that make the listener unnecessarily skeptical of the overall case, etc.)
  • (More minor: I also think that the word "blast" has negative connotations which make it harder to correctly engage with the sentence. I think you mean "I'm not excited about sharing hyper-simplified messages in a way that reaches a ~random-but-large subset of people." I think I agree — it seems better to target a particular audience — but the way it's currently stated makes it harder to disagree; it's harder to say, "no, I think we should in fact blast some messages" than it is to say, "I think there are some messages that appeal to a very wide range of audiences," or to say "I think there are some messages we should promote extensively.")

(I should say that the opinions I'm sharing here are mine, not CEA's. I also think a lot of my opinions here are not very resilient.)
