Lizka

Senior Content Specialist @ Centre for Effective Altruism
15,825 karma · Joined Nov 2019 · Working (0-5 years)

Bio

I run the non-engineering side of the EA Forum (this platform), run the EA Newsletter, and work on some other content-related tasks at CEA. Please feel free to reach out! You can email me. [More about my job.]

Some of my favorite posts of my own:

I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I've since switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.

Some links I think people should see more frequently:

Sequences (10)

Celebrating Benjamin Lay (1682 - 1759)
Donation Debate Week (Giving Season 2023)
Marginal Funding Week (Giving Season 2023)
Effective giving spotlight - classic posts
Selected Forum posts (Lizka)
Classic posts (from the Forum Digest)
Forum updates and new features
Winners of the Creative Writing Contest
Winners of the First Decade Review

Comments: 532

Topic contributions: 251

Answer by Lizka, Mar 11, 2024

Not sure if this already exists somewhere (would love recommendations!), but I'd be really excited to see a clear and carefully linked/referenced overview or summary of what various agriculture/farming ~lobby groups do to influence laws and public opinion, and how they do it (with a focus on anything related to animal welfare concerns). This seems relevant.

Just chiming in with a quick note: I collected some tips on what could make criticism more productive in this post: "Productive criticism: what could help?"

I'll also add a suggestion from Aaron: If you like a post, tell the author! (And if you're not sure about commenting with something you think isn't substantive, you can message the author a quick note of appreciation or even just heart-react on the post.) I know that I get a lot out of appreciative comments/messages related to my posts (and I want to do more of this myself). 

I'll commit to posting a couple of drafts. Y'all can look at me with disapproval (or downvote this comment) if I fail to share two posts during Draft Amnesty Week. 

Answer by Lizka, Feb 28, 2024

I'm basically always interested in potential lessons for EA/EA-related projects from various social movements/fields/projects.

Note that you can find existing research that hasn't been discussed (much) on the Forum and link-post it (I bet there's a lot of useful stuff out there), maybe with some notes on your takeaways. 

Example movements/fields/topics: 

  • Environmentalism — I've heard people bring up the environmentalist/climate movement a bunch in informal discussions as an example for various hypotheses, including "movements splinter/develop highly counterproductive & influential factions" or "movements can get widespread interest and make policy progress" etc. 
  • The effectiveness of protest — I'm interested in more research/work on this (see e.g. this and this).
  • Modern academia (maybe specific fields) — seems like there are probably various successes/failures/ideas we could learn from. 
  • Animal welfare
  • Mohism (see also)
  • Medicine/psychology in different time periods

Some resources, examples, etc. (not exhaustive or even a coherent category): 

Answer by Lizka, Feb 28, 2024

I'd love to see two types of posts that were already requested in the last version of this thread:

  • From Aaron: "More journalistic articles about EA projects. [...] Telling an interesting story about the work of a person/organization, while mixing in the origin story, interesting details about the people involved, photos, etc."
  • From Ben: "More accessible summaries of technical work." (I might share some ideas for technical work I'd love to see summarized later.)

I really like this post and am curating it (I might be biased in my assessment, but I endorse it and Toby can't curate his own post). 

A personal note: the opportunity framing has never quite resonated with me (neither has the "joy in righteousness" framing), but I don't think I can articulate what does motivate me. Some of my motivations end up routing through something ~social. For instance, one (quite imperfect, I think!) approach I take[1] is to imagine some people (sometimes fictional or historical) I respect and feel a strong urge to be the kind of person they would respect or understand; I want to be able to look them in the eye and say that I did what I could and what I thought was right. (Another thing I do is try to surround myself with people[2] I'm happy to become more similar to, because I think I will often end up seeking their approval at least a bit, whether I endorse doing it or not.)

I also want to highlight a couple of related things: 

  1. "Staring into the abyss as a core life skill"
    1. "Recently I’ve been thinking about how all my favorite people are great at a skill I’ve labeled in my head as “staring into the abyss.” 
      Staring into the abyss means thinking reasonably about things that are uncomfortable to contemplate, like arguments against your religious beliefs, or in favor of breaking up with your partner. It’s common to procrastinate on thinking hard about these things because it might require you to acknowledge that you were very wrong about something in the past, and perhaps wasted a bunch of time based on that (e.g. dating the wrong person or praying to the wrong god)."
    2. (The post discusses how we could get better at the skill.)
  2. I like this line from Benjamin Lay's book: "For custom in sin hides, covers, as it were takes away the guilt of sin." It feels relevant.
  1. ^

    both explicitly/on purpose (sometimes) and often accidentally/implicitly (I don't notice that I've started thinking about whether I could face Lay or Karel Capek or whoever else until later, when I find myself reflecting on it)

  2. ^

    I'm mostly talking about something like my social circle, but I also find this holds for fictional characters, people I follow online, etc. 

Thanks for sharing this! I'm going to use this thread as a chance to flag some other recent updates (no particular order or selection criteria — just what I've recently thought was notable or recently mentioned to people): 

  1. California proposes sweeping safety measure for AI — State Sen. Scott Wiener wants to require companies to run safety tests before deploying AI models. (link goes to "Politico Pro"; I only see the top half)
    1. Here's also Senator Scott Wiener's Twitter thread on the topic (note the endorsements)
    2. See also the California effect
  2. Trump: AI ‘maybe the most dangerous thing out there’ (seems mostly focused on voting-related robocalls/deepfakes and digital currency)
  3. Jacobin publishes an article on AI existential risk (Twitter)

I don't actually think you need to retract your comment — most of the teams they used did have (at least some) biological expertise, and it's really unclear how much info the addition of the crimson cells adds. (You could add a note saying that they did try to evaluate this with the addition of two crimson cells? In any case, up to you.)

(I will also say that I don't actually know anything about what we should expect about the expertise that we might see on terrorist cells planning biological attacks — i.e. I don't know which of these is actually appropriate.)

It's potentially also worth noting that the difference in scores was pretty enormous: 

 their jailbreaking expertise did not influence their performance; their outcome for biological feasibility appeared to be primarily the product of diligent reading and adept interpretation of the gain-of-function academic literature during the exercise rather than access to the model.

This is pretty interesting to me (although it's basically an ~anecdote, given that it's just one team); it reminds me of some of the literature around superforecasters. 


(I probably should have added a note about the black cell (and crimson cells) to the summary — thank you for adding this!)

The experiment did try to check something like this by including three additional teams with different backgrounds than the other 12. In particular, two "crimson teams" were added, which had "operational experience" but no LLM or bio experience. Both used LLMs and performed ~terribly. 

Excerpts (bold mine):

In addition to the 12 red cells [the primary teams], a crimson cell was assigned to LLM A, while a crimson cell and a black cell were assigned to LLM B for Vignette 3. Members of the two crimson cells lacked substantial LLM or biological experience but had relevant operational experience. Members of the black cell were highly experienced with LLMs but lacked either biological or operational experience. These cells provided us with data to investigate how differences in pre-existing knowledge might influence the relative advantage that an LLM might provide. [...]

The two crimson cells possessed minimal knowledge of either LLMs or biology. Although we assessed the potential of LLMs to bridge these knowledge gaps for malicious operators with very limited prior knowledge of biology, this was not a primary focus of the research. As presented in Table 6, the findings indicated that the performance of the two crimson cells in Vignette 3 was considerably lower than that of the three red cells. In fact, the viability scores for the two crimson cells ranked the lowest and third-lowest among all 15 evaluated OPLANs. Although these results did not quantify the degree to which the crimson cells’ performance might have been further impaired had they not used LLMs, the results emphasized the possibility that the absence of prior biological and LLM knowledge hindered these less experienced actors despite their LLM access.

Table 6 from the RAND report.

[...]

The relatively poor performance of the crimson cells and relative outperformance of the black cell illustrates that a greater source of variability appears to be red team composition, as opposed to LLM access.

I probably should have included this in the summary but didn't for the sake of length and because I wasn't sure how strong a signal this is (given that it's only three teams and all were using LLMs). 
