Lizka

Content Specialist @ Centre for Effective Altruism
14858 karma · Joined Nov 2019 · Working (0-5 years)

Bio

I run the non-engineering side of the EA Forum (this platform), run the EA Newsletter, and work on some other content-related tasks at CEA. Please feel free to reach out! You can email me. [More about my job.]

Some of my favorites among my own posts:

I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I've since switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.

Some links I think people should see more frequently:

Sequences (9)

Donation Debate Week (Giving Season 2023)
Marginal Funding Week (Giving Season 2023)
Effective giving spotlight - classic posts
Selected Forum posts (Lizka)
Classic posts (from the Forum Digest)
Forum updates and new features
Winners of the Creative Writing Contest
Winners of the First Decade Review
How to use the Forum

Comments (505)

Topic Contributions (250)

Here's a long excerpt (happy to take it down if asked, but I think people might be more likely to go read the whole thing if they see part of it): 

The only thing everyone agrees on is that the only two things EAs ever did were “endorse SBF” and “bungle the recent OpenAI corporate coup.”

In other words, there’s never been a better time to become an effective altruist! Get in now, while it’s still unpopular! The times when everyone fawns over us are boring and undignified. It’s only when you’re fighting off the entire world that you feel truly alive.

And I do think the movement is worth fighting for. Here’s a short, very incomplete list of things effective altruism has accomplished in its ~10 years of existence. I’m counting it as an EA accomplishment if EA either provided the funding or did the work, further explanations in the footnotes. I’m also slightly conflating EA, rationalism, and AI doomerism rather than doing the hard work of teasing them apart:

Global Health And Development:

  • Saved about 200,000 lives total, mostly from malaria.[1]
  • Treated 25 million cases of chronic parasite infection.[2]
  • Given 5 million people access to clean drinking water.[3]
  • Supported clinical trials for both the RTS,S malaria vaccine (currently approved!) and the R21/Matrix-M malaria vaccine (on track for approval).[4]
  • Supported additional research into vaccines for syphilis, malaria, helminths, and hepatitis C and E.[5]
  • Supported teams giving development economics advice in Ethiopia, India, Rwanda, and around the world.[6]

Animal Welfare:

  • Convinced farms to switch 400 million chickens from caged to cage-free.[7]
  • [Skipped image] Things are now slightly better than this in some places! Source: https://www.vox.com/future-perfect/23724740/tyson-chicken-free-range-humanewashing-investigation-animal-cruelty
  • Freed 500,000 pigs from tiny crates where they weren’t able to move around.[8]
  • Gotten 3,000 companies including Pepsi, Kellogg’s, CVS, and Whole Foods to commit to selling low-cruelty meat.

AI:

  • Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT.[9]
  • …and other major AI safety advances, including RLAIF and the foundations of AI interpretability.[10]
  • Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.[11]
  • Helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences.
  • Gotten major AI companies including OpenAI to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.
  • Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?[12]
  • [Skipped screenshot]
  • Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.[13]
  • [Skipped screenshot]
  • Become so influential in AI-related legislation that Politico accuses effective altruists of having “[taken] over Washington” and “largely dominating the UK’s efforts to regulate advanced AI”.
  • Helped (probably, I have no secret knowledge) the Biden administration pass what they called “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”
  • Helped the British government create its Frontier AI Taskforce.
  • Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.

Other:

I think other people are probably thinking of this as par for the course - all of these seem like the sort of thing a big movement should be able to do. But I remember when EA was three philosophers and a few weird Bay Area nerds with a blog. It clawed its way up into the kind of movement that could do these sorts of things by having all the virtues it claims to have: dedication, rationality, and (I think) a genuine desire to make the world a better place.

II.

Still not impressed? Recently, in the US alone, effective altruists have:

  • ended all gun violence, including mass shootings and police shootings
  • cured AIDS and melanoma
  • prevented a 9-11 scale terrorist attack

Okay. Fine. EA hasn’t, technically, done any of these things.

But it has saved the same number of lives that doing all those things would have.

About 20,000 Americans die yearly of gun violence, 8,000 of melanoma, 13,000 from AIDS, and 3,000 people died in 9/11. So doing all of these things would save 44,000 lives per year. That matches the ~50,000 lives that effective altruist charities save yearly.[18]

People aren’t acting like EA has ended gun violence and cured AIDS and done all those things. Probably this is because those are exciting popular causes in the news, and saving people in developing countries isn’t. Most people care so little about saving lives in developing countries that effective altruists can save 200,000 of them and people will just not notice. “Oh, all your movement ever does is cause corporate boardroom drama, and maybe other things I’m forgetting right now.”

In a world where people thought saving 200,000 lives mattered as much as whether you caused boardroom drama, we wouldn’t need effective altruism. These skewed priorities are the exact problem that effective altruism exists to solve - or the exact inefficiency that effective altruism exists to exploit, if you prefer that framing.

Lizka · 2d · Moderator Comment

pinkfrog (and their associated account) has been banned for 1 month, because they voted multiple times on the same content (with two accounts), including upvoting pinkfrog's comments with their other account. To be a bit more specific, this happened on one day, and there were 12 cases of double-voting in total (which we’ll remove). This is against our Forum norms on voting and using multiple accounts.

As a reminder, bans affect the user, not the account(s).

If anyone has questions or concerns, please feel free to reach out, and if you think we made a mistake here, you can appeal the decision.

Multiple people on the moderation team have conflicts of interest with pinkfrog, so I wanted to clarify our process for resolving this incident. We uncovered the norm violation after an investigation into suspicious voting patterns, and only revealed the user’s identity to part of the team. The moderators who made decisions about how to proceed aren't aware of pinkfrog's real identity (they only saw anonymized information).

+1 to The Emperor of All Maladies

Hi! Sorry for the delay in my response here: 

  • Unfortunately, we could only list organizations from here as candidates in the Donation Election this year (largely due to vetting capacity and the current system we’re using for the election). I tried to make this clear in the announcement posts, but I think it ended up being confusing.
  • However, we can add your project to the Giving Portal here if you send us a logo,[1] a link to a fundraiser or your donation page (which ideally also shares some information about what you do and why people should consider donating), and a link to a description of your work (your website probably works). We might also add a page in the Election Portal (and elsewhere) that highlights projects we couldn’t feature but which people should consider donating to (and which have been active on the Forum this Giving Season), so we’d use the logo/links there, too.
    • @Bruno Sterenberg and @Joy Bittner - please let me know if you’re interested (feel free to email or DM me via the Forum), and apologies once again for the delay and confusion!
  1. ^

    PNG or JPEG, ideally somewhat square-ish (although we can just add extra white space around non-square logos)

There was an earlier post from lots of people at CEA, including me: Here’s where CEA staff are donating in 2023

Quick summary of my section: I donated to the Donation Election Fund for the reasons described here, to someone's political campaign[1], and in some cases I didn't take compensation I was supposed to get from organizations I'd happily donate to. 

  1. ^

    I feel weird donating to political campaigns (I grew up ~avoiding politics and still have a lot of the same beliefs and intuitions). But I talked to some people I know about the value of this campaign and tried to estimate the cost-effectiveness of the donation (my conclusion was that it was very close to donating to the LTFF, even when I was ignoring impact that might come from animal welfare improvements, which is important to me), and was compelled by the consideration that I had an unusual ability to donate to the campaign as a US citizen. (I'm interested in hearing people's thoughts about this, but will probably not actively participate in public discussions about the decision.)

I guess the framing of the post is pretty relevant: these projects would be over the bar if the LTFF got more donations. (Though I appreciate that it's important to avoid discouraging people.)

I might also flag that I don't think getting rejected generally has costs besides the time you put in and your motivation (someone from the LTFF could correct me if I'm wrong). So applying is often worth it even if you think it's pretty likely that you'll get rejected. This isn't to say that rejection isn't hard; here's a thread with tips and others' experiences. But it seems that "Don't think, just apply (usually)!" is pretty good advice.

Thanks for engaging! Quick thoughts:

  1. Yeah, I don't expect to be passing on a nontrivial inheritance to kids. Pledging to do something specific here currently seems unfeasible, though; I have no idea what the world will be like when I'm in my 70s. Examples of weirdness (even setting aside AI developments): maybe we've made serious medical breakthroughs and I'm still expecting to work for a long time, maybe money works in seriously different ways, etc. I haven't thought about this much, though, and it might be worth thinking about (e.g. maybe there's a nicely operationalized pledge that could work).[1]
  2.  Thanks!
  3. I think I fairly strongly disagree here, and might have been unclear in the original post. My runway-helps-epistemics point was not meant as a security measure for myself/personal protection against hardship, but rather as a point about potentially dangerous biases. My broad argument here is something like this: 
    1. (A) Organizations sometimes turn sour, or I might discover things about organizations that employ or fund me that are not OK (at least according to me).
      1. (Note that I don't think readers should infer things about my current employers and funders from this comment. I'm still at CEA for a reason! But I've heard stories that make me aware of this problem.)
    2. (B) If I'm extremely financially dependent on my employers/funders, I will be afraid to do things like the following:
      1. Quit in order to voice protest or speak more freely, if I find out something very bad
      2. Do things that might upset my employers/funders (for which they might fire me), like asking questions they might not want asked, etc. 
      3. Actually investigate worries I have, knowing that I might discover things that mean I can no longer endorse continuing my current work
      4. Etc. 
    3. (C) It seems plausible that I should still do the things above even if I'm extremely financially dependent on my employers/funders. But it's very scary, and it might make me mentally flinch away from considering actions like this. I.e. it might make me biased against doing the above in worlds where I should. I think this is quite bad. 
    4. What Elizabeth says here resonates with me and seems reasonable: getting yourself into a position where virtue is cheap is an underrated strategy.
      1. This section of a recent post is probably also relevant, as well as this one
  1. ^

    While we're talking about alternative pledges: I've considered taking a more general pledge to use some significant portion of the resources I have (and will have) for impartially altruistic purposes, with some carve-outs for other important values (like supporting family if something happens). I'd obviously need to operationalize it a lot better, and I haven't dedicated much time to thinking about it yet, but this seems more plausible to me right now. 

    I guess that if I were to prioritize thinking about this, I'd probably want to first think through the main goals of pledges and make sure a pledge like this is actually accomplishing something I think is important, instead of just allowing me to say something when pledges come up, etc. E.g. maybe the main benefit of a donation pledge is its public+memetic quality -- it encourages others to donate more. Or maybe it's about value drift, or something else, etc. 

Lizka · 15d · Moderator Comment

Update: 

I’ve talked with the other moderators and looked at KArax’s other Forum activity. Based on this comment, their oldest comment (which is somewhat violent/aggressive), and KArax’s other content, we’ve decided to issue KArax a 6-month ban.

Because KArax's early Forum activity doesn't seem promising to me, I'd like to see a significant change if they come back to the Forum after the ban has passed. This means comments, posts, etc. should be civil, (reliably) on topic, and honest (without exaggerations). I expect that we'd ban KArax indefinitely if we saw more of this kind of off-topic/overstated/aggressive content. 

As a reminder, bans affect the user, not the account — any other accounts KArax operates are also suspended. If you’d like, you can appeal here.
