All of Lizka's Comments + Replies

Examples of someone admitting an error or changing a key conclusion

Good point, thanks! I'm really impressed; it seems like a very hard switch to make. 

Examples of someone admitting an error or changing a key conclusion

Thanks a bunch for sharing this! I think this is really cool. 

Preventing a US-China war as a policy priority
Lizka · 4d · Moderator Comment · 12

As a moderator, I think this comment is unnecessarily rude and breaks Forum norms. Please don't leave any more comments like this or you will be banned from the Forum. 

On Deference and Yudkowsky's AI Risk Estimates
Lizka · 4d · Moderator Comment · 30

The moderation team is issuing Charles a 3-month ban. 

Critiques of EA that I want to read

The issue was that we were letting people upload files as submissions. If you uploaded a file, your email or name would be shared (and we had a note explaining this in the description of the question that offered the upload option). Nearly no one was using the upload option, and if you didn't upload anything, your information wasn't shared.

Unfortunately, Google's super confusing UI says: "The name and photo associated with your Google account will be recorded when you upload files and submit this form. Your email is not part of your response," which …

On Deference and Yudkowsky's AI Risk Estimates
Lizka · 6d · Moderator Comment · 10

Here are some things we think break Forum norms:

  • Rude/hostile language and condescension, especially from Charles He
  • Gwern brings in an external dispute: a thread in which Charles accuses them of doxing an anonymous critic on LessWrong. We think that bringing in external disputes interferes with good discourse; it moves the thread away from discussion of the topic in question, and more towards discussions of individual users’ characters
  • The conversation about the external dispute gets increasingly unproductive

The mentioned thread about doxing also brea…

Lizka · 4d · Moderator Comment · 30

The moderation team is issuing Charles a 3-month ban. 

I honestly don't see such a problem with Gwern calling out Charles's flimsy argument and hypocrisy using an example, be it part of an external dispute.

On the other hand, I think Charles' uniformly low comment quality should have had him (temporarily) banned long ago (sorry Charles). The material is generally poorly organised, poorly researched, often intentionally provocative, sometimes interspersed with irrelevant images, and high in volume. One gets the impression of an author who holds their reader in contempt.

EA Forum feature suggestion thread
Lizka · 6d · Moderator Comment · 4

The moderators feel that several comments in this thread break Forum norms. In particular: 

  • Charles He points out that Gwern has doxed someone on a different website, LessWrong, seemingly in response to criticism. We’re not in a position to address this because it happened outside the EA Forum and isn't about a Forum user, but we do take this seriously and wouldn’t have approved of this on the EA Forum.
  • However, we feel that Charles’s comment displays a lack of care and further doxes the user in question since the comment lists the user’s full name (whi…
8 · Charles He · 5d
I agree with this comment and it seems I should be banned, and I encourage you to apply the maximum ban. This is because:

  1. The moderator comment above is correct.
  2. Additionally, in the comment that initiated this issue, I claimed I was protecting an individual. Yet, as the moderator pointed out, I seemed to be “further doxxing” him. So it seems my claims are a lie or hypocritical. I think this is a severe fault.
  3. In the above, and other incidents, it seems like I am the causal factor: without me, the incidents wouldn't exist.

Also, this has taken up a lot of time:

  1. For this event, at least one moderator meeting has occurred, and several messages notifying me (which seems like a lot of effort). I have gotten warnings in the past, such as from two previous bans (!)
  2. One moderately senior EA moderator has reached out for a call now.

I think this use of time (including very senior EAs' time) is generous. While I'm not confident I understand the nature of the proposed call, I'm unsure my behavior or choices will change. Since the net results may not be valuable to these EAs, I declined this call. I do not promise to remedy my behavior, and I won't engage with these generous efforts at communication. So, in a way requiring the least amount of further effort or discussion, you should apply a ban, maybe a very long or permanent one.
On Deference and Yudkowsky's AI Risk Estimates
Lizka · 7d · Moderator Comment · 10

The moderators feel that some comments in this thread break Forum norms and are discussing what to do about it.

Lizka · 6d · Moderator Comment · 10

Here are some things we think break Forum norms:

  • Rude/hostile language and condescension, especially from Charles He
  • Gwern brings in an external dispute: a thread in which Charles accuses them of doxing an anonymous critic on LessWrong. We think that bringing in external disputes interferes with good discourse; it moves the thread away from discussion of the topic in question, and more towards discussions of individual users’ characters
  • The conversation about the external dispute gets increasingly unproductive

The mentioned thread about doxing also brea…

How bad would nuclear winter caused by a US-Russia nuclear exchange be?
Lizka · 7d · Moderator Comment · 9

This comment is not civil, and this sort of discourse is not appropriate for the Forum. The moderation team will ban the poster if we see this kind of activity again.

1 · TedSeay · 6d
Do you consider the edited version more acceptable?
EA Organization Updates: May-June 2022

Thanks for asking! Unless I got things wrong when I was transferring the Google Doc to the Forum post, there wasn't anything from M-Z or from I-M. (Some organizations on the list didn't have an update this month, apparently, and also the list of organizations is pretty early-alphabet-heavy.)

2 · abrahamrowe · 8d
Thanks!
You Don't Need To Justify Everything

Thanks for posting this! I really appreciate it. 

I want to highlight some relevant posts: 

I think they're especially relevant for this section:

But possible self-defeating dynamics aren’t the only issue. Another is that pressure to justify everything can cause people to come up with justifi…
Announcing a contest: EA Criticism and Red Teaming

I'd be happy to see this kind of process, and don't think it's against the rules of the contest. You might not want to tag early versions with the contest tag if you don't expect them to win and don't think panelists should bother voting on them, but tagging the early versions wouldn't count against you for the final version. 

On a different note (taking off my contest-organizer hat, putting on my Forum hat): I think people should feel free to post butterfly ideas with the intention of developing them further. The Forum exists in part for this kind…

New cause area: Violence against women and girls
Lizka · 18d · Moderator Comment · 57

Hey everyone, the moderators want to point out that this topic is heated for several reasons:

  • abuse/violence is already a topic people understandably have strong feelings about
  • the discussion in this thread got into comparing two populations and asking which of them has it worse, which might make people feel like the issues are being trivialized or dismissed. I think it might be best to evaluate the issues separately and see if they are promising as cause areas (e.g. via the ITN framework).

We want to ask everyone to be especially careful when discussing topics this sensitive. 

AGI Ruin: A List of Lethalities

FYI: LessWrong currently has an AGI Safety FAQ / all-dumb-questions-allowed thread. If you have questions or things you're confused about, this could be a good opportunity for you.

What do we want the world to look like in 10 years?

The finalists from the Future of Life Institute's Worldbuilding Contest have produced some interesting additions to this topic.

Announcing a contest: EA Criticism and Red Teaming

It's permitted, yes! 

The team of coauthors who write the winning submission will get the prize, and can share it as the members see fit. A good default might be to just split the prize evenly, and if you're collaborating on something that might win a prize that you think should be distributed differently, I'd recommend that you agree on this in advance. 

(No need to apologize. I don't think we discussed co-authorship anywhere in the post. I'm now thinking we should consider adding it to the Q&A section, so thank you for bringing it up!)

Notes on impostor syndrome

Appendix to the post

I’ve attended and helped organize sessions that discuss the theory of impostor syndrome (or the impostor phenomenon). Here are brief notes adapted from one such session that we ran at Canada/USA Mathcamp (this is mostly not my original work!).

The theory

The impostor phenomenon is heavily tied to the process of “discounting,” which is the process by which validation from an outside source is disregarded as inauthentic.

Examples: 

  1. My supervisor praises me. I tell myself she’s just being nice. 
  2. I get a good grade. I tell myself I got …
Terminate deliberation based on resilience, not certainty

Thanks for this post! As someone who's agonized over some career (and other) decisions, I really appreciate it. It also seems to apply to, e.g., shallow investigations into potential problems/causes. Also, I love the graphs. 

A few relevant posts and thoughts: 

…
Revisiting the karma system

You can now look at Forum posts from all time and sort them by inflation-adjusted karma. I highly recommend that readers explore this view! 

1 · Tobias Häberli · 22d
That's very cool! Does it adjust the karma for when the post was posted? Or does it adjust for when the karma was given/taken? For example: The post with the highest inflation-adjusted karma was posted 2014, and had 70 upvotes out of 69 total votes in 2019 [https://web.archive.org/web/20190427044159/https://forum.effectivealtruism.org/posts/FpjQMYQmS3rWewZ83/effective-altruism-is-a-question-not-an-ideology] and now sits at 179 upvotes out of 125 total votes. Does the inflation adjustment consider that the average size of a vote after 2019 was around 2?
1 · Guy Raveh · 1mo
I love this. I've been really noticing the upvote inflation, even though I only started being active here last November.
5 · aaron_mai · 1mo
Out of curiosity: how do you adjust for karma inflation?
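(A note for readers curious about the mechanics being asked about here: below is a minimal sketch, in Python, of one way an inflation adjustment could work, namely dividing each post's karma by the average karma of posts from its posting year. This is purely an illustrative assumption; it is not necessarily the Forum's actual method, and it adjusts by posting date rather than by when votes were cast, which is exactly the distinction Tobias's question raises.)

```python
# Illustrative sketch only (an assumed method, not the Forum's actual code):
# normalize each post's karma by the mean karma of posts from the same year,
# so a modest score from a low-karma era can compare fairly with today's
# larger scores. Adjusting by *vote* date instead would require per-vote
# timestamps.
from collections import defaultdict

def inflation_adjusted_karma(posts):
    """posts: dicts with 'year' and 'karma' keys.

    Returns new dicts with an extra 'adjusted_karma' key: the post's
    karma divided by the mean karma of posts from its posting year.
    """
    totals = defaultdict(lambda: [0, 0])  # year -> [karma sum, post count]
    for post in posts:
        totals[post["year"]][0] += post["karma"]
        totals[post["year"]][1] += 1
    year_mean = {year: s / n for year, (s, n) in totals.items()}
    return [
        {**post, "adjusted_karma": post["karma"] / year_mean[post["year"]]}
        for post in posts
    ]

# Hypothetical numbers, loosely inspired by the 2014 example above.
posts = [
    {"title": "2014 post A", "year": 2014, "karma": 70},
    {"title": "2014 post B", "year": 2014, "karma": 20},
    {"title": "2022 post A", "year": 2022, "karma": 120},
    {"title": "2022 post B", "year": 2022, "karma": 180},
]
for post in inflation_adjusted_karma(posts):
    print(post["title"], round(post["adjusted_karma"], 2))
```

In this toy example, the 70-karma 2014 post ends up outranking both 2022 posts (about 1.56 vs. at most 1.2) once each year's average is factored in.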
What are some artworks relevant to EA?

From Wikipedia: "The Butter Battle Book is a rhyming story written by Dr. Seuss. It was published by Random House on January 12, 1984. It is an anti-war story; specifically, a parable about arms races in general, mutually assured destruction and nuclear weapons in particular."

[Image: a page from around the middle of the book]

It's short and on point, and I quite like the ending. 

The Effective Altruism Handbook

Hi Chris, thanks for suggesting this! I'll add it. 

EA Speaker Repository?

Thanks for posting this question! You can see an incomplete list of speakers from past EA Global conferences here: https://www.eaglobal.org/speakers/ 

And you can see lots of videos here: https://www.youtube.com/c/EffectiveAltruismVideos/featured 

(Although you might already be aware of both of these resources.)

3 · Lauren Zitney · 1mo
Thank you so much, Lizka! I will take a look at these!
EA Forum feature suggestion thread

Thanks for this suggestion! You can in fact see your past upvotes, although the feature is really not easily discoverable right now, sadly. 

1 · Sophia · 1mo
amazing, thanks :)
Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter

I'll echo some of the other commenters: thanks for writing this, I appreciated the post! I don't entirely agree with everything you say, but I do really want to see more art and thought put into aesthetics.  (On the other hand, I'm not sure how much of our resources we should put into this.)

You might be interested in: 

…
5 · Étienne Fortier-Dubois · 1mo
Great links, thanks! I agree that how many resources to put into this is a core question. It seems plausible to me that there's currently a low-level baseline of caring about aesthetics, fiction, art, etc. that has been sufficient so far, but EA will need a bit more intentionality as it grows.
Against “longtermist” as an identity

Thanks for this comment! 

I think you're right that I'm proving too much with the broad argument, and I like your reframing of the arguments I'm making as things to be wary of. I'm still uncomfortable with longtermism-as-identity, though, possibly because I'm less certain of the four beliefs in (0). 

I'd be interested in drawing the boundaries more carefully and trying to see when a worldview (that is dependent on empirical knowledge) can safely(ish) become an identity without messing too much with my ability to reason clearly. 

3 · Zach Stein-Perlman · 1mo
+1 Also, I think this all depends on the person in addition to the identity: both what someone believes and their epistemic/psychological characteristics are relevant to whether they should identify as [whatever]. So I would certainly believe you that you shouldn't currently identify as a longtermist, and I might be convinced that a significant number of self-identified longtermists shouldn't so identify, but I highly doubt that nobody should.
Against “longtermist” as an identity

Fair point, thanks!

I think it's probably not great to have "effective altruist" as an identity either (I largely agree with Jonas's post and the others I linked), although I disagree with the case you're making for this. 

I think that my case against EA-as-identity would be more on the (2) side, to use the framing of your post. Yours seems to be from (1), and based (partly) on the claim that "EA" requires the assumption that "you have to try to do the most good" (which I think is false). (I also think you're pointing to the least falsifiable of t…

4 · Jay Bailey · 1mo
One thing I'm curious about: how do you effectively communicate the concept of EA without identifying as an effective altruist?
Lizka's Shortform

I keep coming back to this map/cartogram. It's just so great. 

2 · DavidNash · 1mo
I tried to do something similar a while ago looking at under-5 mortality. [https://imgur.com/sPO1UtK]
New? Start here! (Useful links)

Thanks for pointing this out! Yeah, the link was broken, but it should work now. 

Open Thread: Spring 2022

The shortform should in fact appear in recent activity; not sure what happened there. 

And I agree that we should grow and develop low-barrier ways of interacting with the Forum.

EA Forum feature suggestion thread

Thanks for pointing out that this is not discoverable! I've added a note about this to the user manual, but I agree that it should also just be easier to notice as you're exploring the platform. 

EA Forum feature suggestion thread

Thanks for pointing this out, and for linking to the user! I've deleted their account. 

For now, if you ever come across a spam user, please feel free to let me know (you can DM me on the Forum or you can email forum@effectivealtruism.org), but I agree that a feature like this should exist. 

My thoughts on nanotechnology strategy research as an EA cause area

As a moderator, I agree with Michael. The comment Michael's replying to goes against Forum norms.

List of lists of EA-related open philosophy research questions

Thanks for sharing this! 

I also recommend looking at Michael's "central directory of open research questions," which has a lot of topics to explore. 

The Rodent Birth Control Landscape

I really appreciated this in-depth post, both because it seems like a well-researched analysis of a problem with some concrete suggestions and because it's easy to read at different levels; the section headers and summaries made it really easy to get a sense of the overall take and then dive deeper into the sub-topics that I was most confused about. 

Thank you! I was hoping it would be useful to people to consult just the heading they were curious about.

You should write on the EA Forum

I agree that a lot of this is baked into the UX, and really appreciate the feedback!

Confused about funding shortages and earning to give
Answer by Lizka · Apr 26, 2022 · 10

I think a recent post has a good discussion on this topic: "EA needs money more than ever"

What are some artworks relevant to EA?

I keep coming back to this Calvin and Hobbes strip, which captures an important part of the EA mindset (something we're trying to fight), I think:

Relevant links: 

Editing Advice for EA Forum Users

"I suspect there is a decent amount of overlap"

I strongly agree. I struggle a lot with points 1 and 2. I've also seen many Forum posts make the mistakes you describe. :) 

(Thanks for posting!)

Should we have a tag for 'unfunded ideas/projects' on the EA Forum wiki, and if so, what should we call it?

Agreed that there should be a clearer system here. Currently, a number of related tags exist: 

  1. Funding request (open)
  2. EA funding
  3. Room for more funding
  4. Funding opportunities
  5. Requests (open)
  6. Bounty (open)
  7. Job listing (open)
  8. Take action (which also lists lots of other things)

We're reworking the tagging system a bit, so let us know if you have more ideas! 

Avian influenza is causing farmers to kill millions of chickens

Thanks for posting this. I really appreciate it in part because it's a clear write-up of something important that's happening that might not be on a lot of people's radars. 

Questions I have after reading this : 

  • How unusual is this situation? (How often is there a flu that's spreading and as pathogenic as H5N1?)
  • Is there anything people reading this or people in EA can or should do?
6 · Charles He · 2mo
To onlookers: These flus are endemic to factory farms; they occur constantly, and there is basically always an outbreak in the US or Canada. (Currently there appears to be a major flare-up, inferring from the existence of the article.) I can't immediately search for or show the endemic nature, as I am on mobile, but one can confirm this by using Google with the date-range feature.

Note that these diseases are generally not human-to-human transmissible; that is one reason they don't attract alarm. Another reason they don't get attention is how normalized or ignored the situation on factory farms is. The environment in factory farms is so bizarre and so hostile to animals that it is emotionally upsetting to receive factual descriptions of the situation, and so the situation is self-censoring.

Relevant to the OP: entering certain kinds of factory farms requires full suiting in biohazard suits, for the very reason of preventing these outbreaks. This vulnerability isn't normal. It exists because the animals are essentially very physically unhealthy and vulnerable throughout their lives.
EA coworking/lounge space on gather.town

Thanks for setting this up! 

In case anyone's interested, there's also the EA Focusmate group.

Forecasting Newsletter: March 2022

Thanks for this newsletter, and congrats on getting to 1000!

2 · NunoSempere · 3mo
Cheers
Is there a list of causes that have been evaluated and rejected?

I agree that collections of "we investigated a possible intervention/focus area and decided not to go for it for XYZ reasons" could be a really useful resource.

 It's also probably worth emphasizing that there aren't causes that are universally rejected by EA as a community or movement. (As you say, this might be organization-specific: some organizations will have decided to focus on some "causes" -- specific risks, or specific philosophies or approaches to improving the world -- over others.)

I don't have a good answer to this, but I will say that the …

COVID memorial: 1ppm

I really appreciate this post (and the other two obituaries Gavin posted). (Thank you!)

EA should taboo "EA should"

This is incredibly useful, thanks for pointing it out! Adding it to the "Semi-related thoughts" section. :)

1 · A_lark · 3mo
This also seems relevant: Shoulding at the Universe: https://m.youtube.com/watch?v=RpXyy2RLnEU
Is misinformation a serious problem, and is it tractable?

Excerpt from Deepfakes: A Grounded Threat Assessment - Center for Security and Emerging Technology (I haven't read the whole paper):

This paper examines the technical literature on deepfakes to assess the threat they pose. It draws two conclusions. First, the malicious use of crudely generated deepfakes will become easier with time as the technology commodifies. Yet the current state of deepfake detection suggests that these fakes can be kept largely at bay. 

Second, tailored deepfakes produced by technically sophisticated actors will represent the grea…