New & upvoted


Posts tagged community

Quick takes

Animal Justice Appreciation Note: Animal Justice et al. v. A.G. of Ontario (2024) was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it. Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!
5
harfe
2h
0
Consider donating all or most of your Mana on Manifold to charity before May 1. Manifold is making multiple changes to how the platform works. You can read their announcement here. The main reason to donate now is that Mana will be devalued from the current 1 USD:100 Mana to 1 USD:1000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then.
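To make the size of the devaluation concrete, here is a minimal sketch of the arithmetic at the two stated rates; the 10,000-Mana balance is a made-up example.

```python
mana = 10_000  # example balance; substitute your own

# Current rate (until May 1): 100 Mana per 1 USD.
usd_if_donated_now = mana / 100      # -> 100.0 USD to charity

# Announced rate (from May 1): 1000 Mana per 1 USD.
usd_if_donated_later = mana / 1_000  # -> 10.0 USD to charity

print(usd_if_donated_now, usd_if_donated_later)  # 100.0 10.0
```

Under the announced rates, the same balance is worth ten times less to charity after May 1, which is the whole case for donating before then.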
GiveWell and Open Philanthropy just made a $1.5M grant to Malengo! Congratulations to @Johannes Haushofer and the whole team. This seems like such a promising intervention from a wide variety of viewpoints.
Quote from VC Josh Wolfe:

> Biology. We will see an AWS moment where instead of you having to be a biotech firm that opens your own wet lab or moves into Alexandria Real Estate, which, you know, specializes in hosting biotech companies in all these different regions proximate to academic research centers, you will be able to just take your experiment and upload it to the cloud where there are cloud-based robotic labs. We funded some of these. There's one company called Stratios.
>
> There's a ton that are gonna come on wave, and this is exciting because you can be a scientist on the beach in the Bahamas, pull up your iPad, run an experiment. The robots are performing 90% of the activity of pouring something from a beaker into another, running a centrifuge, and then the data that comes off of that.
>
> And this is the really cool part. Then the robot and the machines will actually say to you, “Hey, do you want to run this experiment but change these 4 parameters or these variables?” And you just click a button “yes”, as though it's reverse-prompting you, and then you run another experiment. So the implication here is the boost in productivity for science, for generation of truth, of new information, of new knowledge. That to me is the most exciting thing. And the companies that capture that, forget about the societal dividend, I think are gonna make a lot of money.

https://overcast.fm/+5AWO95pnw/46:15
An alternate stance on moderation (from @Habryka). This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that (I guess) its moderators respond to individual posts less often, but more moderated in that (I guess) it rate-limits people more, without giving reasons. I found it thought-provoking. I'd recommend reading it.

> Thanks for making this post!
>
> One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to participate in discussion on their own posts (so seeing a harsh rate-limit of something like "1 comment per 3 days" is not equivalent to a general ban from LessWrong, but should be more interpreted as "please comment primarily on your own posts", though of course it shares many important properties of a ban).

This is pretty much the opposite of the EA Forum's approach, which favours bans.

> Things that seem most important to bring up in terms of moderation philosophy:
>
> Moderation on LessWrong does not depend on effort
>
> "Another thing I've noticed is that almost all the users are trying. They are trying to use rationality, trying to understand what's been written here, trying to apply Bayes' rule or understand AI. Even some of the users with negative karma are trying, just having more difficulty."
>
> Just because someone is genuinely trying to contribute to LessWrong does not mean LessWrong is a good place for them. LessWrong has a particular culture, with particular standards and particular interests, and I think many people, even if they are genuinely trying, don't fit well within that culture and those standards.
>
> In making rate-limiting decisions like this I don't pay much attention to whether the user in question is "genuinely trying" to contribute to LW; I am mostly just evaluating the effects I see their actions having on the quality of the discussions happening on the site, and the quality of the ideas they are contributing.
>
> Motivation and goals are of course a relevant component to model, but that mostly pushes in the opposite direction, in that if I have someone who seems to be making great contributions, and I learn they aren't even trying, then that makes me more excited, since there is upside if they do become more motivated in the future.

I sense this is quite different to the EA Forum too. I can't imagine a mod saying "I don't pay much attention to whether the user in question is 'genuinely trying'". I find this honesty pretty stark. It feels like a thing moderators aren't allowed to say: "We don't like the quality of your comments and we don't think you can improve."

> Signal-to-noise ratio is important
>
> Thomas and Elizabeth pointed this out already, but just because someone's comments don't seem actively bad doesn't mean I don't want to limit their ability to contribute. We do a lot of things on LW to improve the signal-to-noise ratio of content on the site, and one of those things is to reduce the amount of noise, even if the mean of what we remove looks not actively harmful.
>
> We of course also do other things than remove some of the lower-signal content to improve the signal-to-noise ratio. Voting does a lot, how we sort the frontpage does a lot, subscriptions and notification systems do a lot. But rate-limiting is also a tool I use for the same purpose.
>
> Old users are owed explanations, new users are (mostly) not
>
> I think if you've been around for a while on LessWrong, and I decide to rate-limit you, then it makes sense for me to make some time to argue with you about that, and give you the opportunity to convince me that I am wrong. But if you are new, and haven't invested a lot in the site, then I think I owe you relatively little.
>
> I think in doing the above rate-limits, we did not do enough to give established users the affordance to push back and argue with us about them. I do think most of these users are relatively recent, or are users we've been very straightforward with since shortly after they started commenting that we don't think they are breaking even on their contributions to the site (like the OP Gerald Monroe, with whom we had 3 separate conversations over the past few months), and for those I don't think we owe them much of an explanation. LessWrong is a walled garden.
>
> You do not by default have the right to be here, and I don't want to, and cannot, accept the burden of explaining to everyone who wants to be here but who I don't want here, why I am making my decisions. As such, a moderation principle that we've been aspiring to for quite a while is to let new users know as early as possible if we think them being on the site is unlikely to work out, so that if you have been around for a while you can feel stable, and also so that you don't invest in something that will end up being taken away from you.
>
> Feedback helps a bit, especially if you are young, but usually doesn't
>
> Maybe there are other people who are much better at giving feedback and helping people grow as commenters, but my personal experience is that giving users feedback, especially the second or third time, rarely tends to substantially improve things.
>
> I think this sucks. I would much rather be in a world where the usual reasons why I think someone isn't positively contributing to LessWrong were of the type that a short conversation could clear up and fix, but alas it does not appear so, and after having spent many hundreds of hours over the years giving people individualized feedback, I don't really think "give people specific and detailed feedback" is a viable moderation strategy, at least more than once or twice per user. I recognize that this can feel unfair on the receiving end, and I also feel sad about it.
>
> I do think the one exception here is if people are young or are non-native English speakers. Do let me know if you are in your teens or you are a non-native English speaker who is still learning the language. People really do get a lot better at communication between the ages of 14 and 22, and people's English does get substantially better over time, and this helps with all kinds of communication issues.

Again this is very blunt, but I'm not sure it's wrong.

> We consider legibility, but it's only a relatively small input into our moderation decisions
>
> It is valuable and a precious public good to make it easy to know which actions you take will cause you to end up being removed from a space. However, that legibility also comes at great cost, especially in social contexts. Every clear and bright-line rule you outline will have people butting right up against it, and de facto, in my experience, moderation of social spaces like LessWrong is not the kind of thing you can do while being legible in the way that, for example, modern courts aim to be legible.
>
> As such, we don't have laws. If anything we have something like case law, which gets established as individual moderation disputes arise and which we then use as guidelines for future decisions. But also a huge fraction of our moderation decisions are downstream of complicated models we formed about what kind of conversations and interactions work on LessWrong, and what role we want LessWrong to play in the broader world, and those shift and change as new evidence comes in and the world changes.
>
> I do ultimately still try pretty hard to give people guidelines and to draw lines that help people feel secure in their relationship to LessWrong, and I care a lot about this, but at the end of the day I will still make many from-the-outside-arbitrary-seeming decisions in order to keep LessWrong the precious walled garden that it is.
>
> I try really hard to not build an ideological echo chamber
>
> When making moderation decisions, it's always at the top of my mind whether I am tempted to make a decision one way or another because someone disagrees with me on some object-level issue. I try pretty hard to not have that affect my decisions, and as a result have what feels to me a subjectively substantially higher standard for rate-limiting or banning people who disagree with me than for people who agree with me. I think this is reflected in the decisions above.
>
> I do feel comfortable judging people on the methodologies and abstract principles that they seem to use to arrive at their conclusions. LessWrong has a specific epistemology, and I care about protecting that. If you are primarily trying to...
>
> * argue from authority,
> * don't like speaking in probabilistic terms,
> * aren't comfortable holding multiple conflicting models in your head at the same time,
> * or are averse to breaking things down into mechanistic and reductionist terms,
>
> then LW is probably not for you, and I feel fine with that. I feel comfortable reducing the visibility or volume of content on the site that is in conflict with these epistemological principles (of course this list isn't exhaustive; in general the LW Sequences are the best pointer towards the epistemological foundations of the site).

It feels cringe to read that, basically, if I don't get the Sequences, LessWrong might rate-limit me. But it is good to be open about it. I don't think the EA Forum's core philosophy is as easily expressed.

> If you see me or other LW moderators fail to judge people on epistemological principles, and instead see us directly rate-limiting or banning users on the basis of object-level opinions that, even if they seem wrong, seem to have been arrived at via relatively sane principles, then I do really think you should complain and push back at us. I see my mandate as head of LW as extending only to enforcing what seems to me the shared epistemological foundation of LW, not to enforcing my own object-level beliefs on the participants of this site.
>
> Now some more comments on the object level:
>
> I overall feel good about rate-limiting everyone on the above list. I think it will probably make the conversations on the site go better and make more people contribute to the site.
>
> Us doing more extensive rate-limiting is an experiment, and we will see how it goes. As kave said in the other response to this post, the rule that suggested these specific rate-limits does not seem like it has an amazing track record, though I currently endorse it as something that calls things to my attention (among many other heuristics).
>
> Also, if anyone reading this is worried about being rate-limited or banned in the future, feel free to reach out to me or other moderators on Intercom. I am generally happy to give people direct and frank feedback about their contributions to the site, as well as how likely I am to take future moderator actions. Uncertainty is costly, and I think it's worth a lot of my time to help people understand to what degree investing in LessWrong makes sense for them.


Recent discussion

Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.


Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse.

...

Not to be confused with The Macrostrategy Partnership: https://www.macrostrategy.co.uk/

15
Linch
7h
Sure, social aggression is a rather subjective call. I do think decoupling/locality norms are relevant here. "Garden variety incompetence" may not have been the best choice of words on Sean's part,[1] but it did seem like a) a locally scoped comment specifically answering a question that people on the forum understandably had, b) much of it empirically checkable (other people formerly at FHI, particularly ops staff, could present their perspectives re: relationship management), and c) Bostrom's capacity as director is very much relevant to the discussion of the organization's operations and why the organization shut down.

Your comment first presents what I consider to be a core observation that is true and important, namely that FHI did a lot of good work, and this type of magic might not be easy to replicate if you do everything with apparent garden-variety competence. But it then brought in a bunch of what I consider to be extraneous details on Sean's competency, judgment, and integrity. The points you raise are also more murkily defined and harder to check. So overall I think of your comment as more escalatory.

1. ^ Or maybe it was, under the circumstances. I don't know the details here; maybe the phrase was carefully chosen.
12
Rebecca
8h
The point I’m trying to make is that there are many ways you can be influential (including with people who matter), and only some of them increase prestige. People can talk about your ideas without ever mentioning or knowing your name; you can be a polarising figure whom a lot of influential people like but whom it’s taboo to mention; and so on. I also think you originally meant (or conveyed) a broader meaning of influential, as you mention economic output and the dustbins of history, which I would consider to be about broad influence.

This is an anonymous account (Ávila is not a real person). I am posting on this account to avoid potentially negative effects on my future job prospects.

SUMMARY:

  • I've been rejected from 18 jobs or internships, 12 of which are "in EA."
  • I briefly spell out my background information
...

Hey, Zack from XLab here. I'd be happy to provide a couple of sentences of feedback on your application if you send me an email.

The most common reasons for rejection before an interview were things like: no indication of having US citizenship or a student visa; ChatGPT-seeming responses; responses to the exercise that didn't clearly and compellingly indicate its relevance to global catastrophic risk mitigation; or a lack of clarity about how mission-aligned the applicant was.

We appreciate the feedback, though.      

2
GraceAdams
3h
Thanks for the kind feedback about our hiring process! I'll encourage the team to write up how we approached hiring for some roles where we think we ran a good process. [Edit: Michael Townsend actually wrote this in the past about our hiring process, which is worth reading.]
2
Rebecca
4h
Have you applied for 80k career advice?

Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk).

Introduction

I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a ...


I think the idea of a motivational shadow is a good one, and it can be useful to think about these sorts of filters on what sorts of evidence/argument/research people are willing to share, especially if people are afraid of social sanction.

However, I am less convinced by this concrete application. You present a hierarchy of activities in order of effort required to unlock, and suggest that something like 'being paid full time to advocate for this' pushes people up multiple levels:

  • Offhand comment
  • Irate tweet
  • Low-effort blog post
  • Sensationalised newspaper article
...
4
Linch
5h
I genuinely don't know if this is an interesting/relevant question that's unique to EA. To me, the obvious follow-up question is whether EA is unique or special in attracting this (average) level of vitriol in its critiques. That is, is the answer to "why is so much EA criticism hostile and lazy?" the same as the answer to "why is so much criticism, period, hostile and lazy?" Or are there specific factors of EA that are relevant here?

I haven't been sufficiently embedded in other intellectual or social movements. I was a bit involved in global development before and don't recall much serious vitriol; maybe critics like Easterly or Moyo are the closest. I guess maybe MAGA implicitly doesn't like global dev? But on the other hand, I've heard people involved in, say, animal rights say that the "critiques" of EA are all really light and milquetoast by comparison. I'd really appreciate answers from people who have been more "around the block" than I have.
2
Rebecca
9h
This is interesting, thanks. Though I wanted to flag that the volume of copyediting errors means I’m unlikely to share it with others.
yanni kyriacos posted a Quick Take 4h ago

I think it would be good if lots of EAs answered this Twitter poll, so we could get a better sense of the community's views on the topic of Enlightenment/Awakening: https://twitter.com/SpencrGreenberg/status/1782525718586413085?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet


Crosspost of my blog.  

You shouldn’t eat animals in normal circumstances. That much is, in my view, quite thoroughly obvious. Animals undergo cruel, hellish conditions that we’d confidently describe as torture if they were inflicted on a human (or even a dog). No hamburger is worth that kind of cruelty. However, not all animals are the same. Contra Napoleon in Animal Farm, all animals are not equal.

Cows are big. The average person eats 2400 chickens but only 11 cows in their life. That’s mostly because chickens are so many times smaller than cows, so you can only get so many chicken sandwiches out of a single chicken. But how much worse is chicken than cow?

Brian Tomasik devised a helpful suffering calculator chart. It has various columns—one for how sentient you think the animals are compared to humans, one for how long the animals live, etc. You can change the numbers around if you...
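To make the structure of that calculation concrete, here is a minimal sketch of the arithmetic such a chart encodes. The lifetime counts (2,400 chickens, 11 cows) are from the post; the days-alive and sentience weights are placeholder values I made up, which are exactly the numbers the chart lets you vary.

```python
# Weighted suffering per lifetime of eating each animal:
# count eaten x days each animal lives x sentience weight (vs. humans).
animals = {
    "chicken": {"count": 2400, "days_alive": 42,  "sentience": 0.05},
    "cow":     {"count": 11,   "days_alive": 550, "sentience": 0.10},
}

for name, a in animals.items():
    suffering_days = a["count"] * a["days_alive"] * a["sentience"]
    print(f"{name}: {suffering_days:,.0f} weighted suffering-days")

# chicken: 5,040 weighted suffering-days
# cow: 605 weighted suffering-days
```

On these made-up weights, chicken comes out roughly eight times worse in total, despite each cow living longer, because of the sheer number of chickens eaten.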


How much weight should we give the long-term future, given that nobody may be around to experience it? Both economists and philosophers see extinction risk as a rationale for discounting future costs and benefits. David Thorstad has recently claimed it poses a major challenge...


I've raised related points here, and also here with a follow-up, about how exponential decay with a fixed decay rate is not a good model to use for estimating long-term survival probability.
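For anyone who wants the gist without clicking through: if you are uncertain about the decay rate, long-run survival is dominated by the low-rate worlds, so a mixture of exponentials decays far more slowly than a single exponential at the mean rate. A minimal sketch, with made-up rates:

```python
import math

T = 10_000  # horizon in years

# Fixed-rate model: 0.1%/year extinction risk forever.
p_fixed = math.exp(-0.001 * T)  # ~4.5e-05

# Same mean rate, but spread across three equally credible worlds.
rates = [0.0001, 0.001, 0.0019]  # mean is still 0.001/year
p_mixture = sum(math.exp(-r * T) for r in rates) / len(rates)  # ~0.12

print(f"fixed rate:     {p_fixed:.1e}")
print(f"uncertain rate: {p_mixture:.2f}")
```

Over 10,000 years the mixture model gives survival odds thousands of times higher than the fixed-rate model, which is why deriving a constant discount rate from a point estimate of extinction risk can badly understate the expected value of the long-term future.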
