This is a special post for quick takes by Nathan Young. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Bird flu is probably fine right now. Let's not cry wolf.

I've been looking into H5N1 bird flu and built this dashboard: https://birdflurisk.com

To me the indicators suggest it's likely going to be fine. You can see the forecasts are pretty low, and even if they resolve positive, it probably won't be a big deal for humans (see note below).

I think it's worth becoming well calibrated on risk, ie only crying wolf when there is a wolf. Right now I see no wolf, so as a community we improve our calibration by saying "bird flu will almost certainly be fine".

That said, it probably will involve farms full of chickens being tortured to death if they catch the disease. This is tragic. I suggest it requires a different comms strategy though. 

There may also be inflation, with the political ramifications that brings.

Let me know what would make the dashboard more useful to you.

2
Nix_Goldowsky-Dill
Nice dashboard! It was confusing to see the hover text "formed from an average" when all three of the indicator values were higher than the 'average'. I'd suggest making a concise version of the explanation in "how is the risk index calculated" more prominent, or removing the word "average" from the hover text.
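A quick arithmetic check on why the hover text reads oddly: a true weighted average (non-negative weights summing to 1) always lands between its smallest and largest input, so a displayed index sitting below all three indicators cannot be a plain average. A minimal sketch with made-up indicator names and numbers (not the dashboard's real data or formula):

```python
# Hypothetical indicator values on a 0-10 scale (illustrative only).
indicators = {"h2h_transmission": 4.0, "mammal_outbreaks": 6.0, "human_cases": 5.0}
weights = {"h2h_transmission": 0.5, "mammal_outbreaks": 0.3, "human_cases": 0.2}

# A weighted average with non-negative weights summing to 1...
risk_index = sum(indicators[k] * weights[k] for k in indicators)

# ...must lie between the smallest and largest indicator, so an index
# below *all* of its inputs cannot be a plain (weighted) average.
assert min(indicators.values()) <= risk_index <= max(indicators.values())
print(round(risk_index, 2))  # 4.8
```

If the real index uses something non-convex (a log scale, a max, or a multiplier), the hover text's "average" wording would indeed mislead.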
1
Ebenezer Dukakis
Do you know of anything which directly addresses the argument I made here? My vague impression is that the optimism I see is based on outside-view type forecasts, and people have mostly not taken inside views. I haven't thought much about bird flu recently though.

I feel like I want 80k to do more cause prioritisation if they are gonna direct so many people. Seems like 5 years ago they had their whole ranking thing which was easy to check. Now I am less confident in the quality of work that is directing lots of people in a certain direction.

Idk, many of the people they are directing would just do something kinda random which an 80k rec easily beats. I'd guess the number of people for whom 80k makes their plans worse in an absolute sense is kind of low and those people are likely to course correct.

Otoh, I do think people/orgs in general should consider doing more strategy/cause prio research, and if 80k were like "we want to triple the size of our research team to work out the ideal marginal talent allocation across longtermist interventions" that seems extremely exciting to me. But I don't think 80k are currently being irresponsible (not that you explicitly said that, for some reason I got a bit of that vibe from your post).

8
Ben Millwood🔸
80k could be much better than nothing and yet still missing out on a lot of potential impact, so I think your first paragraph doesn't refute the point.
4
NickLaing
I agree with this, and have another tangential issue, which might be part of why cause prioritisation seems unclear: their website seems confusing and overloaded to me. Compare Giving What We Can's page, which has good branding and simple language. IMO the 80,000 Hours page has too much text and too much going on on the front page. Bring both websites up on your phone and judge for yourself. These are the front page of EA for many people, so they are pretty important. These websites aren't really for most of us; they are for fresh people, so they need to be punchy, straightforward and attractive. A couple of pages further back, things can get heavier.

Compare Giving What We Can's page, which has good branding and simple language. IMO the 80,000 Hours page has too much text and too much going on on the front page. Bring both websites up on your phone and judge for yourself.

My understanding is that 80k have done a bunch of A/B testing which suggested their current design outcompetes ~most others (presumably in terms of click-throughs / amount of time users spend on key pages).

You might not like it, but this is what peak performance looks like.
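The kind of A/B comparison described above is typically settled with something like a two-proportion z-test on click-through rates. A minimal sketch with entirely hypothetical numbers (I have no knowledge of 80k's actual setup, metrics, or data):

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pool the two samples to estimate the shared rate under the null hypothesis.
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: current design vs. a sparser variant.
z = two_proportion_z(520, 4000, 450, 4000)
print(round(z, 2))  # 2.4; |z| > 1.96 is significant at the 5% level
```

With these made-up counts the current design's higher click-through would clear the usual significance bar, which is the shape of evidence an A/B programme like the one described would produce.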

2
NickLaing
Love this response, ha, "peak performance". I hope I'm wrong and this is the deal; that would be an excellent approach. It would be interesting to see what the other designs they tested were, but obviously I won't.

Have your EA conflicts on... THE FORUM!

In general, I think it's much better to first attempt to have a community conflict internally before having it externally. This doesn't really apply to criminal behaviour or sexual abuse. I am centrally talking about disagreements, eg the Bostrom stuff, the fallout around the FTX stuff, the Nonlinear stuff, and now this Manifest stuff.

Why do I think this?

  • If I want to credibly signal I will listen and obey norms, it seems better to start with a small discourse escalation rather than a large one. Starting a community discussion on twitter is like jumping straight to a shooting war. 
  • Many external locations (eg twitter, the press) have very skewed norms/incentives relative to the forum, and so many parties can feel like they are the victim. I find that when multiple parties feel they are weaker and victimised, that is likely to cause escalation. 
  • Many spaces have less affordance for editing comments, seeing who agrees with whom, or having a respected mutual party say "woah, hold up there"
  • It is hard to say "I will abide by the community sentiment" if I have already started the discussion elsewhere in order to shame people. And if I don't intend to abide by the commu
... (read more)

This is also an argument for the forum's existence generally, if many of the arguments would otherwise be had on Twitter.

2
NickLaing
For sure when it comes to any internet based discussion, to promote quality discourse slowish long form >>>> rapid short form.
3
Sinclair Chen
I agree, with the caveat that certain kinds of more reasonable discussion can't happen on the forum because the forum is where people are fighting. For instance, because of the controversy I've been thinking a lot recently about antiracism - like what would effective antiracism look like; what lessons can we take from civil rights and what do we have to contribute (cool ideas on how to leapfrog past or fix education gaps? discourse norms that can facilitate hard but productive discussions about racism? advocating for literal reparations?). I deleted a shortform I was writing on this because I think ppl would not engage with it positively, and I suspect I am missing the point somehow. I suspect people actually just want to fight, and the point is to be angry. On the meta level, I have been pretty frustrated (with both sides, though not equally) with the manner in which some people are arguing, the types of arguments they use, and the motivations behind them. I think in some ways it is better to complain about that off the forum. It's worse for feedback, but that's also a good thing, because the cycle of righteous rage does not continue on the forum. And you get different perspectives. (I wonder if a crux here is that you have a lot of twitter followers and I don't. If you tweet you are speaking to an audience; if I tweet I am speaking to weird internet friends.)
2
Nathan Young
So I sort of agree, though depending on the topic I think it could quickly get a lot of eyes on it. I would prefer to discuss most things that are controversial/personal, not on twitter.

I note that in some sense I have lost trust that the EA community gives me a clear prioritisation of where to donate.

Some clearer statements:

  • I still think GiveWell does great work
  • I still generally respect the funding decisions of Open Philanthropy
  • I still think this forum has a higher standard than most places
  • It is hard to know exactly how high impact animal welfare funding opportunities interact with x-risk ones
  • I don't know what the general consensus on the most impactful x-risk funding opportunities are
  • I don't really know what orgs do all-considered work on this topic. I guess the LTFF?
  • I am more confused/inattentive and this community is covering a larger set of possible choices so it's harder to track what consensus is

Since it looks like you're looking for an opinion, here's mine:

To start, while I deeply respect GiveWell's work, in my personal opinion I still find it hard to believe that any GiveWell top charity is worth donating to if you're planning to do the typical EA project of maximizing the value of your donations in a scope sensitive and impartial way. ...Additionally, I don't think other x-risks matter nearly as much as AI risk work (though admittedly a lot of biorisk stuff is now focused on AI-bio intersections).

Instead, I think the main difficult judgement call in EA cause prioritization right now is "neglected animals" (eg invertebrates, wild animals) versus AI risk reduction.

AFAICT this also seems to be somewhat close to the overall view of the EA Forum as well, as you can see in some of the debate weeks (animals smashed humans) and the Donation Election (where neglected animal orgs were all at the top, followed by PauseAI).

This comparison is made especially difficult because OP funds a lot of AI but not any of the neglected animal stuff, which subjects the AI work to significantly more diminished marginal returns.

To be clear, AI orgs still do need money. I think there's a vibe that ... (read more)

7
CB🔸
I agree with this comment. Thanks for this clear overview. The only element where I might differ is whether AI really is >10x neglected animals. My main issue is that while AI is a very important topic, it's very hard to know whether AI organizations will have an overall positive, negative, or neutral impact. First, it's hard to know what will work and what won't accidentally increase capabilities. More importantly, if we end up in a future aligned with human values but not with animals or artificial sentience, this could still be a very bad world in which a large number of individuals are suffering (e.g., if factory farming continues indefinitely). My tentative and not very solid view is that work at the intersection of AI x animals is promising (eg work that aims to get AI companies to commit to not mistreating animals), and attempts for a pause are interesting (since they give us more time to figure stuff out). If you think that an aligned AGI will truly maximise global utility, you will have a more positive outlook. But since I'm rather risk averse, I devote most of my resources to neglected animals.
9
Peter Wildeford
I'm very uncertain about whether AI really is >10x neglected animals, and I cannot emphasize enough that reasonable and very well-informed people can disagree on this issue; I could definitely imagine changing my mind on this over the next year. This is why I framed my comment the way I did, hopefully making it clear that donating to neglected animal work is very much an answer I endorse. I also agree it's very hard to know whether AI organizations will have an overall positive, negative, or neutral impact. I think there are higher-level strategic issues that make the picture very difficult to ascertain even with a lot of relevant information (imo Michael Dickens does a good job of overviewing this, even if I have a lot of disagreements). Also, the private information asymmetry looms large here. I also agree that work that aims to get AI companies to commit to not mistreating animals is an interesting and incredibly underexplored area. I think this is likely worth funding if you're knowledgeable about the space (I'm not) and know of good opportunities (I currently don't). I do think risk aversion is underrated as a reasonable donor attitude, and it does make the case for focusing on neglected animals stronger.
1
CB🔸
Makes sense! I understand the position. Regarding AI x animals donation opportunities, all of this is pretty new but I know a few. Hive launched an AI for Animals website, with an upcoming conference: https://www.aiforanimals.org/ I also know about Electric Sheep, which has run a fellowship on the topic: https://electricsheep.teachable.com/
4
Nathan Young
I think I am happy to take this as the point I am trying to make. I don't see a robust, systematic take on where to donate across animals and AI. Isn't it reasonable to expect the EA community to synthesise one of these, rather than each of us having to do our own?
2
Peter Wildeford
Yeah I think so, though there still is a lot of disagreement about crucial considerations. I think the OP advice list is about as close as it's going to get.
2
Nathan Young
I think that feels like a failure of the community in some sense, or maybe a reduction in ambition.

I think it's normal, and even good that the EA community doesn't have a clear prioritization of where to donate. People have different values and different beliefs, and so prioritize donations to different projects.

It is hard to know exactly how high impact animal welfare funding opportunities interact with x-risk ones

What do you mean? I don't understand how animal welfare campaigns interact with x-risks, except for reducing the risk of future pandemics, but I don't think that's what you had in mind (and even then, I don't think those are the kinds of pandemics that x-risk minded people worry about)

I don't know what the general consensus on the most impactful x-risk funding opportunities are

It seems clear to me that there is no general consensus, and some of the most vocal groups are actively fighting against each other.

I don't really know what orgs do all-considered work on this topic. I guess the LTFF?

You can see Giving What We Can's recommendations for global catastrophic risk reduction on this page[1] (e.g. there's also Longview's Emerging Challenges Fund). Many other orgs and foundations work on x-risk reduction, e.g. Open Philanthropy.

I am more confused/inattentive and th

... (read more)
3
Charlie_Guthmann
do you feel confident about your moral philosophy?
2
Nathan Young
I don't quite know what this means, but probably no.
2
Charlie_Guthmann
To rank interventions or causes as a whole (so not just making apples-to-apples comparisons of outputs), you need a moral framework. Unless you (1) believe there is an objectively correct moral framework and (2) trust that EA is good at both cost-benefit analysis and moral philosophy, I think you may be hoping for too much.

If anyone who disagrees with me on the Manifest stuff and considers themselves inside the EA movement wants to, I'd like to have some discussions with a focus on consensus-building, ie we chat in DMs and then both report some statements we agreed on and some we specifically disagreed on.

Edited:

@Joseph Lemien asked for positions I hold:

  • The EA forum should not seek to have opinions on non-EA events. I don't mean individual EAs shouldn't have opinions; I mean that as a group we shouldn't seek to judge individual events. I don't think we're very good at it.
  •  I don't like Hanania's behaviour either and am a little wary of systems where norm breaking behaviour gives extra power, such as being endlessly edgy. But I will take those complaints to the manifold community internally.
  • CEA is welcome to invite or disinvite whoever it likes to EAGs. Maybe one day I'll complain. But do I want EAGs to invite a load of Manifest's edgiest speakers? Not particularly. 
  • It is fine for there to be spaces with discussion that I find ugly. If people want to go to these events, that's up to them.
  • I dislike having unresolved conflicts which ossify into an inability to talk about things. Someone once told me tha
... (read more)
4
Joseph
Nathan, could you summarize/clarify for us readers what your views are? (or link to whatever comment or document has those views?) I suspect that I agree with you on a majority of aspects and disagree on a minority, but I'm not clear on what your views are. I'd be interested to see some sort of informal and exploratory 'working group' on inclusion-type stuff within EA, and have a small group conversation once a month or so, but I'm not sure if there are many (any?) people other than me that would be interested in having discussions and trying to figure out some actions/solutions/improvements.[1] 1. ^ We had something like this for talent pipelines and hiring (it was High Impact Talent Ecosystem, and it was somehow connected to or organized by SuccessIf, but I'm not clear on exactly what the relationship was), but after a few months the organizer stopped and I'm not clear on why. In fact, I'm vaguely considering picking up the baton and starting some kind of a monthly discussion group about talent pipelines, coaching/developing talent, etc.
2
Nathan Young
Oooh that's interesting. I'd be interested to hear what the conclusions are.
4
Jason
One limitation here: you have a view about Manifest. Your interlocutor would have a different view. But how do we know if those views are actually representative of major groupings? My hunch is that, if equipped with a mind probe, we would find at least two major axes with several meaningfully different viewpoints on each axis. Overall, I'd predict that I would find at least four sizable clusters, probably five to seven.
2
Nathan Young
So I ran a poll with ~100 respondents, and if you want to run the k-means analysis you can find those clusters yourself. The anonymous data is downloadable here: https://viewpoints.xyz/polls/ea-and-manifest/results. Beyond that, yes, you are likely right, but I don't know how to have that discussion better. I tried using polls and upvoted quotes as a springboard in this post (Truth-seeking vs Influence-seeking - a narrower discussion) but people didn't really bite there. Suggestions welcome. It is kind of exhausting to keep trying to find ways to get better samples of the discourse, without a sense that people will eventually go "oh yeah, this convinces me". If I were more confident I would have more energy for it.
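For anyone who wants to try the k-means analysis mentioned above, here is a minimal from-scratch sketch. The data below is a synthetic stand-in, not the actual viewpoints.xyz export (whose real column format may differ); votes are encoded as 1 = agree, -1 = disagree, 0 = skip:

```python
def dist2(p, q):
    """Squared Euclidean distance between two vote vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    """Plain k-means over respondents' vote vectors."""
    # Farthest-point initialisation: deterministic and spreads the seeds out.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each respondent to the nearest centroid
            d = [dist2(p, c) for c in centroids]
            clusters[d.index(min(d))].append(p)
        for i, cl in enumerate(clusters):  # recompute centroids as cluster means
            if cl:
                centroids[i] = [sum(col) / len(cl) for col in zip(*cl)]
    return clusters

# Synthetic stand-in for the poll CSV: two blocs answering 4 statements.
bloc_a = [[1, 1, -1, 0]] * 10
bloc_b = [[-1, -1, 1, 1]] * 10
clusters = kmeans(bloc_a + bloc_b, k=2)
print(sorted(len(c) for c in clusters))  # [10, 10]
```

On the real export you would load each respondent's row of votes as one vector and try a few values of k, since the interesting question is how many distinct camps the data actually supports.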
4
Jason
I don't think those were most of the questions I was looking for, though. This isn't a criticism: running the poll early risks missing important cruxes and fault lines that haven't been found yet; running it late means that much of the discussion has already happened. There are also tradeoffs between viewpoints.xyz being accessible (=better sampling) and the data being rich enough. Limitation to short answer stems with a binary response (plus an ambiguous "skip") lends itself to identifying two major "camps" more easily than clusters within those camps. In general, expanding to five-point Likert scales would help, as would some sort of branching. For example, I'd want to know -- conditional on "Manifest did wrong here" / "the platforming was inappropriate" -- what factors were more or less important to the respondent's judgment. On a 1-5 scale, how important do you find [your view that the organizers did not distance themselves from the problematic viewpoints / the fit between the problematic viewpoints and a conference for the forecasting community / an absence of evidence that special guests with far-left or at least mainstream viewpoints on the topic were solicited / whatever]? And: how much would the following facts or considerations, if true, change your response to a hypothetical situation like the Manifest conference? Again, you can't get how much on a binary response. Maybe all that points to polling being more of a post-dialogue event, and accepting that we would choose discussants based on past history & early reactions. For example, I would have moderately high confidence that user X would represent a stance close to a particular pole on most issues, while I would represent a stance that codes as "~ moderately progressive by EA Forum standards."
5
Nathan Young
Often it feels like I can never please people on this forum. I think the poll is significantly better than no poll. 
3
Jason
Yeah, I agree with that! I don't find it inconsistent with the idea that the reasonable trade-offs you made between various characteristics in the data-collection process make the data you got not a good match for the purposes I would like data for. They are good data for people interested in the answer to certain other questions. No one can build a (practical) poll for all possible use cases, just as no one can build a (reasonably priced) car that is both very energy-efficient and has major towing/hauling chops.
2
Joseph
As useful as viewpoints.xyz is, I will mention that for maybe 50% or 60% of the questions, my reaction was "it depends." I suppose you can't really get around that unless the person creating the questions spends much more time carefully crafting them (which sort of defeats the purpose of a quick-and-dirty poll), or unless you do interviews (which are of course much more costly). I do think there is value in the quick-and-dirty MVP version, but its usefulness has a pretty noticeable upper bound.

Holding powerful people accountable. 

Reposted from a twitter thread.

I have made a number of prediction markets holding powerful people accountable[1]. Powerful people (and their friends) really can exert a lot of pressure with an angry email or dm (n = 2-5). If you are powerful, please consider how big your muscles are before you give pushback.

I have quite thick skin, but I don't know whether such people are going around dming everyone like this. Likewise, this is a flaw I sometimes have, and I have learned to be very light touch on pushback to non-frie... (read more)

8
Larks
Presumably not, as most people are not going around creating crime prediction markets that dramatically raise the salience of an implicit accusation. From their point of view I can see their response as being extremely restrained - you are making probabilistic public accusations that will predictably make them look bad, no matter how low the market price, and they're not responding publicly at all.
5
Ian Turner
Can you give an example of the sort of prediction market you’re referring to, or what kind of consequences there have been?
2
Nathan Young
I have made many markets about important people: whether they will commit crimes, whether things were crimes, whether there will be conflict, whether things will replicate or are accurate. In at least 3 cases I know, from people telling me, that it was extremely costly to this or that person, emotively or in terms of blame.
2
Chris Leong
I don't know exactly what markets you're referring to, but have you considered that they could be right?  And maybe it's worth the trade-off, but if you're consistently applying the principle of "more information is always good", you should want to know when people are annoyed or angry with you (although it might turn out that when you reflect you conclude that there are limits on this principle).

I want to once again congratulate the forum team on this voting tool. I think by doing this, the EA forum is at the forefront of internal community discussions. No communities do this well and it's surprising how powerful it is. 

Suggestion. 

Debate weeks every other week and we vote on what the topic is.

I think if the forum had a defined topic (especially) in advance, I would be more motivated to read a number of posts on that topic. 

One of the benefits of the culture war posts is that we are all thinking about the same thing. If we did that deliberately on useful topics, perhaps with dialogues from experts, that would be good.

9
Jason
Every other week feels exhausting, at least if the voting went in a certain direction.
7
NickLaing
I would pitch for every 2 months, but I like the sentiment of doing it a bit more.
5
Toby Tremlett🔹
A crux for me at the moment is whether we can shape debate weeks in a way which leads to deep rather than shallow engagement. If we were to run debate weeks more often, I'd (currently) want to see them causing people to change their mind, have useful conversations, etc... It's something I'll be looking closely at when we do a post-mortem on this debate week experiment. 
2
Toby Tremlett🔹
Also, every other week seems prima facie a bit burdensome for un-interested users. Additionally, I want top-down content to only be a part of the Forum. I wouldn't want to over-shepherd discussion and end up with less wide-ranging and good quality posts.  Happy to explore other ways to integrate polls etc if people like them and they lead to good discussions though. 
4
yanni kyriacos
Hi Nathan! I like suggestions and would like to see more suggestions. But I don't know what the theory of change is for the forum, so I find it hard to look at your suggestion and see if it maps onto the theory of change. Re this: "One of the benefits of the culture war posts is that we are all thinking about the same thing." I'd be surprised if 5% of EAs spent more than 5 minutes thinking about this topic and 20% of forum readers spent more than 5 minutes thinking about it. I'd be surprised if there were more than 100 unique commenters on posts related to that topic. Why does this matter? Well, prioritising a minority of subject-matter interested people over the remaining majority could be a good way to shrink your audience.
2
Nathan Young
Why is shrinking the audience bad? If this forum focused more on EA topics and some people left, I am not sure that would be bad. I guess it would be slightly good in expectation. And to be clear, I mean if we focused on topics like "are AIs deserving of moral value" or "what % of money should be spent on animal welfare".
2
Chris Leong
I agree that there's a lot of advantage of occasionally bringing a critical mass of attention to certain topics where this moves the community's understanding forward vs. just hoping we end up naturally having the most important conversations.
1
Ebenezer Dukakis
Weird idea: What if some forum members were chosen as "jurors", and their job is to read everything written during the debate week, possibly ask questions, and try to come to a conclusion? I'm not that interested in AI welfare myself, but I might become interested if such "jurors" who recorded their opinion before and after made a big update in favor of paying attention to it. To keep the jury relatively neutral, I would offer people the chance to sign up to "be a juror during the first week of August", before the topic for the first week of August is actually known.

@Toby Tremlett🔹 @Will Howard🔹 

Where can i see the debate week diagram if I want to look back at it?

Here's a screenshot (open in new tab to see it in slightly higher resolution). I've also made a spreadsheet with the individual voting results, which gives all the info that was on the banner just in a slightly more annoying format.

We are also planning to add a native way to look back at past events as they appeared on the site :), although this isn't a super high priority atm.

2
NickLaing
Nice one - even the tab to bring up the posts isn't super easy to access (or I'm just a bit of a tech fail lol). It surprises me a bit (and I'm even impressed in a way) that so many EAs are all-in on one side there.

An alternate stance on moderation (from @Habryka.)

This is from this comment responding to this post about there being too many bans on LessWrong. Note that LessWrong is less moderated than here in that (I guess) it responds to individual posts less often, but more moderated in that (I guess) it rate-limits people more, without giving reasons. 

I found it thought provoking. I'd recommend reading it.

Thanks for making this post! 

One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to participate in discussion on their own posts (so seeing a harsh rate-limit of something like "1 comment per 3 days" is not equivalent to a general ban from LessWrong, but should be more interpreted as "please comment primarily on your own posts", though of course it shares many important properties of a ban).
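A rate limit like "1 comment per 3 days" can be sketched as a sliding-window check. This is an illustrative toy, not LessWrong's actual implementation, and the class and field names are made up:

```python
from datetime import datetime, timedelta

class RateLimit:
    """Sliding-window limit, e.g. max_actions=1 per window=timedelta(days=3)."""

    def __init__(self, max_actions, window):
        self.max_actions = max_actions
        self.window = window
        self.history = []  # timestamps of recent comments

    def allow(self, now):
        # Drop timestamps that have aged out of the window, then check the cap.
        self.history = [t for t in self.history if now - t < self.window]
        if len(self.history) < self.max_actions:
            self.history.append(now)
            return True
        return False

limit = RateLimit(max_actions=1, window=timedelta(days=3))
t0 = datetime(2024, 7, 1)
print(limit.allow(t0))                      # True  - first comment goes through
print(limit.allow(t0 + timedelta(days=1)))  # False - still inside the 3-day window
print(limit.allow(t0 + timedelta(days=4)))  # True  - the window has passed
```

The point of the mechanism, as described above, is that a user in this state can still post occasionally (notably on their own threads) rather than being shut out entirely, which is what distinguishes it from a ban.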

This is a pretty opposite approach to the EA forum which favours bans.

Things that seem most important to bring up in terms of moderation philosophy: 

Moderation on LessWrong does not depend on effort

"Another thing I've noticed is that almost all the users are trying.  They are trying to use rationality, trying to understan

... (read more)

This is a pretty opposite approach to the EA forum which favours bans.

If you remove ones for site-integrity reasons (spamming DMs, ban evasion, vote manipulation), bans are fairly uncommon. In contrast, it sounds like LW does do some bans of early-stage users (cf. the disclaimer on this list), which could be cutting off users with a high risk of problematic behavior before it fully blossoms. Reading further, it seems like the stuff that triggers a rate limit at LW usually triggers no action, private counseling, or downvoting here.

As for more general moderation philosophy, I think the EA Forum has an unusual relationship to the broader EA community that makes the moderation approach outlined above a significantly worse fit for the Forum than for LW. As a practical matter, the Forum is the ~semi-official forum for the effective altruism movement. Organizations post official announcements here as a primary means of publishing them, but rarely on (say) the effectivealtruism subreddit. Posting certain content here is seen as a way of whistleblowing to the broader community as a whole. Major decisionmakers are known to read and even participate in the Forum.

In contrast (although I am not... (read more)

6
Habryka
This also roughly matches my impression. I do think I would prefer the EA community to either go towards more centralized governance or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team. 
0
Nathan Young
Wait it seems like a higher proportion of EA forum moderations are bans, but that LW does more moderation and more is rate limits? Is that not right?
4
Habryka
My guess is LW both bans and rate-limits more. 
3
Nathan Young
Apart from choosing who can attend their conferences (which are the de facto place that many community members meet), writing the intro to EA, managing the effective altruism website, and offering criticism of specific members' behaviour. It seems like they are the de facto people who decide what is or isn't a valid way to practice effective altruism, if anything more so than the LessWrong team (or maybe rationalists are just inherently unmanageable). I agree on the ironic point though. You might assume that the EA forum would moderate more than LW, but that doesn't seem to be the case. 
7
JP Addison🔸
I want to throw in a bit of my philosophy here.

Status note: This comment is written by me and reflects my views. I ran it past the other moderators, but they might have major disagreements with it.

I agree with a lot of Jason's view here. The EA community is indeed much bigger than the EA Forum, and the Forum would serve its role as an online locus much less well if we used moderation action to police the epistemic practices of its participants. I don't actually think this is that bad. I think it is a strength of the EA community that it is large enough and has sufficiently many worldviews that any central discussion space is going to be a bit of a mishmash of epistemologies.[1]

Some corresponding ways this viewpoint causes me to be reluctant to apply Habryka's philosophy:[2]

Something like a judicial process is much more important to me. We try much harder than my read of LessWrong to apply rules consistently. We have the Forum Norms doc, and our public history of cases forms something much closer to a legal code + case law than LW has. Obviously we're far away from what would meet a judicial standard, but I view much of my work through that lens. Also notable is that all nontrivial moderation decisions get one or two moderators to second the proposal.

Related both to the epistemic diversity and the above, I am much more reluctant to rely on my personal judgement about whether someone is a positive contributor to the discussion. I still do have those opinions, but am much more likely to use my power as a regular user to karma-vote on the content.

Some points of agreement: Agreed. We are much more likely to make judgement calls in cases of new users, and much less likely to invest time in explaining the decision. We are still much less likely to ban new users than LessWrong. (Which, to be clear, I don't think would have been tenable on LessWrong when they instituted their current policies, which was after the launch of GPT-4 and a giant influx of low quality
4
Jason
I think the banned individual should almost always get at least one final statement to disagree with the ban after its pronouncement. Even the Romulans allowed (will allow?) that. Absent unusual circumstances, I think they -- and not the mods -- should get the last word, so I would also allow a single reply if the mods responded to the final statement.

More generally, I'd be interested in ~"civility probation," under which a problematic poster could be placed for ~three months as an option they could choose as an alternative to a 2-4 week outright ban. Under civility probation, any "probation officer" (a trusted non-mod user) would be empowered to remove content too close to the civility line and optionally temp-ban the user for a cooling-off period of 48 hours. The theory of impact comes from the criminology literature, which tells us that speed and certainty of sanction are more effective than severity. If the mods later determined after full deliberation that the second comment actually violated the rules in a way that crossed the action threshold, then they could activate the withheld 2-4 week ban for the first offense and/or impose a new suspension for the new one.

We are seeing more of this in the criminal system -- swift but moderate "intermediate sanctions" for things like failing a drug test, as opposed to doing little about probation violations until things reach a certain threshold and then going to the judge to revoke probation and send the offender away for at least several months. As far as due process, the theory is that the offender received their due process (consideration by a judge, right to presumption of innocence overcome only by proof beyond a reasonable doubt) in the proceedings that led to the imposition of probation in the first place.
-1
Nathan Young
"will allow?" very good.
2
Nathan Young
Yeah seems fair.

The front page agree disagree thing is soo coool. Great work forum team. 

7
Toby Tremlett🔹
Thanks Nathan! People seem to like it so we might use it again in the future. If you or anyone else has feedback that might improve the next iteration of it, please let us know! You can comment here or just dm. 
6
Ozzie Gooen
I think it's neat! But I think there's work to do on the display of the aggregate.
1. I imagine there should probably be a table somewhere at least (a list of each person and what they say).
2. This might show a distribution, above.
3. There must be some way to just not have the icons overlap with each other like this. Like, use a second dimension, just to list them.
Maybe use a wheat plot? I think strip plots and swarm plots could also be options.
6
JP Addison🔸
I'm excited that we exceeded our goals enough to have the issue :)
4
Lorenzo Buonanno🔸
I would personally go for a beeswarm plot. But even just adding some random y and some transparency seems to improve things:

document.querySelectorAll('.ForumEventPoll-userVote').forEach(e => e.style.top = `${Math.random()*100-50}px`);
document.querySelectorAll('.ForumEventPoll-userVote').forEach(e => e.style.opacity = `0.7`);
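The console snippet above can be factored into a small helper. This is a hypothetical sketch (not the Forum's actual code), assuming each vote marker is an object that should be given a vertical offset and an opacity:

```javascript
// Hypothetical sketch of the jitter idea: give each vote marker a random
// vertical offset (within ±maxOffset px) and some transparency so that
// overlapping avatars spread out. A seeded PRNG keeps the layout stable
// across renders; none of these names come from the Forum codebase.
function jitterVotes(votes, maxOffset = 50, seed = 1) {
  let state = seed;
  const rand = () => {
    // Linear congruential generator: a deterministic stand-in for Math.random().
    state = (state * 1103515245 + 12345) % 2147483648;
    return state / 2147483648;
  };
  return votes.map(v => ({
    ...v,
    yOffset: rand() * 2 * maxOffset - maxOffset, // in [-maxOffset, maxOffset]
    opacity: 0.7,
  }));
}
```

Applying the offsets would then be the same `e.style.top` assignment as in the snippet above.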
2
Sarah Cheng
Really appreciate all the feedback and suggestions! This is definitely more votes than we expected. 😅 I implemented a hover-over based on @Agnes Stenlund's designs in this PR, though our deployment is currently blocked (by something unrelated), so I'm not sure how long it will take to make it to the live site. I may not have time to make further changes to the poll results UI this week, but please keep the comments coming - if we decide to run another debate or poll event, then we will iterate on the UI and take your feedback into account.

Looks great!

I tried to make it into a beeswarm, and while IMHO it does look nice, it also needs a bunch more vertical space (and/or smaller circles)

4
Nathan Young
Also, adding a little force works too, e.g. here. There are pretty easy libraries for this.
4
Lorenzo Buonanno🔸
The orange line above the circles makes it look like there's a similar number of people at the extreme left and the extreme right, which doesn't seem to be the case
5
Jason
I don't think it would help much for this question, but I could imagine using this feature for future questions in which the ability to answer anonymously would be important. (One might limit this to users with a certain amount of karma to prevent brigading.)
2
Brad West🔸
I note some of my confusion that might have been shared by others. I initially thought that users were choosing between a binary "agree" and "disagree", and that the way to choose was by dragging to one side or the other. I see now that this would signify maximal agreement/disagreement, though maybe users like me did so in error. Something that indicates this more clearly would be helpful to others.
2
Toby Tremlett🔹
Thanks Brad, I didn't foresee that! (Agree react Brad's comment if you experienced the same thing). Would it have helped if we had marked increments along the slider? Like the below but prettier? (our designer is on holiday)  
2
Brad West🔸
Yeah, if there were markers like "neutral", "slightly agree", "moderately agree", "strongly agree", etc., that might make it clearer. After the user's choice registers, a visual display could state something like: "You've indicated that you strongly agree with statement X. Re-drag if this does not reflect your view or if something changes your mind, and check out where the rest of the community falls on this question by clicking here."
6
Ozzie Gooen
Another idea could be to ask, "How many EA resources should go to this, per year, for the next 10 years?" Options could be things like "$0", "$100k", "$1M", "$100M", etc. Also, maybe there could be a second question: "How sure are you about this?"
2
Toby Tremlett🔹
Interesting. Certainty could also be a Y-axis, but I think that trades off against simplicity for a banner. 
2
Toby Tremlett🔹
I'd love to hear more from the disagree reactors. They should feel very free to dm.  I'm excited to experiment more with interactive features in the future, so critiques are especially useful now!

Lab grown meat -> no-kill meat

This tweet recommends changing the words we use to discuss lab-grown meat. Seems right.

There has been a lot of discussion of this, some studies were done on different names, and GFI among others seem to have landed on "cultivated meat".

1
EffectiveAdvocate🔸
What surprises me about this work is that it does not seem to include the more aggressive (for lack of a better word) alternatives I have heard being thrown around, like "suffering-free", "clean", or "cruelty-free".
1
Saul Munn
could you link to a few of the discussions & studies?
4
Julia_Wise🔸
https://en.wikipedia.org/wiki/Cultured_meat#Nomenclature
6
Jeff Kaufman 🔸
For what it's worth, my first interpretation of "no-kill meat" is that you're harvesting meat from animals in ways that don't kill them. Like amputation of parts that grow back.
2
Eevee🔹
I love this wording!
1
Saul Munn
i'd be curious to see the results of e.g. focus groups on this — i'm just now realizing how awful of a name "lab grown meat" is, re: the connotations.

I am not confident that another FTX-level crisis is less likely to happen, other than that we might all say "oh, this feels a bit like FTX".

Changes:

  • Board swaps. Yeah, maybe good, though many of the people who left were very experienced. And it's not clear whether there are now due diligence people (which seems to be what was missing).
  • Orgs being spun out of EV and EV being shuttered. I mean, maybe good, though it feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
  • More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA; it was in the tails. Are the tails of EA more honest? Hard to say.
  • We have now had a big crisis, so it's less costly to say "this might be like that big crisis". Though notably this might also be too cheap: we could flinch away from doing ambitious things.
  • Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
  • OpenPhil is hiring more internally

Non-changes:

  • Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see the FTX and OpenAI crises)
  • Little discussion of why or how the affiliation with SBF happened despite many well connected EAs having a low opinion of him
  • Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future
8
Ben Millwood🔸
For both of these comments, I want a more explicit sense of what the alternative was. Many well-connected EAs had a low opinion of Sam. Some had a high opinion. Should we have stopped the high-opinion ones from affiliating with him? By what means? Equally, suppose he finds skepticism from (say) Will et al, instead of a warm welcome. He probably still starts the FTX future fund, and probably still tries to make a bunch of people regranters. He probably still talks up EA in public. What would it have taken to prevent any of the resultant harms? Likewise, what does not ignoring the base rate of scamminess in crypto actually look like? Refusing to take any money made through crypto? Should we be shunning e.g. Vitalik Buterin now, or any of the community donors who made money speculating?
4
Jason
Not a complete answer, but I would have expected communication and advice for FTXFF grantees to have been different. From many well connected EAs having a low opinion of him, we can imagine that grantees might have been urged to properly set up corporations, not count their chickens before they hatched, properly document everything and assume a lower-trust environment more generally, etc. From not ignoring the base rate of scamminess in crypto, you'd expect to have seen stronger and more developed contingency planning (remembering that crypto firms can and do collapse in the wake of scams not of their own doing!), more decisions to build more organizational reserves rather than immediately ramping up spending, etc.
2
Michael_PJ
The measures you list would have prevented some financial harm to FTXFF grantees, but it seems to me that that is not the harm that people have been most concerned about. I think it's fair for Ben to ask about what would have prevented the bigger harms.
2
Jason
Ben said "any of the resultant harms," so I went with something I saw a fairly high probability. Also, I mostly limit this to harms caused by "the affiliation with SBF" -- I think expecting EA to thwart schemes cooked up by people who happen to be EAs (without more) is about as realistic as expecting (e.g.) churches to thwart schemes cooked up by people who happen to be members (without more). To be clear, I do not think the "best case scenario" story in the following three paragraphs would be likely. However, I think it is plausible, and is thus responsive to a view that SBF-related harms were largely inevitable.  In this scenario, leaders recognized after the 2018 Alameda situation that SBF was just too untrustworthy and possibly fraudulent (albeit against investors) to deal with -- at least absent some safeguards (a competent CFO, no lawyers who were implicated in past shady poker-site scandals, first-rate and comprehensive auditors). Maybe SBF wasn't too far gone at this point -- he hadn't even created FTX in mid-2018 -- and a costly signal from EA leaders (we won't take your money) would have turned him -- or at least some of his key lieutenants -- away from the path he went down? Let's assume not, though.   If SBF declined those safeguards, most orgs decline to take his money and certainly don't put him on podcasts. (Remember that, at least as of 2018, it sounds like people thought Alameda was going nowhere -- so the motivation to go against consensus and take SBF money is much weaker at first.) Word gets down to the rank-and-file that SBF is not aligned, likely depriving him of some of his FTX workforce. Major EA orgs take legible action to document that he is not in good standing with them, or adopt a public donor-acceptability policy that contains conditions they know he can't/won't meet. Major EA leaders do not work for or advise the FTXFF when/if it forms.  When FTX explodes, the comment from major EA orgs is that they were not fully convinced he was
3
Jason
Is there any reason to doubt the obvious answer -- it was/is an easy way for highly-skilled quant types in their 20s and early 30s to make $$ very fast?
3
Nathan Young
seems like this is a pretty damning conclusion that we haven't actually come to terms with if it is the actual answer
5
Jason
It's likely that no single answer is "the" sole answer. For instance, it's likely that people believed they could assume that trusted insiders were significantly more ethical than the average person. The insider-trusting bias has bitten any number of organizations and movements (e.g., churches, the Boy Scouts). However, it seems clear from Will's recent podcast that the downsides of being linked to crypto were appreciated at some level. It would take a lot for me to be convinced that all that $$ wasn't a major factor.

Some things I don't think I've seen around FTX, which are probably due to the investigation, but still seems worth noting. Please correct me if these things have been said.

  • I haven't seen anyone at the FTXFF acknowledge fault for negligence in not noticing that a defunct phone company (North Dimension) was paying out their grants.
    • This isn't hugely judgemental from me, I think I'd have made this mistake too, but I would like it acknowledged at some point
    • Since writing this it's been pointed out that there were grants paid from FTX and Alameda accounts also. Ooof.

The FTX Foundation grants were funded via transfers from a variety of bank accounts, including North Dimension-8738 and Alameda-4456 (Primary Deposit Accounts), as well as Alameda-4464 and FTX Trading-9018

  • I haven't seen anyone at CEA acknowledge that they ran an investigation in 2019-2020 on someone who would turn out to be one of the largest fraudsters in the world and failed to turn up anything despite seemingly a number of flags.
    • I remain confused
  • As I've written elsewhere I haven't seen engagement on this point, which I find relatively credible, from one of the Time articles:

"Bouscal recalled speaking to Mac Aulay

... (read more)

Extremely likely that the lawyers have urged relevant people to remain quiet on the first two points and probably the third as well.

6
Nathan Young
Yeah seems right, but uh still seems worth saying.
4
ChanaMessinger
Did you mean for the second paragraph of the quoted section to be in the quote section? 
2
Nathan Young
I can't remember but you're right that it's unclear.
3
Rían O.M
I haven't read too much into this and am probably missing something.  Why do you think FTXFF was receiving grants via north dimension? The brief googling I did only mentioned north dimension in the context of FTX customers sending funds to FTX (specifically this SEC complaint). I could easily have missed something. 
7
Jason
Grants were being made to grantees out of North Dimension's account -- at least one grant recipient confirmed receiving one on the Forum (would have to search for that). The trustee's second interim report shows that FTXFF grants were being paid out of similar accounts that received customer funds. It's unclear to me whether FTX Philanthrophy (the actual 501c3) ever had any meaningful assets to its name, or whether (m)any of the grants even flowed through accounts that it had ownership of.
3
Nathan Young
Seems pretty bad, no?

Certainly very concerning. Two possible mitigations though:

  • Any finding of negligence would only apply to those with duties or oversight responsibilities relating to operations. It's not every employee or volunteer's responsibility to be a compliance detective for the entire organization.
  • It's plausible that people made some due diligence efforts that were unsuccessful because they were fed false information and/or relied on corrupt experts (like "Attorney-1" in the second interim trustee report). E.g., if they were told by Legal that this had been signed off on and that it was necessary for tax reasons, it's hard to criticize a non-lawyer too much for accepting that. Or more simply, they could have been told that all grants were made out of various internal accounts containing only corporate monies (again, with some tax-related justification that donating non-US profits through a US charity would be disadvantageous).
1
Rían O.M
Ah, thank you!  I searched for that comment. I think this is probably the one you're referencing. 
2
Nathan Young
I know of at least 1 other case.

People voting without explaining is good. 

I often see people thinking that this is bragading or something, when actually most people just don't want to write a response; they either like or dislike something.

If it were up to me I might suggest an anonymous "I don't know" button and an anonymous "this is poorly framed" button.

When I used to run a lot of Facebook polls, it was overwhelmingly men who wrote answers, but when there were options to vote, the gender split was much more even. My hypothesis was that a kind of argumentative, usually male, person tended to enjoy writing long responses more. And so blocking lower-effort/less antagonistic/more anonymous responses meant I heard more from this kind of person.

I don't know if that is true on the forum, but I would guess that the higher effort it is to respond, the more selective the responses become in some direction. I guess I'd ask if you think that the people spending the most effort are likely to be the most informed. In my experience, they aren't.

More broadly I think it would be good if the forum optionally took some information about users - location, income, gender, cause area, etc and on answers with more than say 10 votes would dis... (read more)

It seems like we could use the new reactions for some of this. At the moment they're all positive but there could be some negative ones. And we'd want to be able to put the reactions on top level posts (which seems good anyway).

6
Joseph
I think that it is generally fine to vote without explanations, but it would be nice to know why people are disagreeing or disliking something. Two scenarios come to mind:
  • If I write a comment that doesn't make any claim/argument/proposal and it gets downvotes, I'm unclear what those downvotes mean.
  • If I make a post with a claim/argument/proposal and it gets downvoted without any comments, it isn't clear what aspect of the post people have a problem with.
I remember writing in a comment several months ago about how I think that theft from an individual isn't justified even if many people benefit from it, and multiple people disagreed without continuing the conversation. So I don't know why they disagreed, or what part of the argument they thought was wrong. Maybe I made a simple mistake, but nobody was willing to point it out. I also think that you raise good points regarding demographics and the willingness of different groups of people to voice their perspectives.
2
Nathan Young
I agree it would be nice to know, but in every case someone has decided they do want to vote but don't want to comment. Sometimes I try and cajole an answer, but ultimately I'm glad they gave me any information at all.
1
Rebecca
What is bragading?
4
Brad West🔸
Think he was referring to "brigading", referred to in this thread. Generally, it is voting more out of allegiance or affinity to a particular person, rather than an assessment of the quality of the post/comment.

I know of at least 1 NDA of an EA org silencing someone from discussing bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?

I suggest no.

4
ChanaMessinger
I think I want a Chesterton's TAP for all questions like this that says "how normal are these and why" whenever we think about a governance plan.
2
Peter Wildeford
What's a "Chesterton's TAP"?
2
ChanaMessinger
Not a generally used phrase, just my attempting to point to "a TAP for asking Chesterton's fence-style questions"
2
Peter Wildeford
What's a TAP? I'm still not really sure what you're saying.
4
NunoSempere
"Trigger action pattern", a technique for adopting habits proposed by CFAR <https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps>.
7
Peter Wildeford
Thanks! "Chesterton's TAP" is the most rationalist buzzword thing I've ever heard LOL, but I am putting together that what Chana said is that she'd like there to be some way for people to automatically notice (the trigger action pattern) when they might be adopting an abnormal/atypical governance plan and then reconsider whether the "normal" governance plan may be that way for a good reason even if we don't immediately know what that reason is (the Chesterton's fence)?
2
ChanaMessinger
Oh, sorry! TAPs are a CFAR / psychology technique. https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps
2
Nathan Young
I am unsure what you mean. As in: because other orgs do this, it's probably normal?
4
ChanaMessinger
I have no idea, but would like to! With things like "organizational structure" and "nonprofit governance", I really want to understand the reference class (even if everyone in the reference class does stupid bad things and we want to do something different).
0
Yitz
Strongly agree that moving forward we should steer away from such organizational structures; much better that something bad is aired publicly before it has a chance to become malignant

Feels like we've had about 3 months since the FTX collapse with no kind of leadership comment. Uh, that feels bad. I mean, I'm all for "give cold takes", but how long are we talking?

3
Ian Turner
Do you think this is not due to "sound legal advice"?

I am pretty sure there is no strong legal reason for people to not talk at this point. Not like totally confident but I do feel like I've talked to some people with legal expertise and they thought it would probably be fine to talk, in addition to my already bullish model.

2[comment deleted]

The OpenAI stuff has hit me pretty hard. If that's you also, look after yourself. 

I don't really know what accurate thought looks like here.

3
ChanaMessinger
Yeah, same
1
yanni
I hope you're doing ok Nathan. Happy to chat in DM's if you like ❤️
1
Xing Shi Cai
It will settle down soon enough. Not much will change, as with most breaking news stories. But I am wondering whether I should switch to Claude.