This is a special post for quick takes by Nathan Young. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I feel like I want 80k to do more cause prioritisation if they are gonna direct so many people. Seems like 5 years ago they had their whole ranking thing which was easy to check. Now I am less confident in the quality of work that is directing lots of people in a certain direction.

8
calebp
Idk, many of the people they are directing would just do something kinda random which an 80k rec easily beats. I'd guess the number of people for whom 80k makes their plans worse in an absolute sense is kind of low and those people are likely to course correct. Otoh, I do think people/orgs in general should consider doing more strategy/cause prio research, and if 80k were like "we want to triple the size of our research team to work out the ideal marginal talent allocation across longtermist interventions" that seems extremely exciting to me. But I don't think 80k are currently being irresponsible (not that you explicitly said that, for some reason I got a bit of that vibe from your post).
8
Ben Millwood🔸
80k could be much better than nothing and yet still missing out on a lot of potential impact, so I think your first paragraph doesn't refute the point.
4
NickLaing
I agree with this, and have another tangential issue, which might be part of why cause prioritisation seems unclear? Their website seems confusing and overloaded to me. Compare Giving What We Can's page, which has good branding and simple language. IMO the 80,000 Hours page has too much text and too much going on on its front page. Bring both websites up on your phone and judge for yourself. These are the front page of EA for many people, so they are pretty important. These websites aren't really for most of us; they are for fresh people, so they need to be punchy, straightforward and attractive. A couple of clicks deeper, things can get heavier.

Compare Giving What We Can's page, which has good branding and simple language. IMO the 80,000 Hours page has too much text and too much going on on its front page. Bring both websites up on your phone and judge for yourself.

My understanding is that 80k have done a bunch of A/B testing which suggested their current design outcompetes ~most others (presumably in terms of click-throughs / amount of time users spend on key pages).

You might not like it, but this is what peak performance looks like.

2
NickLaing
Love this response, peak performance ha. I hope I'm wrong and this is the deal; that would be an excellent approach. Would be interesting to see what the other designs they tested were, but obviously I won't get to.

@Toby Tremlett🔹 @Will Howard🔹 

Where can I see the debate week diagram if I want to look back at it?

Here's a screenshot (open in new tab to see it in slightly higher resolution). I've also made a spreadsheet with the individual voting results, which gives all the info that was on the banner just in a slightly more annoying format.

We are also planning to add a native way to look back at past events as they appeared on the site :), although this isn't a super high priority atm.

2
NickLaing
Nice one - even the tab to bring up the posts isn't super easy to access (or I'm just a bit of a tech fail, lol). It surprises me a bit (and I'm even impressed in a way) that so many EAs are all in on one side there.

I want to once again congratulate the forum team on this voting tool. I think by doing this, the EA forum is at the forefront of internal community discussions. No communities do this well and it's surprising how powerful it is. 

Have your EA conflicts on... THE FORUM!

In general, I think it's much better to attempt to have a community conflict internally before having it externally. This doesn't really apply to criminal behaviour or sexual abuse. I am centrally talking about disagreements, e.g. the Bostrom stuff, the fallout around the FTX stuff, the Nonlinear stuff, and now this Manifest stuff.

Why do I think this?

  • If I want to credibly signal I will listen and obey norms, it seems better to start with a small discourse escalation rather than a large one. Starting a community discussion on Twitter is like jumping straight to a shooting war.
  • Many external locations (e.g. Twitter, the press) have norms/incentives very skewed relative to the forum, so multiple parties can each feel like they are the victim. I find that when multiple parties feel they are the weaker, victimised side, escalation is likely.
  • Many spaces have less affordance for editing comments, seeing who agrees with whom, or having a respected mutual party say "woah, hold up there".
  • It is hard to say "I will abide by the community sentiment" if I have already started the discussion elsewhere in order to shame people. And if I don't intend to abide by the commu
... (read more)

This is also an argument for the forum's existence generally, if many of the arguments would otherwise be had on Twitter.

2
NickLaing
For sure: when it comes to any internet-based discussion, for promoting quality discourse, slowish long form >>>> rapid short form.
3
Sinclair Chen
I agree, with the caveat that certain kinds of more reasonable discussion can't happen on the forum because the forum is where people are fighting. For instance, because of the controversy I've been thinking a lot recently about antiracism - like, what would effective antiracism look like? What lessons can we take from civil rights, and what do we have to contribute (cool ideas on how to leapfrog past or fix education gaps? discourse norms that can facilitate hard but productive discussions about racism? advocating for literal reparations?). I have deleted a shortform I was writing on this because I think people would not engage with it positively, and I suspect I am missing the point somehow. I suspect people actually just want to fight, and the point is to be angry.

On the meta level, I have been pretty frustrated (with both sides, though not equally) with the manner in which some people are arguing, the types of arguments they use, and the motivations behind them. I think in some ways it is better to complain about that off the forum. It's worse for feedback, but that's also a good thing, because the cycle of righteous rage does not continue on the forum. And you get different perspectives. (I wonder if a crux here is that you have a lot of Twitter followers and I don't. If you tweet you are speaking to an audience; if I tweet I am speaking to weird internet friends.)
2
Nathan Young
So I sort of agree, though depending on the topic I think it could quickly get a lot of eyes on it. I would prefer to discuss most things that are controversial/personal somewhere other than Twitter.

If anyone who disagrees with me on the Manifest stuff considers themselves inside the EA movement, I'd like to have some discussions with a focus on consensus-building, i.e. we chat in DMs and then both report some statements we agreed on and some we specifically disagreed on.

Edited:

@Joseph Lemien asked for positions I hold:

  • The EA forum should not seek to have opinions on non-EA events. I don't mean individual EAs shouldn't have opinions; I mean that as a group we shouldn't seek to judge individual events. I don't think we're very good at it.
  • I don't like Hanania's behaviour either, and am a little wary of systems where norm-breaking behaviour gives extra power, such as being endlessly edgy. But I will take those complaints to the Manifold community internally.
  • EAGs are welcome to invite or disinvite whoever CEA likes. Maybe one day I'll complain. But do I want EAGs to invite a load of Manifest's edgiest speakers? Not particularly.
  • It is fine for there to be spaces with discussion that I find ugly. If people want to go to these events, that's up to them.
  • I dislike having unresolved conflicts which ossify into an inability to talk about things. Someone once told me tha
... (read more)
4
Joseph Lemien
Nathan, could you summarize/clarify for us readers what your views are? (Or link to whatever comment or document has those views?) I suspect that I agree with you on a majority of aspects and disagree on a minority, but I'm not clear on what your views are. I'd be interested to see some sort of informal and exploratory 'working group' on inclusion-type stuff within EA, and have a small group conversation once a month or so, but I'm not sure if there are many (any?) people other than me that would be interested in having discussions and trying to figure out some actions/solutions/improvements.[1]

1. ^ We had something like this for talent pipelines and hiring (it was High Impact Talent Ecosystem, and it was somehow connected to or organized by SuccessIf, but I'm not clear on exactly what the relationship was), but after a few months the organizer stopped and I'm not clear on why. In fact, I'm vaguely considering picking up the baton and starting some kind of a monthly discussion group about talent pipelines, coaching/developing talent, etc.
2
Nathan Young
Oooh that's interesting. I'd be interested to hear what the conclusions are.
4
Jason
One limitation here: you have a view about Manifest. Your interlocutor would have a different view. But how do we know if those views are actually representative of major groupings? My hunch is that, if equipped with a mind probe, we would find at least two major axes with several meaningfully different viewpoints on each axis. Overall, I'd predict that I would find at least four sizable clusters, probably five to seven.
2
Nathan Young
So I ran a poll with 100-ish respondents, and if you want to run the k-means analysis you can find those clusters yourself. The anonymous data is downloadable here: https://viewpoints.xyz/polls/ea-and-manifest/results

Beyond that, yes, you are likely right, but I don't know how to have that discussion better. I tried using polls and upvoted quotes as a springboard in this post (Truth-seeking vs Influence-seeking - a narrower discussion) but people didn't really bite there. Suggestions welcome. It is kind of exhausting to keep trying to find ways to get better samples of the discourse, without a sense that people will eventually go "oh yeah, this convinces me". If I were more confident I would have more energy for it.
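For anyone who wants to try: a minimal sketch of the clustering step, assuming the viewpoints.xyz export is parsed into one row per respondent with votes coded agree = 1, disagree = -1, skip = 0. The coding, the data shape, and the choice of k are my assumptions, not part of the actual export.

```js
// Minimal k-means sketch for clustering poll respondents.
// Assumes `rows` is an array of equal-length numeric arrays
// (one per respondent, votes coded 1 / -1 / 0) - an assumed format.
function kmeans(rows, k, iterations = 50) {
  // Naive initialisation from the first k rows (a real analysis would use k-means++).
  let centroids = rows.slice(0, k).map(r => [...r]);
  let labels = new Array(rows.length).fill(0);

  const dist = (a, b) => a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0);

  for (let iter = 0; iter < iterations; iter++) {
    // Assignment step: each respondent joins the nearest centroid.
    labels = rows.map(row => {
      let best = 0;
      for (let c = 1; c < k; c++) {
        if (dist(row, centroids[c]) < dist(row, centroids[best])) best = c;
      }
      return best;
    });
    // Update step: each centroid moves to the mean of its members.
    centroids = centroids.map((old, c) => {
      const members = rows.filter((_, i) => labels[i] === c);
      if (members.length === 0) return old; // keep empty clusters where they were
      return old.map((_, j) => members.reduce((s, m) => s + m[j], 0) / members.length);
    });
  }
  return { labels, centroids };
}

// Usage: const { labels } = kmeans(voteMatrix, 4);
// then count cluster sizes to see whether several sizable clusters show up.
```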
4
Jason
I don't think those were most of the questions I was looking for, though. This isn't a criticism: running the poll early risks missing important cruxes and fault lines that haven't been found yet; running it late means that much of the discussion has already happened. There are also tradeoffs between viewpoints.xyz being accessible (=better sampling) and the data being rich enough. Limitation to short answer stems with a binary response (plus an ambiguous "skip") lends itself to identifying two major "camps" more easily than clusters within those camps.

In general, expanding to five-point Likert scales would help, as would some sort of branching. For example, I'd want to know -- conditional on "Manifest did wrong here" / "the platforming was inappropriate" -- what factors were more or less important to the respondent's judgment. On a 1-5 scale, how important do you find [your view that the organizers did not distance themselves from the problematic viewpoints / the fit between the problematic viewpoints and a conference for the forecasting community / an absence of evidence that special guests with far-left or at least mainstream viewpoints on the topic were solicited / whatever]? And: how much would the following facts or considerations, if true, change your response to a hypothetical situation like the Manifest conference? Again, you can't get how much on a binary response.

Maybe all that points to polling being more of a post-dialogue event, and accepting that we would choose discussants based on past history & early reactions. For example, I would have moderately high confidence that user X would represent a stance close to a particular pole on most issues, while I would represent a stance that codes as "~ moderately progressive by EA Forum standards."
5
Nathan Young
Often it feels like I can never please people on this forum. I think the poll is significantly better than no poll. 
3
Jason
Yeah, I agree with that! I don't find it inconsistent with the idea that the reasonable trade-offs you made between various characteristics in the data-collection process make the data you got not a good match for the purposes I would like data for. They are good data for people interested in the answer to certain other questions. No one can build a (practical) poll for all possible use cases, just as no one can build a (reasonably priced) car that is both very energy-efficient and has major towing/hauling chops.
2
Joseph Lemien
As useful as viewpoints.xyz is, I will mention that for maybe 50% or 60% of the questions, my reaction was "it depends." I suppose you can't really get around that unless the person creating the questions spends much more time to carefully craft them (which sort of defeats the purpose of a quick-and-dirty poll), or unless you do interviews (which are of course much more costly). I do think there is value in the quick-and-dirty MVP version, but its usefulness has a pretty noticeable upper bound.

Suggestion. 

Debate weeks every other week and we vote on what the topic is.

I think if the forum had a defined topic, especially one set in advance, I would be more motivated to read a number of posts on that topic.

One of the benefits of the culture war posts is that we are all thinking about the same thing. If we did that on useful topics, perhaps with dialogues from experts, that would be good.

9
Jason
Every other week feels exhausting, at least if the voting went in a certain direction.
7
NickLaing
I would pitch for every 2 months, but I like the sentiment of doing it a bit more.
5
Toby Tremlett🔹
A crux for me at the moment is whether we can shape debate weeks in a way which leads to deep rather than shallow engagement. If we were to run debate weeks more often, I'd (currently) want to see them causing people to change their mind, have useful conversations, etc... It's something I'll be looking closely at when we do a post-mortem on this debate week experiment. 
2
Toby Tremlett🔹
Also, every other week seems prima facie a bit burdensome for uninterested users. Additionally, I want top-down content to only be a part of the Forum. I wouldn't want to over-shepherd discussion and end up with fewer wide-ranging, good-quality posts. Happy to explore other ways to integrate polls etc. if people like them and they lead to good discussions though.
4
yanni kyriacos
Hi Nathan! I like suggestions and would like to see more suggestions. But I don't know what the theory of change is for the forum, so I find it hard to look at your suggestion and see if it maps onto the theory of change. Re this: "One of the benefits of the culture war posts is that we are all thinking about the same thing." I'd be surprised if 5% of EAs spent more than 5 minutes thinking about this topic and 20% of forum readers spent more than 5 minutes thinking about it. I'd be surprised if there were more than 100 unique commenters on posts related to that topic. Why does this matter? Well, prioritising a minority of subject-matter interested people over the remaining majority could be a good way to shrink your audience.
2
Nathan Young
Why is shrinking the audience bad? If this forum focused more on EA topics and some people left, I am not sure that would be bad. I guess it would be slightly good in expectation. And to be clear, I mean if we focused on things like "are AIs deserving of moral value?" and "what % of money should be spent on animal welfare?"
2
Chris Leong
I agree that there's a lot of advantage in occasionally bringing a critical mass of attention to certain topics where this moves the community's understanding forward, vs. just hoping we end up naturally having the most important conversations.
1
Ebenezer Dukakis
Weird idea: What if some forum members were chosen as "jurors", and their job is to read everything written during the debate week, possibly ask questions, and try to come to a conclusion? I'm not that interested in AI welfare myself, but I might become interested if such "jurors" who recorded their opinion before and after made a big update in favor of paying attention to it. To keep the jury relatively neutral, I would offer people the chance to sign up to "be a juror during the first week of August", before the topic for the first week of August is actually known.

Lab grown meat -> no-kill meat

This tweet recommends changing the words we use to discuss lab-grown meat. Seems right.

There has been a lot of discussion of this, some studies were done on different names, and GFI among others seem to have landed on "cultivated meat".

1
EffectiveAdvocate🔸
What surprises me about this work is that it does not seem to include the more aggressive (for lack of a better word) alternatives I have heard being thrown around, like "Suffering-free", or "Clean", or "cruelty-free".
1
Saul Munn
could you link to a few of the discussions & studies?
4
Julia_Wise🔸
https://en.wikipedia.org/wiki/Cultured_meat#Nomenclature
6
Jeff Kaufman 🔸
For what it's worth, my first interpretation of "no-kill meat" is that you're harvesting meat from animals in ways that don't kill them. Like amputation of parts that grow back.
2
Eevee🔹
I love this wording!
1
Saul Munn
i'd be curious to see the results of e.g. focus groups on this — i'm just now realizing how awful of a name "lab grown meat" is, re: the connotations.

The front page agree disagree thing is soo coool. Great work forum team. 

7
Toby Tremlett🔹
Thanks Nathan! People seem to like it so we might use it again in the future. If you or anyone else has feedback that might improve the next iteration of it, please let us know! You can comment here or just dm. 
6
Ozzie Gooen
I think it's neat! But I think there's work to do on the display of the aggregate.
  1. I imagine there should probably be a table somewhere at least (a list of each person and what they say).
  2. This might show a distribution, above.
  3. There must be some way to just not have the icons overlap with each other like this. Like, use a second dimension, just to list them. Maybe use a wheat plot? I think strip plots and swarm plots could also be options.
6
JP Addison🔸
I'm excited that we exceeded our goals enough to have the issue :)
4
Lorenzo Buonanno🔸
I would personally go for a beeswarm plot. But even just adding some random y and some transparency seems to improve things:

document.querySelectorAll('.ForumEventPoll-userVote').forEach(e => e.style.top = `${Math.random()*100-50}px`);
document.querySelectorAll('.ForumEventPoll-userVote').forEach(e => e.style.opacity = `0.7`);
2
Sarah Cheng
Really appreciate all the feedback and suggestions! This is definitely more votes than we expected. 😅 I implemented a hover-over based on @Agnes Stenlund's designs in this PR, though our deployment is currently blocked (by something unrelated), so I'm not sure how long it will take to make it to the live site. I may not have time to make further changes to the poll results UI this week, but please keep the comments coming - if we decide to run another debate or poll event, then we will iterate on the UI and take your feedback into account.

Looks great!

I tried to make it into a beeswarm, and while IMHO it does look nice, it also needs a bunch more vertical space (and/or smaller circles).

4
Nathan Young
Also, adding a little force works too, e.g. here. There are pretty easy libraries for this.
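A minimal sketch with one such library, d3-force. Here `votes`, `xScale` and `height` are illustrative names for data the page would already have, not the forum's actual internals.

```js
// Sketch of a force-directed dot layout with d3-force.
// Each dot is pulled toward its vote's x-position and the centre line,
// while a collision force keeps dots from overlapping.
import { forceSimulation, forceX, forceY, forceCollide } from "d3-force";

const radius = 6;
const nodes = votes.map(v => ({ vote: v })); // `votes`: assumed array of numbers in [0, 1]

const simulation = forceSimulation(nodes)
  .force("x", forceX(d => xScale(d.vote)).strength(1)) // anchor each dot at its vote
  .force("y", forceY(height / 2).strength(0.05))       // weak pull toward the centre line
  .force("collide", forceCollide(radius + 1))          // no overlaps
  .stop();

// Run the simulation synchronously, then draw each node at (node.x, node.y).
for (let i = 0; i < 120; i++) simulation.tick();
```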
4
Lorenzo Buonanno🔸
The orange line above the circles makes it look like there's a similar number of people at the extreme left and the extreme right, which doesn't seem to be the case
5
Jason
I don't think it would help much for this question, but I could imagine using this feature for future questions in which the ability to answer anonymously would be important. (One might limit this to users with a certain amount of karma to prevent brigading.)
2
Brad West🔸
I'll note some of my confusion, which might have been shared by others. I initially thought the choice was binary ("agree" or "disagree") and that the way to choose was by dragging to one side or the other. I see now that this would signify maximal agreement/disagreement, although maybe users like me might have done so in error. Perhaps something that could indicate this more clearly would be helpful to others.
2
Toby Tremlett🔹
Thanks Brad, I didn't foresee that! (Agree react Brad's comment if you experienced the same thing). Would it have helped if we had marked increments along the slider? Like the below but prettier? (our designer is on holiday)  
2
Brad West🔸
Yeah, if there were markers like "neutral", "slightly agree", "moderately agree", "strongly agree", etc., that might make it clearer. After the user's choice registers, a visual display could state something like: "You've indicated that you strongly agree with statement X. Redrag if this does not reflect your view or if something changes your mind, and check out where the rest of the community falls on this question by clicking here."
6
Ozzie Gooen
Another idea could be to ask, "How many EA resources should go to this, per year, for the next 10 years?" Options could be things like "$0", "$100k", "$1M", "$100M", etc. Also, maybe there could be a second question for, "How sure are you about this?"
2
Toby Tremlett🔹
Interesting. Certainty could also be a Y-axis, but I think that trades off against simplicity for a banner. 
2
Toby Tremlett🔹
I'd love to hear more from the disagree reactors. They should feel very free to dm.  I'm excited to experiment more with interactive features in the future, so critiques are especially useful now!

An alternate stance on moderation (from @Habryka).

This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that it (I guess) responds to individual posts less often, but more moderated in that (I guess) it rate-limits people more, without giving reasons.

I found it thought-provoking. I'd recommend reading it.

Thanks for making this post! 

One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to participate in discussion on their own posts (so seeing a harsh rate-limit of something like "1 comment per 3 days" is not equivalent to a general ban from LessWrong, but should be more interpreted as "please comment primarily on your own posts", though of course it shares many important properties of a ban).

This is a pretty opposite approach to the EA Forum, which favours bans.

Things that seem most important to bring up in terms of moderation philosophy: 

Moderation on LessWrong does not depend on effort

"Another thing I've noticed is that almost all the users are trying.  They are trying to use rationality, trying to understan

... (read more)

This is a pretty opposite approach to the EA Forum, which favours bans.

If you remove ones for site-integrity reasons (spamming DMs, ban evasion, vote manipulation), bans are fairly uncommon. In contrast, it sounds like LW does do some bans of early-stage users (cf. the disclaimer on this list), which could be cutting off users with a high risk of problematic behavior before it fully blossoms. Reading further, it seems like the stuff that triggers a rate limit at LW usually triggers no action, private counseling, or downvoting here.

As for more general moderation philosophy, I think the EA Forum has an unusual relationship to the broader EA community that makes the moderation approach outlined above a significantly worse fit for the Forum than for LW. As a practical matter, the Forum is the ~semi-official forum for the effective altruism movement. Organizations post official announcements here as a primary means of publishing them, but rarely on (say) the effectivealtruism subreddit. Posting certain content here is seen as a way of whistleblowing to the broader community as a whole. Major decisionmakers are known to read and even participate in the Forum.

In contrast (although I am not... (read more)

6
Habryka
This also roughly matches my impression. I do think I would prefer the EA community to either go towards more centralized governance or less centralized governance in the relevant way, but I agree that given how things are, the EA Forum team has less leeway with moderation than the LW team. 
0
Nathan Young
Wait, it seems like a higher proportion of EA Forum moderation actions are bans, but LW does more moderation overall, and more of it is rate limits? Is that not right?
4
Habryka
My guess is LW both bans and rate-limits more. 
3
Nathan Young
Apart from choosing who can attend their conferences (which are the de facto place that many community members meet), writing their intro to EA, managing the effective altruism website, and offering criticism of specific members' behaviour. Seems like they are the de facto people who decide what is or isn't a valid way to practice effective altruism. If anything more so than the LessWrong team (or maybe rationalists are just inherently unmanageable). I agree on the ironic point though. I think you might assume that the EA forum would moderate more than LW, but that doesn't seem to be the case.
7
JP Addison🔸
I want to throw in a bit of my philosophy here.

Status note: This comment is written by me and reflects my views. I ran it past the other moderators, but they might have major disagreements with it.

I agree with a lot of Jason's view here. The EA community is indeed much bigger than the EA Forum, and the Forum would serve its role as an online locus much less well if we used moderation action to police the epistemic practices of its participants. I don't actually think this is that bad. I think it is a strength of the EA community that it is large enough and has sufficiently many worldviews that any central discussion space is going to be a bit of a mishmash of epistemologies.[1]

Some corresponding ways this viewpoint causes me to be reluctant to apply Habryka's philosophy:[2]

Something like a judicial process is much more important to me. We try much harder than my read of LessWrong to apply rules consistently. We have the Forum Norms doc, and our public history of cases forms something much closer to a legal code + case law than LW has. Obviously we're far away from what would meet a judicial standard, but I view much of my work through that lens. Also notable is that all nontrivial moderation decisions get one or two moderators to second the proposal.

Related both to the epistemic diversity and the above, I am much more reluctant to rely on my personal judgement about whether someone is a positive contributor to the discussion. I still do have those opinions, but am much more likely to use my power as a regular user to karma-vote on the content.

Some points of agreement: Agreed. We are much more likely to make judgement calls in cases of new users. And much less likely to invest time in explaining the decision. We are still much less likely to ban new users than LessWrong. (Which, to be clear, I don't think would have been tenable on LessWrong when they instituted their current policies, which was after the launch of GPT-4 and a giant influx of low quality
4
Jason
I think the banned individual should almost always get at least one final statement to disagree with the ban after its pronouncement. Even the Romulans allowed (will allow?) that. Absent unusual circumstances, I think they -- and not the mods -- should get the last word, so I would also allow a single reply if the mods responded to the final statement.

More generally, I'd be interested in ~"civility probation," under which a problematic poster could be placed for ~three months as an option they could choose as an alternative to a 2-4 week outright ban. Under civility probation, any "probation officer" (trusted non-mod users) would be empowered to remove content too close to the civility line and optionally temp-ban the user for a cooling-off period of 48 hours. The theory of impact comes from the criminology literature, which tells us that speed and certainty of sanction are more effective than severity. If the mods later determined after full deliberation that the second comment actually violated the rules in a way that crossed the action threshold, then they could activate the withheld 2-4 week ban for the first offense and/or impose a new suspension for the new one.

We are seeing more of this in the criminal system -- swift but moderate "intermediate sanctions" for things like failing a drug test, as opposed to doing little about probation violations until things reach a certain threshold and then going to the judge to revoke probation and send the offender away for at least several months. As far as due process, the theory is that the offender received their due process (consideration by a judge, right to presumption of innocence overcome only by proof beyond a reasonable doubt) in the proceedings that led to the imposition of probation in the first place.
-1
Nathan Young
"will allow?" very good.
2
Nathan Young
Yeah seems fair.

I am not confident that another FTX-level crisis is less likely to happen, other than that we might all say "oh, this feels a bit like FTX".

Changes:

  • Board swaps. Yeah maybe good, though many of the people who left were very experienced. And it's not clear whether there are due diligence people (which seems to be what was missing).
  • Orgs being spun out of EV and EV being shuttered. I mean, maybe good though feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
  • More talking about honesty. Not really sure this was the problem. The issue wasn't the median EA, it was in the tails. Are the tails of EA more honest? Hard to say.
  • We have now had a big crisis, so it's less costly to say "this might be like that big crisis". Though notably this might also be too cheap - we could flinch away from doing ambitious things.
  • Large orgs seem slightly more beholden to comms/legal to avoid saying or doing the wrong thing.
  • OpenPhil is hiring more internally

Non-changes:

  • Still very centralised. I'm pretty pro-elite, so I'm not sure this is a problem in and of itself, though I have come to think that elites in general are less competent than I thought before (see the FTX and OpenAI crises)
  • Little discussion of why or how the affiliation with SBF happened despite many well connected EAs having a low opinion of him
  • Little discussion of what led us to ignore the base rate of scamminess in crypto and how we'll avoid that in future
8
Ben Millwood🔸
For both of these comments, I want a more explicit sense of what the alternative was. Many well-connected EAs had a low opinion of Sam. Some had a high opinion. Should we have stopped the high-opinion ones from affiliating with him? By what means? Equally, suppose he finds skepticism from (say) Will et al, instead of a warm welcome. He probably still starts the FTX future fund, and probably still tries to make a bunch of people regranters. He probably still talks up EA in public. What would it have taken to prevent any of the resultant harms? Likewise, what does not ignoring the base rate of scamminess in crypto actually look like? Refusing to take any money made through crypto? Should we be shunning e.g. Vitalik Buterin now, or any of the community donors who made money speculating?
4
Jason
Not a complete answer, but I would have expected communication and advice for FTXFF grantees to have been different. From many well connected EAs having a low opinion of him, we can imagine that grantees might have been urged to properly set up corporations, not count their chickens before they hatched, properly document everything and assume a lower-trust environment more generally, etc. From not ignoring the base rate of scamminess in crypto, you'd expect to have seen stronger and more developed contingency planning (remembering that crypto firms can and do collapse in the wake of scams not of their own doing!), more decisions to build more organizational reserves rather than immediately ramping up spending, etc.
2
Michael_PJ
The measures you list would have prevented some financial harm to FTXFF grantees, but it seems to me that that is not the harm that people have been most concerned about. I think it's fair for Ben to ask about what would have prevented the bigger harms.
2
Jason
Ben said "any of the resultant harms," so I went with something I saw a fairly high probability. Also, I mostly limit this to harms caused by "the affiliation with SBF" -- I think expecting EA to thwart schemes cooked up by people who happen to be EAs (without more) is about as realistic as expecting (e.g.) churches to thwart schemes cooked up by people who happen to be members (without more). To be clear, I do not think the "best case scenario" story in the following three paragraphs would be likely. However, I think it is plausible, and is thus responsive to a view that SBF-related harms were largely inevitable.  In this scenario, leaders recognized after the 2018 Alameda situation that SBF was just too untrustworthy and possibly fraudulent (albeit against investors) to deal with -- at least absent some safeguards (a competent CFO, no lawyers who were implicated in past shady poker-site scandals, first-rate and comprehensive auditors). Maybe SBF wasn't too far gone at this point -- he hadn't even created FTX in mid-2018 -- and a costly signal from EA leaders (we won't take your money) would have turned him -- or at least some of his key lieutenants -- away from the path he went down? Let's assume not, though.   If SBF declined those safeguards, most orgs decline to take his money and certainly don't put him on podcasts. (Remember that, at least as of 2018, it sounds like people thought Alameda was going nowhere -- so the motivation to go against consensus and take SBF money is much weaker at first.) Word gets down to the rank-and-file that SBF is not aligned, likely depriving him of some of his FTX workforce. Major EA orgs take legible action to document that he is not in good standing with them, or adopt a public donor-acceptability policy that contains conditions they know he can't/won't meet. Major EA leaders do not work for or advise the FTXFF when/if it forms.  When FTX explodes, the comment from major EA orgs is that they were not fully convinced he was
3
Jason
Is there any reason to doubt the obvious answer -- it was/is an easy way for highly-skilled quant types in their 20s and early 30s to make $$ very fast?
3
Nathan Young
Seems like this is a pretty damning conclusion that we haven't actually come to terms with, if it is the actual answer.
5
Jason
It's likely that no single answer is "the" sole answer. For instance, it's likely that people believed they could assume that trusted insiders were significantly more ethical than the average person. The insider-trusting bias has bitten any number of organizations and movements (e.g., churches, the Boy Scouts). However, it seems clear from Will's recent podcast that the downsides of being linked to crypto were appreciated at some level. It would take a lot for me to be convinced that all that $$ wasn't a major factor.

People voting without explaining is good. 

I often see people thinking that this is bragading or something, when actually most people just don't want to write a response; they either like or dislike something.

If it were up to me I might suggest an anonymous "I don't know" button and an anonymous "this is poorly framed" button.

When I used to run a lot of Facebook polls, it was overwhelmingly men who wrote answers, but if there were options to vote, the gender split was much more even. My hypothesis was that a kind of argumentative person, usually a man, tended to enjoy writing long responses more. And so blocking lower-effort/less antagonistic/more anonymous responses meant I heard more from this kind of person.

I don't know if that is true on the forum, but I would guess that the higher effort it is to respond the more selective the responses become in some direction. I guess I'd ask if you think that the people spending the most effort are likely to be the most informed. In my experience, they aren't.

More broadly I think it would be good if the forum optionally took some information about users - location, income, gender, cause area, etc and on answers with more than say 10 votes would dis... (read more)

It seems like we could use the new reactions for some of this. At the moment they're all positive but there could be some negative ones. And we'd want to be able to put the reactions on top level posts (which seems good anyway).

6
Joseph Lemien
I think that it is generally fine to vote without explanations, but it would be nice to know why people are disagreeing or disliking something. Two scenarios come to mind:
  • If I write a comment that doesn't make any claim/argument/proposal and it gets downvotes, I'm unclear what those downvotes mean.
  • If I make a post with a claim/argument/proposal and it gets downvoted without any comments, it isn't clear what aspect of the post people have a problem with.
I remember writing in a comment several months ago about how I think that theft from an individual isn't justified even if many people benefit from it, and multiple people disagreed without continuing the conversation. So I don't know why they disagreed, or what part of the argument they thought was wrong. Maybe I made a simple mistake, but nobody was willing to point it out. I also think that you raise good points regarding demographics and the willingness of different groups of people to voice their perspectives.
2
Nathan Young
I agree it would be nice to know, but in every case someone has decided they do want to vote but don't want to comment. Sometimes I try and cajole an answer, but ultimately I'm glad they gave me any information at all.
1
Rebecca
What is bragading?
4
Brad West🔸
Think he was referring to "brigading", referred to in this thread. Generally, it is voting more out of allegiance or affinity to a particular person rather than from an assessment of the quality of the post/comment.

Some things I don't think I've seen around FTX, which are probably due to the investigation, but still seem worth noting. Please correct me if these things have been said.

  • I haven't seen anyone at the FTXFF acknowledge fault for negligence in not noticing that a defunct phone company (North Dimension) was paying out their grants.
    • This isn't hugely judgemental from me; I think I'd have made this mistake too, but I would like it acknowledged at some point.
    • Since writing this it's been pointed out that there were grants paid from FTX and Alameda accounts also. Ooof.

The FTX Foundation grants were funded via transfers from a variety of bank accounts, including North Dimension-8738 and Alameda-4456 (Primary Deposit Accounts), as well as Alameda-4464 and FTX Trading-9018

  • I haven't seen anyone at CEA acknowledge that they ran an investigation in 2019-2020 on someone who would turn out to be one of the largest fraudsters in the world and failed to turn up anything despite seemingly a number of flags.
    • I remain confused
  • As I've written elsewhere I haven't seen engagement on this point, which I find relatively credible, from one of the Time articles:

"Bouscal recalled speaking to Mac Aulay

... (read more)

Extremely likely that the lawyers have urged relevant people to remain quiet on the first two points and probably the third as well.

6
Nathan Young
Yeah seems right, but uh still seems worth saying.
4
ChanaMessinger
Did you mean for the second paragraph of the quoted section to be in the quote section? 
2
Nathan Young
I can't remember but you're right that it's unclear.
3
Rían O.M
I haven't read too much into this and am probably missing something. Why do you think FTXFF was paying grants via North Dimension? The brief googling I did only mentioned North Dimension in the context of FTX customers sending funds to FTX (specifically this SEC complaint). I could easily have missed something.
7
Jason
Grants were being made to grantees out of North Dimension's account -- at least one grant recipient confirmed receiving one on the Forum (would have to search for that). The trustee's second interim report shows that FTXFF grants were being paid out of similar accounts that received customer funds. It's unclear to me whether FTX Philanthropy (the actual 501c3) ever had any meaningful assets to its name, or whether (m)any of the grants even flowed through accounts that it had ownership of.
3
Nathan Young
Seems pretty bad, no?

Certainly very concerning. Two possible mitigations though:

  • Any finding of negligence would only apply to those with duties or oversight responsibilities relating to operations. It's not every employee or volunteer's responsibility to be a compliance detective for the entire organization.
  • It's plausible that people made some due diligence efforts that were unsuccessful because they were fed false information and/or relied on corrupt experts (like "Attorney-1" in the second interim trustee report). E.g., if they were told by Legal that this had been signed off on and that it was necessary for tax reasons, it's hard to criticize a non-lawyer too much for accepting that. Or more simply, they could have been told that all grants were made out of various internal accounts containing only corporate monies (again, with some tax-related justification that donating non-US profits through a US charity would be disadvantageous).
1
Rían O.M
Ah, thank you!  I searched for that comment. I think this is probably the one you're referencing. 
2
Nathan Young
I know of at least 1 other case.

I know of at least 1 NDA from an EA org silencing someone from discussing bad behaviour that happened at that org. Should EA orgs be in the practice of making people sign such NDAs?

I suggest no.

4
ChanaMessinger
I think I want a Chesterton's TAP for all questions like this that says "how normal are these and why" whenever we think about a governance plan.
2
Peter Wildeford
What's a "Chesterton's TAP"?
2
ChanaMessinger
Not a generally used phrase, just me attempting to point to "a TAP for asking Chesterton's fence-style questions".
2
Peter Wildeford
What's a TAP? I'm still not really sure what you're saying.
4
NunoSempere
"Trigger action pattern", a technique for adopting habits proposed by CFAR <https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps>.
7
Peter Wildeford
Thanks! "Chesterton's TAP" is the most rationalist buzzword thing I've ever heard LOL, but I am putting together that what Chana said is that she'd like there to be some way for people to automatically notice (the trigger action pattern) when they might be adopting an abnormal/atypical governance plan and then reconsider whether the "normal" governance plan may be that way for a good reason even if we don't immediately know what that reason is (the Chesterton's fence)?
2
ChanaMessinger
Oh, sorry! TAPs are a CFAR / psychology technique. https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps
2
Nathan Young
I am unsure what you mean? As in, because other orgs do this it's probably normal? 
4
ChanaMessinger
I have no idea, but would like to! With things like "organizational structure" and "nonprofit governance", I really want to understand the reference class (even if everyone in the reference class does stupid bad things and we want to do something different).
0
Yitz
Strongly agree that moving forward we should steer away from such organizational structures; much better that something bad is aired publicly before it has a chance to become malignant.

Feels like we've had about 3 months since the FTX collapse with no kind of leadership comment. Uh that feels bad. I mean I'm all for "give cold takes" but how long are we talking.

3
Ian Turner
Do you think this is not due to "sound legal advice"?

I am pretty sure there is no strong legal reason for people to not talk at this point. Not like totally confident but I do feel like I've talked to some people with legal expertise and they thought it would probably be fine to talk, in addition to my already bullish model.

2
[comment deleted]

The OpenAI stuff has hit me pretty hard. If that's you also, look after yourself. 

I don't really know what accurate thought looks like here.

3
ChanaMessinger
Yeah, same
1
yanni
I hope you're doing ok Nathan. Happy to chat in DM's if you like ❤️
1
Xing Shi Cai
It will settle down soon enough. Not much will change, as with most breaking news stories. But I am wondering if I should switch to Claude.

I want to say thanks to people involved in the EA endeavour. I know things can be tough at times. You didn't have to care about this stuff, but you do. Thank you, it means a lot to me. Let's make the world better!

I am really not the person to do it, but I still think there needs to be some community therapy here. Like a truth and reconciliation committee. Working together requires trust and I'm not sure we have it. 

Poll: https://viewpoints.xyz/polls/ftx-impact-on-ea

Results: https://viewpoints.xyz/polls/ftx-impact-on-ea/results

6
ChanaMessinger
Curious if you have examples of this being done well in communities you've been aware of? I might have asked you this before. I've been part of an EA group where some emotionally honest conversations were had, and I think they were helpful but weren't a big fix. I think a similar group later did a more explicit and formal version and they found it helpful.
4
Nathan Young
I've never seen this done well. I guess I'd read about the truth and reconciliation committees in South Africa and Ireland.

I intend to strong downvote any article about EA that someone posts on here that they themselves have no positive takes on. 

If I post an article, I have some reason I liked it. Even a single line. Being critical isn't enough on its own. If someone posts an article, without a single quote they like, with the implication it's a bad article, I am minded to strong downvote so that no one else has to waste their time on it.

4
James Herbert
What do you make of this post? I've been trying to understand the downvotes. I find it valuable in the same way that I would have found it valuable if a friend had sent me it in a DM without context, or if someone had quote tweeted it with a line like 'Prominent YouTuber shares her take on FHI closing down'.  I find posts like this useful because it's valuable to see what external critics are saying about EA. This helps me either a) learn from their critiques or b) rebut their critiques. Even if they are bad critiques and/or I don't think it's worth my time rebutting them, I think I should be aware of them because it's valuable to understand how others perceive the movement I am connected to. I think this is the same for other Forum users. This being the case, according to the Forum's guidance on voting, I think I should upvote them. As Lizka says here, a summary is appreciated but isn't necessary. A requirement to include a summary or an explanation also imposes a (small) cost on the poster, thus reducing the probability they post. But I think you feel differently? 

I think the strategy fortnight worked really well. I suggest that another one is put in the calendar (for, say, 3 months' time), and then rather than drip-feeding comment we sort of wait and then burst it out again.

It felt better to me, anyway, to be like "for these two weeks I will engage".

I also thought it was pretty decent, and it caused me to get a post out that had been sitting in my drafts for quite a while.

Joe Rogan (the largest podcaster in the world) giving repeated, concerned, but mediocre x-risk explanations suggests that people who have contacts with him should try to get someone on the show to talk about it.

E.g. listen from 2:40:00, though there were several bits like this during the show.

I notice some people (including myself) reevaluating their relationship with EA. 

This seems healthy. 

When I was a Christian it was extremely costly for me to reduce my identification and resulted in a delayed and much more final break than perhaps I would have wished[1]. My general view is that people should update quickly, and so if I feel like moving away from EA, I do it when I feel that, rather than inevitably delaying and feeling ick.

Notably, reducing one's identification with the EA community need not change one's stance towards effective work/donations/earn to give. I doubt it will change mine. I just feel a little less close to the EA community than I once did, and that's okay.

I don't think I can give others good advice here, because we are all so different. But the advice I would want to hear is "be part of things you enjoy being part of, choose an amount of effort to give to effectiveness and try to be a bit more effective with that each month, treat yourself kindly because you too are a person worthy of love" 

  1. ^

    I think a slow move away from Christianity would have been healthier for me. Strangely I find it possible to imagine still being a Christian, had thi

... (read more)

I've said that people voting anonymously is good, and I still think so, but when I have people downvoting me for appreciating little jokes that other people post on my shortform, I think we've become grumpy.

4
NickLaing
Completely agree, I would love humour to be more appreciated on the forum. Rarely does a joke slip through appreciated/unpunished.
2
titotal
In my experience, this forum seems kinda hostile to attempts at humour (outside of april fools day). This might be a contributing factor to the relatively low population here!
5
Nathan Young
I get that, though it feels like shortforms should be a bit looser. 
1
yanni kyriacos
haha whenever I try humour / sarcasm I get shot directly into the sun. 

The Scout Mindset deserved 1/10th of the marketing campaign of WWOTF. Galef is a great figurehead for rational thinking and it would have been worth it to try and make her a public figure.

4
Ozzie Gooen
I think much of the issue is that:
  1. It took a while to ramp up to being able to do things such as the marketing campaign for WWOTF. It's not trivial to find the people and buy-in necessary. Previous EA books haven't had anything similar.
  2. Even when you have that capacity, it's typically much more limited than we'd want.
I imagine EAs will get better at this over time.

How are we going to deal emotionally with the first big newspaper attack against EA?

EA is pretty powerful in terms of impact and funding.

It seems only an amount of time before there is a really nasty article written about the community or a key figure.

Last year the NYT wrote a hit piece on Scott Alexander and while it was cool that he defended himself, I think he and the rationalist community overreacted and looked bad.

I would like us to avoid this.

If someone writes a hit piece about the community, Givewell, Will MacAskill etc, how are we going to avoid a kneejerk reaction that makes everything worse?

I suggest if and when this happens:

  1. individuals largely don't respond publicly unless they are very confident they can do so in a way that leads to deescalation.

  2. articles exist to get clicks. It's worth someone (not necessarily me or you) responding to an article in the NYT, but if, say, a niche commentator goes after someone, fewer people will hear it if we let it go.

  3. let the comms professionals deal with it. All EA orgs and big players have comms professionals. They can defend themselves.

  4. if we must respond (we often needn't) we should adopt a stance of grace, curiosity and hu

... (read more)