All of Nathan Young's Comments + Replies

It resolved to my personal credence so you shouldn’t take that more seriously than “nathan thinks it unlikely that”

This doesn't feel like a great response to me.

2
David Mathers
7h
I find it easy to believe there was a heated argument but no threats, because it is easy for things to get exaggerated, and the line between telling someone you no longer trust them because of a disagreement and threatening them is unclear when you are a powerful person who might employ them. But I find Will's claim that the conversation wasn't even about whether Sam was trustworthy, or anything related to that, really quite hard to believe. It would be weird for someone to be mistaken or exaggerate about that, and I feel like a lie is unlikely, simply because I don't see what anyone would gain from lying to TIME about this.

What's your response to this accusation, in Time? This behaviour doesn't sound like you but Naia outright lying would surprise me from my interactions with her. 

Bouscal recalled speaking to Mac Aulay immediately after one of Mac Aulay’s conversations with MacAskill in late 2018. “Will basically took Sam’s side,” said Bouscal, who recalls waiting with Mac Aulay in the Stockholm airport while she was on the phone. (Bouscal and Mac Aulay had once dated; though no longer romantically involved, they remain close friends.) “Will basically threatened Tara,”

... (read more)

Nathan's comment here is one case where I really want to know what the people giving agree/disagree votes intended to express. Agreement/disagreement that the behaviour "doesn't sound like Will"? Agreement/disagreement that Naia would be unlikely to be lying? General approval/disapproval of the comment?

I believe that was discussed in the episode with Spencer. Search for 'threatened' in the transcript linked here.
 

00:22:30 Spencer Greenberg

And then the other thing that some people have claimed is that when Alameda had that original split up early on, where some people in the effective altruism community fled, that you had somehow threatened one of the people that had left. What? What was that all about?

00:22:47 Will MacAskill

Yeah. I mean, so yeah, it felt pretty.

00:22:50 Will MacAskill

This last when I read that because, yeah, certainly didn't have a me... (read more)

Where does it talk about non-immigrants or non-voluntary in this quote?

3
Concerned EA Forum User
9h
Another quote that hopefully makes it even clearer: Also, a quote from "Ives Parr" in this very thread:

1 person has received jail time. FTXFF had business practices that led to far more harm than Nonlinear's. 

8
Jason
6h
What does "involved in" mean? The most potentially plausible version of this compares people peripherally involved in FTX (under a broad definition) to the main players in Nonlinear.

I think jail time counts as social sanction!

An alternate stance on moderation (from @Habryka).

This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that (I guess) it responds to individual posts less often, but more moderated in that (I guess) it rate-limits people more, without giving reasons.

I found it thought provoking. I'd recommend reading it.

Thanks for making this post! 

One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to parti

... (read more)

Seems like that solution has worked well for years. Why is it not scaling now? It’s not like the forum is loads bigger than a year ago.

2
Jason
3h
I expect an increase in malicious actors as AI develops, both because of greater acute conflict with people with a vested interest in weakening EA, and because AI assistance will lower the barrier to plausible malicious content. I think it would take time and effort to develop consensus on community rules related to this kind of content, and so would rather not wait until the problem was acutely upon us.

Sorry, are you claiming that there is no value to extra lives, but extra deaths are still bad, so a virus that keeps killing living people is worse?
 

If so it seems the issue is that if extra lives are valueless, so are extra deaths.

Yeah maybe? 

Do you want to discuss it? I can understand value people would have taken from the post even while disagreeing with its general thrust. Also it's pretty hard for those people to suggest why they supported it if that's gonna tar them with being a racist, so it's possible they have reasons I can't guess. 

I guess there is the possibility of brigading though that always seems less likely to me than people seem to think it is. 

It also seems plausible that people saw it as some kind of free speech bellwether, though that seems mistaken to me (you can just downvote the bad stuff and upvote the good).

Seems a shame. My understanding was they did good work.

Some interesting stuff to read on the topic of when helpful things probably hurt people:

Helping is helpful

  • My understanding is that minimum wage literature generally finds the minimum wage is good on net

Helping is hurtful

seems like this is a pretty damning conclusion that we haven't actually come to terms with if it is the actual answer

5
Jason
1d
It's likely that no single answer is "the" sole answer. For instance, it's likely that people believed they could assume that trusted insiders were significantly more ethical than the average person. The insider-trusting bias has bitten any number of organizations and movements (e.g., churches, the Boy Scouts). However, it seems clear from Will's recent podcast that the downsides of being linked to crypto were appreciated at some level. It would take a lot for me to be convinced that all that $$ wasn't a major factor.

I think it's kind of weird that the bar is no longer "<0 karma" but "quick and thorough rejection". I didn't even see the article until this whole thing came up. People are allowed to think articles you don't like have merit, it's one of the benefits of hidden voting. 

I can imagine why someone would upvote that. But overall I think it was an article I wouldn't recommend most people spend time on. 

It feels like you want there to be some harsher punishment/censorship/broader discussion here. Is that the case?

I think it's kind of weird that the bar is no longer "<0 karma" but "quick and thorough rejection".

This doesn't strike me as weird. It is reasonable that people would react strongly to information suggesting that a position enjoys moderate-to-considerable support in the community.

Let's suppose someone posted content equivalent to the infamous Bostrom listserv message today. I doubt (m)any people of color would walk away feeling comfortable being in this community merely because the post ended up with <0 karma. Information suggesting moderate-to-consi... (read more)

I take your (and others') argument to be that the negative score showed the forum "worked as it should" and that the community in some holistic sense rejected the post's claims. That argument is very weak if it is based solely on the score being slightly negative (since that could be obtained just by 51% of votes). The argument is strong if the negative score is strong and signals robust rejection. Roughly, the voting pattern was:

  • Group A - early rejection, -14 score at least
  • Group B - subsequent support, +38 score at least (possible selection effect, unknow
... (read more)

I found the original quote and pointed out you were being misquoted. That seems the relevant update here, over the specific words I used to describe that. 

I wrote about why I think the original post was bad on the post, but in short, it is long and seems to imply doing genetics work that is banned/disapproved of in the West in poor countries. You seem to say that's an error on my part, in which case, please title it differently and make clearer what you are suggesting. 

1
Ives Parr
1d
That's fine. I was adding more clarity. I think the title is accurate and the content of my article is clear that I am not suggesting violating anyone's consent or the law. Did you read the article? I don't see how you draw these conclusions from the title alone or how the title is misleading. I gave policy recommendations which mostly involved funding research.

My sense is they weren't able to track their finances. Would you agree with that? Is there evidence I can look at for that?

Caroline claims she was able to track their finances well enough to (a) establish that they couldn’t afford to buy out Binance and (b) calculate a -$2.7bn NAV-excluding SamCoins for Alameda and recommend against $3bn of venture investments, both in 2021. I gave some links for that in OP. Then they calculated out how to repay lenders in June 2022, creating the spreadsheet that was central to the eventual guilty findings. So I don’t think they were completely clueless when it came to 10 figure numbers or the big picture more generally.

I suppose I consider it... (read more)

2
Ives Parr
1d
I don't know how to respond to this. It's clearly laid out in a list at the end of 8 points in the conclusion. I am not advocating for awful stuff, nor illegal stuff, nor coercive stuff. I don't want to trial it there--but I want the developing world to have access to this technology so couples can voluntarily use it. No. I think people are either not reading it or being deliberately dishonest and I don't think it's because of the title. 
7
Jason
1d
Among other things, I don't think that solution scales well. As the voting history for this post shows, people with these kinds of views may have some voting power at their disposal (whether that be from allies or brigadiers). So we'd need a significant amount of voting power to quickly downvote this kind of content out of sight.

As someone with a powerful strong downvote, I try to keep the standards for deploying that pretty high -- to use a legal metaphor, I tend to give a poster a lot of "due process" before strong downvoting, because a -9 can often contribute to the effect of squelching someone's voice.

If we rely on voters to downvote content like this, that feels like either asking them to devote their time to careful reading of distasteful stuff they have no interest in, or asking them to actively and reflexively downvote stuff that looks off-base based on a quick scan. As to the first, few if any of us get paid for this. I think the latter is actually worse than an appropriate content ban -- it risks burying content that should have been allowed to show on the frontpage for a while. If we don't deploy strong votes on fairly short notice, the content is going to be on the front page for a while and the problems that @titotal brought up would strongly apply.

Finally, I am very skeptical that there would be any actionable, plausibly cost-effective actions for EAs to take even if we accepted much of the argument here (or on other eugenics & race topics). That does further reassure me that there is no great loss in expecting those who wish to have those discussions to do so in their own space. The Forum software is open-source; they can run their own server.

I find a good heuristic is not to push huge changes on other (especially less powerful) people. I would be more sympathetic to pieces arguing that people in the West should be able to test their children for intelligence or just a piece trying to educate people about IQ. 

I wrote some more here: https://forum.effectivealtruism.org/posts/gaSHkEf3SnKhcSPt2/the-effective-altruist-case-for-using-genetic-enhancement-to?commentId=CDZrkj23QjGr8u97P 

-1
Ives Parr
2d
I am not advocating for pushing changes on anyone. I am advocating for the voluntary use of this technology and accelerating research. See more in my response on that comment.

I edited this post several times because I kept finding new things. About +6 karma was from an earlier edit. 

The post is at -22 karma. I don't think this is "An instance of white supremacist and Nazi ideology creeping onto the EA Forum".

I was going to say I found this quote very compelling, but the full quote is quite different to what you've quoted in this piece.

Quote in this article:

If you are worried that an immigrant may be more likely to vote Democrat/Left, commit a crime, retain their non-Western culture or be on welfare and believe that it is et

... (read more)

It seems really important to note that the author is talking about a voluntary option in exchange for immigration as opposed to a mandatory process.

As "Ives Parr" confirmed in this thread, this is not a "voluntary option". This is the state making it illegal for certain people — including people who are not immigrants — to have children because of their "non-Western culture". It is a mandatory, coercive process. 

A key quote from the Substack article:

I can't see this particular form of birth restriction as particularly more egregious than restricting s

... (read more)
-6
Ives Parr
2d

I agree in terms of random discussions of race, but this one was related to a theory of impact, so it does seem relevant for this forum.

I don't think we need to fear this discussion, the arguments can be judged on their own merit. If they are wrong, we will find them to be wrong.

If anything, I think on difficult topics those of us with the energy should take time to argue carefully so that those who find the topic more difficult don't have to. 

But I'm not in favour of banning discussion of theories of impact, however we look upon them.

Jason
2d

But you can couch almost anything in terms of a theory of impact, at least tenuously, including stuff a lot worse than this. The standard can't be "anything goes, as long as the author makes some attempt to tie to some theory of impact."

No online discussion space can be all things to all people (cf. titotal's first and second points).

In general I think large changes shouldn't happen without consent. Seems a pretty bad idea to push onto poor nations when rich nations don't allow this. Note how this is different from vaccinations and cash transfers which are both legal and desired by those receiving them.

If westerners want to genetically enhance their kids they can, and if we give money to those in poverty and they decide to use it for genetic enhancement (unlikely), fair enough. But trialling things that we in the west find deeply controversial in poorer nations seems probably awful, wh... (read more)

-3
Ives Parr
2d
It seems unfair to me that people are downvoting me without reading my article. What function does the downvote serve, except to suppress ideas, if those using it are not even reading the article? This seems out of line with EA virtues. At one point (no longer, it appears), it appeared my article was not even searchable with the EA Forum search function, and the analytics suggested that the average person who viewed it was reading about 10% (4-5 minutes/40 minutes) of it. Perhaps they are reading it but not "closely"; I cannot be certain. Maybe it's inflated by people who are responding to comments or just looking again.

But I have responded in a respectful manner to constructive comments. If someone has constructive thoughts, they can share them in the comments. I think that would contribute more to the EA community and improve people's ability to think clearly about the issue than merely downvoting. I have also asked how to better advocate for my cause, and still received many downvotes. What can I do, and what should I do, to avoid being downvoted by people who are (most likely) not even reading my article?

In this article, I am not advocating for violating consent. Why do you think otherwise? I said: In my policy proposal, I am not advocating for forcing this on people. I do say: None of the policy proposals involve forcing this on anyone. I want to make the technology available for voluntary use, and I think that should be EA's aim. The technologies that I mention are emerging technologies, and most of them have yet to be created. I want EA to accelerate the advances so people can voluntarily use the technology. I am not advocating violating consent.

The use of PGT is not entirely legal for cosmetic/IQ purposes in all countries (for example, it is prohibited in Germany), but it is legal in the US and some others. IVF is legal almost all over the world. Besides, restricting people from making voluntary reproductive choices is actually coercion, not lifting legal restrictions.

Seems like the externalities of that action are either covered by the electricity cost or should be offset as a bundle. 

In either case it doesn't seem worth removing the functionality. 

I am not confident that another FTX-level crisis is less likely to happen, other than that we might all say "oh, this feels a bit like FTX".

Changes:

  • Board swaps. Yeah maybe good, though many of the people who left were very experienced. And it's not clear whether there are due diligence people (which seems to be what was missing).
  • Orgs being spun out of EV and EV being shuttered. I mean, maybe good though feels like it's swung too far. Many mature orgs should run on their own, but small orgs do have many replicable features.
  • More talking about honesty. Not rea
... (read more)
6
Ben Millwood
1d
For both of these comments, I want a more explicit sense of what the alternative was.

Many well-connected EAs had a low opinion of Sam. Some had a high opinion. Should we have stopped the high-opinion ones from affiliating with him? By what means?

Equally, suppose he finds skepticism from (say) Will et al, instead of a warm welcome. He probably still starts the FTX Future Fund, and probably still tries to make a bunch of people regranters. He probably still talks up EA in public. What would it have taken to prevent any of the resultant harms?

Likewise, what does not ignoring the base rate of scamminess in crypto actually look like? Refusing to take any money made through crypto? Should we be shunning e.g. Vitalik Buterin now, or any of the community donors who made money speculating?
3
Jason
2d
Is there any reason to doubt the obvious answer -- it was/is an easy way for highly-skilled quant types in their 20s and early 30s to make $$ very fast?

I believe the theory is that Alameda had accepted money on behalf of FTX, and FTX thought that it had been transferred, but it hadn't. And in the summer Alameda lost it. Honestly, even writing that, it looks like fraud, since they should have transferred it immediately.

6
Ben Millwood
5h
Even Alameda accepting money for FTX at all was probably bank fraud, even if they had transferred it immediately, because they told the banks that the accounts would not be used for that (there's a section in the OP about this). See also this AML / KYC explainer, which I admit I have not read all of but seems pretty good. In particular: (I found out about this explainer because Matt Levine at Bloomberg linked to it; a lot of what I know about financial crime in the US I learned from his Money Stuff column)

On the Sam Harris podcast, MacAskill and Harris seem to think it plausible that most of the $8bn of losses came in Summer 2022 - that Alameda was accepting funding on behalf of FTX and was meant to transfer it but didn't. To me this seems too generous to FTX. Does anyone know?

6
FTXwatcher
2d
The framing sounds too generous, since I do not think there was any plan to transfer it at any point. But I can see a grain of truth here. As covered in OP, SBF seemed to think Alameda's balance sheet by October 2022 was very roughly:

  • $18bn of illiquid assets
  • $6bn of liquid assets
  • -$15bn of customer liabilities

Then the customers started to withdraw, $5bn in liquid assets was returned, then they more or less ran out and declared bankruptcy.

So what might this have looked like before returning money to crypto lenders in June 2022? I did not find any concrete figure for what the size of those transfers was, but I've generally assumed around $10bn. If so then the pre-return state was:

  • $18bn of illiquid assets
  • $16bn of liquid assets
  • -$15bn of customer liabilities
  • -$10bn of crypto lender liabilities

So on this hypothetical, at this point Alameda/FTX have liquid assets exceeding customer liabilities; in a crisis of confidence they can meet a full bank run from customers, though of course at the cost of still going bankrupt and blowing up the lenders.

To be clear, this is all very rough and speculative. But I can readily imagine that Alameda had sufficient liquid assets to cover customer liabilities before they repaid their other major funding source. Of course the problems of the commingled assets, investing of customer assets, lying to everyone, and various other crimes would remain.
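The rough arithmetic in the comment above can be sketched directly. This is a minimal illustration only: every figure is one of the comment's own speculative estimates, including the assumed ~$10bn repaid to crypto lenders.

```python
# All figures in $bn, taken from the speculative estimates above.
illiquid_assets = 18
liquid_assets_oct = 6          # liquid assets by October 2022
customer_liabilities = 15

# Assumed ~$10bn repaid to crypto lenders in June 2022 (a guess, per the comment).
lender_repayment = 10
liquid_assets_pre_june = liquid_assets_oct + lender_repayment
lender_liabilities = lender_repayment

# Could a full customer bank run have been met from liquid assets alone?
can_meet_run_pre_june = liquid_assets_pre_june >= customer_liabilities
can_meet_run_oct = liquid_assets_oct >= customer_liabilities

# Paper net position pre-June (ignoring how illiquid the $18bn really was).
net_pre_june = (illiquid_assets + liquid_assets_pre_june
                - customer_liabilities - lender_liabilities)

print(can_meet_run_pre_june)  # True: 16 >= 15, barely
print(can_meet_run_oct)       # False: 6 < 15
print(net_pre_june)           # 9
```

On these assumed numbers, repaying the lenders is exactly what flipped Alameda/FTX from being (barely) able to meet a full customer run to being unable to.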
4
Jason
3d
What does it mean for "losses" to have "come" (or not) at a specific point in time in this context? I can accept that depositor losses had not "come" if FTX would have been able to pay depositors the fiat or crypto in their accounts in a matter of a few days. I specify that time period because depositors should have understood that a few days' delay might be necessary for non-fraud reasons like technical failure. Beyond that, I'd probably say the loss had come but that there were prospects for mitigation. If a thief takes my car, I'd normally say the loss occurred when I was deprived of my right (here, of possession). That I might get the car back from the person to whom the thief sold, or cash compensation from the thief's bank account, does not negate the timing of that loss.

What do you mean by "(Author of the post)"?

8
Habryka
3d
I am the author of the linked post that DPiepgrass was commenting on: https://www.lesswrong.com/posts/HCAyiuZe9wz8tG6EF/my-tentative-best-guess-on-how-eas-and-rationalists
1
Tobias Dänzer
3d
He meant that he wrote the linked post on hypotheses for how EAs and rationalists sometimes go crazy.

How is this as a snapshot of the discussion so far?

You can edit the image here and post as a comment: https://link.excalidraw.com/l/82wslD39E6w/5wUzJOIPnRl 

Kind of frustrating that there isn't a single place for it to be discussed. 

There seems some pretty large things I disagree with in each of your arguments:

The second is a situation in which some highly capable AI that developers and users honestly think is safe or fine turns out not to be as safe as they imagined.

This seems exactly the sort of situation I want AI developers to think long and hard about. Frankly your counterexample looks like an example to me.

Autonomy seems like a primary feature of the highly capable advanced AI currently being built. None of the comparators typically used shares this feature. Surely, that should

... (read more)

I sense this post shouldn't be a community post. I know it's written to the EA community, but it's discussing a specific project. Feels like it shouldn't be relegated to the community section because of style.

2
Robi Rahman
6d
I agree. Mods, is there a reason why I can't downvote the community tag on this post?

I think that kind of thinking is appropriate in all these cases. The Wytham Abbey purchase was an investment, but it is reasonable to compare its cost to other investments in these terms.

Thanks for writing this.

I am confused why people are defensive of @Sam Bankman-Fried. I am fond of him as person and he was gracious to me personally. I even checked up on him after the crash. But that doesn't change the fact he did a massive crime. 

It doesn't seem hard to say that I want Sam to be well as a person (and Caroline, Nishad, Gary and anyone else close to them) whilst also saying this was a huge and deliberate fraud. And I don't even think we need to have discussions about utilitarianism. Why trade so sloppily? Why hide it for such a long ... (read more)

I don't love this article but it's fine. In general many other articles about EA are too negative so it doesn't really seem worth writing a big correction when the median person who hears about EA probably hears about the right thing.

Specifically, are new readers gonna believe that EA has done a load of useful soul-searching because this article says it? I doubt it. There are enough articles saying that EAs are a bunch of cynical psychopaths that many will probably assume this is the fluff piece (which it is).

I don't really think this meta discussion... (read more)

Maybe, but nonetheless it is true. I don't read 'em. Do you?

I guess I feel a lot of things:

  • Empathy - I try to save slugs and snails etc, so I get this feeling that we should take all lives mattering seriously. There is something caring and beautiful in this and I like this intuition
  • Confusion - I have felt this about veganism a bit recently. I don't really think it's worth the amount of stress it caused me to be vegan, in terms of animal lives saved. Perhaps I should do it for a month a year to remind me of the cost, but until I hit diminishing returns on my work I should probably do that. I used to think "if I wer
... (read more)
8
yanni kyriacos
23d
"I used to think "if I were in slave owning times I should have divested entirely" but I dunno these days. Probably my anti-slavery resources were better spent first and foremost funding abolitionists. I don't know the exact cost" - I appreciate this level of honesty and skepticism ❤️

This case is harder, but I'll note that in general I don't read EV explanations of spending less than $100mn. If there wasn't all the controversy, I doubt I'd care and probably I don't want EV feeling the need to explain every $20mn expenditure. Though this case may be different, hard to think about.

This seems like the wrong order of magnitude to apply this logic at, $20mn is close to 1% of the money that OpenPhil has disbursed over its lifetime ($2.8b)

While I would say $100mn is probably too high a bar, buying Wytham Abbey wasn't really a $20mn expenditure, as they'll sell it and get most of this back. So the actual expenditure (costs related to the transaction, running costs, overhead, gain/loss, not including any reputational cost) of the purchase is probably between $1mn and $4mn (depending on what they manage to sell it for).
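That net-cost estimate can be sketched as back-of-envelope arithmetic. The resale value and transaction/running costs below are hypothetical placeholders chosen to land inside the comment's $1mn-$4mn range, not known figures:

```python
# All figures in $mn. Only purchase_price comes from the discussion;
# the other two numbers are hypothetical placeholders for illustration.
purchase_price = 20.0
assumed_resale_value = 17.5          # hypothetical
transaction_and_running_costs = 0.5  # hypothetical

# Net cost = what was paid, minus what resale recovers, plus frictional costs.
net_cost = purchase_price - assumed_resale_value + transaction_and_running_costs
print(net_cost)  # 3.0, inside the $1mn-$4mn range suggested above
assert 1 <= net_cost <= 4
```

The point is just that the headline $20mn and the economic cost differ by roughly the resale value, which is why the two sides of this thread are talking about different orders of magnitude.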

You can read what you want of course, but I don't think ginormous cost is the sole factor that justifies scrutiny. 

For example, if an EA org spent its donor money on a very expensive watch for their CEO, I would expect some very good justification. The thousands of dollars might not be large in the grand scale, but that's still money that could have gone to an effective cause, and it could be evidence of bad decision making, wastefulness, or even corruption.

Has EVF ever had a $100MM expenditure? If I recall the 990s and Charity Commission reports, annual expenses were in the tens of millions of USD and GBP, and some of that was EA Funds grantmaking + GWWC passthrough.

Yeah more broadly I try to only share criticism if it has points that someone thinks are valuable. I don't think it's defensible to say "oh I thought people might want to read it". I should take responsibility - "why am I putting it in front of people".

I have a piece I'm writing with some similar notes to this, may I send it to you when I'm done?

4
Richard Y Chappell
1mo
Sure, feel free!

Okay, this should be a personal blog then I think

Yeah that seems right. Not sure what options one can click on crossposting to point that out. (I think the forum has a personal blog option, but I'm not sure that's so appropriate on LessWrong)

2
Lorenzo Buonanno
1mo
Your post is tagged personal blog on LessWrong, idk if you tagged it that way explicitly or if it was done by mods. For cross-posts to the EA forum, I think you might have an option in the ... menu at the top, or you can ask mods to move it to personal blog

  • How could it have better signalled it wasn't a puff piece?
  • It sort of is a bit of a puff piece. I tried to talk about some negatives but I don't know that it's particularly even handed.
  • I tend to get quite a lot of downvotes in general, so some is probably that.
  • Beyond that, the title is quite provocative - I just used the title on my blog, but I guess I could have chosen something more neutral 
2
Owen Cotton-Barratt
1mo
Yeah the tone makes sense for a personal blog (and in general the piece makes more sense for an audience who can mostly be expected to know Katja already). I think it could have signalled more not-being-a-puff-piece by making the frame less centrally about Katja and more about the virtues you wanted to draw attention to. It's something like: those, rather than the person, are the proper object-of-consideration for these large internet audiences. Then you could also mention that the source of inspiration was the person.

Though sometimes denouncement posts are net positive right? Like probably not the nonlinear one, but I guess more denouncement of SBF prior would have been good. 

I agree it's sort of a red flag, but it seems relevant whether this is a puff piece, right? 

2
Owen Cotton-Barratt
1mo
Extremely relevant for my personal assessment! For the social fabric stuff it seems more important whether it's legibly not a puff piece. Had I downvoted (and honestly I was closer to upvoting), the intended signal would have been something like "small ding for failing to adequately signal that it's not a puff piece" (such signalling is cheaper for things that actually aren't puff pieces, so asking for it is relatively cheap and does some work to maintain a boundary against actual puff pieces). It would have warranted a bigger ding if I'd thought it was a puff piece. It's still possible I'm miscalibrated and this is transparently not a puff piece to ~everyone. (Although the voting pattern on my comment suggests that my feeling was not an outlier ... I guess this is unsurprising to me as one of the reasons I wrote the comment was seeing that your post had some downvotes and thinking it might be helpful to voice my guess about why.)

If I were to do another, what should it be about?

It's very easy to use for personal forecasts.

I think also the quality of the comments has gone down. I have less expectation that I am gonna read interesting things.

A way your decisions are underrated is that this charity, if it existed, would possibly be much easier to fundraise for than GiveWell. Like rather than talking about bednets you'd have pictures of actual children. Perhaps typical donors would give to that competitively.

Have you considered writing these up as Manifund impact grants? I can imagine some people might buy having saved some fraction of a child, and then you'd have more money to spend. Likewise if you saw promising opportunities you could put them on there.

Finally I find it pretty tragic tha... (read more)

Thanks Nathan for the encouragement!

Thanks for the manifund idea, but to be honest in the short term at least I'm focused on OneDay Health and am not looking to either do this systematically or set up a charity around this at this stage (although the encouragement has been great and I'd be open to it in future). I also think if someone was going to start a charity around this, as a few people have suggested it might be fairly straightforward to target non-EA donors which I believe where possible is better than supping from the limited EA money pots.

The whi... (read more)

Yeah I initially wrote that late at night in a mood. Ooops.

I think it might be worth testing what has happened to comments from high karma users/old accounts in the last year compared to previous years. I would predict a significantly higher drop off. 

Why? I guess I think there is inter-party conflict in EA between those who wish for it to be a charming well behaved space and those who wish anything to be able to be discussed. And each group taxes the other a bit and finds friction. This is expensive to all parties so demand shifts down. I am a bit di... (read more)

If you compare this recent post I did, it has about the same karma on LessWrong and the Forum, but on LessWrong there are 36 comments; on here, 6. If I were gonna focus on writing forecasting articles, I'd write for LessWrong.

2
JWS
1mo
(I posted this response to a pre-edit version of Nathan's post) I'd sort of noticed your EA Forum drop-off (though perhaps this is my pattern-matching after the fact), if you had any more thoughts on the 'vibe-shift' as you perceive would be open to hearing them - either in reply or DM. When I post my 'EA-Forum Data in 2023' post I'll make a note to see if there's a drop-off in commenting/Forum engagement. I do think Forum usage in late 2022/early 2023 was abnormally high (for obvious reasons)