What's your response to this accusation in Time? This behaviour doesn't sound like you, but Naia outright lying would also surprise me, given my interactions with her.
...Bouscal recalled speaking to Mac Aulay immediately after one of Mac Aulay’s conversations with MacAskill in late 2018. “Will basically took Sam’s side,” said Bouscal, who recalls waiting with Mac Aulay in the Stockholm airport while she was on the phone. (Bouscal and Mac Aulay had once dated; though no longer romantically involved, they remain close friends.) “Will basically threatened Tara,”
Nathan's comment here is one case where I really want to know what the people giving agree/disagree votes intended to express. Agreement/disagreement that the behaviour "doesn't sound like Will"? Agreement/disagreement that Naia would be unlikely to be lying? General approval/disapproval of the comment?
I believe that was discussed in the episode with Spencer. Search for 'threatened' in the transcript linked here.
00:22:30 Spencer Greenberg
And then the other thing that some people have claimed is that when Alameda had that original split up early on, where some people in the effective altruism community fled, that you had somehow threatened one of the people that had left. What? What was that all about?
00:22:47 Will MacAskill
Yeah. I mean, so yeah, it felt pretty.
00:22:50 Will MacAskill
This last when I read that because, yeah, certainly didn't have a me...
One person has received jail time. FTXFF had business practices that led to far more harm than Nonlinear's.
An alternate stance on moderation (from @Habryka).
This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that it (I guess) responds to individual posts less often, but more moderated in that it (I guess) rate-limits people more often without stating a reason.
I found it thought-provoking. I'd recommend reading it.
...Thanks for making this post!
One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to parti
Seems like that solution has worked well for years. Why is it not scaling now? It’s not like the forum is loads bigger than a year ago.
Sorry, are you claiming that there is no value to extra lives, but extra deaths are still bad, so the virus that keeps killing living people is worse?
If so, it seems the issue is that if extra lives are valueless, so are extra deaths.
Yeah maybe?
Do you want to discuss it? I can understand the value people might have taken from the post even while disagreeing with its general thrust. Also, it's pretty hard for those people to say why they supported it if that's gonna tar them as racist, so it's possible they have reasons I can't guess.
I guess there is the possibility of brigading, though that always seems less likely to me than people think.
It also seems plausible that people saw it as some kind of free speech bellwether, though that seems mistaken to me (you can just downvote the bad stuff and upvote the good).
Some interesting stuff to read on the topic of when helpful things probably hurt people:
Helping is helpful
Helping is hurtful
Seems like this is a pretty damning conclusion that we haven't actually come to terms with, if it is the actual answer.
I think it's kind of weird that the bar is no longer "<0 karma" but "quick and thorough rejection". I didn't even see the article until this whole thing came up. People are allowed to think articles you don't like have merit; it's one of the benefits of hidden voting.
I can imagine why someone would upvote that. But overall I think it was an article I wouldn't recommend most people spend time on.
It feels like you want there to be some harsher punishment/censorship/broader discussion here. Is that the case?
I think it's kind of weird that the bar is no longer "<0 karma" but "quick and thorough rejection".
This doesn't strike me as weird. It is reasonable that people would react strongly to information suggesting that a position enjoys moderate-to-considerable support in the community.
Let's suppose someone posted content equivalent to the infamous Bostrom listserv message today. I doubt (m)any people of color would walk away feeling comfortable being in this community merely because the post ended up with <0 karma. Information suggesting moderate-to-consi...
I take your (and others') argument to be that the negative score showed the forum "worked as it should" and that the community in some holistic sense rejected the post's claims. That argument is very weak if it is based solely on the score being slightly negative (since that could be obtained just by 51% of votes). The argument is strong if the negative score is strong and signals robust rejection. Roughly, the voting pattern was:
I found the original quote and pointed out you were being misquoted. That seems the relevant update here, rather than the specific words I used to describe it.
I wrote on the post about why I think it was bad, but in short: it is long, and it seems to imply doing genetics work in poor countries that is banned or disapproved of in the West. You seem to say that's an error on my part, in which case please title it differently and make clearer what you are suggesting.
My sense is they weren't able to track their finances. Would you agree with that? Is there evidence I can look at for that?
Caroline claims she was able to track their finances well enough to (a) establish that they couldn't afford to buy out Binance and (b) calculate a −$2.7bn NAV (excluding SamCoins) for Alameda and recommend against $3bn of venture investments, both in 2021. I gave some links for that in the OP. Then they calculated how to repay lenders in June 2022, creating the spreadsheet that was central to the eventual guilty verdict. So I don't think they were completely clueless when it came to 10-figure numbers or the big picture more generally.
I suppose I consider it...
I find a good heuristic is not to push huge changes on other (especially less powerful) people. I would be more sympathetic to pieces arguing that people in the West should be able to test their children for intelligence, or to a piece simply trying to educate people about IQ.
I wrote some more here: https://forum.effectivealtruism.org/posts/gaSHkEf3SnKhcSPt2/the-effective-altruist-case-for-using-genetic-enhancement-to?commentId=CDZrkj23QjGr8u97P
I edited this post several times because I kept finding new things. About +6 karma was from an earlier edit.
The post is at -22 karma. I don't think this is "An instance of white supremacist and Nazi ideology creeping onto the EA Forum".
I was going to say I found this quote very compelling, but the full quote is quite different to what you've quoted in this piece.
Quote in this article:
...If you are worried that an immigrant may be more likely to vote Democrat/Left, commit a crime, retain their non-Western culture or be on welfare and believe that it is et
It seems really important to note that the author is talking about a voluntary option in exchange for immigration as opposed to a mandatory process.
As "Ives Parr" confirmed in this thread, this is not a "voluntary option". This is the state making it illegal for certain people — including people who are not immigrants — to have children because of their "non-Western culture". It is a mandatory, coercive process.
A key quote from the Substack article:
...I can't see this particular form of birth restriction as particularly more egregious than restricting s
I agree in terms of random discussions of race, but this one was related to a theory of impact, so it does seem relevant for this forum.
I don't think we need to fear this discussion; the arguments can be judged on their own merits. If they are wrong, we will find them to be wrong.
If anything, I think on difficult topics those of us with the energy should take time to argue carefully so that those who find the topic more difficult don't have to.
But I'm not in favour of banning discussion of theories of impact, however we look upon them.
But you can couch almost anything in terms of a theory of impact, at least tenuously, including stuff a lot worse than this. The standard can't be "anything goes, as long as the author makes some attempt to tie to some theory of impact."
No online discussion space can be all things to all people (cf. titotal's first and second points).
In general I think large changes shouldn't happen without consent. It seems a pretty bad idea to push this onto poor nations when rich nations don't allow it. Note how this is different from vaccinations and cash transfers, which are both legal and desired by those receiving them.
If Westerners want to genetically enhance their kids they can, and if we give money to those in poverty and they decide to use it for genetic enhancement (unlikely), fair enough. But trialling things in poorer nations that we in the West find deeply controversial seems probably awful, wh...
Seems like the externalities of that action are either covered by the electricity cost or should be offset as a bundle.
In either case it doesn't seem worth removing the functionality.
I am not confident that another FTX-level crisis is less likely to happen, other than that we might all say "oh, this feels a bit like FTX".
Changes:
I believe the theory is that Alameda had accepted money on behalf of FTX, and FTX thought they'd transferred it, but they hadn't. And in the summer Alameda lost it. Honestly, even writing that, it looks like fraud, since they should have transferred it immediately.
On the Sam Harris podcast, MacAskill and Harris seem to think it plausible that most of the $8bn of losses came in summer 2022: that Alameda was accepting funding on behalf of FTX and was meant to transfer it but didn't. To me this seems too generous to FTX. Does anyone know?
How is this as a snapshot of the discussion so far?
You can edit the image here and post as a comment: https://link.excalidraw.com/l/82wslD39E6w/5wUzJOIPnRl
There seem to be some pretty large things I disagree with in each of your arguments:
The second is a situation in which some highly capable AI that developers and users honestly think is safe or fine turns out not to be as safe as they imagined.
This seems exactly the sort of situation I want AI developers to think long and hard about. Frankly, your counterexample looks like an example to me.
...Autonomy seems like a primary feature of the highly capable advanced AI currently being built. None of the comparators typically used shares this feature. Surely, that should
I sense this post shouldn't be a community post. I know it's written to the EA community, but it's discussing a specific project. It feels like it shouldn't be relegated to the community section because of its style.
I think that kind of thinking is appropriate in all these cases. The Wytham Abbey purchase was an investment, but it is reasonable to compare its cost to other investments in these terms.
Thanks for writing this.
I am confused why people are defensive of @Sam Bankman-Fried. I am fond of him as a person, and he was gracious to me personally. I even checked up on him after the crash. But that doesn't change the fact that he committed a massive crime.
It doesn't seem hard to say that I want Sam to be well as a person (and Caroline, Nishad, Gary and anyone else close to them) whilst also saying this was a huge and deliberate fraud. And I don't even think we need to have discussions about utilitarianism. Why trade so sloppily? Why hide it for such a long ...
I don't love this article, but it's fine. In general, many other articles about EA are too negative, so it doesn't really seem worth writing a big correction when the median person who hears about EA probably comes away with roughly the right impression.
Specifically, are new readers gonna believe that EA has done a load of useful soul-searching because this article says so? I doubt it. There are enough articles saying that EAs are a bunch of cynical psychopaths that many will probably assume this is the fluff piece (which it is).
I don't really think this meta discussion...
I guess I feel a lot of things:
This case is harder, but I'll note that in general I don't read EV explanations of spending less than $100mn. If there weren't all the controversy, I doubt I'd care, and I probably don't want EV feeling the need to explain every $20mn expenditure. Though this case may be different; it's hard to think about.
This seems like the wrong order of magnitude at which to apply this logic: $20mn is close to 1% of the money that OpenPhil has disbursed over its lifetime ($2.8bn).
While I would say $100mn is probably too high a bar, buying Wytham Abbey wasn't really a $20mn expenditure, as they'll sell it and get most of this back. So the actual expenditure (costs related to the transaction, running costs, overhead, gain/loss, not including any reputational cost) is probably between $1mn and $4mn (depending on what they manage to sell it for).
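To make the arithmetic explicit, here is a rough sketch; the resale value and frictional costs below are my illustrative assumptions, not figures from EV:

$$
\underbrace{\$20\text{mn}}_{\text{purchase}} \;-\; \underbrace{\$17\text{–}19\text{mn}}_{\text{assumed resale}} \;+\; \underbrace{\$0\text{–}1\text{mn}}_{\text{assumed transaction and running costs}} \;\approx\; \$1\text{–}4\text{mn net}
$$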
You can read what you want of course, but I don't think ginormous cost is the sole factor that justifies scrutiny.
For example, if an EA org spent its donor money on a very expensive watch for their CEO, I would expect some very good justification. The thousands of dollars might not be large in the grand scheme of things, but that's still money that could have gone to an effective cause, and it could be evidence of bad decision-making, wastefulness, or even corruption.
Has EVF ever had a $100mn expenditure? If I recall the 990s and Charity Commission reports correctly, annual expenses were in the tens of millions of USD and GBP, and some of that was EA Funds grantmaking + GWWC passthrough.
Yeah, more broadly I try to only share criticism if it has points that someone thinks are valuable. I don't think it's defensible to say "oh, I thought people might want to read it". I should take responsibility: "why am I putting it in front of people?"
Yeah, that seems right. Not sure what options one can click when crossposting to point that out. (I think the forum has a personal blog option, but I'm not sure that's so appropriate on LessWrong.)
Though sometimes denouncement posts are net positive, right? Probably not the Nonlinear one, but I guess more denouncement of SBF beforehand would have been good.
I also think the quality of the comments has gone down. I have less expectation that I am gonna read interesting things.
A way your decisions are underrated is that this charity, if it existed, would possibly be much easier to fundraise for than GiveWell. Rather than talking about bednets, you'd have pictures of actual children. Perhaps typical donors would give to that competitively.
Have you considered writing these up as Manifund impact grants? I can imagine some people might buy having saved some fraction of a child, and then you'd have more money to spend. Likewise, if you saw promising opportunities you could put them on there.
Finally I find it pretty tragic tha...
Thanks Nathan for the encouragement!
Thanks for the Manifund idea, but to be honest, in the short term at least, I'm focused on OneDay Health and am not looking to do this systematically or set up a charity around it at this stage (although the encouragement has been great and I'd be open to it in future). I also think that if someone were going to start a charity around this, as a few people have suggested, it might be fairly straightforward to target non-EA donors, which I believe, where possible, is better than supping from the limited EA money pots.
The whi...
Yeah, I initially wrote that late at night in a mood. Oops.
I think it might be worth testing what has happened to comments from high-karma users/old accounts in the last year compared to previous years. I would predict a significantly higher drop-off (a sketch of how one might check is below).
Why? I guess I think there is inter-party conflict in EA between those who wish for it to be a charming, well-behaved space and those who wish for anything to be discussable. Each group taxes the other a bit and finds friction. This is expensive to all parties, so demand shifts down. I am a bit di...
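A minimal sketch of how one might test this, assuming a hypothetical CSV export of comments with `author_id`, `author_karma`, `account_created`, and `posted_at` columns (the file name, columns, and thresholds are all my assumptions, not a real Forum export):

```python
# Sketch: compare year-on-year comment counts from high-karma / old accounts.
# Assumes a hypothetical export with columns:
#   author_id, author_karma, account_created, posted_at
import pandas as pd

comments = pd.read_csv(
    "comments.csv", parse_dates=["account_created", "posted_at"]
)

# Cohort: authors with >=1000 karma whose accounts were over 3 years old
# when they commented (both thresholds are arbitrary choices).
cohort = comments[
    (comments["author_karma"] >= 1000)
    & (comments["account_created"] < comments["posted_at"] - pd.DateOffset(years=3))
]

# Comments per calendar year from that cohort.
per_year = cohort.groupby(cohort["posted_at"].dt.year).size()
print(per_year)

# Year-on-year % change; a sharper decline in the most recent year
# than in earlier years would support the prediction above.
print(per_year.pct_change().mul(100).round(1))
```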
It resolved to my personal credence, so you shouldn't take that more seriously than "Nathan thinks it unlikely".