Hmm. I think if I had been in an abusive situation such as the ones OP describes, and I (privately) went to the Community Health team about it, and the only outcomes were what you just listed, I would have considered it a waste of my time and emotional energy.
Edit: waste of my time relative to "going public", that is.
We were familiar with many (but not all) of the concerns raised in Ben’s post based on our own investigation.
What happened as a result of this, before Ben posted?
Thanks for writing, I hope things change.
PS: I think the name "Ratrick Bayesman" will live in my head for at least 5 years
Yeah. (as a note I am also a fan of the animal welfare stuff).
This is a good suggestion.
I think most of this stuff is too dry to hold my attention by itself. I would like a social environment that was engaging yet systematically directed my attention more often to things I care about. This happens naturally if I am around people who are interesting/fun but also highly engaged and motivated about a topic. As such I have focused on community and community spaces more than, for example, finding a good randomista newsletter or extracting randomista posts f...
from private convos I am pretty sure that the tweet about mike vassar is in reference to this https://forum.effectivealtruism.org/posts/7b9ZDTAYQY9k6FZHS/abuse-in-lesswrong-and-rationalist-communities-in-bloomberg?commentId=FCcEMhiwtkmr7wS84 (which is about Mike Vassar, not Jacy)
there may or may not be other things informing it, but it's not about Jacy.
"It doesn't exist" is too strong for sure. I consider GiveWell central to the randomista part and it was my entrypoint into EA at large. Founder's Pledge was also pretty randomista back when I was applying for a job there in college. I don't know anything about HLI.
There may be a thriving community around GiveWell etc that I am ignorant to. Or maybe if I tried to filter out non-randomista stuff from my mind then I would naturally focus more on randomista stuff when engaging EA feeds.
The reality is that I find stuff like "people just doing AI ca...
I can certainly empathize with the longtermist EA community being hard to ignore. It's much flashier and more controversial.
For what it's worth I think it would be possible and totally reasonable for you to filter out longtermist (and animal welfare, and community-building, etc.) EA content and just focus on the randomista stuff you find interesting and inspiring. You could continue following GiveWell, Founders Pledge's global health and development work, and HLI. Plus, many of Charity Entrepreneurship's charities are randomista-influenced.
For exampl...
...17. I get a lot of messages these days about people wanting me to moderate or censor various forms of discussion on LessWrong that I think seem pretty innocuous to me, and the generators of this usually seem to be reputation related. E.g. recently I've had multiple pretty influential people ping me to delete or threaten moderation action against the authors of posts and comments talking about: How OpenAI doesn't seem to take AI Alignment very seriously, why gene drives against Malaria seem like a good idea, why working on intelligence enhancement is a good
- I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very "randomista" flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.
I feel like I joined EA for this "randomista" flavored version of the movement. I don't really feel like the version of EA I thought I was joining exists even ...
Just curious - do you not feel like GiveWell, Happier Lives Institute, and some of Founders Pledge's work, for example, count as randomista-flavoured EA?
My critique seems resilient to this consideration. The fact that managers do not publicly criticize employees is not evidence of discomfort or awkwardness. Under the very obvious model of "how would a manager get what they want re: an employee", public criticism is not a sensical lever to want to use.
Related, there’s far more public criticism from Google employees about their management than there is from their management about their employees. This plays out on a lot of levels.
The nature of A having power over B is that A doesn't need to coordinate with others in order to get what A wants with respect to B. It would be really bizarre for management to publicly criticize employees whom they can just fire. There is simply no benefit. This explains much more of the variance than anything to do with awkwardness or "punching down".
I agree that management doesn't get much benefit by giving valuable public negative feedback to people. However, I'd push back on the idea that management can "just fire" people they don't like.
Many managers are middle managers. They likely have a lot of gripes with their teams, but they need to work with someone, and often, it would be incredibly awkward or controversial to fire a lot of people.
As somebody in the industry I have to say Alameda/FTX pushing MAPS was surreal and cannot be explained as good faith investing by a competent team.
As far as I can tell there is no reason to condemn the fraud but not the stuff SBF openly endorsed, except that the fraud happened and hit the "bad" outcome.
From https://conversationswithtyler.com/episodes/sam-bankman-fried/
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noni...
I have to say I didn't expect "all remaining assets across ftx empire 'hacked' and apps updated to have malware" as an outcome.
(as an aside it also seems quite unusual to apply this impartiality to the finances of EAs. If EAs were going to be financially impartial it seems like we would not really encourage trying to earn money in competitive financially zero sum ways such as a quant finance career or crypto trading)
Seriously, imagine dedicating your life to EA and then finding out you lost your life savings because one group of EAs defrauded you and the other top EAs decided you shouldn't be alerted about it for as long as possible specifically because it might lead to you reaching safety. Of course none of the in-the-know people decided to put up their own money to defend against a bank run, just decided it would be best if you kept doing so.
In that situation I have to say I would just go and never look back.
Aspiring to be impartially altruistic doesn't mean we should shank each other. The so-impartial-we-will-harvest-your-organs-and-steal-your-money version of EA has no future as a grassroots movement or even room to grow as far as I can tell.
This community norm strategy works if you determine that retaining socioeconomically normal people doesn't actually matter and you just want to incubate billionaires, but I guess we have to hope the next billionaire is not so (allegedly) impartial towards their users' welfare.
I would like to be involved in the version of EA where we look after each other's basic wellness even if it's bad for FTX or other FTX depositors. I think people will find this version of EA more emotionally safe and inspiring.
To me there is just no normative difference between trying to suppress information and actively telling people they should go deposit on FTX when distress occurred (without communicating any risks involved), knowing that there was a good chance they'd get totally boned if they did so. Under your model this would be no net detriment, ...
Hm, yeah I guess my intuition is the opposite. To me, one of the central parts of effective altruism is that it's impartial, meaning we shouldn't put some people's welfare over others'.
I think in this case it's particularly important to be impartial, because EA is a group of people that benefitted a lot from FTX, so it seems wrong for us to try to transfer the harms it is now causing onto other people.
What I think: I think that FTX was insolvent such that even if FTT price was steady, user funds were not fully backed. That is, they literally bet the money on a speculative investment and lost it, and this caused a multibillion dollar financial hole. It is also possible that some or all of the assets - liabilities deficit was caused by a hack that happened months ago that they did not report.
As far as I can tell, you don't think this. Well, if you really don't think that, and it turns out you were wrong, then I'd like you to update. I think probabil...
What I think: I think that FTX was insolvent such that even if FTT price was steady, user funds were not fully backed.
Yes you are right, I disagree. I think this collapse happened because of the FTT "attack" (or honestly, huge vulnerability) and Alameda was forced to defend. Without this depletion, SBF or FTX could cover these funds in a routine sense and we wouldn't hear about this.
...That is, they literally bet the money on a speculative investment and lost it, and this caused a multibillion dollar financial hole. It is also possible that some or all
You're Agrippa! The guy with very short timelines, who is Berkeley adjacent and knows that cool DxE person.
No, I do care about you! I respect you quite a bit. I was wrong and I retract what I said before in at least a few comments, and I apologize for my behavior. Also, I'll be happy to take any negative repercussions.
😳 That's nice of you, thanks.
I'm actually not a guy though I don't take any offense to the assumption, given my username.
Maybe Nuno would escrow for us.
I'm probably down for $500, would need to talk to my partner about going mu...
Also, thanks for taking a position on both. We are on the same side of 50/50 for the "gambled deposits" question, though. I wish we could come up with something we disagree on that might also resolve sooner, I'll think on it...
Maybe we disagree on just how big FTX's financial hole is? Could we bet on "as of today, FTX liabilities - FTX assets >= 4bn"? I'd go positive on that one.
Dunno... Really can't tell what you believe. You commented that folks are being too negative yet seem to also think that FTX "gambled" user deposits, which sounds pretty negative to me (though we can disagree about whether it was good to have done this). Oh wellz.
For 50/50, I'll take negative, will not resolve affirmatively on:
- "SBF found guilty of literally anything / pays a fine of over 1M for literally any reason, by 2024".
Cool, what size bet? And, after we figure that out, any thoughts on an escrow?
We seem to have very different ideas of what "operationalization" means...
How about "By April, will evidence come out that FTX gambled deposits rather than keeping it in reserves?" ? There's already a literal prediction market up on that one!
We could do "SBF found guilty of literally anything / pays a fine of over 1M for literally any reason, by 2024" ? If that's not operationalization I really have to give up here.
I do have a real name by the way!!
BTW I am assuming you are willing to bet in the thousands. If not, I really don't consider that a bad thing, but lmk please!
As an aside it is surprising to me that I seem at all to you like the type of person Sam might have been surrounded with. I don't think anyone remotely insider-y has ever even slightly felt that way about me.
I will take a bet like "found guilty for X/paid a fine of X", which are actual events that happen.
OK whatevs, which side of 50/50 do you want? And by what date? (and for that matter what X? Fraud???)
That said I really dunno why you don't like "FTX used user funds to make risky investments" or "FTX speculated using user funds" etc. Is there nobody we might mutually trust to neutrally resolve such a thing?
I'm sorry but I really don't understand why you think it's not adequate. "Fraud" is quite well-defined, and "loss of user funds" is also quite well-defined.
I would offer odds on like, criminal prosecution results, but that will take such a long time to resolve that I don't think it makes an attractive bet. As you point out there are also jurisdictional questions.
Is "SBF lied about the safety of user funds on FTX.com" better to you?
"FTX used user funds to make risky investments"?
"SBF misled users about the backing of their FTX.com accounts"?
The real world event would be "FTX committed fraud that caused >1bn loss of user funds". But if it's a bet somebody has to arbitrate the outcome, you know?
I just picked EA forum users as an arbitrator since like, that's the venue here. But if you have any other picks for arbitrator that would be fine. You can pick yourself but I'm not sure I'd agree to that bet. Likewise I assumed you wouldn't go for it if I picked me. And if it's the 2 of us well then we might tie.
> but do you have like an an account on a prediction market
Multiple
> Are you a...
I think that putting up probabilities is and should be expected. I think that actual financial betting shouldn't be expected but is certainly welcome.
If I was going to dispute the first thing I would do is ask for probabilities. It seems weird to try to argue with you about whether your predictions are wrong if I don't even know what they are. For all I know we have the same predictions and just a different view of other posters' predictions.
In one year we can make a thread that asks EA forum users to vote on whether they believe there are >90% odds that SBF fraudulently handled funds (that is, in a way that was directly contradictory to public representation of handling funds) in a way that cost FTX.com customers >1bn in losses.
If a majority of users (whose accounts existed since yesterday, to prevent shenanigans) vote yes, then the bet resolves YES. Otherwise NO.
Which side of 50/50 do you want?
If you don't want to make a single quantifiable prediction on this topic, after making claims about other people's predictions being "too negative", yes I consider that both evasive and inadequate.
If you really believe people are being "too negative" in their speculation, I thought you might be willing to put your money where your mouth is in some way. If you're not, then you're not, but it's got nothing to do with how well defined legality is, the moral meaning of illegality, etcetera.
Edit: I don't actually really think that a social expectation of ...
Can you just operationalize a few things yourself and attach numbers to them? That sounds easiest.
For example, your odds on whether SBF literally goes to prison within the next 4 years...
(Though I think there are better ways to operationalize)
If you can't come up with a way to operationalize a prediction on this topic in any straightforwardly falsifiable way then that's okay I guess, though kind of sad.
Would you be open to stating some probabilities on this topic -- for example, your probability that Sam gets convicted of fraud, is conclusively found out to have committed fraud, etcetera?
I ask because I'd potentially be interested in making some financial bets with you!
I really take issue with #2 here. Bank run exacerbation saved my friend's life savings. Expectations of collapse can save your life if, you know, there's a collapse.
It really seems insanely cruel to say we shouldn't inform people because it might be bad for FTX (namely in the event of insolvency). Where are our priorities? I'm very glad that my friends did not observe your #2 preference here.
Of course the best way to help FTX against a bank run would have been to deposit your own funds at the first sign of distress. As of writing I think it's still not too late!
There seem like two obvious models:
1) intractability model, where AGI = doom and the only safe move is not to make it
2) race / differential progress model, where safety needs to be ahead of capabilities by some amount, before capabilities reaches point X
As far as I can tell, alignment is advancing a lot slower per researcher than capabilities. So even if you contribute 1 year on capabilities and 10 on alignment, your effect under differential progress was just bad, and your effect under intractability was even worse.
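The differential-progress arithmetic can be made explicit with a toy model. The rates below are made up for illustration (the 15:1 ratio is an assumption, not a measured figure); the point is only that if capabilities advances much faster per researcher-year, a 1-year capabilities / 10-year alignment split can still be net negative:

```python
# Toy model of the race / differential progress framing.
# Rates are hypothetical, chosen only to illustrate the claim that
# capabilities advances much faster per researcher-year than alignment.
CAP_RATE = 15.0    # assumed capabilities units per researcher-year
ALIGN_RATE = 1.0   # assumed alignment units per researcher-year

def change_in_alignment_lead(years_capabilities: float, years_alignment: float) -> float:
    """Net change in how far safety is 'ahead' of capabilities,
    attributable to one person's career split."""
    return years_alignment * ALIGN_RATE - years_capabilities * CAP_RATE

# 1 year on capabilities followed by 10 on alignment:
print(change_in_alignment_lead(1, 10))  # 10*1 - 1*15 = -5.0, net negative under these rates
```

Under these assumed rates, any ratio worse than 10:1 makes the mixed career net negative on the differential-progress model.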
I'm curious how much the "having align...
We simply have a specific bar for admissions and everyone above that bar gets admitted
A) Does this represent a change from previous years? Previous comms have gestured at a desire to get a certain mixture of credentials, including beginners. This is also consistent with private comms and my personal experience.
B) It's pretty surprising that Austin, a current founder of a startup that received 1M in EA related funding from FTX regrants, would be below that bar!
Maybe you are saying that there is a bar above which you will get in, but below w...
A) Yes we had different admissions standards a few years ago. I agree that’s confusing and I think we could have done better communication around the admissions standards. I think our FAQ page and admissions page are the most up-to-date resources.
B) I can't comment in too much depth on other people's admissions, but I'll note that Austin was accepted into SF and DC 22 after updating his application.
It’s currently the case that there’s a particular bar for which we’ll admit people, though it’s not an exact science and we make each judgement call on its own ...
I had a pretty painful experience where I was in a pretty promising position in my career, already pretty involved in EA, and seeking direct work opportunities as a software developer and entrepreneur. I was rejected from EAG twice in a row while my partner, a newbie who just wanted to attend for fun (which I support!!!) was admitted both times. I definitely felt resentful and jealous in ways that I would say I coped with successfully but wow did it feel like the whole thing was lame and unnecessary.
I felt rejected from EA at large and yeah I do thin...
I’m really sorry to hear this. It is concerning to hear that being rejected from EAG made you feel like you were “turned away from even hanging out with people.” This is not our intention, and I’d be happy to chat with you about other resources and opportunities for in-person meetings with other EAs.
We also get things wrong sometimes so I’m sad to hear you feel like our decision impacted your trajectory away from a highly devoted version of your life. The EAG admissions process is not intended to evaluate you as a person, it is for determining whethe...
Damn, that really sucks. :| Thanks for sharing.
Adding my three related cents:
Relatedly to time, I wish we knew more about how much money is spent on community building. It might be very surprising! (hint hint)
Sorry I did not realize that OP doesn't solicit donations from non megadonors. I agree this recontextualizes how we should interpret transparency.
Given the lack of donor diversity, tho, I am confused why their cause areas would be so diverse.
Well this is still confusing to me
in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public
Seems obviously true and in fact a continued premise of your post is that there are key facts absent that could explain or fail to explain one decision or the other. Is this particularly true in criminal justice reform? Compared to IDK orgs like AMF (which are hyper transparent by design) maybe, compared to stuff around AI risk I think not.
...My guess is that a “highl
Yeah I mean, no kidding. But it's called Open Philanthropy. It's easy to imagine there exists a niche for a meta-charity with high transparency and visibility. It also seems clear that Open Philanthropy advertises as a fulfillment of this niche as much as possible and that donors do want this.
I don't understand this point. Can you spell it out?
From my perspective, Open Phil's main legible contribution is a) identifying great donation opportunities, b) recommending Cari Tuna and Dustin Moskovitz to donate to such opportunities, and c) building up an ...
We are currently at around 50 ideas and will hit 100 this summer.
This seems like a great opportunity to sponsor a contest on the forum.
Also, there is an application out there for running polls where users make pairwise comparisons over items in a pool and a ranking is imputed. It's not necessary for all pairs to be compared, the system scales with a high number of alternatives. I don't remember what it's called, it was a research project presented by a group when I was in college. I do think it could be a good way to extract a ranking from a crowd (a...
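I can't name the research project either, but the underlying idea is standard: fit a strength score to sparse pairwise outcomes and sort by it. A minimal sketch, assuming a Bradley–Terry-style iterative fit (the function name and data shape are illustrative, not the actual application described above):

```python
from collections import defaultdict

def bradley_terry_ranking(comparisons, iterations=100):
    """Impute a ranking from sparse pairwise comparisons.

    comparisons: list of (winner, loser) pairs. Not every pair needs
    to be compared, which is what lets this scale to many alternatives.
    Returns items sorted from strongest to weakest.
    """
    items = {x for pair in comparisons for x in pair}
    wins = defaultdict(int)      # total wins per item
    matches = defaultdict(int)   # head-to-head counts per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        matches[frozenset((winner, loser))] += 1

    # Iterative minorization-maximization updates for item strengths.
    strength = {x: 1.0 for x in items}
    for _ in range(iterations):
        new = {}
        for i in items:
            denom = sum(
                matches[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items
                if j != i and frozenset((i, j)) in matches
            )
            new[i] = wins[i] / denom if denom > 0 else strength[i]
        total = sum(new.values())
        strength = {x: s / total for x, s in new.items()}

    return sorted(items, key=lambda x: strength[x], reverse=True)
```

For example, `bradley_terry_ranking([("A", "B"), ("A", "B"), ("B", "C"), ("A", "C")])` ranks A above B above C, even though the A-vs-B and B-vs-C evidence comes from different, partially overlapping matchups.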
It cracks me up that this is the first comment you've ever gotten posting here, it really is not the norm.
The comment is using what I call “EA rhetoric” which has sort of evolved on the forum over the years, where posts and comments are padded out with words and other devices. To the degree this is intended to be evasive, this is further bad as it harms trust. These devices are perfectly visible to outsiders.
I agree that this has evolved on the forum over the years and it is driving me insane. Seems like a total race to the bottom to appear as the most thorough thinker. You're also right to point out that it is completely visible to outsiders.
It's interesting that you say that given what is in my eyes a low amount of content in this comment. What is a model or model-extracted part that you liked in this comment?
Decent discussion on Twitter, especially from @MichaelDello
https://twitter.com/brianluidog/status/1534738045483683840
To me the biggest challenge in assessing impact is the empirical question of how much any supply increase in meat or meat-like stuff leads to replacement of other meat. But this would apply as well to accepted cause areas of meat replacers and cell culture.
Substitution is unclear. In my experience it's very clear that scallop is served as a main course protein in contexts where the alternative is clearly fish, or most often shrimp. So insofar as substitution occurs, we'd mainly see substitution of shrimp and fish.
However, it is not clear how much substitution of meat in fact occurs at all as supply increases. People generally seem to like eating meat and meat-like stuff. I don't know data here but meat consumption is globally on the rise.
https://www.animal-ethics.org/snails-and-bivalves-a-discussion-of-possible-edge-cases-for-sentience/#:~:text=Many%20argue%20that%20because%20bivalves,bivalves%20do%20in%20fact%20swim
I found this discussion interesting. To me it seems like they feel aversion -- not sure how that is any different from suffering -- so it is just a question of "how much?".
Given your position I am concerned about the arms race accelerationism messaging in this post. Substantively, the major claims of this post are "China AI progress poses a serious threat we must overcome via AI progress (that is, we are in an arms race)" and "society may regulate AI such that projects that don't meet a very high standard of safety will not be deployable". The argument is that pursuing safety follows from these premises, mostly the latter.
This can be interpreted in a number of ways, charitably or uncharitably. Independent of that, I do...