I'm experimenting with "norms-pledges" to help reduce forum anxiety. Maybe it could be a good social technology IDK. Click [Show More] to read them all:
🕊 Fresh Slate After Disagreement Pledge: I hereby pledge that if we disagree on the forum, I will not hold it against you. (1) I will try not to allow a disagreement to meaningfully impact how I treat you in further discourse, should we meet in another EA Forum thread, on another website or virtual space, or IRL. I know that if we disagree, it doesn't necessarily mean we will disagree on other topics, nor does it necessarily imply we are on opposing teams. We are most likely on the same team, in that we both wish to see the most good done possible and are working in service of finding out what that means. (2) Relatedly, I pledge not to claim to know what you believe in the future; I can only confidently claim to know what you wrote or believed at a given time, and I can say what I think you believe given that. I know that people change their minds, and it may be you or me who does so, so I understand that the disagreement may not even still stand and is not necessarily set in stone.
👨👩👧👦 No Gatekeeping Pledge: I hereby pledge that if I am seeking a collaborator, providing an opportunity, or doing hiring or anything akin to hiring, and you would otherwise be a top candidate if not for the following, I will try not to gatekeep: (1) If an opinion you've shared or a norm you've broken (on the EA Forum or elsewhere) is relevant in a potentially negative way to our collaboration, I will ask you about it to gain clarity. I will not assume that such an incident means you will not be suitable for a role. I will especially try hard not to make assumptions about your suitability based on old or isolated incidents, or if your digital footprint is too small to give a good picture of who you are and how you think about things. (2) I will not penalize someone for being a social or professional newcomer or being otherwise unknown to me or my colleagues. If the person is otherwise a top candidate, I will do my due diligence to determine cultural fit separately from that.
🤔 Rationalist Discourse Pledge: (1) I hereby pledge to try to uphold rationalist discourse norms as presented here and here, and comedically summed up here.
🦸♀️ Preferring My Primary Account Pledge: (1) I hereby pledge that this is my main EA Forum account. I will never use multiple accounts to manipulate the system, such as by casting multiple votes or stating similar opinions with different accounts. (2) I also pledge that, although I can't be sure what's to come, I strongly intend not to use an anonymous or different account (alt or sockpuppet), or any account other than this, my primary account. I pledge that I am willing to take on some reputational risk on this, my primary account, in service of putting truth, transparency, integrity, and a complete narrative over my own anxiety, and to give ideas I think are worth advocating the best chance at adoption. Therefore I pledge that I will not use an alternate account out of general anxiety around personal or professional retribution or losing clout. CAVEAT 1: I reserve the right to use an alt account in cases where *specific* retribution or other danger can be expected in my particular instance. For example: I reserve the right to use an alt account out of concern about riling up a suspected stalker, specific known bad-faith actor, specific known slanderer, etc. CAVEAT 2: I also reserve the right to use an alt account for the benefit of others. For example: in cases where revealing my own identity would reveal the identity or betray the privacy of some other party I am discussing.
🙇♂️ Humility in Pledging Pledge: I hereby pledge that I take these pledges for my own self-improvement and for altruistic reasons. It's okay to disagree that pledges are useful and important for you. (1) I don't expect others to necessarily take a norms pledge. I believe the pledges only work if people take them after deep consideration, and I don't expect I can know all the considerations for others' situations. Therefore I understand there may be situations in which it is actually right for a user to avoid taking a pledge, and I will not judge others for not having taken one, including that I will not dismiss others' character if I see accounts without a pledge. (2) Additionally, I don't presume that others not taking a pledge means they would even necessarily act differently than that pledge would imply. I don't assume their intentions are even different from mine. Perhaps a person is new to the idea or just trying to protect their energy by not opening themselves to criticism. (3) I won't automatically dismiss a user's reasoning if I see the user violating norms pledges I've made. I will still give their claims a chance to stand on their own merits. (4) If you see me violating a pledge I've taken, I will always appreciate you bringing it up to me.
As someone who read the whole piece, I think you could just read the bolded lines and then read the explanatory bits below for those lines you find interesting/key. It's also already an outline, so you could just read the bullets further to the left, and read the further-right bits as your curiosity and ethical compass direct you. The leftward bits can always be assumed to function as the summary layer of an outline (and it's the author's fault if they don't).
[EDIT: This is what Angelina Li did above, nice :) Hopefully if anyone finds any bit intriguing, they go read more in the source :)
The rest is me reflecting on EAs and appropriateness of summaries vs different types of info-gleaning]
I'm not confident that summarizing pieces like this for an EA audience [like, typical summary paragraph-style] really works tbh. Different EAs will need very different things from it. Eg, community builders will be way more interested in the CB section and want to read it in detail even if they disagree, so as to understand the modes of thinking that others might adopt and what they might want to refute or adapt to.
This is also, after all, just someone's personal reflections and won't necessarily be the way EAs move forward on any of these things. And for reflections, summaries often cut reasoning and therefore lead to information cascades that need to be addressed later, I think. We already have way too much deference and too many information cascades in EA anyway, so I'd rather see people lean more toward engaging semi-deeply with material relevant to them, or not repeat ideas at all tbh. This leads me to say that each reader should be proactive [by reading the bolded/leftward parts of the outline themselves], try to sort out the bits they care about or want to improve their thinking on, and read anything further on those carefully.
It's totally okay to say "this isn't really my bag, and I trust others to get it right eventually, so I'm not gonna engage with this". And if you don't trust others to get it right eventually (and the FTX debacle certainly carries a low-trust theme), I still think EAs should engage semi-deeply (enough to evaluate trust in others or actually do the better job yourself) or hardly at all (even if this means pulling back from EA til you have the spoons to check in deeply on your concerns), because engaging lightly will probably only waste your time, confuse discussion, and waste the time of others if they retroactively have to correct misunderstandings that spread thanks to poor-quality/surface-level engagement. [I've gone on a long time, which makes it sound like a big ask, but honestly I am just talking about semi-deep engagement (eg, reading the leftward parts of the full outline as the author intended when in flow with the work, and any further details as needed) vs light engagement (reading a summary, which I don't think works for long pieces like this), not mandating very-deep engagement (reading the piece in full detail). So I think most people can do it.]
That said, I appreciate your sentiment, and I think a table of contents and better section titles would be extremely helpful for easier semi-deep engagement. Also, using a numbered outline instead of bullet points. I think these are also easier asks, less likely to get future posts hung up in procrastination-land.
This is an add-on to this comment I wrote and sort of to all the SBF-EA-related stuff I've written recently. I write this add-on mostly for personal reasons.
I've argued that we should have patience when assigning blame to EA leadership, and not assume leaders deserve blame or were ever necessarily incompetent in a way that counterfactual leaders would not have been. But this point is distinct from thinking there was nothing we could do or no signs to pay attention to. I don't want to be seen as arguing there was nothing that EAs in general could do, so here are my actual thoughts on what was, on its own, enough to warrant distancing ourselves from SBF, which it looks like basically all of us, and non-EAs, missed.
FWIW I do think mistakes were made around SBF. I'm just not willing to pin it on EA leaders specifically (yet), or even EAs/EA itself specifically (to the exclusion of others). Anyone who watched SBF's interviews, including journalists and finance people, could have seen red flags. IMO, the major red flags in retrospect were things anyone who was paying attention (I was only a bit, but even I messed up here) could see:
(1) SBF talking about ponzi schemes, and some of his testimony regarding crypto regulation (I think?) which apparently made the ponzi scheme possibility look more real.
Personal take and regrets: I saw neither of these myself but my newly-EA gf thought they were morally troubling before the crash and told me. We had a couple short conversations about it which basically led to "Oof, IDK what to say" from me. I thought of looking it all up, or messaging prominent EAs on her word alone. But I did not, mostly due to confusion about what it meant... "Isn't this just the nature of crypto as an asset as something all people buying crypto should know? Or is this unethical? Am I getting into the moral dilemma that EAs just shouldn't do finance to E2G? Is that a bullet I want to bite, because I might have to argue that? And what's my 'ask', what am I hoping to happen as result of my messaging someone?"
I didn't think of it as a red flag for the upcoming crash and bankruptcy, and I didn't expect something to come out that could be formally charged as fraud. I guess someone who knows about ponzi schemes can say whether I was dumb not to think of any of this. But it was a red flag that he didn't care about FTX users, and that he might not be "a good guy" (even by consequentialist standards; the balance gets way more complicated, and you can't be anywhere close to sure enough to take such risks with citizens' money). And regardless of SBF, it was a tip that the public consciousness was about to slant against crypto (even more than the growing disdain for "crypto bros" betrays), and that's a risk of association.
I still kick myself for not messaging someone. It wouldn't have been that much notice, a couple months maybe? But maybe could have helped EA distance itself proactively. Sigh.
(2) SBF's violation of the Kelly criterion / biting bullets on the St. Petersburg paradox.
Personal take and expressing shock/light scolding: I never knew how "all-in" he was, but I'd have found that super alarming, and on this I think I'd have tried (more seriously than with #1) to talk to someone. Basically all I know about betting is that you "never bet it all; always leave enough to bet another day", but I know it as the golden rule. It still troubles me that EAs and others seemingly thought SBF's responses were philosophically neutral or something, when actually it was a glaring red flag that the company would fail, even without fraud. And also a red flag that he was kind of self-deluding, or trying too hard to be clever via breaking rules. Like: if you want to make more money to do more good, just do the thing that is already known to make the most money in the long run (Kelly), don't pull numbers out of your ass to reinvent the wheel, except inevitably worse than before. This also tied into SBF acting way too morally sure of himself--personally I'd never bet earth's entire future without others' consent on the strength of one moral theory coupled with the multiple-universe theory, in a situation that is called a paradox for good reason (it's not supposed to be an easy decision, which generally means you should defer to group consensus!).
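For intuition on why "never bet it all" wins even when the odds favor you, here's a rough, hypothetical simulation (my own illustration, not anything SBF or the article discussed): it compares betting the Kelly fraction vs betting everything, on a repeated even-money coin flip with a 60% win rate. The parameters (60% win probability, 200 rounds) are arbitrary choices for the sketch.

```python
import random

def simulate(fraction, p_win=0.6, rounds=200, bankroll=1.0, seed=0):
    """Repeatedly bet `fraction` of the current bankroll on an
    even-money flip that wins with probability `p_win`."""
    rng = random.Random(seed)
    for _ in range(rounds):
        stake = bankroll * fraction
        if rng.random() < p_win:
            bankroll += stake
        else:
            bankroll -= stake
        if bankroll <= 0:  # the all-in bettor is wiped out by any single loss
            return 0.0
    return bankroll

# For an even-money bet, the Kelly fraction is 2*p - 1 (here 0.2).
kelly = simulate(0.2)
all_in = simulate(1.0)
print(kelly, all_in)
```

The all-in strategy maximizes expected value on paper, but it ends at zero unless every single flip wins, which over many rounds is essentially certain to fail; the Kelly bettor can never be fully wiped out and compounds over time. That's the sense in which going "all-in" predicted failure even without fraud.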
This all said: I think EAs' philosophical naivety here, or brushing it off, is disappointingly normal? As evidenced by no one else in the world writing a hit piece about SBF (that I'm aware of). Bystander effect too maybe; since that stuff is way more public than the ex-Alameda employee complaints (but CEA investigated those, at least kinda, idk yet), it'd be normal to think "Well, lots of people are seeing this, and if no one else, including FTX investors, sees it as a problem, I guess it must be okay." Idk. I'd like EAs and non-EAs to do better at pinpointing problematic actors in this regard (and we can only control EA, so we should focus on this failure mode a bit), but my complaints are all qualitatively different from what the Time article is talking about.
I expect I'm not the only one who feels as I do re: 1 and 2, including vague and specific guilt, even though I was by all accounts a total outsider. I'm guessing most people just don't talk about it, and if I'm not the only one, that's one reason it feels very weird to me to pin it on EA leaders (as of right now).
That basically everyone missed or ignored these flags does not, I think, bode well for the idea that replacing EA leaders means it would have been caught, or that replacements will do better. As a silver lining, I expect the odds of catching bad signs like this to go up in future for all potential leaders, because we will have learned this hard lesson and the lesson will be made overt to anyone newly elected. But I still think we want at least one designated person who would have caught it with or without the ex-Alameda reports, regardless of what could have been gleaned from those reports, because I think some sort of fiasco could have been caught either way. Surprisingly, I consider those relatively minor flags compared to 1 and 2. The difference is that for those, it's EA leaders who take the blame, whereas for 1 and 2 it's basically everyone who was paying a bit of attention.
Most humans won't catch troubling dark-triad actors. That's probably okay, because we don't want most people to have low-trust types of personalities. As things stand, I'd be more in favor of adding a new person to the leadership mix, or hiring a social-risk specialist or something for the CH team: someone whose overt job it is to catch signs of unethical and troubling behavior by EA and EA-adjacent people, and who is structurally greenlighted to navigate possibly-manipulative people as though they are probably acting in bad faith, so as to not be as easily fooled as most leadership, I think, would be in cases like SBF's.
I could say a lot more, and be more precise, and double-check some stuff in #1 which I still never did, but this is just a shortform.
(Sorry I took so long to come back.) Thanks for clarifying. Hm, I'm surprised then that it really seems like the journalist didn't turn up such quotes about fraud. I do think you are right that many of them expected a crash-and-burn... of some sort. I feel like I should have written something more precise like "crash and burn 3 years later, after making $15B on paper", which comes with so many signals over the years that if I were such a person I'd end up discounting my early suspicions. If I were in their or CEA's shoes I'd probably have expected something like what happened with Tara's company, a crash and burn pretty soon after (2019?), so I'd be assuming something got fixed along the way if not. Especially given how an ex-employee (or employees?) talked about burning through the Asian-arbitrage dollars with bad trade decisions... they'd have had to fix it, right, or they'd have gone belly-up way sooner? I guess crypto was just that much of a gold rush, with so few "adults in the room", that they could keep fudging their numbers for that long..?
Maybe the investigation was worse than useless in the end, but reasonably any action taken was going to start with an investigation. It depends on the quality of the investigation, but for now I'm much more comfortable considering this a bug of the world than something to blame CEA for.
[Edit: This isn't to say that I think no mistakes were made. But my complaints are not focused on EA leaders specifically (I'm hesitant to call out any single person til CEA's commissioned investigation is complete), and are different from what the article discusses. I discuss that in shortform]
I've read this comment a few times, and my brain goes "???" whenever I get to your last clause: "I also had quite a strong reaction that nobody seemed to be acting on all of these warning flags"
I just don't get it in a way that connects to my reading of the article. What are "all these warning flags" and what counts as "inaction"? I don't want to say your take is wrong, because you are sort of sharing feelings, but like... according to the article, ex-Alameda employees don't seem to think that those flags were warnings of the massive fraud and crash-and-burn failure that was to come. And re: inaction, the article says CEA did an internal investigation in 2019 (it drops the info kinda randomly; as you say, the article isn't well-optimized for coming away with an understanding of the details). And idk what new warning flags came after 2019; I'm not seeing any in the article.
I mostly like your comment, but I'm also left wondering... Do you know things not in the article? Did I miss something? [Is this just a "vibe" we will disagree on regardless?] I can't quite reconcile your take.
[Edit: I had been thinking about asking this over DM for a couple days, but now that this post is no longer an active topic, I figured, "what the hay, ask it in thread". However you can answer over DM if you prefer, or ignore cuz the post is giving dying breaths, np.]
I vouch for Monica as a kind, intelligent person who has indeed focused much of her career around helping people with autism :) I haven't worked with her myself, but we are in the same local EA community.
If you have autism, (with or without formal diagnosis), there's nothing to lose by reaching out, and a lot you might gain :)
Big improvement on the left sidebar. Good job all 👍
That makes sense. It is really shocking. I agree on blaming regulators [although I don't give others a pass].
[I think a section on regulations def belongs from the POV of improving world models too. Before I added my long thinking-out-loud footnote, I didn't realize just how much it all points at regulators as the original permissiveness break.]
Yeah, my thoughts exactly. Or, it doesn't send a big signal to non-finance people. But like, I think it should send a signal to people in finance, eg the auditors. FTX should have been able to afford a different service and yet didn't. Or maybe, idk, they should have revealed their internals for a different certification (better than GAAP cert, idk, I know nothing). I just think it should have raised flags for the auditor if someone is enlisting you for purposes of increasing trust but is clearly not doing their damnedest, according to their abilities, to ensure that trust is accurate. US consumers can't be expected to know the difference, but the auditor should. I think.
Yeah, turns out there was not only sloppiness here though. Like Enron, things were labelled to look much better than they really were. For example, coins held in such volumes, yet accounted at their current exchange price, that it looked like billions in net worth, IIRC; but of course if they had tried to sell the coins, the value would have plummeted far before they got through their sale order, so there is no way they could have expected to gather that much USD for the coins they held. This might be normal for stock valuation? But my impression from the tone is that it was handled differently/worse. Maybe there were other things too; tbh I didn't look very closely. Plus there was SBF's backdoor (that was real, right?) and I'd bet there were other weird money movements that have been noticed by now.
Nice job, I was super happy to notice that :) :thumb: