tugbazsen

I'm not sure the mistake vs. fraud angle is very meaningful beyond the question of whether SBF was wrong for taking stupid risks or wrong for committing fraud. That is an important distinction, yes, but in terms of how EA should be reassessing its own frameworks and norms, I think many of the important issues are the same in both cases.

> Here’s my model of what happened: SBF was busy. Things were confusing. He made a lot of decisions. One of them blew up.

I feel like this scenario should also prompt a significant reassessment of the people involved and of community norms, oversight, etc., because there is a point at which mistakes of this scale, made in this case more or less unilaterally, should be made as difficult as possible, not supported by a "take risks and see what sticks" mentality. There's a very important difference between crashing a website or printing flyers with a typo and losing investor money. This is a mistake that caused serious harm to people to whom FTX had a legal and moral duty, a duty that went unfulfilled even if it was only a mistake. Even if everything had been done properly and the mistake were the result of an act-of-god fluke, there would still be good reasons to question SBF's capability and judgement as a steward of funds, particularly in the absence of any sign of meaningful efforts at mitigation, of which, to the best of my knowledge, there were none. Yes, mistakes can be hard to foresee, and yes, they are a part of doing business, but I don't feel that mistakes of such a huge scale should simply be treated as learning opportunities: a lack of serious consequences disincentivizes taking the care and effort needed to avoid future mistakes.

> Whether this blowup was caused by intentional deception or an honest mistake, the EA community has been extremely quick to change its tune. In less than a week, everyone has gone from “SBF, golden boy” to “What a criminal, don’t be like that guy”. EA leaders have posted denunciations of fraud, and distanced themselves from Sam; the most upvoted all-time EA Forum post is a community condemnation; the entire Future Fund team up and resigned.

If anything, I think this should be a sign that EA and related organizations should probably keep their donors, particularly their megadonors, at arm's length instead of treating them as golden-boy poster children for the movement, allowing them significant influence, and integrating them so deeply into the movement. To me this is a matter of good governance more than PR, but it does cut both ways. Allowing megadonors to buy influence and become so deeply enmeshed that the fortunes of the donors and the movement are intertwined (both literally and metaphorically) doesn't just allow for reputation washing; it allows the donor to buy influence and intellectual/ideological control. It also, as evidenced by this post, creates a dynamic where there can be an expectation of "loyalty" to donors, despite what appears to be blatant fraud, because of their previous donations and involvement. No one should be able to buy influence over, or the loyalty of, a movement that is supposed to be empirically guided.

> Risk-taking and ambition are two sides of the same coin. If you swarm to denouncing risks that failed, you do not understand what it takes to succeed.

Again, even in the absence of fraud, I would argue that the norm should not be to encourage taking risks with significant potential downsides and counterfactual costs, particularly when you're taking risks with other people's money without a clear go-ahead to do so. The world where SBF didn't make a mistake/commit fraud and lose 15 billion dollars, but instead played it safe to make and donate 5 billion, is the better option, for EAs, for the creditors of FTX, and for SBF himself. Arguably, the world where SBF had simply continued to ETG at Jane Street may even be better than the current one, depending on the fallout of the current situation. Cultivating a culture where risk-taking and ambition are encouraged regardless of scale is, imo, bad.

As someone who spends a fair amount of time on job boards out of curiosity: while I can't speak to technical CS roles, this is definitely true for a lot of entry-level research and operations roles, and while I agree with your analysis of the implications, I think this may actually be a good thing. Beyond paying workers more being a good thing on principle, there are two main reasons I think so:

  1. When people hear that EA orgs pay more than market value, some of them may end up going into EA to get these roles. I personally know at least two people who enrolled in the online fellowship program because of the high wages in the EA job market. While there will probably be some people who end up LARPing EAness, some will become actual EAs. I personally think the amount of resources expended on community building is a bigger issue in terms of effectiveness and optics, and that this may be a good organic way to get some people interested in EA.
  2. More importantly, I think the focus on EA knowledge, experience, and alignment in a lot of these roles is misguided. These things are very important for some roles (setting policy for orgs, community building, etc.), somewhat important but not necessary for others (research roles come to mind; good researchers should be able to internalize and apply a framework in their work and work-related decision-making even if they were not previously aware of it or bought into it), and not important at all for some (like office management or strictly boots-on-the-ground operations work). For the latter two categories, the idea that EA orgs should employ EAs is most likely a significant contributor to talent bottlenecks. Using these high salaries to hire the best people regardless of their EA status would both get the best people and give some of the best non-EA people deep exposure to EA ideas and principles. Some screening for a degree of cultural fit, for things like openness, interest in the work, etc., would probably be enough to ensure extreme misalignment does not happen.

I realize that the goal of eventually promoting from the inside into leadership roles does present an important caveat to the second point. I think that can be solved fairly easily by either a) not doing that if necessary, which is not great but not terrible, or b) transparently emphasizing during hiring and promotion that cultural fit with the principles guiding the org's mission matters for these decisions, and allowing employees to engage with and immerse themselves in those principles as part of their work.

I'm not very well-versed in the CS aspects of ML or AI, but I really enjoyed reading about Redwood's work, and reading this post reminded me of something I found striking then. I am not a trained EFL teacher, but I have a decent grasp of some of the theory and some experience with classroom observation at different levels and with teaching/tutoring EFL, and the examples in your "Redwood Research’s current project" write-up are in many ways very similar to mistakes intermediate or near-proficient EFL speakers would make (not being able to track what pronouns refer to across longer bodies of text, not grasping the implications of certain verbs when applied to certain objects, etc.). This makes me think that getting the perspectives of both language acquisition experts and EFL researchers on your data may also be interesting and useful for this kind of research.