All of Jonas V's Comments + Replies

I recall feeling most worried about hacks resulting in the loss of customer funds, including funds not lent out for margin trading. I was also worried about risky investments or trades depleting the cash reserves that could have been used to make up for hacking losses.

I don't think I ever generated the thought "customer monies need to be segregated, and they might not be", primarily because at the time I wasn't familiar with financial regulations. 

E.g., in 2023 I ran across an article written in ~2018 that commented on an SIPC payout in a case of a broker commingling customer funds with an associated trading firm. If I had read that article in 2021, I would probably have suspected FTX of doing this.

Based on some of the follow-up questions, I decided to share this specific example of my thinking at the time (which didn't prevent me from losing some of my savings in the bankruptcy):

Jason · 4d
Do you recall what your conception of a possible customer loss resulting "from bankruptcy" was, and in particular whether it was (at least largely) limited to "monies lent out for margin trading"? Although I haven't done any research, if user accounts had been appropriately segregated and safeguarded, FTX's creditors (in a hypothetical "normal" bankruptcy scenario) shouldn't have been able to make claims against them. There might have been an exception for those involved in margin trading.

A 10-15% annual risk of startup failure is not alarming, but a comparable risk of it losing customer funds is. Your comment prompted me to actually check my prediction logs, and I made the following edit to my original comment:

  • predicting a 10% annual risk of FTX collapsing with ~~FTX investors and the Future Fund (though not customers)~~ FTX investors, the Future Fund, and possibly customers losing all of their money, 
    • [edit: I checked my prediction logs and I actually did predict a 10% annual risk of loss of customer funds in November 2021, though I lower
... (read more)

I don't think so, because:

  • A 10–15% annual risk was predicted by a bunch of people up until late 2021, but I'm not aware of anyone believing that in late 2022, and Will points out that Metaculus was predicting ~1.3% at the time. I personally updated downwards on the risk because 1) crypto markets crashed but FTX didn't, which seemed like a positive sign, 2) Sequoia invested, and 3) they got a GAAP audit.
  • I don't think there was a great implementation of the trade. Shorting FTT on Binance was probably a decent way to do it, but holding funds on Binance for that p
... (read more)
Ben_West · 5d
Thanks! That's helpful. In particular, I wasn't tracking the 2021 versus 2022 thing.

Is a 10-15% annual risk of failure for a two-year-old startup alarming? I thought base rates were higher, which makes me think I'm misunderstanding your comment.

You also mention that the 10% was without loss of customer funds, but the Metaculus 1.3% was about loss of customer funds, which seems very different.

10% chance of yearly failure without loss of customer funds seems more than reasonable, even after Sequoia invested, in such a high-variance environment, and not necessarily a red flag.

I agree it's probably a pretty bad idea, but I don't think this supports your conclusion that "the EA community may have had a hard time seeing through tech hype".

I disagree with that quote but I do think the fact that Will is reporting this story now with a straight face is a bad sign. 

My steelman would be "look if you think two people would have a positive-sum interaction and it's cheap to facilitate that, doing so is a good default". It's not obvious to me that Will spent more than 30 seconds on this. But the defense is "it was cheap and I didn't think about it very hard", not "Sam had ideas for improving twitter".

  • Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before. 
    • Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive?  Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.

 

Wanting to push back against this a little bit:

  • The big issue here is that SBF was reckless
... (read more)
Ben Millwood · 5d
I hear Will not as saying that going 35mph is in itself wrong in this analogy (necessarily), but that EA is now more-than-average vulnerable to attack and mistrust, so we need to signal our trustworthiness more clearly than others do.

From personal experience, I thought community health would be responsible, and approached them about some concerns I had, but they were under-resourced in several ways.

I normally think of community health as dealing with interpersonal stuff, and wouldn't have expected them to be equipped to evaluate whether a business was being run responsibly. It seems closer to some of the stuff they're doing now, but at the time the team was pretty constrained by available staff time (and finding it difficult to hire), so I wouldn't expect them to have been doing anything outside of their core competency.

Maybe a lesson is that we should be / should have been clearer about scopes, so there's more of an opportunity to notice when something doesn't belong to anyone?

RyanCarey · 7d
This is who I thought would be responsible too, along with the CEO of CEA, to whom they report (and those working for the FTX Future Fund, although their conflictedness means they can't give an unbiased evaluation). But since the FTX catastrophe, the community health team has apparently broadened their mandate to include "epistemic health" and "Special Projects", rather than narrowing it to focus just on catastrophic risks to the community, which would seem to make EA less resilient in one regard than it was before. Of course, I'm not necessarily saying that it was possible to put the pieces together ahead of time, just that if there was one group responsible for trying, they were it.

I'd be interested in specific scenarios or bad outcomes that we may have averted. E.g., much more media reporting on the EA-FTX association resulting in significantly greater brand damage? Prompting the legal system into investigating potential EA involvement in the FTX fraud, costing enormous further staff time despite not finding anything? Something else? I'm still not sure what example issues we were protecting against.

Jason · 7d

much more media reporting on the EA-FTX association resulting in significantly greater brand damage?

Most likely concern in my eyes. 

The media tends to report on lawsuits when they are filed, at which time they merely contain unsubstantiated allegations and the defendant is less likely to comment. It's unlikely that the media would report on the dismissal of a suit, especially if it was for reasons seen as somewhat technical rather than as a clear vindication of the EA individual/organization.

Moreover, it is pretty likely to me that EVF or other EA-aff... (read more)

Jonas V · 8d

I broadly agree with the picture and it matches my perception. 

That said, I'm also aware of specific people who held significant reservations about SBF and FTX through the end of 2021 (though perhaps not anymore in 2022), based on information that was distinct from the 2018 disputes. This involved things like:

  • predicting a 10% annual risk of FTX collapsing with ~~FTX investors and the Future Fund (though not customers)~~ FTX investors, the Future Fund, and possibly customers losing all of their money, 
    • [edit: I checked my prediction logs and I actua
... (read more)

Based on some of the follow-up questions, I decided to share this specific example of my thinking at the time (which didn't prevent me from losing some of my savings in the bankruptcy):

predicting a 10% annual risk of FTX collapsing with FTX investors and the Future Fund (though not customers) losing all of their money,

Do you know if this person made any money off of this prediction? I know that shorting cryptocurrency is challenging, and maybe the annual fee from taking the short side of a perpetual future would be larger than 10% (I'm not sure), but surely once the FTX balance sheet started circulating, that should have raised the odds of a collapse on a short enough time scale for this trade to be profitable?[1]


  1. I f

... (read more)
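To make the profitability question concrete, here's a rough expected-value sketch of the short trade discussed above. All numbers are illustrative assumptions; none come from the thread or anyone's prediction logs:

```python
# Rough EV of shorting FTT via a perpetual future (illustrative numbers only).
p_collapse = 0.10            # assumed annual probability of an FTX collapse
drop_if_collapse = 0.90      # assumed FTT drawdown conditional on a collapse
annual_funding_cost = 0.12   # assumed annualized cost of holding the short

expected_return = p_collapse * drop_if_collapse - annual_funding_cost
print(f"Expected annual return: {expected_return:+.1%}")  # -> -3.0%
```

Under these assumptions, the funding cost eats the edge at a 10% annual risk; the trade only becomes clearly positive once the perceived near-term collapse probability rises (e.g. after the balance sheet started circulating), which shortens the expected holding period and thus the funding paid.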
huw · 7d

A meta thing that frustrates me here is that I haven’t seen much talk about incentive structures. The obvious retort to negative anecdotal evidence is the anecdotal evidence Will cited about people who had previously expressed concerns but continued to affiliate with FTX and the FTXFF. To me, this evidence is completely meaningless, because continuing to affiliate with FTX and FTXFF meant closer proximity to money. As a corollary, the people who refused to affiliate with them did so at significant personal & professional cost for that two-year period.... (read more)

Jonas V · 8d

I disagree-voted because I have the impression that there's a camp of people who left Alameda that has been misleading in their public anti-SBF statements, and has a separate track record of being untrustworthy.

So, given that background, I think it's unlikely that Will threatened someone in a strong sense of the word, and possible that Bouscal or MacAulay might be misleading, though I haven't tried to get to the bottom of it.

I wish this post's summary was clearer on what, exactly, readers could/should do to help with vote pairing. I think this could be valuable during the 2024 election!

Vote pairing seems to be more cost-effective than making calls, going door to door, or other standard forms of changing election outcomes, provided you are in the very special circumstances which make it effective.

What are those circumstances?

Tens of thousands of people have participated in swaps

Do you have a source for this? How many of those were in swing states?
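To make the quoted cost-effectiveness claim concrete, here is a toy comparison under purely hypothetical numbers (both per-vote costs below are assumptions, not figures from the post):

```python
# Toy comparison of vote pairing vs. canvassing (all costs are made up).
cost_per_swap = 2.0           # assumed $ to match one pair of willing voters
cost_per_canvass_vote = 50.0  # assumed $ per marginal vote from door-knocking

# Each successful swap converts one swing-state third-party vote into a
# swing-state major-party vote.
swap_votes_per_dollar = 1 / cost_per_swap
canvass_votes_per_dollar = 1 / cost_per_canvass_vote
print(swap_votes_per_dollar / canvass_votes_per_dollar)  # -> 25.0x here
```

The "very special circumstances" would then be whatever makes the swap cheap and real: enough willing third-party voters in swing states, matched safe-state partners, and mutual trust that both sides follow through.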

I worry 'wholesomeness' overemphasizes doing what's comfortable and convenient and feels good, rather than what makes the world better:

  • As mentioned, wholesomeness could stifle visionaries, and this downside wasn't discussed further.
  • Fighting to abolish slavery wasn't a particularly wholesome act; in fact, it created a lot of unwholesome conflict. Protests aren't wholesome. I expect a lot of important future work to look and feel unwholesome. (I'm aware you could fit it into the framework somehow, but it's an awkward fit.)
  • I worry it'll make EA focus even more
... (read more)

I definitely think it's important to consider (and head off) ways that it could go wrong!

Your first two bullets are discussed a bit further in the third essay which I'll put up soon. In short, I completely agree that sometimes you need visionary thought or revolutionary action. At the same time I think revolutionary action -- taken by people convinced that they are right -- can be terrifying and harmful (e.g. the Cultural Revolution). I'd really prefer if people engaging in such actions felt some need to first feel into what is unwholesome about them, so t... (read more)

If someone else had written my comment, I would ask myself how good that person's manipulation detection skills are. If I judge them to be strong, I would deem the comment to be significant evidence, and think it more likely that Owen has a flaw that he healed, and less likely that he's a manipulator. If I judge them to be weak (or I simply don't have enough information about the person writing the comment), I would not update. 

If there are a lot of upvotes on my comment, that may indicate that readers are naïvely trusting me and making an error, or have good reason to trust my judgment, or have independently reached similar conclusions. I think it's most likely a combination of all three of these factors.

Yeah, I think there's a lot more to be said about this topic, and I'm glad that you said some of it - thanks!

Over the years, I’ve done a fair amount of community building, and had to deal with a pretty broad range of bad actors, toxic leadership, sexual misconduct, manipulation tactics and the like. Many of these cases were associated with a pattern of narcissism and dark triad spectrum traits, self-aggrandizing behavior, manipulative defense tactics, and unwillingness to learn from feedback. I think people with this pattern rarely learn and improve, and in most cases should be fired and banned from the community even if they are making useful contribut... (read more)

I think what Jonas has written is reasonable, and I appreciate all the work he did to put in proper caveats. I also don’t want to pick on Owen in particular here; I don’t know anything besides what has been publicly said, and some positive interactions I had with him years ago. That said: I think the fact that this comment is so highly upvoted indicates a systemic error, and I want to talk about that.

The evidence Jonas provides is equally consistent with “Owen has a flaw he has healed” and “Owen is a skilled manipulator who charms men, and harasses women”.... (read more)

How can the EA community better support neurodivergent community members who feel like they might make mistakes without realizing it?

I've talked to some people who are involved with OpenAI secondary markets, and they've broadly corroborated this.

One source told me that after a specific year (didn't say when), the cap can increase 20% per year, and the company can further adjust the cap as they fundraise.
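If that's right, compounding alone makes the cap recede quickly. A toy illustration (only the 20% growth rate comes from the comment above; the starting value and horizons are made up):

```python
# Toy illustration of a profit cap growing 20% per year.
cap = 100.0    # hypothetical starting cap, in arbitrary units
growth = 1.20  # the 20%/year figure reported above

for years in (5, 10, 15):
    print(f"After {years:>2} years: {cap * growth ** years:,.0f}")
# After  5 years: 249
# After 10 years: 619
# After 15 years: 1,541  -> the cap roughly sextuples every decade
```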

If you're running an event and Lighthaven isn't an option for some reason, you may be interested in Atlantis: https://forum.effectivealtruism.org/posts/ya5Aqf4kFXLjoJeFk/atlantis-berkeley-event-venue-available-for-rent 

(FYI, Atlas won't be ending up with a budget shortfall as a result of this.)

This seems the most plausible speculation so far, though probably also wrong: https://twitter.com/dzhng/status/1725637133883547705

[This comment is no longer endorsed by its author]
Lorenzo Buonanno · 5mo
If you think it's more plausible than misalignment with OpenAI's mission, you could make some mana on 
Jackson Wagner · 5mo
Nice!  I like this a lot more than the chaotic multi-choice markets trying to figure out exactly why he was fired.


The really important question that I suspect everyone is secretly wondering about: If you book the venue, will you be able to have the famous $2,000 coffee table as a centerpiece for your conversations? I imagine that after all the discourse about it, many readers may feel compelled to book Lighthaven to see the table in action!

I think there aren't really any attendees who are doing meta work for a single cause. Instead, it seems to be mostly people who are doing meta work for multiple areas.

(I also know of many people doing AI safety meta work who were not invited.)

Rebecca · 7mo
Yeah I interpreted the scope of the forum as 'meta-EA'/meta-meta rather than meta-[specific causes].

Yeah, I disagree with this on my inside view—I think "come up with your own guess of how bad and how likely future pandemics could be, with the input of others' arguments" is a really useful exercise, and seems more useful to me than having a good probability estimate of how likely it is. I know that a lot of people find the latter more helpful though, and I can see some plausible arguments for it, so all things considered, I still think there's some merit to that.

How to fix EA "community building"

Today, I mentioned to someone that I tend to disagree with others on some aspects of EA community building, and they asked me to elaborate further. Here's what I sent them, very quickly written and only lightly edited:

Hard to summarize quickly, but here's some loose gesturing in the direction:

  • We should stop thinking about "community building" and instead think about "talent development". While building a community/culture is important and useful, the wording overall sounds too much like we're inward-focused as opposed to t
... (read more)
James Herbert · 8mo
To be clear, are you saying your preference for the phrase 'talent development' over 'community building' is based on your concern that people hear 'community building' and think, 'Oh, these people are more interested in investing in their community as an end in itself than they are in improving the world'?
trevor1 · 8mo
All of this looks fantastic, and like it should have been implemented 10 years ago. This is not something to sleep on. The only nitpick I have is with how object level vs. social reality is described. Lots of people are nowhere near ready to make difficult calculations; e.g., the experience of the COVID reopening makes it hard to predict that the chance of a pandemic lockdown in the next 5 years is 40%, even if that is the correct number. There are lots of situations where the division of labor is such that deferring to people at FHI etc. is the right place to start, since these predictions are really important and not about people giving their own two cents or beginning to learn the ropes of forecasting, which is what happens all too often (of course, that shouldn't get in the way of new information and models travelling upwards, or fun community building/talent development workshops where people try out forecasting to see if it's a good fit for them).

I find it very interesting to think about the difference between what a talent development project would look like vs. a community-building project.

Feeling a bit tired to type a more detailed response, but I think I mostly agree with what you say here.

Hmm, I personally think "discover more skills than they knew, feel great, accomplish great things, learn a lot" applies a fair amount to my past experiences. I think aiming too low was one of the biggest issues in my past, and I think EA culture is also messing up by discouraging aiming high, or something.

I think the main thing to avoid is something like "blind ambition", where your plan involves multiple miracles and the details are all unclear. This seems also a fairly frequent phenomenon.

Joseph Lemien · 8mo
I think that you in particular might be quite non-representative of EAs in general, in terms of "success" in the EA context. If I imagine a distribution of "EA success," you are probably very far to the right.
Elizabeth · 8mo
Accepting your self-report as a given, I have a bunch of questions. I want to say that I'm not against ambition. From my perspective I'm encouraging more ambition, by focusing on things that might actually happen instead of daydreams.

Does the failure mode I'm describing (people spinning their wheels on fake ambition) make sense to you? Have you seen it?

I'm really surprised to hear you describe EA as discouraging aiming high. Everything I see encourages aiming high, and I see a bunch of side effects of aiming too high littered around me. Can you give some examples of what you're worried about?

What do you think would have encouraged more of the right kind of ambition for you? Did it need to be "you can solve global warming?", or would "could you aim 10x higher?" be enough?

If there's no strategy to profitably bet on long-term real interest rates increasing, you can't infer timelines from real interest rates. I think the investment strategies outlined in this post don't work, and I don't know if there's a strategy that works.

I want to caution against the specific trading strategies suggested in this post:

... (read more)
Jonas V · 6mo
(Shorting TLT seems a reasonably affordable way to implement this strategy I guess, though you're only going short nominal interest rates.)
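One way to see the nominal-vs-real point is the standard Fisher decomposition (a textbook relation, not anything from the original post):

$$ i \approx r + \pi^e $$

A short TLT position profits whenever the nominal yield $i$ rises, whether that is driven by the real rate $r$ (the AI-timelines channel) or by expected inflation $\pi^e$. A cleaner bet on real rates alone would need to hedge out the inflation leg, e.g. by shorting nominal Treasuries against a long position in TIPS.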

I wouldn't be too surprised if someone on the GAP leadership team had indeed participated in an illegal straw donor scheme, given media reports and general impressions of how recklessly some of the SBF-adjacent politics stuff was carried out. But I do think the specific title allegation is worded too strongly and sweepingly given the lack of clear evidence, and will probably turn out to be wrong.

I think both the original post and this comment fail to capture the nuance of what's going on. I think the original post makes many valid points and isn't "90% just a confusion", and I suspect it may in fact end up looking correct about some of its key claims. But I also suspect it'll be wrong on a lot of the details/specifics, and in particular it seems problematic to me that it makes fairly severe accusations without clear caveats about what we know and what we don't know at this point.

(I downvoted both this comment and the OP.)

I_machinegun_Kelly · 1y
I probably should have said "90% of the title allegation is just a confusion." I don't have much to say about the other accusations scattered through the post which aren't relevant to the legal claim. Some of these might be correct, but if they aren't relevant to the core accusation and aren't clearly demarcated, they can be confusing in their own right, further muddling things.

At least 300 political contributions are suspected by the investigation to have been made with straw donors

This claim is false; the source says that they "were made in the name of a straw donor or paid for with corporate funds". Which is frustrating, as the former would be much more surprising and interesting than the latter.

I think this was the maximally best April Fool's post.

Uh oh, better reduce the humor by 20% or we're courting peril.

I read his comment differently, but I'll stop engaging now as I don't really have time for this many follow-ups, sorry!

What if the investor decided to invest knowing there was an X% chance of being defrauded, and thought it was a good deal because there's still an at least (100-X)% chance of it being a legitimate and profitable business? For what number X do you think it's acceptable for EAs to accept money?

Fraud base rates are 1-2%; some companies end up highly profitable for their investors despite having committed fraud. Should EA accept money from YC startups? Should EA accept money from YC startups if they e.g. lied to their investors?

I think large-scale defrauding un... (read more)

I don't know the acceptable risk level either. I think it is clearly below 49%, and includes at least fraud against bondholders and investors that could reasonably be expected to cause them to lose money from what they paid in.

It's not so much the status of the company as a fraud-commiter that is relevant, but the risk that you are taking and distributing money under circumstances that are too close to conversion (e.g., that the monies were procured by fraud and that the investors ultimately suffer a loss). I can think of two possible safe harbors under wh... (read more)

I think that's false; I think the FTX bankruptcy was hard to anticipate or prevent (despite warning flags), and accepting FTX money was the right judgment call ex ante.

Jason · 1y
I think Jack's point was that having some technical expertise reduces the odds of a Bad Situation happening at a general level, not that it would have prevented exposure to the FTX bankruptcy specifically. If one really does not want technical expertise on the board, a possible alternative is hiring someone with the right background to serve as in-house counsel, corporate secretary, or a similar role -- and then listening to that person. Of course, that costs money.

I expect a 3-person board with a deep understanding of and commitment to the mission to do a better job selecting new board members than a 9-person board with people less committed to the mission. I also expect the 9-person board members to be less engaged on average.

(I avoid the term "value-alignment" because different people interpret it very differently.)

That was an example; I'd want it to exclude any type of fraud except for the large-scale theft from retail customers that is the primary concern with FTX.

Jason · 1y
Although at that point -- at least in my view -- the bet is only about a subset of knowledge that could have rendered it ethically unacceptable to be involved with FTXFF. Handing out money which you believed more likely than not to have been obtained by defrauding investors or bondholders would also be unacceptable, albeit not as heinous as handing out money you believed more likely than not to have been stolen from depositors. (I also think the ethically acceptable risk is less than "more likely than not" but kept that in to stay consistent with Nathan's proposed bet which used "likely.")

I think 9-member boards are often a bad idea because they tend to have lots of people who are shallowly engaged, rather than a smaller number of people who are deeply engaged, tend to have more diffusion of responsibility, and tend to have much less productive meetings than smaller groups of people. While this can be mitigated somewhat with subcommittees and specialization, I think the optimal number of board members for most EA orgs is 3–6.

no lawyers/accountants/governance experts

I have a fair amount of accounting/legal/governance knowledge, and as part of my board commitments I think it's a lot less relevant than deeply understanding the mission and strategy of the relevant organization (along with other, more relevant generalist skills like management, HR, etc.). Edit: I do think that if you're tied up in the decade's biggest bankruptcy, legal knowledge is actually really useful, but this seems more like a one-off weird situation.

It seems intuitive that your chances of ending up in a one off weird situation are reduced if you have people who understand the risks properly in advance. I think a lot of what people with technical expertise do on Boards is reduce blind spots.

Jason · 1y
It's clear to me that the pre-FTX collapse EVF board, at least, needed more "lawyers/accountants/governance" expertise. If someone had been there to insist on good governance norms, I don't believe that statutory inquiry would likely have been opened - at a minimum it would have been narrower. Given the very low base rate of SIs, I conclude that the external evidence suggests the EVF UK board was very weak in legal/accounting/governance etc. capabilities.

I would be willing to take the other side of this bet, if the definition of "fraud" is restricted to "potentially stealing customer funds" and excludes things like lying to investors.

Jason · 1y
So: excludes securities fraud?

You seem to imply that it's fine if some board members are not value-aligned as long as the median board member is. I strongly disagree: This seems a brittle setup because the median board member could easily become non-value-aligned if some of the more aligned board members become busy and step down, or have to recuse due to a COI (which happens frequently), or similar. 

I'm very surprised that you think a 3-person Board is less brittle than a bigger Board with varying levels of value alignment. How do 3-person Boards deal with all the things you list that can affect Board make-up? They can't, because the Board instantly becomes non-quorate.

Jason · 1y
I don't agree with that characterization. On my 6/3 model, you'd need four recusals among the heavily aligned six and zero among the other three for the median member to be other; three recusals would put the median between heavily aligned and other. If four of six need to recuse on COI grounds, there are likely other problems with board composition at play.

Also, suggesting that alignment is not the "emphasis" for each and every board seat doesn't mean that you should put misaligned or truly random people in any seat. One should still expect a degree of alignment, especially in seat seven of the nine-seat model, just like one should expect a certain level of general board-member competence in the six seats with alignment emphasis.
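A quick sanity check of the recusal arithmetic above (a sketch; the helper and seat labels are hypothetical, with "A" for an alignment-emphasis seat and "O" for another seat):

```python
# Check the 6/3 board-median claim: how many aligned recusals until the
# median member is no longer alignment-emphasized?
def board_median(aligned=6, other=3, recused_aligned=0):
    members = sorted("A" * (aligned - recused_aligned) + "O" * other)
    n = len(members)
    return members[(n - 1) // 2 : n // 2 + 1]  # one or two middle seats

print(board_median(recused_aligned=4))  # ['O']      -> median is "other"
print(board_median(recused_aligned=3))  # ['A', 'O'] -> median falls between
```

This matches the claim: four of six aligned members must recuse before the median seat flips, and three before it sits between the two groups.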

TL;DR: You're incorrectly assuming I'm into Nick mainly because of value alignment, and while that's a relevant factor, the main factor is that he has an unusually deep understanding of EA/x-risk work that competent EA-adjacent professionals lack.

I might write a longer response. For now, I'll say the following:

  • I think a lot of EA work is pretty high-context, and most people don't understand it very well. E.g., when I ran EA Funds work tests for potential grantmakers (which I think is somewhat similar to being a board member), I observed that highly skilled
... (read more)

I agree with you: When I wrote "knew specifics about potential fraud", I meant it roughly in the sense you described. 

To my current knowledge, Nick did not have access to evidence that the funds were likely fraudulently obtained. (Though it's not clear that I would know if it were the case.)

Nathan Young · 1y
I think I'd bet at like 6% that evidence will come out in the next 10 years that Nick knew funds were likely fraudulently obtained. By normal definitions of those words, that seems very unlikely to me.

Overall, I think Nick did the right thing ex ante when he chose to run the Future Fund and accept SBF's money (unless he knew specifics about potential fraud). 

If he should be removed from the board, I think we either need an argument of the form "we have specific evidence to doubt that he's trustworthy" or "being a board member requires not just absence of evidence of untrustworthiness, but proactively distancing yourself from any untrustworthy actors, even if collaborating with them would be beneficial". I don't buy either of these.

Jason · 1y

"[K]new specifics about potential fraud" seems too high a standard. Surely there is some percentage X at which "I assess the likelihood that these funds have been fraudulently obtained as X%" makes it unacceptable to serve as distributor of said funds, even without any knowledge of specifics of the potential fraud. 

I think your second paragraph hinges on the assumption that Nick merely had sufficient reason to see SBF as a mere "untrustworthy actor[]" rather than something more serious. To me, there are several gradations between "untrustworthy actor[... (read more)

Jonas V · 1y

I would still like an argument that they shouldn't be removed from boards, when almost any other org would. I would like the argument made and seen to be made. 

 

Here's my tentative take:

  • It's really hard to find competent board members that meet the relevant criteria
  • Nick (together with Owen) did a pretty good job turning CEA from a highly dysfunctional organization into a functional one during CEA's leadership change in 2018/2019. 
  • Similarly, while Nick took SBF's money, he didn't give SBF a strong platform or otherwise promote him a lot, a
... (read more)

Thanks for making the case. I'm not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I'm highly confident is false.

Namely - it isn't hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I've recruited ~20 very good Board members in my career and have never run an open process that didn't find at least some qualified, diligent people, who did a good job.

EA ... (read more)

Overall, I think Nick did the right thing ex ante when he chose to run the Future Fund and accept SBF's money (unless he knew specifics about potential fraud). 

If he should be removed from the board, I think we either need an argument of the form "we have specific evidence to doubt that he's trustworthy" or "being a board member requires not just absence of evidence of untrustworthiness, but proactively distancing yourself from any untrustworthy actors, even if collaborating with them would be beneficial". I don't buy either of these.

I think it's very reasonable to remove Will, and much less clear whether to remove Nick. I would like to see some nuanced distinction between the two of them. My personal take is that Nick did an okay job and should probably stay on the relevant boards. Honestly I feel pretty frustrated by the lack of distinction between Will and Nick in this discussion.

[anonymous] · 1y
I don't think it's unreasonable for people to generally be lumping them together. They were both on the Future Fund. They were both informed about bad things SBF had done. Nick ran the team while Will was only an advisor; OTOH Will spoke more favorably about SBF in public. You might see a lot of nuance between the two here, but I think most of us just see basic facts like this and the main debate is around questions like "Should leadership have seen this coming?" "Should leaders be removed when they cause a lot of ex post harm?" "Shouldn't community leaders be elected anyway?"

Personally, I think it's useful if this decision is made by people who competently investigate the case and gather all the information, not by people acting primarily based on public information like this post. Even though I know Owen well, I personally find it hard to say how likely Owen is to make mistakes again; it seems plausible to me that he can learn from his mistakes and continue to be highly involved in the community without causing any further issues, and it also seems possible that he would continue to make similar mistakes. It seems to me... (read more)

Jason · 1y
Sure, you could add non-disqualified CH staff to the "pick one" I described upthread on who could clear his return. My point was that if Owen doesn't propose an acceptable return-to-influence plan, it is ultimately the responsibility of those who give him that power to satisfy themselves that returning it is warranted.