Ben Kuhn

CTO @ Wave
3000 karma · Joined Aug 2014 · Working (6-15 years) · Somerville, MA, USA


I'm the CTO of Wave. We build financial infrastructure for unbanked people in sub-Saharan Africa.

Personal site (incl various non-EA-related essays): https://www.benkuhn.net/

Email: ben dot s dot kuhn at the most common email address suffix


Don't forget Zenefits!

In 2016 an internal legal investigation at Zenefits found the company's licensing was out of compliance and that Conrad had created a browser extension to skirt training requirements for selling insurance in California.[15] After self-reporting these issues, Zenefits hired an independent third party to do an internal audit of its licensing controls and sent the report to all 50 states.[16] The California Department of Insurance as well as the Massachusetts Division of Insurance began investigations of their own based on Zenefits' report.[17][18] Parker Conrad resigned as CEO and director in February and COO David O. Sacks was named as his replacement.

Zenefits was valued at $4.5b in 2015, and it was all downhill after the incident: they did three rounds of layoffs in four years and were eventually acquired by a no-name company for an undisclosed price in 2022. It's unclear how much of that decline was directly a result of the fraud, vs. the founder's departure, vs. them always having had poor fundamentals and being overvalued at $4.5b due to hype.

Why do you think people think it's unimportant (rather than, e.g., important but very difficult to achieve due to the age skew issue mentioned in the post)?

I agree that it's downstream of this, but strongly agree with ideopunk that mission alignment is a reasonable requirement to have.* A (perhaps the) major cause of organizations becoming dysfunctional as they grow is that people within the organization act in ways that are good for them, but bad for the organization overall—for example, fudging numbers to make themselves look more successful, asking for more headcount when they don't really need it, doing things that are short-term good but long-term bad (with the assumption that they'll have moved on before the bad stuff kicks in), etc. (cf. the book Moral Mazes.) Hiring mission-aligned people is one of the best ways to provide a check on that type of behavior.

*I think some orgs maybe should be more open to hiring people who are aligned with the org's particular mission but not part of the EA community—eg that's Wave's main hiring demographic—but for orgs with more "hardcore EA" missions, it's not clear how much that expands their applicant pool.

Whoops! Fixed, it was just supposed to point to the same advice-offer post as the first paragraph, to add context :)

In addition to having a lot more on the line, other reasons to expect better of ourselves:

  • EA had (at least potential) access to a lot of information that investors may not have, in particular about Alameda's early exodus in 2018.
  • EA had much more time to investigate and vet SBF—there's typically a very large premium for investors to move fast during fundraising, to minimize distraction for the CEO/team.

Because of the second point, many professional investors do surprisingly little vetting. For example, SoftBank is pretty widely reputed to be "dumb money;" IIRC they shook hands on huge investments in Uber and WeWork on the basis of a single meeting, and their flagship Vision Fund lost 8% (~$8b) this past quarter alone. I don't know about OTPP but I imagine they could be similarly diligence-light given their relatively short history as a venture investor. Sequoia is less famously dumb than those two, but still may not have done much vetting if FTX was perceived to be a "hot" deal with lots of time pressure.

Answer by Ben Kuhn · Nov 15, 2022

Is it likely that FTX/Alameda currently have >50% voting power over Anthropic?

Extremely unlikely. While Anthropic didn't disclose the valuation, it would be highly unusual for a company to take >50% dilution in a single funding round.
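To see why, here's the basic dilution arithmetic (all numbers below are made up for illustration—Anthropic didn't disclose its valuation or round size):

```python
def post_round_ownership(pre_money: float, investment: float) -> float:
    """Fraction of the company that the new investors own after a round."""
    post_money = pre_money + investment
    return investment / post_money

# For new investors to end up with >50%, the round would have to raise
# more cash than the company's entire pre-money valuation:
assert post_round_ownership(pre_money=100.0, investment=100.0) == 0.5

# With hypothetical (not disclosed) numbers—a $580m round at a $4b
# pre-money valuation—existing holders are diluted by only ~12.7%:
round_stake = post_round_ownership(pre_money=4000.0, investment=580.0)
```

So a single round handing investors majority control would require terms far outside the norm for a venture financing.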

Definitely! In this case I appear to have your email so reached out that way, but for anyone else who's reading this comment thread, Forum messages or the email address in the post both work as ways to get in touch!

In the "a case for hope" section, it looks like your example analysis assumes that the "AGI timeline" and "AI safety timeline" are independent random variables, since your equation describes sampling from them independently. Isn't that really unlikely to be true?
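To illustrate why the independence assumption matters (with entirely made-up numbers, not anything from the post): if both log-timelines share a common driver like "overall pace of AI research," the estimated probability that safety gets solved before AGI can change substantially.

```python
import math
import random

def p_safety_first(mu_safety: float, mu_agi: float, rho: float,
                   n: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(safety timeline < AGI timeline).

    Log-timelines are unit-variance normals around their means, made
    correlated with coefficient `rho` via a shared latent factor.
    All parameters are illustrative, not forecasts.
    """
    rng = random.Random(seed)
    k = math.sqrt(max(0.0, 1 - rho * rho))  # weight on the independent part
    wins = 0
    for _ in range(n):
        shared = rng.gauss(0, 1)  # common driver, e.g. research pace
        log_safety = mu_safety + rho * shared + k * rng.gauss(0, 1)
        log_agi = mu_agi + rho * shared + k * rng.gauss(0, 1)
        wins += log_safety < log_agi
    return wins / n

# Safety expected somewhat later than AGI (purely illustrative means):
independent = p_safety_first(mu_safety=0.5, mu_agi=0.0, rho=0.0)
correlated = p_safety_first(mu_safety=0.5, mu_agi=0.0, rho=0.9)
```

With these toy parameters the correlated estimate comes out noticeably lower than the independent one, because correlation shrinks the variance of the gap between the two timelines—so sampling the two distributions independently isn't a harmless simplification.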

Can someone clarify whether I'm interpreting this paragraph correctly?

Effective Ventures (EV) is a federation of organisations and projects working to have a large positive impact in the world. EV was previously known as the Centre for Effective Altruism but the board decided to change the name to avoid confusion with the organisation within EV that goes by the same name.

I think what this means is that the CEA board is drawing a distinction between the CEA legal entity / umbrella organization (which is becoming EV) and the public-facing CEA brand (which is staying CEA). AFAIK this change wasn't announced anywhere separately, only in passing at the beginning of this post which sounds like it's mostly intended to be about something else?

(As a minor point of feedback on why I was confused: the first sentence of the paragraph makes it sound like EV is a new organization; then the first half of the second sentence makes it sound like EV is a full rebrand of CEA; and only at the end of the paragraph does it make clear that there is intended to be a sharp distinction between CEA-the-legal-entity and CEA-the-project, which I wasn't previously aware of.)

Sorry that was confusing! I was attempting to distinguish:

  1. Direct epistemic problems: money causes well-intentioned people to have motivated cognition etc. (the downside flagged by the "optics and epistemics" post)
  2. Indirect epistemic problems as a result of the system's info processing being blocked by not-well-intentioned people

I will try to think of a better title!
