Ben_Kuhn

I'm the CTO of Wave, where we're bringing financial infrastructure to sub-Saharan Africa.

Personal site (incl various non-EA-related essays): https://www.benkuhn.net/

Email: ben dot s dot kuhn at the most common email address suffix

Comments

My mistakes on the path to impact

I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing "person X is skeptical of MIRI" in the "cons" column) and this parent comment ("imagine I pointed a gun to your head and... offer you to give you additional information;" "never stopping at [person X thinks that p]"). I'm not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people's top-line views on questions where there's substantial disagreement, based on your overall assessment of that particular person's credibility / quality of intuition / whatever, separately from your evaluation of their finer-grained sub-claims.

If you are staking $5m on something, it's hard for me to imagine a case where it makes sense to end up with an important node in your tree of claims whose justification is "opinions diverge on this but the people I think are smartest tend to believe p." The reason I think this is usually bad is that (a) it's actually impossible to know how much weight it's rational to give someone else's opinion without inspecting their sub-claims, and (b) it leads to groupthink/herding/information cascades.

As a toy example to illustrate (a): suppose that for MIRI to be the optimal grant recipient, it both needs to be the case that AI risk is high (A) and that MIRI is the best organization working to mitigate it (B). A and B are independent, and the prior is P(A) = P(B) = 0.5. Alice and Bob have each observed evidence with a 9:1 odds ratio in favor of A, so they think P(A) = 0.9, P(B) = 0.5. Carol has observed evidence with a 9:1 odds ratio in favor of B, so she thinks P(A) = 0.5, P(B) = 0.9. Alice, Bob and Carol all have the same top-line view of MIRI (P(A and B) = 0.45), but the rational aggregation of Alice and Bob's "view" is much less positive than the rational aggregation of Bob and Carol's.
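
To make the arithmetic concrete, here's a quick Python sketch of the toy example. It pools evidence by multiplying independent likelihood ratios onto the shared 1:1 prior; treating Alice's and Bob's observations as independent pieces of evidence is an assumption made purely for illustration.

```python
def posterior(likelihood_ratios, prior_odds=1.0):
    """Combine a prior (in odds form) with independent likelihood ratios."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)  # convert odds back into a probability

# Each person's evidence, expressed as odds ratios in favor of A ("AI risk is
# high") and B ("MIRI is the best org working on it"). Prior on each is 1:1.
alice = {"A": [9], "B": []}
bob = {"A": [9], "B": []}
carol = {"A": [], "B": [9]}

def p_a_and_b(*people):
    """P(A and B) after pooling everyone's evidence, with A, B independent."""
    pooled = {c: [lr for person in people for lr in person[c]] for c in "AB"}
    return posterior(pooled["A"]) * posterior(pooled["B"])

print(p_a_and_b(alice))       # 0.45: each individual's top-line view...
print(p_a_and_b(carol))       # 0.45: ...is exactly the same
print(p_a_and_b(alice, bob))  # ~0.49: pooling Alice and Bob barely moves it
print(p_a_and_b(bob, carol))  # 0.81: pooling Bob and Carol is far more positive
```

You can't recover that difference from the three identical top-line numbers; you have to look at the sub-claims.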

It's interesting that you mention hierarchical organizations because I think they usually follow a better process for dividing up epistemic labor, which is to assign different sub-problems to different people rather than averaging a large number of people's beliefs on a single question. This works better because the sub-problems are more likely to be independent from each other, so they don't require as much communication / model-sharing to aggregate their results.

In fact, when hierarchical organizations do the other thing—"brute force" aggregating others' beliefs in situations of disagreement—it usually indicates an organizational failure. My own experience is that I often see people do something a particular way, even though they disagree with it, because they think that's my preference; but it turns out they had a bad model of my preferences (often because they saw me express a context-specific preference and applied it outside that context) and would have been better off using their own judgment.

My mistakes on the path to impact

if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views

Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. "person X doesn't like MIRI" in the "cons" column of your spreadsheet seems foolish and wrongheaded.

Framing it as "taking others' views into account" or "ignoring others' views" is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.

My mistakes on the path to impact

Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.
...
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.

I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:

  • Moral uncertainty, giving other moral systems weight "because other smart people believe them" rather than because they seem object-level reasonable
  • Lots of emphasis on avoiding accidentally doing harm by being uninformed
  • People bring up "intelligent people disagree with this" as a reason against something rather than going through the object-level arguments

Being epistemically modest by, say, replacing your own opinions with the average opinion of everyone around you, might improve the epistemics of the majority of people (in fact it almost must by definition), but it is a terrible idea on a group level: it's a recipe for information cascades, groupthink and herding.
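
To make the group-level point concrete, here's a toy Python construction of my own (the numbers and the "average your estimate with the running consensus" rule are illustrative assumptions, not anything empirical). Every deferring person becomes individually less noisy, but the group's aggregate becomes far less accurate than it would be if everyone simply reported their own estimate.

```python
# N people each get an independent, unbiased noisy estimate of some quantity.
# "Modest" people report a weighted average of their own estimate and the
# running consensus (the mean of everyone who reported before them).
# We track each report as a vector of weights over the underlying independent
# signals, so variances can be computed exactly rather than simulated.

N = 300        # number of people
SIGMA2 = 1.0   # variance of each person's private estimate
W = 0.9        # weight a "modest" person puts on the existing consensus

def variance(coefs):
    """Variance of a weighted sum of independent signals with variance SIGMA2."""
    return SIGMA2 * sum(c * c for c in coefs)

reports = []            # one coefficient vector per person
coef_sum = [0.0] * N    # running sum of report coefficient vectors

for i in range(N):
    own = [0.0] * N
    own[i] = 1.0        # person i's private estimate
    if i == 0:
        report = own    # the first person has no consensus to defer to
    else:
        consensus = [c / i for c in coef_sum]
        report = [W * c + (1 - W) * o for c, o in zip(consensus, own)]
    reports.append(report)
    coef_sum = [s + r for s, r in zip(coef_sum, report)]

group_mean = [s / N for s in coef_sum]

print("independent person's variance:     ", SIGMA2)
print("worst deferring person's variance: ", round(max(variance(r) for r in reports[1:]), 3))
print("variance of mean of indep. reports:", round(SIGMA2 / N, 4))
print("variance of mean of modest reports:", round(variance(group_mean), 4))
```

With 90% of the weight on the consensus, every deferring person's variance drops below 1, while the variance of the group mean ends up dozens of times larger than the 1/N you'd get from averaging independent reports: early opinions dominate, and later people's private information is mostly thrown away.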

In retrospect, it's not surprising that this has ended up with numerous people being scarred and seriously demoralized by applying for massively oversubscribed EA jobs.

I guess it's ironic that 80,000 Hours—one of the most frequent repeaters of the "don't accidentally cause harm" meme—seems to have accidentally caused you quite a bit of harm with this advice (and/or its misinterpretations being repeated by others)!

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

I haven't had the opportunity to see this play out over multiple years/companies, so I'm not super well-informed yet, but I think I should have called out this part of my original comment more:

Not to mention various high-impact roles at companies that don't involve formal management at all.

If people think management is their only path to success then sure, you'll end up with everyone trying to be good at management. But if instead of starting from "who fills the new manager role" you start from "how can <person X> have the most impact on the company"—with a menu of options/archetypes that lean on different skillsets—then you're more likely to end up with people optimizing for the right thing, as best they know how.

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

I had a hard time answering this and I finally realized that I think it's because it sort of assumes performance is one-dimensional. My experience has been quite far from that: the same engineer who does a crap job on one task can, with a few tweaks to their project queue or work style, crush it at something else. In fact, making that happen is one of the most important parts of my (and all managers') jobs at Wave—we spend a lot of time trying to route people to roles where they can be the most successful.

Similarly, management is also not one-dimensional: different management roles need different skill sets which overlap with individual-contributor roles in different ways. Not to mention various high-impact roles at companies that don't involve formal management at all. So I think my tl;dr answer would be "you should try to figure out how your current highest performers on various axes can have more leveraged impact on your company, which is often some flavor of management, but it depends a lot on the people and roles involved."

For example, take engineering at Wave. Our teams are actually organized in such a way that most engineers are on a team led by (i.e. whose task queue is prioritized by) a product manager. Each engineer also has an engineering mentor who is responsible for giving them feedback, conducting 1:1s with them, contributing to their performance reviews, etc.

Product managers don't have to be technical at all, and some of the best ones aren't, but some of the best engineers also move laterally into product management because the ways in which they are good engineers overlap a lot with that role. Engineering mentors usually need to be more technically skilled than their mentees, but they don't necessarily have to be the best engineers in the company; skill at teaching and resonance with the mentor role are more important.

We also have a "platform" team which works on engineer-facing tooling and infrastructure. Currently, I'm leading this team, but in the end state I expect it to have a more traditional engineering manager. For this person, some dimensions of engineering competence will be quite important, others won't, and they'll need extra skills that are not nearly as important to individual contributors (prioritization, communication, organization...). I expect they would probably be one of our "best performers" by some metrics, but not by others.

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

I'll let Lincoln add his as well, but here are a few things we do that I think are really helpful for this:

  1. We've found our bimonthly in-person "offsites" to be extremely important. I often see a new hire's happiness and productivity increase a lot after their first offsite, because it becomes easier and more fun for them to work with their coworkers.
  2. Having the right cadence of standing meetings (1-on-1s, team meetings, retrospectives, etc.) becomes much more important since issues are less likely to surface in "hallway" conversations.
  3. We try to make it really easy for people to upgrade conversations to video calls, both by frequently encouraging them to do so, and by making sure that every new hire has a "get to know you" call with as many coworkers as possible in their first few weeks.

(Your mileage may vary with these, of course! In particular, one relevant difference between Wave and other remote organizations is that I think Wave leans more heavily on "synchronous" calls relative to "asynchronous" Slack/email messages. This is important for us since 80%+ of us speak English as a third-plus language—it's easier to clear up misunderstandings on a call!)

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

Agree that if you put a lot of weight on the efficient market hypothesis, then starting a company looks bad and probably isn't worth it. Personally, I don't think markets are efficient enough for this to be a dominant consideration (see e.g. my response here for partial justification; not sure it's possible to give a convincing full justification since it seems like a pretty deep worldview divergence between us and the more modest-epistemology-focused wing of the EA movement).

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

2. For personal work, it's annoying, but not a huge bottleneck—my internet in Jijiga (the connection described in Dan's article) was much worse than anywhere else I've been in Africa. (Ethiopia has a state-run telecom monopoly that provides among the worst service in the world.) You do have to put some effort into managing usage (e.g. tracking things that burn bandwidth via Little Snitch, caching docs offline, minimizing Docker image sizes), but it's not terrible.

It's enough of a bottleneck for reading some blogs that I wrote a simple proxy to strip bloat from web pages while I was in Senegal. But those are mostly pathologically un-optimized blogs—e.g., their page weight was larger than that of the web-based IDE (Glitch) I used to write the proxy.
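
For the curious, the core of that kind of bloat-stripping is pretty simple. Here's a minimal, stdlib-only Python sketch of the idea (a toy reconstruction for illustration, not the actual proxy; it's a fetch-and-strip script rather than a real HTTP proxy):

```python
# Fetch a page, drop scripts, styles, media and other heavy elements, and
# keep the text and links (all attributes except hrefs are discarded).
import sys
import urllib.request
from html.parser import HTMLParser

DROP = {"img", "picture", "source", "link", "meta", "embed", "track", "input"}
SUPPRESS = {"script", "style", "noscript", "iframe", "svg", "video", "audio", "object"}

class BloatStripper(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.suppress_depth = 0   # > 0 while inside a suppressed element

    def handle_starttag(self, tag, attrs):
        if tag in DROP:
            return                           # void elements: drop outright
        if tag in SUPPRESS:
            self.suppress_depth += 1         # hide the element and its contents
        elif self.suppress_depth == 0:
            if tag == "a":                   # keep links, but only the href
                href = dict(attrs).get("href", "")
                self.out.append(f'<a href="{href}">')
            else:
                self.out.append(f"<{tag}>")  # drop all other attributes

    def handle_endtag(self, tag):
        if tag in SUPPRESS:
            self.suppress_depth = max(0, self.suppress_depth - 1)
        elif tag not in DROP and self.suppress_depth == 0:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self.suppress_depth == 0:
            self.out.append(data)

if __name__ == "__main__":
    html = urllib.request.urlopen(sys.argv[1]).read().decode("utf-8", "replace")
    stripper = BloatStripper()
    stripper.feed(html)
    print("".join(stripper.out))
```

You'd run it as e.g. `python strip.py <url>` (hypothetical filename); turning it into a real proxy mostly means wrapping the same logic in a small HTTP server and caching the results.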

3. Network latency has been a major constraint on our engineering; for instance, we wrote a custom UDP-based transport layer protocol to speed up our app because TCP handshakes were too slow (I gave a talk on this if you're curious). We also adopted GraphQL relatively early, in part because it helped us reduce request/response sizes and the number of roundtrips.
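
For a sense of scale, here's some illustrative back-of-the-envelope arithmetic (the 400ms round-trip time and the handshake counts are generic assumptions about slow mobile links, not Wave's measured numbers):

```python
# Illustrative latency budget for one API call on a slow mobile link.
# All numbers are generic assumptions, not measurements from Wave's network.
RTT_MS = 400  # assumed round-trip time on a congested mobile connection

def network_time(roundtrips, rtt_ms=RTT_MS):
    """Milliseconds spent purely on roundtrips, ignoring server/transfer time."""
    return roundtrips * rtt_ms

# Cold HTTPS over TCP: TCP handshake (1 RTT) + TLS 1.2 handshake (2 RTTs)
# + the HTTP request/response itself (1 RTT).
print("cold HTTPS over TCP:      ", network_time(4), "ms")

# A UDP-based transport that resumes an existing session can skip the
# handshakes and pay only for the request/response itself.
print("0-RTT resumed UDP session:", network_time(1), "ms")

# Fetching one screen's data as three sequential REST calls vs. one GraphQL
# query, both over an already-warm connection.
print("3 sequential REST calls:  ", network_time(3), "ms")
print("1 GraphQL query:          ", network_time(1), "ms")
```

At hundreds of milliseconds per roundtrip, shaving even one or two roundtrips per interaction is directly visible to users.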

On the UX design side, a major obstacle is that many of our users aren't particularly literate (let alone tech-literate). For instance, we often communicate with users via (in-app) voice recordings instead of the more traditional text announcements. More generally, it's a strong forcing function to keep our app simple, so that the UI can be easily memorized and reading is as optional as possible. It also pushes us towards having more in-person touch points with our users—for instance, agents often help new users download the app and learn how to use it, and pre-COVID we had large teams of distributors who would go to busy markets and sign people up for the app in person.

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

The main outcome metric we try to optimize is currently the number of monthly active users, because our business has strong network effects. We can't share exact stats for various reasons, but I am allowed to say that we crossed 1m users in June, and our growth rate is high enough that our current user base is substantially larger than that. We're currently growing more quickly than most well-known fintech companies of a similar size that I know of.

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

On EA providing for-profit funding: hard to say. Considerations against:

  • Wave looks like a very good investment by non-EA standards, so additional funding from EAs wouldn't have affected our fundraising very much (not sure how much this generalizes to other companies)
  • At later stages, this is very capital-intensive, so probably wouldn't make sense except as a thing for eg Open Phil to do with its endowment
  • Founding successful companies requires putting a lot of weight on inside-view considerations, a trait that's not particularly compatible with typical EA epistemology. (Notably, Wave gets most of this trait from Drew, the CEO, who, while value-aligned with EA, finds it hard to engage with standard EA-style reasoning for this reason.)

Considerations in favor:

  • Helps keep the company controlled by value-aligned people (not sure how important this is; I think the founders of Wave will end up retaining full control)
  • If the companies are good, it doesn't actually cost anything except tying up capital for a while

Overall, I think it could make sense at early stages, where people matter more and metrics matter less (and capital goes further), but even at early stages there's probably much more of a talent constraint than a funding constraint.
