All of Ben_Kuhn's Comments + Replies

The Cost of Rejection

Interesting. It sounds like you're saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn't realize that.

In addition to the feedback thing, this seems like a generally very bad dynamic—for instance, in your example, regardless of whether she gets feedback, Sally has now more or less wasted years of graduate schooling.

John_Maxwell (+3, 2mo): I don't know that. But it seems like a possibility. [EDIT: Sally's story was inspired by cases I'm familiar with, although it's not an exact match.] And even if it isn't happening very much, it seems like we might want it to happen -- we might prefer EAs branch out and become specialists in a diverse set of areas instead of the movement being an army of generalists.
Early career EA's should consider joining fast-growing startups in emerging technologies

Top and (sustainably) fast-growing (over a long period of time) are roughly synonymous, but fast-growing is the upstream thing that causes it to be a good learning experience.

Note that billzito didn't specify, but the important number here is userbase or revenue growth, not headcount growth; the former causes the latter, but not vice versa, and rapid headcount growth without corresponding userbase growth is very bad.

People definitely can see rapidly increasing responsibility in less-fast-growing startups, but it's more likely to be because they're over-hir... (read more)

The Cost of Rejection

It sounds like you interpreted me as saying that rejecting resumes without feedback doesn't make people sad. I'm not saying that—I agree that it makes people sad (although on a per-person basis it does make people much less sad than rejecting them without feedback during later stages, which is what those points were in support of—having accidentally rejected people without feedback at many different steps, I'm speaking from experience here).

However, my main point is that providing feedback on resume applications is much more costly to the organization, not... (read more)

I think part of our disagreement might be that I see Wave as being in a different situation relative to some other EA organizations. There are a lot of software engineer jobs out there, and I'm guessing most people who are rejected by Wave would be fairly happy at some other software engineer job.

By contrast, I could imagine that stories like the following happening fairly frequently with other EA jobs:

  • Sally discovers the 80K website and gets excited about effective altruism. She spends hours reading the site and planning her career.

  • Sally converges

... (read more)
The Cost of Rejection

Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume. 

I'm a bit confused about the phrasing here because it seems to imply that "Alice's application is read by a human" and "if Alice is rejected it's not just because of her resume" are equivalent, but many resume screen processes (including eg Wave's) involve humans reading all resumes and then rejecting people (just) because of them.

I mean the entire initial application (including the screening questions) is read, not just the resume, and the resume plays a relatively small part of this decision, as (we currently believe) resumes have low predictive validity for our roles. 

The Cost of Rejection

I'm unfamiliar with EA orgs' interview processes, so I'm not sure whether you're talking about lack of feedback when someone fails an interview, or when someone's application is rejected before doing any interviews. It's really important to differentiate these because providing feedback on someone's initial application is a massively harder problem:

  • There are many more applicants (Wave rejects over 50% of applications without speaking to them and this is based on a relatively loose filter)
  • Candidates haven't interacted with a human yet, so are more l
... (read more)
John_Maxwell (+3, 2mo): Are you speaking from experience on these points? They don't seem obvious to me. In my experience, having my resume go down a black hole for a job I really want is incredibly demoralizing. I'd much rather get a bit of general feedback on where it needs to be stronger. And since I'm getting rejected at the resume stage either way, it seems like the "frustration that my resume underrates my skills" factor would be constant.

I'm also wondering if there is a measurement issue here -- giving feedback could greatly increase the probability that you will learn that a candidate is frustrated, conditional on them feeling frustrated. It's interesting that the author of the original post works as a therapist, i.e. someone paid to hear private thoughts we don't share with others. This issue could be much bigger than EA hiring managers realize.
Linch (+6, 2mo): This is a good point; my comment exchange with Peter [https://forum.effectivealtruism.org/posts/Khon9Bhmad7v4dNKe/the-cost-of-rejection#:~:text=Peter%20Wildeford-,1d,-27] was referring to people who did at least one interview or short work trial (2 hours), rather than people rejected at the initial step. Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume.

[1] It used to be Peter and Marcus, and then as we've expanded, researchers on the relevant team, and now we have a dedicated hiring specialist ops person who (among other duties) reviews the initial application.
Frank Feedback Given To Very Junior Researchers

I don't have research management experience in particular, but I have a lot of knowledge work (in particular software engineering) management experience.

IMO, giving insufficient positive feedback is a common, and damaging, blind spot for managers, especially those (like you and me) who expect their reports to derive most of their motivation from being intrinsically excited about their end goal. If unaddressed, it can easily lead to your reports feeling demotivated and like their work is pointless/terrible even when it's mostly good.

People use feedbac... (read more)

NunoSempere (+5, 3mo): Good point, thanks.
Announcing "Naming What We Can"!

Looks like if this doesn't work out, I should at least update my surname...

EdoArad (+8, 8mo): I can't wait for a new Bennian paradigm shift
My mistakes on the path to impact

I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing "person X is skeptical of MIRI" in the "cons" column) and this parent comment ("imagine I pointed a gun to your head and... offer you to give you additional information;" "never stopping at [person X thinks that p]"). I'm not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people's top-line views on questions where there's substantial di... (read more)

Max_Daniel (+9, 1y): I think I perceive less of a difference between the examples we've been discussing, but after reading your reply I'm also less sure if and where we disagree significantly.

I read your previous claim as essentially saying "it would always be bad to include the information that some person X is skeptical about MIRI when making the decision whether to give MIRI a $5M grant, unless you understand more details about why X has this view". I still think this view basically commits you to refusing to see information of that type in the COVID policy thought experiment. This is essentially for the reasons (i)-(iii) I listed above: I think that in practice it will be too costly to understand the views of each such person X in more detail. (But usually it will be worth it to do this for some people, for instance for the reason spelled out in your toy model. As I said: I do think it will often be even more valuable to understand someone's specific reasons for having a belief.) Instead, I suspect you will need to focus on the few highest-priority cases, and in the end you'll end up with people X1, …, Xl whose views you understand in great detail, people Y1, …, Ym where your understanding stops at other fairly high-level/top-line views (e.g. maybe you know what they think about "will AGI be developed this century?" but not much about why), and people Z1, …, Zn of whom you only know the top-line view of how much funding they'd want to give to MIRI.

(Note that I don't think this is hypothetical. My impression is that there are in fact long-standing disagreements about MIRI's work that can't be fully resolved or even broken down into very precise subclaims/cruxes, despite many people having spent probably hundreds of hours on this. For instance, in the writeups to their first grants to MIRI, Open Phil remark that "We found MIRI’s work especially difficult to evaluate" [https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-i
My mistakes on the path to impact

if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views

Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. "person X doesn't like MIRI" in the "cons" column of your spreadsheet seems foolish and wrongheaded.

Framing it as "taking others' views into account" or "ignoring others' views" is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.

If 100 forecasters (who I roughly respect) look at the likelihood of a future event and think it's ~10% likely, and I look at the same question and think it's ~33% likely, I think I will be incorrect in my private use of reason for my all-things-considered view to not update somewhat downwards from 33%.

I think this continues to be true even if we all in theory have access to the same public evidence, etc. 

Now, it does depend a bit on the context of what this information is for. For example if I'm asked to give my perspective on a gro... (read more)
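As a rough sketch of the update rule implied here (the equal weighting and the choice of log-odds pooling are illustrative assumptions, not anything specified in the thread):

```python
import math

def pool_log_odds(probs, weights):
    """Aggregate probability estimates by a weighted average in log-odds space."""
    logit = lambda p: math.log(p / (1 - p))
    pooled = sum(w * logit(p) for p, w in zip(probs, weights)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))  # map back to a probability

# 100 forecasters at ~10% and me at 33%, all weighted equally.
probs = [0.10] * 100 + [0.33]
weights = [1.0] * 101
print(round(pool_log_odds(probs, weights), 3))  # ~0.101: nearly all the way to the crowd
```

Under these (strong) assumptions the all-things-considered view lands very close to 10%; giving one's own inside view extra weight would move it back toward 33%.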

I think we disagree. I'm not sure why you think that even for decisions with large effects one should only or mostly take into account specific facts or arguments, and am curious about your reasoning here.

I do think it will often be even more valuable to understand someone's specific reasons for having a belief. However, (i) in complex domains achieving a full understanding would be a lot of work, (ii) people usually have incomplete insight into the specific reasons for why they hold a certain belief themselves and instead might appeal to intuition, (iii) ... (read more)

My mistakes on the path to impact

Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.
...
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.

I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:

  • Moral uncertainty, giv
... (read more)

I'm somewhat sympathetic to the frustration you express. However, I suspect the optimal response isn't to be more or less epistemically modest indiscriminately. Instead, I suspect the optimal policy is something like:

  • Always be clear and explicit to what extent a view you're communicating involves deference to others.
  • Depending on the purpose of a conversation, prioritize (possibly at different stages) either object-level discussions that ignore others' views or forming an overall judgment that includes epistemic deference.
    • E.g. when the purpose is to learn,
... (read more)

Lots of emphasis on avoiding accidentally doing harm by being uninformed

I gave a talk about this, so I consider myself to be one of the repeaters of that message. But I also think I always tried to add a lot of caveats, like "you should take this advice less seriously if you're the type of person who listens to advice like this" and similar. It's a bit hard to calibrate, but I'm definitely in favor of people trying new projects, even at the risk of causing mild accidental harm, and in fact I think that's something that has helped me grow in the past.

If you... (read more)

I think I probably agree with the general thrust of this comment, but disagree on various specifics.

'Intelligent people disagree with this' is a good reason against being too confident in one's opinion. At the very least, it should highlight there are opportunities to explore where the disagreement is coming from, which should hopefully help everyone to form better opinions.

I also don't feel like moral uncertainty is a good example of people deferring too much.

A different way to look at this might be that if 'good judgement' is something that lots of peopl... (read more)

That last paragraph is a good observation, and I don’t think it’s entirely coincidental. 80k has a few instances in their history of accidentally causing harm, which has led them (correctly) to be very conservative about it as an organisation.

The thing is, career advice and PR are two areas 80k is very involved in and which have particular likelihood of causing as much harm as good, due to bad advice or distorted messaging. Most decisions individual EAs make are not like this, and it’s a mistake if they treat 80k’s caution as a reflection of how cautious they should be. Or worse, act even more cautiously, reasoning that the combined intelligence of the 80k staff is greater than their own (likely true, but likely irrelevant).

See also answers here mentioning that EA feels "intellectually stale". A friend says he thinks a lot of impressive people have left the EA movement because of this :(

I feel bad, because I think maybe I was one of the first people to push the "avoid accidental harm" thing.

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

I haven't had the opportunity to see this play out over multiple years/companies, so I'm not super well-informed yet, but I think I should have called out this part of my original comment more:

Not to mention various high-impact roles at companies that don't involve formal management at all.

If people think management is their only path to success then sure, you'll end up with everyone trying to be good at management. But if instead of starting from "who fills the new manager role" you start from "how can <person X> have the most impact on the company"... (read more)

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

I had a hard time answering this and I finally realized that I think it's because it sort of assumes performance is one-dimensional. My experience has been quite far from that: the same engineer who does a crap job on one task can, with a few tweaks to their project queue or work style, crush it at something else. In fact, making that happen is one of the most important parts of my (and all managers') jobs at Wave—we spend a lot of time trying to route people to roles where they can be the most successful.

Similarly, management is also not one-dimensional: ... (read more)

Ben_West (+4, 1y): Thanks Ben. I like this answer, but I feel like every time I have seen people attempt to implement it they still end up facing a trade-off.

Consider moving someone from role r1 to role r2. I think you are saying that the person you choose for r2 should be the person you expect to be best at it, which will often be people who aren't particularly good at r1. This seems fine, except that r2 might be more desirable than r1. So now a) the people who are good at r1 feel upset that someone who was objectively performing worse than them got a more desirable position, and b) they respond by trying to learn/demonstrate r2-related skills rather than the r1 stuff they are good at.

You might say something like "we should try to make the r1 people happy with r1 so r2 isn't more desirable", which I agree is good, but is really hard to do successfully. An alternative solution is to include proficiency in r1 as part of the criteria for who gets position r2. This addresses (a) and (b) but results in r2 staff being less r2-skilled. I'm curious if you disagree with this being a trade-off?
We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

I'll let Lincoln add his as well, but here are a few things we do that I think are really helpful for this:

  1. We've found our bimonthly in-person "offsites" to be extremely important. For new hires, I often see their happiness and productivity increase a lot after their first retreat because it becomes easier and more fun for them to work with their coworkers.
  2. Having the right cadence of standing meetings (1-on-1s, team meetings, retrospectives, etc.) becomes much more important since issues are less likely to surface in "hallway" conversations.
  3. We try to make
... (read more)
We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

Agree that if you put a lot of weight on the efficient market hypothesis, then starting a company looks bad and probably isn't worth it. Personally, I don't think markets are efficient enough for this to be a dominant consideration (see e.g. my response here for partial justification; not sure it's possible to give a convincing full justification since it seems like a pretty deep worldview divergence between us and the more modest-epistemology-focused wing of the EA movement).

Ben_West (+2, 1y): That makes sense, thanks!
We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

2. For personal work, it's annoying, but not a huge bottleneck—my internet in Jijiga (used in Dan's article) was much worse than anywhere else I've been in Africa. (Ethiopia has a monopoly, state-run telecom that provides among the worst service in the world.) You do have to put in some effort to managing usage (e.g. tracking things that burn network via Little Snitch, caching docs offline, minimizing Docker image size), but it's not terrible.

It is a sufficient bottleneck to reading some blogs that I wrote a simple proxy to strip bloat from web pages while... (read more)

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

The main outcome metric we try to optimize is currently number of monthly active users, because our business has strong network effects. We can't share exact stats for various reasons, but I am allowed to say that we crossed 1m users in June, and our growth rates are sufficiently high that our current user base is substantially larger than that. We're currently growing more quickly than most well-known fintech companies of similar sizes that I know of.

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

On EA providing for-profit funding: hard to say. Considerations against:

  • Wave looks like a very good investment by non-EA standards, so additional funding from EAs wouldn't have affected our fundraising very much (not sure how much this generalizes to other companies)
  • At later stages, this is very capital-intensive, so probably wouldn't make sense except as a thing for eg Open Phil to do with its endowment
  • Founding successful companies requires putting a lot of weight on inside-view considerations, a trait that's not particularly compatible with typical EA ep
... (read more)
We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

Cool! With the understanding that these aren't your opinions, I'm going to engage with them anyway bc I think they're interesting. I think for all four of these I agree that they directionally push toward for-profits being less good, but that people overestimate the magnitude of the effect.

For-profit entrepreneurship has built-in incentives that already cause many entrepreneurs to try and implement any promising opportunities. As a result, we'd expect it to be drastically less neglected, or at least drastically less neglected relative to nonprofit opportun

... (read more)

Same with eg OpenAI which got $1b in nonprofit commitments but still had to become (capped) for-profit in order to grow.

If you look at OpenAI's annual filings, it looks like the $1b did not materialize.

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

Hmm. This argument seems like it only works if there are no market failures (i.e. ideas where it's possible to capture a decent fraction of the value created), and it seems like most nonprofits address some sort of market failure? (e.g. "people do not understand the benefits of vitamin-fortified food," "vaccination has strong positive externalities"...)

lincolnq (+3, 1y): Yeah, that seems right to me, and is a good model that predicts the existing nonprofit startup ideas! My point is that it seems like a very narrow slice of all value-producing ideas.
We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

I agree with most of what Lincoln said and would also plug Why and how to start a for-profit company serving emerging markets as material on this, if you haven't read it yet :)

Can you elaborate on the "various reasons" that people argue for-profit entrepreneurship is less promising than nonprofit entrepreneurship or provide any pointers on reading material? I haven't run across these arguments.

Thank you both for your thoughtful answers.

To clarify, I don't have a strong opinion on this comparison myself, and would love to hear more points of view on this. Sadly I'm not aware of any reading materials on this topic, but have heard the following arguments made in one on one conversations:

  1. For-profit entrepreneurship has built-in incentives that already cause many entrepreneurs to try and implement any promising opportunities. As a result, we'd expect it to be drastically less neglected, or at least drastically less neglected relative to nonprofit opp
... (read more)
We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

Great questions!

What are common failure cases/traps to avoid

I don't know about "most common" as I think it varies by company, but the worst one for me was allowing myself to get distracted by problems that were more rewarding in the short term, but less important or leveraged. I wrote a bit about this in Attention is your scarcest resource.

How much should I be directly coding vs "architecting" vs process management

Related to the above, you should never be coding anything that's even remotely urgent (because it'll distract you too much from non-coding probl... (read more)

Weird Wealth Creation ideas - Mobile Money

Sorry for the minimalist website :) A couple clarifications:

  • We indeed split our businesses into Sendwave (international money transfer) and Wave (mobile money). Wave.com is the website for the latter.
  • The latter currently operates only in Senegal and Cote d'Ivoire (stay tuned though).
  • In addition to charging no fees for deposits or withdrawals, we charge a flat 1% to send. All in, I believe we're about 80% cheaper than Orange Money for typical transaction sizes.
  • We don't provide services to Orange—if you saw the logo on the website it's just because we let ou
... (read more)
Weird Wealth Creation ideas - Mobile Money

Hey Marc, cool that you're thinking about this!

I work for Wave, we build mobile money systems in Senegal, Cote d'Ivoire, and hopefully soon other countries. Here are some thoughts on these interventions based on Wave's experience:

Interventions 1-2 (creating accounts): I think for most people that don't use mobile money, in countries where mobile money is available, "not having an account" is not the main blocker. It's more likely to be something like

  • They don't live near enough to an agent
  • Mobile money charges fees that are too high given the typical amounts
... (read more)
MarcSerna (+1, 1y): Thank you for this extremely informative response. This was way beyond my expectations!
The case for investing to give later

Some of your "conservative" parameter estimates are surprising to me.

For instance, your conservative estimate of the effect of diminishing marginal returns is 2% per year or 10% over 5y. If (say) the total pool of EA-aligned funds grows by 50% over the next 5 years due to additional donors joining—which seems extremely plausible—it seems like that should make the marginal opportunity much more than 10% less good.

You also wrote

we’ll stick with 5% as a conservative estimate for real expected returns on index fund investing

but used 7% as your conservative estimate in the spreadsheet and in the bottom-line estimates you reported.
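To make the magnitudes concrete (the horizons below are arbitrary illustrations, and this assumes simple annual compounding):

```python
# How much the 5% vs 7% real-return assumption matters over long horizons.
for years in (10, 20, 30):
    at5, at7 = 1.05 ** years, 1.07 ** years
    print(f"{years}y: 5% -> {at5:.2f}x, 7% -> {at7:.2f}x, ratio {at7 / at5:.2f}")

# And the diminishing-returns figure: 2% per year compounds to ~10% over 5 years.
print(f"{1 - 0.98 ** 5:.1%}")  # ~9.6%
```

Over 30 years the 5%-vs-7% choice alone changes the bottom line by roughly 1.8x, so which estimate gets reported matters a lot.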

SjirH (+3, 1y): I'm not sure whether it would, considering, for example, the large room for funding GiveWell opportunities have had for multiple years (and will likely keep having) and their seemingly hardly diminishing cost-effectiveness on the margin (though data are obviously noisy here/there are other explanations). But I do take your point that this is not a very conservative estimate. I'll update them from 1%/2% to 2%/4%, thank you!

See the rest of the paragraph you refer to: the 5% is my conservative estimate for index investing, the 7% for investing more generally.
CEA's Plans for 2020

I'm looking forward to CEA having a great 2020 under hopefully much more stable and certain leadership!

I’d welcome feedback on these plans via this form or in the comments, especially if you think there’s something that we’re missing or could be doing better.

This is weakly held since I don't have any context on what's going on internally with CEA right now.

That said: of the items listed in your summary of goals, it looks like about 80% of them involve inward-facing initiatives (hiring, spinoffs, process improvements, str... (read more)

JP Addison (+4, 2y): Adding a little bit to Max's comment. When I count the number of our staff working on each section, I get ~half of staff focused on the external-facing goals. And that's on top of the business-as-usual work, which is largely external facing. I was one of the people pushing for more object-level work this quarter, but my feeling of the distance between what it was and what I wanted it to be was not as high as it might seem from a simple count of the number of goals.[1]

[1] Which, to be clear, you had no way of knowing about, and you explicitly called out that it was weakly held.

I think this is a really important point, and one I’ve been thinking a lot about over the past month. As you say, I do think that having a strategy is an important starting point, but I don’t want us to get stuck too meta. We’re still developing our strategy, but this quarter we’re planning to focus more on object-level work.  Hopefully we can share more about strategy and object-level work in the future. 

That said, I also think that we’ve made a lot of object-level progress in the last year, and we plan to make more this year, so we might have u

... (read more)
Concerning the Recent 2019-Novel Coronavirus Outbreak

Hmm. You're betting based on whether the fatalities exceed the mean of Justin's implied prior, but the prior is really heavy-tailed, so it's not actually clear that your bet is positive EV for him. (e.g., "1:1 odds that you're off by an order of magnitude" would be a terrible bet for Justin because he has 2/3 credence that there will be no pandemic at all).

Justin's credence for P(a particular person gets it | it goes world scale pandemic) should also be heavy-tailed, since the spread of infections is a preferential attac... (read more)
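A toy simulation of the point about heavy tails (every parameter here is invented for illustration; only the 2/3-credence-in-no-pandemic figure comes from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Toy prior: 2/3 chance of ~no pandemic, else lognormally distributed fatalities.
pandemic = rng.random(n) < 1 / 3
fatalities = np.where(pandemic, rng.lognormal(mean=10, sigma=2, size=n), 0.0)

prior_mean = fatalities.mean()
print(f"P(fatalities > prior mean) = {(fatalities > prior_mean).mean():.2f}")  # ~0.11
```

Because the mean of a heavy-tailed distribution sits far above its median, the outcome exceeds the prior mean only ~11% of the time in this toy model, so even-money odds on "exceeds the mean" are far from a fair bet for the holder of that prior.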

Sean_o_h (+3, 2y): This seems fair. I suggested the bet quite quickly. Without having time to work through the math of the bet, I suggested something that felt on the conservative side from the point of view of my beliefs. The more I think about it, (a) the more confident I am in my beliefs and (b) the more I feel it was not as generous as I originally thought*. I have a personal liking for binary bets rather than proportional payoffs.

As a small concession in light of the points raised, I'd be happy to offer to modify the terms retroactively to make them more favourable to Justin, offering either of the following: (i) doubling the odds against me to 10:1 (rather than 5:1) on the original claim (at least an order of magnitude lower than his fermi), so his £50 would get £500 of mine; OR (ii) 5:1 on at least 1.5 orders of magnitude (50x) lower than his fermi (rather than 10x). (My intuition is that (ii) is a better deal than (i) but I haven't worked it through.)

(*i.e. at time of bet - I think the likelihood of this being a severe global pandemic is now diminishing further in my mind)

Oops. I searched for the title of the link before posting, but didn't read the titles carefully enough to find duplicates that edited the title. Should have put more weight on my prior that this would already have been posted :)

Why and how to start a for-profit company serving emerging markets

I'm guessing that they assumed we were exaggerating the numbers in order to make them more interested in working with us. The fact that you're so ready to call anyone who lies about user numbers a "scammer" may itself be part of the cultural difference here :)

Aaron Gertler (+6, 2y): Oh, I see that I was confused: I was thinking of a "user number" and a "transaction number" as things related to Wave's bank account -- as though you were trying to share information for something like direct deposit and being accused of lying. The quote makes much more sense if it's "number of users" and "number of transactions".
Why and how to start a for-profit company serving emerging markets

Examples (mostly from Senegal since that's where I have the most experience, caveat that these are generalizations, all of them could be confounded by other stuff, the world is complicated, etc.):

  • Most Senegalese companies seem to place a much stronger emphasis on bureaucracy and paperwork.
  • When interacting with potential business partners in East Africa, we eventually realized that when we told them our user/transaction numbers, they often assumed that we were lying unless the claim was endorsed by someone they had a trusted connection to.
  • In the US, we
... (read more)
Aaron Gertler (+2, 2y): This is confusing. Did they just think you were scammers, not a real business at all? Or did they think of you as a business that was suspiciously quick to share this information, and trying to... I don't know, make a power play? Something else?
Raemon (+6, 2y): The book The Culture Map [https://smile.amazon.com/Culture-Map-Breaking-Invisible-Boundaries/dp/1610392507?sa-no-redirect=1] explores these sorts of problems, comparing many cultures' norms and advising on how to bridge the differences. Some advice it gives for this particular example (at least in several 'strong hierarchy' cultures) is that instead of a higher-ranking person asking direct questions of lower-ranking people, the boss can ask a team of lower-ranked people to work together to submit a proposal, where "who exactly criticized which thing" is a bit obfuscated.
Why and how to start a for-profit company serving emerging markets

Broadly agree, but:

You might end up making more impact if you started a startup in your own country, and just earned-to-give your earnings to GiveWell / EA organizations. This is because I think there are very few startups that benefit the poorest of the poor, since the poorest people don't even have access to basic needs.

Can't you just provide people basic needs then though? Many of Wave's clients have no smartphone and can't read. Low-cost Android phones (e.g. Tecno Mobile) probably provided a lot of value to people who previously did... (read more)

Why and how to start a for-profit company serving emerging markets

Haha this is probably the first time someone said that about one of my essays—I’m flattered, and excited to potentially write follow ups!

Is there anything in particular you’re curious about? Sometimes it’s hard to be sure of what’s novel vs obvious/common knowledge.

Aaron Gertler (+3, 2y): I'd be interested in hearing about challenges in keeping employees around, both on the local and international side. If there were cases where employees from the developed world quit after trying to live in Africa, what seemed like the major factors behind their not wanting to continue? If there were cases where local employees didn't fit in well, what happened? What has Wave done to improve comfort/productivity for both kinds of employee?
Afrothunder (+3, 2y): Hi Ben! I second this comment; I would love to learn more from your experience. In particular, I would love to learn more about how you have balanced working in Silicon Valley and implementation contexts during different stages of your venture, as well as more about some of the initial challenges you faced with developing/launching the product that are specific to the start-up space in development contexts. I am personally also very interested in this kind of career trajectory!
The Future of Earning to Give

I imagine that there is a large fraction of EAs who expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happens to overlap heavily with the needs of EA organizations. Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one
... (read more)
Michael_Wiebe (+1, 2y): To add to Ben's argument, uncertainty about which cause is the best will rationalize diversifying across multiple causes. If we use confidence intervals instead of point estimates, it's plausible that the top causes will have overlapping confidence intervals.

I agree with most of your comment.

Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.

That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that people who get evaluated as average by EA orgs are 2 or more standard deviations less productive, which seems to imply that they're pretty replaceable.

Raemon (+2, 2y): (edit: whoops, responded to wrong comment)
Long-term Donation Bunching?

If you're really worried about value drift, you might be able to use a bank account that requires two signatures to withdraw funds, and add a second signatory whom you trust to enforce your precommitment to donate?

I haven't actually tried to do this, but I know businesses sometimes have this type of control on their accounts, and it might be available to consumers too.

"Why Nations Fail" and the long-termist view of global poverty

Whoops, sorry about the quotes--I was writing quickly and intended them to denote that I was using "solve" in an imprecise way, not attributing the word to you, but that is obviously not how it reads. Edited.

"Why Nations Fail" and the long-termist view of global poverty

These theoretical claims seem quite weak/incomplete.

  • In practice, autocrats' time horizons are highly finite, so I don't think a theoretical mutual-cooperation equilibrium is very relevant. (At minimum, the autocrat will eventually die.)
  • All your suggestions about oligarchy improving the tyranny of the majority / collective action problems only apply to actions that are in the oligarchy's interests. You haven't made any case that the important instances of these problems are in an oligarchy's interests to solve, and it doesn't seem likely to me.
cole_haus (+3, 2y): Yes, I agree they're very incomplete--as advertised. I also think the original claims they're responding to are pretty incomplete.

I. I agree that time horizons are finite. If you're taking that as meaning that the defect/defect equilibrium reigns due to backward induction on a fixed number of games, that seems much too strong to me. Both empirically and theoretically, cooperation becomes much more plausible in indefinitely iterated games. Does the single-shot game that Acemoglu and Robinson implicitly describe really seem like a better description of the situation to you? It seems very clear to me that it's not a good fit. If I had to choose between a single-shot game and an iterated game as a model, I'd choose the iterated game every time (and maybe just set the discount rate more aggressively as needed--as the post points out, we can interpret the discount rate as having to do with the probability of deposition). Maybe the crux here is the average tenure of autocrats and who we're thinking of when we use the term?

II. (I don't say "solve" anywhere in the post so I think the quote marks there are a bit misleading.) I agree that to come up with something closer to a conclusion, you'd have to do something like analyze the weighted value of each of these structural factors. Even in the absence of such an analysis, I think getting a fuller list of the structural advantages and disadvantages gets us closer to the truth than a one-sided list. Also, if we accept the claim that Acemoglu and Robinson's empirical evidence is weak, then the fact that I haven't presented any evidence on the real-world importance of these theoretical mechanisms becomes a bit less troubling. It means there's something closer to symmetry in the absence of good evidence bearing on the relative importance of structural advantages and disadvantages in each type of society. My intuition is that majoritarian tyrannies and collective action problems are huge, pervasive problems in the contemp
"Why Nations Fail" and the long-termist view of global poverty

What's the shift you think it would imply in animal advocacy?

zdgroff (+7, 2y): As I've been doing research this summer, I've become a bit more tentative and wary of acting like we know much, but my general intuition is that (a) our focus should not be on saving animals now but on securing whatever changes save future animals, so ethical changes and institutional changes; (b) I think institutional changes are the most promising avenue for this, and the question is which institutional changes last longest; (c) we should look for path dependencies. It's unclear to me what advocacy changes this means, but I think it makes the case for, e.g., the Nonhuman Rights Project or circus bans stronger than they are in the short term. I think this is a crucial area of research though.

For path dependencies, the biggest one right now I think is whether clean and plant-based meat succeed. The shift from longtermism here I think is that rather than trying to get products to market the fastest, we should ask what in general makes industries most likely to succeed or fail and just optimize for the probability of success. As an example, this makes me inclined to favor clean meat companies supporting regulations and transparency.
"Why Nations Fail" and the long-termist view of global poverty

I had one of his quotes on partial attribution bias (maybe even from that interview) in mind as I wrote this!

EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge?

Yikes; this is pretty concerning data. Great find!

I'd be curious to hear from anyone at GWWC how this updates them, and in particular how it bears on their "realistic calculation" of their cost effectiveness, which assumes 5% annualized attrition. (That's not an apples to apples comparison, so their estimate isn't necessarily off by literally 10x, but it seems like it must be off by quite a lot, unless the survey data is somehow biased.)
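For scale, here is what different annualized attrition rates imply for cumulative pledge retention (the 5% figure is GWWC's; the higher rates and horizons are illustrative assumptions only):

```python
# Cumulative retention after t years under constant annualized attrition.
for rate in (0.05, 0.20, 0.40):
    retained = {years: (1 - rate) ** years for years in (3, 6)}
    print(f"{rate:.0%} attrition: " + ", ".join(f"{y}y -> {r:.0%}" for y, r in retained.items()))
```

At 5% attrition, ~74% of pledgers would still be giving after six years; the survey data would have to imply far lower retention than that for the original estimate to be off by a large factor.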

We're definitely aware that Giving What We Can's 2015 analysis comes away with a more optimistic conclusion than other more recent data sources like the EA Survey indicate (and I believe the Slate Star Codex survey, though I haven't seen a careful analysis of that one as it bears on Giving What We Can). We've just made some improvements to the donation recording platform, and once a few last things are ironed out we'll be sending out reminders for members to record their donations that may not have been recorded. Once people have had time to respond to those reminders, we plan to do an update on our 2015 estimates of members' follow-through.

Please use art to convey EA!

I suspect that straightforwardly taking specific EA ideas and putting them into fiction is going to be very hard to do in a non-cringeworthy way (as pointed out by elle in another comment). I'd be more interested in attempts to write fiction that conveys an EA mindset without being overly conceptual.

For instance, a lot of today's fiction seems cynical and pessimistic about human nature; the characters frequently don't seem to have goals related to anything other than their immediate social environment; and they often don't pursue those ... (read more)

Milan_Griffes (+4, 3y): +1. Any fiction that believably shows a bunch of disparate folks solving coordination problems seems really good on this dimension. (Children of Men [https://en.wikipedia.org/wiki/Children_of_Men] comes to mind...)
Structure EA organizations as WSDNs?

worker cooperatives have positive impacts on both firm productivity and employee welfare; there is a lot more research showing that worker ownership is modestly better than regular capitalist ownership

This is causal language, but as far as I can tell (at least per the 2nd paper) the studies are all correlational? By default I'm very skeptical of ability to control for confounders in a correlational analysis here. Are there any studies with a more robust way to infer causation?

kbog (+2, 3y): The 1st paper says that the studies generally do a good job of ruling out reverse causality through econometric techniques.
kbog (+2, 3y): Don't know for management. For employee ownership, some of the studies in https://www.nber.org/books/krus08-1 unpack the causal stories of benefits.
Is preventing child abuse a plausible Cause X?

(PS: if you're interested in posting but unsure about content, I'd be excited to help answer any q's or read a draft! My email is in my profile.)

Is EA unscalable central planning?

What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that's not a strong argument against doing it right now. You can't start a political party with support from 0.01% of the population!

In general, we should do things that don't scale but are optimal right now, rather than things that do scale but aren't optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.

Aaron Gertler (+2, 3y): Also, we're very far from a world where even most people in EA choose careers based on 80K's advice. I'd guess that among EA community members with "direct work" jobs, many or even most of them mostly used their own judgment to evaluate which career path would optimize their impact. (If "optimizing impact" was even their goal, that is; many of us chose jobs partly or mostly based on things like "personal interest" and "who we'd get to work with" rather than 100% "what will help most".)

And of course, most members don't have "direct work" jobs; they just donate and/or discuss EA while working in positions that 80K doesn't recommend anymore (or never did), because they found the jobs before they found EA or because they don't take 80K recommendations seriously enough to want to switch jobs (or any of a dozen other reasons).
Nathan Young (+1, 3y): Thanks :) Do we acknowledge our activities will change as we grow? Are we transparent about our mission?
Is preventing child abuse a plausible Cause X?

I would be extremely interested if you were to hypothetically write an "intro to child protection/welfare for EAs" post on this forum! (And it would probably be a great candidate for a prize as well!) I think the number of upvotes on this comment show that other people agree :)

Personally, I have ~zero knowledge of this topic (and probably at least as many misconceptions as accurate beliefs!) and would be happy to start learning about it from scratch.

"Cause X" usually refers to an issue that is (one of) the most important one(s) to work ... (read more)

Milan_Griffes (+6, 3y): +1. I'd also be very interested in a book-review post of The Body Keeps the Score [https://www.goodreads.com/book/show/18693771-the-body-keeps-the-score]. (I may do this myself at some point, but not sure when I'll have the capacity.)
Does climate change deserve more attention within EA?

While climate change doesn't immediately appear to be neglected, it seems possible that many people/orgs "working on climate change" aren't doing so particularly effectively.

Historically, it seems like the environmental movement has an extremely poor track record at applying an "optimizing mindset" to problems and has tended to advocate solutions based on mood affiliation rather than reasoning about efficiency. A recent example would be the reactions to the California drought which blame almost anyone except the actual biggest... (read more)

I agree that the environmental movement is extremely poor at optimisation. This being said, there are a number of very large philanthropists and charities who do take a sensible approach to climate change, so I don't think this is a case in which EAs could march in and totally change everything. Much of Climateworks' giving takes a broadly EA approach, and they oversee the giving of numerous multi-billion dollar foundations. Gates also does some sensible work on the energy innovation side. Nevertheless, most money in the space does seem to be spe... (read more)

Long-Term Future Fund: April 2019 grant recommendations

If one person-year is 2000 hours, then that implies you're valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.

This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I'm sure there are other overheads that I don't... (read more)
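The arithmetic behind those figures, for concreteness (the ~$1k-$2k marginal cost per grant is back-derived from the numbers above, not stated anywhere in the thread):

```python
# Staff-time valuation implied by Habryka's fermi, quoted further down in this
# compilation: ~$252k total cost for ~1.5 people.
staff_cost_per_year = 252_000 / 1.5
hours_per_year = 2_000
hourly = staff_cost_per_year / hours_per_year
print(f"${hourly:.0f}/hour")  # ~$84/hour, i.e. "about $85"

# If the marginal cost per grant is on the order of $1k-$2k (back-derived):
for cost in (1_000, 2_000):
    print(f"${cost:,} / ${hourly:.0f} per hour = {cost / hourly:.0f} person-hours")  # ~12-24
```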

I actually think the $10k grant threshold doesn't make a lot of sense even if we assume the details of this "opportunity cost" perspective are correct. Grants should fulfill the following criterion:

"Benefit of making the grant" ≥ "Financial cost of grant" + "CEA's opportunity cost from distributing a grant"

If we assume that there are large impact differences between different opportunities, as EAs generally do, a $5k grant could easily have a benefit worth $50k to the EA community, and therefore easily be w... (read more)

Long-Term Future Fund: April 2019 grant recommendations

I think we should think carefully about the norm being set by the comments here.

This is an exceptionally transparent and useful grant report (especially Oliver Habryka's). It's helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.

But the fact that Habryka now must defend a... (read more)

Now that the dust has settled a bit, I'm curious what Habryka & the other fund managers think of the level of community engagement that occurred on this report...

  • What kinds of engagement seemed helpful?
  • What kinds of engagement seemed unnecessary?
  • What kinds of engagement were emotionally expensive to address?
  • Does it seem sustainable to write up grantmaker reasoning at this level of detail, for each grantmaking round going forward?
  • Does it seem sustainable to engage with questions & comments from the community at this level of detail, for each grantmaking round going forward?

Relatedly, is Oli getting compensated for the work he's putting in to the Longterm Future Fund?

Seems good to move towards a regime wherein:

  • The norm is to write up detailed, public grant reports
  • Community members ask a bunch of questions about the grant decisions
  • The norm is that a representative of the grant-making staff fields all of these questions, and is compensated for doing so

+1

I think it's great that the Fund is trending towards more transparency & a broader set of grantees (cf. November 2018 grant report, cf. July 2018 concerns about the Fund).

And I really appreciate the level of care & attention that Oli is putting towards this thread. I've found the discussion really helpful.

I strongly agree with this. EA funds seemed to have a tough time finding grant makers who were both qualified and had sufficient time, and I would expect that to be partly because of the harsh online environment previous grant makers faced. The current team seems to have impressively addressed the worries people had in terms of donating to smaller and more speculative projects, and providing detailed write-ups on them. I imagine that in depth, harsh attacks on each grant decision will make it still harder to recruit great people for these committees, and m... (read more)

Agree with this, especially the comments about rudeness. This also means that I disagree with Oli's comment elsewhere in this thread:

that people should feel free to express any system-1 level reactions they have to these grants.

In line with what Ben says, I think people should apply a filter to their system-1 level reactions, and not express them whatever they are.

Long-Term Future Fund: April 2019 grant recommendations

Wow! This is an order of magnitude larger than I expected. What's the source of the overhead here?

Here is my rough fermi:

My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person lost in overhead, communications, technology (EA Funds platform) and needing to manage them.

Since people's competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252... (read more)
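Spelling out that fermi (a direct transcription of the stated arithmetic, nothing added):

```python
# ($150k counterfactual earnings + 30% tax lost on the $60k CEA salary) x 1.5 overhead
counterfactual_earnings = 150_000
tax_lost_on_salary = 0.3 * 60_000
overhead_multiplier = 1.5
total = (counterfactual_earnings + tax_lost_on_salary) * overhead_multiplier
print(f"${total:,.0f}")  # $252,000 per year
```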

My new article on EA and the systemic change objection

This is true as far as it goes, but I think that many EAs, including me, would endorse the idea that "social movements are the [or at least a] key drivers of change in human history." It seems perverse to assume otherwise on a forum whose entire point is to help the progress of a social movement that claims to e.g. help participants have 100x more positive impact in the world.

More generally, it's true that your chance of convincing "constitutionally disinclined" people with two papers is low. But your chance is zero of convincing a... (read more)

EA is vetting-constrained

I'm very interested in hearing from grantmakers about their take on this problem (especially those at or associated with CEA, which it seems like has been involved in most of the biggest initiatives to scale out EA's vetting, through EA Grants and EA Funds).

  • What % of grant applicants are in the "definitely good enough" vs "definitely (or reasonably confidently) not good enough" vs "uncertain + not enough time/expertise to evaluate" buckets?
  • (Are these the right buckets to be looking at?)
  • What do you feel your biggest c
... (read more)

(Funding manager of the EA Meta Fund here)

For our last distribution, we ran an application round for the first time. I conducted the very initial investigation, which I communicated to the committee. Previous grantees all came through our personal network.

Things we learnt during our application round:

i) We got significantly fewer applications than we expected and would have been able to spend more time vetting projects. This was not a bottleneck. After some investigation through personal outreach I have the impression there are not many projects being s... (read more)

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation

It seems easier to increase the efficiency of your work than the quality.

In software engineering, I've found the exact opposite. It's relatively easy for me to train people to identify and correct flaws in their own code–I point out the problems in code review and try to explain the underlying heuristics/models I'm using, and eventually other people learn the same heuristics/models. On the other hand, I have no idea how to train people to work more quickly.

(Of course there are many reasons why other types of work might be different from software eng!)

PeterMcCluskey (+5, 3y): I expect that good software engineers are more likely to figure out for themselves how to be more efficient than they are to figure out how to increase their work quality. So it's not obvious what to infer from "it's harder for an employer to train people to work faster" - does it just mean that the employer has less need to train the slow, high-quality worker?
Jonas Vollmer (+4, 3y): Good point, agree it depends on the type of work.