All of Ben Kuhn's Comments + Replies

Don't forget Zenefits!

In 2016 an internal legal investigation at Zenefits found the company's licensing was out of compliance and that Conrad had created a browser extension to skirt training requirements for selling insurance in California.[15] After self-reporting these issues, Zenefits hired an independent third party to do an internal audit of its licensing controls and sent the report to all 50 states.[16] The California Department of Insurance as well as the Massachusetts Division of Insurance began investigations of their own based on Zenefits' repo

... (read more)
2
Ben_West🔸
Thanks! I don't really understand if this technically qualifies as fraud, but seems spiritually similar – I added it to the table and cited you.

Why do you think people think it's unimportant (rather than, e.g., important but very difficult to achieve due to the age skew issue mentioned in the post)?

8
Yonatan Cale
Examples:
  • A funded, reputable, [important in my opinion] EA org that I helped a bit with hiring an engineer for a [in my opinion] key role had, on their first draft, something like "we'd be happy to hire a top graduate from a coding bootcamp"
  • I spoke to 2-3 senior product managers looking for their way into EA, while at the same time:
    • (almost?) no EA org is hiring product people
    • In my opinion, many EA orgs could use serious help from senior product people

(Please don't write here if you can guess what orgs I'm talking about, I left them anonymous on purpose)

From these examples I infer the orgs are not even trying. It's not that they're trying and failing due to, for example, an age skew in the community.

I also have theories for why this would be the case, but most of my opinion comes from my observations.

I agree that it's downstream of this, but strongly agree with ideopunk that mission alignment is a reasonable requirement to have.* A (perhaps the) major cause of organizations becoming dysfunctional as they grow is that people within the organization act in ways that are good for them, but bad for the organization overall—for example, fudging numbers to make themselves look more successful, asking for more headcount when they don't really need it, doing things that are short-term good but long-term bad (with the assumption that they'll have moved on before t... (read more)

0
Miguel
In Fortune 500 companies, you rarely find people who are exceptional from the get-go. Most of those who have succeeded were allowed to grow within parameterized environments of multidisciplinary scope, so they had room to combine ideas. Can EA develop the EA/longtermist attitude in exceptionally talented people? I believe digging into this question brutally can point every EA founder/directorship role toward how to deal with developing management talent.

Whoops! Fixed, it was just supposed to point to the same advice-offer post as the first paragraph, to add context :)

In addition to having a lot more on the line, other reasons to expect better of ourselves:

  • EA had (at least potential) access to a lot of information that investors may not have, in particular about Alameda's early exodus in 2018.
  • EA had much more time to investigate and vet SBF—there's typically a very large premium for investors to move fast during fundraising, to minimize distraction for the CEO/team.

Because of the second point, many professional investors do surprisingly little vetting. For example, SoftBank is pretty widely reputed to be "dumb money;" I... (read more)

Strongly agree with these points and think the first is what makes the overwhelming difference on why EA should have done better. Multiple people allege (both publicly on the forum and in confidence to me) that they have told EA leadership, since the Alameda situation of 2018, that SBF was doing things that strongly break with EA values.

 This doesn't imply we should know about any particular illegal activity SBF might have been undertaking, but I would expect SBF to not have been so promoted throughout the past couple of years. This is ... (read more)

Very much agree. Some EAs knew SBF for almost a decade and plausibly interacted with him for hundreds of hours (including in non-professional settings which are usually more revealing of someone's character and personality).

Answer by Ben Kuhn

Is it likely that FTX/Alameda currently have >50% voting power over Anthropic?

Extremely unlikely. While Anthropic didn't disclose the valuation, it would be highly unusual for a company to take >50% dilution in a single funding round.

This paywalled article mentions a $4B valuation for the round:

Definitely! In this case I appear to have your email so reached out that way, but for anyone else who's reading this comment thread, Forum messages or the email address in the post both work as ways to get in touch!

In the "a case for hope" section, it looks like your example analysis assumes that the "AGI timeline" and "AI safety timeline" are independent random variables, since your equation describes sampling from them independently. Isn't that really unlikely to be true?

1
Esben Kran
And just to dive into some of these dynamics:
  • AI might help us develop more aligned AI, leading to a correlation between the two's incidence dates
  • The same correlation might happen as a result of new ways to look at AGI leading to more novel and innovative avenues in alignment research (though with earlier effect)
  • Progress in alignment is very likely to translate directly into AGI progress (see the Pragmatic AI Safety agenda, OpenAI, and DeepMind)
  • An actual AI takeover will diminish humanity's ability to come up with an alignment solution, though the TAI probably wants to solve the problem for next-gen AGI
  • Takeoff speeds will of course significantly affect these dynamics
Just off the top of my head; I'd be curious to hear more.
1
Esben Kran
Indeed, I think there are a lot of dynamics that might arise in the combination of these two timelines. This is also one of the reasons why it is used solely for illustrating the point that we might be able to calculate this if we can model these dynamics. We hope to use our reports as a way to dive deeper and deeper into the matter of how to properly analyze our progress. A next post will have more details in relation to this.
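To make the independence point above concrete, here is a minimal Monte Carlo sketch (not from the original post; the lognormal parameters and the correlation value are made up purely for illustration). It shows that the estimated P(alignment is solved before AGI arrives) shifts when the two timelines are sampled with positive correlation rather than independently:

```python
# Minimal sketch with made-up parameters: compare P(safety timeline < AGI timeline)
# when the two timelines are sampled independently vs. with positive correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical lognormal timelines, in years from now (illustrative only).
mu_agi, sigma_agi = np.log(30), 0.5    # AGI arrival
mu_safe, sigma_safe = np.log(35), 0.6  # alignment "solved"

# Independent sampling.
agi_ind = rng.lognormal(mu_agi, sigma_agi, n)
safe_ind = rng.lognormal(mu_safe, sigma_safe, n)

# Correlated sampling: corr(z1, z2) = rho.
rho = 0.7
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
agi_corr = np.exp(mu_agi + sigma_agi * z1)
safe_corr = np.exp(mu_safe + sigma_safe * z2)

print("P(safety before AGI), independent:", (safe_ind < agi_ind).mean())
print("P(safety before AGI), correlated: ", (safe_corr < agi_corr).mean())
```

The direction and size of the shift depend on the assumed distributions; the point is only that the independence assumption is doing real work in the calculation.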

Can someone clarify whether I'm interpreting this paragraph correctly?

Effective Ventures (EV) is a federation of organisations and projects working to have a large positive impact in the world. EV was previously known as the Centre for Effective Altruism but the board decided to change the name to avoid confusion with the organisation within EV that goes by the same name.

I think what this means is that the CEA board is drawing a distinction between the CEA legal entity / umbrella organization (which is becoming EV) and the public-facing CEA brand (whi... (read more)

Yep, your interpretation is correct. We didn't want to make a big deal about this rebrand because for most people the associations they have with "CEA" are for the organization which is still called CEA. (But over the years, and especially as the legal entity has grown and taken on more projects, we've noticed a number of times where the ambiguity between the two has been somewhat frustrating.) Sorry for the confusion!

Sorry that was confusing! I was attempting to distinguish:

  1. Direct epistemic problems: money causes well-intentioned people to have motivated cognition etc. (the downside flagged by the "optics and epistemics" post)
  2. Indirect epistemic problems as a result of the system's info processing being blocked by not-well-intentioned people

I will try to think of a better title!

4
Habryka [Deactivated]
Ah, yes, the new title seems better. Thanks for writing this!

Since someone just commented privately to me with this confusion, I will state for the record that this commenter seems likely to be impersonating Matt Yglesias, who already has an EA Forum account with the username "Matthew Yglesias." (EDIT: apparently it actually is the same Matt with a different account!)

(Object-level response: I endorse Larks' reply.)

5
JP Addison🔸
This is not true, just a duplicate account issue.

Please note that the Twitter thread linked in the first paragraph starts with a highly factually inaccurate claim. In reality, at EAGxBoston this year there were five talks on global health, six on animal welfare, and four talks and one panel on AI (alignment plus policy). Methodology: I collected these numbers by filtering the official conference app agenda by topic and event type.

I think it's unfortunate that the original tweet got a lot of retweets / quote-tweets and Jeff hasn't made a correction. (There is a reply saying "I should add, friend is not 10... (read more)

3
Jeffrey Mason
Here's a corrective: https://twitter.com/JeffJMason/status/1511663114701484035?t=MoQZV653AZ_K1f2-WVJl7g&s=19 Unfortunately I can't do anything about where it shows up. Elon needs to get working on that edit button.
2
Nathan Young
Sorry I've made an edit to this effect. An oversight on my part.  This is exactly why I wish I could allow others to edit my posts - then you could edit it too!

This must be somewhat true but FWIW, I think it's probably less true than most outsiders would expect—I don't spend very much personal time on in-country stuff (because I have coworkers who are local to those countries who will do a much better job than I could) and so end up having pretty limited (and random/biased) context on what's going on!

2
Elizabeth
I think that still ends up net good if your biases are decorrelated from existing grantmaker biases? 

IIRC a lot of people liked this post at the time, but I don't think the critiques stood up well. Looking back 7 years later, I think the critique that Jacob Steinhardt wrote in response (which is not on the EA forum for some reason?) did a much better job of identifying more real and persistent problems:

  • Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities.
  • Over-confident claims coupled with insufficient background research.
  • Over-reliance on a small set o
... (read more)

Interesting. It sounds like you're saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn't realize that.

In addition to the feedback thing, this seems like a generally very bad dynamic—for instance, in your example, regardless of whether she gets feedback, Sally has now more or less wasted years of graduate schooling.

3
John_Maxwell
I don't know that. But it seems like a possibility. [EDIT: Sally's story was inspired by cases I'm familiar with, although it's not an exact match.] And even if it isn't happening very much, it seems like we might want it to happen -- we might prefer EAs branch out and become specialists in a diverse set of areas instead of the movement being an army of generalists.

Top and (sustainably) fast-growing (over a long period of time) are roughly synonymous, but fast-growing is the upstream thing that causes it to be a good learning experience.

Note that billzito didn't specify, but the important number here is userbase or revenue growth, not headcount growth; the former causes the latter, but not vice versa, and rapid headcount growth without corresponding userbase growth is very bad.

People definitely can see rapidly increasing responsibility in less-fast-growing startups, but it's more likely to be because they're over-hir... (read more)

It sounds like you interpreted me as saying that rejecting resumes without feedback doesn't make people sad. I'm not saying that—I agree that it makes people sad (although on a per-person basis it does make people much less sad than rejecting them without feedback during later stages, which is what those points were in support of—having accidentally rejected people without feedback at many different steps, I'm speaking from experience here).

However, my main point is that providing feedback on resume applications is much more costly to the organization, not... (read more)

I think part of our disagreement might be that I see Wave as being in a different situation relative to some other EA organizations. There are a lot of software engineer jobs out there, and I'm guessing most people who are rejected by Wave would be fairly happy at some other software engineer job.

By contrast, I could imagine that stories like the following happening fairly frequently with other EA jobs:

  • Sally discovers the 80K website and gets excited about effective altruism. She spends hours reading the site and planning her career.

  • Sally converges

... (read more)

Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume. 

I'm a bit confused about the phrasing here because it seems to imply that "Alice's application is read by a human" and "if Alice is rejected it's not just because of her resume" are equivalent, but many resume screen processes (including eg Wave's) involve humans reading all resumes and then rejecting people (just) because of them.

I mean the entire initial application (including the screening questions) is read, not just the resume, and the resume plays a relatively small part of this decision, as (we currently believe) resumes have low predictive validity for our roles. 

I'm unfamiliar with EA orgs' interview processes, so I'm not sure whether you're talking about lack of feedback when someone fails an interview, or when someone's application is rejected before doing any interviews. It's really important to differentiate these because providing feedback on someone's initial application is a massively harder problem:

  • There are many more applicants (Wave rejects over 50% of applications without speaking to them and this is based on a relatively loose filter)
  • Candidates haven't interacted with a human yet, so are more l
... (read more)
7
John_Maxwell
Are you speaking from experience on these points? They don't seem obvious to me.

In my experience, having my resume go down a black hole for a job I really want is incredibly demoralizing. I'd much rather get a bit of general feedback on where it needs to be stronger. And since I'm getting rejected at the resume stage either way, it seems like the "frustration that my resume underrates my skills" factor would be constant.

I'm also wondering if there is a measurement issue here -- giving feedback could greatly increase the probability that you will learn that a candidate is frustrated, conditional on them feeling frustrated.

It's interesting that the author of the original post works as a therapist, i.e. someone paid to hear private thoughts we don't share with others. This issue could be much bigger than EA hiring managers realize.
7
Linch
This is a good point; my comment exchange with Peter was referring to people who did at least one interview or short work trial (2 hours), rather than people rejected at the initial step. Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume.

[1] It used to be Peter and Marcus, and then as we've expanded, researchers on the relevant team, and now we have a dedicated hiring specialist ops person who (among other duties) reviews the initial application.

I don't have research management experience in particular, but I have a lot of knowledge work (in particular software engineering) management experience.

IMO, giving insufficient positive feedback is a common, and damaging,  blind spot for managers, especially those (like you and me) who expect their reports to derive most of their motivation from being intrinsically excited about their end goal. If unaddressed, it can easily lead to your reports feeling demotivated and like their work is pointless/terrible even when it's mostly good.

People use feedbac... (read more)

6
NunoSempere
Good point, thanks.

Looks like if this doesn't work out, I should at least update my surname...

8
EdoArad
I can't wait for a new Bennian paradigm shift

I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing "person X is skeptical of MIRI" in the "cons" column) and this parent comment ("imagine I pointed a gun to your head and... offer you to give you additional information;" "never stopping at [person X thinks that p]"). I'm not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people's top-line views on questions where there's substantial di... (read more)

9
Max_Daniel
I think I perceive less of a difference between the examples we've been discussing, but after reading your reply I'm also less sure if and where we disagree significantly.

I read your previous claim as essentially saying "it would always be bad to include the information that some person X is skeptical about MIRI when making the decision whether to give MIRI a $5M grant, unless you understand more details about why X has this view". I still think this view basically commits you to refusing to see information of that type in the COVID policy thought experiment. This is essentially for the reasons (i)-(iii) I listed above: I think that in practice it will be too costly to understand the views of each such person X in more detail. (But usually it will be worth it to do this for some people, for instance for the reason spelled out in your toy model. As I said: I do think it will often be even more valuable to understand someone's specific reasons for having a belief.)

Instead, I suspect you will need to focus on the few highest-priority cases, and in the end you'll end up with people X1,…,Xl whose views you understand in great detail, people Y1,…,Ym where your understanding stops at other fairly high-level/top-line views (e.g. maybe you know what they think about "will AGI be developed this century?" but not much about why), and people Z1,…,Zn of whom you only know the top-line view of how much funding they'd want to give to MIRI.

(Note that I don't think this is hypothetical. My impression is that there are in fact long-standing disagreements about MIRI's work that can't be fully resolved or even broken down into very precise subclaims/cruxes, despite many people having spent probably hundreds of hours on this. For instance, in the writeups to their first grants to MIRI, Open Phil remark that "We found MIRI’s work especially difficult to evaluate", and the most recent grant amount was set by a committee that "average[s] individuals’ allocations". See also this

if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views

Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. "person X doesn't like MIRI" in the "cons" column of your spreadsheet seems foolish and wrongheaded.

Framing it as "taking others' views into account" or "ignoring others' views" is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.

If 100 forecasters (who I roughly respect) look at the likelihood of a future event and think it's ~10% likely, and I look at the same question and think it's ~33% likely, I think I will be incorrect in  my private use of reason for my all-things-considered-view to not update  somewhat downwards from 33%. 

I think this continues to be true even if we all in theory have access to the same public evidence, etc. 

Now, it does depend a bit on the context of what this information is for. For example if I'm asked to give my perspective on a gro... (read more)
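For readers who want to see what such an update could look like mechanically, here is a minimal sketch using one common aggregation rule (averaging log-odds, i.e. a geometric mean of odds, with extra weight on one's own inside view). This is only an illustration of the general idea, not necessarily the rule the commenter has in mind, and the 5x self-weight is an arbitrary choice:

```python
# Minimal sketch of one possible aggregation rule (mean log-odds), illustrative only.
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

others = np.full(100, 0.10)  # 100 respected forecasters at ~10%
mine = 0.33                  # my own inside-view estimate

# Pool by averaging log-odds, giving my own view some extra weight (arbitrarily, 5x).
probs = np.append(others, mine)
weights = np.append(np.ones(len(others)), 5.0)
pooled = inv_logit(np.average(logit(probs), weights=weights))

print(f"pooled estimate: {pooled:.3f}")  # well below 0.33, close to 0.10
```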

I think we disagree. I'm not sure why you think that even for decisions with large effects one should only or mostly take into account specific facts or arguments, and am curious about your reasoning here.

I do think it will often be even more valuable to understand someone's specific reasons for having a belief. However, (i) in complex domains achieving a full understanding would be a lot of work, (ii) people usually have incomplete insight into the specific reasons for why they hold a certain belief themselves and instead might appeal to intuition, (iii) ... (read more)

Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.
...
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.

I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:

  • Moral uncertainty, giv
... (read more)

I'm somewhat sympathetic to the frustration you express. However, I suspect the optimal response isn't to be more or less epistemically modest indiscriminately. Instead, I suspect the optimal policy is something like:

  • Always be clear and explicit to what extent a view you're communicating involves deference to others.
  • Depending on the purpose of a conversation, prioritize (possibly at different stages) either object-level discussions that ignore others' views or forming an overall judgment that includes epistemic deference.
    • E.g. when the purpose is to learn,
... (read more)

Lots of emphasis on avoiding accidentally doing harm by being uninformed

I gave a talk about this, so I consider myself to be one of the repeaters of that message. But I also think I always tried to add a lot of caveats, like "you should take this advice less seriously if you're the type of person who listens to advice like this" and similar. It's a bit hard to calibrate, but I'm definitely in favor of people trying new projects, even at the risk of causing mild accidental harm, and in fact I think that's something that has helped me grow in the past.

If you... (read more)

I think I probably agree with the general thrust of this comment, but disagree on various specifics.

'Intelligent people disagree with this' is a good reason against being too confident in one's opinion. At the very least, it should highlight there are opportunities to explore where the disagreement is coming from, which should hopefully help everyone to form better opinions.

I also don't feel like moral uncertainty is a good example of people deferring too much.

A different way to look at this might be that if 'good judgement' is something that lots of peopl... (read more)

That last paragraph is a good observation, and I don’t think it’s entirely coincidental. 80k has a few instances in their history of accidentally causing harm, which has led them (correctly) to be very conservative about it as an organisation.

The thing is, career advice and PR are two areas 80k is very involved in, and which have a particular likelihood of causing as much harm as good, due to bad advice or distorted messaging. Most decisions individual EAs make are not like this, and it’s a mistake if they treat 80k’s caution as a reflection of how cautious they should be. Or worse, act even more cautiously, reasoning that the combined intelligence of the 80k staff is greater than their own (likely true, but likely irrelevant).

See also answers here mentioning that EA feels "intellectually stale". A friend says he thinks a lot of impressive people have left the EA movement because of this :(

I feel bad, because I think maybe I was one of the first people to push the "avoid accidental harm" thing.

I haven't had the opportunity to see this play out over multiple years/companies, so I'm not super well-informed yet, but I think I should have called out this part of my original comment more:

Not to mention various high-impact roles at companies that don't involve formal management at all.

If people think management is their only path to success then sure, you'll end up with everyone trying to be good at management. But if instead of starting from "who fills the new manager role" you start from "how can <person X> have the most impact on the company"... (read more)

I had a hard time answering this and I finally realized that I think it's because it sort of assumes performance is one-dimensional. My experience has been quite far from that: the same engineer who does a crap job on one task can, with a few tweaks to their project queue or work style, crush it at something else. In fact, making that happen is one of the most important parts of my (and all managers') jobs at Wave—we spend a lot of time trying to route people to roles where they can be the most successful.

Similarly, management is also not one-dimensional: ... (read more)

4
Ben_West🔸
Thanks Ben. I like this answer, but I feel like every time I have seen people attempt to implement it they still end up facing a trade-off.

Consider moving someone from role r1 to role r2. I think you are saying that the person you choose for r2 should be the person you expect to be best at it, which will often be people who aren't particularly good at r1. This seems fine, except that r2 might be more desirable than r1. So now a) the people who are good at r1 feel upset that someone who was objectively performing worse than them got a more desirable position, and b) they respond by trying to learn/demonstrate r2-related skills rather than the r1 stuff they are good at.

You might say something like "we should try to make the r1 people happy with r1 so r2 isn't more desirable" which I agree is good, but is really hard to do successfully. An alternative solution is to include proficiency in r1 as part of the criteria for who gets position r2. This addresses (a) and (b) but results in r2 staff being less r2-skilled.

I'm curious if you disagree with this being a trade-off?

I'll let Lincoln add his as well, but here are a few things we do that I think are really helpful for this:

  1. We've found our bimonthly in-person "offsites" to be extremely important. For new hires, I often see their happiness and productivity increase a lot after their first retreat because it becomes easier and more fun for them to work with their coworkers.
  2. Having the right cadence of standing meetings (1-on-1s, team meetings, retrospectives, etc.) becomes much more important since issues are less likely to surface in "hallway" conversations.
  3. We try to make
... (read more)

Agree that if you put a lot of weight on the efficient market hypothesis, then starting a company looks bad and probably isn't worth it. Personally, I don't think markets are efficient enough for this to be a dominant consideration (see e.g. my response here for partial justification; not sure it's possible to give a convincing full justification since it seems like a pretty deep worldview divergence between us and the more modest-epistemology-focused wing of the EA movement).

2
Ben_West🔸
That makes sense, thanks!

2. For personal work, it's annoying, but not a huge bottleneck—my internet in Jijiga (used in Dan's article) was much worse than anywhere else I've been in Africa. (Ethiopia has a monopoly, state-run telecom that provides among the worst service in the world.) You do have to put in some effort to managing usage (e.g. tracking things that burn network via Little Snitch, caching docs offline, minimizing Docker image size), but it's not terrible.

It is a sufficient bottleneck to reading some blogs that I wrote a simple proxy to strip bloat from web pages while... (read more)

The main outcome metric we try to optimize is currently number of monthly active users, because our business has strong network effects. We can't share exact stats for various reasons, but I am allowed to say that we crossed 1m users in June, and our growth rates are sufficiently high that our current user base is substantially larger than that. We're currently growing more quickly than most well-known fintech companies of similar sizes that I know of.

On EA providing for-profit funding: hard to say. Considerations against:

  • Wave looks like a very good investment by non-EA standards, so additional funding from EAs wouldn't have affected our fundraising very much (not sure how much this generalizes to other companies)
  • At later stages, this is very capital-intensive, so probably wouldn't make sense except as a thing for eg Open Phil to do with its endowment
  • Founding successful companies requires putting a lot of weight on inside-view considerations, a trait that's not particularly compatible with typical EA ep
... (read more)

Cool! With the understanding that these aren't your opinions, I'm going to engage with them anyway bc I think they're interesting. I think for all four of these I agree that they directionally push toward for-profits being less good, but that people overestimate the magnitude of the effect.

For-profit entrepreneurship has built-in incentives that already cause many entrepreneurs to try and implement any promising opportunities. As a result, we'd expect it to be drastically less neglected, or at least drastically less neglected relative to nonprofit opportun

... (read more)

Same with eg OpenAI which got $1b in nonprofit commitments but still had to become (capped) for-profit in order to grow.

If you look at OpenAI's annual filings, it looks like the $1b did not materialize.

Hmm. This argument seems like it only works if there are no market failures (i.e., if for every idea it's possible to capture a decent fraction of the value created), and it seems like most nonprofits address some sort of market failure? (e.g. "people do not understand the benefits of vitamin-fortified food," "vaccination has strong positive externalities"...)

3
lincolnq
Yeah, that seems right to me, and is a good model that predicts the existing nonprofit startup ideas! My point is that it seems like a very narrow slice of all value-producing ideas.

I agree with most of what Lincoln said and would also plug Why and how to start a for-profit company serving emerging markets as material on this, if you haven't read it yet :)

Can you elaborate on the "various reasons" that people argue for-profit entrepreneurship is less promising than nonprofit entrepreneurship or provide any pointers on reading material? I haven't run across these arguments.

Thank you both for your thoughtful answers.

To clarify, I don't have a strong opinion on this comparison myself, and would love to hear more points of view on this. Sadly I'm not aware of any reading materials on this topic, but have heard the following arguments made in one on one conversations:

  1. For-profit entrepreneurship has built-in incentives that already cause many entrepreneurs to try and implement any promising opportunities. As a result, we'd expect it to be drastically less neglected, or at least drastically less neglected relative to nonprofit opp
... (read more)

Great questions!

What are common failure cases/traps to avoid

I don't know about "most common" as I think it varies by company, but the worst one for me was allowing myself to get distracted by problems that were more rewarding in the short term, but less important or leveraged. I wrote a bit about this in Attention is your scarcest resource.

How much should I be directly coding vs "architecting" vs process management

Related to the above, you should never be coding anything that's even remotely urgent (because it'll distract you too much from non-coding probl... (read more)

Sorry for the minimalist website :) A couple clarifications:

  • We indeed split our businesses into Sendwave (international money transfer) and Wave (mobile money). Wave.com is the website for the latter.
  • The latter currently operates only in Senegal and Cote d'Ivoire (stay tuned though).
  • In addition to charging no fees for deposits or withdrawals, we charge a flat 1% to send. All in, I believe we're about 80% cheaper than Orange Money for typical transaction sizes.
  • We don't provide services to Orange—if you saw the logo on the website it's just because we let ou
... (read more)

Hey Marc, cool that you're thinking about this!

I work for Wave, we build mobile money systems in Senegal, Cote d'Ivoire, and hopefully soon other countries. Here are some thoughts on these interventions based on Wave's experience:

Interventions 1-2 (creating accounts): I think for most people that don't use mobile money, in countries where mobile money is available, "not having an account" is not the main blocker. It's more likely to be something like

  • They don't live near enough to an agent
  • Mobile money charges fees that are too high given the typical amounts
... (read more)
1
MarcSerna
Thank you for this extremely informative response. This was way beyond my expectations!

Some of your "conservative" parameter estimates are surprising to me.

For instance, your conservative estimate of the effect of diminishing marginal returns is 2% per year or 10% over 5y. If (say) the total pool of EA-aligned funds grows by 50% over the next 5 years due to additional donors joining—which seems extremely plausible—it seems like that should make the marginal opportunity much more than 10% less good.

You also wrote

we’ll stick with 5% as a conservative estimate for real expected returns on index fund investing

but used 7% as your conservative estimate in the spreadsheet and in the bottom-line estimates you reported.
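As a rough illustration of how the two parameters discussed above interact, here is a bare-bones sketch of a simple "invest for 5 years, then give" comparison. It only multiplies investment compounding against per-year decay in marginal cost-effectiveness; the post's actual model includes more considerations than these two parameters, so this is illustrative arithmetic, not the post's calculation:

```python
# Bare-bones sketch: value of giving in `years` years relative to giving $1 now,
# combining compounding returns with marginal opportunities getting `decay`
# less cost-effective per year. Illustrative only.
years = 5

def relative_value(real_return, decay):
    return (1 + real_return) ** years * (1 - decay) ** years

for real_return in (0.05, 0.07):       # the two return estimates discussed
    for decay in (0.02, 0.04):         # the original and updated decay estimates
        print(f"return={real_return:.0%}, decay={decay:.0%}: "
              f"{relative_value(real_return, decay):.2f}x")
```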

5
Sjir Hoeijmakers🔸
I'm not sure whether it would, considering, for example, the large room for funding that GiveWell opportunities have had for multiple years (and will likely keep having) and their seemingly hardly diminishing cost-effectiveness on the margin (though data are obviously noisy here/there are other explanations). But I do take your point that this is not a very conservative estimate. I'll update them from 1%/2% to 2%/4%, thank you!

See the rest of the paragraph you refer to: the 5% is my conservative estimate for index investing, the 7% for investing more generally.

I'm looking forward to CEA having a great 2020 under hopefully much more stable and certain leadership!

I’d welcome feedback on these plans via this form or in the comments, especially if you think there’s something that we’re missing or could be doing better.

This is weakly held since I don't have any context on what's going on internally with CEA right now.

That said: of the items listed in your summary of goals, it looks like about 80% of them involve inward-facing initiatives (hiring, spinoffs, process improvements, str... (read more)

4
JP Addison🔸
Adding a little bit to Max's comment — when I count the number of our staff working on each section, I get ~half of staff focused on the external-facing goals. And that's on top of the business as usual work, which is largely external facing. I was one of the people pushing for more object level work this quarter, but my feeling of the distance between what it was and what I wanted it to be was not as high as it might seem from a simple count of the number of goals.[1]

[1] Which, to be clear, you had no way of knowing about, and you explicitly called out that it was weakly-held.

I think this is a really important point, and one I’ve been thinking a lot about over the past month. As you say, I do think that having a strategy is an important starting point, but I don’t want us to get stuck too meta. We’re still developing our strategy, but this quarter we’re planning to focus more on object-level work.  Hopefully we can share more about strategy and object-level work in the future. 

That said, I also think that we’ve made a lot of object-level progress in the last year, and we plan to make more this year, so we might have u

... (read more)

Hmm. You're betting based on whether the fatalities exceed the mean of Justin's implied prior, but the prior is really heavy-tailed, so it's not actually clear that your bet is positive EV for him. (e.g., "1:1 odds that you're off by an order of magnitude" would be a terrible bet for Justin because he has 2/3 credence that there will be no pandemic at all).

Justin's credence for P(a particular person gets it | it goes world scale pandemic) should also be heavy-tailed, since the spread of infections is a preferential attac... (read more)
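Here is a minimal numeric sketch of the heavy-tail point, with entirely made-up numbers rather than Justin's actual fermi: when a prior puts 2/3 of its mass near zero and the rest in a fat lognormal tail, the prior's mean sits far above its median, so even the person holding that prior assigns well under 50% to the outcome exceeding the mean:

```python
# Illustrative prior (made-up parameters): 2/3 mass near zero, 1/3 in a fat lognormal tail.
# Shows that P(outcome > prior mean) is small even under that same prior.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

no_pandemic = rng.random(n) < 2 / 3                          # 2/3 credence: ~no fatalities
tail = rng.lognormal(mean=np.log(1e6), sigma=2.0, size=n)    # hypothetical fat-tailed branch
fatalities = np.where(no_pandemic, 0.0, tail)

prior_mean = fatalities.mean()
print(f"prior mean: {prior_mean:,.0f}")
print("P(fatalities > prior mean):", np.mean(fatalities > prior_mean))
# Under this made-up prior, a 1:1 bet on "fatalities exceed the prior mean" would be
# negative EV for the holder of the prior, since that probability is well below 50%.
```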

3
Sean_o_h
This seems fair. I suggested the bet quite quickly. Without having time to work through the math of the bet, I suggested something that felt on the conservative side from the point of view of my beliefs. The more I think about it, (a) the more confident I am in my beliefs and (b) the more I feel it was not as generous as I originally thought*. I have a personal liking for binary bets rather than proportional payoffs.

As a small concession in light of the points raised, I'd be happy to offer to modify the terms retroactively to make them more favourable to Justin, offering either of the following:
(i) Doubling the odds against me to 10:1 odds (rather than 5:1) on the original claim (at least an order of magnitude lower than his fermi). So his £50 would get £500 of mine. OR
(ii) 5:1 on at least 1.5 orders of magnitude (50x) lower than his fermi (rather than 10x).
(My intuition is that (ii) is a better deal than (i) but I haven't worked it through)

(*i.e. at time of bet - I think the likelihood of this being a severe global pandemic is now diminishing further in my mind)

Oops. I searched for the title of the link before posting, but didn't read the titles carefully enough to find duplicates that edited the title. Should have put more weight on my prior that this would already have been posted :)

I'm guessing that they assumed we were exaggerating the numbers in order to make them more interested in working with us. The fact that you're so ready to call anyone who lies about user numbers a "scammer" may itself be part of the cultural difference here :)

6
Aaron Gertler 🔸
Oh, I see that I was confused: I was thinking of a "user number" and a "transaction number" as things related to Wave's bank account -- as though you were trying to share information for something like direct deposit and being accused of lying. The quote makes much more sense if it's "number of users" and "number of transactions".

Examples (mostly from Senegal since that's where I have the most experience, caveat that these are generalizations, all of them could be confounded by other stuff, the world is complicated, etc.):

  • Most Senegalese companies seem to place a much stronger emphasis on bureaucracy and paperwork.
  • When interacting with potential business partners in East Africa, we eventually realized that when we told them our user/transaction numbers, they often assumed that we were lying unless the claim was endorsed by someone they had a trusted connection to.
  • In the US, we
... (read more)
2
Aaron Gertler 🔸
This is confusing. Did they just think you were scammers, not a real business at all? Or did they think of you as a business that was suspiciously quick to share this information, and trying to... I don't know, make a power play? Something else?
6
Raemon
The book The Culture Map explores these sorts of problems, comparing many cultures' norms and advising on how to bridge the differences. Some advice it gives for this particular example (at least in several 'strong hierarchy' cultures) is that instead of a higher-ranking person asking direct questions of lower-ranked people, the boss can ask a team of lower-ranked people to work together to submit a proposal, where "who exactly criticized which thing" is a bit obfuscated.

Broadly agree, but:

You might end up making more impact if you started a startup in your own country, and just earned-to-give your earnings to GiveWell / EA organizations. This is because I think there are very few startups that benefit the poorest of the poor, since the poorest people don't even have access to basic needs.

Can't you just provide people basic needs then though? Many of Wave's clients have no smartphone and can't read. Low-cost Android phones (e.g. Tecno Mobile) probably provided a lot of value to people who previously did... (read more)

Haha this is probably the first time someone said that about one of my essays—I’m flattered, and excited to potentially write follow ups!

Is there anything in particular you’re curious about? Sometimes it’s hard to be sure of what’s novel vs obvious/common knowledge.

3
Aaron Gertler 🔸
I'd be interested in hearing about challenges in keeping employees around, both on the local and international side. If there were cases where employees from the developed world quit after trying to live in Africa, what seemed like the major factors behind their not wanting to continue? If there were cases where local employees didn't fit in well, what happened? What has Wave done to improve comfort/productivity for both kinds of employee?
3
Afrothunder
Hi Ben! I second this comment; I would love to learn more from your experience. In particular, I would love to learn more about how you have balanced working in Silicon Valley and implementation contexts during different stages of your venture, as well as more about some of the initial challenges you faced with developing/launching the product that are specific to the start-up space in development context. I am personally also very interested in this kind of career trajectory!
I imagine that a large fraction of EAs expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happens to overlap heavily with the needs of EA organizations. Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one
... (read more)
1
Michael_Wiebe
To add to Ben's argument, uncertainty about which cause is the best will rationalize diversifying across multiple causes. If we use confidence intervals instead of point estimates, it's plausible that the top causes will have overlapping confidence intervals.

I agree with most of your comment.

>Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.

That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that people who get evaluated as average by EA orgs are 2 or more standard deviations less productive, which seems to imply that they're pretty replaceable.

2
Raemon
(edit: whoops, responded to wrong comment)

If you're really worried about value drift, you might be able to use a bank account that requires two signatures to withdraw funds, and add a second signatory whom you trust to enforce your precommitment to donate?

I haven't actually tried to do this, but I know businesses sometimes have this type of control on their accounts, and it might be available to consumers too.

2
Benjamin Ikuta
Interesting idea! Have you looked into this since?

Whoops, sorry about the quotes--I was writing quickly and intended them to denote that I was using "solve" in an imprecise way, not attributing the word to you, but that is obviously not how it reads. Edited.

These theoretical claims seem quite weak/incomplete.

  • In practice, autocrats' time horizons are highly finite, so I don't think a theoretical mutual-cooperation equilibrium is very relevant. (At minimum, the autocrat will eventually die.)
  • All your suggestions about oligarchy improving the tyranny of the majority / collective action problems only apply to actions that are in the oligarchy's interests. You haven't made any case that the important instances of these problems are in an oligarchy's interests to solve, and it doesn't seem likely to me.
3
cole_haus
Yes, I agree they're very incomplete--as advertised. I also think the original claims they're responding to are pretty incomplete.

I. I agree that time horizons are finite. If you're taking that as meaning that the defect/defect equilibrium reigns due to backward induction on a fixed number of games, that seems much too strong to me. Both empirically and theoretically, cooperation becomes much more plausible in indefinitely iterated games. Does the single shot game that Acemoglu and Robinson implicitly describe really seem like a better description of the situation to you? It seems very clear to me that it's not a good fit. If I had to choose between a single shot game and an iterated game as a model, I'd choose the iterated game every time (and maybe just set the discount rate more aggressively as needed--as the post points out, we can interpret the discount rate as having to do with the probability of deposition). Maybe the crux here is the average tenure of autocrats and who we're thinking of when we use the term?

II. (I don't say "solve" anywhere in the post so I think the quote marks there are a bit misleading.) I agree that to come up with something closer to a conclusion, you'd have to do something like analyze the weighted value of each of these structural factors. Even in the absence of such an analysis, I think getting a fuller list of the structural advantages and disadvantages gets us closer to the truth than a one-sided list. Also, if we accept the claim that Acemoglu and Robinson's empirical evidence is weak, then the fact that I haven't presented any evidence on the real-world importance of these theoretical mechanisms becomes a bit less troubling. It means there's something closer to symmetry in the absence of good evidence bearing on the relative importance of structural advantages and disadvantages in each type of society. My intuition is that majoritarian tyrannies and collective action problems are huge, pervasive problems in the contemp
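For readers unfamiliar with the repeated-games point above, here is a minimal sketch of the standard grim-trigger condition it alludes to, with illustrative payoffs (not taken from the post or from Acemoglu and Robinson). The continuation probability delta can be read, as the comment suggests, as the autocrat's per-period chance of staying in power:

```python
# Minimal sketch of the standard condition for sustaining cooperation in an
# indefinitely repeated prisoner's dilemma under grim trigger. Payoffs are illustrative.
T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation, reward, punishment, sucker

def cooperation_sustainable(delta):
    """Cooperate-forever payoff R/(1-delta) must beat defecting once and then
    being punished forever: T + delta*P/(1-delta)."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

threshold = (T - R) / (T - P)  # algebraic rearrangement of the same inequality
print(f"cooperation sustainable iff delta >= {threshold:.2f}")
for delta in (0.3, 0.5, 0.7):  # delta ~ per-period probability of not being deposed
    print(delta, cooperation_sustainable(delta))
```

With these payoffs the threshold is 0.5, so cooperation is only an equilibrium when the expected tenure is long enough, which is why the average tenure of autocrats matters for the crux named above.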

What's the shift you think it would imply in animal advocacy?

7
zdgroff
As I've been doing research this summer, I've become a bit more tentative and wary of acting like we know much, but my general intuition is that (a) our focus should not be on saving animals now but on securing whatever changes save future animals, so ethical changes and institutional changes; (b) I think institutional changes are the most promising avenue for this, and the question is which institutional changes last longest; (c) we should look for path dependencies. It's unclear to me what advocacy changes this means, but I think it makes the case for, e.g., the Nonhuman Rights Project or circus bans stronger than they are in the short term. I think this is a crucial area of research though. For path dependencies, the biggest one right now I think is whether clean and plant-based meat succeed. The shift from longtermism here I think is that rather than trying to get products to market the fastest, we should ask what in general makes industries most likely to succeed or fail and just optimize for the probability of success. As an example, this makes me inclined to favor clean meat companies supporting regulations and transparency.