Interesting. It sounds like you're saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn't realize that.
In addition to the feedback thing, this seems like a generally very bad dynamic—for instance, in your example, regardless of whether she gets feedback, Sally has now more or less wasted years of graduate schooling.
"Top" and "sustainably fast-growing over a long period" are roughly synonymous, but fast growth is the upstream thing that causes a startup to be a good learning experience.
Note that billzito didn't specify, but the important number here is userbase or revenue growth, not headcount growth; the former causes the latter, but not vice versa, and rapid headcount growth without corresponding userbase growth is very bad.
People definitely can see rapidly increasing responsibility in less-fast-growing startups, but it's more likely to be because they're over-hir... (read more)
It sounds like you interpreted me as saying that rejecting resumes without feedback doesn't make people sad. I'm not saying that—I agree that it makes people sad (although on a per-person basis it does make people much less sad than rejecting them without feedback during later stages, which is what those points were in support of—having accidentally rejected people without feedback at many different steps, I'm speaking from experience here).
However, my main point is that providing feedback on resume applications is much more costly to the organization, not... (read more)
I think part of our disagreement might be that I see Wave as being in a different situation relative to some other EA organizations. There are a lot of software engineer jobs out there, and I'm guessing most people who are rejected by Wave would be fairly happy at some other software engineer job.
By contrast, I could imagine that stories like the following happening fairly frequently with other EA jobs:
Sally discovers the 80K website and gets excited about effective altruism. She spends hours reading the site and planning her career.
Note that at least for Rethink Priorities, a human reads through all applications; nobody is rejected just because of their resume.
I'm a bit confused about the phrasing here because it seems to imply that "Alice's application is read by a human" and "if Alice is rejected it's not just because of her resume" are equivalent, but many resume screen processes (including e.g. Wave's) involve humans reading all resumes and then rejecting people (just) because of them.
I mean the entire initial application (including the screening questions) is read, not just the resume, and the resume plays a relatively small part in this decision, as (we currently believe) resumes have low predictive validity for our roles.
I'm unfamiliar with EA orgs' interview processes, so I'm not sure whether you're talking about lack of feedback when someone fails an interview, or when someone's application is rejected before doing any interviews. It's really important to differentiate these, because providing feedback on someone's initial application is a massively harder problem:
I don't have research management experience in particular, but I have a lot of knowledge work (in particular software engineering) management experience.
IMO, giving insufficient positive feedback is a common, and damaging, blind spot for managers, especially those (like you and me) who expect their reports to derive most of their motivation from being intrinsically excited about their end goal. If unaddressed, it can easily lead to your reports feeling demotivated and like their work is pointless/terrible even when it's mostly good.
People use feedbac... (read more)
Looks like if this doesn't work out, I should at least update my surname...
I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing "person X is skeptical of MIRI" in the "cons" column) and this parent comment ("imagine I pointed a gun to your head and... offer you to give you additional information;" "never stopping at [person X thinks that p]"). I'm not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people's top-line views on questions where there's substantial di... (read more)
if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views
Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. "person X doesn't like MIRI" in the "cons" column of your spreadsheet seems foolish and wrongheaded.
Framing it as "taking others' views into account" or "ignoring others' views" is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.
If 100 forecasters (who I roughly respect) look at the likelihood of a future event and think it's ~10% likely, and I look at the same question and think it's ~33% likely, I think I will be incorrect in my private use of reason for my all-things-considered-view to not update somewhat downwards from 33%. I think this continues to be true even if we all in theory have access to the same public evidence, etc. Now, it does depend a bit on the context of what this information is for. For example if I'm asked to give my perspective on a gro... (read more)
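The kind of update described above can be sketched quantitatively. One common approach (an assumption for illustration; the comment doesn't specify a pooling rule, and the weights here are arbitrary) is to average forecasts in log-odds space, weighting the crowd more heavily than yourself:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

# Illustrative numbers from the comment: my forecast vs. the crowd's.
mine, crowd = 0.33, 0.10
# How much to defer is a judgment call; these weights are assumptions.
my_weight, crowd_weight = 1.0, 3.0

pooled = sigmoid(
    (my_weight * logit(mine) + crowd_weight * logit(crowd))
    / (my_weight + crowd_weight)
)
print(round(pooled, 3))  # lands between 0.10 and 0.33, closer to the crowd
```

The exact result depends entirely on the chosen weights; the point is just that the all-things-considered view ends up somewhere below 33%.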
I think we disagree. I'm not sure why you think that even for decisions with large effects one should only or mostly take into account specific facts or arguments, and am curious about your reasoning here.
I do think it will often be even more valuable to understand someone's specific reasons for having a belief. However, (i) in complex domains achieving a full understanding would be a lot of work, (ii) people usually have incomplete insight into the specific reasons for why they hold a certain belief themselves and instead might appeal to intuition, (iii) ... (read more)
> Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to. [...] My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.
I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:
I'm somewhat sympathetic to the frustration you express. However, I suspect the optimal response isn't to be more or less epistemically modest indiscriminately. Instead, I suspect the optimal policy is something like:
Lots of emphasis on avoiding accidentally doing harm by being uninformed
I gave a talk about this, so I consider myself to be one of the repeaters of that message. But I also think I always tried to add a lot of caveats, like "you should take this advice less seriously if you're the type of person who listens to advice like this" and similar. It's a bit hard to calibrate, but I'm definitely in favor of people trying new projects, even at the risk of causing mild accidental harm, and in fact I think that's something that has helped me grow in the past.
If you... (read more)
I think I probably agree with the general thrust of this comment, but disagree on various specifics.
'Intelligent people disagree with this' is a good reason against being too confident in one's opinion. At the very least, it should highlight there are opportunities to explore where the disagreement is coming from, which should hopefully help everyone to form better opinions.
I also don't feel like moral uncertainty is a good example of people deferring too much.
A different way to look at this might be that if 'good judgement' is something that lots of peopl... (read more)
That last paragraph is a good observation, and I don’t think it’s entirely coincidental. 80k has a few instances in their history of accidentally causing harm, which has led them (correctly) to be very conservative about it as an organisation.
The thing is, career advice and PR are two areas 80k is deeply involved in, and both carry a real risk of causing as much harm as good through bad advice or distorted messaging. Most decisions individual EAs make are not like this, and it’s a mistake if they treat 80k’s caution as a reflection of how cautious they should be. Or worse, act even more cautiously, reasoning that the combined intelligence of the 80k staff is greater than their own (likely true, but likely irrelevant).
See also answers here mentioning that EA feels "intellectually stale". A friend says he thinks a lot of impressive people have left the EA movement because of this :(
I feel bad, because I think maybe I was one of the first people to push the "avoid accidental harm" thing.
I haven't had the opportunity to see this play out over multiple years/companies, so I'm not super well-informed yet, but I think I should have called out this part of my original comment more:
Not to mention various high-impact roles at companies that don't involve formal management at all.
If people think management is their only path to success then sure, you'll end up with everyone trying to be good at management. But if instead of starting from "who fills the new manager role" you start from "how can <person X> have the most impact on the company"... (read more)
I had a hard time answering this and I finally realized that I think it's because it sort of assumes performance is one-dimensional. My experience has been quite far from that: the same engineer who does a crap job on one task can, with a few tweaks to their project queue or work style, crush it at something else. In fact, making that happen is one of the most important parts of my (and all managers') jobs at Wave—we spend a lot of time trying to route people to roles where they can be the most successful.
Similarly, management is also not one-dimensional: ... (read more)
I'll let Lincoln add his as well, but here are a few things we do that I think are really helpful for this:
Agree that if you put a lot of weight on the efficient market hypothesis, then starting a company looks bad and probably isn't worth it. Personally, I don't think markets are efficient enough for this to be a dominant consideration (see e.g. my response here for partial justification; not sure it's possible to give a convincing full justification since it seems like a pretty deep worldview divergence between us and the more modest-epistemology-focused wing of the EA movement).
2. For personal work, it's annoying, but not a huge bottleneck—my internet in Jijiga (used in Dan's article) was much worse than anywhere else I've been in Africa. (Ethiopia has a state-run monopoly telecom that provides some of the worst service in the world.) You do have to put some effort into managing usage (e.g. tracking things that burn network via Little Snitch, caching docs offline, minimizing Docker image size), but it's not terrible.
It is a sufficient bottleneck to reading some blogs that I wrote a simple proxy to strip bloat from web pages while... (read more)
The main outcome metric we try to optimize is currently number of monthly active users, because our business has strong network effects. We can't share exact stats for various reasons, but I am allowed to say that we crossed 1m users in June, and our growth rates are sufficiently high that our current user base is substantially larger than that. We're currently growing more quickly than most well-known fintech companies of similar sizes that I know of.
On EA providing for-profit funding: hard to say. Considerations against:
Cool! With the understanding that these aren't your opinions, I'm going to engage with them anyway bc I think they're interesting. I think for all four of these I agree that they directionally push toward for-profits being less good, but that people overestimate the magnitude of the effect.
For-profit entrepreneurship has built-in incentives that already cause many entrepreneurs to try and implement any promising opportunities. As a result, we'd expect it to be drastically less neglected, or at least drastically less neglected relative to nonprofit opportun
Same with e.g. OpenAI, which got $1b in nonprofit commitments but still had to become a (capped) for-profit in order to grow.
If you look at OpenAI's annual filings, it looks like the $1b did not materialize.
Hmm. This argument seems like it only works if there are no market failures (i.e. ideas where it's possible to capture a decent fraction of the value created), and it seems like most nonprofits address some sort of market failure? (e.g. "people do not understand the benefits of vitamin-fortified food," "vaccination has strong positive externalities"...)
I agree with most of what Lincoln said and would also plug Why and how to start a for-profit company serving emerging markets as material on this, if you haven't read it yet :)
Can you elaborate on the "various reasons" that people argue for-profit entrepreneurship is less promising than nonprofit entrepreneurship or provide any pointers on reading material? I haven't run across these arguments.
Thank you both for your thoughtful answers.
To clarify, I don't have a strong opinion on this comparison myself, and would love to hear more points of view on this. Sadly I'm not aware of any reading materials on this topic, but have heard the following arguments made in one on one conversations:
What are common failure cases/traps to avoid
I don't know about "most common" as I think it varies by company, but the worst one for me was allowing myself to get distracted by problems that were more rewarding in the short term, but less important or leveraged. I wrote a bit about this in Attention is your scarcest resource.
How much should I be directly coding vs "architecting" vs process management
Related to the above, you should never be coding anything that's even remotely urgent (because it'll distract you too much from non-coding probl... (read more)
Sorry for the minimalist website :) A couple clarifications:
Hey Marc, cool that you're thinking about this!
I work for Wave, we build mobile money systems in Senegal, Cote d'Ivoire, and hopefully soon other countries. Here are some thoughts on these interventions based on Wave's experience:
Interventions 1-2 (creating accounts): I think for most people that don't use mobile money, in countries where mobile money is available, "not having an account" is not the main blocker. It's more likely to be something like
Some of your "conservative" parameter estimates are surprising to me.
For instance, your conservative estimate of the effect of diminishing marginal returns is 2% per year or 10% over 5y. If (say) the total pool of EA-aligned funds grows by 50% over the next 5 years due to additional donors joining—which seems extremely plausible—it seems like that should make the marginal opportunity much more than 10% less good.
You also wrote
> we’ll stick with 5% as a conservative estimate for real expected returns on index fund investing
but used 7% as your conservative estimate in the spreadsheet and in the bottom-line estimates you reported.
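Two quick checks on the arithmetic in this comment (the 20-year horizon in the second check is an illustrative assumption, not a figure from the post being discussed):

```python
# Check 1: 2% per year compounded over 5 years is indeed roughly 10%.
five_year_decline = 1 - 0.98 ** 5
print(f"{five_year_decline:.1%}")  # ≈ 9.6%, i.e. roughly the quoted 10%

# Check 2: the gap between 5% and 7% real returns compounds substantially.
# Over an assumed 20-year horizon:
gap = 1.07 ** 20 / 1.05 ** 20
print(f"{gap:.2f}x")  # ≈ 1.46x more wealth at 7% than at 5%
```

The second check is why the 5%-vs-7% discrepancy matters: over long horizons it's not a rounding error in the bottom-line estimates.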
I'm looking forward to CEA having a great 2020 under hopefully much more stable and certain leadership!
I’d welcome feedback on these plans via this form or in the comments, especially if you think there’s something that we’re missing or could be doing better.
This is weakly held since I don't have any context on what's going on internally with CEA right now.
That said: of the items listed in your summary of goals, it looks like about 80% of them involve inward-facing initiatives (hiring, spinoffs, process improvements, str... (read more)
I think this is a really important point, and one I’ve been thinking a lot about over the past month. As you say, I do think that having a strategy is an important starting point, but I don’t want us to get stuck too meta. We’re still developing our strategy, but this quarter we’re planning to focus more on object-level work. Hopefully we can share more about strategy and object-level work in the future.
That said, I also think that we’ve made a lot of object-level progress in the last year, and we plan to make more this year, so we might have u
Hmm. You're betting based on whether the fatalities exceed the mean of Justin's implied prior, but the prior is really heavy-tailed, so it's not actually clear that your bet is positive EV for him. (e.g., "1:1 odds that you're off by an order of magnitude" would be a terrible bet for Justin, because he has 2/3 credence that there will be no pandemic at all).
Justin's credence for P(a particular person gets it | it goes world scale pandemic) should also be heavy-tailed, since the spread of infections is a preferential attac... (read more)
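A toy simulation makes the heavy-tail point concrete. All numbers below are illustrative assumptions (a made-up mixture prior, not Justin's actual one): with a heavy-tailed fatality distribution, outcomes exceed the mean far less than half the time, so a 1:1 bet at the mean is bad for the holder of that prior:

```python
import random

random.seed(0)

# Hypothetical mixture prior: 2/3 chance of no pandemic (zero deaths),
# 1/3 chance of a pandemic with a lognormally distributed (heavy-tailed) toll.
def sample_fatalities():
    if random.random() < 2 / 3:
        return 0.0
    return random.lognormvariate(11.5, 2.0)  # arbitrary heavy-tail parameters

samples = [sample_fatalities() for _ in range(200_000)]
mean = sum(samples) / len(samples)

# How often does the outcome actually exceed the prior mean?
p_exceed = sum(s > mean for s in samples) / len(samples)
print(f"mean ≈ {mean:,.0f}, P(exceed mean) ≈ {p_exceed:.2f}")
```

Under these made-up parameters, P(exceed mean) comes out around 10%, so taking the "fatalities exceed the mean" side at 1:1 odds would be strongly negative EV for someone holding this prior.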
Oops. I searched for the title of the link before posting, but didn't read the titles carefully enough to find duplicates that edited the title. Should have put more weight on my prior that this would already have been posted :)
I'm guessing that they assumed we were exaggerating the numbers in order to make them more interested in working with us. The fact that you're so ready to call anyone who lies about user numbers a "scammer" may itself be part of the cultural difference here :)
Examples (mostly from Senegal since that's where I have the most experience, caveat that these are generalizations, all of them could be confounded by other stuff, the world is complicated, etc.):
Broadly agree, but:
> You might end up making more impact if you started a startup in your own country, and just earned-to-give your earnings to GiveWell / EA organizations. This is because I think there are very few startups that benefit the poorest of the poor, since the poorest people don't even have access to basic needs.
Can't you just provide people basic needs then though? Many of Wave's clients have no smartphone and can't read. Low-cost Android phones (e.g. Tecno Mobile) probably provided a lot of value to people who previously did... (read more)
Haha this is probably the first time someone said that about one of my essays—I’m flattered, and excited to potentially write follow ups!
Is there anything in particular you’re curious about? Sometimes it’s hard to be sure of what’s novel vs obvious/common knowledge.
I imagine that a large fraction of EAs expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happens to overlap heavily with the needs of EA organizations. Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one
I agree with most of your comment.
> Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.

That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that people who get evaluated as average by EA orgs are 2 or more standard deviations less productive, which seems to imply that they're pretty replaceable.
If you're really worried about value drift, you might be able to use a bank account that requires two signatures to withdraw funds, and add a second signatory whom you trust to enforce your precommitment to donate?
I haven't actually tried to do this, but I know businesses sometimes have this type of control on their accounts, and it might be available to consumers too.
Whoops, sorry about the quotes--I was writing quickly and intended them to denote that I was using "solve" in an imprecise way, not attributing the word to you, but that is obviously not how it reads. Edited.
These theoretical claims seem quite weak/incomplete.
What's the shift you think it would imply in animal advocacy?
I had one of his quotes on partial attribution bias (maybe even from that interview) in mind as I wrote this!
Yikes; this is pretty concerning data. Great find!
I'd be curious to hear from anyone at GWWC how this updates them, and in particular how it bears on their "realistic calculation" of their cost effectiveness, which assumes 5% annualized attrition. (That's not an apples to apples comparison, so their estimate isn't necessarily off by literally 10x, but it seems like it must be off by quite a lot, unless the survey data is somehow biased.)
We're definitely aware that Giving What We Can's 2015 analysis comes away with a more optimistic conclusion than other more recent data sources like the EA Survey indicate (and I believe the Slate Star Codex survey, though I haven't seen a careful analysis of that one as it bears on Giving What We Can). We've just made some improvements to the donation recording platform, and once a few last things are ironed out we'll be sending out reminders for members to record their donations that may not have been recorded. Once people have had time to respond to those reminders, we plan to do an update on our 2015 estimates of members' follow-through.
I suspect that straightforwardly taking specific EA ideas and putting them into fiction is going to be very hard to do in a non-cringeworthy way (as pointed out by elle in another comment). I'd be more interested in attempts to write fiction that conveys an EA mindset without being overly conceptual.
For instance, a lot of today's fiction seems cynical and pessimistic about human nature; the characters frequently don't seem to have goals related to anything other than their immediate social environment; and they often don't pursue those ... (read more)
> worker cooperatives have positive impacts on both firm productivity and employee welfare; there is a lot more research showing that worker ownership is modestly better than regular capitalist ownership
This is causal language, but as far as I can tell (at least per the 2nd paper) the studies are all correlational? By default I'm very skeptical of ability to control for confounders in a correlational analysis here. Are there any studies with a more robust way to infer causation?
(PS: if you're interested in posting but unsure about content, I'd be excited to help answer any q's or read a draft! My email is in my profile.)
What EA is currently doing would definitely not scale to 10%+ of the population doing the same thing. However, that's not a strong argument against doing it right now. You can't start a political party with support from 0.01% of the population!
In general, we should do things that don't scale but are optimal right now, rather than things that do scale but aren't optimal right now, because without optimizing for the current scale, you die before reaching the larger scale.
I would be extremely interested if you were to hypothetically write an "intro to child protection/welfare for EAs" post on this forum! (And it would probably be a great candidate for a prize as well!) I think the number of upvotes on this comment show that other people agree :)
Personally, I have ~zero knowledge of this topic (and probably at least as many misconceptions as accurate beliefs!) and would be happy to start learning about it from scratch.
"Cause X" usually refers to an issue that is (one of) the most important one(s) to work ... (read more)
While climate change doesn't immediately appear to be neglected, it seems possible that many people/orgs "working on climate change" aren't doing so particularly effectively.
Historically, it seems like the environmental movement has an extremely poor track record at applying an "optimizing mindset" to problems and has tended to advocate solutions based on mood affiliation rather than reasoning about efficiency. A recent example would be the reactions to the California drought which blame almost anyone except the actual biggest... (read more)
I agree that the environmental movement is extremely poor at optimisation. This being said, there are a number of very large philanthropists and charities who do take a sensible approach to climate change, so I don't think this is a case in which EAs could march in and totally change everything. Much of Climateworks' giving takes a broadly EA approach, and they oversee the giving of numerous multi-billion dollar foundations. Gates also does some sensible work on the energy innovation side. Nevertheless, most money in the space does seem to be spe... (read more)
If one person-year is 2000 hours, then that implies you're valuing CEA staff time at about $85/hour. Your marginal cost estimate would then imply that a marginal grant takes about 12-24 person-hours to process, on average, all-in.
This still seems higher than I would expect given the overheads that I know about (going back and forth about bank details, moving money between banks, accounting, auditing the accounting, dealing with disbursement mistakes, managing the people doing all of the above). I'm sure there are other overheads that I don't... (read more)
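The arithmetic behind this can be sketched. The marginal-cost-per-grant figure below is a placeholder (the estimate being referenced isn't quoted in this thread); only the $85/hour and the 12-24 hour range appear in the comment above:

```python
HOURS_PER_PERSON_YEAR = 2000

# Implied staff-time valuation: a person-year at ~$170k over 2000 hours.
person_year_value = 170_000
hourly = person_year_value / HOURS_PER_PERSON_YEAR  # $85/hour

# Hypothetical marginal cost per grant, chosen to match the quoted range;
# the actual estimate isn't reproduced in this thread.
marginal_cost_low, marginal_cost_high = 1_000, 2_000
hours_low = marginal_cost_low / hourly
hours_high = marginal_cost_high / hourly
print(f"${hourly:.0f}/h -> {hours_low:.0f}-{hours_high:.0f} person-hours per grant")
```

In other words, a marginal cost in the low single-digit thousands per grant is what would imply the 12-24 person-hours of all-in processing time mentioned above.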
I actually think the $10k grant threshold doesn't make a lot of sense even if we assume the details of this "opportunity cost" perspective are correct. Grants should fulfill the following criterion:
"Benefit of making the grant" ≥ "Financial cost of grant" + "CEA's opportunity cost from distributing a grant"
If we assume that there are large impact differences between different opportunities, as EAs generally do, a $5k grant could easily have a benefit worth $50k to the EA community, and therefore easily be w... (read more)
I think we should think carefully about the norm being set by the comments here.
This is an exceptionally transparent and useful grant report (especially Oliver Habryka's). It's helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.
But the fact that Habryka now must defend a... (read more)
Now that the dust has settled a bit, I'm curious what Habryka & the other fund managers think of the level of community engagement that occurred on this report...
Relatedly, is Oli getting compensated for the work he's putting in to the Longterm Future Fund?
Seems good to move towards a regime wherein:
I think it's great that the Fund is trending towards more transparency & a broader set of grantees (cf. November 2018 grant report, cf. July 2018 concerns about the Fund).
And I really appreciate the level of care & attention that Oli is putting towards this thread. I've found the discussion really helpful.
I strongly agree with this. EA funds seemed to have a tough time finding grant makers who were both qualified and had sufficient time, and I would expect that to be partly because of the harsh online environment previous grant makers faced. The current team seems to have impressively addressed the worries people had in terms of donating to smaller and more speculative projects, and providing detailed write-ups on them. I imagine that in depth, harsh attacks on each grant decision will make it still harder to recruit great people for these committees, and m... (read more)
Agree with this, especially the comments about rudeness. This also means that I disagree with Oli's comment elsewhere in this thread:
> that people should feel free to express any system-1 level reactions they have to these grants.
In line with what Ben says, I think people should apply a filter to their system-1 level reactions, and not express them whatever they are.
Wow! This is an order of magnitude larger than I expected. What's the source of the overhead here?
Here is my rough fermi:
My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person lost in overhead, communications, technology (EA Funds platform) and needing to manage them.
Since people's competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252... (read more)
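The Fermi estimate above can be reproduced directly (the salary, tax, and staffing figures are the comment's own assumptions, carried over unchanged):

```python
counterfactual_earnings = 150_000  # forgone outside salary
cea_salary = 60_000                # salary paid by CEA
tax_rate = 0.30                    # assumed tax on the CEA salary
staff_multiplier = 1.5             # ~1 FTE plus ~0.5 FTE of overhead/management

# Total annual loss of money going to EA-aligned people:
total = (counterfactual_earnings + tax_rate * cea_salary) * staff_multiplier
print(f"${total:,.0f}")  # $252,000 per year
```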
This is true as far as it goes, but I think that many EAs, including me, would endorse the idea that "social movements are the [or at least a] key drivers of change in human history." It seems perverse to assume otherwise on a forum whose entire point is to help the progress of a social movement that claims to e.g. help participants have 100x more positive impact in the world.
More generally, it's true that your chance of convincing "constitutionally disinclined" people with two papers is low. But your chance is zero of convincing a... (read more)
I'm very interested in hearing from grantmakers about their take on this problem (especially those at or associated with CEA, which it seems like has been involved in most of the biggest initiatives to scale out EA's vetting, through EA Grants and EA Funds).
(Funding manager of the EA Meta Fund here)
For our last distribution, we ran an application round for the first time. I conducted the very initial investigation, which I communicated to the committee. Previous grantees had all come through our personal network.
Things we learnt during our application round:
i) We got significantly fewer applications than we expected and would have been able to spend more time vetting projects. This was not a bottleneck. After some investigation through personal outreach I have the impression there are not many projects being s... (read more)
It seems easier to increase the efficiency of your work than the quality.
In software engineering, I've found the exact opposite. It's relatively easy for me to train people to identify and correct flaws in their own code–I point out the problems in code review and try to explain the underlying heuristics/models I'm using, and eventually other people learn the same heuristics/models. On the other hand, I have no idea how to train people to work more quickly.
(Of course there are many reasons why other types of work might be different from software eng!)