I agree that it's downstream of this, but strongly agree with ideopunk that mission alignment is a reasonable requirement to have.* A (perhaps the) major cause of organizations becoming dysfunctional as they grow is that people within the organization act in ways that are good for them, but bad for the organization overall—for example, fudging numbers to make themselves look more successful, asking for more headcount when they don't really need it, doing things that are short-term good but long-term bad (with the assumption that they'll have moved on before t...
In addition to having a lot more on the line, other reasons to expect better of ourselves:
Because of the second point, many professional investors do surprisingly little vetting. For example, SoftBank is pretty widely reputed to be "dumb money;" I...
Strongly agree with these points and think the first is what makes the overwhelming difference on why EA should have done better. Multiple people (both publicly on the forum and in confidence to me) allege that they told EA leadership that SBF had been doing things that strongly break with EA values ever since the Alameda situation of 2018.
This doesn't imply we should have known about any particular illegal activity SBF might have been undertaking, but I would have expected SBF not to be promoted so heavily throughout the past couple of years. This is ...
Can someone clarify whether I'm interpreting this paragraph correctly?
Effective Ventures (EV) is a federation of organisations and projects working to have a large positive impact in the world. EV was previously known as the Centre for Effective Altruism but the board decided to change the name to avoid confusion with the organisation within EV that goes by the same name.
I think what this means is that the CEA board is drawing a distinction between the CEA legal entity / umbrella organization (which is becoming EV) and the public-facing CEA brand (whi...
Yep, your interpretation is correct. We didn't want to make a big deal about this rebrand because for most people the associations they have with "CEA" are for the organization which is still called CEA. (But over the years, and especially as the legal entity has grown and taken on more projects, we've noticed a number of times where the ambiguity between the two has been somewhat frustrating.) Sorry for the confusion!
Sorry that was confusing! I was attempting to distinguish:
I will try to think of a better title!
Since someone just commented privately to me with this confusion, I will state for the record that this commenter seems likely to be impersonating Matt Yglesias, who already has an EA Forum account with the username "Matthew Yglesias." (EDIT: apparently it actually is the same Matt with a different account!)
(Object-level response: I endorse Larks' reply.)
Please note that the Twitter thread linked in the first paragraph starts with a highly factually inaccurate claim. In reality, at EAGxBoston this year there were five talks on global health, six on animal welfare, and four talks and one panel on AI (alignment plus policy). Methodology: I collected these numbers by filtering the official conference app agenda by topic and event type.
I think it's unfortunate that the original tweet got a lot of retweets / quote-tweets and Jeff hasn't made a correction. (There is a reply saying "I should add, friend is not 10...
This must be somewhat true but FWIW, I think it's probably less true than most outsiders would expect—I don't spend very much personal time on in-country stuff (because I have coworkers who are local to those countries who will do a much better job than I could) and so end up having pretty limited (and random/biased) context on what's going on!
IIRC a lot of people liked this post at the time, but I don't think the critiques stood up well. Looking back 7 years later, I think the critique that Jacob Steinhardt wrote in response (which is not on the EA forum for some reason?) did a much better job of identifying more real and persistent problems:
...
- Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new potentially high-value opportunities.
- Over-confident claims coupled with insufficient background research.
- Over-reliance on a small set o
Interesting. It sounds like you're saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn't realize that.
In addition to the feedback thing, this seems like a generally very bad dynamic—for instance, in your example, regardless of whether she gets feedback, Sally has now more or less wasted years of graduate schooling.
"Top" and "sustainably fast-growing over a long period of time" are roughly synonymous, but fast growth is the upstream thing that causes a startup to be a good learning experience.
Note that billzito didn't specify, but the important number here is userbase or revenue growth, not headcount growth; the former causes the latter, but not vice versa, and rapid headcount growth without corresponding userbase growth is very bad.
People definitely can see rapidly increasing responsibility in less-fast-growing startups, but it's more likely to be because they're over-hir...
It sounds like you interpreted me as saying that rejecting resumes without feedback doesn't make people sad. I'm not saying that—I agree that it makes people sad (although on a per-person basis it does make people much less sad than rejecting them without feedback during later stages, which is what those points were in support of—having accidentally rejected people without feedback at many different steps, I'm speaking from experience here).
However, my main point is that providing feedback on resume applications is much more costly to the organization, not...
I think part of our disagreement might be that I see Wave as being in a different situation relative to some other EA organizations. There are a lot of software engineer jobs out there, and I'm guessing most people who are rejected by Wave would be fairly happy at some other software engineer job.
By contrast, I could imagine that stories like the following happening fairly frequently with other EA jobs:
Sally discovers the 80K website and gets excited about effective altruism. She spends hours reading the site and planning her career.
Sally converges
Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume.
I'm a bit confused about the phrasing here because it seems to imply that "Alice's application is read by a human" and "if Alice is rejected it's not just because of her resume" are equivalent, but many resume screen processes (including eg Wave's) involve humans reading all resumes and then rejecting people (just) because of them.
I'm unfamiliar with EA orgs' interview processes, so I'm not sure whether you're talking about lack of feedback when someone fails an interview, or when someone's application is rejected before doing any interviews. It's really important to differentiate these, because providing feedback on someone's initial application is a massively harder problem:
I don't have research management experience in particular, but I have a lot of knowledge work (in particular software engineering) management experience.
IMO, giving insufficient positive feedback is a common, and damaging, blind spot for managers, especially those (like you and me) who expect their reports to derive most of their motivation from being intrinsically excited about their end goal. If unaddressed, it can easily lead to your reports feeling demotivated and like their work is pointless/terrible even when it's mostly good.
People use feedbac...
I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing "person X is skeptical of MIRI" in the "cons" column) and this parent comment ("imagine I pointed a gun to your head and... offer you to give you additional information;" "never stopping at [person X thinks that p]"). I'm not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people's top-line views on questions where there's substantial di...
if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views
Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. "person X doesn't like MIRI" in the "cons" column of your spreadsheet seems foolish and wrongheaded.
Framing it as "taking others' views into account" or "ignoring others' views" is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.
If 100 forecasters (whom I roughly respect) look at the likelihood of a future event and think it's ~10% likely, and I look at the same question and think it's ~33% likely, I think it would be a mistake in my private use of reason not to update my all-things-considered view somewhat downwards from 33%.
I think this continues to be true even if we all in theory have access to the same public evidence, etc.
Now, it does depend a bit on the context of what this information is for. For example if I'm asked to give my perspective on a gro...
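A minimal sketch of what that kind of update could look like, assuming (purely for illustration) that you pool your own view with the crowd's in log-odds space; the 0.2 weight on the inside view is a made-up number, not anything from the thread, and the right weight would depend on track records:

```python
import math

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

my_p, crowd_p = 0.33, 0.10   # my inside view vs. the ~100 forecasters
w_self = 0.2                 # illustrative weight on my own inside view

pooled = inv_logit(w_self * logit(my_p) + (1 - w_self) * logit(crowd_p))
print(f"All-things-considered estimate: {pooled:.0%}")  # roughly 13%
```

With these (arbitrary) weights the all-things-considered view lands around 13%: well below the 33% inside view, but not all the way down to the crowd's 10%.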
I think we disagree. I'm not sure why you think that even for decisions with large effects one should only or mostly take into account specific facts or arguments, and am curious about your reasoning here.
I do think it will often be even more valuable to understand someone's specific reasons for having a belief. However, (i) in complex domains achieving a full understanding would be a lot of work, (ii) people usually have incomplete insight into the specific reasons for why they hold a certain belief themselves and instead might appeal to intuition, (iii) ...
Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions on how, or to whom, to defer.
...
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.
I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:
I'm somewhat sympathetic to the frustration you express. However, I suspect the optimal response isn't to be more or less epistemically modest indiscriminately. Instead, I suspect the optimal policy is something like:
Lots of emphasis on avoiding accidentally doing harm by being uninformed
I gave a talk about this, so I consider myself to be one of the repeaters of that message. But I also think I always tried to add a lot of caveats, like "you should take this advice less seriously if you're the type of person who listens to advice like this" and similar. It's a bit hard to calibrate, but I'm definitely in favor of people trying new projects, even at the risk of causing mild accidental harm, and in fact I think that's something that has helped me grow in the past.
If you...
I think I probably agree with the general thrust of this comment, but disagree on various specifics.
'Intelligent people disagree with this' is a good reason against being too confident in one's opinion. At the very least, it should highlight there are opportunities to explore where the disagreement is coming from, which should hopefully help everyone to form better opinions.
I also don't feel like moral uncertainty is a good example of people deferring too much.
A different way to look at this might be that if 'good judgement' is something that lots of peopl...
That last paragraph is a good observation, and I don’t think it’s entirely coincidental. 80k has a few instances in their history of accidentally causing harm, which has led them (correctly) to be very conservative about it as an organisation.
The thing is, career advice and PR are two areas 80k is heavily involved in, and both have a particular likelihood of causing as much harm as good, due to bad advice or distorted messaging. Most decisions individual EAs make are not like this, and it's a mistake if they treat 80k's caution as a reflection of how cautious they should be. Or worse, act even more cautiously, reasoning that the combined intelligence of the 80k staff is greater than their own (likely true, but likely irrelevant).
See also answers here mentioning that EA feels "intellectually stale". A friend says he thinks a lot of impressive people have left the EA movement because of this :(
I feel bad, because I think maybe I was one of the first people to push the "avoid accidental harm" thing.
I haven't had the opportunity to see this play out over multiple years/companies, so I'm not super well-informed yet, but I think I should have called out this part of my original comment more:
Not to mention various high-impact roles at companies that don't involve formal management at all.
If people think management is their only path to success then sure, you'll end up with everyone trying to be good at management. But if instead of starting from "who fills the new manager role" you start from "how can <person X> have the most impact on the company"...
I had a hard time answering this and I finally realized that I think it's because it sort of assumes performance is one-dimensional. My experience has been quite far from that: the same engineer who does a crap job on one task can, with a few tweaks to their project queue or work style, crush it at something else. In fact, making that happen is one of the most important parts of my (and all managers') jobs at Wave—we spend a lot of time trying to route people to roles where they can be the most successful.
Similarly, management is also not one-dimensional: ...
I'll let Lincoln add his as well, but here are a few things we do that I think are really helpful for this:
Agree that if you put a lot of weight on the efficient market hypothesis, then starting a company looks bad and probably isn't worth it. Personally, I don't think markets are efficient enough for this to be a dominant consideration (see e.g. my response here for partial justification; not sure it's possible to give a convincing full justification since it seems like a pretty deep worldview divergence between us and the more modest-epistemology-focused wing of the EA movement).
2. For personal work, it's annoying, but not a huge bottleneck—my internet in Jijiga (used in Dan's article) was much worse than anywhere else I've been in Africa. (Ethiopia has a state-run telecom monopoly that provides among the worst service in the world.) You do have to put some effort into managing usage (e.g. tracking things that burn network via Little Snitch, caching docs offline, minimizing Docker image size), but it's not terrible.
It is a sufficient bottleneck to reading some blogs that I wrote a simple proxy to strip bloat from web pages while...
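For the curious, here's a minimal sketch of the kind of bloat-stripping proxy I mean (an illustrative toy, not the actual tool): a local HTTP endpoint that fetches a page and drops scripts, styles, images, and other heavy elements before returning it.

```python
# Toy bloat-stripping proxy: fetch a page, strip heavy elements, return slim HTML.
# Usage: run this, then browse to http://localhost:8000/?url=https://example.com/post
import re
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

# Paired tags whose entire contents we drop, and void tags we drop outright.
PAIRED = re.compile(r"<(script|style|iframe|video)\b[^>]*>.*?</\1\s*>",
                    re.IGNORECASE | re.DOTALL)
VOID = re.compile(r"<(img|source|embed)\b[^>]*>", re.IGNORECASE)

class StripProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        target = parse_qs(urlparse(self.path).query).get("url", [None])[0]
        if not target:
            self.send_error(400, "missing ?url= parameter")
            return
        try:
            html = urlopen(target).read().decode("utf-8", errors="replace")
        except Exception as exc:  # network errors, bad URLs, etc.
            self.send_error(502, f"fetch failed: {exc}")
            return
        slim = VOID.sub("", PAIRED.sub("", html))  # drop bandwidth-heavy elements
        body = slim.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), StripProxy).serve_forever()
```

A real version would also rewrite links so that follow-up clicks stay behind the proxy, but the basic idea is just "fetch, strip, re-serve."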
The main outcome metric we try to optimize is currently number of monthly active users, because our business has strong network effects. We can't share exact stats for various reasons, but I am allowed to say that we crossed 1m users in June, and our growth rates are sufficiently high that our current user base is substantially larger than that. We're currently growing more quickly than most well-known fintech companies of similar sizes that I know of.
On EA providing for-profit funding: hard to say. Considerations against:
Cool! With the understanding that these aren't your opinions, I'm going to engage with them anyway because I think they're interesting. I think for all four of these I agree that they directionally push toward for-profits being less good, but that people overestimate the magnitude of the effect.
...For-profit entrepreneurship has built-in incentives that already cause many entrepreneurs to try and implement any promising opportunities. As a result, we'd expect it to be drastically less neglected, or at least drastically less neglected relative to nonprofit opportun
Hmm. This argument seems like it only works if there are no market failures (i.e., if for every idea it's possible to capture a decent fraction of the value it creates), and it seems like most nonprofits address some sort of market failure? (e.g. "people do not understand the benefits of vitamin-fortified food," "vaccination has strong positive externalities"...)
I agree with most of what Lincoln said and would also plug Why and how to start a for-profit company serving emerging markets as material on this, if you haven't read it yet :)
Can you elaborate on the "various reasons" that people argue for-profit entrepreneurship is less promising than nonprofit entrepreneurship or provide any pointers on reading material? I haven't run across these arguments.
Thank you both for your thoughtful answers.
To clarify, I don't have a strong opinion on this comparison myself, and would love to hear more points of view on this. Sadly I'm not aware of any reading materials on this topic, but have heard the following arguments made in one on one conversations:
Great questions!
What are common failure cases/traps to avoid
I don't know about "most common" as I think it varies by company, but the worst one for me was allowing myself to get distracted by problems that were more rewarding in the short term, but less important or leveraged. I wrote a bit about this in Attention is your scarcest resource.
How much should I be directly coding vs "architecting" vs process management
Related to the above, you should never be coding anything that's even remotely urgent (because it'll distract you too much from non-coding probl...
Sorry for the minimalist website :) A couple clarifications:
Hey Marc, cool that you're thinking about this!
I work for Wave, we build mobile money systems in Senegal, Cote d'Ivoire, and hopefully soon other countries. Here are some thoughts on these interventions based on Wave's experience:
Interventions 1-2 (creating accounts): I think for most people that don't use mobile money, in countries where mobile money is available, "not having an account" is not the main blocker. It's more likely to be something like
Some of your "conservative" parameter estimates are surprising to me.
For instance, your conservative estimate of the effect of diminishing marginal returns is 2% per year or 10% over 5y. If (say) the total pool of EA-aligned funds grows by 50% over the next 5 years due to additional donors joining—which seems extremely plausible—it seems like that should make the marginal opportunity much more than 10% less good.
You also wrote
we’ll stick with 5% as a conservative estimate for real expected returns on index fund investing
but used 7% as your conservative estimate in the spreadsheet and in the bottom-line estimates you reported.
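As a quick sanity check on the numbers (the 20-year horizon in the second part is my own illustrative assumption, not from the post):

```python
# 2%/year of diminishing returns compounds to roughly 10% over 5 years...
years, per_year = 5, 0.02
print(f"{1 - (1 - per_year) ** years:.1%}")  # ~9.6%

# ...while the 5% vs. 7% real-return assumption makes a large difference
# to the bottom line once it compounds over a longer horizon.
for r in (0.05, 0.07):
    print(f"{r:.0%} real return for 20 years -> {(1 + r) ** 20:.1f}x")  # ~2.7x vs ~3.9x
```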
I'm looking forward to CEA having a great 2020 under hopefully much more stable and certain leadership!
I’d welcome feedback on these plans via this form or in the comments, especially if you think there’s something that we’re missing or could be doing better.
This is weakly held since I don't have any context on what's going on internally with CEA right now.
That said: of the items listed in your summary of goals, it looks like about 80% of them involve inward-facing initiatives (hiring, spinoffs, process improvements, str...
I think this is a really important point, and one I’ve been thinking a lot about over the past month. As you say, I do think that having a strategy is an important starting point, but I don’t want us to get stuck too meta. We’re still developing our strategy, but this quarter we’re planning to focus more on object-level work. Hopefully we can share more about strategy and object-level work in the future.
That said, I also think that we’ve made a lot of object-level progress in the last year, and we plan to make more this year, so we might have u
Hmm. You're betting based on whether the fatalities exceed the mean of Justin's implied prior, but the prior is really heavy-tailed, so it's not actually clear that your bet is positive EV for him. (e.g., "1:1 odds that you're off by an order of magnitude" would be a terrible bet for Justin because he has 2/3 credence that there will be no pandemic at all).
Justin's credence for P(a particular person gets it | it goes world scale pandemic) should also be heavy-tailed, since the spread of infections is a preferential attac...
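To make the heavy-tail point concrete, here's a toy simulation with made-up numbers (only the 2/3 "no pandemic" credence comes from Justin's comment; the log-normal parameters are arbitrary). Even when the bet threshold equals the prior's mean, most of the prior's mass can sit below it, so a 1:1 bet on "exceeds the mean" isn't automatically fair under that prior.

```python
import random

random.seed(0)

def sample_fatalities():
    # 2/3 credence in no world-scale pandemic (per Justin's comment);
    # otherwise a heavy-tailed (log-normal) fatality count with arbitrary parameters.
    if random.random() < 2 / 3:
        return 0.0
    return random.lognormvariate(13, 2)

samples = [sample_fatalities() for _ in range(200_000)]
mean = sum(samples) / len(samples)
p_exceed = sum(s > mean for s in samples) / len(samples)

print(f"prior mean: {mean:,.0f}")
print(f"P(fatalities exceed the prior mean): {p_exceed:.0%}")
# With a prior this skewed, only on the order of ~10% of the mass lies above the
# mean, so a 1:1 bet pegged at the mean is far from neutral for the prior's holder.
```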
Examples (mostly from Senegal since that's where I have the most experience, caveat that these are generalizations, all of them could be confounded by other stuff, the world is complicated, etc.):
Broadly agree, but:
You might end up making more impact if you started a startup in your own country, and just donated your earnings to GiveWell / EA organizations (earning to give). This is because I think there are very few startups that benefit the poorest of the poor, since the poorest people don't even have access to basic needs.
Can't you just provide people basic needs then though? Many of Wave's clients have no smartphone and can't read. Low-cost Android phones (e.g. Tecno Mobile) probably provided a lot of value to people who previously did...
I imagine that there a large fraction of EAs who expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happens to overlap heavily with the needs of EA organizations. Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one...
I agree with most of your comment.
>Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.
That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that people who get evaluated as average by EA orgs are 2 or more standard deviations less productive, which seems to imply that they're pretty replaceable.
If you're really worried about value drift, you might be able to use a bank account that requires two signatures to withdraw funds, and add a second signatory whom you trust to enforce your precommitment to donate?
I haven't actually tried to do this, but I know businesses sometimes have this type of control on their accounts, and it might be available to consumers too.
These theoretical claims seem quite weak/incomplete.
Don't forget Zenefits!
...