Arguments for earning to give (EtG) as an impactful career rest on a simplified conception of the market for talent. Even assuming a functioning market, I argue there are many forces pushing the price of talent upwards. I also argue that the market for talent is dysfunctional, so buyers of talent must navigate a market where supply does not meet demand. Finally, I give reasons why buying talent is undesirable even where it is possible.
Overall, these arguments lend nuance to how money can be used altruistically. In particular, they suggest that those considering EtG as their primary career path might want to consider direct work instead.
When I was first introduced to effective altruism, I thought that money was the main lever used to move the world. I thought that money was relatively fungible with nearly all resources. In particular, money allowed you to change the distribution of problems on which talented people were working.
I accepted that earning to give (EtG) dominated direct work because you can always funge money with someone else doing direct work. Since you'll earn the "market rate" for your talent, the only cost is living expenses. Living expenses are low, so EtG is at least about as good as direct work. Since EtG also offers more flexibility, it is likely better.
This argument is flawed. One reason is that the "market rate" people get paid is less than the value they could contribute by doing direct work. The main reason, however, is that I no longer think money can be easily and efficiently converted into talent, as I will argue below.
This post will mostly be written from an impatient longtermist perspective. This means I am mostly thinking about talent as the ability to help prevent global catastrophic risks, e.g. unaligned artificial general intelligence or global pandemics. I expect many of these arguments to fail when applied to non-longtermist cause areas.
Money Buys Talent
Some people are more talented than other people: more productive, more intelligent, more personable, etc. Talented people are better able to achieve goals, like creating software, managing people, or conducting research. Organizations want to hire talented people. This gives us a supply of and demand for talented people.
The economy uses pricing to match supply with demand. Money should be able to buy talent efficiently, just like it can buy other goods. Thus, being willing to pay more for talented people should increase the amount of talent you can attract.
This argument is compelling. However, it makes multiple false assumptions about the market for talent.
Money can't always buy quality
In the market for iPhones, you can buy more iPhones for more money. At reasonable quantities, this relationship is roughly linear. If you buy a significant fraction of the current supply of iPhones, you push up demand and pay more per iPhone. If you want to buy more iPhones than exist on Earth, you have to pay for new iPhone factories, which makes the marginal iPhone cost more.
Now suppose you wanted an iPhone 15. How much would this cost you? There isn't any supply; to obtain one, you would have to make it. Apple is already spending billions on R&D. How much money would you need to speed up that process? At least 1000x the price of an iPhone 12.
It's tempting to think of a unit of talent like an iPhone: you can buy some talent for some price, so you can buy double the talent for double the price. This is technically true if you hire two people. However, due to communication and managerial overhead, two people will do less than twice the work of one person, or cost more than twice as much. One person with twice the talent is more valuable than two people, just like an iPhone 15 is more valuable than two iPhone 12s.
As another analogy, consider microSD cards. I can buy a 512 GB microSD card for $65. A 1 TB microSD card costs $330, about 5x as much for a 2x increase in storage. How much would I have to pay for a 2 TB microSD card? It is not yet generally available, though one might exist in an R&D department. Said department would probably not part with it for a million dollars.
Applied to talent, imagine Alice can think twice as fast as Bob. How much more is Alice worth? Naively, twice as much as Bob. However, suppose that Alice and Bob work at competing trading firms. Both firms have capital, so they can maximally exploit any market opportunities their traders spot. Thus, it's winner takes all; whichever firm spots the market inefficiency first gets all of the profit. Alice thinks twice as fast as Bob, so she always spots the best market inefficiencies first. If some market inefficiencies are 10x more profitable than others, Alice is worth 10x more than Bob.
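The winner-takes-all dynamic can be illustrated with a toy simulation. The specifics here are my own illustrative assumptions, not claims from the post: a heavy-tailed (Pareto) distribution of opportunity profits, 100 opportunities per period, and a capacity limit of 10 trades per trader.

```python
import random

# Toy model of the winner-takes-all claim above. Assumptions (mine):
# 100 opportunities per period with heavy-tailed (Pareto) profits,
# and each trader only has time to act on 10 of them.
random.seed(0)
profits = sorted((random.paretovariate(1.16) for _ in range(100)), reverse=True)

# Alice thinks twice as fast, so she spots every inefficiency before Bob
# and captures the 10 most profitable ones; Bob gets the next 10.
alice = sum(profits[:10])
bob = sum(profits[10:20])
print(f"Alice/Bob value ratio: {alice / bob:.1f}")
```

The exact ratio depends entirely on the assumed tail: the heavier the tail of opportunity profits, the further Alice's value ratio can exceed her 2x speed ratio.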
In general, additional copies of a good can usually be purchased at the same marginal price. Higher-quality versions, however, constitute a different market, where one might have to pay arbitrarily more for marginally higher quality.
In the limit, there are things that people with nearly unlimited resources cannot purchase.
As the American Revolution was heating up, a wave of smallpox was raging on the other side of the Atlantic. An English dairy farmer named Benjamin Jesty was concerned for his wife and children...When smallpox began to pop up in Dorset, Jesty decided to take drastic action. He took his family to a nearby farm with a cowpox-infected cow, scratched their arms, and wiped pus from the infected cow on the scratches...Throughout the rest of their lives, through multiple waves of smallpox, they were immune...
The same wave of smallpox which ran across England in 1774 also made its way across Europe. In May, it reached Louis XV, King of France. Despite the wealth of a major government and the talents of Europe’s most respected doctors, Louis XV died of smallpox on May 10, 1774.
This observation about quality only establishes that the price of talent might be non-linear. Why might we expect it to be sharply non-linear in practice?
Talent is rare
Price is controlled by supply and demand. What's the supply? Pretty low. If you're reading this post, you've probably spent your entire life interacting with people who have already been selected for intelligence, motivation, conscientiousness, etc.
As an example, students at top US universities frequently have ACT scores of 35+. In 2020, about twenty thousand students out of 1.7 million scored 35+, roughly 1%. These students have already been selected from those who take the ACT, who in turn have been selected from those going to college.
But percentile rankings don't matter. Reality doesn't grade on a curve. How hard is it to find talented individuals? The 95th percentile isn't that good. The 99th percentile probably isn't good enough either. Translating percentiles into talent is difficult. My experience has been interacting with a sample, then realizing that the sample is more skewed than I initially thought.
Additionally, EA organizations desire particular talents. For example, many organizations don't have much training capacity. Such organizations are interested in hiring people that can "hit the ground running.” Ben Todd:
[Open Philanthropy is] not particularly constrained by finding people who have a strong resume who seemed quite aligned with their mission, but they are still constrained by someone who can just kind of like hit the ground running as a researcher. And some evidence for this is that they trialed 12 people, I think for like three to six months. But of those, they only hired five. And they’re like a multi-billion dollar foundation. So they clearly have the funds to hire more people if they found people above the bar.
The supply of talented individuals is low. What about demand?
Talent is very desirable
Hedge funds turn talent into profit. In particular, the compensation an individual receives is directly proportional to their talent. Thus, hedge funds create a demand for talent at the appropriate level.
Assuming the market functions properly, the cost of one talented individual is the amount they would have been paid elsewhere. Since talent is also useful for making money, this amount can be large. To be specific, I have friends who might become quantitative traders/analysts. Their expected salaries are at least $1 million per year over their careers, plausibly 2-10 times larger. I expect many top longtermist researchers could obtain similar salaries if they switched to finance.
At $1 million per year, $10 billion buys 10,000 researcher-years, equivalent to 250 researcher-careers. That's not a lot. And also probably an overestimate.
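The arithmetic is simple enough to check directly (the 40-year career length is the footnote's assumption):

```python
# Checking the researcher-years arithmetic above.
budget = 10_000_000_000      # $10 billion
cost_per_year = 1_000_000    # $1 million per researcher-year
career_length = 40           # years per career (the footnote's assumption)

researcher_years = budget // cost_per_year   # 10,000 researcher-years
careers = researcher_years // career_length  # 250 researcher-careers
print(researcher_years, careers)  # 10000 250
```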
Some people are altruistic and might be willing to do direct work for less than their expected salary. Might we be able to use money to buy these people?
Not quite. Assuming market efficiency, the savings from hiring an altruistic individual at a discount are balanced by lost counterfactual donations. Suppose Alice currently makes $200,000 a year and is willing to do direct work for $100,000 a year. Hiring Alice seems like a good deal: we get $200,000 of talent for $100,000. However, if Alice would do direct work for half the salary, she should also be willing to donate half her salary while earning to give. Our savings are balanced by her lost donations.
This argument fails in practice because many altruistic individuals are willing to take a larger pay cut than they are willing to donate; people value the type of work they do. Additionally, people rarely get paid an amount equal to the value of their work, so doing direct work is more efficient than EtG. In practice, using money to hire talented altruistic individuals is highly impactful.
Assuming a market for talent, we have high demand. Is there even a market?
Talent isn’t a perfect market
Many of my friends work as software engineers at tech companies and haven’t looked for jobs outside of big tech. These individuals might be able to double their salaries at hedge funds or startups. What's going on here?
(Thanks to Zvi for inspiring this analysis. In what follows, the producers are the talented individuals and the consumers are the companies that want to hire them. I am unaware of the strength of most of these effects.)
Products/producers are not homogenized; information about them is costly for consumers. Even relatively homogenized products, like generic software engineers, differ vastly in quality between individuals. There is no cheap and reliable way to differentiate between levels of talent without costly trial periods. Hiring employees with specific sub-skills is difficult. Reputation and experience matter.
Hiring requires long-term predictions. There is often a substantial lag between hiring someone and having them start, and when an employee isn’t a good fit this can take time to determine. At the same time, the nature of GiveWell’s work is constantly changing. Thus, doing good evaluations of employees is both difficult and very important; the cost of an overly optimistic hire or evaluation can be significant.
Consumers are not homogenized; information is costly for producers. Personal fit is essential for productivity, happiness, career advancement, etc. However, it is difficult to determine fit; what was initially exciting might become boring after a few months. You might attempt to endure only to burn out. People don't know their own abilities. They don't know their market rate and don’t shop around to find out. Stigma around salaries means people don’t know they're being underpaid.
Producers have imperfect information. People do not know if they will want to switch jobs. Jobs at large companies allow for transferring to different departments, should the desire arise. Large companies offer stability. People might not know some jobs exist.
Consumers have imperfect information. Even with internship programs, employers cannot entirely measure a person. Hires from top universities possess more background knowledge. People might quit, which is costly for employers. Each person also alters workplace culture, which cannot be measured but must be maintained.
Fixed costs exist for producers. Searching for jobs is an expensive and time-consuming process. The skills one needs to interview well require cultivation (read: leetcode). If you're employed, spending a day interviewing might cost hundreds in potential earnings. Applying for jobs can be demoralizing and damage advancement in your current job.
Fixed costs exist for consumers. Hiring employees is expensive and time-consuming, often requiring dozens of full-time employees. Technical interviews are conducted by technically skilled people whose time is expensive. Economies of scale exist; high-variance, high-value actions must be taken many times. Asking people to apply for jobs rarely works. Large companies attract talent with reputation.
In order to reach the point where employees are producing work that we can rely on and integrate into our output, senior staff need to invest significant time in training, evaluation and management.
Producer preferences differ. People have different job fits. Someone could be a great trader if only they were passionate about arbitrage. Doing enjoyable work might be a requirement. People also have preferences about workplace culture, job respectability, job stability, coworker demeanor, hour flexibility, office location, etc.
Producers might care about individual consumers. People often have dreams of working in specific industries. If these dreams are strong enough, they might be unpersuadable. People have ethical qualms about working in other industries, e.g. finance.
Producers are not strategic. People who ask for raises make more than people who don't, and yet there are people who don't ask for raises. College graduates frequently take the first offer they get because they don't apply to enough companies. People make mistakes in negotiation that cost them hundreds of thousands of dollars of potential earnings, yet they spend very little time practicing. People are unmotivated to develop the skills they lack or discover their fair market value.
Consumers are not strategic. Biases plague job interviews, and yet employers often don't take basic measures like blinding. Technical interviews are conducted by volunteers instead of trained individuals. Coding quizzes are developed on an ad hoc basis instead of treated as serious endeavors. Basic statistics correlating interview performance with job performance are not collected. Employers give large weight to unreliable subjective impressions of "fit" and "character".
Consumers are constrained. Employees at large companies systematically make less than the value they generate. Small companies pay with equity instead of cash, which carries more risk. Hires must be legibly defensible.
The market for talent is flawed; supply doesn't always meet demand. Can people be bought?
People don't value money
Suppose you wanted Terence Tao to work at your hedge fund. How much would you have to pay him? I think he wouldn't switch jobs for any reasonable price.
I sometimes ask my friends how much money someone would have to pay them to live as a hermit in the woods. Some of my friends said they would not do this for any amount of money. Their current lives were good, they reasoned; what would they even do with more money?
Most people face sharply diminishing returns to large sums of money, which distorts the market for talent. As people acquire more money, their priorities shift toward work-life balance, passion, location, etc. If someone is passionate about their work, no amount of money may be sufficient to move them.
There also might be systematic tendencies for talented individuals to value money less. Paul Graham: "If I had to put the recipe for genius into one sentence, that might be it: to have a disinterested obsession with something that matters." If Graham is right, talented people have talent precisely because they don’t care for money. Paul Erdős wouldn’t have left mathematics for any worldly thing.
Many talented individuals don't value money, but some might. Would we want to buy talent with money?
Alignment is important
An analogy: You’re at war with Examplestan. Examplestan doesn't maintain a large standing army; the bulk of their forces are mercenaries. You are richer than Examplestan. You pay Examplestan's mercenaries to join your side, easily winning the war.
A hypothetical: You discover the head of an AI Safety research organization is willing to switch to any job, including harmful ones, if offered enough money. Would that make you uncomfortable? It certainly would for me. I understand AI Safety better than most, but I cannot independently verify many research directions. Part of my confidence that the work is useful stems from trust.
Suppose someone is researching the evolutionary history of the immune system to help prevent global pandemics. I do not know if this is useful. I would be more comfortable funding this person if they were researching this topic for global pandemic prevention; it makes them likely to shift focus if their research is misguided. If they were researching for enjoyment, I would be wary of funding them.
Paying people to do things is a value alignment problem. I want you to do what I want, so I pay you money. But how do I know that you'll do what I want? If you don’t share my values, I have to rely on oversight. In disciplines like software engineering, supervisors can assess performance metrics. In disciplines like research, such mechanisms are hard to implement.
If core EA organizations were older, they could make use of talented, unaligned individuals. However, the EA movement in 2020 does not have sufficient infrastructure. My guess is that employees receive only vague directions, coupled with instructions to "do what I mean" or "do what you think is best.” Many EA organizations are functional only because the employees are value aligned, so such organizations do not fall prey to Goodhart's law.
Given the current lack of infrastructure to make use of unaligned individuals, the benefits of hiring such people are low. This analysis recommends building infrastructure that allows money to be used more effectively, e.g. increasing the management and training capacity of existing organizations.
Basic economics suggests that talent, like any other good, should be available for a price. However, this argument has multiple complications:
- Highly talented individuals are a different class of good than less talented individuals and can cost arbitrarily more.
- People talented in ways useful for direct work are rare.
- Such people are in high demand, making them expensive.
- The market for talent is inefficient.
- Talented people might not value money.
- Value misalignment makes it difficult to use such people given current EA management capacity.
Some expensive things are worth buying. The arguments above are only strong enough to complicate the intuition that one can buy talent for money. There are also other subtleties that I have not adequately explained.
Overall, I see many people new to effective altruism considering EtG as their primary career. Arguments for the effectiveness of EtG, especially in the longtermist space, rely on simplifications of the market for talent. I hope to add nuance to this market and encourage young EAs to consider direct work as a career.
An example of this is quantitative traders, who are at the limit of being efficiently compensated for their talent, and who still only receive 10-20% of the profits they generate. ↩︎
Assuming 40 years per career. ↩︎
I don't have data on this, but it would not be surprising to me if most interns at tech companies were net-negative in terms of revenue, i.e. they cost more in money/management time than the value they produce. Of course, they're still likely worth it to the company because determining the talent of a potential employee is worth much more than the $20,000-$50,000 they pay the intern. ↩︎
As anecdata, as part of an interview process, my friend spent an hour talking to someone who made upwards of $5 million a year. Naively, that conversation cost the company more than $2500. In practice, that person worked more than 40 hours a week and didn't have to spend that much energy conversing, lowering the cost. ↩︎
I have no knowledge about mercenaries. This is also a bad example because Examplestan would not want to hire mercenaries that would switch sides in the middle of a battle. For reasons of reputation, the mercenaries would then not want to switch. However, they might switch if paid enough money that reputation no longer mattered. ↩︎
Of course, such mechanisms probably cause less effective behavior due to Goodhart's Law. ↩︎
I have very little experience working for EA organizations. This perspective was informed by conversing with some people who do have such experience and listening to this episode of the 80,000 Hours podcast. ↩︎
One example is the vetting problem: how do you robustly find talented and altruistic individuals? ↩︎
Interesting take on money and talent, thank you for writing up. I thought I would share some of my experience that might suggest an opposing view.
I do direct work and I don’t think I could do earning-to-give very well. Also, when I look at my EA friends I see both someone who has gone from direct work to earning-to-give, disliked it and went back to direct work, and someone who has gone from earning-to-give to direct work, disliked that and gone back to earning-to-give. Ultimately all this suggests to me that personal fit is likely to be by far the most important factor here and that these arguments are mostly only useful to the folk who could do really well at either path.
I also think there are a lot of people in the EA community who have really struggled to get into doing direct work (example), and I have struggled to find funding at times and relied on earning-to-give folk to fund me. I wonder if there is maybe a grass is greener on the other side effect going on.
The example you linked to is about someone struggling to get a job in an 'EA organisation'. This is clearly not the same as direct work, which is a much larger category. I am pretty sure you'd agree as someone who does direct work not always in an EA org, but let me know if I'm wrong there.
Yes I agree with you.
That said, the original post appears in a few places to be specifically talking about talent at EA organisations, so the example felt apt.
I think Rethink Priorities is a very clear counterexample.
We were able to spend money to "buy" many longtermist researchers, some of whom would not have counterfactually worked in the area. Plus, our hiring round data indicates that there are many more such people out there whom we could hire, if only we weren't funding constrained.
While I am in favor of Rethink Priorities and have recommended allocating funding to it multiple times, I do not yet know of any research that is the result of your recent hiring that actually seems useful to me (which is not very surprising, it's not been very long!).
I think Rethink Priorities is a promising approach to potentially resolve some of these issues, but I would really not count it as a success yet, and I really don't think it's obvious that it's going to work out (though it might, and that's what makes it exciting). Also, scaling an organization, in particular scaling high-context research organizations, is very hard, and I would not straightforwardly expect you to actually be able to scale (even if you currently believe that you can).
I also think Rethink Priorities is tapping into a talent funnel that was built by other people, and is very much not buying talent "on the open market" so to speak. I do currently think it is a good place for people to work, but I don't think you would actually be able to hire many people who haven't been engaged with the broader EA/Rationality/Longtermist community for quite a while, and that talent pool is itself pretty limited.
Yes, naturally that would take more than two months to produce!
I'd dispute that on two counts:
1.) I do think we have been able to acquire talent that would not have been otherwise counterfactually acquired by other organizations. For the clearest example, Luisa Rodriguez applied to a fair number of EA organizations and was turned down - she was then hired by us, and now has gone on to work with Will Macaskill and will soon be working for 80,000 Hours. Other examples are also available, though I'd avoid going into too much detail publicly to respect the privacy of my employees. We also are continuing to invest in further developing talent pipelines across cause areas and think our upcoming internship program will be a big push in this direction.
2.) Even if we concede that we are using a talent funnel created by other people, I don't think it is a bad thing. There still is a massive oversupply of junior researchers who could potentially do good work, and a massive undersupply of open roles with available mentorship and management. I think anything Rethink Priorities could be doing to open more slots for researchers is a huge benefit to the talent pipeline even if we aren't developing the earlier part of the recruitment funnel from scratch (though I do think we are working on that to some extent).
As an additional data point, I can report that I think it's very unlikely that I would currently be employed by an EA organization if Rethink Priorities didn't exist. I applied to Rethink Priorities more or less on a whim, and the extent of my involvement with the EA community in 2018 (when I was hired) was that I was subscribed to the EA newsletter (where I heard about the job) and I donated to GiveWell top charities. At the time, I had completely different career plans.
[Speaking for myself, not for Rethink Priorities.]
I think someone reading this thread might (incorrectly) think "Habryka is saying the research that these hires have produced hasn't actually seemed useful. Peter agrees but emphasises that it'll take longer for the researchers to produce stuff that's more useful." The real situation is simply that the people hired around November haven't yet published any proper public write-ups (though there are things in the works that should be out in the coming months) - i.e., the situation isn't that they published stuff that Habryka found non-useful.
Hopefully our upcoming first outputs will indeed seem useful!
(I'm not saying Habryka or Peter said incorrect things; I'm just making a guess as to how someone could've interpreted what Habryka said.)
I agree with Peter - ALLFED has been training up volunteers and we could bring on a lot more talent full-time (both our volunteers and from the general EA pool) if we had more money.
[Speaking for myself, not for Rethink Priorities.]
Yeah, I agree with this (and already thought so before joining Rethink Priorities). Various people have made claims like that EA is vetting-constrained or that some of EA's main bottlenecks at the moment are "organizational capacity, infrastructure, and management to help train people up" (Ben Todd). This seems right to me, and seems to align with the idea that there is important work to be done improving parts of the talent funnel in ways other than bringing new people into the funnel in the first place.
(Related things were also discussed here and here.)
(That said, I definitely would agree that bringing more people into the funnel is also good. And I would agree that, all else held constant, it'd typically be more impactful to find, vet, train, manage, etc. someone who wouldn't have otherwise been working on EA-related stuff at all, relative to doing that with someone who would've done something EA-related anyway.)
I was surprised to discover that this doesn't seem to have already been written up in detail on the forum, so thanks for doing so. The same concept has been written up in a couple of other (old) places, one of which I see you linked to and I assume inspired the title:
Givewell: We can't (simply) buy capacity
80000 Hours: Focus more on talent gaps, not funding gaps
The 80k article also has a disclaimer and a follow-up post that felt relevant here; it's worth being careful about a word as broad as 'talent':
I would really appreciate an explicit definition of 'direct work' that this post is using. I was assuming it was my definition in which direct work includes not just work at EA orgs, but also lots of impactful roles e.g. in certain policy areas or certain AI companies. However, some of the comments seem to assume otherwise.
Also if this post does mean 'working at EA orgs' rather than a wider 'direct work' definition, consider not using the term 'direct work' to avoid ambiguity.
Thanks, Mark! I've been struggling to figure out what career goals I myself should pursue, so I appreciated this post.
I think this advice is missing a very important qualification: if you are a highly talented person, you might want to consider direct work. As the post mentions, highly talented people are rare—for example, you might be highly talented if you could plausibly earn upwards of $1m/year.
Regularly talented people are in general poor substitutes for highly talented people. As you say, there is little demand for them at EA organizations: "[Open Philanthropy is] not particularly constrained by finding people who have a strong resume who seemed quite aligned with their mission." (More anecdotal evidence: "It is really, really hard to get hired by an EA organisation.")
In other words, EA orgs value regularly talented people below the market rate—that’s one reason those people should prefer earning to give instead of direct work. (On the other hand, maybe there are opportunities for direct work at non-EA organizations that constitute sufficient demand?)
As a probably regularly-talented person myself, I'm particularly interested in the best course of action here. Rather than "earn to give" or "do direct work," I think it might be "try as hard as you can to become a highly talented person" (maybe by acquiring domain expertise in an important cause area).
One more thing:
The flip side is that if you value money/monetary donations linearly—or more linearly than other talented people—then you’ve got a comparative advantage in earning to give! The fact that "people don't value money" means that no one's taking the exhausting/boring/bad-location jobs that pay really well. If you do, you can earn more than you "should" (in an efficient market) and make an outsize impact.
I think there are lots of opportunities for direct work at non-EA orgs with sufficient demand.
"Try and become very talented" is good advice to take from this post. I don't have a particular method in mind, but becoming the Pareto best in the world at some combination of relevant skills might be a good starting point.
This is a good point. People able to competently perform work they're unenthusiastic about should, all else being equal, have an outsized impact, because their choice of work can more accurately track where the true value lies rather than where their enthusiasm lies.
I expect this isn't what you're actually implying, but I'm a bit worried this could be misread as saying that most people who are sufficiently talented in the relevant sense to work at an EA org are capable of earning $1m/year elsewhere, and that if you can't, then you probably aren't capable of working at an EA org or doing direct work. I just wanted to flag that I think the kinds of talent required for doing direct work are often not all that correlated with the kinds of talent that are highly financially rewarded outside of EA, and that people shouldn't rule themselves out for the former because they wouldn't be capable of earning a ton of money.
(Edit: People (or person?) who downvoted - I'd love to know why! Is it because you think smountjoy is obviously not saying the thing I thought they might be misread as saying, and so you think this is a pointless comment, or because you disagree with it, or something else? I'm fairly new to actually commenting on the forum, so maybe I've not understood the etiquette properly.)
Agreed. I appreciate this post and responses alike, but think there are many examples of:
I expect there are several cases a year where the world would be better off if an individual in category 1 would EtG and fund direct work of 5-10 individuals in category 2, than if the individual in category 1 were to choose direct work instead. Not that those in category 1 should mostly EtG rather than do direct work, but I'd be more bullish on the EtG path in some cases than Mark is given the huge labor supply in category 2.
A sad example of the glut of brilliant history PhDs is the challenging labor market and career that Thea Hunter confronted, despite her extraordinary reputation/abilities according to Foner and others. Her painful trajectory is a sign that there is real slack in the "brilliant historian" market. I expect some rising star historians could be induced to work on EA-relevant problems via grants from those whose academic backgrounds offer greater potential to EtG than history or political science PhDs do.
Systematic undervaluing of some fields is not something I considered and slightly undermines my argument.
I still think the main problem would be identifying rising-star historians in advance instead of in retrospect.
You might not have to identify them in advance; you could identify them 10+ years into their postdoctoral careers. Googling "mid-career grant history" leads to a few links like these, where charitable or governmental foundations provide support to experienced scholars.
The American Historical Association promoted the same grant here. One could imagine a similar grant (perhaps hosted at FHI, Princeton, or another EA-experienced university [or at Rethink Priorities]) where "architectural history," "preservation-related," and other italicized words below are replaced with EA-aligned project parameters that FHI and its donors would hope to support.
One could also structure fewer grants at a higher price point than $15K (say, $50K) to fund more ambitious projects that may absorb 6-9 months of a scholar's time — rather than 2-3 months. As star scholars are identified, their funding could be renewed for multiple years. (Open Phil has certainly followed that model for rising stars and their high-potential projects. See their extension of Jade's grant funding here.)
Thanks for that clarification—maybe the $1m/year figure is distracting. I only mentioned it as an illustration of this point:
The post argues that the kind of talent valuable for direct work is rare. Insofar as that's true, the conclusion ("prefer direct work") only applies to people with rare talent.
I think that overall this is a great post, and that you've made serious progress towards concretizing some vague concerns I have about EtG.
For me, the most striking point was:
I had not heard this before reading your post, and it feels novel and useful to me. I don't think it's true for all roles, but I like it as a way to think about some roles.
Two things in the post confused me.
The purpose of hiring two people isn't just to do twice the amount of work. Two people can complement each other, creating a team which is better than the sum of their parts. Even two people with the same job title are never doing exactly the same work, and this matters in determining how much value they're adding to the firm. I think this works against the point you're making in this passage. Do you account for this somewhere else in your post, and/or do you think it affects your overall point?
You use the word "talent" a lot, and it's not clear to me what you mean by that word. Parts of the post seem to assume that talent is an identifiable quantity, in principle measurable on a single scale. I think that many (most?) real world cases don't work like this. To me, "talent" is a vast array of incommensurable qualities. Some are quantifiable, some are not. In practice, the market attempts to rectify this by (implicitly) assigning monetary value to all these quantities and adding them up—your post argues convincingly that it regularly fails to do so.
But if we can't really even measure talent to begin with, what are we even talking about when we talk about talent? What do you mean when you say "talent"?
My claim is that having one person with the skill-set of two people is more useful than having both those people. I have some sense that teams are actually rarely better than the sum of their parts, but I have not thought about this very much. I don't account for this and don't think it weakens my overall point very much.
I mean something vaguely like "has good judgement" and "if I gave this person a million dollars, I would be quite pleased with what they did with it" and "it would be quite useful for this person to spend time thinking about important things".
It is difficult to measure this property, which is why hiring talented people is difficult.
I agree I use the word talent a lot and this is unfortunate, but I couldn't think of a better word to use.
This section, in order to apply to people, seemingly assumes something like "beyond meeting their basic needs" or "beyond meeting some threshold amount."
I believe that there's a very good chance that many EA orgs are not meeting the threshold amount for many of the people they are targeting. There are many organizations offering amounts that many likely find greatly constraining to live off of.
I think this section would be more applicable if the market you were commenting on largely paid well; instead I think it is highly variable, with a sizable share of poorly paying jobs.
I am confused by EA orgs not meeting basic living thresholds. Could you provide some examples?
I am not trying to claim that EA orgs do not meet basic living thresholds, but rather that "There are many organizations offering amounts that many likely find greatly constraining to living off of."
I think it's quite common for EA job offers to be in the $40-$55k range (there are also many well above this range), with multiple instances of being significantly lower than that (e.g. $30k).
I believe that there are many that find these potential salaries to be greatly constraining.
"The 99th percentile probably isn't good enough either." If you are more than 99th-percentile talented, maybe you can give yourself a chance to earn a huge amount of money if you are willing to take on risk. Wealth is extremely fat-tailed, so this seems potentially worthwhile.
If Dustin had not been a Facebook co-founder, EA would have something like one-third of its current funding. Sam Bankman-Fried strikes me as quite talented. He originally worked at Jane Street and quit to work at a major EA org. Instead, he ended up founding the crypto exchange FTX. FTX is now valued at around a billion dollars. I am quite happy he decided against 'direct work'.
It seems difficult but not impossible to replace top talent with multiple less talented people at many EA jobs (for example charity evaluation). It seems basically impossible to replace a talented cofounder with a less talented one without decimating your odds of success. However, it is plausible that top talent should directly work on AI issues.
It is also important to note most people are not 'top talent' and so they need to follow different advice.
Hey, yo, Mark, It’s me, Charles.
So I’ve read this post and there’s a lot of important thoughts you make here.
Focusing on your takeaways and conclusion, you seem to say that earning to give is bad because buying talent is impractical.
The reasoning is plausible, but I don’t see any evidence for the conclusion you make, and there seems to be direct counterpoints you haven’t addressed.
Here’s what I have to say:
It seems we can immediately evaluate “earning to give” and the purchasing of labor for EA
There’s a very direct way to get a sense if earning to give is effective, and that’s by looking at the projects and funds where earning to give goes, such as in the Open Phil and EA Funds grants database.
Looking at these databases, I think it’s implausible for me, or most other people, to say a large fraction of projects or areas are poorly chosen. This, plus the fact that many of these groups probably can accept more money seems to be an immediate response to your argument.
These funds are particularly apt because they are where a lot of earning to give funds go to.
It seems that following your post directly implies that people who earn to give and have donated haven't been very effective. This seems implausible, as these people are often highly skilled and almost certainly think about their donations carefully. Also, meta commentary and criticism are common.
It seems easy to construct EA projects that benefit from monies and purchasable talent
We know with certainty that millions of Africans will die of malnutrition and lack of basic running water. These causes are far greater than, say, COVID deaths. In fact, the secondary effects of COVID are probably more harmful than the virus itself to these people.
The suffering is so stark that projects like simply putting up buckets of water to wash hands would probably alleviate suffering. In addition to saving lives, these projects probably help with demographic transition and other systemic, longer run effects that EAs should like.
Executing these projects would cost pennies per person.
This doesn't seem like it needs unusual skills that are hard to purchase.
Similarly, I think we could construct many other projects in the EA space that require skills like administrative, logistic, standard computer programming skills, outreach and organizational skills. All of these are available, probably by most people reading this post.
It seems implausible that market forces are ineffective
I am not of the “Chicago school of economics”, but this video vividly explains how money coordinates activity.
While blind interpretations of this idea are stupid, it seems plausible that money can cause effective altruistic activity in the same way that buying a pencil does.
Why wouldn’t we say that everyone in an organization and even the supply chain that provides clean water or malaria nets is doing effective altruism?
I also don’t get this section “Talent is very desirable”:
But in my mind, the idea of earning to give is that we have a pool of money and a pool of ex-Ante valuable EA projects. We take this money and buy labor (EA people or non-EA people) to do these projects.
The fact that this same labor can also earn money in other ways, doesn’t create some sort of grid lock, or undermine the concept of buying labor.
So, when I read most of your posts, I feel dumb.
I read your post “The Solomonoff Prior is Malign”. I wish I could also write awesome sentences like “use the Solomonoff prior to construct a utility function that will control the entire future”, but instead, I spent most of my time trying to find a wikipedia page simple enough to explain what those words mean.
Am I missing something here?
What is Mark’s model for talent?
I think one thing that would help clarify things is a clearly articulated model where talent is used in a cause area, and why money fails to purchase this.
You’re interested in AI safety, of like, the 2001 kind. While I am not right now, and not an expert, I can imagine models of this work where the best contributions would supersede slightly worse work, making even skilled people useless.
For these highest tier contributors, making sure that HAL doesn’t close the pod bay doors, perhaps all of your arguments apply. Their talent might be very expensive or require intrinsic motivation that doesn’t respond to money.
Also, maybe what you mean is another class, of an exotic “pathfinder” or leader model. These people are like Peter Singer, Martin Luther King or Stacey Abrams. It’s debatable, but perhaps it may be true these people cannot be predicted and cannot be directly funded.
However, in either of these cases, it seems that special organizations can find ways to motivate, mentor or cultivate these people, or the environment they grow up in. These organizations can be funded for money.
I don't consider Open Phil to be an example of Earning to Give. My understanding is that basically all of their funding comes from Dustin Moskovitz's Facebook stock. He completed his work on Facebook before taking the Giving Pledge, so his primary earning activities were not chosen in the spirit of Earning to Give.
It's also not clear to me that the EA Funds are examples of EtG. The EA Funds take frequent donations, and my impression is that they have many donors. At least, I don't see any evidence that the donors are purposefully Earning to Give (i.e. that they chose their jobs as a way to maximize earnings with a plan to donate).
It's possible that you and I have different definitions of EtG. Mark's post doesn't explicitly define it. Wikipedia's definition does not seem to include "normal" donors who give, say, 10% of their not-super-large income.
These examples might not be critical to your first point, but I think you would need to provide other examples of grantmakers that are more obviously funded by EtG (e.g. by evaluating Matt Wage's personal grantmaking).
Hey Charles! Glad to see that you're still around.
I don't think OpenPhil or the EA Funds are particularly funding constrained, so this seems to suggest that "people who can do useful things with money" is more of a bottleneck than money itself.
I think I disagree about the quality of execution one is likely to get by purchasing talent. I agree that in areas like global health, it's likely possible to construct scalable projects.
I am pessimistic about applying "standard skills" to projects in the EA space for reasons related to Goodhart's Law.
I think my take is "money can coordinate activity around a broad set of things, but EA is bottlenecked by things that are outside this set."
I don't think this section is very important. It is arguing that paying people less than market rate means they're effectively "donating their time". If those people were earning money, they would be donating money instead. In both cases, the amount of donations is roughly constant, assuming some market efficiency. Note that this argument is probably false because the efficiency assumption doesn't hold in practice.
I think your guesses are mostly right. Perhaps one analogy is that I think EA is trying to do something similar to "come up with revolutionary insights into fundamental physics", although that's not quite right because money can be used to build large measuring instruments, which has no obvious backwards analogue.
I agree this is true, but I claim that the current bottleneck is by far that the organizations/mentors do not yet exist. I would much rather someone become a mentor than earn money and try to hire a mentor.