All of sapphire's Comments + Replies

Is it still hard to get a job in EA? Insights from CEA’s recruitment data

Bottom line is actually 'CEA is four times as selective'. This was pointed out elsewhere but it's a big difference. 

The Future Might Not Be So Great

I find the following simple argument disturbing:

P1 - Currently, and historically, low-power beings (animals, children, old and dying people) are treated very cruelly whenever treating them cruelly benefits the powerful even in minor ways. Weak benefits for the powerful empirically justify cruelty at scale.
P2 - There is no good reason to be sure the powerful won't have even minor reasons to be cruel to the powerless (ex: suffering subroutines, human CEV might include spreading Earth-like life widely, or respect for tradition)
P3 - Inequality between agents is likely to ... (read more)

1Davidmanheim1mo
This is true, but far less true recently than in the past, and far less true in the near past than in the far past. That trajectory seems between somewhat promising and incredibly good - we don't have certainty, but I think the best guess is that in fact, it's true that the arc of history bends towards justice.
The Future Might Not Be So Great

In most cases where I am actually familiar with the facts CEA has behaved very poorly. They have both been way too harsh on good actors and failed to take sufficient action against bad actors (ex Kathy Forth). They did handle some very obvious cases reasonably though (Diego). I don't claim I would do a way better job but I don't trust CEA to make these judgments.

Critiques of EA that I want to read

There are multiple examples of EA orgs behaving badly I can't really discuss in public. The community really does not ask for much 'openness'.

Transcript of Twitter Discussion on EA from June 2022

The story is more complicated but I can't really get into it in public. Since you work at Rethink you can maybe get the story from Peter. I've maybe suggested too simplistic a narrative before. But you should chat with Peter or Marcus about what happened with Rethink and EA funding.

Transcript of Twitter Discussion on EA from June 2022

https://forum.effectivealtruism.org/posts/3c8dLtNyMzS9WkfgA/what-are-some-high-ev-but-failed-ea-projects?commentId=7htva3Xc9snLSvAkB 

"Few people know that we tried to start something pretty similar to Rethink Priorities in 2016 (our actual founding was in 2018). We (Marcus and me, the RP co-founders, plus some others) did some initial work but failed to get sustained funding and traction so we gave up for >1 year before trying again. Given that RP -2018 seems to have turned out to be quite successful, I think RP-2016 could be an example of a failed... (read more)

4MichaelStJules2mo
Interesting. I hadn't heard about this. I think EA Funds, Founders Pledge and Farmed Animal Funders didn't exist back then, and this would have been out of scope for ACE (too small, too little room for more funding, no track record) and GiveWell (they don't grant to research) at the time, so among major funders/evaluators, it pretty much would have been on Open Phil. But Open Phil didn't get into farm animal welfare until late 2015/early 2016: https://www.openphilanthropy.org/research/incoming-program-officer-lewis-bollard/ and https://www.openphilanthropy.org/grants/page/12/?q&focus-area=farm-animal-welfare&view-list=false So seeing and catching RP this early on for a new farm animal welfare team at Open Phil was plausibly a lot to ask then.
The Strange Shortage of Moral Optimizers

DXE Bay is not very decentralized. It's run by the five people in 'Core Leadership'. The leadership is elected democratically. Though there is a bit of complexity since Wayne is influential but not formally part of the leadership. 

Leadership being replaced over time is not something to lament. I would strongly prefer more uhhhh 'churn' in EA's leadership. I endorse the current leadership quite a bit and strongly prefer that several previous 'Core' members lost their elections.

note: I haven't been very involved in DXE since I left California. It's really quite concentrated in the Bay.

Transcript of Twitter Discussion on EA from June 2022

If I had to guess I would predict Luke is more careful than various other EA leaders (mostly because of Luke's ties to Eliezer). But you can look at the observed behavior of OpenPhil/80K/etc and I don't think they are behaving as carefully as I would endorse with respect to the most dangerous possible topic (besides maybe gain-of-function research, which EA would not fund). It doesn't make sense to write leadership a blank check. But it also doesn't make sense to worry about the 'unilateralist's curse' when deciding if you should buy your friend a laptop!

Transcript of Twitter Discussion on EA from June 2022

This level of support for centralization and deferral is really unusual. I actually don't know of any community besides EA that endorses it. I'm aware it's a common position in effective altruism. But the arguments for it haven't been worked out in detail anywhere I know. 

"Keep in mind that many things you might want to fund are in scope of an existing fund, including even small grants for things like laptops. You can just recommend they apply to these funds. If they don't get any money, I'd guess there were better options you would have missed but sh... (read more)

2MichaelStJules2mo
Ya, I guess I wouldn't have funded them myself in Open Phil's position, but I'm probably missing a lot of context. I think they did this to try to influence OpenAI to take safety more seriously, getting Holden on their board. Pretty expensive for a board seat, though, and lots of potential downside with unrestricted funding. From their grant writeup [https://www.openphilanthropy.org/grants/openai-general-support/]: FWIW, I trust the judgement of Open Phil in animal welfare and the EA Animal Welfare Fund a lot. See my long comment here [https://forum.effectivealtruism.org/posts/MpJcvzHfQyFLxLZNh/transcript-of-twitter-discussion-on-ea-from-june-2022?commentId=qf2uy3vwjjrjz3Ku4] .
2MichaelStJules2mo
Luke from Open Phil on net negative interventions in AI safety (maybe AI governance specifically): https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism#6yFEBSgDiAfGHHKTD
Transcript of Twitter Discussion on EA from June 2022

That doesn't really engage with the argument. If some other agent is values-aligned and approximately equally capable why would you keep all the resources? It doesn't really make sense to value 'you being you' so much.

I don't find donor lotteries compelling. I think resources in EA are way too concentrated. 'Deeper investigations' is not enough compensation for making power imbalances even worse.

1MichaelStJules2mo
I'm not suggesting you keep all the resources, I'm suggesting you give them to someone even more capable (and better informed) than someone equally capable, to increase the probability that they'll be directly allocated by someone more capable. Keep in mind that many things you might want to fund are in scope of an existing fund, including even small grants for things like laptops. You can just recommend they apply to these funds. If they don't get any money, I'd guess there were better options you would have missed but should have funded first. You may also be unaware of ways it would backfire, and the reason something doesn't get funded is because others judge it to be net negative. We get into unilateralist curse territory. There are of course cases where you might have better info about an opportunity, but this should be balanced against having worse info about other opportunities. Of course, if you are very capable, then plausibly you should join a fund as a grantmaker or start your own or just make your own direct donations, but you'd want to see what other grantmakers are and aren't funding and why, or where their bar for cost-effectiveness is and what red flags they use, at least.
Transcript of Twitter Discussion on EA from June 2022

I think the Aumann/outside-view argument for 'giving friends money' is very strong. Imagine your friend is about as capable and altruistic as you. But you have way more money. It just seems rational and efficient to make the distribution of resources more even? This argument does not at all endorse giving semi-random people money.

7Lukas_Gloor2mo
I've also had this thought (though wouldn't necessarily have thought of it as an outside view argument). I'm not convinced by counterarguments here in the thread so far. Quoting from a reply below that argues for deferring to grantmakers (and thereby increasing their overhead with them getting more applications): >You may also be unaware of ways it would backfire, and the reason something doesn't get funded is because others judge it to be net negative. I mean, that's true in theory, but giving people who you know well (so have a comparative advantage at evaluating their character and competence) some extra resources isn't usually a high-variance decision. Sure, if one of your friends had a grand plan for having impact in the category of "tread carefully," then you probably want to consult experts to make sure it doesn't backfire. But you also want to talk to your friend/acquaintance to slow down in general, in that case, so it isn't a concept that only or particularly applies to whether to give them resources. And for many or even most people who work on EA topics, their work/activities don't come with high backfiring risks (at least I tentatively think so, even though I might agree with the statement "probably >10% of people in EA have predictably negative impact." Most people who have negative impact have low negative impact.) >This would be like the opposite of the donor lottery, which exists to incentivize fewer deeper independent investigations over more shallow investigations. I think both things are valuable. You can focus on comparative advantages and reducing overhead, or you can focus on benefits from scale and deep immersion. One more thought on this: If someone is inexperienced with EA and feels unsuited for any grantmaking decisions, even in areas where they have local information that grantmakers lack, it makes more sense for them to defer. However, it gets tricky. They'll also tend to be bad at deciding who to defer to. So, yeah, they can reduc
1MichaelStJules2mo
This would be like the opposite of the donor lottery, which exists to incentivize fewer deeper independent investigations over more shallow investigations. You could also give it to someone far better informed about and a better judge of giving opportunities. I think grantmakers are in this position relative to the majority of EAs, but you would be increasing funding/decision-making concentration.
2rogersbacon13mo
"To register, please email info@theseedsofscience.org with your name, title (can be anything/optional), institution (same as title), and link (personal website, twitter, or linkedin is fine) for your listing on the gardeners page [https://www.theseedsofscience.org/gardeners]. From there, it's pretty self-explanatory - I will add you to the mailing list and send you an email that includes the manuscript, our publication criteria, and a simple review form for recording votes/comments."
Solving the replication crisis (FTX proposal)

What was the approximate budget? When I read this my first thought was 'did they ask for a super ton of money and get rejected on that basis'?

3Michael_Wiebe4mo
Around a million.
Some thoughts on vegetarianism and veganism

Effective altruists talk a lot about cooperation but I actually think it's sort of pathologically uncooperative to eat meat. It seems pretty hard to dismiss the arguments for veganism. Lots of provably well-informed EAs (in the sense they could score well on a test of EA knowledge) are going to settle on 'don't personally participate in enormous moral outrages'. Why would you personally make the problem worse? And make the social situation worse for vegan EAs. It's a very serious breach of solidarity. (Though maybe using the term solidarity outs me as a leftie)

Agrippa's Shortform

As a recent counterpoint to some collaborationist messages: https://forum.effectivealtruism.org/posts/KoWW2cc6HezbeDmYE/greg_colbourn-s-shortform?commentId=Cus6idrdtH548XSKZ

"It was disappointing to see that in this recent report by CSET, the default (mainstream) assumption that continued progress in AI capabilities is important was never questioned. Indeed, AI alignment/safety/x-risk is not mentioned once, and all the policy recommendations are to do with accelerating/maintaining the growth of AI capabilities! This coming from an org that OpenPhil has give... (read more)

1Agrippa6mo
yeah this is really alarming and aligns with my least possible charitable interpretation of my feelings / data.
2Greg_Colbourn6mo
I'm comfortable publicly criticising big orgs (I feel that I am independent enough for this), but would be less comfortable publicly criticising individual researchers (I'd be more inclined to try and persuade them to change course toward alignment; I have been trying to sow some seeds in this regard recently with some people keen on creating AGI that I've met).
FTX EA Fellowships

Why is the stipend 10K? How are these numbers chosen? Funds are not exactly tight on this scale. I understand that EA wants to filter for dedicated people. But I feel like these really low pay/stipends should be justified a little more explicitly. Wouldn't it make sense to offer more money, since 'moving to the Bahamas for 6 months' is not exactly a low-cost decision for many people? [I am aware the EA hotel gets people despite a lack of very generous stipends]

Isn't the $10k additional to the salary that you'll keep receiving from your employer?

It's 10k plus travel plus housing plus co-working space, so it sounds like other than food basically the whole 10k would be disposable income. Potentially the housing provides food also. I'm not sure what cost of living is like in the Bahamas but that hardly sounds like "really low pay"

Starting a Small Charity to Give Grants

I am aware but the benefit is still quite large.

Starting a Small Charity to Give Grants

Oh wow. This was super informative. Thanks so much.

Starting a Small Charity to Give Grants

This would have been way better than holding everything in my individual account. But it doesn't let you make 'grants' to individuals. We need something like a smaller version of EA Funds.

What would you do if you had half a million dollars?

I would give it, unrestricted, to individual EAs you trust. 

Why should we be effective in our altruism?

When I was small I needed help. Instead, I was treated very badly. Many people and animals need help right now. We have to help as many of them as possible. 

Earlier in our relationship, I told my wife that we should legally marry other people so they could move to the USA. She is usually quite open-minded but she very much hated the plan so we never did it. 

I am very big on living out your values. If you are a citizen of a highly desired country you can help make open borders a reality. I encourage you to consider this in who you legally marry. This is especially relevant if you are poly. There are a lot of versions that differ quite a bit in terms of risk. 

Good luck.

Vitalik Buterin just donated $54M (in ETH) to GiveWell

Groups are not public. Here is an example from 'EAs in crypto'.  The original thread was in 'highly speculative EA investing'. The EAs in crypto thread got the most engagement.

note: Anthony Deluca is my deadname (I wasn't out when Greg wrote this comment; he didn't deadname me). Greg is a well-known EA.

6IanDavidMoss1y
Amazing! It seems not-totally-crazy to think you may have had a hand in this :)
Vitalik Buterin just donated $54M (in ETH) to GiveWell

Multiple people connected to the LessWrong/EA investing groups tried to contact him. We both contacted him directly and got some people closer to Vitalik to talk to him. I am unsure how much influence we had. He donated less than two days after the Facebook threads went up.

We definitely tried!

5IanDavidMoss1y
What Facebook threads are you referring to?
Being Vocal About What Works

I am not sure Effective Altruism has been a net hedonic positive for me. In fact, I think it has not been. 

Recently in order to save money to donate more, I chose to live in very cheap housing in California. This resulted in many serious problems. Looking back arguably the biggest problem was the noise. If you cram a bunch of people into a house it's going to be noisy.  This very badly affected my mental health. There were other issues as well. My wife and I could have afforded a much more expensive place. That would have been money very well spe... (read more)

Thanks for sharing your experiences. I think it's valuable to get anecdata on downsides so people have clearer expectations going in.

Should you do a PhD in science?

It is hard for me to think of much advice that has gone worse for rationalists/EAs on average than 'Get a PhD'. I know dozens of people in the community who spent at least some time in a PhD program. A huge number of them express strong regret. A small number of people think their PhD went ok. Very few people think their PhD was a great and effective idea. Notably, I am only counting people who have already left grad school and had some time to reflect. 

The track record is incredibly bad in the community. The opportunity cost is ex... (read more)

Having talked to many people for multiple hours (>100) over the years about their career decisions, I share this assessment. 

Sapphire's Shortform

I don't like when animal advocates are too confident about their approach and are critical of other advocates. We are losing badly, meat consumption is still skyrocketing! Now is time to be humble and open-minded. Meta-advice: Don't be too critical of the critical either!

What does failure look like?

My biggest mistake was not buying, and holding, crypto early. This was an extremely costly mistake. If I bought and held I would have hundreds of millions of dollars that could have been given as grants. I doubt I will ever make such a costly mistake again.

Going to graduate school was a very bad decision too. After 2.5 years I had to take my L and get out. It was very painful to admit I had been wrong but that is life.

Mundane trouble with EV / utility

The problem is real. Though for 'normal' low probabilities I suggest biting the bullet. A practical example is the question of whether to found a company. If you found a startup you will probably fail and make very little or no money. However, right now a majority of effective altruist funding comes from Facebook co-founder Dustin Moskovitz. The tails are very thick. 

If you have a high-risk plan with a sufficiently large reward I suggest going for it even if you are overwhelming likely to fail. Taking the risk is the most altruistic thing you can do. ... (read more)
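The thick-tails point can be made concrete with a toy expected-value calculation. Every number below is invented purely for illustration, not taken from any real career data:

```python
# Toy expected-value comparison; all numbers are hypothetical.
p_success = 0.05              # assumed chance the startup pays off big
startup_payoff = 50_000_000   # assumed payoff (donatable) in the success case
salary_donations = 1_000_000  # assumed near-certain donations from a salaried path

# Even with a 95% chance of roughly zero, the fat tail dominates.
ev_startup = p_success * startup_payoff
print(ev_startup)                     # 2500000.0
print(ev_startup > salary_donations)  # True
```

Under these made-up numbers the startup path has higher expected donations despite an overwhelming chance of failure, which is the shape of the argument in the paragraph above.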

What Makes Outreach to Progressives Hard

Really cool to learn about Resource Generation. These fellows are hardcore. I promote the following to EA-type people:
-- Donate at least 10% of pre-tax income (I am above this)
-- Be as frugal as you can. Certainly don't spend more than could be supported by the median income in your city. 
-- Once you have at least ~500K net worth give away all additional income. In my opinion, 500K is enough to fund a lean retirement if you are willing to accept a little risk. 

--If you get a big windfall I suggest either putting it in a trust or just earmarking i... (read more)
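As a rough check on the ~500K figure, here is the arithmetic under the commonly cited 4% withdrawal-rate heuristic; both the rate and the savings number are assumptions, and a lean retirement at this level still carries some risk:

```python
# Sanity check on the lean-retirement claim; numbers are assumptions.
savings = 500_000
withdrawal_rate = 0.04  # common "safe withdrawal" heuristic; not guaranteed

annual_income = savings * withdrawal_rate
print(annual_income)  # 20000.0
```

Roughly $20K/year of sustainable income is lean but livable in many places, which is why the post treats 500K as an acceptable-risk retirement floor.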

What Makes Outreach to Progressives Hard

From this perspective, a corporate lawyer who went to Harvard is not a class traitor. They are just acting in their own class interests.

1[comment deleted]1y
What Makes Outreach to Progressives Hard

I think of the intersectionality/social justice/anti-oppression cluster as being a bit more specific than just 'progressive' so I will only discuss the specific cluster. Through activism, I met many people in this cluster. I myself am quite sympathetic to the ideology. 

But I have to ask: How do you hold this ideology while attending Harvard Law? From this perspective, Harvard law is a seat of the existing oppressive power structure and you are choosing to become part of this power structure by attending. The privileges that come from attending Harvard... (read more)

Based on my experiences as a Yale undergraduate, I've come away with the perhaps overly pessimistic conclusion that a lot of class-privileged leftists at Ivy+ schools don't actually resolve that contradiction, and are unfortunately not that interested in interrogating and addressing their class privilege, or thinking about redistributing what familial or future wealth / resources they may have access to. I say this as both a former organizer of Yale EA, but also as someone who started a Resource Generation chapter there, and found it difficult to get peopl... (read more)

Ironically, the situation in which I have most frequently been asked about whether EA is elitist is while giving intro talks about EA at MIT, Yale, etc.

9deleted1y
Not OP or at Harvard Law but anecdotally I know plenty of people who would consider themselves to be leftists, fit in the anti-oppression cluster, but wouldn't think that just going to Harvard Law makes you a class traitor. I think for many it would depend on what the Harvard Law grad actually did as a profession, eg - are you a corporate lawyer (class traitor) or a human rights lawyer (not class traitor). That being said, I also think that the mainstreaming of social justice issues means that increasing numbers of people in the intersectionality/anti-oppression cluster don't know about / care about / support ideas about class struggle and class war, so aren't really 'leftists' in that sense of the word.
What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings?

Parasitic wasps may be the most diverse group of animals (beating out beetles). In some environments, a shocking fraction of prey insects are parasitized.

 If you value 'life' you should probably keep humans around so we can spread life beyond earth. The expected amount of life in the Galaxy seems much higher if humans stick around. Imo the other logical position is 'blow up the sun'. Don't just take out the humans, take out the wasps too. The earth is full of really horrible suffering and if the humans die out then wasp parasitism will probably ... (read more)

Money Can't (Easily) Buy Talent

"The 99th percentile probably isn't good enough either." If you are more than 99th-percentile talented maybe you can give yourself a chance to earn a huge amount of money if you are willing to take on risk. Wealth is extremely fat-tailed so this seems potentially worthwhile.

If Dustin had not been a Facebook co-founder EA would have something like one-third of its current funding. Sam Bankman-Fried strikes me as quite talented. He originally worked at Jane Street and quit to work at a major EA org. Instead, he ended up founding the crypto exchange FTX. FT... (read more)

Careers Questions Open Thread

You should take the quant role imo. Optionality is valuable (though not infinitely so). Quant trading gives you vastly more optionality. If trading goes well but you leave the field after five years you will have still gained a large amount of experience and donated/saved a large amount of capital. It's not unrealistic to try for 500K donated and 500K+ saved in that timeframe, especially since firms think you are unusually talented. If you have five hundred thousand dollars, or more, saved you are no longer very constrained by finances. Five hundred thousa... (read more)

AI PhDs tend to be very well-compensated after graduating, so I don't think personal financial constraints should be a big concern on that path.

More generally, skill in AI is going to be upstream of basically everything pretty soon; purely in terms of skill optionality, this seems much more valuable than being a quant.

What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent?

I am a rather strong proponent of publishing credible accusations and calling out community leadership if they engage in abuse enabling behavior. I published a long post on Abuse in the Rationality/EA Community. I also publicly disclosed details of a smaller incident. People have a right to know what they are getting into. If community processes are not taking abuse seriously in the absence of public pressure then information has to be made public. Though anyone doing this should be careful.

Several people are discussing allegations of DXE being abusive an... (read more)

How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna?

Good point that Open Phil makes all donations public. I found a CSV on their site and added up the donations dated 2018/2019/2020.

2018: $190,477,938

2019: $273,279,362

2020 so far: $145,405,362
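Totals like these come from grouping the grants CSV by year. A minimal sketch of that computation, with hypothetical sample rows; the real export's column names (`Date`, `Amount` here) and formats may differ:

```python
import csv
import io
from collections import defaultdict

# Hypothetical rows in the rough shape of a grants CSV export;
# real column names and values may differ.
csv_text = """Date,Amount
2018-03-01,1000
2018-07-15,2500
2019-01-10,4000
2020-05-05,500
"""

# Sum grant amounts by the year prefix of the ISO date.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(csv_text)):
    year = row["Date"][:4]
    totals[year] += int(row["Amount"])

print(dict(totals))  # {'2018': 3500, '2019': 4000, '2020': 500}
```

For the real file you would read from disk instead of a string, and may need to strip `$` and commas from the amount field before converting to a number.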

This is a really useful answer.

What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent?

I am a member of DXE and have interacted with Wayne. I think if you care about animals the amount of QALYs gained would be massive. In general Wayne has always seemed like a careful, if overly optimistic, thinker to me. He always tries to follow good leadership practices. Even if you are not concerned with animal welfare I think Wayne would be very effective at advancing good policies.

Wayne being mayor would result in huge improvements for climate change policy. Having a city with a genuine green policy is worth a lot of QALYs. My only real complaint about Wayne is that he is too optimistic but that isn't the most serious issue for a Mayor.

This answer would be strengthened by one or two examples of his careful thinking, or especially by a counterpoint to the claim that DxE uses psychological manipulation techniques on its members.

I haven't checked the claims myself, but "follow good leadership practices" seems to be a heavily disputed claim. Some people claim DxE is a cult, see e.g. here.

Replaceability Concerns and Possible Responses

Why do you think orgs labelled 'effective altruist' get so much talent applying while other orgs don't? How big do you think the difference is? I am somewhat informed about the job market in animal advocacy. It does not seem nearly as competitive as the EA job market. But I am not sure of the magnitude of the difference for the replaceability analysis.

2Kirsten2y
I think organisations labelled 'Effective Altruist' are more prestigious amongst our friends. People like to work places that are widely recognised and hard to get in to, don't they? I'm not sure how many applicants these other organisations receive, though.
Thoughts on 80,000 Hours’ research that might help with job-search frustrations

Really good article. I have been critical of 80K Hours in the past but this article caused me to substantially update my views. I am happy to hear you will be at 80K Hours.

What to do with people?

I think we are pretty far from exhausting all the good giving opportunities. And even if all the highly effective charities are filled, something like GiveDirectly can be scaled up. It is possible in the future we will eventually get to the point where there are so few people in poverty that cash transfers are ineffective. But if that happens there is nothing to be sad about. The marginal value of donations will go down as more money flows into EA. That is an argument for giving more now. A future where marginal EA donations are ineffective is a very good future.

Yeah, GiveDirectly feels like the kind of thing that could take hundreds of millions or billions of dollars. If we ever do run out of funding opportunities, which I don't think we will any time soon, that's a really good problem to have.

8Raemon3y
Nod. My comment wasn't intended to be an argument against, so much as "make sure you understand that this is the world you're building" (and that, accordingly, you make sure your arguments and language don't depend on the old world) The traditional EA mindset is something like "find the charities with the heavy tails on the power law distribution." The Agora mindset (Agora was an org I worked at for a bit, that evolved sort of in parallel to EA) was instead "find a way to cut out the bottom 50% on charities and focus on the top 50%", which at the time I chafed at but I appreciate better now as the sort of thing you automatically deal with when you're trying to build something that scales. I do think we're *already quite close* to the point where that phase transition needs to happen. (I think people who are very thoughtful about their donations can still do much better than "top 50%", but "be very thoughtful" isn't a part of the thing that scales easily)
After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
Does the high difficulty of getting a job at an EA organization mean we should stop promoting EA? (What are the EA movement's current bottlenecks?)

Promoting donations or Earning to Give seems fine. I think we should stop promoting 'EA is talent constrained'. There is a sense in which EA is 'talent constrained'. But the current messaging around 'EA is talent constrained' consistently misleads people, even very informed people such as the OP and some of the experts who gave him advice. On the other hand EA can certainly abs... (read more)

2Richenda3y
Couldn't agree more!
Simultaneous Shortage and Oversupply

At least some people at OpenAI are making a ton of money: https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-openai.html. Of course not everyone is making that much but I doubt salaries at OpenAI/DeepMind are low. I think the obvious explanation is the best one. These companies want to hire top talent. Top talent is hard to find.

The situation is different for organizations that cannot afford high salaries. Let me link to Nate's explanation from three years ago:

I want to push back a bit against point #1 ("Let's
... (read more)
3Davidmanheim4y
I don't think this is quite right. The people working at OpenAI are paid well, but at the same time they are taking huge cuts in salary compared to where they could be working otherwise. (Goodfellow and Sutskever could be making millions anywhere.) And given the distribution of salary, it's very likely that the majority of both OpenAI and Deepmind researchers are making under $200k - not a crazy amount for Deep Learning talent nowadays.
Earning to Save (Give 1%, Save 10%)

Great Comment. Thanks for the detailed explanation. This was especially useful for me to understand your model:

Early stage projects need a variety of skills, and just being median-competent is often enough to get them off the ground. Basically every project needs a website and an ops person (or, better – a programmer who uses their power to automate ops). They often need board members and people to sit in boring meetings, handle taxes and bureaucracy.
I think this is quite achievable for the median EA.
Earning to Save (Give 1%, Save 10%)

I feel like this post illustrates large inferential gaps. In my experience trying to work in EA works out for only a rather small number of people. I certainly don't recommend it. Let me quote something I posted on the 80K Hours thread:

80K Hours' advice seems aimed, perhaps implicitly, at extremely talented people. I would roughly describe the level of success/talent as 'top half of Oxford'. If you do not have that level of ability, then the recommended career paths are going to be long shots at best. Most people are not realistically
... (read more)
1dpiepgrass5mo
Odd, I never felt that way while reading 80,000 hours, even though it always felt like they (and EA in general) are seeking out people who are as smart as possible and positioned to earn a lot. But if I could make 200K and donate 60K, I'd consider it a huge success!
9Raemon4y
For completeness sake, responding more in depth to your 80k comment. (It's plausible this should go in the other 80k post-thread but it seemed just as much part of this conversation. shrug) Disclaimer Re: 80k I haven't read 80k very thoroughly and am not sure whether I endorse their advice or if my picture of their overall advice is accurate. But what advice I've seen does seem like it's aiming to fill a fairly narrow set of top-vacancies. And that it does seem pretty alienating if you're not part of their demographic. This doesn't necessarily mean 80k should change focus – the top career paths are still highly important to fill and they have limited time. But I do think it probably means 80k style advice shouldn't be the only/primary place we direct newcomer's attention. My own take on what kind of direct work is advisable is still a probably a bit depressing – I don't think there are easy answers on how to help, and it'd be hard to scale across 10,000s of people. [It's possible 80k actually shares these views, or even that they're listed on the website, I haven't checked] My take: [edit: updated because I didn't quite address deluks917's points as worded] I think the issues getting into EA Direct Work has less do with how skilled you need to be, and more to do with limitations in network bandwidth. There is some agentiness needed to get involved, but a) I think agency is a learnable skill, b) the amount required is less than you might think. If you can successfully get yourself into the EA network, then you can be aware of early stage projects forming. Early stage projects need a variety of skills, and just being median-competent is often enough to get them off the ground. Basically every project needs a website and an ops person (or, better – a programmer who uses their power to automate ops). They often need board members and people to sit in boring meetings, handle taxes and bureaucracy. I think this is quite achievable for the median EA. Early sta
5Raemon4y
So I have a mixture of agreements and disagreements with your quoted comment (minor meta point: I recommend formatting it such that it's a blockquote to make it easier to see which section is which) I'll summarize my own version of that comment in a bit (the tldr of which is "it's not as bad as you describe it, but yeah, it's still pretty bad"). But I don't think the applicability hinges on the specifics of your comment. Instead, I'd argue: Earn-to-save is relevant to a much broader swath of people. Even if you're just trying to Earn-to-Give ultimately, it's still much more important to seek out higher paying jobs than to donate when you're at at a low-to-mid-paying job. This is relevant even if you're "just" moving from $50k to $80k. My biggest crux here is that having 2 years of runway is important even for switching jobs at that level, and I think this should dominate even within your framework (at least by my understanding of your position). Meanwhile, I'd make a more speculative claim which is that while yes, most people probably won't end up getting a Direct Impact career, the people that do still have enough expected value that that early EAs should at least be seriously considering that possibility. (I very much don't think you need to be top-half of oxford to for direct work to be better than earning to give)
Earning to Save (Give 1%, Save 10%)

I feel like your post would be harder to misunderstand if it included some hard numbers. In particular hard numbers on income.

8Raemon4y
I do think the post would be much improved if it went into details with more numbers and cases (I definitely did a low effort version of the post). But my core point was actually subtly different from mingyuan's, and I think the numbers that would support my point are a different sort than "how much money can people afford to donate?" (not sure which type of numbers you meant to imply) Mingyuan's case is one of the things I was trying to solve for. But the more important underlying claim was "it's more important to have at least a year of runway in the bank than it is to get started donating heavily." (This is essentially what 80k is already recommending [https://80000hours.org/2015/11/why-everyone-even-our-readers-should-save-enough-to-live-for-6-24-months/] as Ben Todd notes elsewhere in thread. Their current post argues to donate 1% until you have 6-12 months of runway, and runway includes moving in with parents. I'd argue for a stronger claim that recommends 12-36 months and living with parents doesn't count, but the basic principle is the same) This obviously only makes sense as EA advice if there's a part 2, where you actually do something with the money (be it donate, or actually use the runway to switch jobs or move cities or retrain skills). My suggested numbers of Earning to Save weren't an attempt to rigorously determine the optimal financial advice, they were mostly starting from the point of "we currently encourage people to donate 10%. and instead I basically think the upfront advice should switch that 10% to focus on savings until they have enough runway." The numbers that'd support this don't have much to do with how much you can easily live on, and instead have more to do with "how strong are the benefits to switching careers, how likely are people to run into financial hardship, how long does it typically take to get a new job, how valuable is it to try and launch a major project or contribute to EA with direct work." I admittedly don't h
Earning to Save (Give 1%, Save 10%)

I feel like you are generalizing from a small sample of very dedicated EAs. In my opinion the data does not support 'EAs have often prioritized giving 10% and living frugally *too* heavily'. See data here: https://forum.effectivealtruism.org/posts/S2ypk8fsHFrQopvyo/ea-survey-2017-series-donation-data.

The median donation percentage among EAs who reported $10K+ income was only 4.28%. The following example you give is not typical: 'For example, I've heard from some of the early Australian EAs that when EA was just starting out they all live... (read more)

Earning to Save (Give 1%, Save 10%)

55K is, rather surprisingly, more than the median household income in NYC. 46K is 9K less than 55K. And the hypothetical person making 55K was only donating + saving 11K a year. Though I still think if you are making 46K you could afford to donate and save substantially more than 10% / 1% of discretionary.

The bigger crux is I want to pushback on the idea that the average individual making more than the local median household, and living in one of the richest societies on the planet, cannot afford to be generous.

I want to pushback on the idea that the average individual making more than the local median household, and living in one of the richest societies on the planet, cannot afford to be generous.

I don't think that's a good or charitable reading of what Ray's saying. I think the core idea is that EAs have often prioritized giving 10% and living frugally *too* heavily, to the point where it interferes with their long-term potential. This seems like a case where the law of equal and opposite advice is coming into play - while it's true that mo... (read more)

3Raemon4y
Hmm. This feels like it's reading more or different things into the post than I intended to convey.