The bottom line is actually 'CEA is four times as selective'. This was pointed out elsewhere, but it's a big difference.
I find the following simple argument disturbing:

P1 - Currently, and historically, low-power beings (animals, children, old dying people) are treated very cruelly if treating them cruelly benefits the powerful even in minor ways. Weak benefits for the powerful empirically justify cruelty at scale.

P2 - There is no good reason to be sure the powerful won't have even minor reasons to be cruel to the powerless (ex: suffering subroutines, human CEV might include spreading Earth-like life widely or respect for tradition).

P3 - Inequality between agents is likely to ... (read more)
In most cases where I am actually familiar with the facts, CEA has behaved very poorly. They have both been way too harsh on good actors and failed to take sufficient action against bad actors (e.g. Kathy Forth). They did handle some very obvious cases reasonably, though (Diego). I don't claim I would do a way better job, but I don't trust CEA to make these judgments.
There are multiple examples of EA orgs behaving badly I can't really discuss in public. The community really does not ask for much 'openness'.
The story is more complicated, but I can't really get into it in public. Since you work at Rethink, you can maybe get the story from Peter. I've maybe suggested too simplistic a narrative before. But you should chat with Peter or Marcus about what happened with Rethink and EA funding.
https://forum.effectivealtruism.org/posts/3c8dLtNyMzS9WkfgA/what-are-some-high-ev-but-failed-ea-projects?commentId=7htva3Xc9snLSvAkB "Few people know that we tried to start something pretty similar to Rethink Priorities in 2016 (our actual founding was in 2018). We (Marcus and me, the RP co-founders, plus some others) did some initial work but failed to get sustained funding and traction so we gave up for >1 year before trying again. Given that RP -2018 seems to have turned out to be quite successful, I think RP-2016 could be an example of a failed... (read more)
DXE Bay is not very decentralized. It's run by the five people in 'Core Leadership'. The leadership is elected democratically, though there is a bit of complexity since Wayne is influential but not formally part of the leadership.
Leadership being replaced over time is not something to lament. I would strongly prefer more uhhhh 'churn' in EA's leadership. I endorse the current leadership quite a bit and strongly prefer that several previous 'Core' members lost their elections.
Note: I haven't been very involved in DXE since I left California. It's really quite concentrated in the Bay.
If I had to guess, I would predict Luke is more careful than various other EA leaders (mostly because of Luke's ties to Eliezer). But you can look at the observed behavior of OpenPhil/80K/etc., and I don't think they are behaving as carefully as I would endorse with respect to the most dangerous possible topic (besides maybe gain-of-function research, which EA would not fund). It doesn't make sense to write leadership a blank check. But it also doesn't make sense to worry about the 'unilateralist's curse' when deciding if you should buy your friend a laptop!
This level of support for centralization and deferral is really unusual. I actually don't know of any community besides EA that endorses it. I'm aware it's a common position in effective altruism, but the arguments for it haven't been worked out in detail anywhere I know of. "Keep in mind that many things you might want to fund are in scope of an existing fund, including even small grants for things like laptops. You can just recommend they apply to these funds. If they don't get any money, I'd guess there were better options you would have missed but sh... (read more)
That doesn't really engage with the argument. If some other agent is values-aligned and approximately equally capable, why would you keep all the resources? It doesn't really make sense to value 'you being you' so much.
I don't find donor lotteries compelling. I think resources in EA are way too concentrated. 'Deeper investigations' is not enough compensation for making power imbalances even worse.
I think the Aumann/outside-view argument for 'giving friends money' is very strong. Imagine your friend is about as capable and altruistic as you, but you have way more money. It just seems rational and efficient to make the distribution of resources more even. This argument does not at all endorse giving semi-random people money.
Where is the article?
What was the approximate budget? When I read this, my first thought was: 'Did they ask for a super ton of money and get rejected on that basis?'
Effective altruists talk a lot about cooperation, but I actually think it's sort of pathologically uncooperative to eat meat. It seems pretty hard to dismiss the arguments for veganism. Lots of provably well-informed EAs (in the sense that they could score well on a test of EA knowledge) are going to settle on 'don't personally participate in enormous moral outrages'. Why would you personally make the problem worse, and make the social situation worse for vegan EAs? It's a very serious breach of solidarity. (Though maybe using the term solidarity outs me as a leftie.)
As a recent counterpoint to some collaborationist messages: https://forum.effectivealtruism.org/posts/KoWW2cc6HezbeDmYE/greg_colbourn-s-shortform?commentId=Cus6idrdtH548XSKZ
"It was disappointing to see that in this recent report by CSET, the default (mainstream) assumption that continued progress in AI capabilities is important was never questioned. Indeed, AI alignment/safety/x-risk is not mentioned once, and all the policy recommendations are to do with accelerating/maintaining the growth of AI capabilities! This coming from an org that OpenPhil has give... (read more)
I don't get it. https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang?commentId=o58cMKKjGp87dzTgx
I won't associate with people doing serious capabilities research.
Why is the stipend 10K? How are these numbers chosen? Funds are not exactly tight on this scale. I understand that EA wants to filter for dedicated people. But I feel like these really low pay/stipends should be justified a little more explicitly. Wouldn't it make sense to offer more money, since moving to the Bahamas for 6 months is not exactly a low-cost decision for many people? [I am aware the EA Hotel gets people despite a lack of very generous stipends.]
Isn't the $10k additional to the salary that you'll keep receiving from your employer?
It's 10k plus travel plus housing plus co-working space, so it sounds like, other than food, basically the whole 10k would be disposable income. Potentially the housing provides food also. I'm not sure what the cost of living is like in the Bahamas, but that hardly sounds like "really low pay".
I am aware but the benefit is still quite large.
Oh wow. This was super informative. Thanks so much.
This would have been way better than holding everything in my individual account. But it doesn't let you make 'grants' to individuals. We need something like a smaller version of EA Funds.
I would unrestrictedly give it to individual EAs you trust.
When I was small I needed help. Instead, I was treated very badly. Many people and animals need help right now. We have to help as many of them as possible.
Earlier in our relationship, I told my wife that we should legally marry other people so they could move to the USA. She is usually quite open-minded but she very much hated the plan so we never did it.
I am very big on living out your values. If you are a citizen of a highly desired country, you can help make open borders a reality. I encourage you to consider this in deciding who you legally marry. This is especially relevant if you are poly. There are a lot of versions of this that differ quite a bit in terms of risk.
Groups are not public. Here is an example from 'EAs in crypto'. The original thread was in 'highly speculative EA investing'. The EAs in crypto thread got the most engagement.
Note: Anthony Deluca is my deadname (I wasn't out when Greg wrote this comment; he didn't deadname me). Greg is a well-known EA.
Multiple people connected to the LessWrong/EA investing groups tried to contact him. We both contacted him directly and got some people closer to Vitalik to talk to him. I am unsure how much influence we had. He donated less than two days after the Facebook threads went up.
We definitely tried!
I am not sure Effective Altruism has been a net hedonic positive for me. In fact, I think it has not been.
Recently in order to save money to donate more, I chose to live in very cheap housing in California. This resulted in many serious problems. Looking back arguably the biggest problem was the noise. If you cram a bunch of people into a house it's going to be noisy. This very badly affected my mental health. There were other issues as well. My wife and I could have afforded a much more expensive place. That would have been money very well spe... (read more)
Thanks for sharing your experiences. I think it's valuable to get anecdata on downsides so people have clearer expectations going in.
It is hard for me to think of much advice that has gone worse for rationalists/EAs on average than 'get a PhD'. I know dozens of people in the community who spent at least some time in a PhD program. A huge number of them express strong regret. A small number of people think their PhD went OK. Very few people think their PhD was a great and effective idea. Notably, I am only counting people who have already left grad school and had some time to reflect.
The track record is incredibly bad in the community. The opportunity cost is ex... (read more)
Having talked to many people for multiple hours (>100) over the years about their career decisions, I share this assessment.
I don't like when animal advocates are too confident about their approach and are critical of other advocates. We are losing badly; meat consumption is still skyrocketing! Now is the time to be humble and open-minded. Meta-advice: don't be too critical of the critical either!
My biggest mistake was not buying, and holding, crypto early. This was an extremely costly mistake. If I had bought and held, I would have hundreds of millions of dollars that could have been given as grants. I doubt I will ever make such a costly mistake again.
Going to graduate school was a very bad decision too. After 2.5 years I had to take my L and get out. It was very painful to admit I had been wrong but that is life.
The problem is real, though for 'normal' low probabilities I suggest biting the bullet. A practical example is the question of whether to found a company. If you found a startup, you will probably fail and make very little or no money. However, right now a majority of effective altruist funding comes from Facebook co-founder Dustin Moskovitz. The tails are very thick.
If you have a high-risk plan with a sufficiently large reward, I suggest going for it even if you are overwhelmingly likely to fail. Taking the risk is the most altruistic thing you can do. ... (read more)
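The fat-tails logic here is just an expected-value comparison; a minimal sketch, where every number is made up purely for illustration (the comment doesn't give figures):

```python
# Illustrative expected-value comparison between a "safe" career path and a
# fat-tailed one. All of these numbers are assumptions made up for the example.
p_success = 0.01             # assumed chance the startup succeeds big
big_payoff = 1_000_000_000   # assumed donation potential on success
safe_donations = 1_000_000   # assumed lifetime donations on the safe path

ev_startup = p_success * big_payoff
print(ev_startup)                   # 10000000.0
print(ev_startup > safe_donations)  # True: the fat tail dominates
```

Even with a 99% chance of total failure, the expected value of the risky path can swamp the safe one, which is the sense in which "the tails are very thick".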
Really cool to learn about Resource Generation. These fellows are hardcore. I promote the following to EA-type people:
-- Donate at least 10% of pre-tax income (I am above this).
-- Be as frugal as you can. Certainly don't spend more than could be supported by the median income in your city.
-- Once you have at least ~500K net worth, give away all additional income. In my opinion, 500K is enough to fund a lean retirement if you are willing to accept a little risk.
--If you get a big windfall I suggest either putting it in a trust or just earmarking i... (read more)
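The ~500K lean-retirement figure can be sanity-checked with a safe-withdrawal-rate calculation; the 4% and 3% rates below are standard rules of thumb assumed for illustration, not rates specified in the comment:

```python
# Sanity check of the ~500K "lean retirement" figure via a safe withdrawal
# rate (SWR). The 4% and 3% rates are common rules of thumb, assumed here
# for illustration only.
def annual_income(net_worth: float, swr: float) -> float:
    """Sustainable annual withdrawal at a given safe withdrawal rate."""
    return net_worth * swr

nest_egg = 500_000
print(annual_income(nest_egg, 0.04))  # 20000.0 per year at the 4% rule
print(annual_income(nest_egg, 0.03))  # 15000.0 per year, more conservative
```

Fifteen to twenty thousand a year is lean indeed, which is consistent with the "if you are willing to accept a little risk" caveat.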
From this perspective, a corporate lawyer who went to Harvard is not a class traitor. They are just acting in their own class interests.
I think of the intersectionality/social justice/anti-oppression cluster as being a bit more specific than just 'progressive' so I will only discuss the specific cluster. Through activism, I met many people in this cluster. I myself am quite sympathetic to the ideology.
But I have to ask: how do you hold this ideology while attending Harvard Law? From this perspective, Harvard Law is a seat of the existing oppressive power structure, and you are choosing to become part of this power structure by attending. The privileges that come from attending Harvard... (read more)
Based on my experiences as a Yale undergraduate, I've come away with the perhaps overly pessimistic conclusion that a lot of class-privileged leftists at Ivy+ schools don't actually resolve that contradiction, and are unfortunately not that interested in interrogating and addressing their class privilege, or thinking about redistributing what familial or future wealth / resources they may have access to. I say this as both a former organizer of Yale EA, but also as someone who started a Resource Generation chapter there, and found it difficult to get peopl... (read more)
Ironically, the situation in which I have most frequently been asked about whether EA is elitist is while giving intro talks about EA at MIT, Yale, etc.
Parasitic wasps may be the most diverse group of animals (beating out beetles). In some environments, a shocking fraction of prey insects are parasitized.
If you value 'life', you should probably keep humans around so we can spread life beyond Earth. The expected amount of life in the galaxy seems much higher if humans stick around. IMO the other logical position is 'blow up the sun'. Don't just take out the humans; take out the wasps too. The earth is full of really horrible suffering, and if the humans die out then wasp parasitism will probably ... (read more)
"The 99th percentile probably isn't good enough either." If you are more than 99th-percentile talented, maybe you can give yourself a chance to earn a huge amount of money if you are willing to take on risk. Wealth is extremely fat-tailed, so this seems potentially worthwhile.
If Dustin had not been a Facebook co-founder, EA would have something like one-third of its current funding. Sam Bankman-Fried strikes me as quite talented. He originally worked at Jane Street and quit to work at a major EA org. Instead, he ended up founding the crypto exchange FTX. FT... (read more)
You should take the quant role, IMO. Optionality is valuable (though not infinitely so), and quant trading gives you vastly more of it. If trading goes well but you leave the field after five years, you will still have gained a large amount of experience and donated/saved a large amount of capital. It's not unrealistic to try for 500K donated and 500K+ saved in that timeframe, especially since firms think you are unusually talented. If you have five hundred thousand dollars or more saved, you are no longer very constrained by finances. Five hundred thousa... (read more)
AI PhDs tend to be very well-compensated after graduating, so I don't think personal financial constraints should be a big concern on that path.
More generally, skill in AI is going to be upstream of basically everything pretty soon; purely in terms of skill optionality, this seems much more valuable than being a quant.
I am a rather strong proponent of publishing credible accusations and calling out community leadership if they engage in abuse enabling behavior. I published a long post on Abuse in the Rationality/EA Community. I also publicly disclosed details of a smaller incident. People have a right to know what they are getting into. If community processes are not taking abuse seriously in the absence of public pressure then information has to be made public. Though anyone doing this should be careful.
Several people are discussing allegations of DXE being abusive an... (read more)
Good point that Open Phil makes all donations public. I found a CSV on their site and added up the donations dated 2018/2019/2020.
2020 so far: $145,405,362
This is a really useful answer.
https://www.givewell.org/about/impact is something I already found.
I am a member of DXE and have interacted with Wayne. I think if you care about animals the amount of QALYs gained would be massive. In general Wayne has always seemed like a careful, if overly optimistic, thinker to me. He always tries to follow good leadership practices. Even if you are not concerned with animal welfare I think Wayne would be very effective at advancing good policies.
Wayne being mayor would result in huge improvements for climate change policy. Having a city with a genuine green policy is worth a lot of QALYs. My only real complaint about Wayne is that he is too optimistic but that isn't the most serious issue for a Mayor.
This answer would be strengthened by one or two examples of his careful thinking, or especially by a counterpoint to the claim that DxE uses psychological manipulation techniques on its members.
I haven't checked the claims myself, but "follow good leadership practices" seems to be a heavily disputed claim. Some people claim DxE is a cult, see e.g. here.
Why do you think orgs labelled 'effective altruist' get so much talent applying but these other orgs don't? How big do you think the difference is? I am somewhat informed about the job market in animal advocacy; it does not seem nearly as competitive as the EA market. But I am not sure of the magnitude of the difference for the replaceability analysis.
Really good article. I have been critical of 80,000 Hours in the past, but this article caused me to substantially update my views. I am happy to hear you will be at 80,000 Hours.
I think we are pretty far from exhausting all the good giving opportunities. And even if all the highly effective charities are filled, something like GiveDirectly can be scaled up. It is possible that in the future we will eventually get to the point where there are so few people in poverty that cash transfers are ineffective. But if that happens, there is nothing to be sad about. The marginal value of donations will go down as more money flows into EA. That is an argument for giving more now. A future where marginal EA donations are ineffective is a very good future.
Yeah, GiveDirectly feels like the kind of thing that could take hundreds of millions or billions of dollars. If we ever do run out of funding opportunities, which I don't think we will any time soon, that's a really good problem to have.
Does the high difficulty of getting a job at an EA organization mean we should stop promoting EA? (What are the EA movement's current bottlenecks?)
Promoting donations or earning to give seems fine. I think we should stop promoting 'EA is talent constrained'. There is a sense in which EA is 'talent constrained'. But the current messaging around 'EA is talent constrained' consistently misleads people, even very informed people such as the OP and some of the experts who gave him advice. On the other hand, EA can certainly abs... (read more)
At least some people at OpenAI are making a ton of money: https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-openai.html. Of course not everyone is making that much, but I doubt salaries at OpenAI/DeepMind are low. I think the obvious explanation is the best one: these companies want to hire top talent, and top talent is hard to find.
The situation is different for organizations that cannot afford high salaries. Let me link to Nate's explanation from three years ago:
I want to push back a bit against point #1 ("Let's
Great Comment. Thanks for the detailed explanation. This was especially useful for me to understand your model:
Early stage projects need a variety of skills, and just being median-competent is often enough to get them off the ground. Basically every project needs a website and an ops person (or, better – a programmer who uses their power to automate ops). They often need board members and people to sit in boring meetings, handle taxes and bureaucracy.
I think this is quite achievable for the median EA.
I feel like this post illustrates large inferential gaps. In my experience, trying to work in EA works out for a rather small number of people. I certainly don't recommend it. Let me quote something I posted on the 80,000 Hours thread:
> 80K Hours' advice seems aimed, perhaps implicitly, at extremely talented people. I would roughly describe the level of success/talent as 'top half of Oxford'. If you do not have that level of ability, then the recommended career paths are going to be long shots at best. Most people are not realistically
I feel like your post would be harder to misunderstand if it included some hard numbers. In particular hard numbers on income.
I feel like you are generalizing from a small sample of very dedicated EAs. In my opinion the data does not support 'EAs have often prioritized giving 10% and living frugally *too* heavily'. See data here: https://forum.effectivealtruism.org/posts/S2ypk8fsHFrQopvyo/ea-survey-2017-series-donation-data.
The median donation percentage among EAs who reported 10K+ income was only 4.28%. The following example you give is not typical: 'For example, I've heard from some of the early Australian EAs that when EA was just starting out they all live... (read more)
55K is, rather surprisingly, more than the median household income in NYC. 46K is 9K less than 55K. And the hypothetical person making 55K was only donating + saving 11K a year. Though I still think if you are making 46K you could afford to donate and save substantially more than 10% / 1% of discretionary.
The bigger crux is that I want to push back on the idea that the average individual earning more than the local median household, and living in one of the richest societies on the planet, cannot afford to be generous.
> I want to push back on the idea that the average individual making more than the local median household, and living in one of the richest societies on the planet, cannot afford to be generous.
I don't think that's a good or charitable reading of what Ray's saying. I think the core idea is that EAs have often prioritized giving 10% and living frugally *too* heavily, to the point where it interferes with their long-term potential. This seems like a case where the law of equal and opposite advice is coming into play - while it's true that mo... (read more)