All of Nathan Young's Comments + Replies

I dunno, I think that sounds galaxy-brained to me. I think that giving numbers is better than not giving them, and that thinking carefully about the numbers is better still. I don't really buy your second-order concerns (or think they could easily go in the opposite direction).

Yeah, I think you make good points. I think that forecasts are useful on balance, and then people should investigate them. Do you think that forecasting like this will hurt the information landscape on average? 

Personally, people engaged in this kind of forecasting generally seem more capable of changing their minds. I think the AI2027 folks would probably be pretty capable of acknowledging they were wrong, which seems like a healthy thing. Probably more so than the media and academia?

Seems like a lot of specific, quite technical criticisms.

Sure,... (read more)

2
Arepo
Ah, sorry, I misunderstood that as criticism. I'm a big fan of the development, e.g. QRI's process of making tools that make it increasingly easy to translate natural thoughts into more usable forms. In my dream world, if you told me your beliefs it would be in the form of a set of distributions that I could run a Monte Carlo sim on, having potentially substituted my own opinions if I felt differently confident than you (and maybe beyond that there are still neater ways of unpacking my credences that even better tools could reveal). Absent that, I'm a fan of forecasting, but I worry that overnormalising the naive I-say-a-number-and-you-have-no-idea-how-I-reached-it-or-how-confident-I-am-in-it form of it might get in the way of developing it into something better.
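
As a rough illustration of what Arepo is describing (not anything QRI or Arepo actually uses; the distributions and numbers below are invented for illustration), beliefs expressed as distributions can be combined in a Monte Carlo sim, and any input can be swapped for your own credence:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo draws

# Hypothetical beliefs expressed as distributions (all numbers invented):
# cost-effectiveness = P(success) * value if it works / cost
p_success = rng.beta(2, 8, N)                        # your credence, as a Beta distribution
value_if_works = rng.lognormal(np.log(1_000), 1.0, N)
cost = rng.normal(200, 30, N).clip(min=1)

cost_effectiveness = p_success * value_if_works / cost
print("median:", np.median(cost_effectiveness))
print("90% interval:", np.percentile(cost_effectiveness, [5, 95]))

# "Substituting my own opinions": swap any input for a different distribution
my_p_success = rng.beta(1, 19, N)                    # I'm less confident than you
print("my median:", np.median(my_p_success * value_if_works / cost))
```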

Some thoughts:

  • I agree that the Forum's speech norms are annoying. I would prefer that people weren't banned for being impolite even while making useful points.
  • I agree in a larger sense that EA can be enervating, sapping one's will for conflict with many small touches
  • I agree that having one main funder and wanting to please them seems unhelpful
  • I've always thought you are a person of courage and integrity

On the other hand:

  • I think if you are struggling to convince EAs, that is some evidence. I too am in the "it's very likely not the end of the world but still
... (read more)

I feel this quite a lot:

  • The need to please OpenPhil etc
  • The sense of inness or outness based on cause area
  • The lack of comparing notes openly
  • That one can "just have friends"

And so I think Holly's advice is worth reading, because it's fine advice.

Personally I feel a bit differently. I have been hurt by EA, but I still think it's a community of people who care about doing good per $. I don't know how we get to a place that I think is more functional, but I still think it's worth trying for the amount of people and resources attached to this space. But yes, I am less emotionally involved than I once was.

Seems like a lot of specific, quite technical criticisms. I don't endorse Thorstad's work in general (or not endorse it), but often when he cites things I find them valuable. This has enough material that it seems worth reading.

I think my main disagreement is here:

“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so” … I think the rationalist mantra of “If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics” will turn out to hurt our information landscape much more than it helps.

I weakly disag... (read more)

I weakly disagree here. I am very much in the "make up statistics and be clear about that" camp.

 

I'm sympathetic to that camp, but I think it has major epistemic issues that largely go unaddressed:

  • It systematically biases away from extreme probabilities (it's hard to assert a probability below some small threshold, for e.g., but many real-world probabilities are that small, and post-hoc credences often look like they should have been below it)
  • By focusing on very specific pathways towards some outcome, it diverts attention towards easily definable issues, and hence away from the prospe
... (read more)

My comments are on LessWrong (see link below) but I thought I'd give you lot a chance to comment also.

@Gavriel Kleinwaks (who works in this area) gives her recommendation. When asked whether she "backed" them:

I do! (Not in the financial sense, tbc.) But just want to flag that my endorsement is confounded. Basically, Aerolamp uses the design of the nonprofit referenced in my post, OSLUV, and most of my technical info about far-UV comes from a) Aerolamp cofounder Viv Belenky and b) OSLUV. I've been working with Viv and OSLUV for a couple of years, long before the founding of Aerolamp, and trust their information, but you should know that my professional opin

... (read more)

This is a cool post, though I think it's kind of annoying not to be able to see the specific numbers one is putting estimates on without reading the chart.

4
Toby Tremlett🔹
Yeah perhaps this is a feature for polls v3 (v2 is almost done). 

I do! (Not in the financial sense, tbc.) But just want to flag that my endorsement is confounded. Basically, Aerolamp uses the design of the nonprofit referenced in my post, OSLUV, and most of my technical info about far-UV comes from a) Aerolamp cofounder Viv Belenky and b) OSLUV. I've been working with Viv and OSLUV for a couple of years, long before the founding of Aerolamp, and trust their information, but you should know that my professional opinion is highly correlated with theirs—1Day Sooner doesn't have the equipment to do independent testing.

I thi... (read more)

Sure, and do you want to stand on any of those accusations? I am not going to argue the point with 2 blogposts. What is the point you think is the strongest?

As for Moskovitz, he can do as he wishes, but I think it was an error. I do think that ugly or difficult topics should be discussed and I don't fear that. LessWrong, and Manifest, have cut okay lines through these topics in my view. But it's probably too early to judge. 

-2
Yarrow Bouchard 🔸
Well, the evidence is there if you're ever curious. You asked for it, and I gave it. David Thorstad, who writes the Reflective Altruism blog, is a professional academic philosopher and, until recently, was a researcher at the Global Priorities Institute at Oxford. He was an editor of the recent Essays on Longtermism anthology published by Oxford University Press, which includes an essay co-authored by Will MacAskill, as well as essays by a few other people well-known in the effective altruism community and the LessWrong community. He has a number of published academic papers on rationality, epistemology, cognition, existential risk, and AI. He's also about as deeply familiar with the effective altruist community as it's possible for someone to be, and also has a deep familiarity with the LessWrong community. In my opinion, David Thorstad has a deeper understanding of the EA community's ideas and community dynamics than many people in the community do, and, given the overlap between the EA community and the LessWrong community, his understanding also extends to a significant degree to the LessWrong community as well. I think people in the EA community are accustomed to drive-by criticisms by people who have paid minimal attention to EA and its ideas, but David has spent years interfacing with the community and doing both academic research and blogging related to EA. So, what he writes are not drive-by criticisms and, indeed, apparently a number of people in EA listen to him, read his blog posts and academic papers, and take him seriously. All this to say, his work isn't something that can be dismissed out of hand. His work is the kind of scrutiny or critical appraisal that people in EA have been saying they want for years. Here it is, so folks better at least give it a chance. To me, "ugly or difficult topics should be discussed" is an inaccurate euphemism. I don't think the LessWrong community is particularly capable of or competent at discussing ugly or difficul

I often don't respond to people who write far more than I do. 

I may not respond to this. 

Option B clearly provides no advantage to the poor people over Option A. On the other hand, it sure seems like Option A provides an advantage to the poor people over Option B.

This isn't clear to me. 

If the countries in question have been growing much slower than the S&P 500, then the money at that future point might be worth far more to them than it is now. And they aren't going to invest in the S&P 500 in the meantime.
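
As a toy illustration of that trade-off (all numbers are hypothetical, not from the post), one can compare the gift as a multiple of the recipient's annual income now versus after investing for 20 years:

```python
# Toy comparison of "give now" vs "invest and give later", with made-up numbers:
# 7% real S&P 500 return, 2% real income growth for the recipient, 20-year horizon.
gift_now = 1_000.0
sp500_real_return = 0.07
recipient_income_growth = 0.02
recipient_income_now = 800.0   # hypothetical annual income
years = 20

gift_later = gift_now * (1 + sp500_real_return) ** years
recipient_income_later = recipient_income_now * (1 + recipient_income_growth) ** years

print(f"Give now:   gift is {gift_now / recipient_income_now:.1f}x annual income")
print(f"Give later: gift is {gift_later / recipient_income_later:.1f}x annual income")
# If the return gap is large, the later gift is bigger relative to local income --
# though this ignores discounting, the compounding benefits of early transfers, etc.
```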

2
Vasco Grilo🔸
Sure!

Sure, but I think there are also relatively accurate comments about the world. 

Hi, this is the second or third of my comments you've come and snarked on. I'll ask again: have I upset you, that you should talk to me like this?

Maybe I'm being too facile here, but I genuinely think that even just taking all these numbers, making them visible in one place, taking the median of them, producing a ranking from that, and then letting people find things they think are perverse within that ranking, would be a pretty solid start (rough sketch below).

I think producing suspect work is often the precursor to producing good work.

And I think there's enough estimates that one could produce a thing which just gathers all the estimates up and displays them. That would be sort of a survey... (read more)
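
A minimal sketch of that "gather the estimates, take the median, rank them" idea (the charities and numbers are invented purely for illustration):

```python
from statistics import median

# Cost-effectiveness estimates gathered from different sources (hypothetical)
estimates = {
    "Charity A": [12.0, 9.5, 15.0],
    "Charity B": [3.0, 4.5],
    "Charity C": [40.0, 22.0, 31.0, 28.0],
}

ranking = sorted(
    ((name, median(values)) for name, values in estimates.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for rank, (name, med) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: median estimate {med}")
# People can then scan the ranking for results that look perverse
# and dig into the underlying estimates.
```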

6
ElliotTep
 I think one of the challenges here is for the people who are respected/have a leadershipy role on cause prioritisation, I get the sense that they've been reluctant to weigh in here, perhaps to the detriment of Anthropic folks trying to make a decision one way or another. Even more speculative: Maybe part of what's going on here is that the charity comparison numbers that GiveWell produce, or when charities are being compared within a cause area in general, is one level of crazy and difficult. But the moment you get to cross-cause comparisons, these numbers become several orders of magnitude more crazy and uncertain. And maybe there's a reluctance to use the same methodology for something so much more uncertain, because it's a less useful tool/there's a risk it is perceived as something more solid than it is. Overall I think more people who have insights on cause prio should be saying: if I had a billion dollars, here's how I'd spend it, and why.
6
Vasco Grilo🔸
How different is that from ranking the results from RP's cross-cause cost-effectiveness model (CCM)? I collected estimates from this in a comment 2 years ago.

I appreciate the correction on the Suez stuff. 

If we're going to criticise rationality, I think we should take the good with the bad. There are multiple adjacent cults, which I've said in the past. They were also early to crypto, early to AI, early to Covid. It's sometimes hard to decide which things are from EA or Rationality, but there are a number of possible wins. If you don't mention those, I think you're probably fudging the numbers. 

For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving

... (read more)
2
Yarrow Bouchard 🔸
What do you think the base rate for cult formation is for a town or community of that size? Seems like LessWrong is far, far above the base rate, maybe even by orders of magnitude. I don’t think any of these are particularly good or strong examples. A very large number of people were as early or earlier to all of these things as the LessWrong community. For instance, many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally. In January 2020, stores sold out of face masks in many cities in North America. (One example of many.) The oldest post on LessWrong tagged with "covid-19" is from well after this started happening. (I also searched the forum for posts containing "covid" or "coronavirus" and sorted by oldest. I couldn’t find an older post that was relevant.) The LessWrong post is written by a self-described "prepper" who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes a similarly ambivalent, cautious tone as many mainstream news articles published before that post. If you look at the covid-19 tag on LessWrong, the next post after that first one, the prepper one, is on February 5, 2020. The posts don't start to get really worried about covid until mid-to-late February.  How is the rest of the world reacting at that time? Here's a New York Times article from February 2, 2020, entitled "Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say", well before any of the worried posts on LessWrong: The tone of the article is fairly alarmed, noting that in China the streets are deserted due to the outbreak, it compares the novel coronavirus to the 1918-1920 Spanish flu, and it gives expert quotes like this one: The worried posts on LessWrong don't start until weeks after this article was published. On a February 25, 2020 post asking
4
David Mathers🔸
I think on the racism front Yarrow is referring to the perception that the reason Moskovitz won't fund rationalist stuff is because either he thinks that a lot of rationalists believe Black people have lower average IQs than whites for genetic reasons, or he thinks that other people believe that and doesn't want the hassle. I think that belief genuinely is quite common among rationalists, no? Although, there are clearly rationalists who don't believe it, and most rationalists are not right-wing extremists as far as I can tell.

Sure, but a really illegible and hard-to-search one.

I guess lots of money will be given. Seems reasonable to think about the impacts of that. Happy to bet.

6
Yarrow Bouchard 🔸
I’ll bet you a $10 donation to the charity of your/my choice that by December 31, 2026, not all three of these things will be true: 1. Anthropic will have successfully completed an IPO at a valuation of at least $200 billion and its market cap will have remained above $200 billion.[1] 2. More than $100 million in new money[2] (so, at least $100 million more than in 2025 or 2024, and from new sources) will be donated to EA Funds or a new explicitly EA-affiliated fund similar to the FTX Future Fund[3] (managed at least in part by people with active, existing, at least slightly prominent roles in the EA community as of December 10, 2025) by Anthropic employees in 2026 other than Daniela Amodei, Holden Karnofsky, or Dario Amodei. (Given Karnofsky’s historical role in the EA movement and EA-related grantmaking, I’m excluding him, his wife, and his brother-in-law from consideration as potentially corrupting influences.) 3. A survey of least ten representative and impartial EA Forum users (with accounts created before December 10, 2025 and at least 50 karma) will find that more than 50% believe it’s at least 10% likely that this very EA Forum post on which we’re commenting (as well as any/all other posts on the same topic this month) reduced by at least 1% the amount of corruption, loss of virtue, or undue influence relating to that $100+ million in a way that could not have been done by waiting to have the conversation until after the Anthropic IPO was officially announced. Or a majority of 1-3 judges we agree on believe that is at least 10% likely.[4] I think that at least one and possibly two or all three of these things won’t be true by December 31, 2026. If at least one of them isn’t true, I win the bet. If all three are true, you win the bet. I think December 31, 2026 is a reasonable deadline because if this still hasn’t happened by then, then my fundamental point that this conversation is premature will have been proven right. I’m open to counter-offers. I’m
4
Yarrow Bouchard 🔸
Do you think lots of money will just be given to EA-related charities such as the Against Malaria Foundation, the Future of Life Institute, and so on (that sounds plausible to me) or do you think lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing? It’s the second part that I’m doubting. I suppose a lot of it comes down to what specifically Daniela Amodei and Holden Karnofsky decide to do if their family has their big liquidity event, and that’s hard to predict. Given Karnofsky’s career history, he doesn’t seem like the kind of guy to want to just outsource his family’s philanthropy to EA funds or something like that.

This is an annoying feature of search: (this is the wrong will macaskill)

Sure, seems plausible. 

I guess I kind of like @William_MacAskill's piece, or as much of it as I remember.

My recollection is roughly this: 

  • Yes, it's strange to have lots more money.
  • Perhaps we're spending it badly.
  • But not spending enough money might be a bad thing, too.
  • Frugal EA had something to recommend it.
  • But more impact probably requires more resources. 

This seems good, though I guess it feels like a missing piece is: 

  • Are we sure this money was obtained ethically?
  • How much harm will getting this money for bad reasons hurt u
... (read more)
9
jenn
I think the other missing piece is "what will this money do to the community fabric, what are the trade-offs we can take to make the community fabric more resilient and robust, and are those trade-offs worth it?" When it comes to funding effective charities, I agree that having more money is straightforwardly good. It's the second-order effects on the community (the current people in it and what might make them leave, the kinds of people who are more likely to become new entrants) that I'm more concerned with. I anticipate that the rationalists would have to face a similar problem but to a lesser degree, since the idea that well-kept gardens die by pacifism is more in the water there, and they are more ambivalent about scaling the community. But EA should scale, because its ideas are good, and this leaves it in a much more tricky situation.
5
Yarrow Bouchard 🔸
Unless you explicitly warn your donors that you’re going to sit on their money and do nothing with it, you might anger them by employing this strategy, such that they won’t donate to you again. (I don’t know if SBF would have noticed or cared because he couldn’t even sit through a meeting or an interview without playing a video game, but what applies to SBF doesn’t apply to most large donors.) Also, if there is a most important time in history, and if we can ever know we’re in the most important time in history while we’re in it, it might be 100 years or 1,000 years from now, and obviously holding onto money that long is a silly strategy. (Especially if you think we’re going to start having 10% economic growth within 50 years due to AI, but even if you don’t.) As a donor, I want to donate to charities that can "beat the market" in terms of their impact, i.e., the impact they create by spending the money now is big enough that it is bigger than the effects of investing the money and spending it in 5 years. I would be furious if I found out the charities I donate to were employing the invest-and-wait strategy. I can invest my own money or give it to someone who will spend it.

Had Phil been listened to, then perhaps much of the FTX money would have been put aside, and things could have gone quite differently. 

My understanding of what happened is different:

  • Not that much of the FTX FF money was ever awarded (~$150-200 million, details).
  • A lot of the FTX Future Fund money could have been clawed back (I'm not sure how often this actually happened) – especially if it was unspent.
  • It was sometimes voluntarily returned by EA organisations (e.g. BERI) or paid back as part of a settlement (e.g. Effective Ventures).
2
Yarrow Bouchard 🔸
Interesting, say more about how you see EA struggling or failing to sit in discomfort?

Naaaah, seems cheems. Seems worth trying. If we can't then fair enough. But it doesn't feel to me like we've tried.

Edit, for specificity. I think that shrimp QALYs and human QALYs have some exchange rate, we just don't have a good handle on it yet. And I think that if we'd decided that difficult things weren't worth doing we wouldn't have done a lot of the things we've already done.

Also, hey Elliot, I hope you're doing well.

-12
Yarrow Bouchard 🔸
6
ElliotTep
Oh, this is nice to read as I agree that we might be able to get some reasonable enough answers about Shrimp Welfare Project vs AMF (e.g. RP's moral weights project). Some rough thoughts: It's when we get to comparing Shrimp Welfare Project to AI safety PACs in the US that I think the task goes from crazy hard but worth it to maybe too gargantuan a task (although some have tried). I also think here the uncertainty is so large that it's harder to defer to experts in the way that one can defer to GiveWell if they care about helping the world's poorest people alive today. But I do agree that people need a way to decide, and Anthropic staff are incredibly time poor and some of these interventions are very time sensitive if you have short timelines, so that just begs the question: if I'm recommending worldview diversification, which cause areas get attention and how do we split among them? I am legitimately very interested in thoughtful quantitative ways of going about this (my job involves a non-zero amount of advising Anthropic folks). Right now, it seems like Rethink Priorities is the only group doing this in public (e.g. here). To be honest, I find their work has gone over my head, and while I don't want to speak for them my understanding is they might be doing more in this space soon.
1
Hugh P
When people write about where they donate, aren’t they implicitly giving a ranking? 

Reading Will's post about the future of EA (here) I think that there is an option also to "hang around and see what happens". It seems valuable to have multiple similar communities. For a while I was more involved in EA, then more in rationalism. I can imagine being more involved in EA again.

A better Earth would build a second Suez Canal, to ensure that we don't suffer trillions in damage if the first one gets stuck. Likewise, having two "think carefully about things" movements seems fine.

It hasn't always felt like this "two is better than one" feeling... (read more)

9
David Mathers🔸
What have EA funders done that's upset you? 
-3
Yarrow Bouchard 🔸
When the Ever Given got stuck in the Suez Canal in March 2021, it cost the global economy much less than trillions: The Suez Canal is being expanded, and this was the plan before the Ever Given got stuck: If members of the LessWrong community have truly found a reliably better way to think than the rest of the world, they should be able to achieve plenty of success in domains where success is externally verifiable, such as science, technology, engineering, medicine, business, economics, and so on. Since this is not the case, the LessWrong community has almost certainly not actually found a reliably better way to think. (It has started multiple cults, which is not something you typically associate with rationality.) What the LessWrong community likes to do is fire off half-cocked opinions and assume the rest of the world must be stupid/insane without thinking about it that much, or looking into it. It hasn’t invented a new, better way to think. It’s just arrogance. For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving cars to drive in, completely neglecting the astronomical cost of tunnels compared to roads — an obvious and well-known thing to consider. In his book Inadequate Equilibria, Yudkowsky specifically highlighted his opinion on Japanese monetary policy as the peak example of his superior rationality. He was wrong. In Harry Potter and the Methods of Rationality, Yudkowsky both attempts to teach readers about and condescend to them for not already knowing about various concepts in science, the social sciences, and other fields, and gets many of them wrong. His grasp on deep learning doesn’t seem to be much better. This is a pattern. Yudkowsky apparently never notices or admits these mistakes, possibly because they conflict with his self-image as by far the smartest person in the world — either in general or at least with AI safety/alignment research. Unfortunately, Yudkowsky is the role model and guru

I do not see 14 charity ranking tools. I don't really think I see 2? What, other than asking Claude/ChatGPT/Gemini, are you suggesting?

4
Yarrow Bouchard 🔸
You know what, I don’t mean to discourage you from your project. Go for it.

Could you give a concise explanation of what giving circles are?

7
jenn
Lydia Laurenson has a non-concise article here.

Thanks, someone else mentioned them. Do you think there is anything else I'm missing?

jenn
14
0
0
1

the other nonprofit in this space is the Effective Institutions Project, which was linked in Zvi's 2025 nonprofits roundup:

They report that they are advising multiple major donors, and would welcome the opportunity to advise additional major donors. I haven't had the opportunity to review their donation advisory work, but what I have seen in other areas gives me confidence. They specialize in advising donors who have broad interests across multiple areas, and they list AI safety, global health, democracy, and peace and security.

from the same post, re: SFF ... (read more)

There was. It was on GatherTown; I was one of the organisers.

EA still seems to have a GatherTown, though I don't know what's inside it:

https://app.gather.town/app/Yhi4XYj0zFNWuUNv/EA%20coworking%20and%20lounge

The Lightcone (LessWrong) GatherTown was extensive and, in my view, pretty beautiful.

2
gergo
Aw, I wish I knew about it! Thanks for organising.

Did his not knowing he was in the film come up organically, then?

4
Peter Wildeford
Correct!

And let's not gloss over this, right. His concession is a knockdown argument to the overall thesis. 

If AI means I can still work but can't earn enough to eat, then I still cannot eat. "Game over" is much more likely.

2
Vasco Grilo🔸
I do not think the concession matters much. I ultimately care about expected changes in welfare, not whether something is possible.

Thanks for doing this and for adding the shortcut portal.

I've only scanned this, but it seems to have flaws I've seen elsewhere. In general, I recommend reading @Charles Dillon 🔸's article on comparative advantage (Charles, I couldn't find it here, but I suggest posting it here):

https://open.substack.com/pub/charlesd353/p/on-comparative-advantage-and-agi?utm_campaign=post&utm_medium=web

The quickest summary is:

  • Comparative advantage means I'm guaranteed work but not that that work will provide enough for me to eat
  • If comparative advantage is a panacea, why are there fewer horses?
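
A toy illustration of the first bullet (all numbers invented, and this is a deliberately crude model): if a human's wage is capped by what it would cost to have an AI do the task instead, work can stay worthwhile at the margin even while that wage ceiling falls below subsistence.

```python
# Toy illustration (invented numbers) of "guaranteed work, but maybe not enough to eat".
subsistence_cost_per_day = 20.0          # hypothetical cost of food/shelter per day

human_output_per_day = 10.0              # units of some task a human can do per day
ai_cost_per_unit_over_time = [5.0, 1.0, 0.5, 0.05]  # falling cost of AI doing that task

for ai_cost in ai_cost_per_unit_over_time:
    # No one pays a human more per unit than the AI alternative would cost.
    max_human_wage = human_output_per_day * ai_cost
    verdict = "enough to eat" if max_human_wage >= subsistence_cost_per_day else "NOT enough to eat"
    print(f"AI cost/unit = {ai_cost:>5}: human wage ceiling = {max_human_wage:>6.2f}/day -> {verdict}")
# Hiring the human can remain worthwhile at every step, but past some point
# the wage ceiling drops below subsistence.
```
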
4
Ben_West🔸
+1 to this being an important question to ask.

I don't have time to research this take, but one of my economist friends criticised this study for the following two reasons:

  • They claimed the averted deaths were in a famine, so there was regression to the mean (in a normal period there wouldn't have been so many deaths in the control)
  • They claimed the averted deaths were close to hospitals, so areas without existing healthcare infrastructure would not see this benefit, and so the counterfactual value of the money is lower.

I haven't looked into this robustly so if someone has, please agree or disagree vote with this comment accordingly.

Thanks to GiveDirectly for their work.

I was looking around for one of these numbers and Perplexity sent me here, which is I suppose a bit ironic. 

Let's discuss this on the other blog, not sure it's good to do it in two places at once.

I agree that it could be easier for people in EA to build a track record that funders take seriously. 

I struggle to know if your project is underfunded—many projects aren't great and there have to be some that are rejected. In order to figure that out we have to actually discuss the project and I've appreciated our back and forth on the other blog you posted. 

2
Brad West🔸
Yeah, the central idea is that PFGs can have operational parity (or superiority) because how they do good is in the identity of the shareholder, rather than through some way they do their operations. And stakeholders (consumers, employees, media, suppliers, partners, lenders) have a non-zero preference for the PFG (they'd rather a charity benefit from their transaction than a random shareholder). This is why they should have a competitive advantage over normal firms.  From this competitive advantage, you potentially have an arbitrage opportunity by philanthropists. Basically channel your money through PFGs and you get more than what you pay for.  This is a very simple and intuitively plausible mechanism for leverage for philanthropists, yet there has been very little curiosity on the potential of this model to multiply philanthropic funding. 

How have you factored this into your calculations? Surely if the returns are much lower, the total % of the market that could be run like this is much smaller?

4
Brad West🔸
What might help you conceptually is not to think of donations and shareholders as a separate thing (i.e. donations are something that limits returns) but rather think of it as business where charities are the shareholders (not conferring any disadvantage moreso than any other shareholder).
2
Brad West🔸
The returns are not lower. They are higher, because economic actors have a non-zero preference for charities but they can operationally do what normal businesses can (hence Humanitix's meteoric rise).  The limiting factor right now is philanthropic capital. And if philanthropists realize they can get more money to charity through this model, then they would be motivated to use it because it offers the opportunity to multiply impact. And then if the evidence base gets stronger, they can use debt (leveraged buyouts) to expand beyond what philanthropic resources would allow.    See my below article on why PFGs should have a competitive advantage. Stakeholder non-zero preference > business advantage > philanthropic multiplier opportunity https://profit4good.org/from-charity-choice-to-competitive-advantage-the-power-of-profit-for-good/

Surely it's going to be much more difficult for a PFG company to raise capital? Stocks are (in some way) related to future profits. If you are locked in to giving 90% away then doesn't that mean that stocks will trade at a much lower price and hence it will be much harder for VCs to get their return?

2
Brad West🔸
Yeah you are very limited in ability to exchange equity in exchange for cash. So for regular investors you could raise money with bonds.    The idea would be philanthropists would be in the position that for-profit investors would be in normal businesses because they could multiply their money to charity. See the below article for more information (and the whole previous blog series)    https://profit4good.org/above-market-philanthropy-why-profit-for-good-can-surpass-normal-returns/

I guess my questions are:

  • "what is earn to give". is the typical ETG giving $1m? $10m? At what point do we want people to switch?
  • Is there a genuinely different skill set? Like, are there some people who are very mediocre EA jobs but great at earning money? 

My guess would be that people should have some sense of how much they would earn to give for, and how much impact would make them stop earning to give and do direct work, and then they should move between the two. That would also create some great on-the-job learning, because I imagine that earn-to-give roles teach different skills, which can be fed back into the EA community.

5
Jason
I speculate that there are enough differences at play that a significant fraction of people should choose direct work and a significant fraction who should choose EtG. 1. It is often asserted that impact/success is significantly right-tailed. If that's so, a modest raw difference in an individual's suitability for EtG vs. suitability for direct work might create a large difference in expected outcome. Making numbers up, even the difference between being 99.99th percentile in one suitability (compared to the general population) versus the 99.9th percentile in the other might make a big difference. And it's plausible to think that even if the two suitabilities are highly correlated, people could easily have these sorts of differences. I don't think the difference needs to be anywhere near being "very mediocre [at] EA jobs" vs. "great at earning money." 2. There have been discussions about the relative importance of value alignment in hiring for direct work. Although the concept is slippery to define, it seems less likely that the concept as applied to direct work significantly predicts success at EtG. There is a specific virtue that is needed for success at EtG -- related to following through on your donation plans once the money rolls in -- but it's questionable whether this is strongly correlated with the kind of alignment that factors into success for "EA jobs." 3. The differences involve not only skill sets but also idiosyncratic personal attributes such as presence of other obligations (e.g., family commitments), psychological traits, individual passions, and so on. These things differ among potential workers, and one would expect them to point in different directions.

It feels like if there were more money held by EAs some projects would be much easier:

  • Lots of animal welfare lobbying
  • Donating money to the developing world
  • AI lobbying
  • Paying people more for work trials

I don't know if there are some people who are much more suited to earning than to doing direct work. It seems to me they're quite similar skill sets. But if they're really sort of at all different, then you should really want quite different people to work on quite different things.

2
calebp
  I agree, but I don't know why you think people should move from direct work (or skill building) to e2g. Is the argument that the best things require very specialised labour, so on priors, more people should e2g (or raise capital in other ways) than do direct work?

I really like this format. Props to the forum team.

4
NickLaing
Yeah man, those polls you were trying to run for ages now be looking awesome :D.
Nathan Young
4
2
0
50% agree

The percentage of EAs earning to give is too low


Resources are useful. The movement is very built around one large donor.

My friend Barak Gila wrote about spending $10k offsetting plane & car miles, in Cars and Carbon

This seems way too expensive? I feel like Make Sunsets suggests you can offset a lifetime of carbon for like $500.

I think a big problem is it's hard to know what to believe here. And hence people don't offset.

5
Austin
Without knowing a ton about the economics, my understanding is that Project Vesta, as a startup working on carbon capture and sequestration, costs more per ton than other initiatives currently, but the hope is with continued revenue & investment they can go down the cost curve. I agree it's hard to know for sure what to believe -- the geoengineering route taken by Make Sunsets is somewhat more controversial than CC&S (and I think, encodes more assumptions about efficacy), and one might reasonably prefer a more direct if expensive route to reversing carbon emissions. I might make a rough analogy to the difference between GiveDirectly and AMF, with reasonable people preferring the first due to being more direct (even if less cost effective).

To add some thoughts/anecdotes:

  • I'm sad this happens. I have had similar and it's hard.
  • It seems like orgs and individuals have different incentives here - orgs want the most applicants possible, individuals want to get jobs.
  • I have been asked to apply for 1-3 jobs that seemed wildly beyond my qualifications, then failed at the first hurdle without any feedback. This was quite frustrating, but I guess I understand why it happens.
  • I like that work trials are paid well
  • If we believe the best person for a job might be 5-10x better than the next best, then perhaps
... (read more)
6
SiobhanBall
Hey, thanks for writing! * It does seem like orgs want to simply maximise the number of applicants. I’m putting forward that this isn’t cost-effective. * I think there should be a soft rule that recommending someone to apply = shortlisting them to the interview/work test stage automatically. There should be some benefit to being encouraged to apply. * I don’t believe that the 5–10x differential holds at all, especially not for soft skills like comms, fundraising, and programs. If it did, I would agree with you. But how do you quantify what 5–10x looks like for a marketing manager, for example, ahead of time? What if the real value difference is actually a fraction of 1%, and you’ve gone and spent an extra 20k on a hiring round completely unnecessarily, when the number two candidate was already known to you? * Pay is pay, but yes, I strongly agree that applicant numbers, and then numbers at each stage, should be available on request. I often don’t get a reply when I ask about this, unfortunately. * Whether or not you should expect success depends on all sorts of things. If you’re brand new to the movement and applying for your first role, expectations should be low. However, this is a different point to the main thrust of my post, which is: why are orgs running expensive hiring rounds when the talent is already queuing up, out the door, into the stratosphere? I don’t think that’s cost-effective, but I want to know what others think on that question.
Nathan Young
2
0
0
21% agree

I dunno, by how much? Seems contingent on lots of factors. 

I like Ben personally.

I don't intend to quote tweet him, but I'd like someone to make a kind of defence.

Ben Landau-Taylor tweeted this a couple of days back:

It has been annoying me, since I don't think it's accurate. Here is my proposed response (These aren't tweets, it's a scheduling app where I draft):

I would appreciate criticism.

I mean I just don't take Ben to be a reasonable actor regarding his opinions on EA? I doubt you'll see him open up and fully explain a) who the people he's arguing with are or b) what the explicit change in EA to an "NGO patronage network" was with names, details, public evidence of the above, and being willing to change his mind to counter-evidence.

He seems to have been related to Leverage Research, maybe in the original days?[1] And there was a big falling out there, and many people linked to original Leverage hate "EA" with the fire of a thousand b... (read more)

I think it would allow many very online, slightly anxious people to note how the situation is changing, rather than plugging their minds into Twitter each day.

I think the bird flu site helped a little in my part of twitter to tell people to chill out a bit and not work themselves into a frenzy earlier than was necessary. At least one powerful person said it was cool. 

I think that civil servants might use it but I'm not sure they know what they want here and it's good to have something concrete to show.

Seems notable that my model is that OpenPhil has stopped funding some right-wing or even centrist projects, and so has less power in this world than it could have had.

Ben Todd writes:

 Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures, via Open Philanthropy.2 But they’ve recently stopped funding several categories of work (my own categories, not theirs):

...

... (read more)