All of Linch's Comments + Replies

A Sequence Against Strong Longtermism

You can imagine that OP has limited opportunities or interest or time to improve, and can only focus on one thing. In that case I'd strongly encourage focusing on higher-quality arguments over better style, as I usually find the lack of the former much more off-putting than the lack of the latter.

1Arepo1hI find it hard to believe that leaving out snarky comments is a drain on anyone's productivity, let alone that the movement should encourage norms where we assume our value is so high that the risks of snark-deprivation outweigh the benefits.
A Sequence Against Strong Longtermism

For what it's worth, I have the opposite reaction: between the OP having higher-quality arguments and having less snark, I would strongly prefer higher-quality arguments.

3Arepo5hIt's not a trade-off!
Narration: The case against “EA cause areas”

I have a mild preference for narrations to not show up on the front page of the EA Forum, and instead eg be comments on the relevant posts, or be bundled up together in a long intro sequence post each time.

I don't know how unusual this preference is (e.g. I'm also maybe in the fifth percentile of EAs for how many podcasts I listen to).

3D0TheMath11hNoted. I was worried it would get annoying, so thanks for confirming that worry. I’ll experiment with posting some not on the front-page, and see if they get significantly fewer listens.
Linch's Shortform

I know this is a really mainstream opinion, but I recently watched a recording of the musical Hamilton and I really liked it. 

I think Hamilton (the character, not the historical figure which I know very little about) has many key flaws (most notably selfishness, pride, and misogyny(?)) but also virtues/attitudes that are useful to emulate.

I especially found the "Non-Stop" song (lyrics) highly relatable/aspirational, at least for a subset of EA research that looks more like "reading lots and synthesizing many thoughts quickly" and less like "think ver... (read more)

Open Philanthropy is seeking proposals for outreach projects

I thought Decision Problem: Paperclips introduced a subset of AI risk arguments fairly well in gamified form, but I'm not aware of anyone for whom the game sparked enough interest in AGI alignment/risk/safety to work on it. Does anybody else on this forum have data/anecdata?

Research into people's willingness to change cause *areas*?

Do we have strong evidence that "average donors" even have "cause areas," as an accurate/descriptively useful mapping of how they understand the world? My young and pre-EA self feels so distant from me that it's barely worth mentioning, but I vaguely recall that teenage me donated to things as disparate as earthquake relief in Sichuan, local beggars, LGBT stuff and probably something something climate change. 

I don't think I ever consciously considered until several years later how dumb it was to a) donate to multiple things at the tiny amounts I was donating at the time and b) have multiple cause areas of very varying cost-effectiveness and theories of change.

2David_Moss1hI think there's definitely something to this. As is suggested by this report [http://www.cgap.org.uk/uploads/reports/HowDonorsChooseCharities.pdf], even donors who are very proactive are often barely reflecting about where they should give at all. They are also often thinking about the charity sector in terms of very coarse-grained categories (e.g. my country vs. international charities, people vs. animal charities).

On the other hand, they often are making sense of their donations in terms of causes and an implicit hierarchy of causes (including particular, personal commitments, such as to heart disease because a family member died from it, and so on). They also view charitable donation as highly personal and subjective (e.g. a matter of personal choice) [there is some evidence for this here [https://www.tandfonline.com/doi/abs/10.1080/09515089.2011.633751] and in unpublished work by me and my academic colleagues].

I think the overall picture this suggests is that people are sometimes thinking in terms of causes, but rarely explicitly deliberating about the optimal cause or set of causes. To address the original question: this suggests that trying to get people to "change causes" by giving them reasons why certain causes are best may be ineffective in most cases, as people rarely deliberate about which cause is best and may not even be aiming to select the best cause. On the other hand, as many donors give fairly promiscuously or indiscriminately to charities across different cause areas, it's plausible you could get them to support different causes just by making those causes salient and appealing.
3Davis_Kingsley2hI (very anecdotally) think there are lots of people who are interested in donating to quite specific cause areas, e.g. "my father died of cancer so I donate to cancer charities" or "I want to donate to help homelessness in my area" -- haven't studied that in depth though.
A Sequence Against Strong Longtermism

I agree that a good Bayesian should grant the hypothesis of continuity nonzero credence, as well as other ways the universe can be infinite. I think the critique would be more compelling if it were framed as "there's a small chance the universe is infinite, Bayesian consequentialism by default will incorporate a small probability of infinity, and the decision theory can potentially blow up under those constraints."

Then we see that this is a special unresolved case of infinity (which is likely an issue with many other decision theories) rather than a claim that the... (read more)

A Sequence Against Strong Longtermism

Hmm, I think 3 does not follow from 2. 

If I think there's a 10% chance I will quit my job upon further reflection, and I do the reflection, and then quit my job, this does not mean that before the reflection I cannot make any quantified statements about the expected earnings from my job.

vaidehi_agarwalla's Shortform

Keen to get feedback on whether I've over/underestimated any variables.

I think 

Average person's value of time (USD)

As a normal distribution between $20-30 is too low, since many EA applicants counterfactually have upper-middle-class professional jobs in the US.

I also want to flag that you are assuming that the time is

unpaid labour time

but many EA orgs do in fact pay for work trials; a "trial week" especially should almost always be paid.

4vaidehi_agarwalla1dHi Linch, thanks for the input! I'll adjust the estimate a bit higher. In the Guesstimate I do discount the hours to say that 75% of the total hours are unpaid (trial week hours come to 5% of the total hours).
Taylor Swift's "long story short" Is Actually About Effective Altruism and Longtermism (PARODY)

Got it, I agree with you that this can be what's going on! When the intuition is spelled out, we clearly see the "trick" is treating individual incomes as if they were comparable to household incomes.

Living in the Bay Area, I think some of my friends do forget that in addition to being extremely rich by international standards, they are also somewhere between fairly and extremely rich by American standards as well. 

Taylor Swift's "long story short" Is Actually About Effective Altruism and Longtermism (PARODY)

Speaking of the second video, I have my own fan theory that "Blank Space" is based on the popular manga and anime series Death Note.

1Ikaxas10hIf you happen to not be aware of this video [https://m.youtube.com/watch?v=T0P4rVKrP-Y] already, you really should be.
Taylor Swift's "long story short" Is Actually About Effective Altruism and Longtermism (PARODY)

I dunno, I feel like these are two fairly different claims. I also expect the average non-American household to be larger than the average American household, not smaller (so there will be <6 billion households worldwide).

3WilliamKiely2dYes, I agree they are different. And I agree the OP's claim is implausible for the reason you give. I just meant to point out that I think the OP misstated the claim made by the GWWC calculator cited, and that the GWWC claim is plausible (or at least that your given reason is not sufficient to make it implausible). The GWWC calculator explicitly says "If you have a household income of $58,000 (in a household of 1 adult)" "you are in the richest 1% of the global population." That can be true even if the median individual American is not in the top 1% of individuals by income globally, and even if the median American household is not in the top 1% of households by income globally, because an individual who makes as much as the median American household actually makes more than most Americans. (We can't estimate the exact percentile a priori. A priori they could make more than 18-100% of Americans depending on how individual Americans and income are distributed across American households.)

The GWWC calculator claim could be true only if the individual who makes as much as the median American household makes more than at least 76.3% of Americans (76.3% = 1 - 0.01/(333,000,000/7,881,000,000)). 76.3% is in the range that I would have guessed (~60-80%), so it's at least plausible.

Edited to add: Wikipedia says [https://en.m.wikipedia.org/wiki/Personal_income_in_the_United_States#Income_distribution] 76% of Americans make less than $57,500/year. The existence of any number of non-Americans making more than $58,000/year is surely enough to cause the GWWC calculator's claim to be false. Looks like only ~5-15% of Americans are in the global 1% of the income distribution then. I'd be interested in knowing the exact number / income level.
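To make the percentile arithmetic in the comment above easier to follow, here is a minimal sketch reproducing it in Python. The population figures (333 million Americans, 7.881 billion people globally) are the ones quoted in the comment, not authoritative statistics, and the logic is just the stated back-of-the-envelope reasoning.

```python
# Back-of-the-envelope check of the 76.3% threshold quoted above.
# Both population figures are the ones cited in the comment, not verified data.
us_population = 333_000_000
world_population = 7_881_000_000

# Share of the world's population that is American (~4.2%).
us_share = us_population / world_population

# Ignoring all non-Americans above the threshold, an American must be above
# roughly this percentile of the US income distribution to be in the global top 1%.
required_us_percentile = 1 - 0.01 / us_share

print(f"US share of world population: {us_share:.3f}")                 # ~0.042
print(f"Required US income percentile: {required_us_percentile:.3f}")  # ~0.763
```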
4Charles He2dIndeed, my own qualitative research suggests US households are smaller than average—and maybe even consist mainly of single individuals. My research involves parsing the details in this video [https://www.youtube.com/watch?v=XnbCSboujF4] and this video [https://www.youtube.com/watch?v=e-ORhEE9VVg].
Buck's Shortform

I thought you were making an empirical claim with the quoted sentence, not a normative claim. 

2Buck2dAh, fair.
Taylor Swift's "long story short" Is Actually About Effective Altruism and Longtermism (PARODY)

Not your fault, but

the median American household is comfortably in the top richest 1% globally 

does not seem plausible to me, because the US has ~4% of the world population

7WilliamKiely2dAn individual who makes as much as the median American household plausibly could be in the top 1% of individuals by income, if average household size is a few people. (There are only 120 million [https://www.google.com/search?q=number+of+american+households&oq=number+of+american+households&aqs=chrome..69i57j35i39j0i433j0l7.2424j0j7&sourceid=chrome&ie=UTF-8] American households.) I think this is what the linked GWWC calculator is doing.
Buck's Shortform

Below this level of consumption, they’ll prefer consuming dollars to donating them, and so they will always consume them. And above it, they’ll prefer donating dollars to consuming them, and so will always donate them. And this is why the GWWC pledge asks you to input the C such that dF(C)/d(C) is 1, and you pledge to donate everything above it and nothing below it.


Wait, the standard GWWC pledge is 10% of your income, presumably based on cultural norms like tithing, which in themselves might reflect an implicit understanding that (if we assume log utility)... (read more)
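Since the comment above is truncated right where the log-utility point begins, here is a minimal sketch (not a reconstruction of the elided argument) of the threshold condition quoted from Buck, assuming the marginal altruistic value of a donated dollar is constant and normalized to 1, and writing Y for total income:

```latex
% Sketch of the quoted threshold condition, assuming donations have a constant
% marginal value normalized to 1 and Y denotes total income.
\[
  \max_{C}\; F(C) + (Y - C)
  \quad\Longrightarrow\quad
  F'(C^{*}) = 1 .
\]
% Under log utility, F(C) = k \ln C, the threshold is a fixed consumption level:
\[
  F'(C) = \frac{k}{C} = 1
  \quad\Longrightarrow\quad
  C^{*} = k .
\]
```

On this model the optimal rule is a fixed consumption threshold ("donate everything above C*, nothing below"), rather than a fixed percentage of income like the standard 10% GWWC pledge, which appears to be the tension the comment is pointing at.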

2Buck2dYeah but this pledge is kind of weird for an altruist to actually follow, instead of donating more above the 10%. (Unless you think that almost everyone believes that most of the reason for them to do the GWWC pledge is to enforce the norm, and this causes them to donate 10%, which is more than they'd otherwise donate.)
A Sequence Against Strong Longtermism

The set of all possible futures is infinite, regardless of whether we consider the life of the universe to be infinite. Why is this? Add to any finite set of possible futures a future where someone spontaneously shouts “1”!, and a future where someone spontaneously shouts “2”!, and a future where someone spontaneously shouts

Wait are you assuming that physics is continuous? If so, isn't this a rejection of modern physics? If not, how do you respond to the objection that there is a limited number of possible configurations for atoms in our controllable unive... (read more)

3MichaelStJules2dI don't think there's a consensus on whether physics is continuous or discrete, but I expect that what matters ethically is describable in discrete terms. Things like wavefunctions (or the motions of physical objects) could depend continuously on time or space. I don't think we know that there are finitely many configurations of a finite set of atoms, but maybe there are only finitely many functionally distinct ones, and the rest are effectively equivalent. I think we've also probed scales smaller than Planck by observing gamma ray bursts, but I might be misinterpreting, and these were specific claims about specific theories of quantum gravity. Also, a good Bayesian should grant the hypothesis of continuity nonzero credence. FWIW, though, I don't think dealing with infinitely many possibilities is as much of a problem as it's made out to be here. We can use (mixed-)continuous measures, and we can decide what resolutions are relevant and useful as a practical matter.
Writing about my job: pharmaceutical chemist

This seems unlikely from your description, but do you do or know of any work on biologics by any chance? I ask because I'm writing a report on cultured meat and would like a slightly larger pool of reviewers from adjacent industries (eg people who have experience scaling use of CHO cells). 

Metaculus Questions Suggest Money Will Do More Good in the Future

For the first question, I was one of the forecasters who gave close to the current Metaculus median answer (~30%). I can't remember my exact reasoning, but roughly:

1. Outside view on how frequently things have changed + some estimates on how likely things are to change in the future, from an entirely curve fitting perspective.

2. Decent probability that the current top charities will go down in effectiveness as the problems become less neglected/we've had stronger partial solutions for them/we discover new evidence about them. Concretely:

Malaria: CRISPR or ... (read more)

EA Picnic: San Francisco | Sunday, July 11

EDIT: I'm less certain this is true because I think I didn't fully update on how much the vaccines reduce the risks of covid for young people. I think maybe not getting tested is fine if you aren't likely to be exposed to non-vaccinated people and you aren't in a position to interact heavily with many people.

I was informed 3 days ago that someone at the event now has covid, likely from the event itself.

Dear Linchuan,  

One of the attendees of the EA Picnic let us know they developed COVID symptoms on Friday July 16th and tested positive on Sunday J

... (read more)
Further thoughts on charter cities and effective altruism

(I work for Rethink Priorities in a different team. I had no input into the charter cities intervention report other than feedback on a very early version of the draft. All comments here are truly my own. Due to time constraints I did not run this by anybody else at the org before commenting.)

The Rethink Priorities report used a 2017 World Bank article on special economic zones as the reference point for potential growth rates for charter cities. The World Bank report concludes, “rather than catalyzing economic development, in the aggregate, most zones’ pe

... (read more)
Lant Pritchett on the futility of "smart buys" in developing-world education

The belief that micro-credit has good investment ROIs for the typical recipient.

What would you do if you had half a million dollars?

I lend some credence to the trendlines argument, but mostly think that humans are more likely to want to optimize for extreme happiness (or other positive moral goods) than extreme suffering (or other negatives/moral bads), and any additive account of moral goods will, in expectation, shake out to have a lot more positive moral goods than moral bads, unless you have really extreme inside views such that optimizing for extreme moral bads is as likely as (or more likely than) optimizing for extreme moral goods.

I do think there are nontrivial pro... (read more)

saulius's Shortform

I think this is an interesting point but I'm not convinced that it's true with high enough probability that the alternative isn't worth considering. 

In particular, I can imagine luck/happenstance shaking out such that agents that are arbitrarily powerful on one dimension are less powerful/rational on other dimensions.

Another issue is the nature of precommitments[1]. It seems that under most games/simple decision theories for playing those games (eg "Chicken" in CDT), being the first to credibly precommit gives you a strategic edge under most circumsta... (read more)

Lant Pritchett on the futility of "smart buys" in developing-world education

I think this is a much more plausible view of much of the drop-out phenomena than is “credit constraints.” First, strictly speaking, “credit constraints” is not a very good description of the problem. Let us take the author’s numbers seriously that the return to schooling is, say, 8-10 percent. Let us suppose that families in developing countries could borrow at the prime interest rate. The real interest rate in many countries in the world is around 8 to 10 percent. So given the opportunity to borrow at prime to finance schooling many households would rati

... (read more)
2lucy.ea84dLinch, can you explain? What did not survive the test of time?
Lant Pritchett on the futility of "smart buys" in developing-world education

Now, there are many other ways that spending on primary education can be justified—that education is a universal human right, education is a merit good, the demands of political socialization demand universal education. I suspect that the actual positive theory of education has more to do with those than with the economic returns. But for the purposes of the present exercise of comparing alternative uses of public funds across sectors one cannot invoke “human rights” as a reason to spend on schooling without a counter of “intrinsic values” of an unchanged

... (read more)
0lucy.ea84dLinch we can also use HDI (Human Development Index) and calculate education ~= money. Here is what I get for children's education:
6 years schooling = 890 PPP USD per year
9 years schooling = 2800 PPP USD per year
12 years schooling = 8500 PPP USD per year
4Linch5d(Copenhagen Consensus 2008 Perspective Paper, p. 8) Huh, this take did not survive the test of time well, given the last 13 years of research on microfinance.
2Aaron Gertler5dI deliberately chose not to use this as one of my chosen excerpts, though I don't think it reveals a weakness in anything Pritchett believes — I read him as a skeptic about these "rights" who nevertheless acknowledges that other people would rather talk about rights than economic return in discussions of education. But whether he believes in the concept or not, your objection to the concept seems correct to me.
You should write about your job

I'm a generalist researcher at Rethink Priorities. I'm on the longtermism team and that's what I try to spend most of my time doing, but some of my projects touch on global health and some of my projects are relevant to animal welfare as well (I think doing work across cause areas is fairly common at RP, though this will likely decrease with time as the org gets larger and individual researchers become more specialized).

I'm happy to talk about my job, but unclear how valuable this is, given that a) "generalist researcher" is probably one of the most wel... (read more)

1Abby Hoskin5dI would love to hear more about your job, and it might be really useful for RP too since they're hiring ;)
1Ikaxas5dI'd be interested in this. Even though "generalist researcher" is well-known, I think it's easy from the outside to get a distorted picture of the "content [https://www.greaterwrong.com/posts/x6Kv7nxKHfLGtPJej/the-topic-is-not-the-content] " of the job. Aside from this recent post [https://forum.effectivealtruism.org/posts/vSGi7oJeYSszf5uK6/writing-about-my-job-internet-blogger] , I don't know of write ups about it off the top of my head (though there could be ones I don't know about), and of course multiple writeups are useful since different people's situations and experiences will be different.
What foundational science would help produce clean meat?

In addition to what avacyn said about hydrolysates (very important! Amino acids are really expensive!), off the top of my head:

  • Figuring out ways to do extreme sanitation/fully aseptic operations cheaply at scale
    • mammal stem cells double every 21-48 hours, while E. coli doubles every ~25 minutes; if you have a giant bioreactor full of yummy meat cells + growth media at pH ~= 7.0 and temp ~= 37C, one stray bacterium or virus can ruin your day (see the rough growth sketch below).
    • maybe more of an engineering problem than a foundational science problem, but solving this would also be fairly helpful for
... (read more)
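As a rough illustration of why the contamination point above matters, here is a minimal sketch of the doubling-time mismatch. The doubling times are the ones quoted in the comment; the starting cell counts and the 48-hour horizon are arbitrary assumptions for illustration only, not figures from any actual bioreactor.

```python
# Rough illustration of the doubling-time mismatch described above.
# Doubling times come from the comment; starting counts and the 48-hour
# horizon are arbitrary assumptions for illustration only.

def population(initial_count: float, doubling_time_hours: float, elapsed_hours: float) -> float:
    """Simple exponential growth: N(t) = N0 * 2**(t / doubling_time)."""
    return initial_count * 2 ** (elapsed_hours / doubling_time_hours)

elapsed = 48  # hours

meat_cells = population(initial_count=1e9, doubling_time_hours=24, elapsed_hours=elapsed)
bacteria = population(initial_count=1, doubling_time_hours=25 / 60, elapsed_hours=elapsed)

print(f"Meat cells after {elapsed}h: {meat_cells:.2e}")  # ~4e9 (two doublings)
print(f"Bacteria after {elapsed}h:  {bacteria:.2e}")     # ~5e34, from a single cell
```

The point is just that a single contaminating bacterium outgrows the meat cells by many orders of magnitude within a couple of days, which is why cheap aseptic operation at scale matters so much.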
All Possible Views About Humanity's Future Are Wild

Not all actions humans are capable of doing are good. 

EA Superpower?! 😋

I'd want the orange pill, I think. 

Linch's Shortform

Can you be less abstract and point, quantitatively, to which numbers I gave seem vastly off to you, and insert your own numbers? I definitely think my numbers are pretty fuzzy, but I'd like to see different ones rather than just arguing verbally.

(Also I think my actual original argument was a conditional claim, so it feels a little bit weird to be challenged on its premises! :)).

How do you communicate that you like to optimise processes without people assuming you like tricks / hacks / shortcuts?

Can you give a concrete and detailed (anonymized) example of this? As presented, it feels like the people you're talking to aren't saying something very useful, but I only have your side of the conversation so it might be helpful for us to understand in a bit more detail what was actually going on.

3Madhav Malhotra10dThank you for taking the time to respond :-) When you posted this, it actually made me go back over the examples I was thinking of, and I realised there could be different interpretations instead of just the one where the other person was frowning upon me asking the question. Perhaps it was just that they didn't know the answer and were uncertain about what I was asking. The example was actually in a podcast recording where I was interviewing an entrepreneur about leadership, so you can listen to the snippet of the conversation if you want :D https://sndup.net/9t8m [https://sndup.net/9t8m]
New blog: Cold Takes

It was a serious question, maybe presented in a slightly glib way.

New blog: Cold Takes

I too am excited about this! In the "about" page, you say:

Most of the posts on this blog are written at least a month before they're posted, sometimes much longer. I try to post things that are worth posting even so, hence the name "Cold Takes."

So my question here is, what's your preferred feedback policy/commenting norms? Should we bias towards more normal "EA Forum commenting norms" or closer to "write out our comments at least a month before they're posted, sometimes much longer, and only comment if upon >1 month of reflection we still think they're worth your time/attention to read?"

This comment made me laugh out loud, all the more so because I couldn't tell whether you were joking.

How to explain AI risk/EA concepts to family and friends?
Answer by Linch · Jul 12, 2021 · 16

This is not exactly the answer you're looking for, and I'm not confident about this, but I think it's maybe good to first refine your reasons for working on AI risk and be clear about what you mean; after you get a good sense of that (at least enough to convince a much more skeptical version of yourself), a more easily explainable version of the arguments may come naturally to you.

(Take everything I say here with a huge lump of salt...FWIW I don't know how to explain EA or longtermism or forecasting stuff to my own parents, partially due to the language barrier). 

Jumping in Front of Bullets vs. Organ Donation

This seems surprisingly low to me. Do you have some notes or a writeup of the analysis somewhere?

6Josh Jacobson14dIt looks like I accidentally took credit for Zach Weems' estimate, made here: https://www.facebook.com/groups/EACryonics/permalink/1737340919637664/ [https://www.facebook.com/groups/EACryonics/permalink/1737340919637664/]
3SamiM14dI can imagine such a low number if we're talking about posthumous donations. According to this [https://www.organdonor.gov/learn/organ-donation-statistics], only 3/1000 people die in such a way that their organs are useful. When you add that to the fact that deceased organs are less good than living ones, you can get something as low as this. For example, this [https://forum.effectivealtruism.org/posts/yTu9pa9Po4hAuhETJ/kidney-donation-is-a-reasonable-choice-for-effective] says that the QALYs from a deceased kidney are 4.31. If only 3/1000 donors have such kidneys, you get 0.013 QALYs. It will probably get higher when you account for all other organs (but it would be a surprise if it were 10x, which probably means I did something wrong). I should also mention that it's not clear if all organs are damaged equally, so a less naive estimate would be useful.
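A quick sanity check of the arithmetic in the comment above, as a sketch only: the 3/1000 usable-donor rate and the 4.31 QALYs per deceased-donor kidney are the figures cited in that comment, not independently verified numbers.

```python
# Sanity check of the back-of-the-envelope organ-donation estimate quoted above.
# Both inputs are the figures cited in the comment, not verified statistics.
usable_donor_rate = 3 / 1000        # fraction of deaths where organs are usable
qalys_per_deceased_kidney = 4.31    # QALYs attributed to one deceased-donor kidney

expected_qalys = usable_donor_rate * qalys_per_deceased_kidney
print(f"Expected QALYs from registering (kidney only): {expected_qalys:.3f}")  # ~0.013
```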
Open Thread: July 2021

On a semi-related note, Peter Singer appeared on the podcast of a Canadian MP, which I thought was pretty cool.

Linch's Shortform

One additional risk: if done poorly, harsh criticism of someone else's blog post from several years ago could be pretty unpleasant and make the EA community seem less friendly.

I think I agree this is a concern. But just so we're on the same page here, what's your threat model? Are you more worried about

  1. The EA community feeling less pleasant and friendly to existing established EAs, so we'll have more retention issues with people disengaging?
  2. The EA community feeling less pleasant and friendly to newcomers, so we have more issues with recruitment and people
... (read more)
5Khorton18dIt's actually a bit of numbers 1-3; I'm imagining decreased engagement generally, especially sharing ideas transparently.
Linch's Shortform

I'm actually super excited about this idea though - let's set some courtesy norms around contacting the author privately before red-teaming their paper and then get going!

Thanks for the excitement! I agree that contacting someone ahead of time might be good (so at least the author doesn't first learn about their project being red-teamed when social media blows up), but I feel like it might not mitigate most of the potential unpleasantness/harshness. Like I don't see a good cultural way to both incentivize Red Teaming and allow a face-saving way to refuse to let yo... (read more)

microCOVID.org: A tool to estimate COVID risk from common activities

Are Microcovid.org or other people in EA tracking Delta and the possibility of scarier variants? Personally, I continue to follow some epidemiologists and virologists and data people on Twitter, but other than that I've stopped following Covid almost completely. I'm wondering whether it's sane to assume that "the community" (or broader society) has enough of a grip on things and can give us forewarning in case the correct choice later is for (even fully vaccinated) people to go into partial or full lockdowns again.

2Habryka19dI found this FB post by Matt Bell surprisingly useful: https://www.facebook.com/thismattbell/posts/10161279341706038 [https://www.facebook.com/thismattbell/posts/10161279341706038]
Linch's Shortform

No, weaker claim than that, just saying that P(we spread to the stars | we don't all die from AI or are otherwise curtailed by it in the next 100 years) > 1%.

(I should figure out my actual probabilities on AI and existential risk with at least moderate rigor at some point, but I've never actually done this so far).

2anonymous_ea10dThanks. Going back to your original impact estimate, I think the bigger difficulty I have in swallowing your impact estimate and claims related to it (e.g. "the ultimate weight of small decisions you make is measured not in dollars or relative status, but in stars") is not the probabilities of AI or space expansion, but what seems to me to be a pretty big jump from the potential stakes of a cause area or value possible in the future without any existential catastrophes, to the impact that researchers working on that cause area might have. Joe Carlsmith has a small paragraph articulating some of my worries along these lines elsewhere [https://forum.effectivealtruism.org/posts/2foH3jJGMpSnkumGX/on-future-people-looking-back-at-21st-century-longtermism] on the forum:
EA Infrastructure Fund: Ask us anything!

That's great to hear! But to be clear, not for risk adjustment? Or are you just not sure on that point? 

2Buck19dI am not sure. I think it’s pretty likely I would want to fund after risk adjustment. I think that if you are considering trying to get funded this way, you should consider reaching out to me first.
Linch's Shortform

Upon (brief) reflection I agree that relying on the epistemic savviness of the mentors might be too much and the best version of the training program will train a sort of keen internal sense of scientific skepticism that's not particularly reliant on social approval.  

If we have enough time I would float a version of a course that slowly goes from very obvious crap (marketing tripe, bad graphs) into things that are subtler crap (Why We Sleep, Bem ESP stuff) into weasely/motivated stuff (Hickel? Pinker? Sunstein? popular nonfiction in general?) into th... (read more)

Help Rethink Priorities Use Data for Animals, Longtermism, and EA

We'll likely have at least one more internship round before you graduate, so stay tuned! 

Linch's Shortform

Hmm I feel more uneasy about the truthiness grounds of considering some of these examples as "ground truth" (except maybe the Clauset et al example, not sure). I'd rather either a) train people to Red Team existing EA orthodoxy stuff and let their own internal senses + mentor guidance decide whether the red teaming is credible or b) for basic scientific literacy stuff where you do want clear ground truths, let them challenge stuff that's closer to obvious junk (Why We Sleep, some climate science stuff, maybe some covid papers, maybe pull up examples from Calling Bullshit, which I have not read).

4Max_Daniel20dThat seems fair. To be clear, I think "ground truth" isn't the exact framing I'd want to use, and overall I think the best version of such an exercise would encourage some degree of skepticism about the alleged 'better' answer as well. Assuming it's framed well, I think there are both upsides and downsides to using examples that are closer to EA vs. clearer-cut. I'm uncertain which would seem better overall if I could only do one of them.

Another advantage of my suggestion, in my view, is that it relies less on mentors. I'm concerned that having mentors who are less epistemically savvy than the best participants can detract a lot from the optimal value the exercise might provide, and that it would be super hard to ensure adequate mentor quality for some audiences I'd want to use this exercise for. Even if you're less concerned about this, relying on any kind of plausible mentor seems less scalable than a version that only relies on access to published material.
Linch's Shortform

Hmm, I think the most likely way downside stuff will happen is by flipping the sign rather than reducing the magnitude; curious why your model is different.

I wrote a bit more in the linked shortform.

Linch's Shortform

FWIW I'm also skeptical of naive ex ante differences of >~2 orders of magnitude between causes, after accounting for meta-EA effects. That said, I also think maybe our culture will be better if we celebrate doing naively good things over doing things that are externally high status.* 

But I don't feel too strongly, main point of the shortform was just that I talk to some people who are disillusioned because they feel like EA tells them that their jobs are less important than other jobs, and I'm just like, whoa, that's just such a weird impression on... (read more)

2Aaron Gertler19dThis is a reasonable theory. But I think there are lots of naively good things that are broadly accessible to people in a way that "janitor at MIRI" isn't, hence my critique. (Not that this one Shortform post is doing anything wrong on its own — I just hear this kind of example used too often relative to examples like the ones I mentioned, including in this popular post [https://forum.effectivealtruism.org/posts/cCrHv4Rn3StCXqgEw/in-praise-of-unhistoric-heroism] , though the "sweep the floors at CEA" example was a bit less central there.)
2Habryka20dI feel like the meta effects are likely to exaggerate the differences, not reduce them? Surprised about the line of reasoning here.
Linch's Shortform

I agree with this, and also I did try emphasizing that I was only using MIRI as an example. Do you think the post would be better if I replaced MIRI with a hypothetical example? The problem with that is that then the differences would be less visceral. 

Linch's Shortform

I think the world either ends (or suffers some other form of implied-permanent x-risk) in the next 100 years, or it doesn't. And if the world doesn't end in the next 100 years, we will eventually either a) settle the stars or b) end or be drastically curtailed at some point >100 years out.

I guess I assume b) is pretty low probability with AI, like much less than 99% chance. And 2 orders of magnitude isn't much when all the other numbers are pretty fuzzy and span that many orders of magnitude.

(A lot of this is pretty fuzzy).

2anonymous_ea20dSo is the basic idea that transformative AI not ending in an existential catastrophe is the major bottleneck on a vastly positive future for humanity?
EA needs consultancies

I do agree with you that client quality and incentives are a serious potential problem here, especially when we consider potential funders other than Open Phil. A potential solution here is for the rest of the EA movement to make it clear that "you are more likely to get future work if you write truthful things, even if they are critical of your direct client/more negative than your client wants or is incentivizing you to write/believe," but maybe this message/nuance is hard to convey and/or may not initially seem believable to people more used to other fields' norms.

EA needs consultancies

Thanks for the detailed response! 

The only factor particular to consulting that I could see weighing against truth-seeking would be the desire to sell future work to the client... but to me that's resolved by clients making clear that what the client values is truth, which would keep incentives well-aligned. 

Hmm, on reflection maybe the issue isn't as particular to consulting, like I think the issue here isn't that people by default have overwhelming incentives against truth, but just that actually seeking truth is such an unusual preference in t... (read more)
