All of Linch's Comments + Replies

Pathways to impact for forecasting and evaluation

But there are also ways in which evaluations can have zero or negative impact. The one that worries me the most at the moment is people taking noisy evaluations too seriously, i.e., outsourcing too much of their thinking to imperfect evaluators. Lack of stakeholder buy-in doesn't seem like that much of a problem for the EA community: Reception for some of my evaluations posts was fairly warm, and funders seem keen to pay for evaluations [emphasis mine]

This doesn't seem like much evidence to me, for what it's worth. It seems very plausible to me that there'... (read more)

Where is a good place to start learning about Forecasting?

Also, Superforecasting is great but longer than it needs to be, I've heard that there are good summaries out there but don't personally know where they are. 

I like this summary from AI Impacts.

Comments for shorter Cold Takes pieces

Speaking of "I think that's a great point about the value of seeing people change their opinions in real time," if you don't mind me asking, would you like to mention a sentence or two on why you no longer endorse the above paragraphs?

Florin (6d): Yeah, there were two groups [https://journals.lww.com/pidj/Fulltext/2000/10001/Transmission_of_viral_respiratory_infections_in.2.aspx] that studied how rhinovirus is transmitted. One group was from the University of Virginia and found evidence of only fomite transmission. The study you cited is theirs. The other group was from the University of Wisconsin and found evidence for only airborne transmission. The Wisconsin group "...argued that the high rate of transmission via the hands in the Virginia experiments might be attributable to intensive contact with fresh wet secretions produced by volunteers who essentially blew their nose into their hand."
You are probably underestimating how good self-love can be

I'm curious what the path to impact here is. Have people tried this and found themselves doing more directly impactful work, producing more creative and useful research, etc.?

Concrete Biosecurity Projects (some of which could be big)

I can't find the source anymore but I remember being fairly convinced (70%?) that rhinovirus is probably spread primarily via fomites, fwiw. 

The main thing is that snot can carry a lot more viruses than aerosols. It's also suggestive to me that covid restrictions often had major effects on influenza and RSV, but probably much less so on rhinoviruses.

I also don't think we should necessarily overindex on viral respiratory diseases/pandemics, even though I agree they're the scariest. 

Linch (6d): Here's the study [https://pubmed.ncbi.nlm.nih.gov/6293304/] FYI.

rhinovirus is probably spread primarily via fomites


Until the COVID-19 pandemic, nearly everyone thought that most infectious respiratory diseases were transmitted via fomites and droplets, but unfortunately, this was based on shockingly poor evidence and assumptions. The material you've seen is based on this outdated consensus.

As I pointed out before, there are mechanistic reasons to doubt that pandemics can arise from fomite transmission.

However, if I squint hard enough, I can kinda, sorta see how young children in daycare might be infected by sharing toy... (read more)

Bryan Caplan on EA groups

I don't have a good model of what this topping out will look like. My intuition is that there's quite a bit of variance in the top 0.1%, though I agree the case is weaker for a normal distribution. My reasoning for why "student goodness" is probably not normally distributed is partly that if you care about multiple relatively independent factors (say smarts, conscientiousness, and general niceness) in a multiplicative way, and the individual factors are normally or log-normally distributed, the resulting product is approximately log-normal (exactly so for log-normal factors). 

One f... (read more)
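A quick simulation of the multiplicative model above (my illustrative sketch with made-up parameters, not a claim about actual students):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Three independent positive factors (e.g. smarts, conscientiousness,
# general niceness), each modeled as log-normal with illustrative sigma.
factors = rng.lognormal(mean=0.0, sigma=0.5, size=(3, n))

# "Student goodness" as the product of the factors. A product of
# log-normals is exactly log-normal, so the right tail is heavy.
goodness = factors.prod(axis=0)

# Spread within the top 0.1%: compare the 99.9th and 99.99th percentiles.
p999, p9999 = np.percentile(goodness, [99.9, 99.99])
print(f"99.9th pct = {p999:.1f}, 99.99th pct = {p9999:.1f}, "
      f"ratio = {p9999/p999:.2f}")
# For a normal distribution this ratio would be much closer to 1; the
# gap is the sense in which the top 0.1% has "quite a bit of variance".
```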

By topping out I just meant that Bryan's impression of students saturated at an upper bound for the best libertarian and EA students they met. This could be either because the students they met actually did have very high and similar performance, or because their model/test for assessing how good people are had the best students scoring very highly (so their test was not well calibrated to distinguish differences among top students).

I think this was probably a bit of an offhand remark, and Stefan is right that it's very weak evidence about the performance of top libertarian/EA students.

I personally agree with all of your comment and agree that the underlying distribution seems unlikely to be normal.

Linch's Shortform

What is the empirical discount rate in EA? 

I.e., what is the empirical historical discount rate for donations...

  • overall?
  • in global health and development?
  • in farmed animal advocacy?
  • in EA movement building?
  • in longtermism?
  • in everything else?

What have past attempts to look at this uncovered, as broad numbers? 

And what should this tell us about the discount rate going forwards?
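One way to operationalize the question (my framing, not a standard definition): if the best marginal donation opportunity produced $v_0$ units of good per dollar at time $0$ and $v_t$ units per dollar at time $t$, the implied annualized discount rate is

$$r = \frac{1}{t}\ln\frac{v_0}{v_t}$$

So, as illustrative arithmetic: if the best global health opportunity in 2012 did twice as much good per dollar as the best one in 2022, the implied historical rate was $r = \ln 2 / 10 \approx 7\%$ per year.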

Should EA be explicitly long-termist or uncommitted?

This is interesting and I'm glad you're bringing the discussion up. I think your footnote 2 demonstrates a lot of my disagreements with your overall post:

I'm using resources in a broad sense here to include everything from funding to attention to advice to slots at EAG. Also, given the amount of resources being deployed by EA is increasing, a shift in the distribution of resources towards long-termism may still involve an increase in the absolute number of resources dedicated towards short-termist projects.

Consider this section: 

Secondly, many of the

... (read more)
Evan_Gaensbauer (9d): Strongly upvoted. As I was hitting the upvote button, there was a little change in the existing karma from '4' to '3', which meant someone downvoted it. I don't know why and I consider it responsible of downvoters to leave a comment as to why they're downvoting but it doesn't matter because I gave this comment more karma than can be taken away so easily.
casebash (9d): Yeah, some parts of this discussion are more theoretical than practical and I probably should have highlighted this. Nonetheless, I think it's easy to make the mistake of saying "We'll never get to point X" and then end up having no idea of what to do if you actually get to point X. If the prominence of long-termism keeps growing within EA, who knows where we'll end up? This is an excellent point and now that you've explained this line of reasoning, I agree. I guess it's not immediately clear to me to what extent my proposals would shift limited community building and vetting capability away from long-termist projects. If, for example, Giving What We Can had additional money, it's not clear to me, although it's certainly possible, that they might hire someone who would otherwise go to work at a long-termist organisation. I guess it just seems to me that even though there are real human capital and vetting bottlenecks, you can work around them to a certain extent if you're willing to just throw money at the issue. Like there has to be something that's the equivalent of GiveDirectly for long-termism.
Bryan Caplan on EA groups

Why do I prefer EA to, say, libertarian student clubs?  First and foremost, libertarian student clubs don’t attract enough members.  Since their numbers are small, it’s simply hard to get a vibrant discussion going.  EA has much broader appeal.  

It's pretty cool that EA has a broader appeal among student clubs than the third most popular political party in America!

Furthermore, while the best libertarian students hold their own against the best EA students, medians tell a different story.  The median EA student, like the median

... (read more)
Stefan_Schubert (8d): Maybe he didn't meet that many students, or maybe his main point concerned the median students. I think this is quite weak evidence.
David Johnston (9d): Someone should offer him a bet! Best EA vs best lib in a debate or something.
calebp (9d): Maybe they think that both movements are large enough that they both 'top out', i.e., both movements have some people in the top 0.1% of students, or the top 10 EA students are about as good as the top 10 libertarian students. I'd still expect a larger movement to have more of the top 0.1% of people. I'm not exactly sure what Bryan was describing here, but if he expects that students are roughly normally distributed with regards to goodness, it seems like a reasonable hypothesis to me.
Making large donation decisions as a person focused on direct work

I'm curious if anyone else has experienced the same dilemma (maybe unlikely since I think Wave pays unusually high salaries for a role focused on direct work)

FWIW I'm in a similar position, largely because I did well in tech and have had some significant asset growth after I started doing direct work.

In addition to things you've mentioned, I've considered just funding a large fraction of Rethink's/my team's funding gap, though I'm confused about the epistemics/community virtues of donating to an employer.[1] I've also considered just funding indi... (read more)

Rhetorical Abusability is a Poor Counterargument

Consider proposition P:

P: consequentialism leads people to believe predictably wrong things or undertake predictably harmful actions

I think if it were the case that we received evidence for P, it would be reasonable to conclude that consequentialism is more likely to be wrong as a decision procedure[1] than if we received evidence for not-P.
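To spell out the inference (my gloss, not something from the original exchange): writing $W$ for "consequentialism is wrong as a decision procedure" and $E$ for evidence for P, Bayes' theorem gives

$$P(W \mid E) > P(W) \iff P(E \mid W) > P(E \mid \neg W)$$

i.e., receiving evidence for P should raise our credence that consequentialism is a bad decision procedure exactly insofar as that evidence is more expected in worlds where it is one.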

Do you disagree? If not, we should examine the distinction between "(heightened) rhetorical abusability" and P. My best guess is something that I often tritely summarize as "anything is possible when you lie":

Any

... (read more)
Rowing and Steering the Effective Altruism Movement

Thanks for this post! I think it's interesting, I'm very glad you wrote it, and I'm inclined to agree with a narrow version of it. I imagine many senior EAs will agree as well with a statement like "often to do much more good we're on the margin more bottlenecked by conceptual and empirical clarity than we are by lack of resources" (eg I'd usually be happier for a researcher-year to be spent helping the movement become less confused than for them to acquire resources via eg writing white papers that policymakers will agree with or via public comms or via m... (read more)

jtm (6d): Hey Linch, thanks for this thoughtful comment! Yeah, I agree that my examples of steering sometimes are closely related to other terms in Holden's framework, particularly equity – indeed I have a comment about that buried deep in a footnote [https://forum.effectivealtruism.org/posts/XweBntieePnzQyLtK/rowing-and-steering-the-effective-altruism-movement#fnpjk67svjpd] . One reason I think this happens is because I think a super important concept for steering is the idea of moral uncertainty, and taking moral uncertainty seriously can imply putting a greater weight on equity than you otherwise might. I guess another reason is that I tend to assume that effective steering is, as an empirical matter, more likely to be achieved if you incorporate a wide range of voices and perspectives. And this does in practice end up being similar to efforts to "amplify the voices and advance the interests of historically marginalized groups" that Holden puts under the category of equity. But yeah, like you say, it can be hard to differentiate whether people advocate for equity and diversity of perspectives for instrumental or intrinsic reasons (I'm keen on both). I also think your last remark is a fair critique of my post – perhaps I did bring in some more controversial (though, to me, compelling!) perspectives under the less controversial heading of steering. A very similar critique I've heard from two others is something like: "Is your argument purely that there isn't enough steering going on, or is it also that you disagree with the current direction of steering?" And I think, to be fair, that it's also partly the latter for me, at least on some very specific domains. But one response to that is that, yes, I disagree with some of the current steering – but a necessary condition for changing direction is that people talk/care/focus more on steering, so I'm going to make the case for that first. Thanks again for your comment!
The Bioethicists are (Mostly) Alright

In addition to what AllAmericanBreakfast said, the issue with "all Muslims are complicit in terrorism unless they loudly and publicly condemn terrorism" is that 

a. Not all terrorism is committed by Muslims. 

b. The shared notion between a) an Islamic terrorist and b) a normal guy who happens to be Muslim is a belief in "the will of God", and they have differing notions about what the will of God tells them to do.

c. In contrast (at least in AllAmericanBreakfast's telling), "the establishment" specifically appeals to the bioethics ... (read more)

The Bioethicists are (Mostly) Alright

Taking what you said at face value, what's going on here, institutionally? Philosophy is a nontrivially competitive field, and Stanford professorships aren't easy to get.

Davidmanheim (10d): They don't compete for jobs in "Philosophy," they compete for jobs in a specific department which specializes in, say, deconstructionist readings of Nietzsche's later work. (OK, I'm exaggerating slightly. But the point stands - they don't need to know anything about Philosophy as a whole to do their research and get papers published, or even to teach most of their classes.)
The Bioethicists are (Mostly) Alright

Fwiw when I see criticisms of a field, especially in a technical/semi-academic setting, I rarely assume the criticisms are about individuals; I generally assume they're about institutions. 

This is possibly epistemically unwise/to our detriment, see Dan Luu's article, and I do think maybe EA currently pays too much attention to ideas and institutions and not enough to people, at least publicly. But I think at the very least, the broad trend in public conversations is for e.g. a criticism about CEA to be more about the institution than specific individual... (read more)

Devin Kalish (11d): I think there's a ton to criticize in the institutions, don't get me wrong, I just disagree that that's how lots of the criticisms I see come off.
What quotes do you find most inspire you to use your resources (effectively) to help others?
Answer by Linch (Jan 05, 2022)

“There is beauty in the world and there is a horror,” she said, “and I would not miss a second of the beauty and I will not close my eyes to the horror.”

From The Unweaving of a Beautiful Thing, the winner of the recent creative writing contest.

The Unweaving of a Beautiful Thing

This is a beautiful story. Just yesterday I remarked that I thought I'd lost my ability to cry. Reading this story has restored that ability.

Linch's Shortform

I think I have a preference for typing "xrisk" over "x-risk": it's easier to type, it communicates the same information, and, as with other transitions (e-mail to email, long-termism to longtermism), I think the time has come for the unhyphenated version.

Curious to see if people disagree.

Linch's Shortform

I've now responded though I still don't see the connection clearly. 

Harrison D (17d): It's just that it related to a project/concept idea I have been mulling over for a while and seeking feedback on.
[Linkpost] Don't Look Up - a Netflix comedy about asteroid risk and realistic societal reactions (Dec. 24th)

My impression was that in early 2020, there were a lot of serious-sounding articles in the news about how worries about covid were covering up the much bigger problem of the flu.

Convergence thesis between longtermism and neartermism

This lens is in contrast to the approach that Effective Institutions Project is taking to the issue, which considers institutions on a case-by-case basis and tries to understand what interventions would cause those specific institutions to contribute more to the net good of humanity.

I'm excited about this! Do people at the Effective Institutions Project consider these institutions from an LT lens? If so, do they mostly have a "broad tent" approach to LT impacts, or more of a "targeted/narrow theory of change" approach?

IanDavidMoss (17d): Yes, we have an institutional prioritization analysis in progress that uses both neartermist and longtermist lenses explicitly and also tries to triangulate between them (in the spirit of Sam's advice that "Doing Both Is Best"). We'll be sending out a draft for review towards the end of this month and I'd be happy to include you in the distribution list if interested. With respect to LT impact/issues, it is a broad tent approach although the theory of change to make change in an institution could be more targeted depending on the specific circumstances of that institution.
Convergence thesis between longtermism and neartermism

I appreciate the (politer than me) engagement!

These are the key diagrams from Lizka's post: [diagrams not reproduced here]

The key simplifying assumption is that decision quality is orthogonal to value alignment. I don't believe this is literally true, but it's a good start. MichaelA et al.'s BIP (Benevolence, Intelligence, Power) ontology* is also helpful here.

If we treat Lizka's B in the first diagram ("a well-run government") as only weakly positive or neutral on the value-alignment axis from an LT perspective, and most other dots as negative, we'd yield a simplified result ... (read more)
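A toy formalization of that simplifying assumption (mine, not Lizka's): model an institution's expected LT contribution as value alignment $a$ times decision quality $q$,

$$V = a \cdot q, \qquad \frac{\partial V}{\partial q} = a$$

so generic decision-quality improvements help exactly when $a > 0$, and if most institutions' $a$ is negative from an LT perspective, broadly boosting $q$ is net negative.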

weeatquince (17d): Super thanks for the lengthy answer. I think we are mostly on the same page. Agree. And yes, to date I have focused on targeted interventions (e.g. improving government risk management functions) and value-aligning orgs (e.g. institutions for Future Generations). Agree. FWIW I think I would make this case about approval voting, as I believe aligning powerful actors' (elected officials') incentives with the population's incentives is a form of value-aligning. Not sure I would make this case for forecasting, but could be open to hearing others make the case. So where, if anywhere, do we disagree? Disagree. I don't see that as a worry. I have not seen evidence of any cases of this, and there are 100s of EA-aligned folk in the UK policy space. Where are you from? I have heard this worry so far only from people in the USA; maybe there are cultural differences or this has been happening there. Insofar as it is a risk, I would assume it might be less bad for actors working outside of institutions (campaigners, lobbyists), so I do think more EA-aligned institutions in this domain could be useful. I think a well-run government is pretty positive. Maybe it depends on the government (as you say, maybe there is a case for picking sides) and my experience is UK-based. But, for example, my understanding is there is some evidence that improved diplomacy practice is good for avoiding conflicts, and mismanagement of central government functions can lead to periods of great instability (e.g. financial crises). Also, a government is a collection of many smaller institutions, and when you get into the weeds of it, it becomes easier to pick and choose the sub-institutions that matter more.
Convergence thesis between longtermism and neartermism

Yeah, since almost all x-risk is anthropogenic, our prior that economic growth and scientific progress are good is very close to 50-50, and I have specific empirical (though still not very detailed) reasons to update in the negative direction (at least on the margin, as of 2022).

With regards to IIDM I don't see why that wouldn't be net positive.

I think this disentanglement by Lizka might be helpful*, especially if (like me) your empirical views about external institutions are a bit more negative than Lizka's.

*Disclaimer: I supervised her when she was writing this

weeatquince (18d): Hi. Thank you so much for the link, somehow I had missed that post by Lizka. Was great reading :-) To flag however I am still a bit confused. Lizka's post says "Personally, I think IIDM-style work is a very promising area for effective altruism", so I don't understand how you go from that to IIDM being net-negative. I also don't understand what the phrase "especially if (like me) your empirical views about external institutions are a bit more negative than Lizka's" means (like if you think institutions are generally not doing good then IIDM might be more useful, not less). I am not trying to be critical here. I am genuinely very keen to understand the case against. I work in this space so it would be really great to find people who think this is not useful and to understand their point of view.
Convergence thesis between longtermism and neartermism

I'm personally pretty skeptical that much of IIDM, economic growth, and meta-science is net positive, am confused about moral circle expansion (though intuitively feel the most positive about it on this list), and while I agree that "(safe) research and development of biotech is good," I suspect the word "safe" is doing a helluva lot of work here.

Also the (implicit) moral trade perspective here assumes that there exists large gains from trade from doing a bunch of investment into new cause/intervention areas that naively looks decent on both NT and LT grounds; it... (read more)

weeatquince (19d): Why are you sceptical of IIDM, meta-science, etc.? Would love to hear arguments against. The short argument for is that insofar as making the future go well means dealing with uncertainty and things that are hard to predict, then these seem like exactly the kinds of interventions to work on (as set out here [https://miro.medium.com/max/875/1*6nRue2O4zMO9xr8aiwUkxQ.png]).
Convergence thesis between longtermism and neartermism

The RCTs or other high-rigor evidence that would be most exciting for long-term impact probably aren't going to be looking at the same evidence base or metrics that would be the best for short-term impact. 

Convergence thesis between longtermism and neartermism

Here's my rushed, high-level take. I'll try to engage with specific subpoints later.

My views

Despite making the case for convergence being plausible this does still feel a bit contrived. I am sure if you put effort into it you could make a many weak arguments approach to show that nearterm and longterm approaches to doing good will diverge.

I feel like this is by far the most important part of this post and I think it should be highlighted more/put more upfront. The entire rest of the article felt like a concerted exercise in motivated reasoning (including t... (read more)

Have you considered switching countries to save money?

Yeah, my impression is that the Bahamas is more expensive than many parts of the US (and more expensive than the vast majority of the US if you aren't including housing), particularly if you're planning to live an "expat-y" lifestyle. 

Note that this impression is contested, see discussion here

"Disappointing Futures" Might Be As Important As Existential Risks

So I thought about this post a bit more, particularly the "we never saturate the universe with maximally flourishing beings" and "impossibility of reflective equilibrium" sections. 

If we accept something like the total view of population ethics with linear aggregation, it follows that we should enrich the universe with as much goodness as possible. That means creating maximum pleasure, or eudaimonia, or whatever it is we consider valuable.

[...]

This vision does not depend on specific assumptions about what "flourishing" looks like. It could fit the h

... (read more)
MichaelStJules (21d): How many plausible definitions of flourishing that differ significantly from each other do you expect there to be? One potential solution would be to divide the future spacetime (not necessarily into contiguous blocks) in proportion to our credences in them (or evenly), and optimize separately for the corresponding view in each. With equal weights, each of n views could get at least about 1/n of what it would if it had 100% weight (taking ratios of expected values), assuming there isn't costly conflict between the views and no view (significantly) negatively values what another finds near optimal in practice. They could potentially do much better with some moral trades and/or if there's enough overlap in what they value positively. One view going for a larger share would lead to zero-sum work and deadweight loss as others respond to it. I would indeed guess that a complex theory of flourishing ("complexity of value", objective list theories, maybe), a preference/desire view and hedonism would assign <1% value to each other's (practical) optima compared to their own. I think there could be substantial agreement between different complex theories of flourishing, though, since I expect them generally to overlap a lot in their requirements. I could also see hedonism and preference views overlapping considerably and having good moral trades, in case most of the resource usage is just to sustain consciousness (and not to instantiate preference satisfaction or pleasure in particular) and most of the resulting consciousness-sustaining structures/activity can be shared without much loss on either view. However, this could just be false.
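A minimal numeric sketch of the proportional-allocation idea Michael describes (my illustration; the credences and overlap numbers are made up, and I assume value scales linearly with resources):

```python
import numpy as np

# Credences in three moral views (illustrative numbers only).
views = ["hedonism", "preference", "objective_list"]
credences = np.array([0.4, 0.35, 0.25])

# overlap[i][j]: fraction of view i's value realized per unit of resources
# optimized for view j (diagonal = 1; off-diagonals encode shared value,
# e.g. consciousness-sustaining structures useful to both hedonism and
# preference views).
overlap = np.array([
    [1.0, 0.6, 0.2],
    [0.6, 1.0, 0.3],
    [0.2, 0.3, 1.0],
])

# Allocate the budget in proportion to credence; each view's realized
# value, as a fraction of what it would get with 100% of the budget,
# is then a weighted sum over the allocations.
realized = overlap @ credences
for view, c, r in zip(views, credences, realized):
    print(f"{view}: credence {c:.2f} -> {r:.0%} of its optimum (floor {c:.0%})")
```

With zero overlap each view gets exactly its credence share (the 1/n-style floor Michael mentions); positive overlap pushes every view above that floor.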
EA/Rationalist Safety Nets: Promising, but Arduous

Some anecdata that might or might not be helpful:

As I mentioned on FB, I didn't have a lot of money in 2017, and I was trying to transition jobs (not even to do something directly in EA, just to work in tech so I had more earning and giving potential). I'm really grateful to the EAs who lent me money, including you. If I instead did the standard "work a minimum wage job while studying in my off hours" (or worse, "work a minimum wage job while applying to normal grad jobs, and then work a normal grad job while studying in my off hours") route, I think my ca... (read more)

Linch's Shortform

I think many individual EAs should spend some time brainstorming and considering ways they can be really ambitious, eg come up with concrete plans to generate >100M in moral value, reduce existential risk by more than a basis point, etc.

Likewise, I think we as a community should figure out better ways to help people ideate and incubate such projects and ambitious career directions, as well as aim to become a community that can really help people both celebrate successes and to mitigate the individual costs/risks of having very ambitious plans fail.
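For concreteness on "a basis point" (my illustrative arithmetic, not from the original comment): a basis point is $0.01\%$, so reducing total existential risk from, say, $10\%$ to $9.99\%$ is a one-basis-point reduction, i.e. $\Delta p = 10^{-4}$. If you assign the long-run future an expected value of $V$, such an intervention is worth roughly $10^{-4}\,V$ in expectation.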

Nathan Young (16d): Some related thoughts: https://twitter.com/NathanpmYoung/status/1478342847187271687?s=20
Harrison D (17d): (Perhaps you could take a first step by responding to my DM 😉)
Should I fly instead of taking trains?

you don't want to take a moral position where it's ok to harm some people in order to help others "more effectively".

This is not a full defense of my normative ethics, but I think it's reasonable to "pull" in the classical trolley problem, and I want to note that I think this is the most common position among EAs, philosophers, and laymen.

In addition, the harm from increasing CO2 emissions is fairly abstract, and to me should not invoke many of the same non-consequentialist moral intuitions as e.g. agent-relative harms like lying, breaking a promise,... (read more)

Is EA compatible with technopessimism?

I recently convinced myself to be fairly technopessimistic in the short term (at least relative to some people I talk to in EA; unclear how this compares to e.g. online EAs or the population overall), though it's not a very exciting position and I don't know if I should prioritize writing up this argument over other productive things I could do. 

acylhalide (24d): Thanks, this is useful.
2021 AI Alignment Literature Review and Charity Comparison

Great work as usual. Here's a minor comment before I dig more substantively:

In the past I have had very demanding standards around Conflicts of Interest, including being critical of others for their lax treatment of the issue. Historically this was not an issue because I had very few conflicts. However this year I have accumulated a large number of such conflicts, and worse, conflicts that cannot all be individually publically disclosed due to another ethical constraint.

As such the reader should assume I could be conflicted on any and all reviewed organisa

... (read more)

You are correct that this would be much more useful - indeed this is essentially what I wrote into an earlier draft. Unfortunately the specific nature of the other ethical constraint makes it difficult to share even the existence of the conflict with any specific group/individual.

Should I fly instead of taking trains?

No, it could come from having a high-impact job (where nonzero marginal hours go into it) or from donating a fraction of the difference rather than all of the difference. 

I also think that if you believe that donations to other charities have higher marginal impact than donation to climate charities, it'd be less moral to donate to climate charities instead.

Guy Raveh (24d): True; this still means you're doing something with the "profit" from that extra time and not just letting the information sit in your head. You're putting it into an impactful job (and not playing videogames) or you're using the money to mitigate the damage. I think there are at least two points against believing this. First, you're directly harming the world in a specific way by flying instead of taking the train, and you don't want to take a moral position where it's ok to harm some people in order to help others "more effectively". Second, some cause areas lots of people here believe in are enticing in that investing in them moves the money back to you or to people you know, instead of directly to those you're trying to help. Which is not necessarily a reason to drop them, but is in my opinion certainly a reason not to treat them as the single cause you want to put all your eggs into. It's easier just to see them as the most moral, no matter the circumstances, but I think that's dangerous.
Movie review: Don't Look Up

As an anecdotal counterpoint, my girlfriend (not an EA, not an American) watched it with me and a friend on Christmas Eve and she said it was the best movie she saw this year, and enjoyed many parts of it (including parts I didn't like as much). 

JackM (20d): Yeah it seems quite polarising. I watched it with my flatmate (not EA, not American) and he absolutely loved it. I thought it was pretty mediocre.
Biosecurity needs engineers and materials scientists

Like Jackson mentioned, another biosecurity-relevant intervention where I think engineers would be useful would be in helping to design pandemic-safe refuges to help preserve civilization. My current belief as a non-expert is that this is quite high on I/N/T, though as usual there are nontrivial downside risks for a plan that's executed poorly. 

There are also cobenefits for shielding against risks other than bio, though my current best guess is that shielding against biorisk is the most important reason for refuges.

I'd be excited to talk to (civil) en... (read more)

The Possibility of an Ongoing Moral Catastrophe (Summary)

I considered Evan Williams' paper one of the most important papers in cause prioritization at the time, and I think I still broadly buy this. As I mention in this answer, there are at least 4 points his paper brought up that are nontrivial, interesting, and hard to refute.

If I were to write this summary again, I think I'd be noticeably more opinionated. In particular, a key disagreement I have with him (which I remember having at the time I was making the summary, but this never making it into my notes) is on the importance of the speed of moral progress v... (read more)

Response to Recent Criticisms of Longtermism

Thanks for the response, from you and others! I think I had a large illusion of transparency about how obviously wrong Torres' critiques are to common-sense reason and morality. Naively I'd have thought that they'd come across as clearly dumb to target audiences the way (e.g.) the 2013 Charity Navigator critique of EA did. But I agree that if you and others think that many people who could potentially do useful work in EA (e.g., promising members of local groups, or academic collaborators at Cambridge) would otherwise have read Torres' article and been per... (read more)

I don't think people being "persuaded" by Torres is the primary concern — rather, I think Torres could make people feel vaguely icky or concerned about longtermism, even if they still basically "believe in it", in a way that makes them less likely to get fully engaged / more likely to bounce off toward other EA topics. Even if those other topics are also very valuable, it seems good to have a reaction piece like this to counter the "ick" reactions and give people an easy way to see the full set of relevant arguments before they bounce.

External Evaluation of the EA Wiki

The EA Wiki seems probably worth funding, but it is not the most ambitious project that the main person behind it could be doing.[emphasis mine]

This is a really minor point, but I think your phrasing here is overly euphemistic. "most ambitious project" taken very literally is a) a very high bar and b) usually not a bar we want people to go for [1]. To the extent I understand your all-things-considered views correctly, I would prefer phrasings like "While I think on the margin this project is worth spending EA dollars on, I do not believe that this project ... (read more)

I thought it was clear enough, fwiw.

Response to Recent Criticisms of Longtermism

Hi! Thank you so much for this article. I have only skimmed it, but it appears substantive, interesting, and carefully done. 

Please don't take my question the wrong way, but may I ask what the motivation is for writing this article? Naively, this looks very detailed (43 minutes read according to the EAF, and you mention that you had to cut some sections) and possibly the most expansive public piece of research/communication you've done in effective altruism to date. While I applaud and actively encourage critiques of effective altruism and related con... (read more)

Dr. David Mathers (1mo): For what it's worth, I think the basic critique of total utilitarianism of 'it's just obviously more important to save a life than to bring a new one into existence' is actually very strong. I think insofar as longtermist folk don't see that, it's probably a) because it's so obvious that they are bored with it now and b) because Torres's tone is so obnoxious and plausibly motivated by personal animosity. But neither of those is a good reason to reject the objection!

Hello Linch, Sean and Marisa capture the reasons well. I have had several people outside EA/LT ask about the Torres essays and I didn't have a great response to point them to, so this response is written for them. I also posted it here in case others have a similar use for it. 

I totally understand your concerns. FWIW as a former group organizer, as the Torres pieces were coming out, I had a lot of members express serious concerns about longtermism as a result of the articles and ask for my thoughts about them, so I appreciate having something to point them to that (in my opinion) summarizes the counterpoints well.

I note the rider says it's not directed at regular forum users/people necessarily familiar with longtermism. 

The Torres critiques are getting attention in non-longtermist contexts, especially with people not very familiar with the source material being critiqued. I expect to find myself linking to this post regularly when discussing with academic colleagues who have come across the Torres critiques; several sections (the "missing context/selective quotations" section in particular) effectively demonstrate places where the critiques do not represent the source material entirely fairly.

An Emergency Fund for Effective Altruists

I wrote a prior version of this idea when I first got interested in EA: https://forum.effectivealtruism.org/posts/5jBa7chCZudMHWe39/donation-insurance

Like many semi-decent ideas, it was never actually implemented.

I like your conceptualization better, and I also think that compared to 6 or so years ago, EA now has both more money and more operational capacity, so I feel pretty good about partially or mostly refunding the earlier donors, particularly the ones who have fallen on hard times.

An Emergency Fund for Effective Altruists

A lot of these choices seem unnecessarily punitive to me, not sure.

Davidmanheim (1mo): That seems plausible - but I don't know how this would work if there wasn't any transparency about who was given money back, so I'm not sure how to avoid #3. And for #2, I think the point is that this isn't a payout for disaster, it's a temporary solution to replace savings. Once they are back on their feet and able to donate, I'd think that paying a larger part into the fund makes sense. But I agree that it's a different vision than using this as pure insurance.
EA megaprojects continued

Some quick thoughts: 

  • Word on the grapevine is that many universities have really poor operations capacity, including R1 research universities in the US and equivalent ones in Europe. It's unclear to me if an EA university can do better (eg by paying for more ops staff, by thinking harder about incentives), but it's at least not implausible.
    • Rethink Priorities, Open Phil, and MIRI all naively appear to have better ops than my personal impression of what ops at EA-affiliated departments in research universities look like.
  • Promotion tracks in most (but not
... (read more)
EA megaprojects continued

I wouldn't treat the upvotes there as much evidence; I think most EAs voting on these things don't have very good qualitative or quantitative models of xrisks and what it'd take to stop them. 

A reductio ad absurdum here you might have is whether this is an indictment of the karma system in general. I don't think it is, because (to pick a sample of other posts on the frontpage) posts about burnout and productivity can simply invoke people's internal sense/vibe of what makes them worried, so just using affect isn't terrible, posts about internship/job o... (read more)

Flimsy Pet Theories, Enormous Initiatives

I agree it provides stronger challenges. I think I disagree with the other claims as presented, but the sentence is not detailed enough for me to really know if I actually disagree.

Flimsy Pet Theories, Enormous Initiatives

This is the exact type of reasoning that would cause someone intuitively to think that space settlements are important - it's clearly a thing that increases the anti-fragility of humanity, even if you don't have exact models of the threats that it may help against. By increasing anti-fragility, you're increasing the ability to face unknown threats.  Certainly, you can get into specifics, and you can realize it doesn't make you as anti-fragile as you thought, but again, it's very easy to miss some other specifics that are unknown unknowns and totally reverse your conclusion.

This would be a good argument if Musk had built and populated Antarctica bunkers before going to space. 

It's pretty clear that being multiplanetary is more anti-fragile? It provides more optionality, allows for more differentiation and evolution, and provides stronger challenges.

What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?

This may sound really obvious in retrospect, but Evan G. Williams' 2015 paper (summarized here) felt pretty convincing to me that, conditional upon moral realism being broadly true, we're all almost certainly unknowingly guilty of large moral atrocities.

There are several steps here that I think are interesting:

  1. We may believe that this is a problem only for the "rest" of society; as enlightened vegan cosmopolitan longtermists, we see all the moral flaws that others do not.
    1. But this just has both a really bad historical track record and isn't very logically
... (read more)