Buck's Shortform

by Buck · 13th Sep 2020 · 37 comments

I’ve recently been thinking about medieval alchemy as a metaphor for longtermist EA.

I think there’s a sense in which it was an extremely reasonable choice to study alchemy. The basic hope of alchemy was that by fiddling around in various ways with substances you had, you’d be able to turn them into other things which had various helpful properties. It would be a really big deal if humans were able to do this.

And it seems a priori pretty reasonable to expect that humanity could get way better at manipulating substances, because there was an established history of people figuring out ways that you could do useful things by fiddling around with substances in weird ways, for example metallurgy or glassmaking, and we have lots of examples of materials having different and useful properties. If you had been particularly forward thinking, you might even have noted that it seems plausible that we’ll eventually be able to do the full range of manipulations of materials that life is able to do.

So I think that alchemists deserve a lot of points for spotting a really big and important consideration about the future. (I actually have no idea if any alchemists were thinking about it this way; that’s why I billed this as a metaphor rather than an analogy.) But they weren’t really very correct about how anything worked, and so most of their work before 1650 was pretty useless. 

It’s interesting to think about whether EA is in a similar spot. I think EA has done a great job of identifying crucial and underrated considerations about how to do good and what the future will be like, eg x-risk and AI alignment. But I think our ideas for acting on these considerations seem much more tenuous. And it wouldn’t be super shocking to find out that later generations of longtermists think that our plans and ideas about the world are similarly inaccurate.

So what should you have done if you were an alchemist in the 1500s who agreed with this argument that you had some really underrated considerations but didn’t have great ideas for what to do about them? 

I think that you should probably have done some of the following things:

  • Try to establish the limits of your knowledge and be clear about the fact that you’re in possession of good questions rather than good answers.
  • Do lots of measurements, write down your experiments clearly, and disseminate the results widely, so that other alchemists could make faster progress.
  • Push for better scientific norms. (Scientific norms were in fact invented in large part by Robert Boyle for the sake of making chemistry a better field.)
  • Work on building devices which would enable people to do experiments better.

Overall I feel like the alchemists did pretty well at making the world better, and if they’d been more altruistically motivated they would have done even better.

There are some reasons to think that pushing early chemistry forward is easier than working on improving the long-term future. In particular, you might think that it’s only possible to work on x-risk stuff around the time of the hinge of history.

Huh, interesting thoughts, have you looked into the actual motivations behind it more? I'd've guessed that there was little "big if true" thinking in alchemy and mostly hopes for wealth and power.

Another thought: I suppose alchemy was more technical than something like magical potion brewing and in that way attracted other kinds of people, making it more proto-scientific? Another similar comparison might be sincere altruistic missionaries who work on finding the "true" interpretation of the Bible/Koran/..., sharing their progress in understanding it and working on convincing others in order to save them.

Regarding pushing chemistry being easier than longtermism, I'd have guessed the big reasons why pushing scientific fields is easier are the possibility of repeating experiments and the profitability of the knowledge. Are there really longtermists who find it plausible we can only work on x-risk stuff around the hinge? Even patient longtermists seem to want to save resources and, I suppose, invest in other capacity building. Ah, or do you mean "it's only possible to *directly* work on x-risk stuff", vs. indirectly? It just seemed odd to suggest that everything longtermists have done so far has not affected the probability of eventual x-risk; at the very least it has set the longtermism movement in motion earlier, shaping the culture and thinking style and so forth via institutions like FHI.

Here's a crazy idea. I haven't run it by any EAIF people yet.

I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)

Basic structure:

  • Someone picks a book they want to review.
  • Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
  • They write a review, and send it to me.
  • If it’s the kind of review I want, I give them $500 in return for them posting the review to EA Forum or LW with a “This post sponsored by the EAIF” banner at the top. (I’d also love to set up an impact purchase thing but that’s probably too complicated).
  • If I don’t want to give them the money, they can do whatever with the review.

What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:

  • Things directly related to traditional EA topics
  • Things about the world more generally. Eg macrohistory, how governments work, The Doomsday Machine, history of science (eg Asimov’s “A Short History of Chemistry”)
  • I think that books about self-help, productivity, or skill-building (eg management) are dubiously on topic.

Goals:

  • I think that these book reviews might be directly useful. There are many topics where I’d love to know the basic EA-relevant takeaways, especially when combined with basic fact-checking.
  • It might encourage people to practice useful skills, like writing, quickly learning about new topics, and thinking through what topics would be useful to know more about.
  • I think it would be healthy for EA’s culture. I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff. I think that this might be improved both by people writing these reviews and people reading them.
    • Conversely, sometimes I worry that rationalists are too interested in thinking about the world by introspection or weird analogies relative to learning many facts about different aspects of the world; I think book reviews would maybe be a healthier way to direct energy towards intellectual development.
  • It might surface some talented writers and thinkers who weren’t otherwise known to EA.
  • It might produce good content on the EA Forum and LW that engages intellectually curious people.

Suggested elements of a book review:

  • One paragraph summary of the book
  • How compelling you found the book’s thesis, and why
  • The main takeaways that relate to vastly improving the world, with emphasis on the surprising ones
  • Optionally, epistemic spot checks
  • Optionally, “book adversarial collaborations”, where you actually review two different books on the same topic.

I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff.

I share this concern, and I think a culture with more book reviews is a great way to achieve that (I've been happy to see all of Michael Aird's book summaries for that reason).

CEA briefly considered paying for book reviews (I was asked to write this review as a test of that idea). IIRC, the goal at the time was more about getting more engagement from people on the periphery of EA by creating EA-related content they'd find interesting for other reasons. But book reviews as a push toward levelling up more involved people // changing EA culture is a different angle, and one I like a lot.

One suggestion: I'd want the epistemic spot checks, or something similar, to be mandatory. Many interesting books fail the basic test of "is the author routinely saying true things?", and I think a good truth-oriented book review should check for that.

One suggestion: I'd want the epistemic spot checks, or something similar, to be mandatory. Many interesting books fail the basic test of "is the author routinely saying true things?", and I think a good truth-oriented book review should check for that.

I think that this may make sense / probably makes sense for receiving payment for book reviews. But I think I'd be opposed to discouraging people from just posting book summaries/reviews/notes in general unless they do this. 

This is because I think it's possible to create useful book notes posts in only ~30 mins of extra time on top of the time one spends reading the book and making Anki cards anyway (assuming someone is making Anki cards as they read, which I'd suggest they do). (That time includes writing key takeaways from memory or adapting them from rough notes, copying the cards into the editor and formatting them, etc.) Given that, I think it's worthwhile for me to make such posts. But even doubling that time might make it no longer worthwhile, given how stretched my time is. 

Me doing an epistemic spot check would also be useful for me anyway, but I don't think useful enough to justify the time, relative to focusing on my main projects whenever I'm at a computer, listening to books while I do chores etc., and churning out very quick notes posts when I finish.

All that said, I think highlighting the idea of doing epistemic spot checks, and highlighting why it's useful, would be good. And Michael2019 and MichaelEarly2020 probably should've done such epistemic spot checks and included them in book notes posts (as at that point I knew less and my time was less stretched), as probably should various other people. And maybe I should still do it now for the books that are especially relevant to my main projects.

I think that this may make sense / probably makes sense for receiving payment for book reviews. But I think I'd be opposed to discouraging people from just posting book summaries/reviews/notes in general unless they do this. 

Yep, agreed. If someone is creating e.g. an EAIF-funded book review, I want it to feel very "solid", like I can really trust what they're saying and what the author is saying. 

But I also want Forum users to feel comfortable writing less time-intensive content (like your book notes). That's why we encourage epistemic statuses, have Shortform as an option, etc.

(Though it helps if, even for a shorter set of notes, someone can add a note about their process. As an example: "Copying over the most interesting bits and my immediate impressions. I haven't fact-checked anything, looked for other perspectives, etc.")

Yeah, I entirely agree, and your comment makes me realise that, although I make my process fairly obvious in my posts, I should probably in future add almost the exact sentences "I haven't fact-checked anything, looked for other perspectives, etc.", just to make that extra explicit. (I didn't interpret your comment as directed at my posts specifically - I'm just reporting a useful takeaway for me personally.)

Yeah, I really like this. SSC currently has a book-review contest running, and maybe LW and the EAF could do something similar? (Probably not a contest, but something that creates a bit of momentum behind the idea of doing this)

This does seem like a good model to try.

I'd be interested in this. I've been posting book reviews of the books I read to Facebook - mostly for my own benefit. These have mostly been written quickly, but if there was a decent chance of getting $500 I could pick out the most relevant books and relisten to them and then rewrite them.

I haven't read any of those reviews you've posted on FB, but I'd guess you should in any case post them to the Forum! Even if you don't have time for any further editing or polishing.

I say this because:

  • This sort of thing often seems useful in general
  • People can just ignore them if they're not useful, or not relevant to them

Maybe a decent chance of getting $500 for them and/or relistening and rewriting would make them even better - I'm just saying that this simple step of putting them on the Forum already seems net positive anyway.

Could be as top-level posts or as shortforms, depending on the length, substantiveness, polish, etc.

I've thought about this before and I would also like to see this happen.

Quick take is this sounds like a pretty good bet, mostly for the indirect effects. You could do it with a 'contest' framing instead of a 'I pay you to produce book reviews' framing; idk whether that's meaningfully better.

Yeah, this seems good to me. 

I also just think in any case more people should post their notes, key takeaways, and (if they make them) Anki cards to the Forum, as either top-level posts or shortforms. I think this need only take ~30 mins of extra time on top of the time they spend reading or note-taking or whatever for their own benefit. (But doing what you propose would still add value by incentivising more effortful and even more useful versions of this.)

There are many topics where I’d love to know the basic EA-relevant takeaways [...]

The main takeaways that relate to vastly improving the world, with emphasis on the surprising ones

Yeah, I think this is worth emphasising, since:

  • Those are things existing, non-EA summaries of the books are less likely to provide
  • Those are things that even another EA reading the same book might not think of
    • Coming up with key takeaways is an analytical exercise and will often draw on specific other knowledge, intuitions, experiences, etc. the reader has

Also, readers of this shortform may find posts tagged effective altruism books interesting.

I don't think it's crazy at all. I think this sounds pretty good.

You can already pay for book reviews - what would make these different?

That might achieve the "these might be directly useful" and "produce interesting content" goals, if the reviewers knew how to summarize the books from an EA perspective, how to do epistemic spot checks, and so on, which they probably don't. It wouldn't achieve any of the other goals, though.

I wonder if there are better ways to encourage and reward talented writers to look for outside ideas - although I agree book reviews are attractive in their simplicity!

Edited to add: I think that I phrased this post misleadingly; I meant to complain mostly about low quality criticism of EA rather than eg criticism of comments. Sorry to be so unclear. I suspect most commenters misunderstood me.

I think that EAs, especially on the EA Forum, are too welcoming to low quality criticism [EDIT: of EA]. I feel like an easy way to get lots of upvotes is to make lots of vague critical comments about how EA isn’t intellectually rigorous enough, or inclusive enough, or whatever. This makes me feel less enthusiastic about engaging with the EA Forum, because it makes me feel like everything I’m saying is being read by a jeering crowd who just want excuses to call me a moron.

I’m not sure how to have a forum where people will listen to criticism open mindedly which doesn’t lead to this bias towards low quality criticism.

1. At an object level, I don't think I've noticed the dynamic particularly strongly on the EA Forum (as opposed to eg. social media). I feel like people are generally pretty positive about each other/the EA project (and if anything are less negative than is perhaps warranted sometimes?). There are occasionally low-quality critical posts (that to some degree read to me as status plays) that pop up, but they usually get downvoted fairly quickly.

2. At a meta level, I'm not sure how to get around the problem of having a low bar for criticism in general. I think as an individual it's fairly hard to get good feedback without also being accepting of bad feedback, and likely something similar is true of groups as well?

I feel like an easy way to get lots of upvotes is to make lots of vague critical comments about how EA isn’t intellectually rigorous enough, or inclusive enough, or whatever. This makes me feel less enthusiastic about engaging with the EA Forum, because it makes me feel like everything I’m saying is being read by a jeering crowd who just want excuses to call me a moron.

Could you unpack this a bit? Is it the originating poster who makes you feel that there's a jeering crowd, or the people up-voting the OP which makes you feel the jeers?

As counterbalance...

Writing, and sharing your writing, is how you often come to know your own thoughts. I often recognise the kernel of truth someone is getting at before they've articulated it well, both in written posts and verbally. I'd rather encourage someone for getting at something even if it was lacking, and then guide them to do better. I'd especially prefer to do this given I personally know that it's difficult to make time to perfect a post whilst doing a job and other commitments.

This is even more the case when it's on a topic that hasn't been explored much, such as biases in thinking common to EAs or diversity issues. I accept that in liberal circles being critical on the basis of diversity and inclusion or cognitive biases is a good signalling-win, and you might think it would follow suit in EA. But I'm reminded of what Will MacAskill said about 8 months ago on an 80k podcast: that he lay awake thinking his reputation would be in tatters after posting on the EA Forum, that his post would be torn to shreds (didn't happen). For quite some time I was surprised at the diversity elephant in the room in EA, and welcomed it when these critiques came forward. But I was in the room and not pointing out the elephant for a long time because I - like Will - had fears about being torn to shreds for putting myself out there, and I don't think this is unusual.

I also think that criticisms of underlying trends in groups are really difficult to get at in a substantive way, and though they often come across as put-downs from someone who wants to feel bigger, it is not always clear whether that's due to authorial intent or the reader's perception. I still think there's something that can be taken from them, though. I remember a scathing article about yuppies who listen to NPR to feel educated and part of the world for signalling purposes. It was very mean-spirited but definitely gave me food for thought on my media consumption and what I am (not) achieving from it. I think a healthy attitude for a community is a willingness to find usefulness in seemingly threatening criticism. As all groups are vulnerable to the effects of polarisation and fractiousness, this attitude could be a good protective element.

So in summary, even if someone could have done better on articulating their 'vague critical comments', I think it's good to encourage the start of a conversation on a topic which is not easy to bring up or articulate, but is important. So I would say go on ahead and upvote that criticism whilst giving feedback on ways to improve it. If that person hasn't nailed it, it's started the conversation at least, and maybe someone else will deliver the argument better. And I think there is a role for us as a community to be curious and open to 'vague critical comments' and find the important message, and that will prove more useful than the alternative of shunning it.

I have felt this way as well. I have been a bit unhappy with how many upvotes some of my own critiques, which I consider low quality, have gotten (and think I may have fallen prey to a poor incentive structure there). Over the last couple of months I have tried harder to avoid that by running through a mental checklist before I post anything, but I'm not sure whether I am succeeding. At least I have gotten fewer wildly upvoted comments!

I've upvoted some low quality criticism of EA. Some of this is due to emotional biases or whatever, but a reason I still endorse is that I haven't read strong responses to some obvious criticism.

Example: I currently believe that an important reason EA is slightly uninclusive and moderately undiverse is that EA community-building was targeted at people with a lot of power as a necessary strategic move. Rich people, top university students, etc. It feels like it's worked, but I haven't seen a good writeup of the effects of this.

I think the same low-quality criticisms keep popping up because there's no quick rebuttal. I wish there were a post of "fallacies about problems with EA" that one could quickly link to.

I think that EAs, especially on the EA Forum, are too welcoming to low quality criticism.

Can you show an actual example of what exactly you mean?

I thought this post was really bad, basically for the reasons described by Rohin in his comment. I think it's pretty sad that that post has positive karma.

I actually strong upvoted that post, because I wanted to see more engagement with the topic, decision-making under deep uncertainty, since that's a major point in my skepticism of strong longtermism. I just reduced my vote to a regular upvote. It's worth noting that Rohin's comment had more karma than the post itself (even before I reduced my vote).

I pretty much agree with your OP. Regarding that post in particular, I am uncertain about whether it's a good or bad post. It's bad in the sense that its author doesn't seem to have a great grasp of longtermism, and the post basically doesn't move the conversation forward at all. It's good in the sense that it's engaging with an important question, and the author clearly put some effort into it. I don't know how to balance these considerations.

I agree that post is low-quality in some sense (which is why I didn't upvote it), but my impression is that its central flaw is being misinformed, in a way that's fairly easy to identify. I'm more worried about criticism where it's not even clear how much I agree with the criticism or where it's socially costly to argue against the criticism because of the way it has been framed.

It also looks like the post got a fair number of downvotes, and that its karma is way lower than for other posts by the same author or on similar topics. So it actually seems to me the karma system is working well in that case.

(Possibly there is an issue where "has a fair number of downvotes" on the EA Forum corresponds to "has zero karma" in fora with different voting norms/rules, and so the former can appear too positive here if one is more used to fora with the latter norm. Conversely, I used to be confused about why posts on the Alignment Forum that seemed great to me had more votes than karma score.)

It also looks like the post got a fair number of downvotes, and that its karma is way lower than for other posts by the same author or on similar topics. So it actually seems to me the karma system is working well in that case.

That's what I thought as well. The top critical comment also has more karma than the top level post, which I have always considered to be functionally equivalent to a top level post being below par.

I agree with this as stated, though I'm not sure how much overlap there is between the things we consider low-quality criticism. (I can think of at least one example where I was mildly annoyed that something got a lot of upvotes, but it seems awkward to point to publicly.)

I'm not so worried about becoming the target of low-quality criticism myself. I'm actually more worried about low-quality criticism crowding out higher-quality criticism. I can definitely think of instances where I wanted to say X but then was like "oh no, if I say X then people will lump this together with some other person saying nearby thing Y in a bad way, so I either need to be extra careful and explain that I'm not saying Y or shouldn't say X after all".

I'm overall not super worried because I think the opposite failure mode, i.e. appearing too unwelcoming of criticism, is worse.

I've proposed before that voting shouldn't be anonymous, and that (strong) downvotes should require explanation (either your own comment or a link to someone else's). Maybe strong upvotes should, too?

Of course, this is perhaps a bad sign about the EA community as a whole, and fixing forum incentives might hide the issue.

This makes me feel less enthusiastic about engaging with the EA Forum, because it makes me feel like everything I’m saying is being read by a jeering crowd who just want excuses to call me a moron.

How much of this do you think is due to the tone or framing of the criticism rather than just its content (accurate or not)?

I've proposed before that voting shouldn't be anonymous, and that (strong) downvotes should require explanation (either your own comment or a link to someone else's). Maybe strong upvotes should, too?

It seems this could lead to a lot of comments and very rapid ascending through the meta hierarchy! What if I want to strong downvote your strong downvote explanation?

It seems this could lead to a lot of comments and very rapid ascending through the meta hierarchy! What if I want to strong downvote your strong downvote explanation?

I don't really expect this to happen much, and I'd expect strong downvotes to decay quickly down a thread (which is my impression of what happens now when people do explain voluntarily), unless people are actually just being uncivil. 

I also don't see why this would be a particularly bad thing. I'd rather people hash out their differences properly and come to a mutual understanding than essentially just call each other's comments very stupid without explanation.

I thought the same thing recently.

I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.

I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.

But I don’t think that this prediction is true: I think that I see a weak positive correlation between how altruistic people are and how good their epistemics seem.

----

I think the main reason for this is that striving for accurate beliefs is unpleasant and unrewarding. In particular, having accurate beliefs involves doing things like trying actively to step outside the current frame you’re using, and looking for ways you might be wrong, and maintaining constant vigilance against disagreeing with people because they’re annoying and stupid.

Altruists often seem to me to do better than people who value epistemics purely for its own sake; I think this is because valuing epistemics instrumentally, as part of trying to do good, has some attractive properties compared to valuing it terminally. One reason this is better is that it means that you’re less likely to stop being rational when it stops being fun. For example, I find many animal rights activists very annoying, and if I didn’t feel tied to them by virtue of our shared interest in the welfare of animals, I’d be tempted to sneer at them. 

Another reason is that if you’re an altruist, you find yourself interested in various subjects that aren’t the subjects you would have learned about for fun--you have less of an opportunity to only ever think in the way you think in by default. I think that it might be healthy that altruists are forced by the world to learn subjects that are further from their predispositions. 

----

I think it’s indeed true that altruistic people sometimes end up mindkilled. But I think that truth-seeking-enthusiasts seem to get mindkilled at around the same rate. One major mechanism here is that truth-seekers often start to really hate opinions that they regularly hear bad arguments for, and they end up rationalizing their way into dumb contrarian takes.

I think it’s common for altruists to avoid saying unpopular true things because they don’t want to get in trouble; I think that this isn’t actually that bad for epistemics.

----

I think that EAs would have much worse epistemics if EA wasn’t pretty strongly tied to the rationalist community; I’d be pretty worried about weakening those ties. I think my claim here is that being altruistic seems to make you overall a bit better at using rationality techniques, instead of it making you substantially worse.

I tried searching the literature a bit, as I'm sure that there are studies on the relation between rationality and altruistic behavior. The most relevant paper I found (from about 20 minutes of search and reading) is The cognitive basis of social behavior (2015). It seems to agree with your hypothesis. From the abstract:

Applying a dual-process framework to the study of social preferences, we show in two studies that individuals with a more reflective/deliberative cognitive style, as measured by scores on the Cognitive Reflection Test (CRT), are more likely to make choices consistent with “mild” altruism in simple non-strategic decisions. Such choices increase social welfare by increasing the other person’s payoff at very low or no cost for the individual. The choices of less reflective individuals (i.e. those who rely more heavily on intuition), on the other hand, are more likely to be associated with either egalitarian or spiteful motives. We also identify a negative link between reflection and choices characterized by “strong” altruism, but this result holds only in Study 2. Moreover, we provide evidence that the relationship between social preferences and CRT scores is not driven by general intelligence. We discuss how our results can reconcile some previous conflicting findings on the cognitive basis of social behavior.

Also relevant is This Review (2016) by Rand:

Does cooperating require the inhibition of selfish urges? Or does “rational” self-interest constrain cooperative impulses? I investigated the role of intuition and deliberation in cooperation by meta-analyzing 67 studies in which cognitive-processing manipulations were applied to economic cooperation games. My meta-analysis was guided by the social heuristics hypothesis, which proposes that intuition favors behavior that typically maximizes payoffs, whereas deliberation favors behavior that maximizes one’s payoff in the current situation. Therefore, this theory predicts that deliberation will undermine pure cooperation (i.e., cooperation in settings where there are few future consequences for one’s actions, such that cooperating is not in one’s self-interest) but not strategic cooperation (i.e., cooperation in settings where cooperating can maximize one’s payoff). As predicted, the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted over deliberation, but no significant difference in strategic cooperation between more intuitive and more deliberative conditions.

And This Paper (2016) on Belief in Altruism and Rationality claims that 

However, contra our predictions, cognitive reflection was not significantly negatively correlated with belief in altruism (r(285) = .04, p =.52, 95% CI [-.08,.15]).

Where belief in altruism is a measure of how much people believe that other people are acting out of care or compassion to others as opposed to self-interest.

Note: I think that this might be a delicate subject in EA and it might be useful to be more careful about alienating people. I definitely agree that better epistemics is very important to the EA community and to doing good generally, and that the ties to the rationalist community probably played (and play) a very important role; in fact I think that it is sometimes useful to think of EA as rationality applied to altruism. However, many amazing altruistic people have a totally different view of what good epistemics would be (never mind the question of "are they right?"), and many people already involved in the EA community seem to have a negative view of (at least some aspects of) the rationality community, both of which call for a more kind and appreciative conversation. 

In this shortform post, the most obvious point where I think that this becomes a problem is the example

For example, I find many animal rights activists very annoying, and if I didn’t feel tied to them by virtue of our shared interest in the welfare of animals, I’d be tempted to sneer at them. 

This is supposed to be an example of a case where people stop behaving rationally because behaving rationally would stop them from having fun. You could have used a lot of abstract or personal examples where people in their day-to-day work are not taking time to think something through, or seek negative feedback, or update their actions when (they notice that) they update their beliefs.