RobBensinger

Comments

Launching a new resource: 'Effective Altruism: An Introduction'

Conversely, if the 80K intro podcast list was just tossed together in a few minutes without much concern for narrative flow / sequencing / cohesiveness, then I'm much less averse to redesign-via-quick-EA-Forum-comments. :)

Launching a new resource: 'Effective Altruism: An Introduction'

Possible-bias disclosure: am longtermist, focused on x-risk.

I haven't heard all of the podcast episodes under consideration, but methodologically I like the idea of there being a wide variety of 'intro' EA resources that reflect different views of which EA causes and approaches are best, cater to different audiences, and employ different communication/pedagogy methods. If there's an unresolved disagreement about one of those things, I'd usually rather see people make new intro resources to compete with the old one, rather than trying to make any one resource universally beloved (which can lead to mediocre, designed-by-committee end products that lack cohesion).

In this case, I'd rather see a new collection of podcast episodes that's more shorttermist, and see whether a cohesive, useful playlist can be designed that way.

And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I'd like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what "arcs" they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.

Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration.

Intuition check: If philosophy were a brain, and published articles were how it did its "thinking", then would it reach better conclusions if it avoided thinking about whether it's giving too much attention to a topic?

In the case of an individual, we value the idea of reflecting on your thought process and methodology. Reasoning about your own reasoning is good -- indeed, such thoughts can be among the most leveraged parts of a person's life, since any improvements you make to your allocation of effort or the quality of your reasoning will improve all your future reasoning.

The group version of this is reasoning about whether the group is reasoning well, or whether the group is misallocating its attention and effort.

You could argue that articles like this are unnecessary even when a field goes totally off the rails, because academic articles aren't the only way the field can think. Individuals in the field can think in the privacy of their own head, and reach correlated conclusions because the balance of evidence is easy to assess. They can talk at conferences, or send emails to each other.

But if those are an essential part of the intellectual process anyway, I guess I don't see the value in trying to hide that process from the public eye or the public record. And once they're public, I'm not sure it matters much whether it's a newspaper article, a journal article, or a blog post.

What are your main reservations about identifying as an effective altruist?

Yeah, I'm an EA: an Estimated-as-Effective-in-Expectation (in Excess of Endeavors with Equivalent Ends I've Evaluated) Agent with an Audaciously Altruistic Agenda.

Politics is far too meta

"It's frustrating to have people who agree with you bat for the other team."

I don't like "bat for the other team" here; it reminds me of "arguments are soldiers" and the idea that people on your "side" should agree your ideas are great, while the people who criticize your ideas are the enemy.

Criticism is good! Having accurate models of tractability (including political tractability) is good!

What I would say is:

  • Some "criticisms" are actually self-fulfilling prophecies, rather than being objective descriptions of reality. EAs aren't wary enough of these, and don't have strong enough norms against meta/PR becoming overrepresented or leaking into object-level discussions. This is especially bad in early-stage brainstorming and discussion.
  • On Doing the Improbable + Status Regulation and Anxious Underconfidence: EAs are far too inclined to abandon high-EV ideas that are <50% likely to succeed. There should be a far larger number of failures, weird experiments, and risky bets in EA. If you're too willing to give up at the smallest problem, then "seeking out criticism" can turn into "seeking out rationalizations for inaction" (or "seeking out rationalizations for only doing normal/simple/predictable things").

Using a general reference class when you have a better, more specific class available

I agree this is one of the biggest things EAs currently tend to get wrong. I'd distinguish two kinds of mistake here, both of which I think EAs tend to make:

  • Over-relying on outside views over inside views. Inside views (making predictions based on details and causal mechanisms) and outside views (making predictions based on high-level similarities) are both important, but EA currently puts too much thought into outside views and not enough into inside views. If you're NASA, your outside views help you predict budget and time overruns and build in good safety/robustness margins, while your inside views let you build a rocket at all.
  • Picking the wrong outside view / reference class, or not even considering the different reference classes on offer. Picking a good reference class can be extremely difficult; in some cases, many years of accumulated domain expertise may be the only thing that allows you to spot the right surface similarities to put your weight down on.

AMA: Tom Chivers, science writer, science editor at UnHerd

Relatedly, in my experience 'writing an article or blog post' can have bad effects on my ability to reason about stuff. I want to say things that are relevant and congruent and that flow together nicely; but my actual thought process includes a bunch of zig-zagging and updating and sorting-through-thoughts-that-don't-initially-make-perfect-crisp-sense. So focusing on the writing makes me focus less on my thought process, and it becomes tempting for me to mistake the writing process or written artifact for my thought process or beliefs.

You've spent a lot of time living and breathing EA/rationalist stuff, so I don't know that I have any advice that will be useful to you. But if I were giving advice to a random reporter, I'd warn about the above phenomenon and say that this can lead to overconfidence when someone's just getting started adding probabilistic forecasts to their blogging.

I think this calibration-and-reflection bug is important -- it's a bug in your ability to recognize what you believe, not just in your ability to communicate it -- and I think it's fixable with some practice, without having to do the superforecaster 'sink lots of hours into getting expertise about every topic you predict' thing.

(And I don't know, maybe the journey to fixing this could be an interesting one that generates an article of its own? Maybe a thing that could be linked to at the bottom of posts to give context for readers who are confused about why the numbers are there and why they're so low-confidence?)

AMA: Tom Chivers, science writer, science editor at UnHerd

If you haven't spent time on calibration training, I recommend it! Open Phil has a tool here: https://www.openphilanthropy.org/blog/new-web-app-calibration-training. Making good forecasts is a mix of 'understand the topic you're making a prediction about' and 'understand yourself well enough to interpret your own feelings of confidence'. I think most people can become pretty well-calibrated with an hour or two of practice, even if they mostly don't have expertise in the topics they're writing about.
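To make 'well-calibrated' a bit more concrete: here's a minimal, hypothetical sketch (illustrative only, not the Open Phil tool) of how you might check calibration from a track record, assuming you've logged each stated probability alongside whether the claim came true. Being well-calibrated just means that, within each confidence bucket, roughly that fraction of claims turn out true.

from collections import defaultdict

def calibration_report(predictions):
    """Group past predictions into confidence buckets and compare the average
    stated probability in each bucket to the fraction that actually came true."""
    buckets = defaultdict(list)
    for prob, happened in predictions:
        buckets[round(prob, 1)].append((prob, happened))  # bucket to nearest 10%

    for level in sorted(buckets):
        group = buckets[level]
        stated = sum(p for p, _ in group) / len(group)
        observed = sum(1 for _, h in group if h) / len(group)
        print(f"~{level:.0%} bucket: said {stated:.0%} on average, "
              f"{observed:.0%} came true ({len(group)} predictions)")

# Hypothetical reporter's track record: (stated probability, did it happen?)
calibration_report([
    (0.6, True), (0.6, False), (0.6, True),   # "60%" claims
    (0.9, True), (0.9, True), (0.9, False),   # "90%" claims
])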

And that's a valuable service in its own right, I think. It would be a major gift to the public even if the only take-away readers got from predictions at the end of articles were 'wow, even though these articles sound confident, the claims almost always tend to be 50% or 60% probable according to the reporter; guess I should keep in mind these topics are complex and these articles are being banged out in a few hours rather than being the product of months of study, so of course things are going to end up being pretty uncertain'.

If you also know enough about a topic to make a calibrated 80% or 90% (or 99%!) prediction about it, that's great. But one of the nice things about probabilities is just that they clarify what you're saying -- they can function like an epistemic status disclaimer that notes how uncertain you really are, even if it was hard to make your prose flow without sounding kinda confident in the midst of the article. Making probabilistic predictions doesn't have to be framed as 'here's me using my amazing knowledge of the world to predict the future'; it can just be framed as an attempt to disambiguate what you were saying in the article.

AMA: Tom Chivers, science writer, science editor at UnHerd

Thanks, Tom. :) I'm interested to hear about reporters who aren't "EA-ish" but are worth paying attention to anyway — I think sometimes EA's blind spots arise from things that don't have the EA "vibe" but that would come up in a search anyway if you just classified writers by "awesome", "insightful", "unusually rigorous and knowledgeable", "getting at something important", etc.

For people who missed my post: Politics Is Far Too Meta

Politics is far too meta

Thanks to Chana Messinger for discussing some of these topics with me. Any remaining errors in the post are society's fault for raising me wrong, not Chana's.

Note on the definitions: People use the word "meta" to refer to plenty of other things. If you're in a meeting to discuss Clinton's electability and someone raises a point of process, you might want to call that "meta" and distinguish it from "object-level" discussion of electability. When I define "meta", I'm just clarifying terminology in the post itself, not insisting that other posts use "meta" to refer to the exact same things.
