Proposal: I think building epistemic health infrastructure is currently the most effective way to improve EA epistemic health, and is the biggest gap in EA epistemics.
(Note: to keep this shortform short, I tried to be brief when writing the content below; as a result, the tone may come across as harsher than I intended.)
We talk a lot about epistemic health, but we have massively underinvested in infrastructure that safeguards it. While things like the EA Forum, EAG, and social circles at EA hubs are effective at spreading information and communicating ideas, to my knowledge there has been no systematic attempt to understand (and subsequently improve) how they affect epistemic health.
Examples of things that don't currently exist but that I would consider epistemic health infrastructure:
I plan to coordinate a discussion/brainstorming on this topic among people with relevant interests. Please do PM me if you're interested!
Four podcasts on animal advocacy that I recommend:
Off-topic: I also recommend the Nonlinear Library podcasts, which turn posts on the EA Forum and adjacent forums (LW, AF) into audio. There are different versions forming a series, including one containing the all-time top posts of the EA Forum. There's also a version containing the latest posts that meet a not-very-high karma bar; I use that version to keep track of EA news, and it has saved me a lot of time.
Hypothesis: in the face of cluelessness caused by flow-through effects, "paving the way for future progress" may be a robust benefit of altruistic actions.
Epistemic status: off-the-cuff thoughts, highly uncertain, a hypothesis instead of a conclusion
(In this shortform I will assume a consequentialist perspective.)
Take the abolition of slavery as an example. Abolition seems obviously positive at the object level. But when we take second-order effects into account, things become less clear (e.g. the meat-eater problem). However, I think the bad second-order effects (if any) can plausibly be outweighed by one big second-order benefit: the abolition of slavery paves the way for future moral progress, including (but not limited to) progress in our treatment of animals. For example, it seems likely to me that in a world with slavery, it would be much harder to advocate for the rights of human minorities, of animals, and of digital sentience.
I suspect this applies to many other cases too, including cases irrelevant to moral progress but relevant to some other kind of progress. The hypothesis might not change how we act by much, as we usually tend to ignore hard-to-evaluate second-order effects. It may provide a reason why an action is sometimes justified despite seemingly negative second-order effects, but I also worry that it could be abused as a rationalization for ignoring flow-through effects.
Summary: This is a slightly steelmanned version of an argument for creating a mass social movement as an effective intervention for animal advocacy (an approach I think is neglected by EA animal advocacy), based on a talk by people at Animal Think Tank. (Vote on my comment below to indicate whether you think it's worth expanding into a top-level post.)
link to the talk; there is an alternative version with clearer audio, whose contents I guess are similar, though I'm not sure. (This shortform doesn't cover all the content of the talk and has likely misinterpreted something in it; I recommend listening to the full talk.)
Epistemic status: An attempt at steelmanning the arguments, though I didn't try very hard - I just wrote down some arguments that occurred to me.
The claim: Creating a mass social movement around animals is more effective than top-down interventions (e.g. policy) and other interventions like vegan advocacy, at least on current margins.
A model of mass movements:
Strategies for mass movements:
Do you think it's worth expanding into a top-level post? Please vote on my comment below.
Statement: This shortform is worth expanding into a top-level post.
Please upvote or downvote this comment to indicate agreement or disagreement with the above statement. Please don't hesitate to cast downvotes.
If you think it's valuable, it would be great if you're willing to write this post, as I likely won't have time to do so. Please reach out if you're interested - I'd be happy to help by providing feedback etc., though I'm no expert on this topic.
Apologies for posting four shortforms in a row. I have accumulated quite a few ideas in recent days, and I'm pouring them all out.
Summary: When exploring/prioritizing causes and interventions, EA might be neglecting alternative future scenarios, especially along dimensions orthogonal to popular EA topics. We may need to consider causes/interventions that specifically target alternative futures, as well as add a "robustness across future worlds" dimension to the ITN framework.
Epistemic status: low confidence
In cause/intervention exploration, evaluation and prioritization, EA might be neglecting alternative future scenarios, e.g.
This is not about pushing for certain futures to be realized; instead, it's about what to do given that a certain future will be realized. Therefore, arguments against pushing for certain futures (e.g. low neglectedness) do not apply.
For example, an EA might de-prioritize pushing for future X due to its low neglectedness, but if they think X has a non-trivial probability of being realized, and its realization has rich implications for cause/intervention prioritization, then whenever doing prioritization they need to think about "what should I do in a world where X is realized?". This could mean:
In theory, the consideration of alternative futures should be captured by the ITN framework, but in practice it usually isn't. Therefore, it could be valuable to add one more dimension to the ITN framework: "robustness across future worlds".
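To make this concrete, here is a minimal toy sketch (in Python) of what adding a "robustness across future worlds" dimension to intervention scoring could look like. The scenarios, probabilities, and values are entirely hypothetical and purely illustrative, not a claim about any actual cause area:

```python
# Toy illustration: score interventions not only by expected value, but also by how
# robust that value is across hypothetical future worlds.
# All scenarios, credences, and numbers below are made up for illustration.

futures = {                                    # hypothetical future scenarios and rough credences
    "status_quo": 0.5,
    "radically_different_social_forms": 0.3,
    "altruism_hostile_world": 0.2,
}

# Estimated value of each intervention in each future (arbitrary units).
interventions = {
    "intervention_A": {"status_quo": 10, "radically_different_social_forms": 9, "altruism_hostile_world": 8},
    "intervention_B": {"status_quo": 30, "radically_different_social_forms": 2, "altruism_hostile_world": 0},
}

for name, values in interventions.items():
    expected = sum(p * values[f] for f, p in futures.items())
    worst_case = min(values.values())          # one crude proxy for robustness
    print(f"{name}: expected value = {expected:.1f}, worst-case value = {worst_case}")

# intervention_B wins on naive expected value (15.6 vs 9.3), but intervention_A is far
# more robust across futures (worst case 8 vs 0) -- the kind of consideration that an
# ITN-style analysis anchored on a single assumed future tends to miss.
```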
Also, there are different dimensions along which futures can differ. EA tends to have already considered the dimensions related to EA topics (e.g. which trajectory of AI is actualized), but tends to ignore the dimensions that aren't. This is unreasonable, as EA-topic-related dimensions aren't necessarily the dimensions along which futures vary the most.
Finally, note that in some future worlds it's easier to have high altruistic impact than in others. For example, in a capitalist world, altruists seem to be at quite a disadvantage relative to profit-seekers; under some alternative social forms, altruism plausibly becomes much easier and more impactful, while under others it may become even harder. In such cases, we may want to prioritize the futures that have the most potential for current altruistic interventions.
Epistemic status: I only spent 10 minutes thinking about this before I started writing.
Idea: Funders may want to pre-commit to awarding whoever accomplishes a certain goal. (e.g. a funder like Open Phil could commit to awarding a pool of money to the people/orgs who reduce meat consumption to a certain level, with the pool split in proportion to contribution)
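As a minimal sketch of the proposed payout rule (the pool size, contributor names, and contribution scores below are made-up numbers, purely for illustration), the pro-rata split could look like this:

```python
# Toy illustration of the pre-committed prize pool: the pool is split among
# contributors in proportion to their assessed contribution toward the goal.
# Pool size and contribution scores are hypothetical.

pool = 1_000_000  # total pre-committed award (e.g. dollars)

# Assessed contributions toward the goal (arbitrary units, however the funder measures them).
contributions = {"org_A": 6, "org_B": 3, "person_C": 1}

total = sum(contributions.values())
payouts = {name: pool * share / total for name, share in contributions.items()}

print(payouts)  # {'org_A': 600000.0, 'org_B': 300000.0, 'person_C': 100000.0}
```

Of course, the hard part is assessing each party's contribution after the fact; the code above only shows the mechanical split once those assessments exist.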
Detailed considerations:
This can be seen as a version of retroactive funding, but it's special in that the funder makes a pre-commitment.
(I don't know a lot about retroactive funding/impact markets, so please correct me if I'm wrong on the comparisons below)
Compared to other forms of retroactive funding, this leads to the following benefits:
... but also the following detriments:
Compared to classical grant-proposal-based funding mechanisms, this leads to the following benefits:
... but also the following detriments:
Important points:
One doubt about superrationality:
(I guess similar discussions must have happened elsewhere, but I can't find them. I am new to decision theory and superrationality, so my thinking may very well be wrong.)
First, I present an inaccurate summary of what I want to say, to give a rough idea:
Then I shall elaborate with an example:
After writing this down, I'm seeing a possible response to the argument above:
However: