I'd also like to reiterate the arguments Larry Temkin gave against international aid, since the post doesn't cover them. I'm not sure if I'm convinced by these arguments, but I do find them reasonable and worth serious consideration.
Comment from author: Note that I lean slightly towards the term "animal advocacy", so it's possible that my analysis contains a slight bias towards this term.
I like this idea of using structured discussion platforms to aggregate views on a topic.
However, there is a cost for an individual to switch to new platforms, so perhaps the harder task is to get a large number of EAs to use this platform.
I think building epistemic health infrastructure is currently the most effective way to improve EA epistemic health, and is the biggest gap in EA epistemics.
I elaborated on this in my shortform. If the suggestion above seems too vague, there are also examples in the shortform. (I plan to coordinate a discussion/brainstorming on this topic among people with relevant interests; please do PM me if you're interested.)
(I was late to the party, but since Nathan encourages late comments, I'm posting my suggestion anyway.)
Proposal: I think building epistemic health infrastructure is currently the most effective way to improve EA epistemic health, and is the biggest gap in EA epistemics.
Apologies for posting four shortforms in a row. I accumulated quite a few ideas in recent days, and I poured them all out.
Summary: When exploring/prioritizing causes and interventions, EA might be neglecting alternative future scenarios, especially along dimensions orthogonal to popular EA topics. We may need to consider causes/interventions that specifically target alternative futures, as well as add a "robustness across future worlds" dimension to the ITN framework.
Epistemic status: low confidence
In cause/intervention exploration, evaluation and prioriti...
Epistemic status: I only spent 10 minutes thinking about this before I started writing.
Idea: Funders may want to pre-commit to awarding whoever accomplishes a certain goal. (e.g. maybe some funder like Open Phil could commit to awarding a pool of money to people/orgs who reduce meat consumption to a certain level, with the pool split in proportion to contribution)
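The proportional split above can be sketched as a tiny calculation. This is a hypothetical illustration only: the org names, shares, and pool size are made-up assumptions, and in practice assessing each contributor's share of the accomplishment would be the hard part.

```python
# Hypothetical sketch of a proportional prize-pool payout rule.
# All names and numbers below are illustrative assumptions.

def split_prize_pool(pool: float, contributions: dict[str, float]) -> dict[str, float]:
    """Split a pre-committed prize pool in proportion to each
    contributor's assessed share of the goal's accomplishment."""
    total = sum(contributions.values())
    if total == 0:
        return {name: 0.0 for name in contributions}
    return {name: pool * share / total for name, share in contributions.items()}

# e.g. a $1M pool, with contribution shares assessed by the funder
payouts = split_prize_pool(1_000_000, {"OrgA": 0.5, "OrgB": 0.3, "OrgC": 0.2})
```

The interesting design questions are upstream of this arithmetic: who assesses the shares, and how disputes over attribution get resolved.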
Detailed considerations:
This can be seen as a version of retroactive funding, but it's special in that the funder makes a pre-commitment.
(I don't know a lot about retroactive funding/impact m...
Four podcasts on animal advocacy that I recommend:
Statement: This shortform is worth expanding into a top-level post.
Please upvote/downvote this comment to indicate agreement/disagreement with the above statement. Please don't hesitate to cast downvotes.
If you think it's valuable, it'll be really great if you are willing to write this post, as I likely won't have time to do that. Please reach out if you're interested - I'd be happy to help by providing feedback etc., though I'm no expert on this topic.
Summary: This is a slightly steelmanned version of an argument for creating a mass social movement as an effective intervention for animal advocacy (which I think is neglected by EA animal advocacy), based on a talk by people at Animal Think Tank. (Vote on my comment below to indicate if you think it's worth expanding into a top-level post)
link to the talk; alternative version with clearer audio, whose contents - I guess - are similar, but I'm not sure. (This shortform doesn't cover all content of the talk, and has likely misinterpreted something in the ta...
Hypothesis: in the face of cluelessness caused by flow-through effects, "paving the path for future progress" may be a robust benefit of altruistic actions.
Epistemic status: off-the-cuff thoughts, highly uncertain, a hypothesis instead of a conclusion
(In this short-form I will assume a consequentialist perspective.)
Take slavery abolition as an example. The abolition of slavery seems obviously positive at the object level. But when we take second-order effects into account, things become less clear (e.g. the meat-eater problem). However, I think the bad secon...
My personal approach:
I'm excited to see this post, thank you for it!
I also think much more exploration and/or concrete work needs to be done in this "EA+AI+animals" (perhaps also non-humans other than animals) direction, which (I vaguely speculate) may extend far beyond the vicinity of the Project CETI example that you gave. Up till now, this direction seems almost completely neglected.
I'll be giving some critique below, but nevertheless, thank you for the idea and the analysis!
I think the animal welfare section of this post would benefit from more rigor. (not sure about the other sections; haven't read them yet)
healthy: “oysters, mussels, scallops, and clams are good for you. They’re loaded with protein, healthy fats, and minerals like iron and manganese.”
Neither the linked article nor the quote sounds very credible or scientifically convincing to me.
...Eating bivalves causes less suffering than an equivalent amount of chickens, pigs
How does "practicing compassion and generosity with those around us" get operationalized in the EA community?
The most salient example that comes to mind may be going vegetarian/vegan (for ethical and/or climate reasons), which (a little less than) half of the community members claimed to have done, according to a survey.
Apart from that there's also everyday altruism, e.g. helping granny cross the street.
Nothing more comes to mind, though I only thought about this for twenty seconds, so I've probably missed something.
And finally: EAs are into policy and systemic change.
Yes, but not enough, I suspect.
Also there seems to be an imbalance between different EA cause areas in terms of “how much work there currently is on policy and systemic change”. Reading the post titles under the policy tag may help one notice this.
I agree that the second- and third-order effects of e.g. donating to super-effective animal advocacy charities are, more likely than not, larger than those of e.g. volunteering at local animal shelters. (though that may depend on the exact charity you're donating to?)
However, it's likely that some other action has even larger second- and third-order effects than donating to top charities - after all, most (though not all) of these charities are optimizing for first-order effects, rather than the second- and third-order ones.
Therefore, it's not obviously justifiable to simply ignore second- and third-order effects in our analysis.
Thank you for this critique!
Just want to highlight one thing: comments on this post are sometimes a bit harsh, but please don't take this to mean we're unwelcoming or defensive (although we may have a real tendency to argue too hard for our own positions). The style of discussion on the forum is sometimes just like this :)
Are people encouraged to share this opportunity with non-EA friends and in non-EA circles? If so, maybe consider making this clear in the post?
Glad to hear that you found this useful!
Do you know of any companies that are hiring HRI designers?
Sorry, I know nothing about the HRI space :(
Hi Martyna, maybe this post and its comments can interest you.
Also, something else that comes to mind: Andrew Critch thinks that working on Human-Robot Interaction may be very useful to AI Safety. Note that he isn't solely talking about robots, but also human-machine interaction in general (that's how I interpret it; I may well be wrong):
HRI research is concerned with designing and optimizing patterns of interaction between humans and machines—usually actual physical robots, but not always.
Not sure whether other AI Safety researchers would agree on t...
Thanks for the post, it's really exciting!
One very minor point:
In China, tofu is a symbol of poverty—a relic from when ordinary people couldn’t afford meat. As such, ordering tofu for guests is often seen as cheap and disrespectful.
I agree that this is somewhat true, but stating it like this seems a bit unfair. Ordering tofu for guests seems fine to me; it only gets problematic when you order way too much of it - in the same way that ordering nothing but rice for guests is extremely disrespectful. (Conflict of interest: I'm a tofu lover!)
Anyway, I really like your idea! Good luck :)
Great points! I agree that the longtermist community needs to better internalize the anti-speciesist belief that we claim to hold, and explicitly include non-humans in our considerations.
On your specific argument that longtermist work doesn't affect non-humans:
From a consequentialist perspective, I think what matters more is how these options affect your psychology and epistemics (in particular, whether doing this will increase or decrease your speciesist bias, and whether doing this makes you uncomfortable), instead of the amount of suffering they directly produce or reduce. After all, your major impact on the world is from your words and actions, not what you eat.
That being said, I think non-consequentialist views deserve some consideration too, if only due to moral uncertainty. I'm less certain about what ar...
Currently, EA resources are not gained gradually year by year; instead, they come in big leaps (think of Open Phil and FTX). Therefore it might not make sense to accumulate resources for several years and give them out all at once.
In fact, there is a call for megaprojects in EA, which echoes your points 1 and 3 (though these megaprojects are not expected to be funded by accumulating resources over the years, but by directly deploying existing resources). I'm not sure I understand your second point, though.
Thanks for the reply, your points make sense! There is certainly a question of degree to each of the concerns I raised in my comment, so arguments both for and against should be weighed. (To be clear, I wasn't raising my points to dismiss your approach; instead, they're things that I think need to be taken care of if we're to take such an approach.)
I have to say, though, that I'm not sure why the most influential time being in the future wouldn't imply investing for that time - I'd be interested to hear your reasoning.
Caveat: I haven't spent mu...
Interesting idea, thanks for doing this! I agree it's good to have more approachable cause prioritization models, but there are also associated risks to be careful about:
While, to my knowledge, an artificial neural network has not been used to distinguish between large numbers of species (the most I found was fourteen, by Ruff et al., 2021)
Here is one study distinguishing between 24 species using bioacoustic data. I stumbled upon this study completely by coincidence, and I don't know whether there are other studies larger in scale.
The study was carried out by the bioacoustics lab at MSR. It seems like some of their other projects might also be relevant to what we're discussing here (low confidence, just speculating).
Maybe it would be better to mention less about "do good with your money" and instead more about "do good with your time"? (to counter the misconception that EA is all about E2G)
Also, agreed that the message should be short and simple.
Closely related, and also important, is the question of "which world gets precluded". Different possibilities include:
After writing this down, I'm seeing a possible response to the argument above:
However:
One doubt on superrationality:
(I guess similar discussions must have happened elsewhere, but I can't find them. I am new to decision theory and superrationality, so my thinking may very well be wrong.)
First I present an inaccurate summary of what I want to say, to give a rough idea:
Then I shall e...
Thanks for the answers - they all make sense, and I upvoted all of them :)
So for a brief summary:
Building conscious AI (in the form of brain emulations or other architectures) could possibly help us create a large number of valuable artificial beings. Wildly speculative indulgence: being able to simulate humans and their descendants could be a great way to make the human species more robust to most existing existential risks (if it is easy to create artificial humans that can live in simulations, then humanity could become much more resilient).
That would pose a huge risk of creating astronomical suffering too. For example, if someone decides to do a conscious simulation of natural history on earth, that would be a nightmare for those who work on reducing s-risks.
veganhealth.org
Here's the link!