All of Babel's Comments + Replies

I'd also like to reiterate the arguments Larry Temkin gave against international aid, since the post doesn't cover them. I'm not sure if I'm convinced by these arguments, but I do find them reasonable and worth serious consideration.

  • Opportunity cost of local human resources: International aid agencies tend to hire competent local people in the countries they operate in (e.g. Sub-Saharan African countries), but these people could otherwise serve in important roles in the development of their own societies.
  • Corruption: Lots of international aid funds a
... (read more)

Comment from author: Note that I lean slightly towards the term "animal advocacy", so it's possible that my analysis contains a slight bias towards this term.

Answer by Babel, Jan 31, 2023

I like this idea of using structured discussion platforms to aggregate views on a topic. 

However, there is a cost for an individual to switch to new platforms, so perhaps the harder task is to get a large number of EAs to use this platform.

Harrison Durland, 1y:
That’s fair, although the user base in this case would mainly just be community builders rather than EAs more generally, so I would figure that if it is considered beneficial enough, the transition costs shouldn't be insurmountable.

A counter-argument: Here it is argued that the research supporting the 3.5% figure may not apply to the animal advocacy context.

Answer by Babel, Dec 07, 2022

I think building epistemic health infrastructure is currently the most effective way to improve EA epistemic health, and is the biggest gap in EA epistemics.

I elaborated on this in my shortform. If the suggestion above seems too vague, there are also examples in the shortform. (I plan to coordinate a discussion/brainstorming session on this topic among people with relevant interests; please do PM me if you're interested.)

(I was late to the party, but since Nathan encourages late comments, I'm posting my suggestion anyway.)

Suggestion: I think building epistemic health infrastructure is currently the most effective way to improve EA epistemic health, and is the biggest gap in EA epistemics.

I elaborated on this in my shortform. If the suggestion above seems too vague, there are also examples in the shortform. (I plan to coordinate a discussion/brainstorming session on this topic among people with relevant interests; please do PM me if you're interested.)

(I was late to the party, but since Nathan encourages late comments, I'm posting my suggestion anyway. I'm posting the comment also un... (read more)

Proposal: I think building epistemic health infrastructure is currently the most effective way to improve EA epistemic health, and is the biggest gap in EA epistemics.

  • My definition of epistemic health infrastructure: a social, digital, or organizational structure that provides systematic safeguards against one or more epistemic health issues, by regulating some aspect of the intellectual processes within the community.
    • They can have different forms (social, digital, organizational, and more) or different focuses (individual epistemics, group epistemology, a
... (read more)

Apologies for posting four shortforms in a row. I accumulated quite a few ideas in recent days, and I poured them all out.

Summary: When exploring/prioritizing causes and interventions, EA might be neglecting alternative future scenarios, especially along dimensions orthogonal to popular EA topics. We may need to consider causes/interventions that specifically target alternative futures, as well as add a "robustness across future worlds" dimension to the ITN framework.

Epistemic status: low confidence

In cause/intervention exploration, evaluation and prioriti... (read more)
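To make the "robustness across future worlds" idea concrete, here is a toy back-of-the-envelope sketch; the scenarios, probabilities, and scores below are entirely made-up assumptions of mine, not claims from this shortform:

```python
# Hypothetical importance/tractability/neglectedness scores for one
# intervention under different future scenarios, with assumed probabilities.
scenarios = {
    "business_as_usual": {"p": 0.6, "I": 8, "T": 5, "N": 4},
    "rapid_ai_progress": {"p": 0.3, "I": 9, "T": 3, "N": 6},
    "global_stagnation": {"p": 0.1, "I": 4, "T": 6, "N": 7},
}

def itn(s):
    # Standard ITN-style product for a single scenario.
    return s["I"] * s["T"] * s["N"]

# One way to fold in alternative futures: a probability-weighted ITN score...
expected_score = sum(s["p"] * itn(s) for s in scenarios.values())

# ...plus a crude robustness check: how does the intervention fare in the
# least favourable future we considered?
worst_case_score = min(itn(s) for s in scenarios.values())

print(expected_score, worst_case_score)
```

Whether to use the expectation, the worst case, or some mix of the two is exactly the kind of judgement call this shortform is pointing at.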

Epistemic status: I only spent 10 minutes thinking about this before I started writing.

Idea: Funders may want to pre-commit to rewarding whoever accomplishes a certain goal. (e.g. a funder like Open Phil could commit to awarding a pool of money to people/orgs who reduce meat consumption to a certain level, with the pool split in proportion to contribution)
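To make the proportional split concrete, here is a minimal sketch; the function and numbers are my own illustration, not a mechanism any funder has committed to:

```python
def split_prize_pool(pool, contributions):
    """Split a pre-committed prize pool in proportion to each
    contributor's measured share of the goal (e.g. estimated
    reduction in meat consumption attributable to them)."""
    total = sum(contributions.values())
    if total == 0:
        return {name: 0.0 for name in contributions}
    return {name: pool * share / total for name, share in contributions.items()}

# Illustrative numbers only: a $1M pool where org_a is judged to have
# contributed three times as much as org_b.
# split_prize_pool(1_000_000, {"org_a": 3.0, "org_b": 1.0})
# -> {"org_a": 750000.0, "org_b": 250000.0}
```

The hard part in practice would of course be measuring each contributor's share of the goal, not computing the split.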

Detailed considerations:

This can be seen as a version of retroactive funding, but it's special in that the funder makes a pre-commitment.

(I don't know a lot about retroactive funding/impact m... (read more)

Four podcasts on animal advocacy that I recommend:

  • Freedom of Species (part of 3CR radio station)
    Covers a wide range of topics relevant to animal advocacy, from protest campaigns to wild animal suffering to VR. More of its episodes are on the "protest campaigns" end, which is less popular in EA, but I think it's good to have an alternative perspective, if only for some diversification.
  • Knowing Animals (hosted by Josh Milburn)
    An academic-leaning podcast that focuses on Critical Animal Studies, which IMO is like the academic equivalent of animal advocacy. Most
... (read more)

Statement: This shortform is worth expanding into a top-level post.

Please cast an upvote/downvote on this comment to indicate agreement/disagreement with the above statement. Please don't hesitate to cast downvotes.

If you think it's valuable, it would be great if you're willing to write the post, as I likely won't have time to do it myself. Please reach out if you're interested - I'd be happy to help by providing feedback etc., though I'm no expert on this topic.

Summary: This is a slightly steelmanned version of an argument for creating a mass social movement as an effective intervention for animal advocacy (which I think is neglected by EA animal advocacy), based on a talk by people at Animal Think Tank. (Vote on my comment below to indicate if you think it's worth expanding into a top-level post)

Link to the talk; there's also an alternative version with clearer audio, whose contents are - I guess - similar, but I'm not sure. (This shortform doesn't cover all the content of the talk, and has likely misinterpreted something in the ta... (read more)

Babel, 1y:
A counter-argument: Here it is argued that the research supporting the 3.5% figure may not apply to the animal advocacy context.
Babel, 1y:
Statement: This shortform is worth expanding into a top-level post. Please cast upvote/downvote on this comment to indicate agreement/disagreement to the above statement. Please don't hesitate to cast downvotes. If you think it's valuable, it'll be really great if you are willing to write this post, as I likely won't have time to do that. Please reach out if you're interested - I'd be happy to help by providing feedback etc., though I'm no expert on this topic.

Hypothesis: in the face of cluelessness caused by flow-through effects, "paving the path for future progress" may be a robust benefit of altruistic actions.

Epistemic status: off-the-cuff thoughts, highly uncertain, a hypothesis instead of a conclusion

(In this short-form I will assume a consequentialist perspective.)

Take slavery abolition as an example. The abolition of slavery seems obviously positive at the object level. But when we take into account second-order effects, things become less clear (e.g. the meat-eater problem). However, I think the bad secon... (read more)

My personal approach:

  • I no longer think of myself as "a good person" or "a bad person", which may have something to do with my leaning towards moral anti-realism. I recognize that I did bad things in the past and even now, but refuse to label myself "morally bad" because of them; similarly, I refuse to label myself "morally good" because of my good deeds. 
    • Despite this, sometimes I still feel like I'm a bad person. When this happens, I tell myself: "I may have been a bad person, so what? Nobody should stop me from doing good, even if I'm the worst perso
... (read more)

I don't know how to use Airtable, but a quick Google search led me to this. The last reply (by kuovonne) in the linked thread seems useful.

Ben Williamson, 2y:
Cheers for this! Think I’d skimmed the top of that thread and missed the last reply you highlighted. Looks a little clunky but worth adding.

I'm excited to see this post, thank you for it! 

I also think much more exploration and/or concrete work needs to be done in this "EA + AI + animals" direction (perhaps also covering non-humans other than animals), which (I vaguely speculate) may extend far beyond the Project CETI example that you gave. So far, this direction seems almost completely neglected. 

I'll be giving some critique below, but nevertheless, thank you for the idea and the analysis!

I think the animal welfare section of this post would benefit from more rigor. (I'm not sure about the other sections, as I haven't read them yet.)

healthy: “oysters, mussels, scallops, and clams are good for you. They’re loaded with protein, healthy fats, and minerals like iron and manganese.”

Neither the linked article nor the quote sounds very credible or scientifically convincing to me. 

Eating bivalves causes less suffering than an equivalent amount of chickens, pigs

... (read more)

How does "practicing compassion and generosity with those around us" get operationalized in the EA community?

The most salient example that comes to mind may be going vegetarian/vegan (for ethical and/or climate reasons), which (a little less than) half of the community members claimed to have done, according to a survey.

Apart from that, there's also everyday altruism, e.g. helping a granny cross the street.

Nothing more comes to mind, though I only thought about this for twenty seconds, so I've probably missed something.

And finally: EAs are into policy and systemic change.

Yes, but not enough, I suspect. 

Also, there seems to be an imbalance between different EA cause areas in terms of how much work there currently is on policy and systemic change. Reading the post titles under the policy tag may help one notice this.

Locke, 2y:
Yeah, I share your suspicion. Reading through the institutional decision making topic, most if not all of the writing seems to be basically applying LessWrong-style rationality principles to decision making. There isn't any real structural analysis. For example, in SoCal where I live, there are precisely a zillion local municipalities, a bunch of Balkanized fiefdoms that often work at cross purposes. The challenge isn't a lack of quality information and decision heuristics. It's the reality that there's a panoply of veto points and a Rube Goldberg-esque system that makes it impossibly difficult to get things done. Vitalik had a nice piece on the underlying issues with Vetocracy that's worth a read.

I agree that the second- and third-order effects of e.g. donating to super-effective animal advocacy charities are, more likely than not, larger than those of e.g. volunteering at local animal shelters. (though that may depend on the exact charity you're donating to?)

However, it's likely that some other action has even larger second- and third-order effects than donating to top charities - after all, most (though not all) of these charities are optimizing for first-order effects, rather than the second- and third-order ones. 

Therefore, it's not obviously justifiable to simply ignore second- and third-order effects in our analysis.

Thank you for this critique! 

Just want to highlight one thing: comments on this post are sometimes a bit harsh, but please don't take this to mean we're unwelcoming or defensive (although there may be a real tendency to argue too hard in our own defense). The style of discussion on the forum is sometimes just like this :)

Locke, 2y:
Thanks! All good. 

Are people encouraged to share this opportunity with non-EA friends and in non-EA circles? If so, maybe consider making this clear in the post?

Glad to hear that you found this useful!

 Do you know of any companies that are hiring HRI designers?

Sorry, I know nothing about the HRI space :(

Hi Martyna, maybe this post and its comments can interest you. 

Also, something else that comes to mind: Andrew Critch thinks that working on Human-Robot Interaction may be very useful to AI Safety. Note that he isn't solely talking about robots, but also about human-machine interaction in general (that's how I interpret it; I may well be wrong):

HRI research is concerned with designing and optimizing patterns of interaction between humans and machines—usually actual physical robots, but not always.

Not sure whether other AI Safety researchers would agree on t... (read more)

martyna, 2y:
Thank you so much! I'll reach out to Lotte next week; it seems we will have a lot to discuss! Human-robot interaction is something I never considered, but it sounds very interesting. HMI is basically the foundation of my work, but it is applied very widely, from physical design (elevators, printers, cars) to software design, and I'm in the second sector at the moment. But boy, do I dream of getting to the first one. Do you know of any companies that are hiring HRI designers?

Thanks for the post, it's really exciting!

One very minor point:

In China, tofu is a symbol of poverty—a relic from when ordinary people couldn’t afford meat. As such, ordering tofu for guests is often seen as cheap and disrespectful.

I agree that this is somewhat true, but stating it like this seems a bit unfair. Ordering tofu for guests seems fine to me; it only gets problematic when you order way too much of it - in the same way that ordering nothing but rice for guests is extremely disrespectful. (Conflict of interest: I'm a tofu lover!)

Anyway, I really like your idea! Good luck :)

George Stiffman, 2y:
Fair enough! I definitely stated that point too strongly; it's more that if you just order tofu for guests, without much meat/seafood, it could come across as rude. Thanks for the pointer! And glad to meet another tofu lover :)

Thanks for the suggestion, but I'm currently in college, so it's impossible for me to move :)

Great points! I agree that the longtermist community needs to better internalize the anti-speciesist belief that we claim to hold, and explicitly include non-humans in our considerations.

On your specific argument that longtermist work doesn't affect non-humans:

  • X-risks aren't the sole focus of longtermism. IMO work in the S-risk space takes non-humans (including digital minds) much more seriously, to the extent that human welfare is mentioned much less often than non-human welfare.
  • I think X-risk work does affect non-humans. Linch's comment mentions one possi
... (read more)

From a consequentialist perspective, I think what matters more is how these options affect your psychology and epistemics (in particular, whether doing this will increase or decrease your speciesist bias, and whether doing this makes you uncomfortable), instead of the amount of suffering they directly produce or reduce. After all, your major impact on the world is from your words and actions, not what you eat.

That being said, I think non-consequentialist views deserve some consideration too, if only due to moral uncertainty. I'm less certain about what ar... (read more)

Lucas Lewit-Mendes, 2y:
Thanks, these are really interesting and useful thoughts!
utilitarian01, 2y:
This might be irrelevant, but have you considered moving to the US for the higher salary?

Currently, EA resources are not gained gradually year by year; instead, they're gained in big leaps (think of Open Phil and FTX). Therefore, it might not make sense to accumulate resources for several years and give them out all at once. 

In fact, there is a call for megaprojects in EA, which echoes your points 1 and 3 (though these megaprojects are not expected to be funded by accumulating resources over the years, but by directly deploying existing resources). I'm not sure I understand your second point, though. 

Thanks for the reply, your points make sense! There is certainly a question of "degree" to each of the concerns I wrote about in the comment, so arguments both for and against each should be taken into account. (To be clear, I wasn't raising my points to dismiss your approach; instead, they're things that I think need to be taken care of if we're to take such an approach.)

I have to say I'm not sure why the most influential time being in the future wouldn't imply investing for that time though - I'd be interested to hear your reasoning.

Caveat: I haven't spent mu... (read more)

Interesting idea, thanks for doing this! I agree it's good to have more approachable cause prioritization models, but there are also associated risks to be careful about:

  • A widely used model that is not frequently updated could do a lot of damage by spreading outdated views. Unlike large collections of articles, a simple model in a graphic form can be spread really fast, and once it's spread out on the Internet it can't be taken back.
  • A model made by a few individuals or some central organisation may run the risk of deviating from the views of the majority of EAs; in
... (read more)
JackM, 2y:
Thanks for this, you raise a number of useful points.

I guess this risk could be mitigated by ensuring the model is frequently updated and includes disclaimers. I think this risk is faced by many EA orgs, for example 80,000 Hours, but that doesn't stop them from publishing advice which they regularly update.

I like that idea, and I certainly don't think my model is anywhere near final (it was just my preliminary attempt with no outside help!). There could be a process of engagement with prominent EAs to finalise a model.

Also fair. However, it seems that certain EA orgs such as 80,000 Hours do adopt certain views, naturally excluding other views (for which they have been criticised). Maybe it would make more sense for such a model to be owned by an org like 80,000 Hours, which is open about their longtermist focus for example, rather than CEA, which is supposed to represent EA as a whole.

As I said to alexjrl, my idea for a guided flowchart is that nuances like this would be explained in the accompanying guidance, but not necessarily alluded to in the flowchart itself, which is supposed to stay fairly high-level and simple. I don't think a flowchart can be 100% prescriptive and final; there are too many nuances to consider. I just want it to raise key considerations for EAs to consider. For example, I think it would be fine for an EA to end up at a certain point in the flowchart and then think to themselves that they should actually choose a different cause area, because there is some nuance that the flowchart didn't consider that means they ended up in the wrong place. That's fine - but it would still be good, in my opinion, to have a systematic process that ensures EAs consider some really key considerations. Feedback like this is useful and could lead to updating the flowchart itself.

I have to say I'm not sure why the most influential time being in the future wouldn't imply investing for that time though - I'd be interested to hear your reasoning.

Fair point. A

While, to my knowledge, an artificial neural network has not been used to distinguish between large numbers of species (the most I found was fourteen, by Ruff et al., 2021)

Here is one study distinguishing between 24 species using bioacoustic data. I stumbled upon this study totally by coincidence, and I don't know whether there are other, larger-scale studies.

The study was carried out by the bioacoustics lab at MSR. It seems like some of their other projects might also be relevant to what we're discussing here (low confidence, just speculating).
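For readers curious what this kind of bioacoustic classification can look like in practice, here is a minimal, purely illustrative sketch; it is not the method used in either study, and the data layout and function names are my own assumptions:

```python
# Purely illustrative: classify species from audio clips using MFCC features
# and a small neural network. `clips` is an assumed list of
# (wav_path, species_label) pairs; all names here are hypothetical.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def mfcc_features(wav_path, sr=22050, n_mfcc=20):
    """Load a clip and summarize it as the mean and std of its MFCCs over time."""
    audio, _ = librosa.load(wav_path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_species_classifier(clips):
    X = np.stack([mfcc_features(path) for path, _ in clips])
    y = np.array([label for _, label in clips])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)  # held-out accuracy across the species labels
```

Real systems typically use spectrogram-based deep networks rather than a small feature-based classifier like this, but the overall pipeline (featurize clips, train, evaluate per species) is similar.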

Maybe it would be better to mention "do good with your money" less, and "do good with your time" more? (To counter the misconception that EA is all about E2G.)

Also, agreed that the message should be short and simple.

Closely related, and also important, is the question of "which world gets precluded". Different possibilities include:

  1. By reducing extinction risk from a (hypothetical) scenario in which Earth explodes and falls into pieces, we preclude a world in which there's no life (and therefore no powerful agent) on what previously was Earth.
  2. By reducing extinction risk from pandemics, we preclude a world in which there's no human on Earth, but possibly other intelligent species that have evolved to fill the niche previously occupied by humans.
  3. By reducing extinction ri
... (read more)

After writing this down, I'm seeing a possible response to the argument above:

  • If we observe that Alice and Bob had, in the past, made similar decisions under equivalent circumstances, then we can infer that:
    • There's an above-baseline likelihood that Alice and Bob have similar source codes, and
    • There's an above-baseline likelihood that Alice and Bob have correlated sources of randomness.
    • (where the "baseline" refers to our prior)

 However:

  • It still rests on the non-trivial metaphysical claim that different "free wills" (i.e. different sources of randomness)
... (read more)

One doubt on superrationality:

(I guess similar discussions must have happened elsewhere, but I can't find them. I am new to decision theory and superrationality, so my thinking may very well be wrong.)

First, I present an inaccurate summary of what I want to say, to give a rough idea:

  • The claim that "if I choose to do X, then my identical counterpart will also do X" seems to imply (though not necessarily; see the example for details) that there is no free will. But if we indeed assume determinism, then no decision theory is practically meaningful.

Then I shall e... (read more)

Babel, 2y:
After writing this down, I'm seeing a possible response to the argument above:

  • If we observe that Alice and Bob had, in the past, made similar decisions under equivalent circumstances, then we can infer that:
    • There's an above-baseline likelihood that Alice and Bob have similar source codes, and
    • There's an above-baseline likelihood that Alice and Bob have correlated sources of randomness.
    • (where the "baseline" refers to our prior)

However:

  • It still rests on the non-trivial metaphysical claim that different "free wills" (i.e. different sources of randomness) could be correlated.
  • The extent to which we update our prior (on the likelihood of correlated inputs) might be small, especially if we consider it unlikely that inputs could be correlated. This may lead to a much smaller weight of superrational considerations in our decision-making.
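As a toy numeric version of the update described above (the numbers are entirely made up by me, purely to illustrate why the update can be small):

```python
# Made-up numbers: prior probability that Alice's and Bob's "sources of
# randomness" are correlated, and how likely their past decisions are to
# match under each hypothesis.
prior_correlated = 0.05
p_match_if_correlated = 0.9
p_match_if_uncorrelated = 0.5  # the "baseline"

# Bayes' rule, after observing that their past decisions did match:
posterior_correlated = (
    prior_correlated * p_match_if_correlated
    / (prior_correlated * p_match_if_correlated
       + (1 - prior_correlated) * p_match_if_uncorrelated)
)
print(round(posterior_correlated, 3))  # ~0.087: still small, so superrational
# considerations get only modestly more weight than before.
```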

Thanks for the answers - they all make sense, and I upvoted all of them :)

So for a brief summary:

  • The action that I described in the question is far from optimal under the EV framework (CarlShulman & Brian_Tomasik), and
  • Even if it is optimal, a utilitarian may still have ethical reasons to reject it, if he or she:
    • endorses some kind of non-traditional utilitarianism, most notably SFE (TimothyChan); or
    • considers the uncertainty involved to be moral (instead of factual) uncertainty (Brian_Tomasik).

Building conscious AI (in the form of brain emulations or other architectures) could possibly help us create a large number of valuable artificial beings. Wildly speculative indulgence: being able to simulate humans and their descendents could be a great way to make the human species more robust to most existing existential risks (if it is easy to create artificial humans that can live in simulations, then humanity could become much more resilient)

That would pose a huge risk of creating astronomical suffering too. For example, if someone decides to run a conscious simulation of natural history on Earth, that would be a nightmare for those who work on reducing s-risks.

Thanks for the detailed answer!

Good idea, I'll consider that. Thanks!