Under what circumstances is it potentially cost-effective to move money within low-impact causes?
This is preliminary and most likely somehow wrong. I'd love for someone to have a look at my math and tell me if (how?) I'm on the absolute wrong track here.
Start from the assumption that there is some amount of charitable funding that is resolutely non-cause-neutral. It is dedicated to some cause area Y and cannot be budged. I'll assume for these purposes that DALYs saved per dollar is distributed log-normally within Cause Y:
I want t... (read more)
I guess a more useful way to think about this for prospective funders is to move things about again. Given that you can exert c/x leverage over funds within Cause Y, then you're justified in spending c to do so provided you can find some Cause Y such that the distribution of DALYs per dollar meets the condition...
...which makes for a potentially nice rule of thumb. When assessing some Cause Y, you need only ("only") identify a plausibly best or close-to-best opportunity, as well as the median one, and work from there.
Obviously this condition... (read more)
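The exact condition is elided above, but the rule of thumb lends itself to a quick back-of-the-envelope calculation. The sketch below is purely illustrative and not the post's actual math: it assumes (as the post does) that DALYs per dollar within Cause Y is log-normal, treats the "plausibly best" opportunity as roughly the 95th percentile to back out the distribution's spread, and bounds the gain from moving money from the median opportunity to the near-best one. All numbers are made up.

```python
import math

# Illustrative sketch only; the post's actual condition is elided above.
# Assumption: DALYs/$ within Cause Y is log-normal with median m. For a
# log-normal, the p-th quantile is m * exp(sigma * z_p), so observing a
# "plausibly best" opportunity b (taken here as the ~95th percentile)
# pins down the spread: sigma = ln(b / m) / z_95.

Z_95 = 1.645  # standard-normal 95th percentile

def implied_sigma(median_dalys_per_dollar, best_dalys_per_dollar):
    """Back out the log-normal spread from the median and a ~95th-percentile opportunity."""
    return math.log(best_dalys_per_dollar / median_dalys_per_dollar) / Z_95

def leverage_gain(median_dalys_per_dollar, best_dalys_per_dollar, moved_dollars):
    """Rough upper bound on DALYs gained by moving `moved_dollars` from the
    median opportunity within Cause Y to the near-best one."""
    return moved_dollars * (best_dalys_per_dollar - median_dalys_per_dollar)

# Hypothetical numbers: median charity saves 0.01 DALYs/$, near-best 0.05 DALYs/$.
sigma = implied_sigma(0.01, 0.05)
gain = leverage_gain(0.01, 0.05, 100_000)  # moving $100k within the cause
print(sigma, gain)
```

On these toy numbers, spending c to redirect the $100k is worthwhile whenever c's counterfactual use would have produced fewer than the gained DALYs, which is the shape of comparison the rule of thumb asks for.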
Has anyone else noticed that the EA Forum moderation is quite intense of late?
Back in 2014, I'd proposed quite limited criteria for moderation: "spam, abuse, guilt-trips, socially or ecologically destructive advocacy". I'd said then: "Largely, I expect to be able to stay out of users' way!" But my impression is that at some point after 2017 the moderators took to advising and sanctioning users based on their tone, for example, here (Halstead being "warned" for unsubstantiated true comments), "rudeness" and "Other behav... (read more)
I don't have a view of the level of moderation in general, but think that warning Halstead was incorrect. I suggest that the warning be retracted.
It also seems out of step with what the forum users think - at the time of writing, the comment in question has 143 Karma (56 votes).
Sometimes when I see people writing about opposition to the death penalty I get the urge to mention Effective Altruism to them, and suggest it is borderline insane to think opposition to capital punishment in the US is where a humanitarian should focus their energies. (Other political causes don't cause me to react in the same way because people's desire to campaign for things like lower taxes, feminism or more school spending seems tied up with self-interest to a much larger degree, so the question of whether it is the most pressing issue seems irrelevant.) I always refrain from mentioning EA because I think it would do more harm than good, so I will just vent my irrational frustration here.
I endorse using Shortform posts to vent! I think you're right that mentioning EA would be likely to do more harm than good in those cases, but your feelings are reasonable and I'm glad this can be a place to express them.
Some object-level thoughts not meant to interfere with your venting:
I don't feel the same way about people who oppose the death penalty, I think largely because I have a strong natural sense that justice is very important and injustice is very especially extra-bad. This doesn't influence my giving, but I definitely feel worse about the sto... (read more)
I recently spent some time trying to work out what I think about AI timelines. I definitely don’t have any particular insight here; I just thought it was a useful exercise for me to go through for various reasons (and I did find it very useful!).
As it turned out, I "estimated" a ~5% chance of TAI by 2030 and a ~20% chance of TAI by 2050 (the probabilities for AGI are slightly higher). As you’d expect me to say, these numbers are highly non-robust.
When I showed a couple of people the plots below, they commented that they were surprised that my AGI probabilitie... (read more)
Are there any EAA researchers carefully tracking the potential of huge cost-effectiveness gains in the ag industry from genetic engineering advances in factory farmed animals? Or (less plausibly) advances from better knowledge/practice/lore from classical artificial selection? As someone pretty far away from the field, a priori the massive gains made in biology/genetics in the last few decades seem like something we plausibly have not priced in. So it'd be sad if EAAs get blindsided by animal meat becoming a lot cheaper in the next few decades (if this is indeed viable, which it may not be).
Besides just extrapolating trends in cost of production/prices, I think the main things to track would be feed conversion ratios and the possibility of feeding animals more waste products or otherwise cheaper inputs, since feed is often the main cost of production. Some FCRs are already < 2 and close to 1, i.e. it takes less than 2kg of input to get 1kg of animal product (this could be measured in weight, calories, protein weight, etc.), e.g. for chickens, some fishes, and some insects.
I keep hearing that animal protein comes from the protein in wh... (read more)
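Since feed is often the dominant cost, the FCR point above lends itself to a simple back-of-the-envelope calculation. This is a sketch with made-up illustrative prices, not sourced industry figures:

```python
# Back-of-the-envelope sketch (illustrative numbers, not sourced data):
# if feed dominates production costs, the feed cost per kg of animal
# product is roughly FCR x feed price per kg.

def feed_cost_per_kg_product(fcr, feed_price_per_kg):
    """Feed cost to produce 1 kg of animal product, given the feed
    conversion ratio (kg of feed per kg of product)."""
    return fcr * feed_price_per_kg

# E.g. a hypothetical broiler with FCR ~1.7 and feed at $0.30/kg:
print(feed_cost_per_kg_product(1.7, 0.30))  # ~ $0.51 per kg of product
```

The same arithmetic shows why genetic or input-side advances matter: pushing FCR from 2 toward 1, or halving the feed price (e.g. via waste-product inputs), each roughly halves the feed component of production cost.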
While talking to my manager (Peter Hurford), I realized that by default, when "life" gets in the way (concretely, last week a fair number of hours were taken up by management training seminars I wanted to attend before I get my first interns; this week I'm losing ~3 hours to the covid vaccination appointment and in expectation will lose ~5 more from side effects), research (i.e. the most important thing on my agenda, and the thing I'm explicitly being paid to do) is the first to go. This seems like a bad state of affairs. I suspect that this is more prominent in ... (read more)
I liked this, thanks. I hear that this is similar to a common problem for many entrepreneurs: they spend much of their time on the urgent/small tasks, and not the really important ones. One solution recommended by Matt Mochary is to dedicate 2 hours per day of your most productive time to work on the most important problems.
I've occasionally followed this, and mean to more.
EA twitter bots
A set of EA jobs Twitter bots, each retweeting a specific set of hashtags, e.g. #AISafety #EAJob, #AnimalSuffering #EAJob, etc. Please don't get hung up on these; we'd actually need to brainstorm the right hashtags.
You follow the bots and hear about the jobs.
Not super-effective, but given Sanjay's post on ESG, maybe there are people interested:

Ethics and Trust in Finance 8th Global Prize

The Prize is a project of the Observatoire de la Finance (Geneva), a non-profit foundation, working since 1996 on the relationship between the ethos of financial activities and its impact on society. The Observatoire aims to raise awareness of the need to pursue the common good through reconciling the good of persons, organizations, and community. [...] The 8th edition (2020-2021) of the Prize was officially launched o... (read more)
Sometimes the concern is raised that caring about wild animal welfare is seen as unintuitive and will bring conflict with the environmental movement. I do not think large-scale efforts to help wild animals should be an EA cause at the moment, but in the long term I don't think environmentalist concerns will be a limiting factor. Rather, I think environmentalist concerns are partially taken as seriously as they are because people see it as helping wild animals as well. (In some perhaps not fully thought out way.) I do not think it is a coincidence that the ext... (read more)
Thank you for those links.
On the 80,000 Hours website, they have a profile on factory farming, where they estimate that ending factory farming would increase the expected value of the future of humanity by between 0.01% and 0.1%. I realize one cannot hope for precision in these things, but I am still curious if anyone knows anything more about the reasoning process that went into making that estimate.
I think it's basically that moral circle expansion is an approach to reduce s-risks (mostly related to artificial sentience), and ending factory farming advances moral circle expansion. Those links have posts on the topic, but the most specific tag is probably Non-humans and the long-term future. From a recent paper on the topic:
The fact that there are over 100 billion animals on factory farms is partly why we consider them one of the most important frontiers of today’s moral circle (K. Anthis & J. R. Anthis, 2019).
I think Sentience Institute and the C... (read more)
Small request for help to measure the impact of an intervention we have planned:
In short, Animal Rebellion UK is planning a large protest/civil resistance action and I'm keen to try to quantitatively and semi-rigorously measure the impact of this, budget permitting. We’re planning on doing an opinion poll (or another bit of market research) before and after our action, with roughly a cohort of 1000 people, to see if what we did actually managed to change public opinion around animal farming. I’ve never done something like this before but I’m s... (read more)
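For the analysis step, one common approach (not necessarily what your market-research provider would use; the numbers below are made up purely for illustration) is a two-proportion z-test comparing the share agreeing with a statement before vs. after the action, treating the two polls as independent samples:

```python
import math

# Sketch of one way to test whether opinion shifted between the before-
# and after-action polls, assumed to be independent samples of ~1000 each.
# All figures below are hypothetical.

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for the difference between two independent proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# E.g. 320/1000 agree before the action and 370/1000 after:
z = two_proportion_z(320, 1000, 370, 1000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level
```

A practical caveat: with two samples of ~1000, shifts of only a couple of percentage points will be hard to distinguish from noise, so it's worth checking in advance that the effect size you'd realistically expect is detectable at this sample size.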
If you haven't already, make sure you check out Faunalytics' helpful resources on survey design. https://faunalytics.org/research-advice/
And yes, I'd be happy to have a quick call, assuming this is still relevant. You can pick a time here https://calendly.com/jamie-a-harris94/60min
I don't like it when animal advocates are too confident about their approach and are critical of other advocates. We are losing badly; meat consumption is still skyrocketing! Now is the time to be humble and open-minded. Meta-advice: don't be too critical of the critical either!
I recently did a debate with a critic of effective altruism. It has gotten a reasonable reception so far, so for those who think this is a useful EA activity, feel free to give it a share.
I wrote this last Summer as a private “blog post” just for me. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. These rambling thoughts come from my very naive point of view (as it was in the Summer of 2020; not to suggest my present day point of view is much less naive). In particular if you’ve already read lots of moral philosophy you probably won’t learn anything from reading this.
Generally, reading various moral philosophy ... (read more)
A thought on how we describe existential risks from misaligned AI:
Sometimes discussions focus on a fairly specific version of AI risk, which involves humanity being quickly wiped out. Increasingly, though, the emphasis seems to be on the more abstract idea of “humanity losing control of its future.” I think it might be worthwhile to unpack this latter idea a bit more.
There’s already a fairly strong sense in which humanity has never controlled its own future. For example, looking back ten thousand years, no one decided that sedentary agriculture would i... (read more)
That's a good example.
I do agree that quasi-random variation in culture can be really important. And I agree that this variation is sometimes pretty sticky (e.g. Europe being predominantly Christian and the Middle East being predominantly Muslim for more than a thousand years). I wouldn't say that this kind of variation is a "rounding error."
Over sufficiently long timespans, though, I think that technological/economic change has been more significant.
As an attempt to operationalize this claim: The average human society in 1000AD was obviously very differen... (read more)
Something that came up in a discussion with a coworker recently is that often internet writers want some (thoughtful) comments, but not too many, since too many comments can be overwhelming. Or at the very least, the marginal value of additional comments is usually lower for authors when there are more comments. However, the incentives for commenters are very different: by default, people want to comment on the most exciting/cool/wrong thing, so internet posts can easily end up attracting either many comments or none. (I think) very little self-pol... (read more)
One other semi-relevant thing from my post Notes on EA-related research, writing, testing fit, learning, and the Forum:
Sometimes people worry that a post idea might be missing some obvious, core insight, or just replicating some other writing you haven't come across. I think this is mainly a problem only inasmuch as it could've been more efficient for you to learn things than slowly craft a post. So if you can write (a rough version of) the post quickly, you could just do that. Or you could ask around or make a quick Question post to outline the basic idea a
I'm considering donating to the Centre for Women's Justice. With a budget of about £300k last year, they have undertaken strategic litigation against the government, Crown prosecutors, etc. for mismanagement of sexual assault cases. The cases seem well-chosen to raise the issue on the political agenda. I think more rapists being successfully prosecuted would have a very positive impact, so I'm excited to see this work. I'm planning to email them soon.
In the below I give a very rough summary of Will MacAskill’s article Are We Living At The Hinge Of History? and give some very preliminary thoughts on the article and some of the questions it raises.
I definitely don’t think that what I’m writing here is particularly original or insightful: I’ve thought about this for no more than a few days, any points I make are probably repeating points other people have already made somewhere, and/or are misguided, etc. This seems like an incredibly deep t... (read more)
I wrote this last Summer as a private “blog post” just for me. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. These thoughts come from my very naive point of view (as it was in the Summer of 2020; not to suggest my present day point of view is much less naive). In particular if you’ve already read lots of moral philosophy you probably won’t learn anything from reading this. Also, I hope my summaries of other people’s a... (read more)
A thought on epistemic deference:
The longer you hold a view, and the more publicly you hold a view, the more calcified it typically becomes. Changing your mind becomes more aversive and potentially costly, you have more tools at your disposal to mount a lawyerly defense, and you find it harder to adopt frameworks/perspectives other than your favored one (the grooves become firmly imprinted into your brain). At least, this is the way it seems and personally feels to me.
For this reason, the observation “someone I respect publicly argued for X many years a... (read more)
That consideration -- and the more basic consideration that more junior people often just know less -- definitely pushes in the opposite direction. If you wanted to try some version of seniority-weighted epistemic deference, my guess is that the most reliable cohort would have studied a given topic for at least a few years but less than a couple decades.