Shortform Content [Beta]

Matt_Lerner's Shortform

Under what circumstances is it potentially cost-effective to move money within low-impact causes?

This is preliminary and most likely somehow wrong.  I'd love for someone to have a look at my math and tell me if (how?) I'm on the absolute wrong track here.

Start from the assumption that there is some amount of charitable funding that is resolutely non-cause-neutral. It is dedicated to some cause area Y and cannot be budged. I'll assume for these purposes that DALYs saved per dollar is distributed log-normally within Cause Y:

I want t... (read more)

I guess a more useful way to think about this for prospective funders is to rearrange things again. If you can exert c/x leverage over funds within Cause Y, then you're justified in spending c to do so, provided you can find some Cause Y such that the distribution of DALYs per dollar meets the condition...

...which makes for a potentially nice rule of thumb. When assessing some Cause Y, you need only ("only") identify a plausibly best or close-to-best opportunity, as well as the median one, and work from there.

Obviously this condition... (read more)
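
To make the rule of thumb a bit more concrete, here is a minimal simulation sketch (mine, not from the original post; the log-space parameters and the choice of the 95th percentile as "close-to-best" are purely hypothetical) that assumes DALYs per dollar within Cause Y are log-normally distributed and compares a close-to-best opportunity against the median one:

```python
import numpy as np

# Hypothetical illustration of the rule of thumb above: assume DALYs per dollar
# within Cause Y are log-normally distributed, then compare a "plausibly
# close-to-best" opportunity (here the 95th percentile) to the median one.
rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.5  # made-up log-space parameters for Cause Y
dalys_per_dollar = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

median_opp = np.median(dalys_per_dollar)              # the median opportunity
near_best_opp = np.percentile(dalys_per_dollar, 95)   # a close-to-best opportunity

# x is the multiple by which the near-best opportunity beats the median one.
# If you can move a dollar from a median-ish grant to a near-best grant at a
# cost of c, this x is the leverage figure in the c/x comparison above.
x = near_best_opp / median_opp
print(f"median: {median_opp:.2f}, near-best: {near_best_opp:.2f}, ratio x: {x:.1f}")
```

With these made-up parameters the near-best opportunity comes out roughly an order of magnitude better than the median one; the point is only that the two quantiles (plus the cost c of moving funds) are all the rule of thumb asks you to estimate.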

RyanCarey's Shortform

Overzealous moderation?

Has anyone else noticed that the EA Forum moderation is quite intense of late?

Back in 2014, I'd proposed quite limited criteria for moderation: "spam, abuse, guilt-trips, socially or ecologically destructive advocacy". I'd said then: "Largely, I expect to be able to stay out of users' way!" But my impression is that the moderators have at some point after 2017 taken to advising and sanctioning users based on their tone, for example, here (Halstead being "warned" for unsubstantiated true comments), "rudeness" and "Other behav... (read more)

I don't have a view of the level of moderation in general, but think that warning Halstead was incorrect. I suggest that the warning be retracted.

It also seems out of step with what the forum users think - at the time of writing, the comment in question has 143 Karma (56 votes).

RogerAckroyd's Shortform

Sometimes when I see people writing about opposition to the death penalty I get the urge to mention Effective Altruism to them, and suggest it is borderline insane to think opposition to capital punishment in the US is where a humanitarian should focus their energies. (Other political causes don't cause me to react in the same way because people's desire to campaign for things like lower taxes, feminism or more school spending seems tied up with self-interest to a much larger degree, so the question of whether it is the most pressing issue seems irrelevant.) I always refrain from mentioning EA because I think it would do more harm than good, so I will just vent my irrational frustration here. 

 

I endorse using Shortform posts to vent! I think you're right that mentioning EA would be likely to do more harm than good in those cases, but your feelings are reasonable and I'm glad this can be a place to express them.

Some object-level thoughts not meant to interfere with your venting:

I don't feel the same way about people who oppose the death penalty, I think largely because I have a strong natural sense that justice is very important and injustice is very especially extra-bad. This doesn't influence my giving, but I definitely feel worse about the sto... (read more)

Ben_Snodin's Shortform

I recently spent some time trying to work out what I think about AI timelines. I definitely don’t have any particular insight here; I just thought it was a useful exercise for me to go through for various reasons (and I did find it very useful!).

As it came out, I "estimated" a ~5% chance of TAI by 2030 and a ~20% chance of TAI by 2050 (the probabilities for AGI are slightly higher). As you’d expect me to say, these numbers are highly non-robust.

When I showed the plots below to a couple of people, they commented that they were surprised that my AGI probabilitie... (read more)

Linch's Shortform

Are there any EAA researchers carefully tracking the potential of huge cost-effectiveness gains in the ag industry from advances in genetic engineering of factory-farmed animals? Or (less plausibly) advances from better knowledge/practice/lore from classical artificial selection? As someone pretty far away from the field, a priori the massive gains made in biology/genetics in the last few decades seem like something that we plausibly have not priced in. So it'd be sad if EAAs get blindsided by animal meat becoming a lot cheaper in the next few decades (if this is indeed viable, which it may not be).

Besides just extrapolating trends in cost of production/prices, I think the main things to track would be feed conversion ratios and the possibility of feeding animals more waste products or otherwise cheaper inputs, since feed is often the main cost of production. Some FCRs are already below 2 and close to 1, i.e. it takes less than 2 kg of input to get 1 kg of animal product (this could be measured in weight, calories, protein weight, etc.), e.g. for chickens, some fish, and some insects.

I keep hearing that animal protein comes from the protein in wh... (read more)

Pablo: This post [https://www.overcomingbias.com/2012/12/breeding-happier-livestock-no-futuristic-tech-required.html] may be of interest, in case you haven't seen it already.
Linch: Yep, aware of this! Solid post.
Linch's Shortform

While talking to my manager (Peter Hurford), I realized that, by default, when "life" gets in the way (concretely, last week a fair number of hours were taken up by management training seminars I wanted to attend before I get my first interns; this week I'm losing ~3 hours to a covid vaccination appointment and in expectation will lose ~5 more to side effects), research (i.e. the most important thing on my agenda, and what I'm explicitly being paid to do) is the first to go. This seems like a bad state of affairs.

I suspect that this is more prominent in ... (read more)

I liked this, thanks.

I hear that this is similar to a common problem for many entrepreneurs; they spend much of their time on the urgent/small tasks, and not the really important ones. 

One solution recommended by Matt Mochary is to dedicate 2 hours per day of the most productive time to work on the most important problems. 

https://www.amazon.com/Great-CEO-Within-Tactical-Building-ebook/dp/B07ZLGQZYC

I've occasionally followed this, and mean to more. 

meerpirat: Yeah, I can also relate a lot (doing my PhD). One thing I noticed is that my motivational system slowly but surely seems to update on my AI-related worries, and that this now and then helps keep me focused on what I actually think is more important from the EA perspective. Not sure what you are working on, but maybe some things come to mind for how to increase your overall motivation, e.g. by reading or thinking of concrete stories of why the work is important, and by talking to others about why you care about the things you are trying to achieve.
FJehn: This resonated with me a lot. Unfortunately, I do not have a quick fix. However, what seems to help at least a bit for me is separating planning for a day from doing the work. Every workday the last thing I do (or try to do) is look at my calendar and to-do lists and figure out what I should be doing the next day. By doing this I think I am better at assessing what is important, as I do not have to do it in that moment; I only have to think of what my future self will be capable of doing. When the next day comes and future self turns into present self, I find it really helpful to already have the work for the day planned for me. I do not have to think about what is important, I just do what past me decided. Not sure if this is just an obvious way to do this, but I thought it would not hurt to write it down.
Nathan Young's Shortform

EA twitter bots

A set of EA jobs Twitter bots, each of which retweets a specific set of hashtags, e.g. #AISafety #EAJob, #AnimalSuffering #EAJob, etc. Please don't get hung up on these; we'd actually need to brainstorm the right hashtags.

You follow the bots and hear about the jobs.

Ramiro's Shortform

Not super-effective, but given Sanjay's post on ESG, maybe some people are interested:
Ethics and Trust in Finance 8th Global Prize
The Prize is a project of the Observatoire de la Finance (Geneva), a non-profit foundation, working since 1996 on the relationship between the ethos of financial activities and its impact on society. The Observatoire aims to raise awareness of the need to pursue the common good through reconciling the good of persons, organizations, and community. 
[...]
The 8th edition (2020-2021) of the Prize was officially launched o... (read more)

RogerAckroyd's Shortform

Sometimes the concern is raised that caring about wild animal welfare is seen as unintuitive and will bring conflict with the environmental movement. I do not think large-scale efforts to help wild animals should be an EA cause at the moment, but in the long term I don't think environmentalist concerns will be a limiting factor. Rather, I think environmentalist concerns are partially taken as seriously as they are because people see it as helping wild animals as well. (In some perhaps not fully thought out way.) I do not think it is a coincidence that the ext... (read more)

MichaelStJules: To add to this, Animal Ethics has done some research on attitudes towards helping wild animals:

1. https://www.animal-ethics.org/survey-helping-wild-animals-scientists-students/
2. https://www.animal-ethics.org/scientists-attitudes-animals-wild-qualitative/ (another summary by Faunalytics [https://faunalytics.org/institutional-attitudes-towards-wild-animal-suffering/])

From the first link, which looked at attitudes among scholars and students in life sciences towards helping wild animals in urban settings, with vaccinations, and for weather events: [...]

For what it's worth, I think the current focus is primarily research, advocacy for wild animals, and field building, not the implementation or promotion of specific direct interventions.

Thank you for those links. 

RogerAckroyd's Shortform

On the 80,000 Hours website they have a profile on factory farming, where they say they estimate that ending factory farming would increase the expected value of the future of humanity by between 0.01% and 0.1%. I realize one cannot hope for precision in these things, but I am still curious whether anyone knows more about the reasoning process that went into making that estimate.  

I think it's basically that moral circle expansion is an approach to reduce s-risks (mostly related to artificial sentience), and ending factory farming advances moral circle expansion. Those links have posts on the topic, but the most specific tag is probably Non-humans and the long-term future. From a recent paper on the topic:

The fact that there are over 100 billion animals on factory farms is partly why we consider them one of the most important frontiers of today’s moral circle (K. Anthis & J. R. Anthis, 2019).

I think Sentience Institute and the C... (read more)

Aaron Gertler: Note: I don't work for 80,000 Hours, and I don't know how closely the people who wrote that article/produced their "scale" table would agree with me.

For that particular number, I don't think there was an especially rigorous reasoning process. As they say when explaining the table in their scale metric [https://80000hours.org/articles/problem-framework/#how-to-assess-it], "the tradeoffs across the columns are extremely uncertain". That is, I don't think that there's an obvious chain of logic from "factory farming ends" to "the future is 0.01% better". Figuring out what constitutes "the value of the future" is too big a problem to solve right now.

However, there are some columns in the table that do seem easier to compare to animal welfare. For example, you can see that a scale of "10" (what factory farming gets) means that roughly 10 million QALYs are saved each year. So a scale of "10" means (roughly) that something happens each year which is as good as 10 million people living for another year in perfect health, instead of dying.

Does it seem reasonable that the annual impact of factory farming is as bad as 10 million people losing a healthy year of their lives? If you think that does sound reasonable, then a scale score of "10" for ending factory farming should be fine. But you might also think that one of those two things -- the QALYs, or factory farming -- is much more important than the other. That might lead you to assign a different scale score to one of them when you try to prioritize between causes.

Of course, these comparisons are far from perfectly empirical. But at some point, you have to say "okay, outcome A seems about as good/bad as outcome B" in order to set priorities.
JamesOz's Shortform

Small request for help to measure the impact of an intervention we have planned:

 

In short, Animal Rebellion UK is planning a large protest/civil resistance action and I'm keen to try to measure the impact of this quantitatively and semi-rigorously, budget permitting. We’re planning on doing an opinion poll (or another bit of market research) before and after our action, with roughly a cohort of 1000 people, to see if what we did actually managed to change public opinion around animal farming. I’ve never done something like this before but I’m s... (read more)
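
In case it's useful, here is a rough sketch (my own, not part of the plan above; the agreement rates are hypothetical) of the kind of before/after comparison a ~1,000-person poll supports, using a simple two-proportion z-test:

```python
import numpy as np
from scipy.stats import norm

def two_proportion_z(p1, n1, p2, n2):
    """Two-sample z-test for a difference in proportions (pooled standard error)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

# Hypothetical numbers: 30% agreement with some statement before the action,
# 34% after, with ~1000 respondents in each wave.
z, p = two_proportion_z(p1=0.30, n1=1000, p2=0.34, n2=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers a four-percentage-point shift lands right at the edge of detectability at the conventional 0.05 level (p is roughly 0.055), which seems worth keeping in mind when deciding on the sample size and on what size of shift you'd count as a meaningful result.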

If you haven't already, make sure you check out Faunalytics' helpful resources on survey design. https://faunalytics.org/research-advice/

And yes, I'd be happy to have a quick call, assuming this is still relevant. You can pick a time here https://calendly.com/jamie-a-harris94/60min

deluks917's Shortform

I don't like it when animal advocates are too confident about their approach and are critical of other advocates. We are losing badly; meat consumption is still skyrocketing! Now is the time to be humble and open-minded. Meta-advice: don't be too critical of the critical either!

omnizoid's Shortform

I recently did a debate with a critic of effective altruism. It has gotten a reasonable reception so far, so for those who think that this is a useful EA activity, feel free to give it a share.  

Ben_Snodin's Shortform

A moral philosophy free-for-all

I wrote this last Summer as a private “blog post” just for me. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. These rambling thoughts come from my very naive point of view (as it was in the Summer of 2020; not to suggest my present day point of view is much less naive). In particular if you’ve already read lots of moral philosophy you probably won’t learn anything from reading this.

The free-for-all

Generally, reading various moral philosophy ... (read more)

Ben Garfinkel's Shortform

A thought on how we describe existential risks from misaligned AI:

Sometimes discussions focus on a fairly specific version of AI risk, which involves humanity being quickly wiped out. Increasingly, though, the emphasis seems to be on the more abstract idea of “humanity losing control of its future.” I think it might be worthwhile to unpack this latter idea a bit more.

There’s already a fairly strong sense in which humanity has never controlled its own future. For example, looking back ten thousand years, no one decided that sedentary agriculture would i... (read more)

abergal: Really appreciate the clarifications! I think I was interpreting "humanity loses control of the future" in a weirdly temporally narrow sense that makes it all about outcomes, i.e. where "humanity" refers to present-day humans, rather than humans at any given time period. I totally agree that future humans may have less freedom to choose the outcome in a way that's not a consequence of alignment issues. I also agree value drift hasn't historically driven long-run social change, though I kind of do think it will going forward, as humanity has more power to shape its environment at will.
Linch: My impression is that the differences in historical vegetarianism rates between India and China, and especially India and southern China (where there is greater similarity of climate and crops used), are a moderate counterpoint. At the timescale of centuries, vegetarianism rates in India [https://en.wikipedia.org/wiki/Vegetarianism_by_country#:~:text=Other%20surveys%20cited%20by%20FAO,the%20reasons%20are%20mainly%20cultural.] are much higher than rates in China [https://en.wikipedia.org/wiki/Vegetarianism_by_country#:~:text=An%20estimated%204%20to%205%20percent%20of%20Chinese%20are%20vegetarian.]. Since factory farming is plausibly one of the larger sources of human-caused suffering today, the differences aren't exactly a rounding error.

That's a good example.

I do agree that quasi-random variation in culture can be really important. And I agree that this variation is sometimes pretty sticky (e.g. Europe being predominantly Christian and the Middle East being predominantly Muslim for more than a thousand years). I wouldn't say that this kind of variation is a "rounding error."

Over sufficiently long timespans, though, I think that technological/economic change has been more significant.

As an attempt to operationalize this claim: The average human society in 1000AD was obviously very differen... (read more)

Linch's Shortform

Something that came up in a discussion with a coworker recently is that often internet writers want some (thoughtful) comments, but not too many, since too many comments can be overwhelming. Or at the very least, the marginal value of additional comments is usually lower for authors when there are more comments. 

However, the incentives for commenters are very different: by default people want to comment on the most exciting/cool/wrong thing, so internet posts can easily end up attracting either many comments or none. (I think) very little self-pol... (read more)

MichaelA: I think these are useful observations and questions. (Though I think "too many comments" should probably be much less of a worry than "too few", at least if the comments make some effort to be polite and relevant, and except inasmuch as loads of comments on one thing suck up time that could be spent commenting on other things where that'd be more useful.) I think a few simple steps that could be taken by writers are:

1. People could more often send google doc drafts to a handful of people specifically selected for being more likely than average to (a) be interested in reading the draft and (b) have useful things to say about it.
2. People could more often share google doc drafts in the Effective Altruism Editing & Review Facebook group.
3. People could more often share google doc drafts in other Facebook groups, Slack workspaces, or the like.
   • E.g., sharing a draft relevant to improving institutional decision-making in the corresponding Facebook group.
4. People could more often make posts/shortforms that include an executive summary (or similar) and a link to the full google doc draft, saying that this is still a draft and they'd appreciate comments.
   • Roughly this has been done recently by Joe Carlsmith and Ben Garfinkel, for example.
   • This could encourage more comments than just posting the whole thing to the Forum as a regular post, since (a) this conveys that this is still a work-in-progress and that comments are welcome, and (b) google docs make it easier to comment on specific points.
5. When people do post full versions of things on the Forum (or wherever), they could explicitly indicate that they're interested in feedback, indicate roughly what kinds of feedback would be most valuable, and indicate that they might update the post in light of feedback (if that's true).
6. People could implement the advice given in these two good posts: 1. ht

One other semi-relevant thing from my post Notes on EA-related research, writing, testing fit, learning, and the Forum:

Sometimes people worry that a post idea might be missing some obvious, core insight, or just replicating some other writing you haven't come across. I think this is mainly a problem only inasmuch as there could've been a more efficient way for you to learn those things than slowly crafting a post.

  • So if you can write (a rough version of) the post quickly, you could just do that.
  • Or you could ask around or make a quick Question post to outline the basic idea a
... (read more)
Khorton's Shortform

I'm considering donating to the Centre for Women's Justice. With a budget of about £300k last year, they have undertaken strategic litigation against the government, Crown prosecutors, etc. for mismanagement of sexual assault cases. The cases seem well-chosen to raise the issue on the political agenda. I think more rapists being successfully prosecuted would have a very positive impact, so I'm excited to see this work. I'm planning to email them soon. https://www.centreforwomensjustice.org.uk/strategic-plan

Ben_Snodin's Shortform

Some initial thoughts on "Are We Living At The Hinge Of History"?

Below, I give a very rough summary of Will MacAskill’s article Are We Living At The Hinge Of History? and some very preliminary thoughts on the article and some of the questions it raises.

I definitely don’t think that what I’m writing here is particularly original or insightful: I’ve thought about this for no more than a few days, any points I make are probably repeating points other people have already made somewhere, and/or are misguided, etc. This seems like an incredibly deep t... (read more)

Ben_Snodin's Shortform

Some initial thoughts on moral realism vs anti-realism

I wrote this last Summer as a private “blog post” just for me. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. These thoughts come from my very naive point of view (as it was in the Summer of 2020; not to suggest my present day point of view is much less naive). In particular if you’ve already read lots of moral philosophy you probably won’t learn anything from reading this. Also, I hope my summaries of other people’s a... (read more)

Ben Garfinkel's Shortform

A thought on epistemic deference:

The longer you hold a view, and the more publicly you hold a view, the more calcified it typically becomes. Changing your mind becomes more aversive and potentially costly, you have more tools at your disposal to mount a lawyerly defense, and you find it harder to adopt frameworks/perspectives other than your favored one (the grooves become firmly imprinted into your brain). At least, this is the way it seems and personally feels to me.[1]

For this reason, the observation “someone I respect publicly argued for X many years a... (read more)

JP Addison: At least in software, there's a problem I see where young engineers are often overly bought into hype trains, but older engineers (on average) stick too much with technologies they already know. I would imagine something similar in academia, where hot new theories are over-valued by the young, but older academics have the problem you describe.

Good point!

That consideration -- and the more basic consideration that more junior people often just know less -- definitely pushes in the opposite direction. If you wanted to try some version of seniority-weighted epistemic deference, my guess is that the most reliable cohort would have studied a given topic for at least a few years but less than a couple decades.
