
Dylan Richardson

Bio


Graduate student at Johns Hopkins. Looking for part-time work.

Comments

I'm not familiar with the examples you listed, @mal_graham🔸 (anticoagulant bans and bird-safe glass). Are these really robust examples of palatability? I'd bet they were motivated more by safety for dogs, children, and predatory birds than by concern for the rats, and I'd guess that even the glass succeeded more on conservation grounds.

Certainly, even if so, it's good to see that there are some palatability workarounds. But given the small-body problem, this doesn't inspire great confidence that there is much more latent palatability for important interventions, especially once the palatable low-hanging fruit have been plucked.

Hi Aditi! My current level of involvement in the animal movement isn't high enough for these questions to be very decision-relevant for me.

As for others in the movement: the main appeal of the first question is to better draw out expectations about future moral patients. It might shed light on the relative strength of various hypothetical sentience candidates in relation to one another. My understanding is that the consensus view is that digital minds dominate far-future welfare. But whether or not that turns out to be so, it's not obvious it will happen without concerted efforts to design these minds as such. And if it is necessary to design digital minds for sentience, then we might expect other artificial consciousnesses to be created before that point (which may deserve our concern).

The last two questions are rough attempts to aid prioritization efforts. 

1. Farmed animals receive very little in philanthropic funding; so relatively minor changes may matter a lot. 
2. Holden Karnofsky, in his latest 80k episode appearance, said something to the effect that corporate campaigns had, in his view, some of Open Phil's best returns. Arguably, with fewer commitments being achieved over time and other successes on the horizon (alt protein, policy, new small-animal-focused orgs), this could change. Predictions expecting that it will might themselves help inform funders making inter-cause prioritization decisions.

 

Of less immediate practical relevance than the other questions, but nonetheless interesting and not previously discussed in this context (to my knowledge):

Will the first artificial conscious mind be determined to be:

  1. In the form of an LLM
  2. A simulation of a non-human animal brain (a nematode, for instance)
  3. A simulation/emulation of a human brain
  4. There will not be any such determination by the resolution date (it seems best to exclude this answer and have the question not resolve in that case, since it would otherwise dominate; a separate question on this would be better)
  5. Other

Also:

  • Something about octopus farm prevalence/output, probably
  • Forecasts of overall farmed animal welfare spending for a given future year, inflation-corrected. I'm not sure what the most current estimates are or which org would be best for resolution.
  • Might be interesting to do something like "according to reputable figure X (Lewis Bollard?), what will be judged to have been the most effective animal spending on the margin over the prior five years?" Options: corporate campaigns, movement building, direct action, go-vegan advocacy, policy advocacy, alternative protein development, etc.
     

I take your point about "Welfareans" vs. hedonium as beings rather than things; perhaps that would improve consensus-building on this.

That being said, I don't really expect whatever these entities are to be anything like what we are accustomed to calling persons. A big part of this is that I don't see any reason for their experiences to change over time; they wouldn't need to be aging or learning or growing satiated or accustomed.

Perhaps this is just my hedonist bias coming through; certainly there's room for compromise. But unfortunately, my experience is that lots of people are strongly compelled by experience-machine arguments and are unwilling to make the slightest concession to the hedonist position.

Changed my mind, I like this. I'm going to call them Welfareans from now on.

I'm very much pro-deprioritizing community posts. They invariably get way more engagement than other topics, and I don't think this is only an FTX-related phenomenon. Community posts are the manifestation of in-group/out-group tensions and come with all of the associated poor judgement and decorum. They are the EA Forum's politics and religion.

Obviously they are needed to an extent, but it is entirely reasonable to give less contentious contributions a boost.

AI safety pretty clearly swallows longtermist community building. If we want longtermism to be built and developed, it needs to be aimed at very explicitly, not just mentioned on the side. I suspect that general EA group community building is better for this reason too: it isn't overwhelmed by any one object-level cause, career, or demographic.

70% disagree

Morality is Objective

I don't think this is an important or interesting question, at least not over the type of disagreement we are seeing here. The scope of the question (and of possible views) is larger than BB seems to acknowledge. At the very least, it is obvious to me that there is a type of realism/objectivity that is

1. Endorsed by at least some realists, especially with certain religious views.

2. Ontologically much more significant than BB is willing to defend.

Why ignore this? 

There's a lot of good, old, semi-formal content on the GiveWell blog: https://blog.givewell.org/ If you do some searches, you may be able to find the subject touched on. 

I'm not sure if they have done any formal review of the subject, however.

I don't have anything to add about the intra-cause effectiveness multiplier debate. But much of the multiplier over the average charity is simply due to very poor cause selection. So while I applaud OP for wanting rigorous empirical evidence, some comparisons simply don't require peer-reviewed studies. We can still reason well in the absence of easy quantification.

Dogs and cats vs. farmed animal causes is a great example. But animal shelters vs. GHD is just as tenable.

This isn't an esoteric point; a substantial amount of donations simply go to bad causes. Poverty alleviation in rich countries (not politically or policy directed), most mutual-aid campaigns, feeding or clothing the poor in the rich world, most rich-world DEI-related activism lacking political aims (movement building or policy is at least more plausible), most ecological efforts, undirected scholarship funds, the arts.

I'm comfortable suggesting that any of these are at least 1000x less cost-effective.

Hot take, but political violence is bad and will continue to be bad in the foreseeable near-term future. That's all I came here to say folks, have a great rest of your day.
