All of RavenclawPrefect's Comments + Replies

Thanks so much for this post - I'm going to adjust my buying habits from now on! 

My impression is that e.g. Vital Farms is still substantially better than conventional egg brands, and if I need to buy eggs in a store that doesn't offer these improved options it still probably cuts suffering per egg in half or more relative to a cheaper alternative. Does that seem right to you?

4
mayleaf
3mo
I do expect Vital Farms to be a lot better than an average cheap egg brand with no certifications. I'm not sure how they compare to the average brand that's Certified Humane and USDA Certified Organic (a combination that requires outdoor access*, no debeaking, and no forced molting), but my guess would be that they're better than that too. Most of my uncertainty comes from lack of knowledge of chicken psychology (is being outdoors but in a large flock of 20k birds a lot less stressful than being indoors in a large dense flock, or about the same? Does beak trimming cause chronic pain or frustration, since the hens can't forage as well?).

One specific consideration with Vital Farms is that their practices vary by farm: they're a collective of small farms nationwide, and it seems that they have different subgroups of farms that adhere to different standards. Here are the Cornucopia Institute's egg scorecards for both their standard and "regenerative organic" lines: standard, regenerative organic. Based on Cornucopia, I think both lines still look pretty good, even compared to other organic farms.

*Asterisk on "outdoor access" since apparently USDA Organic counts caged-in porches as outdoor access, which seems bad to me.

What are the limitations of the rodent studies? Two ways I could imagine them being inadequate:

  • Rodent eyes are smaller and the physical scale of relevant features matters a lot for how damaging far UV-C is (although I would naively guess that smaller eyes are if anything worse for this, so if the rodents do fine then I'd think the humans would too).
  • Rodents can't follow detailed instructions or provide subjective reports, so there may be some kinds of subtle vision impairment we wouldn't be able to notice.

Do either of these apply, or are the limitations in these studies from other factors?

Since no one has said anything in reply to this comment yet: I suspect it is getting downvotes because it doesn't seem especially relevant to the current discussion and feels like it would fit better as a standalone post or an Intercom message or something.

2
Nathan Young
1y
I argue that since Lizka, who runs the forum, had to spend a load of time turning comments into a basic poll feature, maybe it's a feature she and others want. Also I've never seen her do this before. (But thank you)
  1. I'm lazy; I am not immune to the phenomenon where users reliably fail to apply optimization to their use of a website, despite their experience improving when such changes are made for them. (I suspect this perspective is underrepresented in the comments because fewer people are willing to admit it and it's probably more common among lurkers.)
  2. I consume content weighted in large part by how many upvotes it has, because that's where the discussion is and it's what people will be talking about. (Also because in my case most of my EA Forum reading comes from k
... (read more)

I read the original comment not as an exhortation to always include lots of nuanced reflection in mostly-unrelated posts, but to have a norm that on the forum, the time and place to write sentences that you do not think are actually true as stated is "never (except maybe April Fools)".

The change I'd like to see in this post isn't a five-paragraph footnote on morality, just the replacement of a sentence that I don't think they actually believe with one they do. I think that environments where it is considered a faux pas to point out "actually, I don't think... (read more)

Also note that their statement included "...that occurred at FTX". So not any potential fraud anywhere.

3
SiebeRozendal
1y
Ah, I didn't mean to imply that Habryka's comment was a faux pas. That's awkward phrasing on my part. I just meant to say that the points he raises feel irrelevant to this post and its context.

it doesn't seem good for people to face hardship as a result of this

I agree, but the tradeoff is not between "someone with a grant faces hardship" and "no one faces hardship"; it's between "someone with a grant faces hardship" and "someone with deposits at FTX faces hardship".

I expect that the person with the grant is likely to put that money to much better uses for the world, and that's a valid reason not to do it! But in terms of the direct harms experienced by the person being deprived of money, I'd guess the median person who lost $10,000 to unre... (read more)

I assume you mean something like “return the money to FTX such that it gets used to pay out customer balances”, but I don’t actually know how I’d go about doing this as an individual. It seems like if this was a thing lots of people wished to do, we’d need some infrastructure to make it happen, and doing so in a way that led to the funds having the correct legal status to be transferred back to customers in that fashion might be nontrivial.

(Or not; I’m definitely not an expert here. Happy to hear from someone with more knowledge!)

1
sawyer
1y
Yep this is a great point and overlaps with Vardev's comment. If I thought that the money was gained immorally, it would be pretty bad to just return it to the people who did the immoral thing!

What level of feedback detail do applicants currently receive? I would expect that giving a few more bits beyond a simple yes/no would have a good ROI, e.g. at least having the grantmaker tick some boxes on a dropdown menu. 

"No because we think your approach has a substantial chance of doing harm", "no because your application was confusing and we didn't have the time to figure out what it was saying", and "no because we think another funder is better able to evaluate this proposal, so if they didn't fund it we'll defer to their judgment" seem like useful distinctions to applicants without requiring much time from grantmakers.

19
Buck
2y

The problem with saying things like this isn't that they're time-consuming to say, but that they open you up to some risk of the applicant getting really mad at you, along with various other risks like this. These costs can be mitigated by being careful (e.g. picking phrasings very intentionally, running your proposed feedback by other people), but being careful is time-consuming.

3
timunderwood
2y
Also 'no because my intuitions say this is likely to be low impact', and 'other'. But I agree that those four options would be useful -- maybe even despite the risk that the person immediately decides to try arguing with the grantmaker about how his proposal really is in fact likely to be high impact, beneficial rather than harmful, and totally not confusing, and that the proposal definitely shouldn't be othered.

Opening with a strong claim, making your readers scroll through a lot of introductory text, and ending abruptly with "but I don't feel like justifying my point in any way, so come up with your own arguments" is not a very good look on this forum.

Insightful criticism of the capital allocation dynamics in EA is a valuable and worthwhile thing that I expect most EA Forum readers would like to see! But this is not that, and the extent to which it appears to be that for several minutes of the reader's attention comes across as rather rude. My gut re... (read more)

-2
Milan_Griffes
3y
"Opening with a strong claim, making your readers scroll through a lot of introductory text, and ending abruptly with "but I don't feel like justifying my point in any way, so come up with your own arguments" is not a very good look on this forum."

I wasn't intending the text included in the post to be introductory...

"[I have read the entirety of The Inner Ring, but not the vast series of apparent prerequisite posts to this one. I would be very surprised if reading them caused me to disagree with the points in this comment, though.]"

If you don't want to read the existing work that undergirds this post, why should I expect further writing to change your mind about the topic?

Alexey Guzey has posted a very critical review of Why We Sleep - I haven't deeply investigated the resulting debate, but my impression from what I've seen thus far is that the book should be read with a healthy dose of skepticism.

5
BenSchifman
3y
Wow I wasn't aware of this, thanks for alerting me to it.  It appears the author might have responded somewhat indirectly here (https://sleepdiplomat.wordpress.com/).  I will add a note in the post and do some more digging when I have time. 

If one doesn't have strong time discounting in favor of the present, the vast majority of the value that can be theoretically realized exists in the far future.

As a toy model, suppose the world is habitable for a billion years, but there is an extinction risk in 100 years which requires substantial effort to avert.

If resources are dedicated entirely to mitigating extinction risks, there is net -1 utility each year for 100 years but a 90% chance that the world can be at +5 utility every year afterwards once these resources are freed up for direct work... (read more)
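A rough back-of-the-envelope version of this toy model (the variable names are just for illustration; the numbers are the ones stated above, and the comparison only covers the mitigation path that the comment describes):

```python
# Toy model: world habitable for 1 billion years; extinction risk in 100 years.
# Mitigating it costs -1 utility/year for 100 years, and yields a 90% chance
# of +5 utility/year for the rest of the habitable window.
HABITABLE_YEARS = 1_000_000_000
RISK_WINDOW = 100
P_SURVIVE = 0.9
LONG_RUN_UTILITY = 5

near_term_cost = -1 * RISK_WINDOW  # -100 total over the risk window
expected_future = P_SURVIVE * LONG_RUN_UTILITY * (HABITABLE_YEARS - RISK_WINDOW)
expected_value = near_term_cost + expected_future

print(expected_value)  # roughly 4.5 billion
```

On these numbers the expected far-future value is about seven orders of magnitude larger than the near-term cost, which is the sense in which "the vast majority of the value exists in the far future."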

1
Jack Malde
5y
Thanks for this. I'd like to ask you the same question I'm asking others in this thread. I do wonder about the prospect of 'solving' extinction risk. Do you think EAs who are proponents of reducing extinction risk now actually expect these risks to become sufficiently small that moving focus onto something like animal suffering would ever be justified? I'm not convinced they do, as extinction in their eyes is so catastrophically bad that any small reduction in probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?

It seems to me that there are quite low odds of 4000-qubit computers being deployed without proper preparations. There are very strong incentives for cryptography-using organizations of almost any stripe to transition to post-quantum encryption algorithms as soon as they expect such algorithms to become necessary in the near future, for instance as soon as they catch wind of 200-, 500-, and 1000-qubit quantum computers. Given that post-quantum algorithms already exist, it does not take much time from worrying about better quantum computers to protecting a... (read more)

3
len.hoang.lnh
5y
Thanks! This is reassuring. I met someone last week who is doing his PhD in post-quantum cryptography, and he told me about an ongoing competition to set the standards for such cryptography. The transition seems to be on its way!