

What are the limitations of the rodent studies? Two ways I could imagine them being inadequate:

  • Rodent eyes are smaller and the physical scale of relevant features matters a lot for how damaging far UV-C is (although I would naively guess that smaller eyes are if anything worse for this, so if the rodents do fine then I'd think the humans would too).
  • Rodents can't follow detailed instructions or provide subjective reports, so there may be subtle kinds of vision impairment we wouldn't be able to notice.

Do either of these apply, or are the limitations in these studies from other factors?

Since no one has said anything in reply to this comment yet: I suspect it is getting downvotes because it doesn't seem especially relevant to the current discussion and feels like it would fit better as a standalone post or an Intercom message or something.

  1. I'm lazy; I am not immune to the phenomenon where users reliably fail to optimize their use of a website, despite their experience improving when such changes are made for them. (I suspect this perspective is underrepresented in the comments because fewer people are willing to admit it, and it's probably more common among lurkers.)
  2. I consume content weighted in large part by how many upvotes it has, because that's where the discussion is and it's what people will be talking about. (Also because in my case most of my EA Forum reading comes from karma-gated RSS feeds, though I expect this to be uncommon.) This means that in an equilibrium of most attention going to community posts, I'll read more of them, but I would be happy with a state of affairs that shifted the equilibrium to object-level posts. 

I read the original comment not as an exhortation to always include lots of nuanced reflection in mostly-unrelated posts, but to have a norm that on the forum, the time and place to write sentences that you do not think are actually true as stated is "never (except maybe April Fools)".

The change I'd like to see in this post isn't a five-paragraph footnote on morality, just the replacement of a sentence that I don't think they actually believe with one they do. I think that environments where it is considered a faux pas to point out "actually, I don't think you can have a justified belief in the thing you said" are extremely corrosive to the epistemics of a community hosting those environments, and it's worth pushing back on them pretty strongly. 

it doesn't seem good for people to face hardship as a result of this

I agree, but the tradeoff is not between "someone with a grant faces hardship" and "no one faces hardship", it's between "someone with a grant faces hardship" and "someone with deposits at FTX faces hardship". 

I expect that the person with the grant is likely to put that money to much better uses for the world, and that's a valid reason not to do it! But in terms of the direct harms experienced by the person being deprived of money, I'd guess the median person who lost $10,000 to unrecoverable FTX deposits is made a fair bit worse off by that than the median person with a $10,000 Future Fund grant would be by returning it. 

I assume you mean something like “return the money to FTX such that it gets used to pay out customer balances”, but I don’t actually know how I’d go about doing this as an individual. It seems like if this was a thing lots of people wished to do, we’d need some infrastructure to make it happen, and doing so in a way that led to the funds having the correct legal status to be transferred back to customers in that fashion might be nontrivial.

(Or not; I’m definitely not an expert here. Happy to hear from someone with more knowledge!)

What level of feedback detail do applicants currently receive? I would expect that giving even a few more bits beyond a simple yes/no would have a good ROI, e.g. having the grantmaker tick a few boxes in a dropdown menu. 

"No because we think your approach has a substantial chance of doing harm", "no because your application was confusing and we didn't have the time to figure out what it was saying", and "no because we think another funder is better able to evaluate this proposal, so if they didn't fund it we'll defer to their judgment" seem like useful distinctions to applicants without requiring much time from grantmakers.

Opening with a strong claim, making your readers scroll through a lot of introductory text, and then ending abruptly with "but I don't feel like justifying my point in any way, so come up with your own arguments" is not a good look on this forum. 

Insightful criticism of the capital allocation dynamics in EA is a valuable and worthwhile thing that I expect most EA Forum readers would like to see! But this is not that, and the extent to which it appears to be that for several minutes of the reader's attention comes across as rather rude. My gut reaction to this kind of rhetorical strategy is "if even the author doesn't want to put forth the effort to make this into a coherent argument, why should I?"

[I have read the entirety of The Inner Ring, but not the vast series of apparent prerequisite posts to this one. I would be very surprised if reading them caused me to disagree with the points in this comment, though.]

Alexey Guzey has posted a very critical review of Why We Sleep. I haven't deeply investigated the resulting debate, but my impression from what I've seen thus far is that the book should be read with a healthy dose of skepticism.
