Seth Ariel Green 🔸

Research Scientist @ Humane and Sustainable Food Lab
1598 karma · Working (6-15 years) · New York, NY, USA
setharielgreen.com

Bio

Participation
1

I am a Research Scientist at the Humane and Sustainable Food Lab at Stanford.

How others can help me

The lab I work at is seeking collaborators! More here.

How I can help others

If you want to write a meta-analysis, I'm happy to consult! I think I know something about what kinds of questions are good candidates, what your default assumptions should be, and how to delineate categories for comparisons.

Comments
200

Topic contributions
1

  • I think it’s ok in most cases to ask someone to be nice on the internet. To quote Juana Molina, "No seas antipática. No seas antipática con tu mamá. La la la." Words to live by.
  • I seriously doubt this is going to be the example they focus on when they send the trains
  • The first person I lightly flambéed also said “sorry I was used to less nice parts of the internet” so I guess that went down well
  • When someone says “be nice” and it gets more upvotes than the post itself that provides a more meaningful signal of how a post went astray than just downvoting the original post

So I am not persuaded and will keep doing my chastise-y hall monitor thing 😤😤😤

Dean Ball's commentary on this reframed the issue for me: https://www.hyperdimensional.co/p/clawed

The big difference, however, is that Anthropic is essentially using the contractual vehicle to impose what feel less like technical constraints and more like policy constraints on the military. Think of the difference between "this fighter jet is not certified for flight above such-and-such an altitude, and if you fly above that altitude, you've breached your warranty," and "you may not fly this jet above such-and-such an altitude." It is probably the case that the military should not agree to terms like this, and private firms should not try to set them.

But the Biden Administration did agree to those terms, and so did the Trump Administration, until it changed its mind. That alone should make one thing clear: terms like this are not some ridiculous violation of the norms of defense contracting...

The contract was not illegal, just perhaps unwise, and even that probably only in retrospect. Note that this is true even if you agree with the underlying substance of the limitations. You can support restrictions on mass domestic surveillance and lethal autonomous weapons, but disagree that a defense contract is the optimal vehicle to achieve that policy outcome. The way you achieve new policy outcomes, under the usual rules of our republic, is to pass a law...

I agree that there's something iffy/non-democratic in theory about putting that kind of constraint around the Pentagon, and that it would have been prudent for them to decline it in the first place. An analogy I read on Substack: if an epidural manufacturer told a government hospital "you're welcome to use our drug so long as you don't use it in any abortions," it would probably be prudent to decline that contract (too much overhead). 

Anyway, this reframing put one sentence in particular by Dario into a new light: "To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI." In other words, because we know what the law should be and what it's probably going to be, we should implement that policy today. I think many of us can think of examples where we'd be uncomfortable with a billionaire tech CEO saying that.

I am glad that you think this issue is tractable and I've been following your work with great interest since I saw your RECAP talk in July (Side note to anyone following this -- the RECAP talks are great!). I am not sure what my own threshold for "tractable" is but I appreciate that you are cracking on it and I would be glad to be proven wrong. Tractability is inherently based on unknowns and I'm glad we're a big tent where people can prove something possible by doing it.

Thank you, I've now hyperlinked to the piece in both of my prior comments where I use the word "norms" (in part so that I remember it next time 😃)   

Well, you know me, always playing nice 😃😃😃

I do want to say, in Andrew's defense, that the comments on @Alistair Stewart 's original post are not exactly the very model of civility that EA might hope to show the world. I can understand why you'd read them and come away with the sense that people don't really get what you're trying to do. 

However, the point that calling something a 'leading cause area' requires cross-cause comparison is well-taken.

Personally, I don't think "mostly or entirely plant-based diet for dogs" sounds nutty. I think most people understand that dogs can subsist on literal garbage. It might not be optimal, but I think we can make the case that if you feel really strongly about supplementing a healthy plant-based diet with animal protein, it should come from bivalves.

The case for cats is much less intuitive, I think. 

By the way, Ben:

Knight claims that there are other economically productive uses for byproducts; if that’s true, then a reduction in demand for animal-derived pet food would change the marginal use case for byproducts but not reduce their production

I don't think "not reduce their production" follows from your reasoning. If the next available use cases pay less money, we should, in expectation, see fewer animals raised, no?

I have twice recently "gently counseled" people on EA forum norms when they come in, in my opinion, a little too hot for this rather cool medium 😃 is there something official/CEA-endorsed on this subject? If not, should I/someone write it? I could point them to Scout Mindset but that's kind of a high barrier to entry. 

Hi Andrew, welcome to the forum! I am keenly interested in this subject -- I am one of the commentators you mentioned and have written on the subject previously (Towards non-meat diets for domesticated dogs).

Without getting too much into the specifics here, I wish to gently counsel you on EA forum norms in a way that might help the message go down better for readers.

  1. We generally assume good intent. It is true that I am not persuaded by some elements of your analysis, which is why I stated a preference for the Alexander et al. estimation methods, but I would not describe that disagreement as "seeking to undermine [your] studies." My disagreement is not coming from a place of malice.
  2. We generally do not use maximalist language to describe each other's perceived mistakes, e.g. "profoundly misrepresents," "dramatically incorrect," etc. Instead it is more in line with how we talk to say "This is mistaken" or "this is not what I intended."
  3. We tend to address each other by name/username and use tags-- by all means please call me Seth rather than "a commentator" 😃 

Anyway, looking forward to more engagement, 

Thank you Vasco! I'll be curious to hear folks' responses, if any.
