
Nick Corvino

201 karma · Joined Jul 2021

Comments (10)

Was there a motive behind the font change?

It's hard to put into words, but there were cocktails and nice background music, and all the events transitioned super smoothly. It's like when you watch the Oscars and everything seems like it's been rehearsed; that's how this felt. EA conferences, on the other hand, usually seem more hectic and improvisational.

Did not hear animal welfare mentioned once, and they had lots of meat options for lunch. That's all I got lol. 

You know, that's what I thought as well, but I've found the community to be more open to caution than I initially expected. Derek Thompson in particular (the main organizer for the event) harped on safety quite a bit. And if more EAs got involved (assuming they don't get amnesia), I assume they could carry over some of these concerns and shift the culture.

Strong upvote. 

To me, this seems more relevant for more established groups. Perhaps operational tasks vs. skilling up shouldn't be thought of in terms of percentages, but in terms of necessary vs. supplemental tasks. I would imagine things like sending emails, doing 1:1s, and buying food for events are necessary for any group to stay alive. So if you're the only HEA for your uni group, you might have to spend 90% of your time on these (and tbh I think this would be the right call). But when it comes to things like doing an egregious amount of marketing, or anything else that doesn't seem necessary, perhaps skilling up should be prioritized.
Also, I didn't see the multiplier effect come up anywhere, and I'm interested to hear how heavily you weight it. 

(Generally) how much counterfactual suffering comes from buying cage-free eggs vs. factory-farmed eggs? I couldn't find any straightforward posts/research on the topic, but I'm sure it's out there somewhere.

The problem here is that it's still overtly utilitarian, just with a bit more wiggle room. It still forces people to weigh one thing against another, which is what I think they might be uncomfortable doing. Buck Shlegeris says 'everything is triage', and I think you'd agree with this sentiment. However, I don't think everyone likes to think this way, and I don't want that hiccup to be the reason they don't further investigate EA.

I agree, and that is essentially the rationale I employ. I personally think I could put a value on every aspect of my life, thereby subverting the notion that implicit values can't be made explicit.

However, I think the problem is that for some people your answer will be a non-starter. They might not want to assign their implicit values an explicit value (and therefore your response would scare them away). So what I'm proposing is allowing them to keep their implicit values implicit while showing them that you can still be an EA if you accept that other people have implicit values as well. Honestly, it's barely a meta-ethical claim, and more so an explication of how EA can jibe with various ethical frameworks.