Nick Corvino


Comments

Community Builders Spend Too Much Time Community Building

Strong upvote. 

To me, this seems more relevant for more established groups. Perhaps the tradeoff between operational tasks and skilling up shouldn't be thought of in terms of percentages, but in terms of necessary vs. supplemental tasks. I would imagine things like sending emails, doing 1:1s, buying food for events, etc. are necessary for any group to stay alive. So if you are the only HEA for your uni group, you might have to spend 90% of your time doing these (and tbh I think this would be the right call). But when it comes to things like doing an egregious amount of marketing, or anything else that doesn't seem necessary, perhaps skilling up should be prioritized.
Also, I didn't see the multiplier effect come up anywhere, and I'm interested to hear how heavily you weight it. 

Nick Corvino's Shortform

(Generally) how much counterfactual suffering comes from buying cage-free eggs vs. factory-farmed eggs? I couldn't find any straightforward posts/research on the topic, but I'm sure it's somewhere.

Pitching EA to someone who believes certain goods can't be assigned a value

The problem here is that it's still overtly utilitarian, with just a bit more wiggle room. It still forces people to weigh one thing against the other, which is exactly what I think they might be uncomfortable doing. Buck Shlegeris says 'everything is triage,' and I think you'd agree with this sentiment. However, I don't think everyone likes to think this way, and I don't want that hiccup to be the reason they don't further investigate EA.

Pitching EA to someone who believes certain goods can't be assigned a value

I agree, and that is essentially the rationale I employ. I personally think I could put a value on every aspect of my life, thereby subverting the notion that implicit values can't be made explicit.

However, I think the problem is that for some people your answer will be a non-starter. They might not want to assign an explicit value to their implicit values (and therefore your response would shoo them away). So what I'm proposing is allowing them to keep their implicit values implicit, while showing them that you can still be an EA if you accept that other people have implicit values as well. In honesty, it's barely a meta-ethical claim, and more so an explication of how EA can jibe with various ethical frameworks.