Yes, that's the narrowly utilitarian perspective (on the current margin). My point was that if you mix in even a little common-sense moral reasoning and/or moral uncertainty, causing x harm and then preventing x harm is obviously more wrong than staying uninvolved. (To make this very obvious, imagine someone who beat their spouse but then donated to an anti-domestic-abuse charity to offset it.) I should have made it clearer that I wasn't objecting to the utilitarian logic itself. But even from a purely utilitarian perspective this matters, because it can make a real difference to the optics of the behavior.
Epistemic status: Some kind of fuzzy but important-feeling arguments
If one steps away from a very narrowly utilitarian perspective, I think the two are importantly disanalogous in a few ways, such that paying more attention to individual consumption of (factory farmed) animal products is justified.
The two are disanalogous from an offsetting perspective: Eating (factory farmed) animal products relatively directly results in an increase in animal suffering, and there is nothing that you can do to "undo" that suffering, even if you can "offset" it by donating...
Interesting post, thanks for writing it!
I'm not very familiar with the inner workings of think tanks, but I think you may be understating one aspect of the bad-research consideration: if the incentives are sufficiently screwed up that these organizations mostly aren't trying to produce good, objective research, then they're probably not doing a good job of teaching their staff to do that, nor selecting for staff who want to do it or are good at it. So you might not be able to get good research out of these institutions by just locally fixing the ...
"Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger, respected institutions focused on large-scale policy change, but the name still feels reminiscent of the former.
It's not just that it has developed in that direction, it has developed in many directions. Could the solution then be to use different brands in different contexts? "Global priorities comm...
This was a great overview, thanks!
I was left a bit confused as to what the goal and threat/trust model is here. Is the goal to be able to run models on devices controlled by untrusted users, without allowing the user direct access to the weights? Presumably because, with access to the weights, they could use them as a starting point for fine-tuning? (If so, this doesn't do anything to prevent misuse that can be achieved with the unmodified weights; you might want to make that clearer.)
As far as I can tell, the key problem with all of the m...