
Jeff_Kaufman's Comments

Some thoughts on deference and inside-view models

Similar to what you're saying about AI alignment being preparadigmatic, a major reason why trying to prove the Riemann hypothesis head-on would be a bad idea is that people have already been trying to do that for a long time without success. I expect the first people to consider the hypothesis approached it directly, and were reasonable to do so.

Some thoughts on deference and inside-view models
> I asked an AI safety researcher "Suppose your research project went as well as it could possibly go; how would it make it easier to align powerful AI systems?", and they said that they hadn't really thought about that. I think that this makes your work less useful.

This seems like a deeper disagreement than you're describing. A lot of research in academia (ex: much of math) involves playing with ideas that seem poorly understood, trying to figure out what's going on. It's not really goal-directed, especially not toward the kind of goal you can chain back to world improvement; it's more understanding-directed.

It reminds me of Sarah Constantin's post about the trade-off between output and external direction: https://srconstantin.wordpress.com/2019/07/20/the-costs-of-reliability/

For AI safety your view may still be right: one major way I could see the field going wrong is getting really into interesting problems that aren't useful. On the other hand, it's also possible that the best path involves highly productive, interest-following, understanding-building research where most individual projects don't seem promising from an end-to-end view. And maybe even where most aren't useful from an end-to-end view!

Again, I'm not sure here at all, but I don't think it's obvious you're right.

How should longtermists think about eating meat?

"With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality." https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

How should longtermists think about eating meat?

I wonder how much we can trust people's stated reasons for having been veg? For example, say people sometimes go veg both for health reasons and because they care about animals. I could imagine that if you asked them while they were still veg they would say "mostly because I care about animals", but if you asked them after they stopped you'd get more "I was doing it for health reasons", because saying you used to do it for the animals makes having stopped sound selfish?

How should longtermists think about eating meat?

https://faunalytics.org/a-summary-of-faunalytics-study-of-current-and-former-vegetarians-and-vegans/ has "84% of vegetarians/vegans abandon their diet" which matches my experience and I think is an indication that it's pretty far from costless?

How should longtermists think about eating meat?

> a lot of the long-term vegans that I know

It sounds like you may have a sampling bias, where you're missing out on all the people who disliked being vegan enough to stop?

Why I'm Not Vegan
> However, even if I were to get more than $10 of enjoyment out of punching that person, I don't think it's right that I'm morally permitted to do so.

I don't think you would be morally permitted to either, because I think https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/ is right and you can offset axiology, but not morality.

Why I'm Not Vegan
> I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions

Let's say I'm trying to convince someone that they shouldn't donate to animal charities or malaria net distribution, but instead should be trying to prevent existential risk. I bring up how many people there could potentially be in the future ("astronomical stakes") as a reason why they should care a lot about those people getting a chance to exist. If they have a strong intuition that people in the far future don't matter, though, this isn't going to be very persuasive. I can try to convince them that they should care, drawing on other intuitions that they do have, but it's likely that existential risk just isn't a high priority by their values. Them saying they think there's only a 0.1% chance or whatever that people 1000 years from now matter is useful for getting us on the same page about their beliefs, and I think we should have a culture of sharing this kind of thing.

On some questions you can get strong evidence, and intuitions stop mattering. If I thought we shouldn't try to convince people to go vegan because diet is strongly cultural and trying to change people's diet is hopeless, we could run a controlled trial and get a good estimate for how much power we really do have to influence people's diet. On other questions, though, it's much harder to get evidence, and that's where I would place the moral worth of animals and people in the far future. In these cases you can still make progress by your values, but people are less likely to agree with each other about what those values should be.

(I'm still very curious what you think of my demandingness objection to your argument above)

Why I'm Not Vegan

While I think moral trades are interesting, I don't know why you would expect me to see $4.30 going to an existential risk charity as enough to make it worth going vegetarian for a year? I'd much rather donate the $4.30 myself and not change my diet.

I think you're conflating "Jeff sees $0.43/y to a good charity as being clearly better than averting the animal suffering due to omnivorous eating" and "Jeff only selfishly values eating animal products at $0.43/y"?

Leaving Things For Others
> why can't I do both the individual action and the institutional part?

Both avoiding delivery and calling stores to encourage prioritization are ways of turning time into a better world. Yes, you can do your own shopping and call your own grocery store, but you have further options. Do you call other stores you go to less frequently and offer similar encouragement? Do you call stores in other areas? Do you sign up as an Instacart shopper so there will be more delivery slots available? You write that you can act on both fronts, but if you start thinking of how you might do good with your time you'll quickly have so many potential things you can do that you have to prioritize. I'm arguing that you should prioritize based on how much good the action does relative to how much of a sacrifice it is to yourself.

The link at the end ( https://www.jefftk.com/p/effective-altruism-and-everyday-decisions ) gives more details, but overall I see these as very similar to encouragements to use cold water for showering instead of warm. Yes, there's some benefit to both, but for most people, when you compare the benefit to others (the delivery slot has a chance of going to someone else who needs it more than you do; a cold shower means less CO2 emitted) with the cost to yourself (you would prefer grocery delivery and warm showers), there will be other altruistic options that do more good for less sacrifice.
