Jeff_Kaufman

Comments

EA Relationship Status

What do you think the "for life" adds to the pledge if not "for the rest of your lives"?

EA Relationship Status

See the discussion here: https://www.facebook.com/jefftk/posts/10100184609772372?comment_id=10100184674817022

It doesn't account for very much of the data, unfortunately.

EA Relationship Status

"for life" sounds just as permanent to me, if less morbid, than "till death do us part"

Some thoughts on deference and inside-view models

Similar to what you're saying about AI alignment being preparadigmatic, a major reason why trying to prove the Riemann conjecture head-on would be a bad idea is that people have already been trying to do that for a long time without success. I expect the first people to consider the conjecture approached it directly, and were reasonable to do so.

Some thoughts on deference and inside-view models

> I asked an AI safety researcher "Suppose your research project went as well as it could possibly go; how would it make it easier to align powerful AI systems?", and they said that they hadn't really thought about that. I think that this makes your work less useful.

This seems like a deeper disagreement than you're describing. A lot of research in academia (ex: much of math) involves playing with ideas that seem poorly understood, trying to figure out what's going on. It's not really goal-directed, especially not toward the kind of goal you can chain back to world improvement; it's more understanding-directed.

It reminds me of Sarah Constantin's post about the trade-off between output and external direction: https://srconstantin.wordpress.com/2019/07/20/the-costs-of-reliability/

For AI safety your view may still be right: one major way I could see the field going wrong is getting really into interesting problems that aren't useful. But on the other hand it's also possible that the best path involves highly productive, interest-following, understanding-building research where most individual projects don't seem promising from an end-to-end view. And maybe even where most aren't useful from an end-to-end view!

Again, I'm not sure here at all, but I don't think it's obvious you're right.

How should longtermists think about eating meat?

"With this framework, we can propose a clearer answer to the moral offsetting problem: you can offset axiology, but not morality." https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

How should longtermists think about eating meat?

I wonder how much we can trust people's stated reasons for having been veg. For example, say people sometimes go veg both for health reasons and because they care about animals. I could imagine that if you asked them while they were still veg they would say "mostly because I care about animals", but if you asked them afterward you'd get more "I was doing it for health reasons", because talking about how you used to do it for the animals makes you sound selfish.

How should longtermists think about eating meat?

https://faunalytics.org/a-summary-of-faunalytics-study-of-current-and-former-vegetarians-and-vegans/ has "84% of vegetarians/vegans abandon their diet", which matches my experience and which I think is an indication that being veg is pretty far from costless.

How should longtermists think about eating meat?

> a lot of the long-term vegans that I know

It sounds like you may have a sampling bias, where you're missing out on all the people who disliked being vegan enough to stop?

Why I'm Not Vegan

> However, even if I were to get more than $10 of enjoyment out of punching that person, I don't think it's right that I'm morally permitted to do so.

I don't think you would be morally permitted to either, because I think https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/ is right and you can offset axiology, but not morality.
