Lukas_Finnveden

Comments

A ranked list of all EA-relevant (audio)books I've read

This has been discussed on LessWrong here: www.lesswrong.com/posts/xBAeSSwLFBs2NCTND/do-you-vote-based-on-what-you-think-total-karma-should-be

Strong opinions on both sides, with a majority of people saying they take the current karma level into account occasionally, but not always.

Were the Great Tragedies of History “Mere Ripples”?

It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear.

Agreed.

There are many longtermists that don't hold these views (eg. Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists).

Yeah this seems right, maybe with the caveat that Will has (as far as I know) mostly expressed skepticism about this being the most influential century, and I'd guess he does think this century is unusually influential, or at least unusually likely to be unusually influential.

And yes, I also agree that the quoted views are very extreme, and that longtermists at most hold weaker versions of them.

Were the Great Tragedies of History “Mere Ripples”?

Granted, there are probably longtermists that do hold these views, but these views are not longtermism. I don’t know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism.

I haven't read the top-level post (thanks for summarising!), but in general, I think this is a weak counterargument. If most people in a movement (or academic field, or political party, etc.) hold a rare belief X, it's perfectly fair to criticise the movement for believing X. If the movement claims that X isn't a necessary part of their ideology, it's polite for a critic to note that X isn't necessarily endorsed by the stated ideology, but it's important that their critique of the movement is still taken seriously. Otherwise, any movement can choose a definition that avoids mentioning the most objectionable parts of its ideology without changing its beliefs or actions. (This is similar to the motte-and-bailey fallacy.) In this case, the author seems to be directly worried about longtermists' beliefs and actions; he isn't just disputing the philosophy.

Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

As a toy example, say that f is some bounded sigmoid function, and my utility function is to maximize f(x), where x is the amount of value in the world; it's always going to be the case that f(x+1) > f(x), so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging

This seems right to me.
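To make the toy example concrete, here's a minimal numeric sketch (my own illustration; the particular function 1/(1 + e^(-x/10^6)) and all of the numbers are assumptions chosen just for the demo). The point is that a bounded, increasing utility still always prefers more value, while a tiny-probability astronomical payoff can barely move its expected value:

```python
import math

def bounded_utility(x):
    """An assumed bounded sigmoid utility over the amount of value x; saturates for huge x."""
    return 1 / (1 + math.exp(-x / 1e6))

# Scope sensitivity: more value is always (at least slightly) better.
assert bounded_utility(1_000_001) > bounded_utility(1_000_000)

# Pascal's mugging: a 1-in-10^20 chance of an astronomically large payoff...
p = 1e-20
mugging_ev = p * bounded_utility(1e100) + (1 - p) * bounded_utility(0)

# ...versus a modest but certain gain.
sure_thing_ev = bounded_utility(1_000)

print(mugging_ev < sure_thing_ev)  # True: the bound caps how much the mugger's promise can be worth
```

Because the utility is bounded, the mugger can add at most p × (max utility − current utility) in expectation, which is negligible, so a small sure gain wins.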

I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.

Yeah, I have no quibbles with this. FWIW, I personally didn't interpret the passage as saying this, so if that's what's meant, I'd recommend reformulating.

(To gesture at where I'm coming from: "in expectation bring about more paperclips" seems much more specific than "in expectation increase some function defined over the number of paperclips"; and I assumed that this statement was similar, except pointing towards the physical structure of "intuitively valuable aspects of individual lives" rather than the physical structure of "paperclips". In particular, "intuitively valuable aspects of individual lives" seems like a local phenomenon rather than something defined over world-histories, and you kind of need to define your utility function over world-histories to represent risk-aversion.)

Lessons from my time in Effective Altruism

I agree it's partly a lucky coincidence, but I also count it as some general evidence. I.e., insofar as careers are unpredictable, up-skilling in a single area may be a bit less reliably good than expected, compared with placing yourself in a situation where you get exposed to lots of information and inspiration that's directly relevant to things you care about. (That last bit is unfortunately vague, but seems to gesture at something that there's more of in direct work.)

Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

Endorsing actions which, in expectation, bring about more intuitively valuable aspects of individual lives (e.g. happiness, preference-satisfaction, etc), or bring about fewer intuitively disvaluable aspects of individual lives

If this is the technical meaning of "in expectation", this brings in a lot of baggage. I think it implicitly means that you value those things ~linearly in their amount (which makes the second statement superfluous?), and it opens you up to Pascal's mugging.
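To spell out why, here's a minimal sketch with made-up numbers (my own illustration, not from the post): if you value the outcome roughly linearly in its amount, a sufficiently large promised payoff dominates the expected-value comparison no matter how implausible the promise is.

```python
# Linear (risk-neutral) valuation: expected value scales directly with the amount at stake.
p = 1e-20                   # assumed probability that the mugger's promise is genuine
promised_amount = 1e100     # assumed astronomically large payoff
sure_amount = 1_000         # a modest, certain alternative

mugging_ev = p * promised_amount   # = 1e80
print(mugging_ev > sure_amount)    # True: the huge payoff swamps the tiny probability
```

A bounded (e.g. sigmoid) utility, as in the toy example quoted above, avoids this, at the cost of no longer valuing the thing linearly.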

Lessons from my time in Effective Altruism

when I graduated, I was very keen to get started in an AI safety research group straightaway. But I now think that, for most people in that position, getting 1-2 years of research engineering experience elsewhere before starting direct work has similar expected value

If you'd done this, wouldn't you have missed out on this insight:

I’d assumed that the field would make much more sense once I was inside it; that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field.

or do you think you could've learned that some other way?

Also, in your case, skilling up in engineering turned out to be less important than updating on personal fit and philosophising. I'm curious if you think you would've updated as hard on your personal fit in a non-safety workplace, and if you think your off-work philosophy would've been similarly good?

(Of course, you could answer: yes there were many benefits from working in the safety team; but the benefits from working in other orgs – e.g. getting non-EA connections – are similarly large in expectation.)

Lessons from my time in Effective Altruism

Great post!

EAs tend to lack experience with more formal or competitive interactions, such as political maneuvering in big organisations. This is particularly important for interacting with prestigious or senior people, who as a rule don’t have much time for naivety, and who we don’t want to form a bad impression of EA.

I can't immediately see why a lack of experience with political maneuvering would mean that we often waste prestigious people's time. Could you give an example? Is this just when an EA is talking to someone prestigious and asks a silly question? (e.g. "Why do you need a managing structure when you could just write up your goals and then ask each employee to maximize those goals?" or whatever)

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

When considering whether to cure a billion headaches or save someone's life, I'd guess that people's prioritarian intuitions would kick in and say that it's better to save the single life. However, when considering whether to cure a billion headaches or to improve one person's life from ok to awesome, I imagine that most people would prefer to cure the billion headaches. I think this latter situation is more analogous to the repugnant conclusion. Since people's intuitions differ between this case and the repugnant conclusion, I claim that "The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future" is incorrect. The fact that the repugnant conclusion is about merely possible people clearly matters for people's intuitions in some way.

I agree that the repugnance can't be grounded by saying that merely possible people don't matter at all. But there are other possible mechanisms that treat merely possible people differently from existing people, and those can ground the repugnance. For example, the one in the paper we're discussing here!

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future.

It doesn't? That's not my impression. In particular:

There are current generation perfect analogues of the repugnant conclusion. Imagine you could provide a medicine that provides a low quality life to billions of currently existing people or provide a different medicine to a much smaller number of people giving them brilliant lives.

But people don't find these cases intuitively identical, right? I imagine that in the current-generation case, most people who oppose the repugnant conclusion instead favor egalitarian solutions, granting small benefits to many (though I haven't seen any data on this, so I'd be curious if you disagree!). Whereas when debating who to bring into existence, people who oppose the repugnant conclusion aren't just indifferent about what happens to these merely-possible people; they actively think that the happy, tiny population is better. 

So the tricky thing is that people intuitively support granting small benefits to many already-existing people over large benefits to a few already-existing people, but don't want to extend this to creating many barely-good lives over creating a few really good ones.
