Lukas_Finnveden

Comments

Further thoughts on charter cities and effective altruism

This page has some statistics on Open Phil's giving (though it's noted to be preliminary): https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy

[Future Perfect] How to be a good ancestor

Sweden has a “Ministry of the Future,”

Unfortunately, this is now a thing of the past: it only lasted 2014–2016. (Wikipedia on the ministerial post: https://en.wikipedia.org/wiki/Minister_for_Strategic_Development_and_Nordic_Cooperation)

What are some key numbers that (almost) every EA should know?

The last two should be 10^11–10^12 and 10^11, respectively?

A ranked list of all EA-relevant (audio)books I've read

This has been discussed on LessWrong here: https://www.lesswrong.com/posts/xBAeSSwLFBs2NCTND/do-you-vote-based-on-what-you-think-total-karma-should-be

Strong opinions on both sides, with a majority of people saying they think about current karma levels occasionally but not always.

Were the Great Tragedies of History “Mere Ripples”?

It seems fine to switch between critiquing the movement and critiquing the philosophy, but I think it'd be better if the switch was made clear.

Agreed.

There are many longtermists who don't hold these views (e.g., Will MacAskill is literally about to publish the book on longtermism and doesn't think we're at an especially influential time in history, and patient philanthropy gets taken seriously by lots of longtermists).

Yeah, this seems right, maybe with the caveat that Will has (as far as I know) mostly expressed skepticism about this being the most influential century, and I'd guess he does think this century is unusually influential, or at least unusually likely to be unusually influential.

And yes, I also agree that the quoted views are very extreme, and that longtermists at most hold weaker versions of them.

Were the Great Tragedies of History “Mere Ripples”?

Granted, there are probably longtermists who do hold these views, but these views are not longtermism. I don't know whether Bostrom (whose views seem to be the focus of the book) holds these views. Even if he does, these views are not longtermism.

I haven't read the top-level post (thanks for summarising!), but in general, I think this is a weak counterargument. If most people in a movement (or academic field, political party, etc.) hold a rare belief X, it's perfectly fair to criticise the movement for believing X. If the movement claims that X isn't a necessary part of its ideology, it's polite for a critic to note that X isn't part of the stated ideology, but it's important that their critique of the movement is still taken seriously. Otherwise, any movement could choose a definition that avoids mentioning the most objectionable parts of its ideology without changing its beliefs or actions (similar to the motte-and-bailey fallacy). In this case, the author seems to be directly worried about longtermists' beliefs and actions; he isn't just disputing the philosophy.

Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

As a toy example, say that $f$ is some bounded sigmoid function, and my utility function is to maximize $f(x)$, where $x$ is the amount of good in the world; it's always going to be the case that $f(x+1) > f(x)$, so I am in some sense scope-sensitive, but I don't think I'm open to Pascal's mugging

This seems right to me.
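For concreteness, here's a minimal Python sketch of that argument (the 10^9 scale and the specific payoff/probability numbers are arbitrary illustrative choices, not anything from the post): a bounded sigmoid is strictly increasing, so more good always scores higher, but since utility is capped at 1, a tiny-probability offer of astronomical payoffs can never beat a modest sure thing.

```python
import math

def u(x):
    # Bounded sigmoid utility over the amount of good x.
    # The 1e9 scale is an arbitrary illustrative choice.
    return 1 / (1 + math.exp(-x / 1e9))

# Scope sensitivity: strictly more good gets strictly higher utility.
print(u(2e9) > u(1e9))  # True

# Pascal's mugging: a 1-in-10^12 chance of 10^100 units of good.
# Because u is bounded above by 1, the offer's expected utility
# can never exceed 10^-12, so it loses to a modest sure thing.
mugging_ev = 1e-12 * u(1e100)
sure_thing_ev = u(1e9)  # ~0.73
print(mugging_ev < sure_thing_ev)  # True: the bounded agent declines
```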

I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.

Yeah, I have no quibbles with this. FWIW, I personally didn't interpret the passage as saying this, so if that's what's meant, I'd recommend reformulating.

(To gesture at where I'm coming from: "in expectation bring about more paperclips" seems much more specific than "in expectation increase some function defined over the number of paperclips"; and I assumed that this statement was similar, except pointing towards the physical structure of "intuitively valuable aspects of individual lives" rather than the physical structure of "paperclips". In particular, "intuitively valuable aspects of individual lives" seems like a local phenomenon rather than something defined over world-histories, and you kind of need to define your utility function over world-histories to represent risk-aversion.)
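To illustrate that last point, a toy sketch (assuming, purely for illustration, a concave square-root utility over the total amount of good): risk aversion requires evaluating whole outcomes, which a fixed per-life sum can't represent.

```python
import math

# Two options with the same expected number of happy lives (100):
# a guaranteed 100, versus a 50/50 gamble between 0 and 200.
sure_option = math.sqrt(100)                        # 10.0
gamble = 0.5 * math.sqrt(0) + 0.5 * math.sqrt(200)  # ~7.07

# A concave utility over the *total* prefers the sure option (risk aversion).
# Summing a fixed per-life value would score both options equally,
# so risk aversion forces the utility to apply to whole outcomes.
print(sure_option > gamble)  # True
```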

Lessons from my time in Effective Altruism

I agree it's partly a lucky coincidence, but I also count it as some general evidence. I.e., insofar as careers are unpredictable, upskilling in a single area may be a bit less reliably good than expected, compared with placing yourself in a situation where you get exposed to lots of information and inspiration that's directly relevant to things you care about. (That last bit is unfortunately vague, but it gestures at something that there's more of in direct work.)

Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

Endorsing actions which, in expectation, bring about more intuitively valuable aspects of individual lives (e.g. happiness, preference-satisfaction, etc.), or bring about fewer intuitively disvaluable aspects of individual lives

If this is the technical meaning of "in expectation", it brings in a lot of baggage. I think it implicitly means that you value those things ~linearly in their amount (which makes the second statement superfluous?), and it opens you up to Pascal's mugging.
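(To spell out that worry with a worked example: under a linear reading, $N$ units of value at probability $p$ have expected utility $p \cdot N$, so for any $p > 0$, however tiny, a mugger can name an $N$ that dominates every ordinary option; e.g. $N = 10^{100}$ at $p = 10^{-12}$ gives expected utility $10^{88}$.)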

Lessons from my time in Effective Altruism

when I graduated, I was very keen to get started in an AI safety research group straightaway. But I now think that, for most people in that position, getting 1-2 years of research engineering experience elsewhere before starting direct work has similar expected value

If you'd done this, wouldn't you have missed out on this insight:

I’d assumed that the field would make much more sense once I was inside it, but that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field.

or do you think you could've learned that some other way?

Also, in your case, skilling up in engineering turned out to be less important than updating on personal fit and philosophising. I'm curious if you think you would've updated as hard on your personal fit in a non-safety workplace, and if your off-work philosophising would've been similarly good?

(Of course, you could answer: yes there were many benefits from working in the safety team; but the benefits from working in other orgs – e.g. getting non-EA connections – are similarly large in expectation.)
