Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would, in particular, kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)
The framing "PR concerns" makes it sound like all the people doing the actual work are (and will always be) longtermists, whereas the focus on GCR is just for the benefit of the broader public. This is not the case. For example, I work on technical AI safety, and I am not a longtermist. I expect there to be more people like me either already in the GCR community, or within the pool of potential contributors we want to attract. Hence, the reason to focus on GCR is building a broader coalition in a very tangible sense, not just some vague "PR".
Is your claim "Impartial altruists with ~no credence on longtermism would have more impact donating to AI/GCRs over animals / global health"?
To my mind, this is the crux, because:
[I use "donate" rather than "work on" because donations aren't sensitive to individual circumstances, e.g. personal fit. I'm also assuming impartiality because this seems core to EA to me, but of course one could donate / work on a topic for non-impartial/ non-EA reasons]
I can relate, as someone who also struggles with self-worth issues. However, my sense of self-worth is tied primarily to how many people seem to like me / care about me / want to befriend me, rather than to what "senior EAs" think about my work.
I think that the framing "what is the objectively correct way to determine my self-worth" is counterproductive. Every person has worth by virtue of being a person. (Even if I find it much easier to apply this maxim to others than to myself.)
IMO you should be thinking about things like, how to do better work, b...
I strongly disagree that utilitarianism isn't a sound moral philosophy, and I don't understand the black-and-white distinction between longtermism and us not all dying. I might be missing something, but there is surely at least some overlap between those two reasons for preventing AI risk.
I don't know if it's a "black and white distinction", but surely there's a difference between:
Strongly agreed.
Personally, I made the mitigation of existential risk from AI my life mission, but I'm not a longtermist and not sure I'm even an "effective altruist". I think that utilitarianism is at best a good tool for collective decision making under some circumstances, not a sound moral philosophy. When you expand it from living people to future people, it's not even that.
My values prioritize me and people around me far above random strangers. I do care about strangers (including animals) and even hypothetical future people more than zero, but I woul...
I'm confused. What are you trying to say here? You linked a proposal to prioritize violence against women and girls as an EA cause area (which I assume you don't object to?) and a tweet by some person unknown to me saying that critics of EA hold it to a standard they don't apply to feminism (which probably depends a lot on what kind of critics, and on their political background in particular). What do you expect the readers to learn from this or do about it?
Thanks so much for replying, I learned a lot from your response and its clarity helped me update my thinking.
You're very welcome, I'm glad it was useful!
I would expect these to be exceptions rather than norms (because if e.g. wanting to have a career were the norm, then over enough time that would tend to become culturally normative, and even while it was becoming a more normative view, the difference with an SWB measure should diminish).
I'm much more pessimistic. The processes that determine what is culturally normative are complicated, there are many exa...
Hi Joel,
Thank you for the informative reply!
I think there's a big difference between asking people to rate their present life satisfaction and asking people what would make them more satisfied with their life. The latter is a comparison: either between several options or between future and present, depending on the phrasing of the questions. In a comparison it makes sense that people report their relative preferences. On the other hand, the former is in some ill-posed reference frame. So I would be much more optimistic about a variant of WELLBY based on the latter than on the former.
I think the fact that SWB measures differ across cultures is actually a good sign that these measures capture what they are supposed to capture... In fact, I would be more concerned if different people with different views and circumstances did not, as you say, 'differ substantially.'
My claim is not "SWB is empirically different between cultures therefore SWB is bad". My claim is, I suspect that cultural factors cause people to choose different numbers for reasons orthogonal to what they actually want. For example, maybe Alice wants to be a career w...
I don't know much about supplements/bednets, but AFAIU there are some economies-of-scale issues which make it easier for e.g. AMF to supply bednets compared with individuals buying bednets for themselves.
As to how to predict "decision utility when well informed", one method I can think of is to look at people who have been selected for being well-informed while being similar to the target recipients in other respects.
But, I don't at all claim that I know how to do it right, or even that life satisfaction polls are useless. I'm just saying that I would feel better a...
Suppose I'm the intended recipient of a philanthropic intervention by an organization called MaxGood. They are considering two possible interventions: A and B. If MaxGood choose according to "decision utility" then the result is equivalent to letting me choose, assuming that I am well-informed about the consequences. In particular, if it was in my power to decide according to what measure they choose their intervention, I would definitely choose decision-utility. Indeed, making MaxGood choose according to decision-utility is guaranteed to be th...
I am skeptical of using answers to questions such as "how satisfied are you with your life?" as a measure of human preferences. I suspect that the meaning of the answer might differ substantially between people in different cultures and/or be normalized w.r.t. some complicated implicit baseline, such as what a person thinks they should "expect" or "deserve". I would be more optimistic of measurements based on revealed preferences, i.e. what people actually choose given several options when they are well-informed or what people think of their past choices i...
Hello, Vanessa
To complement Michael's reply, I think there's been some decent work related to two of your points, which happens to all be by the same group.
I would be more optimistic of measurements based on revealed preferences, i.e. what people actually choose given several options when they are well-informed or what people think of their past choices in hindsight (or at least what they say they would choose in hypothetical situations, but this is less reliable).
In Benjamin et al. (2012; 2014a) they find that what people choose is well predicted b...
My spouse and I are both heavily involved with EA, but we nevertheless have significant differences in our philosophies. My spouse's world view is pretty much a central example of EA: impartiality, utilitarianism et cetera. On the other hand, I assign far greater weight to helping people who are close to me compared to helping random strangers[1]. Importantly, we know that we have value differences, we accept it, and we are consciously working towards solutions that are aimed to benefit both of our value systems, with some fair balance between the two. Thi...
By Scott Garrabrant et al:
By myself:
These are the sort of thing I'm looking for! In that, on first glance, they're a lot of solid "maybe"s where mostly I've been finding "no"s. So that's encouraging -- thank you so much for the suggestions!
Thank you for this comment!
Knowledge about AI alignment is beneficial but not strictly necessary. Casting a wider net is something I planned to do in the future, but not right now. Among other reasons, because I don't understand the academic job ecosystem and don't want to spend a huge effort studying it in the near term.
However, if it's as easy as posting the job on mathjobs.org, maybe I should do it. How popular is that website among applicants, as far as you know? Is there something similar for computer scientists? Is there any way to post a job without specifying a geographic location s.t. applicants from different places would be likely to find it?
This is separate to the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations.
I am a moral anti-realist. I don't believe in ethics the way utilitarians (for example) use the word. I believe there are certain things I want, and certain things other people want, and we can coordinate on that. And coordinating on that requires establishing social norms, including what we colloquially refer to as "ethics". Hypothetically, if I have time preference...
Because, ceteris paribus, I care about things that happen sooner more than about things that happen later. And, like I said, not having pure time preference seems incoherent.
As a meta-sidenote, I find that arguments about ethics are rarely constructive, since there is too little in the way of agreed-upon objective criteria and too much in the way of social incentives to voice / not voice certain positions. In particular when someone asks why I have a particular preference, I have no idea what kind of justification they expect (from some ethical principle they presuppose? evolutionary psychology? social contract / game theory?)
I dunno if I count as "EA", but I think that a social planner should have nonzero pure time preference, yes.
The question is, what is your prior about extinction risk? If your prior is sufficiently uninformative, you get divergence. If you dogmatically believe in extinction risk, you can get convergence but then it's pretty close to having intrinsic time discount. To the extent it is not the same, the difference comes through privileging hypotheses that are harmonious with your dogma about extinction risk, which seems questionable.
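To spell out the divergence point, here is a minimal toy calculation (my own illustrative notation, not from the thread; it assumes a constant flow utility u > 0 and an exponential survival model):

```latex
% Known exogenous extinction hazard rate \lambda > 0:
% survival to time t has probability e^{-\lambda t}, so with zero pure time preference
\[
  \int_0^\infty e^{-\lambda t}\, u \,\mathrm{d}t \;=\; \frac{u}{\lambda} \;<\; \infty,
\]
% i.e. the hazard rate behaves exactly like an exponential discount rate.
% But under an uninformative prior p(\lambda) with enough mass near \lambda = 0,
\[
  \int_0^\infty p(\lambda)\, \frac{u}{\lambda}\, \mathrm{d}\lambda \;=\; \infty,
\]
% so expected utility diverges unless the prior dogmatically bounds extinction risk away from zero.
```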
IMO everyone has pure time preference (descriptively, as a revealed preference). To me it just seems commonsensical, but it is also very hard to mathematically make sense of rationality without pure time preference, because of issues with divergent/unbounded/discontinuous utility functions. My speculative first-approximation theory of pure time preference for humans is: choose a policy according to minimax regret over all exponential time discount constants, starting from around the scale of a natural human lifetime and going to infinity. For a better approximation, you need to also account for hyperbolic time discount.
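A rough formalization of that first approximation (my own notation, not from the comment: $\tau_0$ is a timescale around a human lifetime, $u(t)$ the flow of utility, and $U_\tau$ total utility under exponential discounting at timescale $\tau$):

```latex
\[
  U_\tau(\pi) \;=\; \mathbb{E}_\pi\!\left[ \int_0^\infty e^{-t/\tau}\, u(t)\, \mathrm{d}t \right],
  \qquad
  \pi^{*} \;\in\; \arg\min_{\pi}\; \sup_{\tau \ge \tau_0}
  \Big( \max_{\pi'} U_\tau(\pi') - U_\tau(\pi) \Big).
\]
```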
Can't you get the integral to converge with discounting for exogenous extinction risk and diminishing marginal utility? You can have pure time preference = 0 but still have a positive discount rate.
Quantified uncertainty might be fairly important for alignment, since there is a class of approaches that rely on confidence thresholds to avoid catastrophic errors (1, 2, 3). What might also be important is the ability to explicitly control your prior in order to encode assumptions such as those needed for value learning (but maybe there are ways to do it with other methods).
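As a loose illustration of what a confidence-threshold policy looks like (a toy sketch, not taken from the linked proposals; the threshold value and the `ask_human` fallback are hypothetical placeholders):

```python
from dataclasses import dataclass

@dataclass
class ActionProposal:
    action: str
    confidence: float  # agent's probability that the action is safe/correct


def choose(proposals: list[ActionProposal], threshold: float = 0.99) -> str:
    """Toy confidence-threshold policy: act autonomously only when sufficiently
    confident; otherwise defer to a human overseer (placeholder below)."""
    best = max(proposals, key=lambda p: p.confidence)
    if best.confidence >= threshold:
        return best.action
    return ask_human(proposals)  # fall back to oversight on low confidence


def ask_human(proposals: list[ActionProposal]) -> str:
    # Placeholder for querying a human overseer / taking a safe default action.
    return "no-op"


# Example: the agent defers because no option clears the threshold.
print(choose([ActionProposal("deploy", 0.7), ActionProposal("wait", 0.9)]))
```

The point of the sketch is only that catastrophic errors are avoided by deferring whenever quantified uncertainty is too high, which is why having calibrated uncertainty estimates matters for this class of approaches.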
Kudos for this post. One quibble I have is that in the beginning you write:
Potential help includes:
- Money
- Good mental health support
- Friends or helpers, for when things are tough
- Insurance (broader than health insurance)
But later you focus almost exclusively on money. [Rest of the comment was edited out.]
Points where I agree with the paper:
Points where I disagree with the paper:
I cried a lot, especially in the ending. Also really liked the concept of the witch doing all this for the sake of other/future people. And, wow, this part:
“There is beauty in the world and there is a horror,” she said, “and I would not miss a second of the beauty and I will not close my eyes to the horror.”
Bravo!
Thanks!
There are lots of stories where a magic user meets a representation of death. In some of the ones I'm aware of, death is presented very much as a thing to be welcomed. In others, resisting death is presented as being selfish (or, at least, deeply partial). One of the reasons that I wrote the story is because I wanted to see a version of the meeting-Death trope that presented a different way of thinking about death (a way of thinking that will be familiar to most readers of this forum but that I hadn't previously seen in the context of this trope).
I am deeply touched and honored by this endorsement. I wish to thank the LTFF and all the donors who support the LTFF from the bottom of my heart, and promise you that I will do my utmost to justify your trust.
Personally I prefer websites since they seem to be more efficient in terms of time and travel distance. Especially in the COVID era, online is better. Although I guess it's possible to do an online speed-dating event.
I think it's a great idea. For me it's impossible to have an intimate long-term relationship with someone without shared worldview and values, and I'm sure it's the same for many people. Both of my partners are EAs. One of them lives on a different continent, and there's a reason I had to go so far afield to find someone compatible. Having a dedicated website would make it that much easier.
The concerns about "cultishness" are IMO overblown, and ironically some of those concerns feel *more* "culty" than the thing...
Is there going to be a post-mortem including an explanation for the decision to sell?
I’m not privy to the details of the assessment that OP did, but I was briefly consulted (as a courtesy) before this decision was made, and I understand that there was a proper cost-benefit analysis driving their decisions here.
Compared to when the original decision was made, a few things look different to me:
- I originally underestimated the staffing and maintenance costs for the project
- (I’m still not sure whether there might have been an accessible “shoestring” version)
- After what happened with FTX, money is more constrained, which makes the project less desirable
+1, I'd find this very useful too!
For context: After working full-time in EA meta for >3 years, I've been thinking about renting or buying property in/near Berlin or in a cheaper place in Europe to facilitate EA/longtermist events, co-working, and maybe also co-living. I know many others are thinking about this too, some of whom are already making plans, and such retrospectives would be really helpful to inform our decisions. If you prefer not to share it publicly, you can also email me.
From the limited info I have, Wytham Abbey seemed a goo...