All of Vanessa's Comments + Replies

Is there going to be a post-mortem including an explanation for the decision to sell?

I’m not privy to the details of the assessment that OP did, but I was briefly consulted (as a courtesy) before this decision was made, and I understand that there was a proper cost-benefit analysis driving their decisions here.

Compared to when the original decision was made, a few things look different to me:

  • I originally underestimated the staffing and maintenance costs for the project
    • (I’m still not sure whether there might have been an accessible “shoestring” version)
  • After what happened with FTX, money is more constrained, which means it’s less desirable
... (read more)

+1, I'd find this very useful too! 

For context: After working full-time in EA meta for >3 years, I've been thinking about renting or buying property in/near Berlin or in a cheaper place in Europe to facilitate EA/longtermist events, co-working, and maybe also co-living. I know many others are thinking about this too, some of whom are already making plans, and such retrospectives would be really helpful to inform our decisions. If you prefer not to share it publicly, you can also email me.

From the limited info I have, Wytham Abbey seemed a goo... (read more)

Yes. Moreover, GCR mitigation can appeal even to partial altruists: something that would kill most of everyone would, in particular, kill most of whatever group you're partial towards. (With the caveat that "no credence on longtermism" is underspecified, since we haven't said what we assume instead of longtermism; but the case for e.g. AI risk is robust enough to be strong under a variety of guiding principles.)

9
IanDavidMoss
5mo
FWIW, in the (rough) BOTECs we use for opportunity prioritization at Effective Institutions Project, this has been our conclusion as well. GCR prevention is tough to beat for cost-effectiveness even when only considering impacts on a 10-year time horizon, provided you are comfortable making judgments based on expected value with wide uncertainty bands. I think people have a cached intuition that "global health is most cost-effective on near-term timescales" but what's really happened is that "a well-respected charity evaluator that researches donation opportunities with highly developed evidence bases has selected global health as the most cost-effective cause with a highly-developed evidence base." Remove the requirement for certainty about the floor of impact that your donation will have, and all of a sudden a lot of stuff looks competitive with bednets on expected-value terms. (I should caveat that we haven't yet tried to incorporate animal welfare into our calculations and therefore have no comparison there.)
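(A toy illustration of that expected-value point, with purely made-up numbers rather than EIP's actual BOTECs: an intervention with a wide uncertainty band can have a lower median impact than a well-evidenced one and still dominate in expectation.)

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Well-evidenced intervention: narrow uncertainty band around ~20 units of
# impact per $1M (hypothetical numbers).
evidence_based = rng.lognormal(mean=np.log(20), sigma=0.2, size=N)

# GCR prevention: wide uncertainty band, median ~10 but a heavy right tail.
gcr_prevention = rng.lognormal(mean=np.log(10), sigma=3.0, size=N)

print(f"evidence-based: median={np.median(evidence_based):.0f}, "
      f"mean={evidence_based.mean():.0f}")
print(f"GCR prevention: median={np.median(gcr_prevention):.0f}, "
      f"mean={gcr_prevention.mean():.0f}")
# The GCR option's median is lower, but its analytic mean is
# 10 * exp(3**2 / 2) ~ 900, so it wins on expected value despite
# (indeed, because of) the wide uncertainty band.
```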

The framing "PR concerns" makes it sound like all the people doing the actual work are (and will always be) longtermists, whereas the focus on GCR is just for the benefit of the broader public. This is not the case. For example, I work on technical AI safety, and I am not a longtermist. I expect there to be more people like me either already in the GCR community, or within the pool of potential contributors we want to attract. Hence, the reason to focus on GCR is building a broader coalition in a very tangible sense, not just some vague "PR".

Is your claim "Impartial altruists with ~no credence on longtermism would have more impact donating to AI/GCRs over animals / global health"?

To my mind, this is the crux, because:

  1. If Yes, then I agree that it totally makes sense for non-longtermist EAs to donate to AI/GCRs
  2. If No, then I'm confused why one wouldn't donate to animals / global health instead?

[I use "donate" rather than "work on" because donations aren't sensitive to individual circumstances, e.g. personal fit. I'm also assuming impartiality because this seems core to EA to me, but of course one could donate / work on a topic for non-impartial/ non-EA reasons]

I can relate, as someone who also struggles with self-worth issues. However, my sense of self-worth is tied primarily to how many people seem to like me / care about me / want to befriend me, rather than to what "senior EAs" think about my work.

I think that the framing "what is the objectively correct way to determine my self-worth" is counterproductive. Every person has worth by virtue of being a person. (Even if I find it much easier to apply this maxim to others than to myself.) 

IMO you should be thinking about things like, how to do better work, b... (read more)

I strongly disagree that utilitarianism isn't a sound moral philosophy, and don't understand the black and white distinction between longtermism and us not all dying. I might be missing something, but there is surely at least some overlap between those two reasons for preventing AI risk.

I don't know if it's a "black and white distinction", but surely there's a difference between:

  • Existential risk is bad because the future could have a zillion people, so their combined moral weight dominates all other considerations.
  • Existential risk is bad because (i) I personally
... (read more)

Strongly agreed.

Personally, I made the mitigation of existential risk from AI my life mission, but I'm not a longtermist and not sure I'm even an "effective altruist". I think that utilitarianism is at best a good tool for collective decision making under some circumstances, not a sound moral philosophy. When you expand it from living people to future people, it's not even that.

My values prioritize me and people around me far above random strangers. I do care about strangers (including animals) and even hypothetical future people more than zero, but I woul... (read more)

7
NickLaing
11mo
I strongly disagree that utilitarianism isn't a sound moral philosophy, and don't understand the black and white distinction between longtermism and us not all dying. I might be missing something, but there is surely at least some overlap between those two reasons for preventing AI risk. But although I disagree I think you made your points pretty well :). Out of interest, if you aren't an effective altruist nor a longtermist, then what do you call yourself?

Nice work! Many good hopes in there, but it's hard to compete with "make furries real".

I'm confused. What are you trying to say here? You linked a proposal to prioritize violence against women and girls as an EA cause area (which I assume you don't object to?) and a tweet by some person unknown to me saying that critics of EA hold it to a standard they don't apply to feminism (which probably depends a lot on what kind of critics, and on their political background in particular). What do you expect the readers to learn from this or do about it?

7
Fearful
2y
The link to the post on VAWG was my mistake - I intended to link to the comments specifically, which got noticeably heated after someone followed up what I thought was an incredibly well-researched and persuasive post with "but what about men's rights." What I thought were pretty charitable responses explaining how that's not actually relevant to the discussion got downvoted beyond belief. In my (limited, yet colorful) experience, EA seems to have a recurring problem allowing gender issues to be prioritized.

Thanks so much for replying, I learned a lot from your response and its clarity helped me update my thinking.

You're very welcome, I'm glad it was useful!

I would expect these to be exceptions rather than norms (because if e.g. wanting to have a career was the norm, over enough time, that would tend to become culturally normative, and even in the process of it becoming a more normative view, the difference with a SWB measure should diminish).

I'm much more pessimistic. The processes that determine what is culturally normative are complicated, there are many exa... (read more)

Hi Joel,

Thank you for the informative reply!

I think there's a big difference between asking people to rate their present life satisfaction and asking people what would make them more satisfied with their life. The latter is a comparison: either between several options or between future and present, depending on the phrasing of the questions. In a comparison, it makes sense that people report their relative preferences. On the other hand, the former is posed in some ill-defined reference frame. So I would be much more optimistic about a variant of WELLBY based on the latter than on the former.

I think the fact that SWB measures differ across cultures is actually a good sign that these measures capture what they are supposed to capture... In fact, I would be more concerned if different people with different views and circumstances did not, as you say, 'differ substantially.'

My claim is not "SWB is empirically different between cultures, therefore SWB is bad". My claim is that I suspect cultural factors cause people to choose different numbers for reasons orthogonal to what they actually want. For example, maybe Alice wants to be a career w... (read more)

1
helmetedhornbill
2y
Thanks so much for replying, I learned a lot from your response and its clarity helped me update my thinking.

Thanks, the specificity here helped me understand your view better. I suppose with the examples you give, I would expect these to be exceptions rather than norms (because if e.g. wanting to have a career was the norm, over enough time, that would tend to become culturally normative, and even in the process of it becoming a more normative view, the difference with a SWB measure should diminish). And more broadly, interventions that have large samples and aim for generalizability should be reasonably representative, which also diminishes this as a concern.

I suppose I'm also thinking about the potential difference in specific SWB scales. Something like the SWLS scale or the single-item measures would not be very domain-specific, but scales based around e.g. the Wheel of Life tradition tell you a lot more about the different facets of your life (e.g. you can see a high overall score but a low score for job satisfaction), so it seems to me that with the right scales and enough items you can address cultural or other variance even further.

Thanks again for responding with such precision. What I was unable to articulate well is that your individual preferences are not stable (or I suppose: per person, rather than across people), i.e. Alice when she has $5 will exchange a different amount of free time for an extra $1 than when Alice has $10.

I agree with everything else you've said, and especially with: I think this is a hugely underappreciated point. I think some of the SWB measures target this issue somewhat but in a limited fashion. I'd love to see more qualitative interviews and participatory / co-production interventions. I am always surprised by how many interventions say they cannot ascertain a causal mechanism quantitatively and so do not attempt to... well, ask people what worked and didn't.

I don't know much about supplements/bednets, but AFAIU there are some economies of scale which make it easier for e.g. AMF to supply bednets than for individuals to buy bednets for themselves.

As to how to predict "decision utility when well informed", one method I can think of is to look at people who have been selected for being well-informed while being similar to the target recipients in other respects.

But, I don't at all claim that I know how to do it right, or even that life satisfaction polls are useless. I'm just saying that I would feel better a... (read more)

Suppose I'm the intended recipient of a philanthropic intervention by an organization called MaxGood. They are considering two possible interventions: A and B. If MaxGood chooses according to "decision utility" then the result is equivalent to letting me choose, assuming that I am well-informed about the consequences. In particular, if it were in my power to decide according to what measure they choose their intervention, I would definitely choose decision-utility. Indeed, making MaxGood choose according to decision-utility is guaranteed to be th... (read more)

2
Lorenzo Buonanno
2y
Would this also apply to e.g. funding any GiveWell top charity besides GiveDirectly, or would that fall into "in practice, this is the best way to maximize the recipient's decision-utility"? I don't think most recipients would buy vitamin supplementation or bednets themselves, given cash. I guess you could say that it's because they're not "well informed", but then how could you predict their "decision utility when well informed" besides assuming it would correlate strongly with maximizing their experience utility? A bit off-topic, but I found GiveWell's staff documents on moral weights fascinating for deciding how much to weigh beneficiaries' preferences, from a very different angle.

I am skeptical of using answers to questions such as "how satisfied are you with your life?" as a measure of human preferences. I suspect that the meaning of the answer might differ substantially between people in different cultures and/or be normalized w.r.t. some complicated implicit baseline, such as what a person thinks they should "expect" or "deserve". I would be more optimistic about measurements based on revealed preferences, i.e. what people actually choose given several options when they are well-informed or what people think of their past choices i... (read more)

3
helmetedhornbill
2y
Hi Vanessa, I really liked how specific and critical your comment was, which I think is ultimately how research can improve, so I've upvoted it :) I'm not linked to this report but have an interest in subjective measures broadly, so I thought I would add a different perspective for the sake of discussion, in response to the two issues you raise.

1. I am skeptical of using answers to questions such as "how satisfied are you with your life?" as a measure of human preferences. I suspect that the meaning of the answer might differ substantially between people in different cultures and/or be normalized w.r.t. some complicated implicit baseline, such as what a person thinks they should "expect" or "deserve".

I think the fact that SWB measures differ across cultures is actually a good sign that these measures capture what they are supposed to capture. Cultures differ in e.g. values (collectivistic vs individualistic), social and gender norms, economic systems, ethics and morals. Surely some of these facets should influence how people see what a good life is, what happiness is, what wellbeing is. In fact, I would be more concerned if different people with different views and circumstances did not, as you say, 'differ substantially.'

I think these differences, attributable to culture or individual variance, are not likely to be of concern for what I would imagine would be the more common ways WELLBYs could be used. Most cost-effectiveness analyses rely on RCTs or comparable designs with pre and post measures. You could easily look at changes within the same group of people pre and post and compare their differences. Or, even beyond such designs, controlling for different sources of variance that we think are important (like age and gender, most commonly) is not that tricky. This doesn't seem a big methodological concern to me, but I would be keen to hear more about how things look from your view.

2. I would be more optimistic about measurements based on revealed preferences, i.

Hello, Vanessa

To complement Michael's reply, I think there's been some decent work related to two of your points, which happens to all be by the same group.  

I would be more optimistic about measurements based on revealed preferences, i.e. what people actually choose given several options when they are well-informed or what people think of their past choices in hindsight (or at least what they say they would choose in hypothetical situations, but this is less reliable).

In Benjamin et al. (2012; 2014a) they find that what people choose is well predicted b... (read more)

8
MichaelPlant
2y
I'm not sure I understand your point. Kahneman famously distinguishes between decision utility - what people do or would choose - and experience utility - how they felt as a result of their choice. SWB measures allow us to get at the second. How would you empirically test which is the better measure of preferences?
Answer by Vanessa · Jun 24, 2022 · 6

My spouse and I are both heavily involved with EA, but we nevertheless have significant differences in our philosophies. My spouse's world view is pretty much a central example of EA: impartiality, utilitarianism et cetera. On the other hand, I assign far greater weight to helping people who are close to me compared to helping random strangers[1]. Importantly, we know that we have value differences, we accept it, and we are consciously working towards solutions that are aimed to benefit both of our value systems, with some fair balance between the two. Thi... (read more)

Answer by Vanessa · May 07, 2022 · 45

By Scott Garrabrant et al:

By John Wentworth:

By myself:

These are the sort of thing I'm looking for! In that, on first glance, they're a lot of solid "maybe"s, where mostly I've been finding "no"s. So that's encouraging -- thank you so much for the suggestions!

Thank you for this comment!

Knowledge about AI alignment is beneficial but not strictly necessary. Casting a wider net is something I planned to do in the future, but not right now. Among other reasons, because I don't understand the academic job ecosystem and don't want to spend a huge effort studying it in the near term.

However, if it's as easy as posting the job on mathjobs.org, maybe I should do it. How popular is that website among applicants, as far as you know? Is there something similar for computer scientists? Is there any way to post a job without specifying a geographic location s.t. applicants from different places would be likely to find it?

1
Frank_R
2y
I have noticed that there are two similar websites for mathematical jobs. www.mathjobs.org is operated by the American Mathematical Society and is mostly for positions at universities, although they list jobs at other research institutions too. www.math-jobs.com redirects you to www.acad.jobs, which has a broader focus: they also advertise government and industry jobs, as well as job offers in computer science and other academic disciplines. You have to register on both websites as an employer for several hundred dollars before you can post a job offer. I do not know if this is too much. Both sites are probably among the top ten for math-related research positions, although this is only based on my gut feeling. Unfortunately, I cannot tell you if it is possible to post remote-only jobs. I hope this information helps.

This is separate to the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations.

I am a moral anti-realist. I don't believe in ethics the way utilitarians (for example) use the word. I believe there are certain things I want, and certain things other people want, and we can coordinate on that. And coordinating on that requires establishing social norms, including what we colloquially refer to as "ethics". Hypothetically, if I have time preference... (read more)

3
JackM
2y
Most people do indeed have pure time preference in the sense that they are impatient and want things earlier rather than later. However, this says nothing about their attitude to future generations. Being impatient means you place more importance on your present self than your future self, but it doesn't mean you care more about the wellbeing of some random dude alive now than another random dude alive in 100 years. That simply isn't what "impatience" means. For example - I am impatient. I personally want things sooner rather than later in my life. I don't however think that the wellbeing of a random person now is more important than the wellbeing of a random person alive in 100 years. That's an entirely separate consideration to my personal impatience.

Because, ceteris paribus, I care about things that happen sooner more than about things that happen later. And, like I said, not having pure time preference seems incoherent.

As a meta-sidenote, I find that arguments about ethics are rarely constructive, since there is too little in the way of agreed-upon objective criteria and too much in the way of social incentives to voice / not voice certain positions. In particular when someone asks why I have a particular preference, I have no idea what kind of justification they expect (from some ethical principle they presuppose? evolutionary psychology? social contract / game theory?)

7
JackM
2y
This is separate to the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations. Surely the fact that I'd rather have some cake today than tomorrow cannot be relevant when I'm considering whether or not I should abate carbon emissions so my great-grandchildren can live in a nice world - these simply seem separate considerations with no obvious link to each other. If we're talking about policies whose effects don't (predictably) span generations I can perhaps see the relevance of my personal impatience, but otherwise I don't. Also, having non-zero pure time preference has counterintuitive implications. From here: So if hypothetically we were alive around King Tut's time and we were given the mandatory choice to either torture him or, with certainty, cause the torture of all 7 billion humans today, we would easily choose the latter with a 1% rate of pure time preference (which seems obviously wrong to me). If you do want a non-zero rate of pure time preference, you will probably need it to decline quickly over time to make much ethical sense (see here and my explanation here).

I dunno if I count as "EA", but I think that a social planner should have nonzero pure time preference, yes.

2
Michael_Wiebe
2y
Why?

The question is, what is your prior about extinction risk? If your prior is sufficiently uninformative, you get divergence. If you dogmatically believe in extinction risk, you can get convergence, but then it's pretty close to having intrinsic time discount. To the extent it is not the same, the difference comes through privileging hypotheses that are harmonious with your dogma about extinction risk, which seems questionable.
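To see the dichotomy concretely, here is a sketch (my own illustrative choice of prior, assuming flow utility bounded between $u_0 > 0$ and $\bar{u}$):

```latex
% Dogmatic belief in a known hazard rate \lambda: the integral converges, and
% the survival factor acts exactly like an intrinsic exponential time discount:
\[
  \int_0^\infty e^{-\lambda t}\, u(t)\, dt \;\le\; \frac{\bar{u}}{\lambda} \;<\; \infty .
\]
% An uncertain hazard rate, e.g. \lambda \sim \mathrm{Gamma}(\alpha, \beta)
% (an illustrative "uninformative-ish" prior), yields only polynomial decay
% of the expected survival factor, and hence divergence for small \alpha:
\[
  \mathbb{E}\!\left[e^{-\lambda t}\right] = (1 + t/\beta)^{-\alpha},
  \qquad
  \int_0^\infty (1 + t/\beta)^{-\alpha}\, u(t)\, dt = \infty
  \quad \text{whenever } \alpha \le 1 .
\]
```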

7
Michael_Wiebe
2y
Yes, if the extinction rate is high (and precise) enough, then it converges, but otherwise not. Regarding your first comment, I'm focusing on the normative question, not the descriptive one (i.e., what should a social planner do?). So I'm asking if there are EAs who think a social planner should have nonzero pure time preference.

IMO everyone has pure time preference (descriptively, as a revealed preference). To me it just seems commonsensical, but it is also very hard to make mathematical sense of rationality without pure time preference, because of issues with divergent/unbounded/discontinuous utility functions. My speculative first-approximation theory of pure time preference for humans is: choose a policy according to minimax regret over all exponential time discount constants, starting from around the scale of a natural human lifetime and going to infinity. For a better approximation, you need to also account for hyperbolic time discount.
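Written out, a rough formalization sketch (the notation is illustrative: $\beta$ is the discount timescale, $T_0$ is on the order of a human lifetime, and $V_\beta$ is expected exponentially discounted utility):

```latex
\[
  \pi^* \;\in\; \operatorname*{arg\,min}_{\pi}\;
  \sup_{\beta \in [T_0,\, \infty)}
  \Big( \max_{\pi'} V_\beta(\pi') \;-\; V_\beta(\pi) \Big),
  \qquad
  V_\beta(\pi) \;=\; \mathbb{E}_\pi\!\left[ \int_0^\infty e^{-t/\beta}\, u(t)\, dt \right].
\]
% \pi^* minimizes the worst-case regret against the best policy for each
% discount timescale \beta at or above T_0.
```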

1
Guy Raveh
2y
I mean, physics solves the divergence/unboundedness problem with the universe achieving heat death eventually. So one can assume some distribution on the time bound, at the very least. Whether that makes having no time discount reasonable in practice, I highly doubt.

Can't you get the integral to converge with discounting for exogenous extinction risk and diminishing marginal utility? You can have pure time preference = 0 but still have a positive discount rate.
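For concreteness, a sketch of that construction (illustrative assumptions, not from the original comment: a constant extinction hazard $\lambda$, isoelastic utility with elasticity $\eta > 0$, and consumption growing at rate $g$):

```latex
\[
  \int_0^\infty e^{-\lambda t}\, \frac{(c_0 e^{g t})^{1-\eta}}{1-\eta}\, dt
  \;\propto\; \int_0^\infty e^{\,((1-\eta) g - \lambda)\, t}\, dt ,
\]
% which converges whenever \lambda > (1 - \eta) g (automatic for \eta \ge 1),
% even with zero pure time preference: all the discounting comes from the
% survival factor e^{-\lambda t} plus diminishing marginal utility.
```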

We plan to run 3 EA Global conferences in 2021

I'm guessing this is a typo and you meant 2022?

2
Lizka
2y
Thanks for catching this! Yep, it's a typo — should be fixed now.

Quantified uncertainty might be fairly important for alignment, since there is a class of approaches that rely on confidence thresholds to avoid catastrophic errors (1, 2, 3). What might also be important is the ability to explicitly control your prior in order to encode assumptions such as those needed for value learning (but maybe there are ways to do it with other methods).
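As a toy illustration of the confidence-threshold idea (a minimal sketch with hypothetical names, not the actual interface of any of the linked proposals): the agent acts autonomously only when its credence that an action is safe clears a threshold, and defers to an overseer otherwise.

```python
from typing import Callable, List, TypeVar

A = TypeVar("A")  # action type

def choose_action(candidates: List[A],
                  p_safe: Callable[[A], float],
                  defer_to_overseer: Callable[[List[A]], A],
                  threshold: float = 0.999) -> A:
    """Pick the most-confidently-safe action whose estimated probability of
    being safe clears the threshold; if none qualifies, defer to the overseer
    rather than risk a catastrophic error."""
    confident = [a for a in candidates if p_safe(a) >= threshold]
    if confident:
        return max(confident, key=p_safe)
    return defer_to_overseer(candidates)
```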

What is this "Effective Crypto"? (Google gave me nothing)

Kudos for this post. One quibble I have: in the beginning you write

Potential help includes:

  • Money
  • Good mental health support
  • Friends or helpers, for when things are tough
  • Insurance (broader than health insurance)

But later you focus almost exclusively on money. [Rest of the comment was edited out.]

1
Patrick Gruban
2y
I think this is a good point. One possibility of addressing this could be on the level of local EA groups giving organizers the tools and education to identify struggling members and help them better. As a local organizer, I would find additional resources helpful, especially if they are very action-orientated.
6
Ozzie Gooen
2y
Good point about focusing on money; this post was originally written differently, then I tried making it more broad, but I think it wound up being more disjointed than I would have liked.

First, I'd also be very curious about interventions other than money.

Second though, I think that "money combined with services" might be the most straightforward strategy for most of the benefits except for friends. "Pretty strong services" to help set up people with mental and physical health support could exist, along with insurance setups. I think that setting up new services that are better than existing ones, but much more limited in scope, is possible, but expensive (at least in the opportunity cost of those who would set them up). Some helpers for when things are rough could in theory be hired.

Encouraging more friendships seems pretty great, but very different. I imagine that's more about encouraging good community structures/networks/events and stuff, but I'm not sure.

I also want to encourage you and others reading this to brainstorm on the topic. I don't have any private knowledge, and I imagine others here would have much better insight into much of the problem than I do. (I'm on the older side of EAs now, and am less connected to many of the new/younger/growing communities.)

Points where I agree with the paper:

  • Utilitarianism is not any sort of objective truth; in many cases it is not even a good idea in practice (but in other cases it is).
  • The long-term future, while important, should not completely dominate decision making.
  • Slowing down progress is a valid approach to mitigating X-risk, at least in theory.

Points where I disagree with the paper:

  • The paper argues that "for others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent". I think it is completely clear, gi
... (read more)

Yes, I did notice you're subverting the trope here, it was very well done :)

I cried a lot, especially in the ending. Also really liked the concept of the witch doing all this for the sake of other/future people. And, wow, this part:

“There is beauty in the world and there is a horror,” she said, “and I would not miss a second of the beauty and I will not close my eyes to the horror.”

Bravo!

14
atb
2y

Thanks!

There are lots of stories where a magic user meets a representation of death. In some of the ones I'm aware of, death is presented very much as a thing to be welcomed. In others, resisting death is presented as being selfish (or, at least, deeply partial). One of the reasons that I wrote the story is because I wanted to see a version of the meeting-Death trope that presented a different way of thinking about death (a way of thinking that will be familiar to most readers of this forum but that I hadn't previously seen in the context of this trope).

Kudos for the initiative! I think it makes sense to crosspost this to LessWrong.

3
frances_lorenz
2y
Good idea :) thank you!

Can you (or someone) write a TLDR of why "helping others" would turn off "progressives"?

1
nananana.nananana.heyhey.anon
2y
“Help” sounds paternalistic or presumptuous to progressives.
2
Aaron_Scher
2y
Here you go: https://forum.effectivealtruism.org/posts/GKSYJ9rLnBdtXGAog/aaron_scher-s-shortform?commentId=LLiK7vLmaTdYmev4E

I am deeply touched and honored by this endorsement. I wish to thank the LTFF and all the donors who support the LTFF from the bottom of my heart, and promise you that I will do my utmost to justify your trust.

Personally I prefer websites since they seem to be more efficient in terms of time and travel distance. Especially in the COVID era, online is better. Although I guess it's possible to do an online speed-dating event.

Answer by Vanessa · Jul 28, 2020 · 18

I think it's a great idea. For me it's impossible to have an intimate long-term relationship with someone without a shared worldview and values, and I'm sure it's the same for many people. Both of my partners are EAs. One of them lives on a different continent, and there's a reason I had to go so far afield to find someone compatible. Having a dedicated website would make it that much easier.

The concerns about "cultishness" are IMO overblown, and ironically some of those concerns feel *more* "culty" than the thing... (read more)

1
anonymoususer
4y
Glad to hear you think it's a good idea! How do you feel about events such as speed-dating / singles events versus websites?