First I apologize for my late response!
I completely agree with you that being in a limbo state is the least effective place you can be! Exploring is valuable, but at some point you have to act on what you have learnt. Even if what you learnt was really not what you were hoping to learn...
My perspective is that I can still have a major impact via donations. The more I earn, the more I can donate. The more frugally I live, the more I can donate too. Unfortunately the EA Community is no longer as supportive of people who see their primary way to impact via donatio...
I have also barely reported, despite keeping the pledge for 10 years. Will finally get my reckoning with missing out on the pin though...
I appreciate that you are putting out numbers and explaining the current research landscape, but I am missing clear actions.
The closest you are coming to proposing them is here:
...We need a concerted effort that matches the gravity of the challenge. The best ML researchers in the world should be working on this! There should be billion-dollar, large-scale efforts with the scale and ambition of Operation Warp Speed or the moon landing or even OpenAI’s GPT-4 team itself working on this problem.[17] Right now, there’s too much fretting, too much idle talk, and wa
Thank you Quintin, this was very helpful for me as a non-ML person to understand the other side of Eliezer’s arguments. As your post is quite dense and it took me a while to work through it, I summarised it for myself. I occasionally had to check the context of the original interview (transcript here) to fully parse the arguments made. I thought the summary might also be helpful to share with others (and let me know if I got anything wrong!):
Eliezer thinks current ML approaches won’t scale to AGI, though due to money influx an approach might be found. Q
We end up seeming more deferential and hero-worshipping than we really are.
I feel like this post is missing something. I would expect one of the strongest predictors of the aforementioned behaviors to be age. Are there any people in their thirties you know who are prone to hero-worshipping?
I don’t consider hero-worshipping an EA problem as such, but a young people problem. Of course EA is full of young people!
...Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out th
This is a great point. I also think there's a further effect, which is that older EAs were around when the current "heroes" were much less impressive university students or similar, which I think leads to a much less idealising frame towards them.
But I can definitely see that if you yourself are young and you enter a movement with all these older, established, impressive people... hero-worshipping is much more tempting.
Thank you! You’re laying out the argument well that if a previous omnivore eats seafood for every meal where they previously ate meat, this will be harmful for animals.
What I’d like to see is some empirical backing for how much pescetarians actually swap out meat for seafood, given what you’re claiming in the title.
You discuss your own experience of eating 2 pounds of salmon weekly, but when I was pescetarian I had a fish meal once every month or two. If omnivores switch to a pescetarian diet like mine that still seems like a win for animal welfare.
Thank you Lizka. You are making a good point and I have edited the comment above to no longer refer to a specific demographic group.
I would not want anyone to get the impression that Owen's poor behaviour is merely a strong negative update on men. It is a strong negative update on the decency of everybody.
(Though I would expect women to show a lack of decency in slightly different ways than men.)
I still expect some decent people to exist. I just now think they are even rarer than I previously thought.
[hastily written]
Never ever would I have guessed this. You were living proof to me that at least some, if not many, decent people exist. I am completely devastated.
EA has been dying. But for me, this is the ultimate death blow.
[Edit: Comment was modified to no longer refer to a specific demographic group.]
How do you define "decent"?
I'm a straight guy, and I grew up in an era of pre-#metoo, sex-positive feminism. The doctrine of the day was "men and women are pretty much the same in every way and it's sexist to claim otherwise". "Slut shaming is bad, women can be just as horny as men, wanting women to be chaste and pure is patriarchical and bad, trying to give women special protection from harm is benevolent sexism and therefore bad, treating people the same regardless of their gender is good and desirable."
An anecdote from this era of feminism -- I once r...
I'm commenting as a moderator right now.
I'm really sorry that you're feeling this way. I think a lot of us have strong emotions about this news and don't know how to process it. Given that you wrote "[hastily written]," I assume that this comment is helping you process the news.
At the same time, I think it's important for us to not slip away from our norms on the Forum, which include making sure the space is welcoming to different groups of people, including men. There are a few different ways to interpret the part of your comment that's about men. Unfortu...
Just gonna flag that -100 agreement for someone being sad feels weird to me. Sure, I guess you can disagree that it's the death blow of EA, but I dunno, it just feels a bit much. Not telling anyone off, or trying to create some complex social rule, but maybe it should be a % or something.
Thank you, that was a beautiful response. I'm glad I asked!
I share the experience that sometimes my personal experiences and emotions affect how I view different causes. I do think it's good to feel the impacts occasionally, though overall it leads me to be more strict about relying on spreadsheets.
Hmm, I think I ultimately rely only on my emotions. I’ve always been a proponent of “Do The Math, Then Burn The Math and Go With Your Gut”. When it comes to the question of personal cause prioritization, the question is basically “what do I want to do with my life?” No spreadsheet will tell me an answer to that, it’s all emotions. I use spreadsheets to inform my emotions because if I didn’t, a part of me would be unhappy and would nag me to do it.
Thank you, that was very interesting Saulius. You talk a bit about comparisons with other cause areas, but I'm still not entirely sure which cause area you would personally prioritise the most right now?
Thanks for the question Denise. Probably x-risk reduction, although on some days I’d say farmed animal welfare. Farmed animal welfare charities seem to significantly help multiple animals per dollar spent. It is my understanding that global health (and other charities that help currently living humans) spend hundreds or even thousands of dollars to help one human in the same way. I think that human individuals are more important than other animals but not thousands of times.
Sometimes when I lift my eyes from spreadsheets and see the beauty and richness of ...
But overall, I find that younger kids are much more physically draining, and older kids require much more emotional labor.
This is my experience as well (oldest is 12).
I often say that while small children aren't easy, they are simple. While it seems it should be easier to fulfill the needs of older children if you know what they are, it's much harder to figure out what the right thing to do is in the first place. I have a lot more doubt whether I'm doing right by my oldest than when she was small.
I do agree with you that silence can hurt community epistemics.
In the past I also thought people worried about missing out on job and grant opportunities if they voiced criticisms on the EA Forum overestimated the risks. I am ashamed to say that I thought this was a mere result of their social anxiety and pretty irrational.
Then last year I applied to an explicitly identified longtermist (central) EA org. They rejected me straight away with the reason that I wasn't bought into longtermism (as written up here which is now featured in the EA Handbook as th...
Most of the time when an upper bound is mentioned in job ads (e.g. on LinkedIn), it’s less than 1.5 times the lower bound. So I’m implicitly assuming the upper bound, though not mentioned, will be in the same ballpark.
Perhaps this is wrong and I’m supposed to interpret no upper bound as ‘very negotiable, potentially the sky is the limit’. But that possibility didn’t occur to me until you mentioned it.
I do interpret no range at all as a plausible ‘sky is the limit’ though.
I am a woman who could be very much interested in the role. But the lack of an upper bound for compensation is putting me off a bit, it might help to include that.
On average I'd expect more men to be put off by this than women though!
Some people may be psychologically cut out for being a dedicate, but not have a high level of personal fit for any jobs where being a dedicate even makes sense as a thing to do. Not all dedicates go to an Ivy League school, but jobs like technical AI safety researcher, startup founder, program officer at a major foundation, or farmed-animal welfare corporate relations specialist all require very particular sets of abilities. If your abilities point you more in the direction of being (say) a teacher, then being a dedicate is probably not for you.
Do you n...
If you think the moral concerns about abortion are more about the prevention of future people than about the value of the lives of the embryos, you should probably try to optimise for women having more children in the near term. It is not clear to me why you think preventing abortions is the best way to do so.
Thank you, I agree with a lot of the underlying motive (once upon a time I wrote a research proposal about this, but never got into it). Where I disagree:
This is already mentioned in the comments, but my understanding was that improved contraceptive access is one of the best ways to lower abortions so moral concerns about abortions drive me towards supporting family planning charities.
Women will often not want to have children - so we should ensure they don't conceive in the first place instead of terminating their pregnancies.
What I would add: Something I...
My understanding was as well that improved contraceptive access in poor countries is one of the best things we can do to lower abortions.
Thank you so much for laying out this view. I completely agree, including every single subpoint (except the ones about the male perspective which I don't have much of an opinion on). CEA has a pretty high bar for banning people. I'm in favour of lowering this bar as well as communicating more clearly that the bar is really high and therefore someone being part of the community certainly isn't evidence they are safe.
Thank you in particular for point D. I've never been quite sure how to express the same point and I haven't seen it written up elsewhere.
It's a bit unfortunate that we don't seem to have agreevote on shortforms.
Will the results of this research project be published? I'd really like to have a better sense of biosecurity risk in numbers.
That makes sense! I failed to think of non-human applications.
Edit: "economically crucial" should have been a hint.
Amazing. Well done. I am proud of you!
Thank you so much for sharing your experience, it's really helpful. I have previously wondered what the process looks like in the UK. I am sorry to hear about your mum.
Thank you so much for sharing!
I was only confused by this paragraph:
I can't find anything on his work on preserving sperm for artificial insemination, apparently economically crucial. I worry that is his one negative invention.
Why do you consider this potentially negative?
The assumption I had is that we defer a lot of power (intellectual, social, and financial) to a small group of broadly unaccountable, non-transparent people on the assumption that they are uniquely good at making decisions, noticing risks to the EA enterprise and combatting them, and that this unique competence is what justifies the power structures we have in EA.
Is this actually true right now? People donating to EA Funds seem like an example of deferring financial decisions, but I don't have data on how EAs donate to the Funds vs. decide themselves where to do...
The vast bulk of funds in EA (OpenPhil and, until last week, FTX Future Fund) are controlled by very few people (financial). As is admission to EA Global (social). Intellectual direction is more open with e.g. the EA Forum, but things like big book projects and their promotion (The Precipice, WWOTF) are pretty centralised, as is media engagement in general.
If your goal were doing the most good, why would it matter how you expect EA to treat you in the case of failure?
Because he's a human being and human beings need social support to thrive. I think it's false to equate this perfectly fine human need with a lower motive like status-seeking. If we want people to try hard to do good we as a community should still be there for them when they fall.
I was pretty taken aback by GiveWell's moral weights by age. I had not expected them to give babies so little moral weight compared with DALYs. This means GiveWell considers saving babies' lives to be only as valuable as saving people in their late 30s, despite the latter being almost halfway through their lives. The graph makes the drop-off of moral weights at younger ages look less sharp than it is, as the x-axis is not to scale.
I looked at the links for further information on this which I'm collating here for anyone else interested:
From the [public] 2020...
Thank you Tobias! I've wanted to learn more about the practical implications of s-risks for a while but never quite knew where to start, I'm really keen to read Part III.
I'm afraid I don't know anything. While I still like my piece it wasn't intended to provide a strong case against longtermism, only to briefly explore my personal disagreements. In such a piece I would want to see the case against longtermism from different value systems as well as actually engaging with the empirics around cause prioritisation, apart from the obvious: being a lot more thorough than I was.
I’m sorry I’m only getting to this comment now: I would like to clarify that the reason I started to work outside the EA sphere was not exclusively financial. I decided against exploring this, but I had some suggestions for a generic grant in my direction. The work I did as a research assistant was also on a grant.
I much prefer a “real job”, and as far as I can tell, there are still very few opportunities in the EA job market I’d find enticing. I care about receiving plenty of feedback and legible career capital and that’s much easier as part of an organiz...
There’s a small selfish part of me which is happy that my “Why I am probably not a longtermist” post is shared as the critical piece on longtermism.
There’s a much bigger part which wishes that someone had written up something much more substantial though! I am a bit appalled that my post seems to be the best we as a movement have to offer to newcomers on critical perspectives.
I did not know this at the time of writing, but GiveWell recommended an Incubation Grant to an Evidence Action programme for syphilis treatment during pregnancy in 2020. They view the moral weights of stillbirth prevention as highly uncertain; in their CEA they assign 33 QALYs to a stillbirth averted. This is consistent with a number I found once for what the British NHS assigns.
The CEA for syphilis prevention includes stillbirths averted in its total cost per life saved (coming out to a bit over $1,000), which is inconsistent with how GiveWell h...
Current reporting on monkeypox, particularly from government agencies/public health officials, has been pretty terrible, trying to downplay that MPXV is predominantly spreading through sexual activity between men.
The only source for this claim you give is US based. I have not investigated this broadly, but the first two countries whose disease protection agencies I checked do make very clear that this outbreak is primarily in men who have sex with men.
The UK Health Security Agency on latest updates on monkeypox:
..."While anyone can get monkeypox, the maj
Thank you for writing this Nuno.
Posts around self-worth, not feeling "smart enough" and related topics on the EA Forum don't resonate with me despite having had some superficially similar experiences in EA to the people who are struggling.
My best guess is that this is because this is true for me:
...Or, in other words, I agree that having psychological safety is good. But I think this is the case for true psychological safety, which could come from a circle of close friends or family who are in fact willing to support you in hard times. So psychological safety >
Thanks for doing this!
The strength of the arguments is very mixed as you say. If you wanted to find good arguments, I think it might have been better to focus on people with more exposure to the arguments. But knowing more about where a diverse set of EAs is at in terms of persuasion is good too, especially for AI safety community builders.
Ah, when you said 'significant amount' I assumed you meant a lot more. 10% of the total does not seem like much to me.
Sorry, I didn't want to imply Caplan was making a more nuanced argument than you suggested! I do think he makes a much more nuanced argument than the OP suggests however.
EAs seem generally receptive to resources like Emily Oster’s books, Bryan Caplan’s book, or Scott Alexander’s Biodeterminist Guide (and its sequel), which all suggest to varying degrees that a significant amount of the toil of parenting can be forgone with near-zero cost.
I think this is not only false, but also none of the authors claim this.
I am not excited. In my experience it is common for parents of young children to have a lot of ideas on this they are keen to implement but dial back on this as their kids get older. Implementing such ideas is a lot of work! You are not able to pursue a full-time career while fully homeschooling your kids. You would forfeit all the benefits of them growing bigger and needing you less. Also, my experience is that most parents realise that outdoing the traditional school system or alternatives with homeschooling is a much higher bar than they thought. This was definitely true for me. (My oldest is ~12.)
Paraphrasing Caplan without double-checking his sources: the shared environmental effects on politics and religion are on political and religious labels, not necessarily on actions. So your kid might also call themselves a Christian, but not actually go to church that much.
I agree we shouldn't discourage EAs from having kids too much for some of the reasons you mention, but I am not sure who you are arguing against? I think anti-kid sentiment used to be stronger in the early days of EA but I have not seen it around in years.
Wanting to justify having ch...
Thank you for sharing!
My concern about people and animals having net-negative lives has been related to what’s happening with my own depression. My concern is a lot stronger when I’m doing worse personally.
I share the experience that my concern is stronger when I am in a worse mood but I am not sure I share your conclusion.
My concern comes from an intuitive judgement when I am in a bad mood. When I am in a good mood it requires cognitive effort to remember how badly off many other people and animals are.
I don't want to deprioritise the worst off in fav...
Oh, I don't think either conclusion is clearly right. I do worry that me being happy makes it too easy for me to neglect important worries about what things are like for others.
But I think I was sloppy in rounding to "maybe AI ending everything wouldn't be that bad," partly because the world could well get better than it currently is, and partly because unaligned AI could make things worse.
This is a link collection, for ease of reference, of content relevant to my post that has been published since.
Focusing on the empirical arguments to prioritise x-risks instead of philosophical ones (which I could not be more supportive of):
Carl Shulman’s 80,000 Hours podcast on the common sense case for existential risk
Scott Alexander writing about the terms long-termism and existential risks
On the definition of existential risk (as I find Bostrom’s definition dubious):
...You should keep in mind that high-earning positions enable a large amount of donations! Money is a lot more flexible in terms of which cause you can deploy it to. In light of current salaries, one could even work on x-risks as a global poverty EtG strategy.
I think neartermist is completely fine. I have no negative associations with the term, and suspect the only reason it sounds negative is because longtermism is predominant in the EA Community.
This is just a note that I still intend to respond to a lot of comments, but I will be slow! (I went into labour as I was writing my previous batch of responses and am busy baby cuddling now.)
It will depend on what your alternatives are. If you could become a charity entrepreneur, I would expect that option to dominate your proposed path. Perhaps you are pursuing some other direct work options that you can compare to this path once you have received an offer.
But if there are no compelling direct work options (and for most people, there won't be), earning and donating as much as you can is a great path! Donating $10k a year is a great start.