(personal views only) In brief, yes, I still basically believe both of these things; and no, I don't think I know of any other type or action that I'd consider 'robustly positive', at least from a strictly consequentialist perspective.
To be clear, my belief regarding (i) and (ii) is closer to "there exist actions of these types that are robustly positive", as opposed to "any action that purports to be of one of these types is robustly positive". E.g., it's certainly possible to try to reduce the risk of human extinction but for that attempt to be ineffective...
Do you have any data you can share on how the population responding to the FTX section/survey differs from the full EAS survey population? E.g. along dimensions like EA engagement, demographics, ... – Or anything else that could shed light on the potential selection effect at this stage? (Sorry if you say this somewhere and I missed it.) Thanks for all your work on this!
Yes we do (and thanks for the comment!).
First, 80.1% of respondents who were asked to answer additional questions about FTX decided to do so. This is similar to the number of respondents (83.1%) who agreed to answer the ‘extra credit’ questions prior to us adding the FTX questions. So, it does not seem like there was a large tendency for respondents to not answer the FTX questions, compared to just a general tendency to not answer extra questions at all.
Second, we looked at whether there are differences in demographics between those who answered the F...
This isn't quite what you're looking for because it's more a partial analogy of the phenomenon you point to rather than a realistic depiction, but FWIW I found this old short story by Eliezer Yudkowsky quite memorable.
In the short-term (the next ten years), WAW interventions we could pursue to help wild animals now seem less cost-effective than farmed animal interventions.
Out of curiosity: When making claims like this, are you referring to the cost-effectiveness of farmed animal interventions when only considering the impacts on farmed animals? Or do you think this claim still holds if you also consider the indirect effects of farmed animal interventions on wild animals?
(Sorry if you say this somewhere and I missed it.)
Ok, let’s consider this for each type of farmed animal welfare intervention:
Thanks for pointing this out, Max!
Based on this, I think it is plausible the nearterm effects of any intervention are driven by the effects on wild animals, namely arthropods and nematodes. For example, in the context of global health and development (see here):
I think GiveWell’s top charities may be anything from very harmful to very beneficial accounting for the effects on terrestrial arthropods.
If this is so, the expected nearterm effects of neartermist interventions (including ones attempting to improve the welfare of farmed animals) are also...
I like this idea! Quick question: Have you considered whether, for a version of this that uses past data/conjectures, one could use existing data compiled by AI Impacts rather than the Wikipedia article from 2015 (as you suggest)?
(Though I guess if you go back in time sufficiently far, it arguably becomes less clear whether Laplace's rule is a plausible model. E.g., did mathematicians in any sense 'try' to square the circle in every year between Antiquity and 1882?)
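For reference, Laplace's rule of succession says that after n trials with no successes, the probability of success on the next trial is estimated as 1/(n + 2). A minimal sketch (the 100-year figure below is purely illustrative, not a claim about any particular conjecture):

```python
def laplace_success_prob(failures: int) -> float:
    """Laplace's rule of succession: P(success on the next trial)
    after `failures` consecutive failures and zero successes."""
    return 1.0 / (failures + 2)

# E.g. after 100 years of failed attempts, the rule gives
# 1/102, i.e. just under a 1% chance of success next year.
print(laplace_success_prob(100))
```

This also makes vivid the issue in the parenthetical above: the estimate is only as meaningful as the definition of what counts as a "trial" in a given year.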
Tail-effects in education: Since interventions have to scale, they end up being mediocre compared to "what could be possible."
Related: Bloom's two-sigma problem:
Bloom found that the average student tutored one-to-one using mastery learning techniques performed two standard deviations better than students educated in a classroom environment with one teacher to 30 students
(haven't vetted the Wikipedia article or underlying research at all)
The following link goes to this post rather than the paper you mention:
For reasons just given, I think we should be far more skeptical than some longtermists are. For more, see this paper on simulation theory by me and my co-author Micah Summers in Australasian Journal of Philosophy.
"Moral realism" usually just means that moral beliefs can be true or false. That leaves lots of options for explaining what the truth conditions of these beliefs are.
Moral realism is often (though not always) taken to, by definition, also include the claim that at least some moral beliefs are true – e.g. here in the Stanford Encyclopedia of Philosophy. A less ambiguous way to refer to just the view that moral beliefs can be true or false is 'moral cognitivism', as also mentioned here.
This is to exclude from moral realism the view known as 'error theory', whic...
Parfit here is making a reference to Sidgwick's "Government House utilitarianism," which seemed to suggest that only people in power should believe utilitarianism, and that they should not spread it.
This may be clear to you, and isn't important for the main point of your comment, but I think that 'Government House utilitarianism' is a term coined by Bernard Williams in order to refer to this aspect of Sidgwick's thought while also alluding to what Williams viewed as an objectionable feature of it.
Sidgwick himself, in The Methods of Ethics, referred to the issue as esoteric moral...
Thank you so much for writing this. This may be very helpful when we start working on non-English versions of What We Owe The Future.
Yes, I also thought that the view that Scott seemed to suggest in the review was a clear non-starter. Depending on what exactly the proposal is, it inherits fatal problems from either negative utilitarianism or averagism. One would arguably be better off just endorsing a critical level view instead, but then one has stopped going beyond what's in WWOTF. (Though, to be clear, it would be possible to go beyond WWOTF by discussing some of the more recent and more complex views in population ethics that have been developed, such as attempts to improve upon sta...
The Asymmetry is certainly widely discussed by academic philosophers, as shown by e.g. the philpapers search you link to. I also agree that it seems off to characterize it as a "niche view".
I'm not sure, however, whether it is widely endorsed or even widely defended. Are you aware of any surveys or other kinds of evidence that would speak to that more directly than the fact that there are a lot of papers on the subject (which I think primarily shows that it's an attractive topic to write about by the standards of academic philosophy)?
I'd be pretty inte...
I agree with the 'spawned an industry' point and how that makes it difficult to assess how widespread various views really are.
As usual (cf. the founding impetus of 'experimental philosophy'), philosophers don't usually check whether the intuition is in fact widely held, and recent empirical work casts some doubt on that.
Magnus in the OP discusses the paper you link to in the quoted passage and points out that it also contains findings we can interpret in support of a (weak) asymmetry of some kind. Also, David (the David who's a co-author of the paper...
More broadly, living conditions have on average improved enormously since 1920. (And depending on your view on population ethics, you might also think that total human well-being increased by a lot because the world population quadrupled since then.)
This effect is so broad and pervasive that lots of actions by many people in 1920 must have contributed to this, though of course there were some with an outsized effect such as perhaps the invention of the Haber-Bosch process; work by John Snow, Louis Pasteur, Robert Koch, and others establishing the germ the...
One classic example is Benjamin Franklin, who upon his death in 1790
invested £1000 (about $135,000 in today’s money) each for the cities of Boston and Philadelphia: three-quarters of the funds would be paid out after one hundred years, and the remainder after two hundred years. By 1990, when the final funds were distributed, the donation had grown to almost $5 million for Boston and $2.3 million for Philadelphia.
(From What We Owe The Future, p. 24. See notes (1.34) and (1.35) on the WWOTF website here for references. Franklin's bequest is well-known but po...
Woah...if 40% of wealth were wiped out, that would have no impact on investment? I think we have different assumptions about the elasticity between wealth and donations (my prior is that it's fairly elastic).
This Open Phil blog post is interesting in this context. (Though note in this case the underlying wealth change was, I believe, not driven by crypto and instead mostly by the bear market for tech stocks.)
Good question! I'm pretty uncertain about the ideal growth rate and eventual size of "the EA community"; in my mind this is among the more important unresolved strategic questions (though I suspect it'll only become significantly action-relevant in a few years).
In any case, by expressing my agreement with Linch, I didn't mean to rule out the possibility that in the future it may be easier for a wider range of people to have a good time interacting with the EA community. And I agree that in the meantime "making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere" is (in some cases) the right goal.
I think realizing that different people have different capacities for impact is importantly true. I also think it's important and true to note that the EA community is less well set up to accommodate many people than other communities. I think what I said is also kinder to say, in the long run, than casual reassurances that make it harder for people to understand what's going on. I think most of the other comments do not come from an accurate model of what's most kind to Olivia (and onlookers) in the long run.
FWIW I strongly agree with this.
I think social group stratification might explain some of the other comments to this post that I found surprising/tone-deaf.
Yes, that's my guess as well.
I feel like a lot of very talented teenagers actively avoid content that seems directly targeted at people their age (unless it seems very selective or something) because they don't expect that to be as engaging / "on their level" as something targeted at university students.
FWIW I think I would also have been pretty unlikely to engage with any material explicitly pitched at adolescents or young adults after about the age of 15, maybe significantly earlier.
Yeah I agree that some talented teenagers don't want to engage with material targeted at their age group.
I try not to use the word teenager on the site (there may be some old references), and write basically as if it's for me at my current age without assuming the knowledge I have.
But I'm not at all sure we've got the tone and design right – I'd appreciate hearing if anyone finds any examples on the site of something that seems condescending, belittling, or unempowering, etc.
Thanks for your feedback and your questions!
I'd be curious to know how open the fund is to this type of activity.
We are very open to making grants funding career transitions, and I'd strongly encourage people who could use funding to facilitate a career transition to apply.
For undergraduate or graduate stipends/scholarships specifically, we tend to have a somewhat high bar because
Most people on average are reasonably well-calibrated about how smart they are.
(I think you probably agree with most of what I say below and didn't intend to claim otherwise, reading your claim just made me notice and write out the following.)
Hmm, I would guess that people on average (with some notable pretty extreme outliers in both directions, e.g. in imposter syndrome on one hand and the grandiose variety of narcissistic personality disorder on the other hand, not to mention more drastic things like psychosis) are pretty calibrated about how their cogni...
I think you're entirely right here. I basically take back what I said in that line.
I think the thing I originally wanted to convey there is something like "people systematically overestimate effects like Dunning-Kruger and imposter syndrome," but I basically agree that most of the intuition I have is in pretty strongly range-restricted settings. I do basically think people are pretty poorly calibrated about where they are compared to the world.
(I also think it's notably more likely that Olivia is above average than below average.)
Relatedly, I t...
Thanks, I think it's great to make this data available and to discuss it.
FWIW, while I haven't looked at any updates the UN may have made for this iteration, when briefly comparing the previous UN projections with those by Vollset et al. (2020), available online here, I came away being more convinced by the latter. (I think I first heard about them from Leopold Aschenbrenner.) They tend to predict more rapidly falling fertility rates, with world population peaking well before the end of the century and then declining.
The key difference in methods is that V...
I didn't vote on your comment on either scale, but FWIW my guess is that the disagreement is due to quite a few people having the view that AI x-risk does swamp everything else.
I suspected that, but it didn't seem very logical. AI might swamp other x-risks, but it seems unlikely to swamp our chances of dying young, especially if we use the model in the piece.
Although he says that he's more pessimistic on AI than his model suggests, in the model his estimates are definitely within the range where other catastrophic risks would seriously change them.
I did a rough estimate with nuclear war vs. natural risk (using his very useful spreadsheet, and loosely based on Rodriguez' estimates) (0.39% annual chance of US-Russ...
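For readers who want to reproduce this kind of rough estimate: an annual probability of catastrophe compounds into a cumulative probability over a time horizon as 1 − (1 − p)^n, assuming (as a simplification) the risk is constant and independent across years. The 0.39% figure is from the estimate above; the 40-year horizon is an illustrative assumption, not from the spreadsheet:

```python
def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability of at least one catastrophe over `years` years,
    given a constant, independent annual probability `annual_p`."""
    return 1 - (1 - annual_p) ** years

# 0.39% per year over an assumed 40-year horizon
# comes out to roughly a 14% cumulative probability.
print(cumulative_risk(0.0039, 40))
```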
I agree that something like this is true and important.
Some related content (more here):
[Info hazard notice: not safe for work.]
Being somewhat self-conscious about being among the older members of the EA community despite being only in my early 30s, I rather turn toward Tocotronic's Ich möchte Teil einer Jugendbewegung sein ("I want to be part of a youth movement").
In a rare feat of prescience, the band also editorialized EA culture in many of their releases starting with their 1991 debut:
This doesn't have all of the properties you're most looking for, but one example is this video by YouTuber Justin Helps which is about correcting an error in an earlier video and explaining why he might have made that error. (I don't quite remember what the original error was about – I think something to do with Hamilton's rule in population genetics.)
FWIW to me as a German native speaker this proposed translation sounds like "long nap", "long slumber" or similar. :)
I agree with your specific claims, but FWIW I thought that, despite some gaps, the post was good overall, and unusually well written in terms of being engaging and accessible.
The reason why I overall still like this post is that I think at its core it's based on (i) a correct diagnosis that there is an increased perception that 'EA is just longtermism' both within and outside the EA community, as reflected in prominent public criticisms of EA that mostly talk about their opposition to longtermism, and (ii) it describes some mostly correct facts that ex...
Thanks so much for sharing your perspective in such detail! Just dropping a quick comment to say you might be interested in this post on EA for mid-career people by my former colleague Ben Snodin if you haven't seen it. I believe that he and collaborators are also considering launching a small project in this space.
Thank you for taking the time to share your perspective. I'm not sure I share your sense that spending money to reach out to Salinas could have made the same expected difference to pandemic preparedness, but I appreciated reading your thoughts, and I'm sure they point to some further lessons learned for those in the EA community who will keep being engaged in US politics.
I recommended some retroactive funding for this post (via the Future Fund's regranting program) because I think it was valuable and hadn't been otherwise funded. (Though I believe CEA agreed to fund potential future updates.)
I think the main sources of value were:
Hi, EAIF chair here. I agree with Michelle's comment above, but wanted to reply as well to hopefully help shed more light on our thinking and priorities.
As a preamble, I think all of your requests for information are super reasonable, and that in an ideal world we'd provide such information proactively. The main reason we're not doing so are capacity constraints.
I also agree it would be helpful if we shared more about community building activities we'd especially like to see, such as Buck did here and as some AMA questions may have touched upon. Again this...
Thank you for this well-thought-out response. I appreciate the effort it took you and Michelle to respond to me. I am now leaning much more toward thinking I was wrong about all this. And if LA's application was initially part-time, that was one foundational piece I got wrong. I still wish that I could have received more details about my own application (the email specified that no feedback could be provided), but I will encourage more people I know to apply for CB work.
I have added a qualifier to my original comment that I am probably wrong. As this particular foru...
I'm not sure. – Peter Gabriel, for instance, seems to be an adherent of shorthairism, which I'm skeptical of.
The submission in last place looks quite promising to me actually.
Does anyone know whether Peter Singer is a pseudonym or the author's real name, and whether they're involved in EA already? Maybe we can get them to sign up for an EA Intro Fellowship or send them a free copy of an EA book – perhaps TLYCS?
I don't know but FWIW my guess is some people might have perceived it as self-promotion of a kind they don't like.
(I upvoted Sanjay's comment because I think it's relevant to know about his agreement and about the plans for SoGive Grants given the context.)
Maybe the notes on 'ascription universality' on ai-alignment.com are a better match for your sensibilities.
You might be interested in this paper on 'Backprop as Functor'.
(I'm personally not compelled by the safety case for such work, but YMMV, and I think I know at least a few people who are more optimistic.)
Some mathy AI safety pieces or other related material off the top of my head (in no particular order, and definitely not comprehensive nor weighted toward impact or influence):
(Posting as a comment since I'm not really answering your actual question.)
I think if you find something within AI safety that is intellectually motivating for you, this will more likely than not be your highest-impact option. But FWIW here are some pieces that are mathy in one way or another that in my view still represent valuable work by impact criteria (in no particular order):
...
The link to Carl's comment doesn't work for me, but this one does.
This link from the main text to the same comment also doesn't work for me: