All of Max_Daniel's Comments + Replies

(See also this comment by Carl Shulman which is already mentioned in the post.)

 

The link to Carl's comment doesn't work for me, but this one does. 

This link from the main text to the same comment also doesn't work for me:

as Carl Shulman observed many years ago

7
David_Althaus
7mo
Thanks for pointing this out! I corrected both links. 

(personal views only) In brief, yes, I still basically believe both of these things; and no, I don't think I know of any other type of action that I'd consider 'robustly positive', at least from a strictly consequentialist perspective.

To be clear, my belief regarding (i) and (ii) is closer to "there exist actions of these types that are robustly positive", as opposed to "any action that purports to be of one of these types is robustly positive". E.g., it's certainly possible to try to reduce the risk of human extinction but for that attempt to be ineffective... (read more)

2
Vasco Grilo
9mo
Thanks! I think I have converged towards a similar view.

I don't remember, I'm afraid. I don't recall having seen the article you link to, so I doubt it was that. Maybe it was this one.

Do you have any data you can share on how the population responding to the FTX section/survey differs from the full EAS survey population? E.g., along dimensions like EA engagement, demographics, ... – or anything else that could shed light on the potential selection effect at this stage? (Sorry if you say this somewhere and I missed it.) Thanks for all your work on this!

Yes we do (and thanks for the comment!).

First, 80.1% of respondents who were asked to answer additional questions about FTX decided to do so. This is similar to the share of respondents (83.1%) who agreed to answer the ‘extra credit’ questions prior to us adding the FTX questions. So, respondents do not seem to have been especially inclined to skip the FTX questions specifically, as opposed to a general tendency to skip extra questions altogether.

Second, we looked at whether there are differences in demographics between those who answered the F... (read more)

This isn't quite what you're looking for because it's more a partial analogy of the phenomenon you point to rather than a realistic depiction, but FWIW I found this old short story by Eliezer Yudkowsky quite memorable.

In the short-term (the next ten years), WAW interventions we could pursue to help wild animals now seem less cost-effective than farmed animal interventions.

Out of curiosity: When making claims like this, are you referring to the cost-effectiveness of farmed animal interventions when only considering the impacts on farmed animals? Or do you think this claim still holds if you also consider the indirect effects of farmed animal interventions on wild animals?


(Sorry if you say this somewhere and I missed it.)

Ok, let’s consider this for each type of farmed animal welfare intervention:

  • Humane slaughter of farmed animals and wild-caught fish. I’m guessing that it doesn’t impact WAW that much.
  • Reducing animal product production. E.g., diet change advocacy, meat alternatives. Such interventions increase wild populations a lot. If you believe that wild animals live bad lives (which is questionable but I’d give it a 65% probability), then it follows that reducing meat production is likely bad for short-term animal welfare. I personally still think that reduci
... (read more)
6
saulius
1y
Great question. Yes, I think the claim still holds. It’s a bit tricky to explain why, so you will have to stick with me. Let’s assume that:

  • Chicken welfare reforms are the most cost-effective intervention we found if we only consider the direct impact on chickens,
  • The indirect impacts of these welfare reforms on WAW are so bad that they outweigh the impact on chickens,
  • Each $1 we spend to oppose welfare reforms negates $1 spent on welfare reforms.

It would follow that if we ignored the impact on chickens, then opposing welfare reforms would be the new most cost-effective intervention because of its impact on WAW. But that would be a very surprising coincidence. I’d call it surprising divergence (as opposed to surprising convergence). But ah, I’m now realizing that there is much more to this problem. It gets a lot messier. I’ll write more about this later.
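(To make the "surprising divergence" argument concrete, here is a minimal numeric sketch; all the welfare figures are invented assumptions for illustration, not numbers from the reply:)

```python
# Minimal sketch of the 'surprising divergence' argument. All numbers are
# invented for illustration. Suppose $1 of chicken welfare reform yields
# +3 units of chicken welfare but -5 units of wild-animal welfare (WAW),
# and $1 spent opposing reforms exactly negates $1 spent on reforms.
reform = {"chicken": +3.0, "waw": -5.0}
oppose = {k: -v for k, v in reform.items()}  # exact negation per dollar

# Totals: reforms score 3 - 5 = -2, so opposing them scores +2.
print(sum(reform.values()), sum(oppose.values()))  # -2.0 2.0

# Ignoring chickens entirely, opposing reforms would score +5 units of WAW
# per dollar -- making it look like the most cost-effective WAW intervention
# found so far, which would be a surprising coincidence if actually true.
print(oppose["waw"])  # 5.0
```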

Thanks for pointing this out, Max!

Based on this, I think it is plausible the nearterm effects of any intervention are driven by the effects on wild animals, namely arthropods and nematodes. For example, in the context of global health and development (see here):

I think GiveWell’s top charities may be anything from very harmful to very beneficial accounting for the effects on terrestrial arthropods.

If this is so, the expected nearterm effects of neartermist interventions (including ones attempting to improve the welfare of farmed animals) are also... (read more)

3
Fai
1y
This is a great question. I totally missed this consideration while reading this post, but it is imperative to keep this question in mind while thinking about this topic.

I like this idea! Quick question: Have you considered whether, for a version of this that uses past data/conjectures, one could use existing data compiled by AI Impacts rather than the Wikipedia article from 2015 (as you suggest)?

(Though I guess if you go back in time sufficiently far, it arguably becomes less clear whether Laplace's rule is a plausible model. E.g., did mathematicians in any sense 'try' to square the circle in every year between Antiquity and 1882?)
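(For concreteness, a minimal sketch of how Laplace's rule would apply here; the year counts are illustrative assumptions:)

```python
# Laplace's rule of succession: after n trials with s successes, the
# probability of a success on the next trial is (s + 1) / (n + 2).
# Applied to an open conjecture, each year counts as one failed 'trial'.

def laplace_next(n_trials: int, successes: int = 0) -> float:
    return (successes + 1) / (n_trials + 2)

# Squaring the circle: 'open' for roughly 2,180 years (from ~300 BC to
# 1882) -- if we grant that mathematicians were in any sense 'trying' in
# each of those years, which is exactly what's doubtful.
print(laplace_next(2180))  # ~0.00046 probability of resolution per year
```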

3
NunoSempere
1y
I wish I had known about the AI Impacts data sooner. As they point out, looking at remembered conjectures maybe adds some bias. But then later in their post, they mention something which could also be used to answer this question. But in their dataset, I don't see any conjectures proved between 2014 and 2020, which is odd. Anyways, thanks for the reference!

Tail-effects in education: Since interventions have to scale, they end up being mediocre compared to "what could be possible."

 

Related: Bloom's two-sigma problem:

Bloom found that the average student tutored one-to-one using mastery learning techniques performed two standard deviations better than students educated in a classroom environment with one teacher to 30 students

(haven't vetted the Wikipedia article or underlying research at all)

The following link goes to this post rather than the paper you mention:

For reasons just given, I think we should be far more skeptical than some longtermists are. For more, see this paper on simulation theory by me and my co-author Micah Summers in Australasian Journal of Philosophy.

5
marcusarvan
1y
Thanks - fixed!

"Moral realism" usually just means that moral beliefs can be true or false. That leaves lots options for explaining what the truth conditions of these beliefs are.

Moral realism is often (though not always) taken to, by definition, also include the claim that at least some moral beliefs are true – e.g. here in the Stanford Encyclopedia of Philosophy. A less ambiguous way to refer to just the view that moral beliefs can be true or false is 'moral cognitivism', as also mentioned here.

This is to exclude from moral realism the view known as 'error theory', whic... (read more)

1
peterhartree
2y
Thanks. I've edited (1) to exclude error theory. (This is a bit tricky because one might be a moral realist who thinks all our current beliefs are false, but we might get some right at some point. But anyway.)

Parfit here is making a reference to Sidgwick's "Government House utilitarianism," which seemed to suggest only people in power should believe utilitarianism but not spread it.

This may be clear to you, and isn't important for the main point of your comment, but I think that 'Government House utilitarianism' is a term coined by Bernard Williams in order to refer to this aspect of Sidgwick's thought while also alluding to what Williams viewed as an objectionable feature of it.

Sidgwick himself, in The Methods of Ethics, referred to the issue as esoteric moral... (read more)

7
ThomasW
2y
Thanks for the background on esoteric morality! Yes, I perhaps should have been more clear that "Government House" was not Sidgwick's term, but a somewhat derogatory term leveled against him.

Thank you so much for writing this. This may be very helpful when we start working on non-English versions of What We Owe The Future.

Yes, I also thought that the view that Scott seemed to suggest in the review was a clear non-starter. Depending on what exactly the proposal is, it inherits fatal problems from either negative utilitarianism or averagism. One would arguably be better off just endorsing a critical level view instead, but then one is no longer going beyond what's in WWOTF. (Though, to be clear, it would be possible to go beyond WWOTF by discussing some of the more recent and more complex views in population ethics that have been developed, such as attempts to improve upon sta... (read more)

Thank you, that's interesting and I hadn't seen this.

6
David_Althaus
2y
(I now wrote a comment elaborating on some of these inconsistencies here.)

The Asymmetry is certainly widely discussed by academic philosophers, as shown by e.g. the philpapers search you link to. I also agree that it seems off to characterize it as a "niche view".

I'm not sure, however, whether it is widely endorsed or even widely defended. Are you aware of any surveys or other kinds of evidence that would speak to that more directly than the fact that there are a lot of papers on the subject (which I think primarily shows that it's an attractive topic to write about by the standards of academic philosophy)?

I'd be pretty inte... (read more)

8
MichaelPlant
2y
This impression strikes me as basically spot on. It would have been more accurate for me to say it's widely held to be an intuitive desideratum for theories of population ethics. It does have its defenders, though, e.g. Frick, Roberts, Bader. I agree that there does not seem to be any theory that rationalises this intuition without having other problems (but this is merely a specific instance of the general case that there seems to be no theory of population ethics that retains all our intuitions - hence Arrhenius' famous impossibility result). I'm not aware of any surveys of philosophers on their views on population ethics. AFAICT, the number of professional philosophers who are experts in population ethics - depending on how one wants to define those terms - could probably fit into one lecture room.

I agree with the 'spawned an industry' point and how that makes it difficult to assess how widespread various views really are.

As is typical (cf. the founding impetus of 'experimental philosophy'), philosophers don't usually check whether the intuition is in fact widely held, and recent empirical work casts some doubt on that.

Magnus in the OP discusses the paper you link to in the quoted passage and points out that it also contains findings we can interpret in support of a (weak) asymmetry of some kind. Also, David (the David who's a co-author of the paper... (read more)

More broadly, living conditions have on average improved enormously since 1920. (And depending on your view on population ethics, you might also think that total human well-being increased by a lot because the world population quadrupled since then.)

This effect is so broad and pervasive that lots of actions by many people in 1920 must have contributed to this, though of course there were some with an outsized effect such as perhaps the invention of the Haber-Bosch process; work by John Snow, Louis Pasteur, Robert Koch, and others establishing the germ the... (read more)

One classic example is Benjamin Franklin, who upon his death in 1790

invested £1000 (about $135,000 in today’s money) each for the cities of Boston and Philadelphia: three-quarters of the funds would be paid out after one hundred years, and the remainder after two hundred years. By 1990, when the final funds were distributed, the donation had grown to almost $5 million for Boston and $2.3 million for Philadelphia.

(From What We Owe The Future, p. 24. See notes (1.34) and (1.35) on the WWOTF website here for references. Franklin's bequest is well-known but po... (read more)
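(As a quick sanity check on the compounding implied by the quoted Boston figures, a minimal sketch; it ignores that three-quarters of the funds were paid out after 100 years:)

```python
# Implied average real growth rate from the quoted Boston figures:
# ~$135,000 (the 1790 bequest in today's money) growing to ~$5,000,000
# by 1990. Ignores the partial payout after 100 years, so it's only a
# rough sanity check.
initial, final, years = 135_000, 5_000_000, 200
rate = (final / initial) ** (1 / years) - 1
print(f"{rate:.2%} per year")  # ~1.82% per year, compounded over 200 years
```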

Woah...if 40% of wealth were wiped out, that would have no impact on investment? I think we have different assumptions about the elasticity between wealth and donations (my prior is that it's fairly elastic).

This Open Phil blog post is interesting in this context. (Though note in this case the underlying wealth change was, I believe, not driven by crypto and instead mostly by the bear market for tech stocks.)

Yep, this is one of several reasons why I think that Part I is perhaps the best and certainly the most underrated part of the book. :)

Good question! I'm pretty uncertain about the ideal growth rate and eventual size of "the EA community"; in my mind this is among the more important unresolved strategic questions (though I suspect it'll only become significantly action-relevant in a few years).

In any case, by expressing my agreement with Linch, I didn't mean to rule out the possibility that in the future it may be easier for a wider range of people to have a good time interacting with the EA community. And I agree that in the meantime "making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere" is (in some cases) the right goal.

4
Sophia
2y
Thanks 😊. Yeah, I've noticed that this is a big conversation right now.

My personal take: EA ideas are nuanced and ideas do/should move quickly as the world changes and our information about it changes too. It is hard to move quickly with a very large group of people. However, the core bit of effective altruism, something like "help others as much as we can and change our minds when we're given a good reason to", does seem like an idea that has room for a much wider ecosystem than we have. I'm personally hopeful we'll get better at striking a balance.

I think it might be possible to both have a small group that is highly connected and dedicated (who maybe can move quickly) whilst also having many more adjacent people and groups that feel part of our wider team. Multiple groups co-existing means we can broadly be more inclusive, with communities that accommodate a very wide range of caring and curious people, where everyone who cares about the effective altruism project can feel they belong and can add value. At the same time, we can maybe still get the advantages of a smaller group, because smaller groups still exist too.

More elaboration (because I overthink everything 🤣): Organisations like GWWC do wonders for creating a version of effective altruism that is more accessible and distinct from the vibe of, say, the academic field of "global priorities research". I think it is probably worth it on the margin to invest a little more effort into the people that are sympathetic to the core effective altruism idea, but maybe might, for whatever reason, not find a full sense of meaning and belonging within the smaller group of people who are more intense and more weird.

I also think it might be helpful to put a tonne of thought into what community builders are supposed to be optimizing for. Exactly what that thing is, I'm not sure, but I feel like it hasn't quite been nailed just yet and lots of people are trying to move us closer to this from d

I think realizing that different people have different capacities for impact is importantly true. I also think it's important and true to note that the EA community is less well set up to accommodate many people than other communities. I think what I said is also more kind to say, in the long run, compared to casual reassurances that make it harder for people to understand what's going on. I think most of the other comments do not come from an accurate model of what's most kind to Olivia (and onlookers) in the long run.

FWIW I strongly agree with this.

3
Sophia
2y
Will we permanently have low capacity? I think it is hard to grow fast and stay nuanced, but I personally am optimistic about ending up as a large community in the long run (not next year, but maybe next decade) and I think we can sow seeds that help with that (e.g. by maybe making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere).

I think social group stratification might explain some of the other comments to this post that I found surprising/tone-deaf.

Yes, that's my guess as well.

I feel like a lot of very talented teenagers actively avoid content that seems directly targeted at people their age (unless it seems very selective or something) because they don't expect that to be as engaging / "on their level" as something targeted at university students. 

FWIW I think I would also have been pretty unlikely to engage with any material explicitly pitched at adolescents or young adults after about the age of 15, maybe significantly earlier.

Yeah I agree that some talented teenagers don't want to engage with material targeted at their age group. 

I try not to use the word teenager on the site (there may be some old references), and write basically as if it's for me at my current age without assuming the knowledge I have. 

But I'm not at all sure we've got the tone and design right – I'd appreciate hearing if anyone finds any examples on the site of something that seems condescending, belittling, or unempowering, etc.

Thanks for your feedback and your questions!

I'd be curious to know how open the fund is to this type of activity.

We are very open to making grants funding career transitions, and I'd strongly encourage people who could use funding to facilitate a career transition to apply.

For undergraduate or graduate stipends/scholarships specifically, we tend to have a somewhat high bar because 

  • (a) compared to some other kinds of career transitions they involve providing funding for a relatively long period of time and often fund activities that are useful mostly f
... (read more)

These payout reports are now available here, albeit about two weeks later than I promised.

Most people on average are reasonably well-calibrated about how smart they are.

(I think you probably agree with most of what I say below and didn't intend to claim otherwise, reading your claim just made me notice and write out the following.)

Hmm, I would guess that people on average (with some notable pretty extreme outliers in both directions, e.g. in imposter syndrome on one hand and the grandiose variety of narcissistic personality disorder on the other hand, not to mention more drastic things like psychosis) are pretty calibrated about how their cogni... (read more)

I think you're entirely right here. I basically take back what I said in that line. 

I think the thing I originally wanted to convey there is something like "people systematically overestimate effects like Dunning-Kruger and imposter syndrome," but I basically agree that most of the intuition I have is in pretty strongly range-restricted settings. I do basically think people are pretty poorly calibrated about where they are compared to the world. 

(I also think it's notably more likely that Olivia is above average than below average.)

Relatedly, I t... (read more)

Thanks, I think it's great to make this data available and to discuss it.

FWIW, while I haven't looked at any updates the UN may have made for this iteration, when briefly comparing the previous UN projections with those by Vollset et al. (2020), available online here, I came away being more convinced by the latter. (I think I first heard about them from Leopold Aschenbrenner.) They tend to predict more rapidly falling fertility rates, with world population peaking well before the end of the century and then declining.

The key difference in methods is that V... (read more)

4
isabel
2y
This time, the WPP does show a population decline before the end of the century, though they still have a later and higher peak than Vollset et al.

Prior to the update, the UN projections were clearly worse than the Vollset ones; now that their projections are closer together, I'm less confident which one is likely closer to the truth, but lean towards Vollset still being better and the UN not having revised down enough yet.

Also, a fun observation: two of the eight countries contributing most to growth in the next three decades already have shrinking birth cohort sizes, because even though cohorts are getting smaller than in previous years, they're still much larger than the elderly population, which has the highest mortality rates. (India and the Philippines, though note that there is a really wide discrepancy between Philippines estimates of births and WPP estimates of births, which is wider than what the birth registration gap is purported to be.)

Another relevant Slate Star Codex post is Against Individual IQ Worries.

4
Sophia
2y
I love this post. It is so hard to communicate that the 2nd moment of a distribution (how much any person or thing tends to differ from the average[1]) is often important enough that what is true on average often doesn't apply very well to any individual (and platitudes that are technically false can therefore often be directionally correct in EA/LessWrong circles).

[1] This definition was edited in because I only thought of an okay definition ages later.

I didn't vote on your comment on either scale, but FWIW my guess is that the disagreement is due to quite a few people having the view that AI x-risk does swamp everything else.

I suspected that, but it didn't seem very logical. AI might swamp x-risk, but seems unlikely to swamp our chances of dying young, especially if we use the model in the piece. 

Although he says that he's more pessimistic on AI than his model suggests, in the model his estimates are definitely within the bounds where other catastrophic risks would seriously change his estimates.

I did a rough estimate with nuclear war vs. natural risk (using his very useful spreadsheet, and loosely based on Rodriguez' estimates) (0.39% annual chance of US-Russ... (read more)
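(For reference, the arithmetic for turning an annual probability into a cumulative one over a period; the 0.39% figure is quoted from the comment above, while the 30-year horizon is an illustrative assumption:)

```python
# Cumulative probability of at least one event over a horizon, given an
# annual probability p: 1 - (1 - p) ** years. The 0.39% annual US-Russia
# nuclear war probability is quoted from the comment above; the 30-year
# horizon ('dying young') is an illustrative assumption.
annual_p = 0.0039
years = 30
cumulative = 1 - (1 - annual_p) ** years
print(f"{cumulative:.1%}")  # ~11.1% chance of at least one such war in 30 years
```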

4
Kirsten
2y
I thought it might be that people simply didn't find the chart misleading, they thought it was clear enough and didn't need any more caveats.

I agree that something like this is true and important.

Some related content (more here):

1
Max Görlitz
2y
Thanks, I was only aware of two of these!

[Info hazard notice: not safe for work.]

Being somewhat self-conscious about being among the older members of the EA community despite being only in my early 30s, I rather turn toward Tocotronic's Ich möchte Teil einer Jugendbewegung sein ("I want to be part of a youth movement").

In a rare feat of prescience, the band also editorialized EA culture in many of their releases starting with their 1991 debut:

  • Digital ist besser  ("Digital is better", an influential argument for transhumanism)
  • Drei Schritte vom Abgrund entfernt ("Three steps away from The Prec
... (read more)

This doesn't have all of the properties you're most looking for, but one example is this video by YouTuber Justin Helps which is about correcting an error in an earlier video and explaining why he might have made that error. (I don't quite remember what the original error was about – I think something to do with Hamilton's rule in population genetics.)

2
Lizka
2y
Thank you!

FWIW to me as a German native speaker this proposed translation sounds like "long nap", "long slumber" or similar. :)

2
EdoArad
2y
yea, I thought it was close enough 😊

Thank you so much for your work on this, I'm excited to see what comes out of it. 

I agree with your specific claims, but FWIW I thought that, albeit having some gaps, the post was good overall, and unusually well written in terms of being engaging and accessible.

The reason why I overall still like this post is that I think at its core it's based on (i) a correct diagnosis that there is an increased perception that 'EA is just longtermism' both within and outside the EA community, as reflected in prominent public criticisms of EA that mostly talk about their opposition to longtermism, and (ii) it describes some mostly correct facts that ex... (read more)

Thanks so much for sharing your perspective in such detail! Just dropping a quick comment to say you might be interested in this post on EA for mid-career people by my former colleague Ben Snodin if you haven't seen it. I believe that he and collaborators are also considering launching a small project in this space.

4
Sarah Reed
2y
Thanks for the lead! The post you linked seems perfectly suited to me. I'll also contact Ben Snodin to inquire about what he may be working on around this matter.

Thank you for taking the time to share your perspective. I'm not sure I share your sense that spending money to reach out to Salinas could have made the same expected difference to pandemic preparedness, but I appreciated reading your thoughts, and I'm sure they point to some further lessons learned for those in the EA community who will keep being engaged in US politics.

I recommended some retroactive funding for this post (via the Future Fund's regranting program) because I think it was valuable and hadn't been otherwise funded. (Though I believe CEA agreed to fund potential future updates.)

I think the main sources of value were:

  • Providing (another) proof of concept that teams of forecasters can produce decision-relevant information & high-quality reasoning in crisis situations on relatively short notice.
  • Saving many people considerable amounts of time. (I know of several very time-pressed people who without that post w
... (read more)

Hi, EAIF chair here. I agree with Michelle's comment above, but wanted to reply as well to hopefully help shed more light on our thinking and priorities.

As a preamble, I think all of your requests for information are super reasonable, and that in an ideal world we'd provide such information proactively. The main reason we're not doing so are capacity constraints.

I also agree it would be helpful if we shared more about community building activities we'd especially like to see, such as Buck did here and as some AMA questions may have touched upon. Again this... (read more)

Thank you for this well-thought-out response. I appreciate the effort it took you and Michelle to respond to me. I am now leaning much more towards thinking that I was wrong about all this. And if LA's application was initially part-time, that was one foundational wrong piece. I still wish that I could have received more details about my own application (the email specified that no feedback could be provided), but I will encourage more people I know to apply for CB work.

I have added a qualifier to my original comment that I am probably wrong. As this particular foru... (read more)

I'm not sure. – Peter Gabriel, for instance, seems to be an adherent of shorthairism, which I'm skeptical of.

4
Zach Stein-Perlman
2y
You might not feel an instinctive affinity for shorthairists, but try to expand your moral circle!

The submission in last place looks quite promising to me actually. 

Does anyone know whether Peter Singer is a pseudonym or the author's real name, and whether they're involved in EA already? Maybe we can get them to sign up for an EA Intro Fellowship or send them a free copy of an EA book – perhaps TLYCS?

3
SiebeRozendal
2y
Maybe we should send a book to all singers named Peter? https://www.gemtracks.com/guides/view.php?title=most-famous-singers-celebrities-named-peter&id=4861

Peter Singer is originally a character in Scott Alexander's "Unsong," mentioned here (mild spoilers), so it's a pseudonym that's a reference for a certain ingroup.

I don't know but FWIW my guess is some people might have perceived it as self-promotion of a kind they don't like.

(I upvoted Sanjay's comment because I think it's relevant to know about his agreement and about the plans for SoGive Grants given the context.)

Maybe the notes on 'ascription universality' on ai-alignment.com are a better match for your sensibilities.

You might be interested in this paper on 'Backprop as Functor'.

(I'm personally not compelled by the safety case for such work, but YMMV, and I think I know at least a few people who are more optimistic.)

Some mathy AI safety pieces or other related material off the top of my head (in no particular order, and definitely not comprehensive nor weighted toward impact or influence):

(Posting as a comment since I'm not really answering your actual question.)

I think if you find something within AI safety that is intellectually motivating for you, this will more likely than not be your highest-impact option. But FWIW here are some pieces that are mathy in one way or another that in my view still represent valuable work by impact criteria (in no particular order):

... (read more)
3
Jenny K E
2y
Absolutely agree with everything you've said here! AI safety is by no means the only math-y impactful work. Most of these don't quite feel like what I'm looking for, in that the math is being used to do something useful or valuable but the math itself isn't very pretty. "Racing to the Precipice" looks closest to being the kind of thing I enjoy. Thank you for the suggestions!