All of nathan98000's Comments + Replies

Any links to where Scott Alexander deliberately argues that black people have lower IQs for genetic reasons? I've been reading his blog for a decade and I don't recall any posts on this.

7
David Mathers
2d
I should probably stop posting on this or reading the comments, for the sake of my mental health (I mean that literally; this is a major anxiety disorder trigger for me). But I guess I sort of have to respond to a direct request for sources.

Scott's official position on this is agnosticism, rather than public endorsement*. (See here for official agnosticism: https://www.astralcodexten.com/p/book-review-the-cult-of-smart) However, for years at SSC he put the dreaded neo-reactionaries on his blogroll. And they are definitely race/IQ guys. Meanwhile, he was telling friends privately at the time that "HBD" (i.e. "human biodiversity", but generally includes the idea that black people are genetically less intelligent) is "probably partially correct or at least very non-provably non-correct": https://twitter.com/ArsonAtDennys/status/1362153191102677001 . That is technically still leaving some room for agnosticism, but it's pretty clear which way he's leaning.

Meanwhile, he was also saying in private not to tell anyone he thinks this (I feel like I figured out his view was something like this anyway though? Maybe that's hindsight bias): 'NEVER TELL ANYONE I SAID THIS, not even in confidence'. And he was also talking about how publicly declaring himself to be a reactionary was bad strategy for PR reasons ("becoming a reactionary would be both stupid and decrease my ability to spread things to non-reactionary readers"). (He also discusses how he writes about this stuff partly because it drives blog traffic. Not shameful in itself, but I think people in EA sometimes have an exaggerated sense of Scott's moral purity and integrity that this sits a little awkwardly with.)

Overall, I think his private talk on this paints a picture of someone who is too cautious to be 100% sure that Black people have genetically lower IQs, but wants other people to increase their credence in that to >50%, and is thinking strategically (and arguably manipulatively) about how to get them to do

I think any discussion of race that doesn't take the equality of races as a given will be considered inflammatory. And regardless of the merits of the arguments, they can make people uncomfortable and choose not to associate with EA.

I think collections like this are helpful, but it's misleading to say this one presents the "frontier of publicly available knowledge."

Taking just the first section on moral truth as an example, it seems like a huge overstatement to say this collection of podcasts and forum posts gets people to the frontier of this subject. Philosophers have spent a long time on this, writing thousands of papers. And at a glance, it seems like all of the OP's linked resources don't even intend to give an overview of the literature on meta-ethics. They instead present their own pers... (read more)

The concept of self-esteem has a somewhat checkered history in psychology. Here, an influential review paper finds that self-esteem leads people to speak up more in groups and to feel happier. But it fails to have consistent benefits in other areas of life such as educational/occupational performance or violence. And it may have detrimental effects, such as risky behavior in teens.

Overall, the benefits of high self-esteem fall into two categories: enhanced initiative and pleasant feelings. We have not found evidence that boosting self-esteem (by therapeuti

... (read more)
4
Sean Sweeney
3d
Thanks for the comment and the link to the review paper!

I think most people, including researchers, don't have a good handle on what self-esteem is, or at least what truly raises or lowers it - I would expect the effect of praise to be weak, but the effect of promoting responsibility for one's emotions and actions to be strong. The closest to my views on self-esteem that I've found so far are those in N. Branden's "Six Pillars of Self-Esteem" - the six pillars are living consciously, self-acceptance, self-responsibility, self-assertiveness, living purposefully, and personal integrity.

Unfortunately, because many researchers don't follow this conception of self-esteem, I tend not to trust much research on the real-world effects of self-esteem. Honestly, though, I haven't done a hard search for any research that uses something close to my conception of self-esteem, and your comment has basically pointed out that I should get on that, so thank you!

FWIW standard conceptions of existential risk would categorize suffering risks as a type of existential risk. For example, Nick Bostrom has defined it as "threats that could cause our extinction or destroy the potential of Earth-originating intelligent life." (emphasis mine)

I think indoctrination (at least among adults) is actually surprisingly difficult. The psychologist Hugo Mercier was recently on the 80,000 Hours podcast to discuss why.

And the other thing which has had much more dramatic consequences is the idea of brainwashing: the idea that if you take prisoners of war and you submit them to really harsh treatment — you give them no food, you stop them from sleeping, you’re beating them up — so you make them, as you are describing, extremely tired and very foggy, and then you get them to read Mao for hours and hours on

... (read more)

FWIW this review discusses why Rutger Bregman's book is deeply flawed.

"That he felt the need to misrepresent the past and other cultures in order to provide a ‘hopeful’ history is rather a message of despair. Bregman presents hunter-gatherer societies as being inherently peaceful, antiwar, equal, and feminist likely because these are commonly expressed social values among educated people in his own society today. This is not history but mythology."

Interesting post! I think what would have made this more helpful would be a discussion of the kinds of arguments that led you to change your mind in each case. For example, you note that you were convinced of universal prescriptivism but then later came to reject it. A brief discussion of the relevant arguments for/against would be interesting!

[What] is required of the philosopher is also to provide grounding or to think about grounding upon which the intuitions pointed to by a thought experiment are consistent.

Why can't a philosopher just present a counterexample? In fact, it seems arguing from a specific alternative grounding would make Timmerman's argument weaker. As he notes (emphasis mine):

I have purposefully not made a suggestion as to how many (if any) children Lisa is obligated to rescue. I did so to make my argument as neutral as possible, as I want it to be consistent with any normativ

... (read more)

I don't quite understand your objection to Timmerman's thought experiment. You say it's "ad hoc" and "justifies our complacency arbitrarily", but it's unclear what you mean by these terms. And it's unclear why someone should agree that it's ad hoc and arbitrary.

2
JacobBowden2023
3mo
This is a fair criticism; my construction of this post was fairly rushed, and I did consider this as an issue with it myself. I think what I am trying to get at is that it is all well and good to throw doubt upon Singer's principle with another thought experiment, but what is required of the philosopher is also to provide grounding or to think about grounding upon which the intuitions pointed to by a thought experiment are consistent - Singer does this, but I do not think that Timmerman does.

This seems like a good summary! Was this downvoted merely because of a wrong pronoun?

I don't think advocating for libertarian socialism or anarcho-communism is a tractable way to improve the world. I also think it's not at all obvious that it would even be desirable.

And in the US at least, the term "social justice" has become extremely politically loaded, and I think it would be unwise for EAs to explicitly associate themselves with the term.

For someone not familiar with Farrell's work, what's the main problem with it?

I appreciate the post, though I think the "The universe is meaningless" section wasn't so convincing. The universe is meaningless because we're the product of natural selection? I would want a better argument than that.

FWIW I think it's still the case that psychologists/neuroscientists are nowhere near developing an accurate lie detector. And the paper you cite doesn't seem to support the claim that lie detection technology is accurate. From the abstract (emphasis mine):

Analyzing the myriad issues related to fMRI lie detection, the article identifies the key limitations of the current neuroimaging of deception science as expert evidence and explores the problems that arise from using scientific evidence before it is proven scientifically valid and reliable. We suggest th

... (read more)

I'm personally skeptical that we'll ever "solve" what the neural basis of sentience is. That said, I think there are still some promising ways a better understanding of psychology can advance standard EA causes. Here's a paper that goes into more depth on this issue:
https://pubmed.ncbi.nlm.nih.gov/35981321/

But for the paradox's setup to make sense, the player must have, in some sense, made his decision before the prediction is made: he is either someone who is going to take both boxes or someone who is just going to take the opaque box.

 

This doesn't seem correct. It's possible to make a better than random guess about what a person will decide in the future, even if the person has not yet made their decision.

This is not mysterious in ordinary contexts. I can make a plan to meet with a friend and justifiably have very high confidence that they'll show up at the agreed time. But that doesn't preclude that they might in fact choose to cancel at the last minute.

-7
Manbearpanda
1y

I suppose I agree that humanity should generally focus more on catastrophic (non-existential) risks.

That said, I think this is often stated explicitly. For example, MacAskill in his recent book says that many of the actions we take to reduce x-risks will also look good even for people with shorter-term priorities.

Do you have any quote from someone who says we shouldn't care about catastrophic risks at all?

1
mikbp
1y
I'm not saying this. And I really don't see how you came to think I do. The only thing I say is that I don't see how anyone would argue that humanity should devote less effort to mitigating a given risk just because it turns out that it is not actually existential, even though it may be more than catastrophic. Therefore, finding out if a risk is actually existential or not is not really valuable. I'm not saying anything new here; I made this point several times above. Maybe it is not very clearly done, but I don't really know how to state it differently.

Maybe a more realistic example would be helpful here. There have been recent reports claiming that, although it will negatively affect millions of people, climate change is unlikely to be an existential risk. Suppose that's true. Do you think EAs should devote as much time and effort preventing climate change-level risks as they do preventing existential risks?

1
mikbp
1y
Let's speak about humanity in general and not about EAs, because where EA focuses does not depend only on the degree of the risk. Yes, I don't think humanity should currently devote less effort to preventing such risks than to x-risks. Probably the point is that we are doing way too little to tackle dangerous non-immediate risks in general, so it does not make any practical difference whether the risk is existential or only almost existential. And this point of view does not seem controversial at all; it is just not explicitly stated. It is not just non-EAs that are devoting a lot of effort to prevent climate change; an increasing fraction of EAs do as well.

I found this post insightful! Although it's short, I'd recommend providing a brief heading for each section for readers who are heavy skimmers.

I'm not sure I understand your point then...

Surely a future in which humanity flourishes over the long term is better than one where people are living as "ants." And if we have uncertainty about which path we're on, and there are plausible reasons to think we're on the ant path, it can be worthwhile to figure that out so we can shift in a better direction.

1
mikbp
1y
Exactly. Even if the ant path may not be permanent, i.e. if we could climb out of it. My point is that, in terms of the effort I would like humanity to devote to minimise this risk, I don't think it makes any difference whether the ant state is strictly permanent or we could eventually get out of it. Maybe if it were guaranteed, or even "only" very likely, that we could get out of this ant state, I could understand devoting less effort to mitigating this risk than if we thought the AGI would eliminate us (or the ant state were inescapable). If we agree on this, the fact that a risk is actually existential or not is in practice close to irrelevant.

For example: Does it make any difference whether a non-alligned superintelligent AGI will actively try to kill all humanity or not? If we are certain that it won't, we would still live in a world where we are the ants and it is humanity.

 

This misunderstands what an existential risk is, at least as used by the philosophers who've written about this. Nick Bostrom, for example, notes that the extinction of humanity is not the only thing that counts as an existential risk. (The term "existential risk" is unfortunately a misnomer in this regard.) Something that drastically curtails the future potential of humanity would also count.

0
mikbp
1y
  ;-)

I have no idea; I've spent less than half an hour looking into this. The Cochrane Review shows that there's maaaybe an advantage to water flossing, but there just haven't been that many studies on it. And the studies do assume that participants are flossing/water flossing at the same frequency. If the pleasant sensation you get from water flossing motivates you to keep doing it, I think that's great!

1
BenSchifman
1y
Thanks! I agree there isn't definitive evidence about water vs. other flossing. For me it is so much easier to do water flossing that I would favor it even if it were equal or even slightly less effective than the alternative. I think my prior is that anything that mechanically moves plaque and food particles from in between your teeth - be it water, "regular" floss, or something else - is going to work. It probably depends as much on your technique as on the underlying mechanism, and so I think this would be hard to effectively study.

I like this list!

Just a heads up for the studies about water flossing:

Two of them were funded by WaterPik, and another is published in the "Journal of Baghdad College of Dentistry," which looks... suspicious from my naive perspective.

A recent Cochrane Review compares toothbrushing alone against toothbrushing + water flossing (aka "oral irrigating"):

Very-low certainty evidence suggested oral irrigators may reduce gingivitis measured by GI at one month (SMD -0.48, 95% CI -0.89 to -0.06; 4 trials, 380 participants), but not at three or six months. Low-certain

... (read more)
4
Larks
1y
Thanks for sharing. Do you think it is plausible that water flossing might be actively worse than regular flossing? I ask because I find water flossing much more pleasant and less aversive, so would favour it even if the evidence suggested it was only as effective.

Upvoted for curating a list of other product recommendations. Very helpful!

Just to explain why I downvoted this:

I thought that the post could be summarized as: Sometimes it's rational to change one's mind quickly.

I agree that's true, but I don't see it as especially insightful. And the idea of listing out more ways in which people might be irrational isn't all that neglected. See, for example, the Wikipedia page on biases, which lists hundreds(?) of biases.

I'd prefer to see more substantive posts on this forum.

I'm sorry to hear that you've experienced sexism both within and outside EA.

Just to clarify your view, you said that:

there is data to suggest the variability hypothesis may be true in some places and for certain kinds of intelligence.

But an implication of the hypothesis is that men will make up a greater proportion of the "intelligent" people in those places for those kinds of intelligence.
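To make the statistical step explicit (with purely illustrative numbers, not figures from this thread): suppose both groups' scores are normally distributed with the same mean of 100, but standard deviations of 15 and 13. The share above a high cutoff of 130 is then

$$1 - \Phi\!\left(\frac{130-100}{15}\right) \approx 2.3\% \quad \text{versus} \quad 1 - \Phi\!\left(\frac{130-100}{13}\right) \approx 1\%,$$

so a difference in variance alone, with identical means, produces a larger proportion of one group in the upper tail.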

Do you think it would be fine to use this information as a prior in those contexts?

Even if there are cases in which it would theoretically be reasonable to employ different priors for men vs. women, I doubt people will be able to reliably identify these cases, choose appropriate priors, and correctly apply the priors they've chosen. When you couple these challenges with the fact that there are significant downsides associated with trying to discriminate in a principled way (e.g., harming people, alienating people, creating self-fulfilling prophecies, making it harder for members of an already disadvantaged group to succeed, etc.), it seems like a bad idea to base priors on the variability hypothesis in basically any context.

One trouble I've always had with the capabilities approach is with how one figures out what counts as a capability worth having. For example, I agree it's good for people to be able to choose their career and to walk outside safely at night. But it seems to me like this is precisely because people generally have strong preferences about what career to have and about their safety. If there were a law restricting people from spinning in a circle and clapping their hands exactly 756 times, this would be less bad than restricting people from walking outside at ... (read more)

2
BrownHairedEevee
1y
I think the capabilities approach can be reframed as a form of multi-level utilitarianism. Capabilities matter, but why? Because they contribute to well-being. How do we prioritize among capabilities? Ask people what capabilities matter to them and prioritize the ones that matter more to more people.[1] Why do we prioritize the ones that matter more to more people? Because they have a greater impact on aggregate well-being. Here, we're using various decision procedures that differ from the archetypal utilitarian calculus (e.g. the capabilities approach, soliciting people's preferences), but the north star is still aggregate utility.

1. ^ From the OP: "The third approach, which I personally prefer, is to not even try to make an index but instead to track various clearly important dimensions separately and to try to be open and pragmatic and get lots of feedback from the people 'being helped.'"
2
Sam Battis
1y
I think that for consequentialists, capability-maximization would fall into the same sphere as identifying and agitating for better laws, social rules, etc. Despite not being deontologists, sophisticated consequentialists recognize the importance of deontological-type structures, and thinking in terms of capabilities (which seem similar to rights, maybe negative rights in some cases like walking at night) might be useful in the same way that human rights are useful - as a tool to clarify one's general goals and values and interpersonally coordinate action.

Good questions.

I tried to address the first one in the second part of the Downsides section. It is indeed the case that while the list of capability sets available to you is objective, your personal ranking of them is subjective and the weights can vary quite a bit. I don't think this problem is worse than the problems other theories face (turns out adding up utility is hard), but it is a problem. I don't want to repeat myself too much, but you can respond to this by trying to make a minimal list of capabilities that we all value highly (Nussbaum), or... (read more)

As someone who leans towards hedonistic utilitarianism, I would agree with this impression. It seemed like the post asserted that utilitarianism must be true and that alternative intuitions could be dismissed without any good corresponding argument.

I would also add that there are many different flavors of utilitarianism, and it's unclear which, if any, is the correct theory to hold. This podcast has a good breakdown of the possibilities.

https://clearerthinkingpodcast.com/episode/042

I think this post makes many correct observations about the EA movement, but it draws the wrong conclusions.

For example, it's true that EAs will sometimes use uncommon phrases like "tail-risk" and "tractability". But that's because these are important concepts! Heck, just "probability" is a word that might scare off most people too. But it would be a mistake to water down one's language to attract as many people as possible.

More generally, the EA movement isn't trying to grow as fast as possible. It's not trying to get everyone's attention. Instead, ... (read more)

Yes, I think it would be best to hold off. I think you'll find MacAskill addresses most of your concerns in his book.

I think you keep misinterpreting me, even when I make things explicit. For example, the mere fact that X is good doesn’t entail that people are immoral for not doing X.

Maybe it would be more productive to address arguments step by step.

Do you think it would be bad to hide a bomb in a populated area and set it to go off in 200 years?

-5
Noah Scales
2y
[comment deleted]

If you agree we should help those who will have moral status, that's it. That's one of the main pillars of longtermism. Whether or not present and future moral status are "comparable" in some sense is beside the point. The important point of comparison is whether they both deserve to be helped, and they do.

1
Noah Scales
2y
I agree that we should help those who have moral status now, whether those people are existing or just will exist someday. People who will exist someday are people who will exist in our beliefs about the pathway into the future that we are on.

There is a set of hypothetical future people on pathways into the future that we are not on. Those pathways are of two types:

* pathways that we are too late to start down (impossible future people)
* pathways that we could still start down (possible future people or plausible future people)

If you contextualize something with respect to a past time point, then it is trivial to make it impossible. For example, "The child I had when I was 30 is an impossible future person." With that statement, I describe an impossible person because I contextualized its birth as occurring when I was 30. But I didn't have a child when I was 30, and I am almost two decades older than 30. Therefore, that hypothetical future person is impossible.

Then there's the other kind of hypothetical future person, for example, a person that I could still father. My question to you is whether that person should have moral status to me now, even though I don't believe that the future will be welcoming and beneficial for a child of mine.

If you believe that a hypothetical future child does have moral status now, then you believe that I am behaving immorally by denying it opportunities for life, because in your belief the future is positive and my kid's life will be a good one, if I have the kid. I don't like to be seen as immoral in the estimation of others who use flawed reasoning. The flaw in your reasoning is that the hypothetical future child that I won't have has moral status and that I should act on its behalf even though I won't conceive it.

You could be right that the future is positive. You are wrong that the hypothetical future child has any moral status by virtue of its future existence when you agree that the child might not ever exist

Longtermists think we should help those who do (or will) have moral status.

1
Noah Scales
2y
Oh, I agree with that, but is "future moral status" comparable to or the same as "present moral status"?

No, it's because future moral status also matters.

1
Noah Scales
2y
Huh. "future moral status" Is that comparable to present moral status in any way?

FWIW this article has a direct account of persistence hunting among the Tarahumara. It also cites other accounts of persistence hunting among the Kalahari and Saami.

I think they will have moral status once they exist, and that's enough to justify acting for the sake of their welfare.

1
Noah Scales
2y
Do you believe that:

1. possible future people have moral status once they exist
2. it's enough that future people with moral status are possible to justify acting on their behalf

I believe point 1.

If you believe point 2, is that because you believe that possible future people have moral status now?

I disagree with the certainty you express, I'm not so sure, but that's a separate discussion, maybe for another time.

I haven't expressed certainty. It's possible to expect X to happen without being certain X will happen. Example: I expect for there to be another pandemic in the next century, but I'm not certain about it.

I assume then that you feel assured that whatever steps you take to prevent human extinction are also steps that you feel certain will work, am I right?

No, this is incorrect for the same reason as above.

The whole point of working on existen... (read more)

1
Noah Scales
2y
OK, so you aren't so sure that lots of humans will live in the future, but those possible humans still have moral status, is that right?

Yes, I do expect the future to contain future people. And I think it's important to make sure their lives go well.

Another crux seems to be that you think helping future people will involve some kind of radical sacrifice of people currently alive. This also doesn't follow.

Consider: People who are currently alive in Asia have moral status. People who are currently alive in Africa have moral status. It doesn't follow that there's any realistic scenario where we should sacrifice all the people in Asia for the sake of Africans or vice versa.

Likewise, there are actions we can take to help future generations without the kind of dramatic sacrifice of the present that you're envisioning. 

1
Noah Scales
2y
OK then! If you believe that the future will contain future people, then I have no argument with you giving those future people moral status equivalent to those alive today. I disagree with the certainty you express, I'm not so sure, but that's a separate discussion, maybe for another time. I do appreciate what you've offered here, and I applaud your optimistic certainty. That is what I call belief in a future.

I assume then that you feel assured that whatever steps you take to prevent human extinction are also steps that you feel certain will work, am I right?

EDIT: Or you feel assured that one of the following holds:

* whatever steps someone takes will prevent human extinction,
* humanity will survive catastrophic events, no matter the events
* existential risks will not actually cause human extinction, maybe because they are not as threatening as some think

But why do longtermists think that the future should contain many billions of people and that it is our task to make those people's lives happier?

Different longtermists will have different answers to this.  For example, many people think they have an obligation to make sure their grandchildren's lives go well. It's a small step from there to say that other people in the future besides one's grandchildren are worth helping.

Or consider someone who buries a bomb in a park and sets the timer to go off in 200 years. It seems like that's wrong even though n... (read more)

1
Noah Scales
2y
OK, thanks for the response. Yes, well, perhaps it's true that longtermists expect that the future will contain lots, many billions or trillions, of future people.

I do not believe:

* that such a future is a good or moral outcome.
* that such a future is a certain outcome.

I'm still wondering:

* whether you believe that the future will contain future people.
* whether people that you believe are hypothetical or possible future people have moral status

I think I've said this a few times already, but the implication of a possible future person having moral status is that the person has moral status comparable to people who are actually alive and people who will definitely be alive. Do you believe that a possible future person has moral status?

I'm not following the reasoning for most of your claims, so I'll just address the main claims I understand and disagree with.

If longtermists truly believe that the future will contain a lot of people, then they consider that future inevitable.

This doesn't follow. There's a difference between saying "X will probably happen" and "X will inevitably happen."

Compare: Joe will probably get into a car accident in the next 10 years, so he should buy car insurance.

This is analogous to the longtermist position: There will probably be events that test the resilience ... (read more)

1
Noah Scales
2y
It is not contradictory for you or for longtermists to work against the extinction of the human race while you believe that the human race will continue, provided you think that those actions to prevent extinction are a cause of the continuation of the human race and that you believe those actions will be performed (not could be performed). A separate question is whether those actions should be performed.

I believe that longtermists believe that the future should contain many billions of people in a few hundred years, and that those hypothetical future people have moral status to longtermists. But why do longtermists think that the future should contain many billions of people and that it is our task to make those people's lives happier?

I think the normal response is "But it is good to continue the human race. I mean, our survival is good, the survival of the species is good, procreating is good, we're good to have in the universe. Taking action toward saving our species is good in the face of uncertainty even if the actions could fail, maybe some people would have to sacrifice so that our species continues, but our species is worth it. Eventually there can be trillions of us, and more of us is better provided humans are all doing well then" but those are not my morals.

I want to be clear: we current humans could all live long happy lives, existing children could grow up, and also live long happy lives, existing fetuses could mature to term and be born, and live to a ripe old human age, long and happily. So long as no one had any more children, because we all used contraception, our species would die out. I am morally ok with that scenario. I see no moral contradiction in it. If you do, let me know.

What is worrisome to me is that the above scenario, if it occurred in the context of hypothetical future people having moral status, would include the implication that those people who chose to live well but die childless, were all immoral. I worry that longtermi

I don't think I'm following your reasoning.

It's true that longtermists expect for there to be many people in the future, but as far as I'm aware, no one has suggested taking any actions to make that number as large as possible. And no one has suggested we sacrifice the current 8 billion people alive today for some potential future benefit.

The main recommendations are to make sure that we survive the next century and that values aren't terrible in the future. This doesn't at all entail that people should hoard resources and fend for themselves. 

0
Noah Scales
2y
So, if longtermists believe that the far future will contain many people, then they do not feel the need to work against our extinction, correct? When I say believe that the future will contain a lot of people, that is in fact what I mean. If longtermists truly believe that the future will contain a lot of people, then they consider that future inevitable. Is that what you think longtermists believe? If you understand the question, I think that your answer will be no, that longtermists do not in fact believe that the future will contain a lot of people, or else they would not include human extinction as a plausible scenario to take actions to avoid.

It is an implication of longtermist thought that a future of extremely large numbers of people, when created as a goal, provides moral weight to longtermist actions taken toward the goal of large numbers of future people. Those longtermist actions can be in contradiction to the well-being of present people, but on balance be moral if: you consider a hypothetical future person to have moral status. Such hypothetical future people are people who are not yet conceived, that is, have not yet been a fetus.

A concern to me is whether longtermists believe that hypothetical future people have moral status. If they do, then any longtermist action can potentially be justified in terms of the well-being of those presumed people. Furthermore, if you do in fact believe that those people will exist (not could exist, will exist), then it makes complete sense to give those people moral status. It's when the existence of those presumed people is a choice, or a possibility, or a desired scenario, but not the only plausible future, that the decision to give those people moral status in moral decision-making is in error, and self-serving besides.

I will offer without evidence, because I consider it empirically obvious, that for selfish reasons people will hoard resources and allow outsiders to fend for themselves, particularly i

I appreciate the fact that you took the time to reflect on what you've heard about longtermism. That said, I'll highlight areas where I strongly disagree.

It is obvious that any plan that sacrifices the well-being of the present human population to serve a presumed larger future human population will not be morally justifiable

This is not at all obvious to me. We make justifiable sacrifices to well-being all the time. Consider a hospital that decides not to expend all of its resources on its current patients because it knows there will be future patients in ... (read more)

1
Noah Scales
2y
Thanks for the response. My writing was a bit sloppy, so I need to add the context from the previous sentence of my original post. The full quote is: "Longtermists do not guarantee that the far-future will contain lots of people, but only that it could. It is obvious that any plan that sacrifices the well-being of the present human population to serve a presumed larger future human population will not be morally justifiable..." and what I meant was that longtermists would like the future to contain lots of people. That seems evident to me in their hopeful discussions of our potential as a space-colonizing civilization numbering in trillions, or our transcendence into virtual worlds populated by trillions of conscious AI, etc.

If the size or presence of that future population is a choice, then sacrifice of the already large present human population for the sake of a future population of optional size is not morally justifiable. I am referring to situations where the 8 billion people on the planet are deemed morally less important than a larger future population whose existence is contingent on plans that ignore the well-being of the present population to some extent. For example, plans that allocate resources specifically to a (relatively) small subset of the global population in order to ensure against the collapse of civilization for that subset.

I believe that our population size raises both our suffering and extinction risks unless we endorse and pursue lifestyle efficiencies specifically to reduce those risks (for example, by changing our food production practices). If we do, then I believe that we are doing what is morally right in protecting our entire global population. The alternative appears to depend on the presumption that so long as we protect our species, each subset of our civilization can hoard resources and fend for itself. In our globally connected economies, that approach will create unintentionally small, and vulnerable, subsets. The

Then it becomes a choice of accepting the VNM axioms or proposition 3 above.

Like I said, I agree that we should reject 3, but the reason for rejecting 3 is not that it is based on intuition (or based on a non-fundamental intuition). The reason is that it's a less plausible intuition relative to others. For example, one of the VNM axioms is transitivity: if A is preferable to B, and B is preferable to C, then A is preferable to C.

That's just much more plausible than Yitz's suggestion that we shouldn't be "vulnerable to adversarial attacks" o... (read more)

1
Transient Altruist
2y
Yes this is exactly what I'm saying

Consider three propositions:

  1. Expected value theory is true.
  2. Expected value theory recommends sometimes taking bets that we expect to lose.
  3. We should not adopt decision theories that recommend sometimes taking bets that we expect to lose.

You reject 3.

Yitz rejects 1.

This is not a matter of making more or fewer assumptions. Instead, it's a matter of weighing which of the propositions one finds least plausible. There may be further arguments to be made for or against any of these points, but it will eventually bottom out at intuitions.
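To make proposition 2 concrete (with purely illustrative numbers): consider a bet that costs $1 and pays $1,000 with probability 0.01. Its expected value is

$$\mathbb{E}[\text{net payoff}] = 0.01 \times \$999 - 0.99 \times \$1 = \$9 > 0,$$

so expected value theory recommends taking the bet, even though you lose your $1 with probability 0.99; that is, you expect to lose.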

1
Transient Altruist
2y
Oh wait, sorry, I got confused with a totally different comment that did add an extra assumption. My bad... As for the actual comment this thread is about: expected value theory can be derived from the axioms of VNM-rationality (which I know nothing about, btw), whereas proposition 3 is not really based on anything as far as I'm aware; it's just a kind of vague axiom of itself. I feel we should refrain from using intuitions as much as possible except when forced to at the most fundamental level of logic - like how we don't just assume 1+1=2; we reduce it to a more fundamental level of assumptions: the ZFC axioms. In summary, propositions 1 and 3 are mutually exclusive, and I think 1 should be accepted more readily due to it being founded in a more fundamental level of assumptions.

Although I don't think Yitz's comment is persuasive, I don't think your response is either. What's the "logic-founded" reason for accepting the wager? You might say expected value theory, but then, it's possible to ask what the reason for that is, etc. It's intuition all the way down.

1
Transient Altruist
2y
That's true, but I think we need to make the smallest number of intuition-based assumptions possible. Yitz's suggestion adds an extra assumption ON TOP of expected value theory, so I would need a reason to add that assumption. Oops, I got mixed up, and that response related to a totally different comment. See my reply below for my actual response.

Good points, though it's worth noting that the people who comment on NYT articles are probably not representative of the typical NYT reader.

On Amanda Askell's site, linked in another comment by ColdButtonIssues, she gives a reason to think an evidentialist god could be more likely: "divine hiddenness" plus God making us capable of evidentialism. Roughly, the idea is to ask the question, "Why would a god want us to irrationally believe in it?"

It's also plausible that people's beliefs in a supernatural punishing/rewarding god can be explained by evolutionary/cultural factors that wouldn't reliably track the truth.

This post left a bad taste in my mouth, and I wanted to briefly touch on why:

1. You say that the right time to act is now, but this is extremely ambiguous.

What should people do now? Maybe you're referring to some of the actions mentioned later in the post like "consciously deciding if this is worth your time" and "doing research".

This reminds me of a scene from Friends where one of the characters says that he has a plan. And his plan is that someone should come up with a new plan.

And there seems to be an inconsistency in your approach. You say there is a "... (read more)

1
Alje
2y
Hello Nathan! Thank you for your reply! I appreciate the honesty and your comments are very clear. Allow me to elaborate:

1. Yes! The Friends scene where a character has a plan to make a plan (together) is exactly what I mean! I feel we are doing too little planning. Why wouldn't this be a valid argument? It is like when you're on a holiday, you've just arrived and everyone just starts doing something. One person starts building a tent, someone else is going for the dishes. You notice that everyone seems to forget to go to the camping owner to see if you're even allowed to set up your tent. It also seems that we might need to go shopping before we start cooking. I think the right thing to do in those circumstances would be to call everyone together and make a plan together.

2a. Once again, your comment is very clear. If we already have 4.9 million books, why write another one? My point is this: even if there are 4.9 million books, their content does not seem to have reached the EA community. What I'd like to see is to have a dedicated (EA) team find (and summarize) the best books and figure out the implications for EA, as I (tried to) describe in appendix 1.

2b. You suggest many different topics of research. That's great. I agree that all of these are very much worth studying. It was never my intention to limit the scope of research (on the contrary!) and I think you mention worthwhile avenues. You also mention ideas other than research. I'm not a fan of those. I feel the comparative advantage of EA is "think first, act afterwards." Also I feel that relief for refugees is lower in scope (and neglectedness).

Your summary does capture the essence of my article. However, I feel it doesn't do it justice. I still feel that noticing that EA seems to be having an "act first, think later" mindset (instead of our comparative advantage "think first, act later") is extremely important. And that appendix 1 offers both an indication of where our thinking is lacking, and indicatio
1
FCCC
2y
Wow, that essay explains strong anecdotes a lot better than I did. I knew about the low-variance aspect, but his third point and onwards made things even clearer for me. Thanks for the link!