All of ColdButtonIssues's Comments + Replies

Good point. I think you would probably only consider the direct costs to those donors (pain/morbidity/risk) and not foregone donations, since presumably the typical liver donor participating in a chain is not devoting a lot of their earnings to impactful charity.

I'm only familiar with the US system, unfortunately. I think this evaluation holds up pretty well for EAs even though it's some years old.

Yes,  I agree it's frustrating. I did a more detailed one when considering living kidney donation. Plus, living liver donation is less common.

My fast liver donation BOTEC assumes 80k working hours in a career (reduce if older?).

1 in 250 chance of death (source, maybe too high) = -320 work hours

About a month of work lost due to recovery (source) = -160 work hours.

So maybe spending 500 work hours to extend one person's life.

Ignoring time off work due to potential reimbursement, if you netted $15 per hour for the hours lost to risk of death and dona... (read more)
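The arithmetic above can be sketched in a few lines. This is a loose reconstruction: every input is the comment's own assumption (80k career hours, 1-in-250 mortality, ~160 recovery hours, $15/hour net), and since the last step of the comment is truncated, the dollar valuation shown is only one way the numbers could be combined.

```python
# Loose reconstruction of the liver donation BOTEC; inputs are the
# comment's own assumptions, not vetted figures.
career_hours = 80_000        # total working hours in a career (reduce if older)
death_risk = 1 / 250         # chance of death in surgery (maybe too high)
recovery_hours = 160         # roughly one month of work lost to recovery

death_risk_hours = career_hours * death_risk      # expected hours lost to mortality risk
total_hours = death_risk_hours + recovery_hours   # the "maybe 500 work hours" in the text

net_wage = 15                # assumed net $/hour for an earning-to-give comparison
print(death_risk_hours)                # 320.0
print(total_hours)                     # 480.0
print(death_risk_hours * net_wage)     # 4800.0, if recovery time is reimbursed
```

Swapping in NickLaing's suggested figures below (1 in 1,000 risk, $30/hour) is a one-line change to `death_risk` and `net_wage`.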

4
NickLaing
8mo
Love this BOTEC - thumbs up for more loose BOTECs on the forum. The chance of death seems too high to be realistic - better, I think, to go with 1 in 1,000, which brings your BOTEC closer to $2,000. I would at least double the earnings to $30 per hour on average, though, so then $4,000. Either way, like you say, hardly slam-dunk cost-effective. Good job, and it surprises me that this seems so borderline cost-effective. Nice one
4
Jason
8mo
Although a non-directed donation could potentially enable a significant chain of donations. I think one could count all recipients in the chain if the non-directed donation is a but-for cause of them receiving livers, but would need to include costs to all donors as well.
1
AnonymousTurtle
8mo
Do you have a link? I'm vaguely considering kidney donation, but haven't found a lot of reliable information on the cost-effectiveness, including opportunity costs. Did you also consider what would be the optimal country to donate a kidney? I expect different countries to have very different needs and donation chain opportunities, so it plausibly makes sense for me to donate a kidney in a different country.

Hi Kyle,

If you plan on donating, I think donating through UNOS's pilot program for paired liver donation is currently the highest-impact way for an American to donate a liver lobe.

I would do a BOTEC for how much benefit the recipient would get versus the expected loss of life to you due to surgery risk and long-term effects.

If you are earning to give, I would check out your employer's policy for time off for organ donation as well as the possibility for reimbursement of expenses through NLDAC (which you very well may be familiar with through your kidney expe... (read more)

1
Kyle J. Lucchese
8mo
Thanks for your comment! The UNOS pairing and BOTEC are great callouts. Fortunately, Johns Hopkins Hospital is a part of the program network. As for the BOTEC: I am going to spend more time researching across sources (including interviews and with the donor team), but finding solid data to factor in has thus far been challenging.

"The other stuff seems more reasonable but if you're going to restrict immigrants' ability to work on AI you might as well restrict natives' ability to work on AI as well. I doubt that the former is much easier than the latter."

This part of your comment I disagree on. There are specific provisions in US law to protect domestic physicians, immigrants on H-1B visas have far fewer rights and are more dependent on their employers than citizen employees, and certain federal jobs or contractor positions are limited to citizens/permanent residents. I think this is... (read more)

Thanks for the feedback. 

When I was writing this, and when I think about AI risk in general, as someone without an ML background I tend to fall back on non-technical heuristics like interest rates or the market caps of hardware companies. So I am influenced, perhaps more than a more technical person would be, by these kinds of meta or revealed-preference arguments.

I think Democrats (and left-wingers in other countries) could embrace increasing high-skilled immigration in ways that steer talent away from AI. In the US, H-1B visas could be changed to ... (read more)

2
Chris Leong
11mo
I guess my perspective is that all that these revealed preferences show is that people prefer to maintain their social status (benefit accrues to them personally) rather than support an unpopular change that is extremely unlikely to happen and where their support is extremely unlikely to make a difference (benefits are distributed). So even if I accept this method of finding truth, it actually shows less than it might appear at first glance.

More sympathetic to biosecurity issues than at the start of the year. Pretty convinced there are clear things that would be useful to do and would help a lot of people. Plus, the FTX situation cut out a lot of money that went to the general area, such as SBF's brother's group, Guarding Against Pandemics.

Sales tax: Interesting. I live in a state with sales tax but it doesn't apply to lottery tickets.

It could also make sense for people who don't itemize (and so don't benefit from the charitable deduction) but would itemize if they won the larger prizes.

I didn't downvote you. I think you're using Pascal's Mugging idiosyncratically.

Pascal's Mugging is normally for infinitesimal odds and astronomical payouts, with both odds and payouts often being really uncertain.

Here odds and payout are well-defined. The odds while extreme aren't infinitesimal.

I think we should be doing lots of things with one in a million chances. Start-ups that could change the world, promising AI research paths, running for president or prime minister. :)

4
Greg_Colbourn
1y
I say "seems like a bit of a"; I get that it's not literally infinitesimal odds (or an astronomical payout[1]), but it's small enough that it's similarly not worth going for imo.  I don't think any of these are one-in-a-million chances for most people, let alone most EAs, and if anyone goes into them thinking they are, they should be doing something else! Hundred-to-one or even thousand-to-one shots are reasonable for EAs to be making I think, but not million-to-one  (or worse). This is especially true given the current size of the community (it would make more sense going for million-to-one opportunities if there were of the order of a million other EAs going for them). 1. ^ And the odds are certain, and the payout more certain than with most Pascal's Muggings, but still not certain, as it depends on the number of other potential winners

Not quite a discipline, but I think American Christianity lost cultural influence by denominations ceding control of their colleges (based off this book).

Had the men's rights movement established men's studies as more distinct from women's studies, maybe they would have benefited (though it's hard to believe they ever had the political power to achieve this).

I can imagine a world where sociobiology became its own discipline. It did not.

I think the establishment of chiropractic schools legitimized the practice in the United States compared to other alternative medicines.... (read more)

The anecdata point is pretty interesting to me (I'm not an economist). Do you think the field would be interesting to students if it combined things like DALYs vs. QALYs, or debates about subjective life expectancy, or stuff like that?

I don't think it would be harmed by existing within normal econ departments: some normal econ departments have ag economics within them, and in other places Ag Econ is independent.

3
david_reinstein
2y
Yes, it's true that Ag Economics (e.g., at Berkeley) presents a pretty good example of the coexistence of separate departments for two very much overlapping fields. It might be worth looking in more detail at how the Ag Econ departments managed to set that arrangement up. By the way, the stuff is all potentially interesting to idealistic students. I fear I was largely teaching a group of personally-financially-motivated students. But still, I'm not sure the term "welfare economics" will bring them in. Worth doing some surveying on, perhaps

I'm skeptical of elevating children's rights in this way, because people already claim to care intensely about the value of children and their futures, but differ on how to do that. The UN wants to make it harder for kids to work, I can think of libertarians who disagree. Or education about sex and sexuality- both sides claim they are protecting children and so forth.

With more novel concepts or trying to get people to widen their circle of concern to include animals or far future generations, I think maybe that's a worthwhile way to go. But people care about kids a lot- or at least claim to!

Maybe there's some smart solution but I can't think of good ways to advance your goal. 

-1
LiaH
2y
I would be suspicious of anyone (the libertarians you describe) who claims to be protecting children by endorsing child labour. 

Thanks for writing this.  I'm interested in politics and political interventions as potential EA causes. But I do disagree with you. I think this cause is not a good use of resources because it's not tractable and because I think it wouldn't have any valuable direct effects either. (The indirect effects on EA diversity and composition are not considered in this comment.)

Tractable- you won't get 2/3 of the Senate to concur. Opposition to these treaties is standard on the right. I would be very surprised if Democrats got a majority that large in the next d... (read more)

1
LiaH
2y
I appreciate your questions on both of these points.

Tractability - Yes, I see the Senate as the roadblock, depending on the party makeup within it. Of course, lobbying for state-specific laws might be more successful, but not as comprehensive. This is the reason I am suggesting going for the big goal. It is more about universal acknowledgement of child rights as agent-less future people. Even if the Senate is destined to block it, do you see the possible value in bringing child rights to the agenda, raising the issue in the news, raising public awareness, spinning the possibility of US ratification as "American champions of child rights", or any similar secondary goals?

Value of ratification - True, ratification does not directly guarantee improved child survival or welfare. That is why I am suggesting it as "hits based". As I am sure you know, UN treaties are only as strong as the sanctions other countries choose to place on violators. If the US ratified, as a relative global power, it would carry weight in sanctions, which it cannot do now. The benefit to US children I see as a positive externality only. The goal would be in what universal consensus represents: step one in a global value change toward the importance of future people.

As someone with interest in political interventions as EA cause areas, I am curious whether you think there is a better approach?

What are the best strategies for political movements that claim to advocate for a voiceless group to take? (longtermism for future generations, animal rights for animals, pro-lifers for fetuses...)

Should groups with very niche, technocratic issues try to join a party or try to stay non-partisan? Implications for AI, biorisk, and so on.

Can Americanists come up with a measure of democratic decline that's actually decent and not just a reskinned Polity/FreedomHouse metric?

EAs love economists. Can political scientists develop concepts that get them the same af... (read more)

I was thinking the reference class was something like "people explicitly orienting their actions for the benefit of  far future generations." 

I was trying to be more specific than every good deed that also benefits the future. I didn't want to include things like "this vaccine will save our children (and future generations)" or "we will win this war against our evil enemy (and also for our children's sake)". 

What seems new about longtermism to me is not the belief that good things will have positive consequences in the long-run- "classic" EA... (read more)

2
ChristianKleineidam
2y
Anti-nuclear advocates frequently talk about the long time that certain isotopes need to decay. Stewart Brand, who came out of the environmentalist field, founded the Long Now Foundation, and there are plenty of people in that field who think similarly to him.

Thanks for the clear explanation of the downside of VSL- I learned a lot!

2
Joel Tan
2y
Glad it was useful!

Thanks for the counterexamples!

I'm trying to think of a way to get a fair example: Coding party manifestos by attention to long-term future and trying to rate their success in office? I'm really unsure.

I do think Communism was on average a more longtermist movement than democratic revolutions. Maybe the typical revolutionary in all revolutions had similar goals, but Marx and many of his followers had a vision for how history was supposed to play out, and envisioned an intermediate form of society, between the revolution and an eventual classless society.

In contrast, a lot of democratic revolutions were more like "King George bad." I don't think the American founding fathers were utopian in the same sense as a lot of Marxists.

4
Peter
2y
You don't think the Russian revolution was like "Tsar Nicholas bad"? I mean, "liberty and justice for all" sounds like a pretty strong vision of the future to me.  I guess I'd like to see more evidence that 1) there were significant differences in caring about the future between movements and 2) how these differences contributed to movement failures concretely.  If I had to guess, I'd hypothesize that there's something else that is the main factor(s), like social dominance orientation of leaders and the presence or absence of group mechanisms to resist that or channel it in less destructive ways. 

"Imagine that every time there was a big crisis in the news, some EAs produced well-researched, sensible lists of the most plausibly-effective ways for people to help with that crisis. The lists would be produced voluntarily by EAs who were passionate about or informed about the cause, and shared widely by other EAs. "

I agree that working on LICAs can be a good idea for individual EAs, and I think your examples were well-chosen. I disagree that it is a good idea for the EA community or institutions to work on LICAs.

I completely agree that addressing import... (read more)

2
Amber Dawn
2y
Yeah, that is a good point. It makes a lot of sense for EA orgs to avoid divisive issues, particularly if they are not among the most pressing anyway.  A friend pointed out elsewhere that if producing LICAs was the norm for institutions, you might end up with institutions producing recommendations on both sides of a contentious social issue - e.g., how to effectively improve abortion access, and how to effectively reduce it. This could be bad both for PR reasons (*everyone* would hate us!) and because different sets of EAs are essentially doing work that cancels each other out.

Aspiring EA researchers should consider taking a shot at winning federal bounties.

A lot of EAs are relatively young, and EA org jobs are competitive. Challenge.gov lists prizes you can compete for by developing new products or proposing strategic plans. There are options for engineers, social scientists, and communications types to build career capital, possibly win money, and make the US government better at its job.

Current prizes include a ton of AI-related stuff but also market analysis for environmental technology and math education programs.

I just find the form of the argument really unconvincing. It reads as a general argument against demanding moral theories. He has the points that

  1. Valuing embryos would require a lot of work regarding spontaneous abortion and people don't want to do what that entails.
  2. People don't act like they value embryos.

If this argument works, it also seems like we should say caring about animal welfare is absurd (how many people think we should modify the environment to help wild animals), caring about the far future is absurd, and so forth. I think in function this is a general anti-EA argument, although in practice Ord obviously did a lot to start and support EA and promote concern for the future.

2
Gregory Lewis
2y
I agree this form of argument is very unconvincing. That "people don't act as if Y is true" is a pretty rubbish defeater for "people believe Y is true", and a very rubbish defeater for "X being true" simpliciter. But this argument isn't Ord's, but one of your own creation. Again, the validity of the philosophical argument doesn't depend on how sincerely a belief is commonly held (or whether anyone believes it at all). The form is simply modus tollens:

1. If X (~sanctity of life from conception) then Y (natural embryo loss is, e.g., a much greater moral priority than HIV)
2. ¬Y (Natural embryo loss is not a much greater moral priority than (e.g.) HIV)
3. ¬X (The sanctity of life from conception view is false)

Crucially, ¬Y is not motivated by interpreting supposed revealed preferences from behaviour. Besides it being ~irrelevant ("Person or group does not (really?) believe Y -->?? Y is false"), this apparent hypocrisy can be explained by ignorance rather than insincerity: it's not like statistics around natural embryo loss are common knowledge, so their inaction towards the Scourge could be owed to them being unaware of it. ¬Y is mainly motivated by appeals to Y's apparent absurdity. Ord (correctly) anticipates very few people on reflection would find Y plausible, and so would find, if X indeed entailed Y, this would be a reason to doubt X. Again, it is the implausibility on rational reflection, not the concordance of practice to those who claim to believe it, which drives the argument.
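[Editor's note: the modus tollens schema above is, as the comment says, logically valid independent of what anyone believes. As a side note, it can be machine-checked; a minimal sketch in Lean 4, with illustrative names:]

```lean
-- Modus tollens: from (X → Y) and ¬Y, conclude ¬X.
theorem modus_tollens {X Y : Prop} (h : X → Y) (ny : ¬Y) : ¬X :=
  fun hx => ny (h hx)
```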

I'm not making any claim about the moral value of embryos. 

I just think Ord's claim, that embryo valuers' failure to care about embryos (in the right way, in all circumstances) means their general view can be discarded, is not convincing. I know tons of people who claim X but don't act on X. That doesn't mean they're wrong about X: they might be weak or hypocritical or bad at reasoning!

2
Ramiro
2y
I shouldn't have implied you made any claim about the moral value of embryos. I should have said, instead, that someone who thinks they are morally valuable would bite the bullet... And Ord thinks that's a very problematic position - it apparently implies that we should try to prevent the loss of embryos even if it happens right after conception. So it is not absurd to see his point as akin to a reductio. On the other hand, if they are "weak or hypocritical or bad at reasoning", then their failing to act on X conflicts with other beliefs - like their belief in X. They can solve the conflict by either dropping X or changing their behavior. We usually don't convince people to donate to GiveDirectly by first convincing them that suffering matters; we assume they agree that suffering matters and show them that their belief that suffering matters implies they should donate... If someone says "Oh, but now I don't think suffering matters anymore", you need to use different arguments - you have to show that this new position conflicts with other premises they defend

I'm not convinced of the act omission distinction, but I'm not ready to throw it away. 

I think one argument about wild animal suffering that might be true or might be rationalization is that there's nothing we can do for now- but you can promote general compassion to animals through activism or veganism or something.

I genuinely think he wrote what turned out to be a decent anti-EA article. 

If most people follow their moral principles, they run into really challenging situations, like confronting millions of spontaneous abortions per year. One response is to bite the bullet (rare), one is to not think about the implications of your moral commitments (common), and another is to argue that because nobody follows a principle fully, you can discard it (I dislike this approach), but it's a possible conclusion.

Instead, I think people should bite the bullet o... (read more)

2
Ramiro
2y
I was gonna add that maybe what you meant is that Ord's argument would justify a subjectivist moral theory. But I don't see anything implying that in the text. His point is more like some sort of reflective equilibrium, where one has to see which of the inconsistent moral intuitions must remain; that's how moral reasoning usually works. Your way of solving this is by biting the bullet that embryos might be the moral tragedy of our time. Others will solve the inconsistency in some other way. But one's modus ponens is someone else's modus tollens (https://www.gwern.net/Modus)... if someone replied that selfish behavior is inconsistent with altruistic principles, I'd have to agree - at least since Thrasymachus, people have used this when arguing for some sort of moral skepticism. The reasoning is logically valid; that's precisely why this position is kind of hard to retort. You can't just reply that this is wrong; you must show how it conflicts with other beliefs the person is supposed to have. Edit: on the fun side: one of the moral pros of assisted reproduction is that it helps save us from the non-identity problem, as it mitigates the contingencies of human reproduction.

Thanks, I understand the distinction you're making. I still disagree that we can reject their moral claims because they don't take their care far enough - I think animal advocates are pretty sincere even though virtually none of them ever care about wild animals. But I still think animal advocates make fair points.

5
markov_user
2y
Very good argument imo! It shows there's a different explanation rather than "people don't really care about dying embryos" that can be derived from this comparison. People tend to differentiate between what happens "naturally" (or accidentally) vs deliberate human actions. When it comes to wild animal suffering, even if people believe it exists, many will think something along the lines of "it's not human-made suffering, so it's not our moral responsibility to do something about it" - which is weird to a consequentialist, but probably quite intuitive for most people. It takes a few non-obvious steps in reasoning to get to the conclusion that we should care about wild animal suffering. And while fewer steps may be required in the embryo situation, it is still very conceivable that a person who actually cares a lot about embryos might not initially get to the conclusion that the scope of the problem exceeds abortion.
3
Guy Raveh
2y
That's a better point. Also, guilty as charged. I sometimes think about whether I should, but so far I've always come to the conclusion that either there's a major moral difference (e.g. our direct responsibility for the suffering, or the moral importance of nature), or that interventions to meaningfully change wild animal suffering are bound to have devastating side effects.

People who claim to care about embryos may oppose abortion or even support embryo adoption- does their failure to care about spontaneous abortions discredit them? They're not doing all they can.

People who claim to care about animals may be vegans- does their failure to become an animal rights advocate discredit them? They're not doing all they can.

People who claim to care about the global poor may donate money- but do they donate all their money? They're not doing all they can.

I reject the form of this argument. People are hypocrites and moral failures- they can still be correct  in their claims.

5
Guy Raveh
2y
I'm not asking whether they're doing something about spontaneous abortion (maybe they can't or have other priorities), but whether they even care about it. I think that is a measurement of the seriousness of their professed belief.

Markets (and policymakers) have to make decisions all the time. They might not be perfect, but I'm not aware of a better gauge of nuclear risk, or at least a gauge I find more trustworthy. Another interpretation of Buiter's article is that he's just wrong.

I think climate change is unlikely to rise to the level of a global catastrophic risk. But even if it were, it would still leave open the question of which party an individual should join to most improve the situation.

As I wrote:"I would urge them to think on the margin. The more awesome the Democratic Party is o... (read more)

1
Ember
2y
The first part isn't an argument; it's just a dismissal. You haven't engaged with anything I've said on it. You have to discount expert opinion in favor of market trends in order to hold this position, in a context where market trends are particularly suspect. The second part denies the broad scientific consensus on the threat of climate change. Here's a quote from the UN on climate change: "the UN Secretary-General insisted that unless governments everywhere reassess their energy policies, the world will be uninhabitable." Do you take the UN to be untrustworthy? https://news.un.org/en/story/2022/04/1115452#:~:text=A%20new%20flagship%20UN%20report,limit%20global%20warming%20to%201.5 The argument over which party is better to join as an individual cannot be had without the recognition that the GOP being in power is a global catastrophic risk. Otherwise, we risk losing track of what the actual risk factors are. Separately, this assumes the Democrats are good on their issues, when at best they are painfully mediocre.

"On the nuclear risk front, the republican party is also clearly horrific."

If the GOP were worse on nuclear risk, wouldn't we expect property values in major cities to crash when a Republican is elected president (because the risk of being nuked goes up) and soar when a Democrat is elected? Or wouldn't we expect other countries to start shifting resources to fallout shelters and nuclear winter prep when a Republican is elected? I've never seen that happen.

I'm not aware of any market response or other responses due to swings in nuclear war risk that would coincide with presidential elections according to your model. I don't think there's a predictable difference in nuclear war risk between the parties. 

1
Ember
2y
No, there's no reason to expect nuclear risk to meaningfully affect property values, or that other countries would respond appropriately. A nuclear war has never happened, so without a clear understanding of the epistemic problems involved (which both markets and countries are failing to take into account), nuclear risk isn't really taken as a meaningful factor. I might do a deep dive on this later, as I haven't read up that much on the subject, but a quick Google brought me to this article about how markets fail to capture the increased nuclear risk from the war in Ukraine: https://www.project-syndicate.org/commentary/asset-markets-ignoring-nuclear-risk-by-willem-h-buiter-2022-03 Additionally, lots of these things increase nuclear risk beyond the term of the given political figure. Donald Trump is out of office, but the Iran Nuclear Deal is still dead. Do you not think that the Iran Nuclear Deal decreased nuclear risk? What about Trump's statement that he would threaten nuclear war? Do you think he's lying? You have to make a ton of ad hoc assumptions to maintain this position. Also, do you agree that their position on climate change makes their being in power a global catastrophic risk? It seems like the point is unarguable, but you are welcome to give pushback.

Great article. I had thought about similar reasoning in terms of X-risk, but thinking about how it applies to catastrophes that reduce the number of observers is important, too.

Hadn't thought about this in terms of nuclear risk, either.

1
Ember
2y
I'm glad you enjoyed it :) 

I think it's an open question as to how much we can learn from failed apocalyptic predictions- https://forum.effectivealtruism.org/posts/2MjuJumEaG27u9kFd/don-t-be-comforted-by-failed-apocalypses

Also, with the possible exception of the earliest Christians, who were hoping for an imminent second coming, I'm quite sure most Christians have not predicted an imminent apocalypse, so we're talking about specific sects and pastors (admittedly some quite influential). You do say you're talking about early Christians at the start of the article, but I think the ... (read more)

2
Ember
2y
Anthropic shadow certainly creates some degree of uncertainty; however, it seems to apply less in this case than it might in, say, the case of nuclear war. (I'm actually about to submit another piece about anthropic shadow in that case.) It seems like AI development wasn't slowed down by a few events, but rather by overarching complexities in its development. It's my understanding that anthropic shadow is mostly applicable in cases where there have been close calls, and is less applicable in non-linear cases. However, I might be mistaken. The conclusion doesn't read this way to me; the statement "apocalyptic claims made by Christians" doesn't imply to me that all Christians make apocalyptic claims. However, it does seem to have created unnecessary confusion, so I will add the word "some".

"When I encounter questions like “is a world where we add X many people with Y level of happiness better or worse?” or “if we flatten the happiness of a population to its average, is that better or worse?”—my reaction is to reject the question.

First, I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds."

You're a Senator. A policy analyst points out a new proposed tax reform will boost birth rates- good or bad?

You're an advice columnist- people write you questions about starting a family. All else equal, d... (read more)

2
jasoncrawford
2y
These are good examples. But I would not decide any of these questions with regard to some notion of whether the world was better or worse with more people in it.

* Senator case: I think social engineering through the tax code is a bad idea, and I wouldn't do it. I would not decide on the tax reform based on its effect on birth rates. (If I had to decide separately whether such effects would be good, I would ask what is the nature of the extra births? Is the tax reform going to make hospitals and daycare cheaper, or is it going to make contraception and abortion more expensive? Those are very different things.)
* Advice columnist: I would advise people to start a family if they want kids and can afford them. I might encourage it in general, but only because I think parenting is great, not because I think the world is better with more people in it.
* Pastor: I would realize that I'm in the wrong profession as an atheist, and quit. Modulo that, this is the same as the advice columnist.
* Redditor: I don't think people should put pressure on their kids, or anyone else, to have children, because it's a very personal decision.

All of this is about the personal decision of the parents (and whether they can reasonably afford and take care of children). None of it is about general world-states or the abstract/impersonal value of extra people.

"You start about by talking about Masons and Elks but later reference American fraternal organizations. I don’t know if you include sororities in this, but I would suggest bringing sororities to parity with the rules of fraternities."

These are not college fraternities, they are general fraternities which almost all accept men and women.

-1
Yelnats T.J.
2y
Thanks for the clarification!

I think this is true for some people, but not for most people. Religion seems helpful for happiness, health, having a family, etc which are some of the most common terminal goals out there.

1
Mart_Korz
11mo
This is a good point, although I would argue that the reasons why practicing religion has these advantages are unrelated to its being a case of Pascal's wager (if we let Pascal's wager stand for promises of infinite value in general).

I would say that if we use other people's judgment as a guide for our own, it's an argument for belief in the divine/God/the supernatural, and it becomes hard to say Christianity and Islam have negligible probability. So rules like "ignore tiny probabilities" don't work. Your idea of discounting probability as utility rises still works, but we've talked about why I don't think that's compelling enough.

I don't have good survey evidence on Pascal's Wager, but I think a lot of religious believers would agree with the general concept- don't risk your soul,... (read more)

I'm not against it- I think it's an okay way of framing something real. Your phrasing here is pretty sensible to me. 

"Let's say we could identify exemplary societies across the past, present, and future. Furthermore, assume that, on some questions, these societies had a consensus common sense view. Finally, assume that, in some cases, we can predict what that intertemporal consensus common sense view would be.

Given all three of these assumptions, then I think we should consider adopting that point of view."

But I have concerns about the future perspect... (read more)

2
DirectedEvolution
2y
"My view makes perfect sense, contemporary culture is crazy, and history will bear me out when my perspective becomes a durable new form of common sense" is a statement that, while it scans as arrogant, could easily be true - and has been many times in the past. It at least explains why a person who subscribes to "social intelligence" as a guide might still hold many counterintuitive opinions. I agree with you, though, that it's not useful for settling disputes when people disagree in their predictions about "universal common sense." If you believe that current and past common sense is a better guide, then doesn't that work against Pascal's Wager? I mean, how many people now, or in the past, would agree with you that Pascal's Wager is a good idea? I think it has stuck around in part because it's so counterintuitive. We don't exactly see a ton of deathbed conversions, much less for game-theoretic reasons.

I think arguing that current view X is justified because one imagines future generations will also believe X is really unconvincing.

I think most people think their views will be more popular in the future. Liberal democrats and Communists have both argued that their view would come to dominate the world. I don't think it adds anything other than illustrating that the speaker is very confident of the merits of their worldview.

If for instance, demographers put together an amazing case that most future humans would be Mormon, would you change your mind? If you became convinced that AI would kill humanity next decade and we're in the last generation so there are no future humans, would you change your mind?

2
DirectedEvolution
2y
I've had a little more chance to flesh out this idea of "universal common sense." I'm now thinking of it as "the wisdom of the best parts of the past, present, and future." Let's say we could identify exemplary societies across the past, present, and future. Furthermore, assume that, on some questions, these societies had a consensus common sense view. Finally, assume that, in some cases, we can predict what that intertemporal consensus common sense view would be. Given all three of these assumptions, I think we should consider adopting that point of view.

In the AI doom scenario, I think we should reject the common sense of the denizens of that future on matters pertaining to AI doom, as they weren't wise enough to avoid doom. In the Mormon scenario, I think that if the future is Mormon, then that suggests Mormonism would probably be a good thing. I generally trust people to steer toward good outcomes over time. Hence, if I believed this, then that would make me take Mormonism much more seriously.

I have a wide confidence interval for this notion of "universal common sense" being useful. Since you seem to be confidently against it, do you have further objections to it? I appreciate the chance to explore it with a critical lens.

I would put it a different way.

If we use the normal decision-making rules that many people use, especially consequentialists, we find that Pascal's wager is a pretty strong argument. There are many weak objections and some more promising ones. But unless we're certain of those objections, it seems difficult to escape the weight of infinity.

If we look to other, more informal ways to make decisions - favoring ideas that are popular, beneficial, and intuitive - then major religions that claim to offer a route to infinity are pretty popular, arguably benefici... (read more)

2
DirectedEvolution
2y
Great question. Let me offer the idea of "universal common sense." "Common sense" is "the way most people look at things." The way people commonly use this phrase today is what we might call "local common sense": the common sense of the people who are currently alive and part of our culture. Local common sense is useful for local questions. Universal common sense is useful for universal questions. Since religion, like science, makes universal claims, we ought to rely on universal common sense. The galactic wisdom of crowds, if you will.

Of course, we can't talk to people in the past or future. But even when we rely on local common sense, we are in some sense making a prediction about what our peers would say if we asked them the question we have in mind. We can still make a prediction about what, say, a stone age person, or a person living 10,000 years in the future, would say if we asked them whether Catholicism was real. The stone age person wouldn't know what you're talking about. The person 10,000 years in the future, I suspect, wouldn't know either, as Catholicism might have largely vanished into history.

However, I expect that science will still be going strong 10,000 years in the future, if humanity lives to that point. And I expect that by then, vastly more people will believe (or have believed) in a form of scientific materialism than will believe in any particular religion. Hence, I predict that "universal common sense" is that we ought not spend much time at all investigating the truth of any particular religion.

I'm mostly interested in the first. I think people should take Pascal's wager!

Yes, I feel comfortable saying if the EV changes based on our action, we are responsible in some sense or produced it. 

In Newcomb's paradox, I think you can "produce" additional dollars.

2
DirectedEvolution
2y
I guess it’s useful then to clarify which point we’re interested in. I personally am interested in the question “given free will and personal control over the outcome, should we choose a strategy of pursuing infinite utility?” I am less interested in “if you did not have control over the outcome, would you say it’s better if the universe was deterministically set up such that we are pursuing infinite utility?” Are you interested in the second question?

"Another way of putting it - the question isn’t “how likely is this to be a scam,” but “how likely is this to be a real offer.” Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?"

Thanks for the example. Yes, I think you've convinced me on this point. I think I want to say something like "when we have a good sense of the distribution of events, we know the bigger the departure from typical events, the less likely it is." 

But I still think (and maybe this is going back to #1 a little) that th... (read more)

2
DirectedEvolution
2y
The word "produce" is causal language. It seems to me that even if our actions are correlated with other people, there's no reason to think that we in particular are the ones controlling that correlated action. Do you think we can be said to "produce" utility if we're not causally in control of that production?

Right, the distinction between expected value from technology and expected utility from offers made by people makes sense. But I think your axiom still doesn't provide enough reason to reject Pascal's Wager.

  1. I'm not sure we have good grounds to apply this discounting to God or the divine in general. Can we put that in the same bucket as human offers? I guess you could say yes by arguing that God is just a human invention, but isn't that assuming the conclusion?
  2.  I don't think probability declines as fast as promised value rises-
... (read more)
2
DirectedEvolution
2y
I suspect that the answer to some of these questions lies at an intersection between psychology and mathematics. Our understanding of physics is empirical. Before making observations of the universe, we'd have no reason to entertain the hypothesis that "light exists." There would be infinite possibilities, each infinitely unlikely. Yet somehow, based on our observations, we find it wise to believe that our current understanding of how physics works is true. How did we go from a particular physics model being infinitely unlikely to it being considered almost certainly true, based on finite amounts of evidence?

It seems that we have a sort of mental "truth sensor," which gets activated based on what we observe. A mathematician's credence in the correctness of a proof is ultimately sourced from their "truth sensor" getting activated based on their observation of the consistency of the relationships within the proof. So we might ultimately have to reframe this question as "why do/don't arguments for Pascal's Wager activate our 'truth sensor'?" This is an easier question to answer, at least for me. I see no compelling way to attack the problem, nobody else seems to either, I see the claims of world religions about how to achieve utility as being about as informative as taking advice from monkeys on typewriters, and accepting Pascal's Wager seems deeply contrary to common sense. These are unfortunately only reasons not to spend time thinking more deeply about the problem, and don't contribute in any productive way to moving toward a resolution :/
3
DirectedEvolution
2y
I’m not sure about #1 or #3. I do think that #2 is false, again on mechanistic grounds. It’s harder to get a billion dollars than a million dollars, and that continues to apply as the sums of money offered grow larger. Another way of putting it - the question isn’t “how likely is this to be a scam,” but “how likely is this to be a real offer.” Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?
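DirectedEvolution's point - that the probability of an offer being real falls as the promised sum grows - can be made concrete with a toy calculation. A minimal Python sketch, where the `1000 / amount ** decay` credibility curve is a purely hypothetical prior chosen for illustration, not anyone's actual estimate:

```python
def expected_value(amount: float, decay: float) -> float:
    """Expected value of an offer under a toy credibility curve.

    The probability that the offer is real is assumed to fall as
    1 / amount**decay (a made-up prior, for illustration only).
    """
    p_real = min(1.0, 1000.0 / amount ** decay)
    return p_real * amount

# If credibility falls exactly as fast as the payout grows (decay=1),
# every large offer has the same expected value:
print(expected_value(1e6, 1.0))  # 1000.0
print(expected_value(1e9, 1.0))  # 1000.0

# If credibility falls faster than the payout grows (decay=2),
# bigger offers are worth strictly less:
print(expected_value(1e9, 2.0) < expected_value(1e6, 2.0))  # True
```

Whether the real-world decay exponent is above or below 1 is exactly what the thread is disputing; the sketch only shows that the conclusion hinges on that exponent.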

This is an interesting response, but doesn't it run into a problem where you could have large amounts of evidence that Action X provides an infinite payoff but have to ignore it?

Imagine really credible scientist/theologians discover there's a 90% chance that X gives you an infinite payoff and a 90% chance that Y gives you $5, but you feel obligated to grab the $5 just because you're an infinity skeptic?

I also think this isn't consistent with how people decide things in general - we didn't need more evidence that COVID vaccines worked than that flu vaccines worked, even though the expected utility from COVID vaccines was much higher.

2
DirectedEvolution
2y
This is a good response! It's common sense that our prior for whether or not a technology will work for a given purpose depends on empiricism. This accounts for why we'd reject the million dollar post office run - we have abundant empirical and mechanistic evidence that offers of ~free money are typically lies or scams. Utility can be an inverse proxy for mechanistic plausibility, but only because of efficient market hypothesis-like considerations. If there was a $20 on the sidewalk, somebody would have already picked it up.

In theory, you could be stuck doing bizarre things like that. But I don't think you would in this world. Most reasonable ways of taking infinity seriously probably involve converting to Christianity, or failing that Islam, or failing that some other established religion.

Major religions normally condemn occult practices and superstitions from outside that tradition. If someone comes up to you and claims to be a demon that will inflict suffering, someone who has already bet on the Christian God or Allah, for instance, can just say "go away - I'm already maximizing my chance of infinite reward and minimizing my chance of infinite punishment."

I think one takeaway is that, given the stakes of the question, people should actually assess the arguments offered for each religion's truth. It's probably not correct to just assume a thought-experiment deity (the Evidentialist God) is as plausible as Gods for which there is (at least purported) evidence that many find convincing.

But if Evidentialist God is the most likely, we should dedicate ourselves to spreading Bayesian statistics or something like that.

I think it makes sense to spend a substantial amount of time researching religions. If you're terminally i... (read more)

What major religion are you thinking of that has that ranking? Islam seems to treat Christianity/Islam as preferable to paganism/irreligion. 

1
Mart_Korz
11mo
This is not enough to claim that Christianity as a whole holds this position, but there certainly exist sentiments in this direction such as Revelation 3:15--16
1
ryancbriggs
2y
Many rankings will add the required complexity, I think, but I’ve definitely heard this said about Jews (by Christians). Surely many Christians would also disagree ofc.

There isn't one. To reject Pascal's Wager, you just have to conclude that you don't care about infinity. Taking Pascal's Wager is the correct utilitarian response. You probably need to weight religions both by how likely they are to be true and how likely you can "win" conditional upon them being true.

Amanda Askell has a good rundown on why most objections to Pascal's Wager are bad.

3
BrownHairedEevee
2y
Askell's first response is a non sequitur. The person deciding to take Pascal's wager does so under uncertainty about which of the n gods will get them into heaven. The response assumes you're already in the afterlife and will definitely get into heaven if you choose door A. However, the n-god Pascal's wager suggests that believing in any one of the possible gods (indeterminate EU) is better than believing in no god (-infinite EU). Believing in all of them is even better (+infinite EU). There's nothing in the problem statement saying that each god will send you to hell for believing in any other god (although it can be inferred from the Ten Commandments that Yahweh will do so).
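The ±infinite and "indeterminate" expected utilities in this comment can be reproduced directly in floating-point arithmetic; a minimal sketch, where the credences and finite payoffs are arbitrary placeholders:

```python
import math

inf = float("inf")

# Any nonzero credence in an infinite payoff swamps all finite terms:
ev_believe = 0.001 * inf + 0.999 * (-10.0)
print(ev_believe)  # inf

# Mixing +inf (right god) and -inf (wrong, vengeful god) is undefined,
# which is the "indeterminate EU" case:
ev_mixed = 0.5 * inf + 0.5 * (-inf)
print(math.isnan(ev_mixed))  # True
```

This is just the IEEE-754 behavior of infinities, but it mirrors the decision-theoretic point: once infinite payoffs enter the sum, the finite probabilities and payoffs stop doing any work.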
2
Kinbote
2y
I'm not sure I buy her last argument. Pascal's Wager does seem like a reductio ad absurdum of expected utility theory. Because if you accepted it, then, by equivalent logic, you would have to act on every other belief, no matter how improbable, as long as it had an infinite payoff. For example, somebody could tell me that if I stepped on a crack, the universe would end. And since there's a non-zero chance that they're correct, I couldn't step on any cracks ever again. As long as these potentially infinite-payoff outcomes aren't mutually exclusive, you would have to accept them all. And there's no bound on the number of them. Imagine being OCD in this world! Since this is clearly insane, there must be a fundamental flaw with how expected utility theory deals with infinities. Yet another reason to embrace virtue ethics :)
1
Transient Altruist
2y
:( this is not the answer I was hoping for... (I don't believe in heaven or hell so the prospect of accepting the wager is a bit depressing) Thanks a lot though for the response and the really helpful link!

Yeah, I think this would be worth trying.

Even more speculatively, I was thinking of whether trying to reduce wedding costs would be effective. Married couples have much higher birth rates than unmarried couples, and it looks like it might be partly causal. For instance, I know couples who waited to have children till they married and waited to marry until they could pay for a wedding.
