I'm only familiar with the US system, unfortunately. I think this evaluation holds up pretty well for EAs even though it's some years old.
Yes, I agree it's frustrating. I did a more detailed one when considering living kidney donation. Plus, living liver donation is less common.
My fast liver donation BOTEC assumes 80k remaining working hours (reduce if older?).
1 in 250 chance of death (source; maybe too high) = -320 work hours.
About a month of work lost to recovery (source) = -160 work hours.
So maybe spending 500 work hours to extend one person's life.
Ignoring time off work due to potential reimbursement, if you netted $15 per hour for the hours lost to risk of death and dona...
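The arithmetic above can be sketched in a few lines. This is a toy version of the BOTEC using the comment's own rough figures, not vetted estimates:

```python
# Toy sketch of the liver donation BOTEC.
# All numbers are the comment's rough figures, not vetted estimates.

CAREER_HOURS = 80_000      # assumed remaining working hours (reduce if older)
P_DEATH = 1 / 250          # surgical mortality estimate (maybe too high)
RECOVERY_HOURS = 160       # roughly one month of lost work

expected_hours_lost_to_death = P_DEATH * CAREER_HOURS  # 320 hours
total_expected_cost = expected_hours_lost_to_death + RECOVERY_HOURS

print(total_expected_cost)  # 480.0, rounded up to "maybe 500" above
```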
Hi Kyle,
If you plan on donating, I think donating through UNOS's pilot program for paired liver donation is the highest impact way for an American to donate lobe currently.
I would do a BOTEC for how much benefit the recipient would get versus the expected loss of life to you due to surgery risk and long-term effects.
If you are earning to give, I would check out your employer's policy for time off for organ donation as well as the possibility for reimbursement of expenses through NLDAC (which you very well may be familiar with through your kidney expe...
"The other stuff seems more reasonable but if you're going to restrict immigrants' ability to work on AI you might as well restrict natives' ability to work on AI as well. I doubt that the former is much easier than the latter."
This part of your comment I disagree on. There are specific provisions in US law to protect domestic physicians, immigrants on H1B visas have way fewer rights and are more dependent on their employers than citizen employees, and certain federal jobs or contractor positions are limited to citizens/permanent residents. I think this is...
Thanks for the feedback.
When I was writing this, and when I think about AI risk in general, as someone without an ML background I tend to fall back on non-technical heuristics like interest rates or the market caps of hardware companies. So I am influenced, perhaps more than a more technical person would be, by these kinds of meta or revealed-preference arguments.
I think Democrats (and left-wingers in other countries) could embrace increasing high-skilled immigration in ways that steer talent away from AI. In the US, H-1B visas could be changed to ...
More sympathetic to biosecurity issues than at the start of the year. Pretty convinced there are clear things that would be useful to do and would help a lot of people. Plus, the FTX situation cut out a lot of money that went to the general area, such as SBF's brother's group, Guarding Against Pandemics.
Sales tax: Interesting. I live in a state with sales tax but it doesn't apply to lottery tickets.
Could also make sense for people who don't itemize (and so don't benefit from the charitable deduction) but would itemize if they won the larger prizes.
I didn't downvote you. I think you're using Pascal's Mugging idiosyncratically.
Pascal's Mugging is normally for infinitesimal odds and astronomical payouts, with both odds and payouts often being really uncertain.
Here the odds and payout are well-defined. The odds, while extreme, aren't infinitesimal.
I think we should be doing lots of things with one in a million chances. Start-ups that could change the world, promising AI research paths, running for president or prime minister. :)
Not quite a discipline, but I think American Christianity lost cultural influence by denominations ceding control of their colleges (based off this book).
Had the men's rights movement established men's studies as more distinct from women's studies, maybe they would have benefited (though it's hard to believe they ever had the political power to achieve this).
I can imagine a world where sociobiology became its own discipline. It did not.
I think the establishment of chiropractic schools legitimized the practice in the United States compared to other alternative medicines....
The anecdata point is pretty interesting to me (I'm not an economist). Do you think a field that combined things like DALYs vs. QALYs, or debates about subjective life expectancy, would be interesting to students?
I don't think it would be harmed by existing within normal econ departments: some econ departments house agricultural economics, while at other places Ag Econ is independent.
I'm skeptical of elevating children's rights in this way, because people already claim to care intensely about the value of children and their futures, but differ on how to do that. The UN wants to make it harder for kids to work, I can think of libertarians who disagree. Or education about sex and sexuality- both sides claim they are protecting children and so forth.
With more novel concepts or trying to get people to widen their circle of concern to include animals or far future generations, I think maybe that's a worthwhile way to go. But people care about kids a lot- or at least claim to!
Maybe there's some smart solution but I can't think of good ways to advance your goal.
Thanks for writing this. I'm interested in politics and political interventions as potential EA causes. But I do disagree with you. I think this cause is not a good use of resources because it's not tractable and because I think it wouldn't have any valuable direct effects either. (The indirect effects on EA diversity and composition are not considered in this comment.)
Tractability: you won't get 2/3 of the Senate to concur. Opposition to these treaties is standard on the right. I would be very surprised if Democrats got a majority that large in the next d...
What are the best strategies for political movements that claim to advocate for a voiceless group to take? (longtermism for future generations, animal rights for animals, pro-lifers for fetuses...)
Should groups with very niche, technocratic issues try to join a party or try to stay non-partisan? Implications for AI, biorisk, and so on.
Can Americanists come up with a measure of democratic decline that's actually decent and not just a reskinned Polity/FreedomHouse metric?
EAs love economists. Can political scientists develop concepts that get them the same af...
I was thinking the reference class was something like "people explicitly orienting their actions for the benefit of far future generations."
I was trying to be more specific than every good deed that also benefits the future. I didn't want to include things like "this vaccine will save our children (and future generations)" or "we will win this war against our evil enemy (and also for our children's sake)".
What seems new about longtermism to me is not the belief that good things will have positive consequences in the long-run- "classic" EA...
Thanks for the counterexamples!
I'm trying to think of a way to get a fair example: Coding party manifestos by attention to long-term future and trying to rate their success in office? I'm really unsure.
I do think Communism was on average a more longtermist movement than democratic revolutions. Maybe the typical revolutionary in all revolutions had similar goals, but Marx and many of his followers had a vision for how history was supposed to play out, and envisioned an intermediate form of society, between the revolution and an eventual classless society.
In contrast, a lot of democratic revolutions were more like "King George bad." I don't think the American founding fathers were utopian in the same sense as a lot of Marxists.
"Imagine that every time there was a big crisis in the news, some EAs produced well-researched, sensible lists of the most plausibly-effective ways for people to help with that crisis. The lists would be produced voluntarily by EAs who were passionate about or informed about the cause, and shared widely by other EAs. "
I agree that working on LICAs can be a good idea for individual EAs, and I think your examples were well-chosen. I disagree that it is a good idea for the EA community or institutions to work on LICAs.
I completely agree that addressing import...
Aspiring EA researchers should consider taking a shot at winning federal bounties.
A lot of EAs are relatively young, and EA org jobs are competitive. Challenge.gov lists prizes you can compete for by developing new products or proposing strategic plans. Options for engineers, social scientists, and communications types to build career capital, possibly win money, and make the US government better at its job.
Current prizes include a ton of AI-related stuff but also market analysis for environmental technology and math education programs.
I just find the form of the argument really unconvincing. It reads as a general argument against demanding moral theories. He has the points that
If this argument works, it also seems like we should say caring about animal welfare is absurd (how many people think we should modify the environment to help wild animals?), caring about the far future is absurd, and so forth. I think in function this is a general anti-EA argument, although in practice Ord obviously did a lot to start and support EA and promote concern for the future.
I'm not making any claim about the moral value of embryos.
I just don't find it convincing when Ord moves from "embryo valuers don't care about embryos in the right way, in all circumstances" to "their general view can be discarded." I know tons of people who claim X but don't act on X. That doesn't mean they're wrong about X: they might be weak or hypocritical or bad at reasoning!
I'm not convinced of the act omission distinction, but I'm not ready to throw it away.
I think one argument about wild animal suffering that might be true or might be rationalization is that there's nothing we can do for now- but you can promote general compassion to animals through activism or veganism or something.
I genuinely think he wrote what turned out to be a decent anti-EA article.
If most people followed their moral principles fully, they would run into really challenging situations, like confronting millions of spontaneous abortions per year. One response is to bite the bullet (rare), one is to not think about the implications of your moral commitments (common), and another is to argue that because nobody follows a principle fully, you can discard it (I dislike this approach, but it's a possible conclusion).
Instead, I think people should bite the bullet o...
Thanks, I understand the distinction you're making. I still disagree that we can reject their moral claims because they don't take their care far enough: I think animal advocates are pretty sincere even though virtually none of them ever care about wild animals. But I still think animal advocates make fair points.
People who claim to care about embryos may oppose abortion or even support embryo adoption- does their failure to care about spontaneous abortions discredit them? They're not doing all they can.
People who claim to care about animals may be vegans- does their failure to become an animal rights advocate discredit them? They're not doing all they can.
People who claim to care about the global poor may donate money- but do they donate all their money? They're not doing all they can.
I reject the form of this argument. People are hypocrites and moral failures- they can still be correct in their claims.
Markets (and policymakers) have to make decisions all the time. They might not be perfect, but I'm not aware of a better gauge of nuclear risk, or at least a gauge I find more trustworthy. Another interpretation of Buiter's article is that he's just wrong.
I think climate change is unlikely to rise to the level of a global catastrophic risk. But if it were, it would still leave open the question of which party an individual should join to most improve the situation.
As I wrote: "I would urge them to think on the margin. The more awesome the Democratic Party is o...
"On the nuclear risk front, the republican party is also clearly horrific."
If the GOP were worse on nuclear risk, wouldn't we expect property values in major cities to crash when a Republican is elected president (because the risk of being nuked goes up) and soar when a Democrat is elected? Or wouldn't we expect other countries to start shifting resources to fallout shelters and nuclear-winter prep when a Republican is elected? I've never seen that happen.
I'm not aware of any market response, or any other response, coinciding with presidential elections in the way your model of nuclear war risk would predict. I don't think there's a predictable difference in nuclear war risk between the parties.
Great article. I had thought about similar reasoning in terms of x-risk, but thinking about how it applies to catastrophes that reduce the number of observers is important, too.
Hadn't thought about this in terms of nuclear risk, either.
I think it's an open question as to how much we can learn from failed apocalyptic predictions- https://forum.effectivealtruism.org/posts/2MjuJumEaG27u9kFd/don-t-be-comforted-by-failed-apocalypses
Also with the possible exception of the earliest Christians who were hoping for an imminent second coming, I'm quite sure most Christians have not predicted an imminent apocalypse so we're talking about specific sects and pastors (admittedly some quite influential). You do say you're talking about early Christians at the start of the article, but I think the ...
"When I encounter questions like “is a world where we add X many people with Y level of happiness better or worse?” or “if we flatten the happiness of a population to its average, is that better or worse?”—my reaction is to reject the question.
First, I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds."
You're a Senator. A policy analyst points out a new proposed tax reform will boost birth rates- good or bad?
You're an advice columnist- people write you questions about starting a family. All else equal, d...
"You start about by talking about Masons and Elks but later reference American fraternal organizations. I don’t know if you include sororities in this, but I would suggest bringing sororities to parity with the rules of fraternities."
These are not college fraternities, they are general fraternities which almost all accept men and women.
I think this is true for some people, but not for most people. Religion seems helpful for happiness, health, having a family, etc., which are some of the most common terminal goals out there.
I would say if we use other people's judgment as a guide for our own, it's an argument for the belief in the divine/God/the supernatural and it becomes hard to say Christianity and Islam have negligible probability. So rules that are like "ignore tiny probability" don't work. Your idea of discounting probability as utility rises still works but we've talked about why I don't think that's compelling enough.
I don't have good survey evidence on Pascal's Wager, but I think a lot of religious believers would agree with the general concept- don't risk your soul,...
I'm not against it- I think it's an okay way of framing something real. Your phrasing here is pretty sensible to me.
"Let's say we could identify exemplary societies across the past, present, and future. Furthermore, assume that, on some questions, these societies had a consensus common sense view. Finally, assume that, in some cases, we can predict what that intertemporal consensus common sense view would be.
Given all three of these assumptions, then I think we should consider adopting that point of view."
But I have concerns about the future perspect...
I think arguing that current view X is justified because one imagines that future generations will also believe X is really unconvincing.
I think most people think their views will be more popular in the future. Liberal democrats and Communists have both argued that their view would dominate the world. I don't think it adds anything other than illustrating the speaker is very confident of the merits of their worldview.
If for instance, demographers put together an amazing case that most future humans would be Mormon, would you change your mind? If you became convinced that AI would kill humanity next decade and we're in the last generation so there are no future humans, would you change your mind?
I would put it a different way.
If we use the normal decision-making rules that many people, especially consequentialists, use, we find that Pascal's Wager is a pretty strong argument. There are many weak objections and some more promising ones. But unless we're certain of these objections, it seems difficult to escape the weight of infinity.
If we look to other more informal ways to make decisions- favoring ideas that are popular, beneficial, and intuitive, then major religions that claim to offer a route to infinity are pretty popular, arguably benefici...
Yes, I feel comfortable saying if the EV changes based on our action, we are responsible in some sense or produced it.
In Newcomb's paradox, I think you can "produce" additional dollars.
"Another way of putting it - the question isn’t “how likely is this to be a scam,” but “how likely is this to be a real offer.” Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?"
Thanks for the example. Yes, I think you've convinced me on this point. I think I want to say something like "when we have a good sense of the distribution of events, we know the bigger the departure from typical events, the less likely it is."
But I still think (and maybe this is going back to #1 a little) that th...
Right, the distinction between expected value from tech and expected utility from offers from people makes sense. But I think your axiom still doesn't provide enough reason to reject Pascal's Wager.
This is an interesting response, but doesn't it run into a problem where you could have large amounts of evidence that Action X provides an infinite payoff but have to ignore it?
Imagine really credible scientist-theologians discover there's a 90% chance that X gives you an infinite payoff and a 90% chance that Y gives you $5, but you feel obligated to grab the $5 just because you're an infinity skeptic?
I also think this isn't consistent with how people decide things in general- we didn't need more evidence that COVID vaccines worked than flu vaccines worked, even though the expected utility from COVID vaccines was much higher.
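As a toy illustration of why standard expected-value maximization gets stuck here (the 90% and $5 figures are just the thought experiment's numbers, nothing more):

```python
import math

# Hypothetical action X: 90% chance of infinite payoff.
# Hypothetical action Y: 90% chance of a $5 payoff.
ev_x = 0.9 * math.inf  # any positive probability times infinity is infinity
ev_y = 0.9 * 5         # 4.5

# Plain EV maximization picks X no matter how strong the evidence for Y is,
# which is the "weight of infinity" worry: an infinity skeptic who grabs
# the $5 is overriding the EV calculation, not following it.
print(ev_x > ev_y)  # True
```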
In theory, you could be stuck doing bizarre things like that. But I don't think you would in this world. Taking infinity seriously most plausibly involves converting to Christianity, or failing that Islam, or failing that some other established religion.
Major religions normally condemn occult practices and superstitions from outside the faith. If someone comes up to you claiming to be a demon that will inflict suffering, someone who has already bet on, say, the Christian God or Allah can just say: go away, I'm already maximizing my chance of infinite reward and minimizing my chance of infinite punishment.
I think one takeaway is that, given the stakes of the question, people should actually assess the arguments offered for each religion's truth. It's probably not correct to just assume a thought experiment (the Evidentialist God) is as plausible as Gods for which there is (at least purported) evidence that many find convincing.
But if Evidentialist God is the most likely, we should dedicate ourselves to spreading Bayesian statistics or something like that.
I think it makes sense to spend a substantial amount of time researching religions. If you're terminally i...
What major religion are you thinking of that has that ranking? Islam seems to treat Christianity/Islam as preferable to paganism/irreligion.
There isn't one. To reject Pascal's Wager, you just have to conclude that you don't care about infinity. Taking Pascal's Wager is the correct utilitarian response. You probably need to weight religions both by how likely they are to be true and how likely you can "win" conditional upon them being true.
Amanda Askell has a good rundown on why most objections to Pascal's Wager are bad.
Yeah, I think this would be worth trying.
Even more speculatively, I was thinking of whether trying to reduce wedding costs would be effective. Married couples have much higher birth rates than unmarried couples, and it looks like it might be partly causal. For instance, I know couples who waited to have children till they married and waited to marry until they could pay for a wedding.
Good point. I think you would probably only consider the direct costs to those donors (pain/morbidity/risk) and not foregone donations, since presumably the typical liver donor participating in a chain is not devoting a lot of their earnings to impactful charity.