Harrison D

Comments

An estimate of the value of Metaculus questions

When I did some research on the use of forecasting to support government policymaking, one of the issues I quickly encountered was that for some questions, if your forecast is accurate at the time it is made (counterfactually, i.e., absent any response) and influential upon decision makers, it can lead to policies which prevent the event from occurring, thus rendering the forecast inaccurate. Of course, some decisions are not about preventing an event from occurring but rather about responding to it (e.g., preparedness for a hurricane), in which case there’s not much of an issue. I could only skim and keyword-search the post, and I failed to see an emphasis on this, but apologies if I just missed it. Do you think this is less of an issue in EA-relevant forecasting than in, e.g., international security policymaking? My extremely underdeveloped intuition has been “probably yes,” but what are your thoughts?

Has anyone written something using moral cluelessness to "debunk" anti-consequentialist thought experiments?

Could you elaborate further on what you have in mind with a moral cluelessness response?

Personally, I’ll say that most of those supposed anti-consequentialist thought experiments are helpful for illustrating why you need to be thoughtful when applying consequentialism (as is the case with every moral framework), but they do nothing to refute consequentialism. For example, there are many perfectly good utilitarian reasons not to conduct organ harvesting: it threatens trust in institutions; it risks your ability to do good in the future; and the very fact that you are considering the option suggests you might be suffering from delusion or some other problem which impairs your ability to assess the likelihood of being discovered or the benefits of harvesting the organs.

Should you optimise for the long-term survival of people who don't care for their own long-term survival?

It feels easier to do any form of moral advocacy on someone before they are in power, versus after they are in power.

This seems like a good point. And to your broader question, I do think it’s possible to come up with general rules of thumb (for personal conduct, though) and to identify norms that ought to be socially endorsed/promoted. However, you have to make sure to frame/ask the question correctly, which I think includes the point about “acceptable” vs. “beneficial” (and, more generally, about taking utilitarianism as the foundational framework).

Nuclear Espionage and AI Governance

I’m surprised not to see much discussion in section 2 of the highly dual-use nature of AI and the heavy commercial involvement in its development (unless I just missed it?). I have come to see those as among the major differences between the Manhattan Project/nuclear race and the hypothetical AI race. (Before I go into that point, I’ll just ask: did this come up in your research, or did I just miss it?)

Should you optimise for the long-term survival of people who don't care for their own long-term survival?

I was thinking of mandatory vaccinations, but I thought they seemed more like a case of requirements for community health than paternalism proper (which is focused on the wellbeing of the target of enforcement).

Should you optimise for the long-term survival of people who don't care for their own long-term survival?

I think it’s really tough to try to make some simplistic+universal set of rules. It’s also really important to make sure you have a clear sense of what you are asking: “acceptable” (which is more in line with either a deontological framework or “would a utilitarian framework support social condemnation of this action [regardless of whether the action was itself utilitarian]”) may not be the same as “beneficial” (per a utilitarian framework). If it’s the latter, you might have rules of thumb, but you will certainly struggle to get something comprehensive: there will be so many exceptions and fringe cases. If it’s the former (“acceptable”), it’s perhaps not quite as impossible, but even still I am skeptical that a solid ruleset could be devised / would be worth devising. I listed some of the most common examples of where you may find exceptions to the principle of not violating autonomy (e.g., verifiably foolish/immature behavior, failure to understand the long-term consequences, mental impairment).

In the end, nobody “gets to define” wellbeing in some “cosmic authority” sense: a parent, friend, government, or even a stranger at various times might be able to make the determination. I think it’s better to approach it more loosely, identifying principles with the recognition that there will be exceptions. For example, it’s worth highlighting that parents tend to 1) be making decisions for children (who are less rational/mature); 2) know their own children better; and 3) be less personally biased/more motivated to care about their children’s wellbeing than the government or a stranger. But that still doesn’t mean parents always ought to “get to decide” (but as a matter of law/policy, there are strong justifications for being biased against/hesitant towards intervening in parenting).

If your goal is “how should we set policy”, that makes it more answerable (and worth answering), but I don’t know what my answer would be on such a broad level. My objections/points/examples are mainly meant as a partial springboard for brainstorming as well as quick tests to see if a given proposal is reasonable (for example, if the proposed policy/law doesn’t take into consideration the issue of temporary mental impairment then it’s probably a bad policy).

Should you optimise for the long-term survival of people who don't care for their own long-term survival?

I want to be clear that there are certainly a lot of wrong ways to approach this, and that one should be very careful whenever they try to override or restrict someone’s decision-making: generally, the instances where this violation of autonomy is clearly a good thing are quite rare in comparison to the situations where it would be a bad thing. Also, I’ll clarify that “maximize their own preferences” probably wasn’t the best way of phrasing it (I typed that message in a rush while in transit); a more accurate phrasing would have been something like “maximizes their wellbeing.”

The point about preferences/wellbeing, though, is that there are times when people, whether as children, teens, adults, or seniors, want to do something despite the fact that it would be detrimental to their short- or long-term wellbeing. In some of the more extreme cases, it may even be that a few months or years later the person would look back and say, “Wow, I’m really glad you stopped me from going through with that.”

As to why someone might want to make a decision that you/an outsider can confidently foresee as detrimental to their wellbeing: sometimes it is due to decision-making capabilities impaired by a mental disorder (e.g., bipolar disorder) or substance abuse. In some of these scenarios, intervention can definitely be justified (although one still has to be careful, including so as not to make the situation worse). Sometimes the poor decision-making occurs because people make mistakes or otherwise fail to appreciate the dangers of their actions, and they don’t want to listen to advice. Sometimes it is simply a lack of thinking, or laziness (e.g., the case for automatically enrolling people in savings plans).

As to your assumptions of my model: I am supposing that people have values, preferences, etc. but that their explicit/stated preferences at a given point in time may not reflect what is actually beneficial to them overall—especially not in the long term (e.g., someone who says they want some highly dangerous/addictive drug right now but who would actually benefit from having a longer or addiction-free life).

As to how we should respond, that is a far trickier matter, since all kinds of factors determine which response or intervention (if any) is most beneficial. Indeed, in many situations an intervention might make the problem worse; in others, the restrictions might be unnecessary/neutral for most people and harmful/poorly designed for most of the remaining people.

Should you optimise for the long-term survival of people who don't care for their own long-term survival?

Second, briefly, I’ll add that “civilization” is not some monolithic moral patient, hive mind, or other such entity: you can’t broadly say “civilization doesn’t want to be saved.” Regardless, I’ll bite the (nerf?) bullet here and bluntly say: I don’t really think it matters much if a hypothetical majority of the current ~7B people don’t want civilization to continue. It is important to consider why so many people might hypothetically think civilization shouldn’t continue (e.g., they have determined that the growth of civilization will produce net suffering overall), but if they’re just being selfish (“I’d rather live a good life now than worry about future generations”) and in reality future civilization would produce significant wellbeing for however many billions or trillions of future people, then yeah, I wouldn’t worry too much about what current people think or want. Thankfully, I would argue we don’t live in such a world (even if people are too short-termist).

Should you optimise for the long-term survival of people who don't care for their own long-term survival?

Two quick thoughts:

  1. I would definitely say there are good examples of so-called “paternalistic” policies: some people may engage in acts (e.g., suicide attempts) because they are suffering from temporary or long-term mental impairment. Additionally, I think nudge policies like opt-out (rather than opt-in) enrollment in savings plans have generally been held up as good policy interventions. More broadly, I’d suggest there are many kinds of health and safety regulations which, although far from perfect, have probably on the whole helped people who would have willingly (foolishly) used some bad product (e.g., medicine or building supplies), although I certainly would not stake this overall point on that one example. By and large, we recognize that children are not well equipped to make good decisions for themselves, and there is no magic age at which someone becomes perfectly rational. However, I will say that once people reach around the age of ~20, they typically develop a level of rationality about their own affairs that does better than a centralized government at maximizing their own preferences, though still not universally.
  2. Some people’s rejection of life-extension technology/medicine may partially be a coping mechanism for the belief that a short life is inescapable.

[PR FAQ] Tagging users in posts and comments

Seems like a fairly reasonable feature, and it’s quite common across many platforms.
