vaniver

Comments

Can you explain the "same upsides" part?

Yeah; by default people have entangled assets which will be put at risk by starting or investing in a new project. Limiting the liability that originates from that project to just the assets held by that project means that investors and founders can do things that seem to have positive return on their own, rather than 'positive return given that you're putting all of your other assets at stake.'
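
To make that concrete, here's a toy numeric sketch (my own illustration with made-up numbers, not anything from the thread): limited liability truncates the investor's downside at their stake, so a bet can be worth taking on its own terms even when it would be negative with all other assets at stake.

```python
# Toy illustration (hypothetical numbers): limited liability caps the
# investor's loss at their stake, which can flip a bet from negative to
# positive expected value for them.
def investor_payoff(outcome, stake, limited_liability):
    # 'outcome' is the investor's share of the project's net result;
    # with limited liability they can lose at most their stake.
    return max(outcome, -stake) if limited_liability else outcome

scenarios = [(0.5, 3.0), (0.5, -5.0)]  # (probability, outcome)
stake = 1.0

for limited in (False, True):
    ev = sum(p * investor_payoff(x, stake, limited) for p, x in scenarios)
    print("limited" if limited else "full", "liability EV:", ev)
# full liability EV: -1.0   -> wouldn't invest
# limited liability EV: 1.0 -> would invest
```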

[Like, I agree that there are issues where the social benefits of actions and the private benefits of actions don't line up, and we should try to line them up as well as we can in order to incentivize the best action. I'm just noting that the standard guess for businesses is "we should try to decrease the private risk of starting new businesses"; I could buy that it's different for the x-risk environment, where we should not try to decrease the private risk of starting new risk-reduction projects, but that's not obviously the case.]

Therefore, we should be very wary of funding mechanisms that incentivize people to treat extremely harmful outcomes as if they were neutral (when making decisions about doing/funding projects that are related to anthropogenic x-risks).

Sure, I agree with this, and with the sense that the costs are large. The thing I'm looking for is the comparison between the benefits and the costs; are the costs larger?

[EDIT: Also, interventions that are carried out if and only if impact markets fund them seem selected for being net-negative, because they are ones that no classical EA funder would fund.]

Sure, I buy that adverse selection can make things worse; my guess was that the hope was that classical EA funders would also operate through the market. [Like, at some point your private markets become big enough that they become public markets, and I think we have solid reasons to believe a market mechanism can outperform specific experts, if there's enough profit at stake to attract substantial trading effort.]

This reminds me a lot of limited liability (see also Austin's comment, where he compares it to the for-profit startup market, which, because of limited liability for corporations, bounds prices from below at 0).

This is a historically unusual policy (full liability came first), and seems to me to have basically the same downsides (people do risky things, profiting if they win and walking away if they lose) and basically the same upsides (according to the theory supporting LLCs, without limited liability there's too little investment in and support of novel projects).

Can you say more about why you think this consideration is sufficient to be net negative? (I notice your post seems very 'do-no-harm' to me instead of 'here are the positive and negative effects, and we think the negative effects are larger'; I'm also interested in Owen's impression of whether or not impact markets lead to more or less phase 2 work.)

I'm interested in fleshing out "what you're looking for"; do you have some examples of things written in the past which changed your minds, which you would have awarded prizes to?

For example, I thought about my old comment on patient long-termism, which observes that in order to say "I'm waiting to give later" as a complete strategy you need to identify the conditions under which you would stop waiting (as otherwise, your strategy is to give never). On the one hand, it feels "too short" to be considered, but on the other hand, it seems long enough to convey its point (at least, embedded in context as it was), and so any additional length would be 'more cost without benefit'.

And if this is just a one-off, then it seems a lot less concerning, and taking action seems much less pressing. (Though it seems much easier to verify that this is a pattern, by finding other people in a similar situation to yours, than to verify that it isn't, since there are incentives to be quiet about this sort of thing).


Is this the case? Often the reaction to the 'first transgression' will determine whether or not there are future ones: if people let it slide, then probably they don't care that much, whereas if they react strongly, it's important to repent and not do it again.

And when there are patterns of behavior, especially in cases with significant power dynamics, it seems unlikely that you'd be able to collect such stories (in a usable way) without there being a prominent example of someone who shared their story and it went well for them. 

What I'm saying is that if you believe that x-risk is 0.1%, then you think we're at least one in a million.

I think you're saying "if you believe that x-risk this century is 0.1%, then survival probability this century is 99.9%, and for total survival probability over the next trillion years to be 0.01%, there can be at most ~9200 centuries with risk that high over the next trillion years (0.999^9200 ≈ 0.0001), which means we're in (most generously) a one-in-a-million century, as a trillion years is 10 billion centuries, which divided by roughly ten thousand is a million." Does that seem right?
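
As a sanity check on that arithmetic, here's a quick sketch of my own (the simplified model, with a constant 0.1% risk in each "risky" century and zero risk elsewhere, is the one from the quote):

```python
import math

survival_per_century = 0.999        # 1 - 0.1% x-risk
target_total_survival = 0.0001      # 0.01% over a trillion years

# Number of risky centuries consistent with the target survival probability:
max_risky = math.log(target_total_survival) / math.log(survival_per_century)
print(round(max_risky))             # ~9206, matching the ~9200 above

total_centuries = 1e12 / 100        # a trillion years = 10 billion centuries
print(total_centuries / max_risky)  # ~1.09e6: roughly a one-in-a-million century
```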

Then, if the expected cost-effectiveness of the best opportunities varies substantially over time, there will be just one point in time at which your philanthropy will have the most impact, and you should try to concentrate your giving at that point, donating everything then if you can.

Though I note that the only way one would ever take such opportunities, if offered, is by developing a view of what sorts of opportunities are good that is sufficiently motivating to actually take action at least once every few decades.

For example, when the most attractive opportunity so far appears in year 19 of investing and assessing opportunities, will our patient philanthropist direct all their money towards it, and then start saving again? Will they reason that they don't have sufficient evidence to overcome their prior that year 19 is not more attractive than the years to come? Will they say "well, I'm following the Secretary Problem solution, and 19 is less than 70/e, so I'm still in info-gathering mode"?
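
For concreteness, here's a minimal sketch of that last rule (my own illustration, assuming a hypothetical 70-year giving horizon; nothing here is from the original post):

```python
import math

# Classic secretary-problem rule: observe the first n/e years without
# committing, then fund the first opportunity that beats everything seen.
def stopping_year(opportunity_values, horizon=70):
    cutoff = horizon / math.e                   # ~25.75 for a 70-year horizon
    best_seen = float("-inf")
    for year, value in enumerate(opportunity_values, start=1):
        if year <= cutoff:
            best_seen = max(best_seen, value)   # still in info-gathering mode
        elif value > best_seen:
            return year                         # commit: fund this opportunity
    return None                                 # never committed: "give never"
```

Under this rule a record-setting opportunity in year 19 is declined (19 < 70/e), and the `return None` branch is exactly the "give never" failure mode noted earlier.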

They won't, of course, know which path had higher value in their particular world until they die, but it seems to me like most of the information content of a strategy that waits to pull the trigger is in when it decides to pull the trigger, and this feels like the least explicit part of your argument.

Compare to investing, where some people are fans of timing the market, and some people are fans of dollar-cost averaging. If you think the attractiveness of giving opportunities is going to be unpredictably volatile, then doing direct work or philanthropy every year is the optimal approach. If instead you think the attractiveness of giving opportunities is predictably volatile, or predictably stable, then doing patient philanthropy makes more sense.

What seems odd to me is simultaneously holding the outside view sense that we have insufficient evidence to think that we're correctly assessing a promising opportunity now, and having the sense that we should expect that we will correctly assess the promising opportunities in the future when they do happen.

Now that the world has experienced COVID-19, everyone understands that pandemics could be bad

I found it somewhat surprising how quickly the pandemic was polarized politically; I am curious whether you expect this group to be partisan, and whether that would be a positive or negative factor.

[A related historical question: what were the political party memberships of members of environmental groups in the US across time? I would vaguely suspect that it started off more even than it is today.]

I felt confused about why I was presented with a fully general argument for something I thought I indicated I already considered.

In my original comment, I was trying to resolve the puzzle of why something would have to appear edgy instead of just having fewer filters, by pointing out the ways in which having unshared filters would lead to the appearance of edginess. [On reflection, I should've been clearer about the 'unshared' aspect of it.]

you didn't want to voice unambiguous support for the view that the comment wordings were in fact not easy to improve on given the choice of topic.

I'm afraid this sentence has too many negations for me to clearly point one way or the other, but let me try to restate it and say why I made my comment:

The mechanistic approach to avoiding offense is to keep track of the ways things you say could be interpreted negatively, and search for ways to get your point across while not allowing for any of the negative interpretations. This is a tax on saying anything, and it especially taxes statements on touchy subjects, and the tax on saying things backpropagates into a tax on thinking them.

When we consider people who fail at the task of avoiding giving offense, it seems like there are three categories to consider:

1. The Blunt, who are ignoring the question of how the comment will land, and are just trying to state their point clearly (according to them).

2. The Blithe, who would put effort into rewording their point if they knew how to avoid giving offense, but whose models of the audience are inadequate to the task.

3. The Edgy, who are optimizing for being 'on the line' or in the 'plausible deniability' region, where they can both offend some targets and have some defenders who view their statements as unobjectionable.

While I'm comfortable predicting those categories will exist, confidently asserting that someone falls into any particular category is hard, because it involves some amount of mind-reading (and I think the typical mind fallacy makes it easy to think people are being Edgy, because you assume they share your filters when deciding what to say). That said, my guess is that Hanson is Blunt rather than Edgy or Blithe.

Comparing trolley accidents to rape is pretty ridiculous for a few reasons:

I think you're missing my point; I'm not describing the scale, but the type. For example, suppose we were discussing racial prejudice, and I made an analogy to prejudice against the left-handed; it would be highly innumerate of me to claim that prejudice against the left-handed is as damaging as racial prejudice, but it might be accurate of me to say both are examples of prejudice against inborn characteristics, are perceived as unfair by the victims, and so on.

And so if you're not trying to compare expected trauma, and are just coming up with rules of politeness that guard against any expected trauma above a threshold, setting the threshold low enough that both "prejudice against left-handers" and "prejudice against other races" are out doesn't imply that the damage done by the two is similar.


That said, I don't think I agree with the points on your list, because I used the reference class of "vehicular violence or accidents," which is very broad. I agree there's an important disanalogy in that 'forced choices' like in the trolley problem are highly atypical for vehicular accidents, most of which are caused by negligence of one sort or another, and that trolleys themselves are very rare compared to cars, trucks, and trains, and so I don't actually expect most sufferers of motor-vehicle-accident PTSD to be triggered or offended by the trolley problem. But if they were, it seems relevant that (in the US) motor vehicle accidents are more common than rape, and lead to more cases of PTSD than rape (at least, according to 2004 research; I couldn't quickly find anything more recent).

I also think that utilitarian thought experiments in general radiate the "can't be trusted to abide by norms" property; in the 'fat man' or 'organ donor' variants of the trolley problem, for example, the naive utilitarian answer is to murder, which is also a real risk that could make the conversation include an implicit threat.
