vaniver
Can we all just agree that if you’re gonna make some funding decision with horrendous optics, you should be expected to justify the decision with actual numbers and plans?

Justify to whom? I would like an EA that has some individual initiative, where people can make decisions using their resources to seek good outcomes. I agree that when actions have negative externalities, external checks would help. But it's not obvious to me that those external checks weren't passed in this case*, and if you want to propose a specific standard, we should try to figure out whether or not that standard would actually help with optics.

Like, if the purchase of Wytham Abbey had been posted on the EA forum, and some people had said it was a good idea and some people said it was a bad idea, and then the funders went ahead and bought it, would our optics situation look any different now? Is the idea that if anyone posted that it was a bad idea, they shouldn't have bought it?

[And we need to then investigate whether or not adding this friction to the process ends up harming it on net; property sales are different in lots of places, but there are some where adding a week to the "should we do this?" decision-making process means implicitly choosing not to buy any reasonably-priced property, since inventory moves too quickly, and only overpriced property stays on the market for more than a week.]

 

* I don't remember being consulted about Wytham, but I'm friends with the people running it and broadly trust their judgment, and I'd guess that they checked with people as to whether or not they thought it was a good idea. I wasn't consulted about the specific place Irena ended up buying, but I was consulted somewhat on whether or not Irena should buy a venue, and I thought she should, going so far as to be willing to support it with some of my charitable giving, which ended up not being necessary.

From The Snowball, dealing with Warren Buffett's son's stint as a director and PR person for ADM:

The second the FBI agents left, Howie called his father, flailing, saying, I don't know what to do, I don't have the facts, how do I know if these allegations are true? My name is on every press release. How can I be the spokesman for the company worldwide? What should I do, should I resign?

Buffett refrained from the obvious response, which was that, of his three children, only Howie could have wound up with an FBI agent in his living room after taking his first job in the corporate world. He listened to the story non-judgmentally and told Howie that it was his decision whether to stay at ADM. He gave only one piece of advice: Howie had to decide within the next twenty-four hours. If you stay in longer than that, he said, you'll become one of them. No matter what happens, it will be too late to get out.

That clarified things. Howie now realized that waiting was not a way to get more information to help him decide, it was making the decision to stay. He had to look at his options and understand as of right now what they meant.

If he resigned and they were innocent, he would lose friends and look like a jerk.

If he stayed and they were guilty, he would be viewed as consorting with criminals.

The next day Howie went in, resigned, and told the general counsel that he would take legal action against the company if they put his name on any more press releases. Resigning from the board was a major event. For a director to resign was like sending up a smoke signal that said the company was guilty, guilty, guilty. People at ADM did not make it easy for Howie. They pushed for reprieve, they asked how he could in effect convict them without a trial. Howie held firm, however, and got out.

Can you explain the "same upsides" part?

Yeah; by default people have entangled assets which will be put at risk by starting or investing in a new project. Limiting the liability that originates from that project to just the assets held by that project means that investors and founders can do things that seem to have positive return on their own, rather than 'positive return given that you're putting all of your other assets at stake.'

[Like, I agree that there are issues where the social benefits of actions and the private benefits of actions don't line up, and we should try to line them up as well as we can in order to incentivize the best action. I'm just noting that the standard guess for businesses is "we should try to decrease the private risk of starting new businesses"; I could buy that it's different for the x-risk environment, where we should not try to decrease the private risk of starting new risk-reduction projects, but it's not obviously the case.]
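
To make that concrete, here's a minimal numeric sketch (the payoff numbers are hypothetical, purely for illustration): the same project can be positive-expected-value under limited liability and negative-expected-value when your other assets are exposed.

```python
# Hypothetical payoffs, purely for illustration.
p_success = 0.4
gain_if_success = 3.0            # net return on a 1-unit investment
loss_if_failure_limited = 1.0    # lose only the invested unit
extra_personal_exposure = 2.0    # outside assets reachable under full liability

ev_limited = p_success * gain_if_success - (1 - p_success) * loss_if_failure_limited
ev_full = p_success * gain_if_success - (1 - p_success) * (
    loss_if_failure_limited + extra_personal_exposure
)

print(ev_limited)  # +0.6: worth doing with limited liability
print(ev_full)     # -0.6: not worth doing if your other assets are at stake
```

Same project, same social value; only the private downside changes.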

Therefore, we should be very wary of funding mechanisms that incentivize people to treat extremely harmful outcomes as if they were neutral (when making decisions about doing/funding projects that are related to anthropogenic x-risks).

Sure, I agree with this, and with the sense that the costs are large. The thing I'm looking for is the comparison between the benefits and the costs; are the costs larger?

[EDIT: Also, interventions that are carried out if and only if impact markets fund them seem selected for being net-negative, because they are ones that no classical EA funder would fund.]

Sure, I buy that adverse selection can make things worse; my guess was that the hope was that classical EA funders would also operate through the market. [Like, at some point your private markets become big enough that they become public markets, and I think we have solid reasons to believe a market mechanism can outperform specific experts, if there's enough profit at stake to attract substantial trading effort.]

This reminds me a lot of limited liability (see also Austin's comment, where he compares it to the for-profit startup market, which, because of limited liability for corporations, has prices bounded below by 0).

This is a historically unusual policy (full liability came first), and seems to me to have basically the same downsides (people do risky things, profiting if they win and walking away if they lose) and basically the same upsides (according to the theory supporting LLCs, without limited liability there's too little investment in and support of novel projects).

Can you say more about why you think this consideration is sufficient to be net negative? (I notice your post seems very 'do-no-harm' to me, instead of 'here are the positive and negative effects, and we think the negative effects are larger'. I'm also interested in Owen's impression on whether or not impact markets lead to more or less phase 2 work.)

I'm interested in fleshing out "what you're looking for"; do you have some examples of things written in the past which changed your minds, which you would have awarded prizes to?

For example, I thought about my old comment on patient long-termism, which observes that in order to say "I'm waiting to give later" as a complete strategy you need to identify the conditions under which you would stop waiting (as otherwise, your strategy is to give never). On the one hand, it feels "too short" to be considered, but on the other hand, it seems long enough to convey its point (at least, embedded in context as it was), and so any additional length would be 'more cost without benefit'.

And if this is just a one-off, then it seems a lot less concerning, and taking action seems much less pressing. (Though it seems much easier to verify that this is a pattern, by finding other people in a similar situation to yours, than to verify that it isn't, since there are incentives to be quiet about this sort of thing).


Is this the case? Often the reaction to the 'first transgression' will determine whether or not to do future ones--if people let it slide, then probably they don't care that much, whereas if they react strongly, it's important to repent and not do it again.

And when there are patterns of behavior, especially in cases with significant power dynamics, it seems unlikely that you'd be able to collect such stories (in a usable way) without a prominent example of someone who shared their story and had it go well for them.

What I'm saying is that if you believe that x-risk is 0.1%, then you think we're at least one in a million.

I think you're saying "if you believe that x-risk this century is 0.1%, then survival probability this century is 99.9%, and for total survival probability over the next trillion years to be 0.01%, there can be at most 9200 centuries with risk that high over the next trillion years (.999^9200=0.0001), which means we're in (most generously) a one-in-one-million century, as a trillion years is 10 billion centuries, which divided by ten thousand is a million." That seem right?
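
A quick sketch reproducing that arithmetic (the 0.1% and 0.01% figures are taken from the comment being paraphrased):

```python
import math

per_century_survival = 0.999   # 0.1% x-risk per century
total_survival = 0.0001        # 0.01% survival over a trillion years

# Maximum number of centuries that can carry 0.1% risk:
risky_centuries = math.log(total_survival) / math.log(per_century_survival)
print(round(risky_centuries))  # ~9206, i.e. roughly 9200

centuries = 1e12 / 100         # a trillion years is 10 billion centuries
print(centuries / risky_centuries)  # ~1.1 million: a one-in-a-million century
```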

Then, if the expected cost-effectiveness of the best opportunities varies substantially over time, there will be just one point in time at which your philanthropy will have the most impact, and you should try to max out your philanthropy at that time period, donating all your philanthropy at that time if you can.

Though I note that the only way one would ever take such opportunities, if offered, is by developing a view of what sorts of opportunities are good that is sufficiently motivating to actually take action at least once every few decades.

For example, when the most attractive opportunity so far appears in year 19 of investing and assessing opportunities, will our patient philanthropist direct all their money towards it, and then start saving again? Will they reason that they don't have sufficient evidence to overcome their prior that year 19 is not more attractive than the years to come? Will they say "well, I'm following the Secretary Problem solution, and 19 is less than 70/e, so I'm still in info-gathering mode"?
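
For reference, a minimal sketch of the 70/e rule being invoked (the 70-year horizon and the random opportunity values are assumptions for illustration): reject everything in the first ⌊70/e⌋ = 25 years, then commit to the first opportunity that beats everything seen so far.

```python
import math
import random

HORIZON = 70                           # assumed 70-year giving window
CUTOFF = math.floor(HORIZON / math.e)  # 25: observe-only years

def year_committed(values):
    """1/e rule: skip the first CUTOFF years, then take the first
    opportunity better than everything seen so far."""
    best_seen = max(values[:CUTOFF])
    for year in range(CUTOFF, HORIZON):
        if values[year] > best_seen:
            return year + 1            # years are 1-indexed
    return HORIZON                     # forced to take the last one

random.seed(0)
values = [random.random() for _ in range(HORIZON)]
values[18] = 10.0  # an exceptionally attractive year 19
print(year_committed(values))  # never 19: the rule is still observing
```

The strategy's answer to an outstanding year 19 is to let it pass, which is exactly the behavior being questioned here.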

They won't, of course, know which path had higher value in their particular world until they die, but it seems to me like most of the information content of a strategy that waits to pull the trigger is in when it decides to pull the trigger, and this feels like the least explicit part of your argument.

Compare to investing, where some people are fans of timing the market, and some people are fans of dollar-cost averaging. If you think the attractiveness of giving opportunities is going to be unpredictably volatile, then doing direct work or philanthropy every year is the optimal approach. If instead you think the attractiveness of giving opportunities is predictably volatile, or predictably stable, then patient philanthropy makes more sense.
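
As a toy illustration of the unpredictable-volatility case (i.i.d. lognormal opportunity quality is an assumption, not a claim about the real world), spreading giving across every year matches the expected impact of betting everything on any single pre-chosen year, while cutting the variance:

```python
import random
import statistics

random.seed(0)
YEARS, TRIALS = 30, 10_000

spread, timed = [], []
for _ in range(TRIALS):
    quality = [random.lognormvariate(0, 1) for _ in range(YEARS)]
    spread.append(sum(quality) / YEARS)  # give 1/YEARS of budget each year
    timed.append(quality[YEARS // 2])    # bet the whole budget on one year

print(statistics.mean(spread), statistics.mean(timed))    # roughly equal
print(statistics.stdev(spread), statistics.stdev(timed))  # timing is far noisier
```

If quality were instead predictably trending, picking the right year would dominate, which is the distinction drawn above.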

What seems odd to me is simultaneously holding the outside-view sense that we have insufficient evidence to correctly assess a promising opportunity now, and the sense that we should expect to correctly assess promising opportunities in the future when they do happen.

Now that the world has experienced COVID-19, everyone understands that pandemics could be bad

I found it somewhat surprising how quickly the pandemic became politically polarized; I am curious whether you expect this group to be partisan, and whether that would be a positive or negative factor.

[A related historical question: what was the political party membership of members of environmental groups in the US over time? I would vaguely suspect that it started off more evenly balanced than it is today.]

I felt confused about why I was presented with a fully general argument for something I thought I indicated I already considered.

In my original comment, I was trying to resolve the puzzle of why something would have to appear edgy instead of just having fewer filters, by pointing out the ways in which having unshared filters would lead to the appearance of edginess. [On reflection, I should've been clearer about the 'unshared' aspect of it.]
