MichaelStJules

When Hormel employees and other associated people gave $500k to an end-of-life care charity - a donation which is part of Lewis's data - I don't think this was a secret scheme to increase beef consumption.

Ya, I wouldn't want to count that. I didn't check what the data included.

People who work in agriculture aren't some sort of evil caricature who only donate money to oppose animal protection; a lot of their donations are probably motivated by the same concerns that motivate everyone else.

I agree. I think if the money comes through an interest/industry group or company, rather than just from an individual employee or farmer, then it's probably lobbying on behalf of that group or company. Contributions from individuals could be motivated more by political identity and other issues than by protecting whatever industry they work in.

Vegans could donate to an animal protection group, like HSUS, to lobby on their behalf. That should make it clear why they’re donating.

I doubt that was to support animal protection, though.

At least, it seems SBF lied or misled about Alameda having privileged access: Alameda could borrow and go deeply negative without posting adequate collateral and without being liquidated, something only Alameda was allowed to do, and this was intentional by design. That seems like fraud, but it wouldn't by itself imply that Alameda would borrow customers' funds without consent in violation of FTX's terms of service, which seems like the bigger problem at the centre of the case.

Also, it seems their insurance fund numbers were fake and overinflated.

https://www.citationneeded.news/the-fraud-was-in-the-code/

I haven't followed the case that closely and there's a good chance I'm missing something, but it's not obvious to me that they intended to allow Alameda to borrow customer funds that weren't explicitly and consensually offered for borrowing on FTX (according to FTX's own terms of service). However, I'm not sure what happened to allow Alameda to borrow such funds.

By design, only assets explicitly and consensually placed in a pool for lending (or identical assets up to at most the same total per asset, e.g. separately for USD, Bitcoin, etc.) should be available to borrow. You shouldn't let customers borrow more than other customers have consensually offered.[1] That would violate FTX's terms of service. It also seems like an obvious thing to code.
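To make that invariant concrete, here's a minimal sketch of the kind of per-asset check I have in mind. It's purely illustrative (the class and method names are my own, and this is not FTX's actual code); it just enforces, per asset, that no more is borrowed than has been consensually offered for lending.

```python
# Hypothetical sketch of the per-asset lending invariant described above; not FTX's actual code.
from collections import defaultdict

class LendingPool:
    def __init__(self):
        self.offered = defaultdict(float)   # asset -> total consensually offered for lending
        self.borrowed = defaultdict(float)  # asset -> total currently borrowed

    def offer(self, asset: str, amount: float) -> None:
        self.offered[asset] += amount

    def borrow(self, asset: str, amount: float) -> None:
        # Enforce the invariant separately for each asset (USD, Bitcoin, etc.):
        # never lend out more of an asset than customers have consensually offered.
        if self.borrowed[asset] + amount > self.offered[asset]:
            raise ValueError(f"borrowing {amount} {asset} would exceed what's offered for lending")
        self.borrowed[asset] += amount
```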

But they didn't ensure this, so what could have happened? Some possibilities:

  1. They just assumed the amounts consensually available for lending would always match or exceed the amounts being borrowed, without actually checking or ensuring this by design, separately by asset type (USD, Bitcoin, etc.). As long as more was never borrowed, they would not violate their terms of service. That's a bad design, but plausibly not fraud. But then they allowed Alameda to borrow more, and Alameda borrowed so much it dipped into customer funds without consent. They could have done this without knowing it, so again plausibly not fraud. Or maybe they did it knowingly, in which case it would be fraud. But I'm not sure the evidence presented supports intent beyond a reasonable doubt.
  2. They assumed they had enough net assets to cover customer assets, even if they had to sell different assets from what customers thought they held. Or they might have assumed they'd be able to cover whatever users wanted to withdraw at any given time, even if that meant not actually holding at least the same assets in the same or greater amounts, e.g. at least as much USD, at least as much Bitcoin, and so on. In either case, if they didn't care that they would not actually hold each asset separately in at least the amounts customers retained rights to (e.g. separately enough USD, enough Bitcoin, etc.), this would be against FTX's terms of service, and it would seem they never really intended to honor their own terms of service, which looks like fraud.
  1. ^

    The assets actually borrowed and lent don't need to match exactly. If A wants to lend Bitcoin and B wants to borrow USD, FTX could take A's Bitcoin, sell it for USD and then lend the USD to B. That would be risky in case the Bitcoin price increased, but A and B could assume this risk, or FTX could use an insurance fund or otherwise spread the risk across funds opted into lending/borrowing, depending on the terms of service. This needn't dip into other customer funds without consent. I don't know if FTX did this.
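To illustrate the footnote's point about asset conversion: the sketch below (my own illustration with made-up numbers, not a claim about what FTX did) shows the shortfall such a scheme would need to cover if the lent asset's price rose, without touching other customers' funds.

```python
# Hypothetical illustration of lending via asset conversion: A offers 1 BTC, B borrows USD.
# The exchange sells A's BTC for USD, lends that USD to B, and later must return 1 BTC to A.
# If the BTC price rose in the meantime, someone (A, B, or an insurance fund, per the terms
# of service) has to cover the shortfall, rather than other customers' funds.

def conversion_shortfall(btc_lent: float, price_at_loan: float, price_at_repayment: float) -> float:
    usd_raised = btc_lent * price_at_loan       # USD obtained by selling A's BTC and lent to B
    usd_needed = btc_lent * price_at_repayment  # USD needed to buy back A's BTC at repayment
    return max(0.0, usd_needed - usd_raised)

# Example: 1 BTC lent at $30,000 and repaid when BTC is at $36,000 leaves a $6,000 shortfall.
print(conversion_shortfall(1.0, 30_000, 36_000))
```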

This doesn't necessarily eliminate all risk aversion, because the outcomes of actions can also be substantially correlated across correlated agents for various reasons: correlated agents will tend to be biased in the same directions, the difficulty of AI alignment is correlated across the multiverse, the probability of consciousness and the moral weights of similar moral patients are correlated across the multiverse, etc. So, you could only apply the LLN or CLT after conditioning separately on the different possible values of such common factors, aggregating the conditional expected value across the multiverse, and then recombining.
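Here's a small Monte Carlo sketch of that conditioning step (my own illustration with made-up numbers): conditional on the common factor, the average outcome across many correlated agents concentrates by the LLN, but the uncertainty about the common factor itself doesn't wash out, so you aggregate conditionally and then recombine.

```python
# Illustrative only: outcomes are correlated across agents via a shared common factor,
# so the LLN applies conditionally on that factor, not unconditionally.
import random

def average_outcome_across_agents(n_agents: int) -> float:
    f = random.choice([0.2, 0.8])  # common factor shared by all agents, e.g. how tractable alignment is
    outcomes = [1.0 if random.random() < f else 0.0 for _ in range(n_agents)]
    return sum(outcomes) / n_agents

# With many agents, each run's average concentrates near 0.2 or 0.8 (the conditional
# expectations), not near the overall mean of 0.5: the common-factor risk remains.
print([round(average_outcome_across_agents(10_000), 3) for _ in range(10)])
```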

On cluelessness: if you have complex cluelessness as deep uncertainty about your expected value conditional on the possibility of acausal influence, then it seems likely you should still have complex cluelessness as deep uncertainty all-things-considered, because deep uncertainty will be infectious, at least if its range is larger than (or incomparable with) the corresponding value assuming acausal influence is impossible.

For example, suppose

  1. you assign 10% to acausal influence, and the expected value of some action conditional on it spans -5*10^50 to 10^51, a range due to deep uncertainty; and
  2. you assign 90% to no acausal influence, and the expected value conditional on it is 10^20.

Then the unconditional expected effects are still roughly -5*10^49 to 10^50, assuming the obvious intertheoretic comparisons between causal and acausal decision theories from MacAskill et al., 2021, and so deeply uncertain. If you don't make intertheoretic comparisons, then you could still be deeply uncertain, but it could depend on how exactly you treat normative uncertainty.
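Spelling out the mixture arithmetic with the same numbers (only the weighting over the two hypotheses is shown):

```python
# Reproducing the example's arithmetic; the numbers are the ones stipulated above.
p_acausal = 0.1
ev_acausal_low, ev_acausal_high = -5e50, 1e51  # deeply uncertain range conditional on acausal influence
ev_no_acausal = 1e20                           # expected value conditional on no acausal influence

low = p_acausal * ev_acausal_low + (1 - p_acausal) * ev_no_acausal    # ~ -5 * 10^49
high = p_acausal * ev_acausal_high + (1 - p_acausal) * ev_no_acausal  # ~ 10^50
print(low, high)
```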

If you instead use precise probabilities even with acausal influence and the obvious intertheoretic comparisons, then it would be epistemically suspicious if the expected value conditional on acausal influence were ~0 and didn't dominate the expected value without acausal influence. One little piece of evidence biasing you one way or another gets multiplied across the (possibly infinite) multiverse under acausal influence.[1]

  1. ^

    Maybe the expected value is also infinite without acausal influence, but a reasonable approach to infinite aggregation would probably find acausal influence to dominate anyway.

I would add that acausal influence is not only not Pascalian, but that it can make other things that may seem locally Pascalian or at least quite unlikely to make a positive local difference — like lottery tickets, voting and maybe an individual's own x-risk reduction work — become reasonably likely to make a large difference across a multiverse, because of variants of the Law of Large Numbers or Central Limit Theorem. This can practically limit risk aversion. See Wilkinson, 2022 (EA Forum post).
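As a rough illustration with made-up numbers (my own, not from Wilkinson's post): if each of N roughly conditionally independent copies of a decision succeeds with a tiny probability p, the expected number of successes is N*p, and the chance of no success at all vanishes once N is much larger than 1/p.

```python
# Illustrative only: a locally unlikely bet becomes very likely to pay off somewhere
# across a huge number of (conditionally independent) copies of the decision.
p = 1e-7          # local probability of success, e.g. a given vote being decisive
n_copies = 1e9    # copies of the decision across the multiverse (made-up number)

expected_successes = n_copies * p      # 100.0
prob_no_success = (1 - p) ** n_copies  # ~ exp(-100), essentially zero
print(expected_successes, prob_no_success)
```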

(EDITED)

I didn't refer to ignorance of the law. The point is that if you don't know you took something without paying, it's not theft. Theft requires intent.

https://www.law.cornell.edu/wex/theft

A jury can find someone guilty of theft (or fraud) without adequate evidence of intent, but that would be a misapplication of the law and arguably a wrongful conviction.

If you find out later that you took something without paying and make no attempt to return or repay it, or don't intend to, it might be theft, because then you intend to keep what you've taken. I don't know. If you're already unable to pay it back at the time you find out (because of losses or spending), I don't know how that would be treated, but probably less harshly, and maybe just under civil law, with a debt repayment plan or forfeiture of future assets rather than criminal conviction.

Either way, fraud definitely requires intent.

Ah, I misunderstood.

Still, I think they did violate their own terms of service and effectively misappropriated/misused customers' funds, whether or not it was intentional/fraud/criminal.

If I tell you I'll hold your assets and won't loan them out or trade with them, but I don't take reasonable steps to ensure that and end up accidentally loaning them out and trading with them, then I've probably done something wrong.

It may not be criminal without intent, though, and I just owe you your assets (or equivalent value) back. Accidentally walking out of a store with an item you didn't pay for isn't a crime, although you're still liable for returning it/repayment.

I just read the Wikipedia page on the case and didn't see compelling evidence of intent, at least not beyond a reasonable doubt: https://en.m.wikipedia.org/wiki/United_States_v._Bankman-Fried

Also Googling "sbf prove intent" (without quotes) didn't turn up anything very compelling, at least for the first handful of results, in my view.

There's a lot here, so I'll respond to what seems to be most cruxy to me.

Another way to get this intuition is to imagine an unfeeling robot that derives the concept of utility from some combination of interviewing moral patients and constructing a first principles theory[2]. It could even get the correct theory, and derive that e.g. breaking your arm is 10 times as bad as stubbing your toe. It would still be in the dark about how bad these things are in absolute terms though.

I agree with this, but I don't think this is our epistemic position, because we can understand all value relative to our own experiences. (See also a thread about an unfeeling moral agent here.)

My claim is that the epistemic position of all the different theories of welfare are effectively that of this robot. And as a result of this, observing any absolute amount of welfare (utility) under theory A shouldn't update you as to what the amount would be under theory B, because both theories were consistent with any absolute amount of welfare to begin with. In fact they were "maximally uncertain" about the absolute amount, no amount should be any more or less of a surprise under either theory.

I agree that directly observing the value of a toe stub, say, under hedonism might not tell you much or anything about its absolute value under non-hedonistic theories of welfare.[1]

However, I think we can say more across closely related precise theories. You can fix the badness of a specific toe stub across many precise theories, and then separately fix the badness of a papercut and many other things under the same theories. This is because some theories are meant to explain the same things, and it's those things to which we're assigning value, not the theories themselves. See this section of my post. In practice those things are human welfare (or yours specifically), so we can just take the (accessed) human-relative stances.

You illustrate with neuron count theories, and I would in fact say we should fix human welfare across those theories (under hedonism, say, and perhaps separately for different reference point welfare states), so evidence about absolute value under one hedonistic neuron count theory would be evidence about absolute value under other hedonistic theories.
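As a toy illustration of fixing human welfare across theories and taking expectations in human-relative units (the probabilities and weights below are made up for illustration, not estimates from the post or anywhere else):

```python
# Toy sketch: every theory is expressed on the same scale (human welfare = 1), so expected
# chicken welfare capacity is just a probability-weighted average on that common scale,
# avoiding the two envelopes problem. All numbers are made up.
theories = [
    # (probability of theory, chicken welfare capacity per unit of human welfare under that theory)
    (0.4, 1.0),    # e.g. welfare capacity roughly independent of neuron count
    (0.4, 0.003),  # e.g. welfare capacity proportional to neuron count
    (0.2, 0.05),   # e.g. some intermediate weighting
]

expected_chicken_weight = sum(p * w for p, w in theories)
print(expected_chicken_weight)  # expected chicken welfare capacity, in human-relative units
```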

I suspect conscious subsystems don't necessarily generate a two envelopes problem; you just need to calculate the expected number of subsystems and their expected aggregate welfare relative to accessed human welfare. But it might depend on which versions of conscious subsystems we're considering.

For predictions of chicken sentience, I'd say to take expectations relative to human welfare (separately with different reference point welfare states).

  1. ^

    I'd add a caveat that evidence about relative value under one theory can be evidence under another. If you find out that a toe stub is less bad than expected relative to other things under hedonism, then the same evidence would typically support that it's less bad for desires and belief-like preferences than you expected relative to the same other things, too.
