FCCC

Comments

Politics is far too meta

Criticism can be great. But I think we need an agreed-upon order of critical focus to have more productive arguments. Maybe this:

  1. Given the assumptions of the argument, does the policy satisfy its specified goals?
  2. Are the goals of the policy good? Is there a better goalset we could satisfy?
  3. Is the policy technically feasible to implement?
  4. Is the policy politically feasible to implement?

I think talking about political feasibility should never be the first thing we bring up when debating new ideas.

Politics is far too meta

saying that it's unfeasible will tend to make it more unfeasible

Thank you for saying this. It's frustrating to have people who agree with you bat for the other team. I'd like to see how accurate people's infeasibility predictions actually are: take a list of policies that passed, a list that failed to pass, mix them together, and see how much better than chance people can unscramble them. Your "I'm not going to talk about political feasibility in this post" idea is a good one that I'll use in future.
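
A rough sketch of how that scoring could work (the policies, outcomes, and predictions below are all made up for illustration):

```python
# Rough sketch with invented data: score someone's "will this pass?" predictions
# against a shuffled mix of policies that did and didn't pass.
import random

# (policy, actually_passed) - hypothetical examples, not real data
policies = [
    ("Policy A", True), ("Policy B", False), ("Policy C", True),
    ("Policy D", False), ("Policy E", True), ("Policy F", False),
]
random.shuffle(policies)  # mix them together so the order gives nothing away

# The forecaster's guesses, keyed by policy name (again, invented)
predictions = {
    "Policy A": True, "Policy B": True, "Policy C": True,
    "Policy D": False, "Policy E": False, "Policy F": False,
}

correct = sum(predictions[name] == passed for name, passed in policies)
accuracy = correct / len(policies)
print(f"Accuracy: {accuracy:.0%} vs. 50% from random guessing")
```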

Poor meta-arguments I've noticed on the Forum:

  • Using a general reference class when you have a better, more specific class available (e.g. taking an IQ test, having the results in your hand, and refusing to look at them because "I probably got 100 points, because that's the average.")
  • Bringing up common knowledge, i.e. things that are true but that everyone in the conversation already knows and applies. (E.g. "Logical arguments can be wrong in subtle ways, so just because your argument looks airtight doesn't mean it is." A much better contribution is to point out the weaknesses in the specific argument that's in front of you.)
  • And, as you say, predictions of infeasibility.
Good v. Optimal Futures

Ah, another victim of a last-minute edit (originally, I wrote "which is necessarily possible").

within some small number

In terms of cardinal utility? I think drawing any line in the sand has problems when things are continuous, because it falls right into a slippery slope (if ε doesn't make a real difference, what about drawing the line at 2ε, and then what about 3ε?).

But I think of our actions as discrete. Even if we design a system with some continuous parameter, the actual implementation of that system is going to be in discrete human actions. So I don't think we can get arbitrarily small differences in utility. Then maximalism (i.e. going for only ideal outcomes) makes sense when it comes to designing long-lasting institutions, since the small (but non-infinitesimal) differences add up across many people and over a long time.

Good v. Optimal Futures

I think he's saying "optimal future = best possible future", which necessarily has a non-zero probability.

How to Fix Private Prisons and Immigration

Agreed, but at least in theory, a model that takes inmates' welfare into account at the proper level will, all else being equal, do better under utilitarian lights than a model that does not take inmate welfare into account.

What if the laws forced prisons to treat inmates in a particular way, and the legal treatment of inmates coincided with putting each inmate's wellbeing at the right level? Then the funding function could completely ignore inmates' wellbeing, and the prisons' bids would drop to account for any extra cost of supporting inmates' wellbeing or any loss of societal contribution. That's what I was trying to do by saying the goal was to "maximize the total societal contribution of any given set of inmates within the limits of the law". There should definitely be limits on how a prison can treat its inmates, even if mistreating them were to serve the rest of society's interests.

But the more I think about it, the more I like the idea of having inmate welfare as part of the funding function. It would avoid having to develop the right laws to make the prison system function as intended, and it self-corrects better than laws do (the prisons that are better at supporting inmate welfare will outcompete the ones that are bad at it). And it would probably reduce the number of people who think that supporters of this policy change don't care about what happens to inmates, which is nice.

How to Fix Private Prisons and Immigration

That's a good point. You could set up the system so that the payment is "societal contribution" + funding − price (which is what it is at the moment) + "the convict's QALYs in dollars" (maybe plus some other terms too). The fact that you have to value a murder means you should already have the numbers you need to convert QALYs into dollars.
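
A minimal sketch of what that funding function could look like (the function name, parameters, and figures are placeholders, not part of the proposal):

```python
# Hypothetical sketch of the funding function discussed above. All names and
# numbers are illustrative assumptions, not from the original post.

def prison_payment(societal_contribution: float,
                   government_funding: float,
                   winning_bid_price: float,
                   inmate_qalys: float = 0.0,
                   dollars_per_qaly: float = 0.0) -> float:
    """Payment to the prison for a given inmate contract.

    The current proposal is societal_contribution + government_funding
    - winning_bid_price; the suggested change appends a QALY term so
    inmate welfare enters the objective directly.
    """
    return (societal_contribution
            + government_funding
            - winning_bid_price
            + inmate_qalys * dollars_per_qaly)

# Example with invented numbers: an inmate who later contributes $50k in value,
# with $30k of funding, a $60k winning bid, and 0.8 QALYs valued at $100k each.
print(prison_payment(50_000, 30_000, 60_000, inmate_qalys=0.8, dollars_per_qaly=100_000))
```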

I'm hesitant to make that change though. It would allow prisons to trade off societal benefit for the benefit of the inmate, who, as some people say, "owes a debt to society". Allowing this trade-off would also reduce the deterrence effect of prisons on would-be offenders, so denying the trade-off is not necessarily an anti-utilitarian stance.

And denying the trade-off doesn't mean the inmate is not looked after either. There's a kind of... "Laffer Curve" equivalent where decreasing inmate wellbeing beyond a certain point necessarily means a reduction in societal contribution (destroying an inmate's mind is not good for their future societal contribution). So inmate wellbeing is not minimized by the system I've described (it's not maximized either).

I'm not 100 percent set on the exact funding function. I might change my mind in the future.

How to Fix Private Prisons and Immigration

You mean the first part? (I.e. Why pay for lobbying when you share the "benefits" with your competitors and still have to compete?) Yeah, when a company becomes large enough, the benefits of a rule change can outweigh the cost of lobbying.

But, for this particular system, if a prison is large enough to lobby, it's going to have a lot of liabilities from all of its former and current inmates. If it lobbies for longer sentences or tries to make more behaviours illegal, and one of its former inmates is caught committing one of these new crimes, the prison has to pay.

One way prisons could avoid this is by paying someone else to take on these liabilities. But, in the contract, that party could require the prison to pay compensation for any lobbying that damages them.

So a lobbying prison (1) benefits from more inmates in the future, (2) has to pay the cost of lobbying, and (3) has to pay more for the additional liabilities of its past and current inmates (not for its future inmates, though, because those liabilities will be offset by a lower initial price for those inmate contracts). Points 1 and 2 are the same under the current prison system. Point 3 is new, and it should push in the direction of less lobbying, at least once the system has existed for a while.
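
To make point 3 concrete with some made-up numbers (none of these figures come from the post; they only show how the extra liability flips the sign of the lobbying payoff):

```python
# Illustrative arithmetic only: all figures are invented to show how point (3)
# changes the lobbying calculus under the proposed system.

benefit_from_future_inmates = 3_000_000       # (1) value of extra future contracts
lobbying_cost = 2_000_000                     # (2) cost of the lobbying campaign
added_liability_for_past_inmates = 4_000_000  # (3) extra expected payouts for
                                              #     former/current inmates caught
                                              #     under the new, broader laws

net_under_current_system = benefit_from_future_inmates - lobbying_cost
net_under_proposed_system = (benefit_from_future_inmates
                             - lobbying_cost
                             - added_liability_for_past_inmates)

print(net_under_current_system)    # 1,000,000: lobbying pays off today
print(net_under_proposed_system)   # -3,000,000: lobbying no longer pays
```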

Lotteries for everything?

There are mechanisms that aggregate distributed knowledge, such as free-market pricing.

I cannot really evaluate the value of a grant if I have not seen all the other grants.

Not with 100 percent accuracy, but that's not the right question. We want to know whether it can be done better than chance. Someone can lack knowledge and be biased and still reliably do better than random (try playing chess against a computer that plays uniformly random moves).

In addition, if there would be an easy and obvious system people would probably already have implemented it.

Wouldn't the "efficient-policy hypothesis" imply that lotteries are worse than the existing systems? I don't think you really believe this. Are our systems better than most hypothetical systems? Usually, but this doesn't mean there's no low-hanging fruit. There are plenty of good policy ideas that are well known and haven't been implemented, such as 100 percent land-value taxes.

Let's take a subset of the research funding problem: How should we decide which research on prisoner rehabilitation to fund? I've suggested a mechanism that would do this.

Lotteries for everything?
Answer by FCCC, Dec 04, 2020

When designing a system, you give it certain goals to satisfy. A good example of this done well is voting theory: people come up with apparently desirable properties, such as the Smith criterion, and then demonstrate mathematically that certain voting methods succeed or fail the criterion. Some desirable goals cannot be achieved simultaneously (Arrow's impossibility theorem is an example).
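
As a toy illustration (the preference profile and vote counts below are made up), you can compute the Smith set for a small election and check whether a method's winner falls inside it; on this profile, plurality picks a winner outside the Smith set, so it fails the criterion here:

```python
# Toy sketch: check the Smith criterion on one invented preference profile.
from itertools import combinations

# (ranking from most to least preferred, number of voters) - invented numbers
ballots = [(("A", "B", "C"), 40), (("B", "C", "A"), 35), (("C", "B", "A"), 25)]
candidates = ["A", "B", "C"]

def beats(x, y):
    """True if a strict majority ranks x above y."""
    x_over_y = sum(n for ranking, n in ballots if ranking.index(x) < ranking.index(y))
    total = sum(n for _, n in ballots)
    return x_over_y > total - x_over_y

def smith_set():
    """Smallest non-empty set whose members all beat every candidate outside it."""
    for size in range(1, len(candidates) + 1):
        for subset in combinations(candidates, size):
            if all(beats(inside, outside)
                   for inside in subset
                   for outside in candidates if outside not in subset):
                return set(subset)

def plurality_winner():
    firsts = {c: sum(n for ranking, n in ballots if ranking[0] == c) for c in candidates}
    return max(firsts, key=firsts.get)

print(smith_set())                        # {'B'}: B beats both A and C head-to-head
print(plurality_winner())                 # 'A': most first-place votes
print(plurality_winner() in smith_set())  # False -> plurality fails the Smith criterion here
```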

Lotteries give every ticket an equal chance, and if each person has one ticket, each person has an equal chance. But this goal conflicts with more important goals. I would guess that lotteries are almost never the best mechanism; where they improve the situation, it's because the existing mechanism is already bad. But in that case, I'd look further for even better systems.

If people fill in the free-text box in the survey, this is essentially the same as sending an email. If I disagree with the fund's decisions, I can send them my reasons why. If my reasons aren't any good, the fund can see that, and ignore me; if I have good reasons, the fund should (hopefully) be swayed.

Votes without the free-text box filled in can't signal whether the voter's justifications are valid or not. Opinions have differing levels of information backing them up. An "unpopular" decision might be supported by everyone who knows what they're talking about; a "popular" decision might be considered to be bad by every informed person.
