All of FCCC's Comments + Replies

Should Grants Fund EA Projects Retrospectively?

Well now I'm definitely glad I wrote "is not a new idea". I didn't know so many people had discussed similar proposals. Thank you all for the reading material. It'll be interesting to hear some downsides to funding retrospectively.

I mentioned the Future of Life Institute which, for those who haven't checked it out yet, does the "Future of Life" award. (Although, now that I think about it, all awards are retrospective.) They also do a podcast, which I haven't listened to in a while but, when I was listening, they had some really interesting discussions.

Politics is far too meta

It's not that any criticism is bad, it's that people who agree with an idea (when political considerations are ignored) are snuffing it out based on questionable predictions of political feasibility. I just don't think people are good at predicting political feasibility. How many people said Trump would never be president (despite FiveThirtyEight warning there was a 30 percent chance)?

Rather than the only disagreement being political feasibility, I would actually prefer someone to be against a policy and criticise it based on something more substantive (li... (read more)

ryan_b: I think this is much closer to the core problem. If we don't evaluate the object-level at all, our assessment of the political feasibility winds up being wrong. When I hear people say "politically feasible", what they mean at the object level is "will the current officeholders vote for it and also not get punished in their next election as a result". This ruins the political analysis, because it artificially constrains the time horizon. In turn this rules out political strategy questions like messaging (you have to shoe-horn it into whatever the current messaging is), salience (stuck with whatever the current priorities are in public opinion), or tradeoffs among different policy priorities entirely. All of this leaves aside having enough time to work on fundamental things like persuading the public, which can't be done over a single election season and is usually abandoned for shorter-term gains.
Politics is far too meta

saying that it's unfeasible will tend to make it more unfeasible

Thank you for saying this. It's frustrating to have people who agree with you bat for the other team. I'd like to see how accurate people's infeasibility predictions actually are: Take a list of policies that passed, a list that failed to pass, mix them together, and see how much better than random chance people can unscramble them. Your "I'm not going to talk about political feasibility in this post" idea is a good one that I'll use in future.

Poor meta-arguments I've noticed on the Forum:

  • Usi
... (read more)

It's frustrating to have people who agree with you bat for the other team.

I don't like "bat for the other team" here; it reminds me of "arguments are soldiers" and the idea that people on your "side" should agree your ideas are great, while the people who criticize your ideas are the enemy.

Criticism is good! Having accurate models of tractability (including political tractability) is good!

What I would say is:

  • Some "criticisms" are actually self-fulfilling prophecies, rather than being objective descriptions of reality. EAs aren't wary enough of these, and d
... (read more)
Good v. Optimal Futures

within some small number ϵ

In terms of cardinal utility? I think drawing any line in the sand has problems when things are continuous, because it falls right into a slippery slope (if ϵ doesn't make a real difference, what about drawing the line at 2ϵ, and then what about 3ϵ?).

But I think of our actions as discrete. Even if we design a system with some continuous parameter, the actual implementation of that system is going to be in discrete human actions. So I don't think we can get arbitrarily small differences in utility. Then maximalism (i.e. g... (read more)

Good v. Optimal Futures

I think he's saying "optimal future = best possible future", which necessarily has a non-zero probability.

athowes: Events which are possible may still have zero probability; see "Almost never" on this Wikipedia page [https://en.wikipedia.org/wiki/Almost_surely]. That being said, I think I still might object even if it was ϵ-optimal (within some small number ϵ > 0 of achieving the mathematically optimal future), unless this could be meaningfully justified somehow.
How to Fix Private Prisons and Immigration

Agreed, but at least in theory, a model that takes into account inmates' welfare at the proper level will, all else being equal, do better under utilitarian lights than a model that does not take inmate welfare into account.

What if the laws forced prisons to treat inmates in a particular way, and the legal treatment of inmates coincided with putting each inmate's wellbeing at the right level? Then the funding function could completely ignore the inmate's wellbeing, and the prisons' bids would drop to account for any extra cost to support the inmate's we... (read more)

How to Fix Private Prisons and Immigration

That's a good point. You could set up the system so that it's "societal contribution" + funding - price (which is what it is at the moment) + "Convict's QALYs in dollars" (maybe plus some other stuff too). The fact that you have to value a murder means that you should already have the numbers to do the dollar conversion of the QALYs.

I'm hesitant to make that change though. The change would allow prisons to trade off societal benefit for the inmate's benefit, who, as some people say, "owes a debt to society". Allowing this trade-off would also reduce the de... (read more)
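
As a rough sketch of that adjusted funding function (my own illustration with hypothetical numbers, not the post's actual specification):

```python
# Hypothetical illustration of the adjusted bid-scoring idea above.
# All names and numbers are made up; the actual funding function may differ.
def prison_score(societal_contribution, funding, price, convict_qalys, dollars_per_qaly):
    """Societal contribution + funding - price (the current proposal),
    plus the convict's QALYs converted to dollars (the suggested addition)."""
    return societal_contribution + funding - price + convict_qalys * dollars_per_qaly

print(prison_score(societal_contribution=250_000, funding=40_000, price=180_000,
                   convict_qalys=1.5, dollars_per_qaly=50_000))  # 185000.0
```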

Linch: Thanks for your engagement! Agreed, but at least in theory, a model that takes into account inmates' welfare at the proper level will, all else being equal, do better under utilitarian lights than a model that does not take inmate welfare into account. This may be an obvious point, but I made this same mistake ~4 years ago when discussing a different topic (animal testing), so I think it's worth flagging explicitly. Please feel free to edit the post if you do! I worry that many posts (my own included) on the internet are stale, and we don't currently have a protocol in place for declaring things to be outdated.
How to Fix Private Prisons and Immigration

You mean the first part? (I.e. Why pay for lobbying when you share the "benefits" with your competitors and still have to compete?) Yeah, when a company becomes large enough, the benefits of a rule change can outweigh the cost of lobbying.

But, for this particular system, if a prison is large enough to lobby, then they're going to have a lot of liabilities from all of their former and current inmates. If they lobby for longer sentences or try to make more behaviours illegal, and one of their former inmates is caught doing one of these new crimes, the prison... (read more)

Lotteries for everything?

There are mechanisms that aggregate distributed knowledge, such as free-market pricing.

I cannot really evaluate the value of a grant if I have not seen all the other grants.

Not with 100 percent accuracy, but that's not the right question. We want to know whether it can be done better than chance. Someone can lack knowledge and be biased and still reliably do better than random (try playing chess against a computer that plays uniformly random moves).

In addition, if there were an easy and obvious system, people would probably already have implemente

... (read more)
Lotteries for everything?
Answer by FCCC, Dec 04, 2020

When designing a system, you give it certain goals to satisfy. A good example of this done well is voting theory. People come up with apparently desirable properties, such as the Smith criterion, and then demonstrate mathematically that certain voting methods succeed or fail the criterion (a toy check of this is sketched below). Some desirable goals cannot be achieved simultaneously (an example of this is Arrow's impossibility theorem).

Lotteries give every ticket an equal chance. And if each person has one ticket, this implies each person has an equal chance. But this goal is in conflict with ... (read more)
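
To make the criterion-checking idea concrete, here is a toy sketch (mine, not from the comment): on a made-up preference profile, plurality voting fails to elect the Condorcet winner, who here is also the sole member of the Smith set.

```python
# Toy check: does plurality pick the Condorcet winner on one hypothetical profile?
# Each key is a strict ranking (best first); values are how many voters hold it.
profile = {
    ("A", "B", "C"): 4,
    ("B", "C", "A"): 3,
    ("C", "B", "A"): 2,
}

def plurality_winner(profile):
    tally = {}
    for ranking, n in profile.items():
        tally[ranking[0]] = tally.get(ranking[0], 0) + n
    return max(tally, key=tally.get)

def condorcet_winner(profile):
    candidates = {c for ranking in profile for c in ranking}
    for c in candidates:
        if all(
            sum(n for r, n in profile.items() if r.index(c) < r.index(d))
            > sum(n for r, n in profile.items() if r.index(d) < r.index(c))
            for d in candidates - {c}
        ):
            return c
    return None  # no Condorcet winner (a cycle)

print(plurality_winner(profile))   # A (most first-place votes)
print(condorcet_winner(profile))   # B (beats A 5-4 and C 7-2)
```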

FJehn: I think the problem is that it is really, really hard to come up with better systems. As mentioned above, research grants have quite a few problems. Those problems are founded in human bias and a lack of knowledge. I cannot really evaluate the value of a grant if I have not seen all the other grants, and I might be influenced by my biases and give it to a scientist I like or trust. In addition, if there were an easy and obvious system, people would probably already have implemented it. So lotteries solve this problem. There might be better approaches, but many of them probably need an all-knowing and unbiased arbiter, and I have the impression that we lack those. Basically it boils down to the question: am I better at evaluating this than chance? And I think people often are not, due to their unconscious biases and knowledge gaps.

If people fill in the free-text box in the survey, this is essentially the same as sending an email. If I disagree with the fund's decisions, I can send them my reasons why. If my reasons aren't any good, the fund can see that, and ignore me; if I have good reasons, the fund should (hopefully) be swayed.

Votes without the free-text box filled in can't signal whether the voter's justifications are valid or not. Opinions have differing levels of information backing them up. An "unpopular" decision might be supported by everyone who knows what they're talking about; a "popular" decision might be considered to be bad by every informed person.

EA's abstract moral epistemology
Answer by FCCC, Oct 22, 2020

My idea of EA's essential beliefs is:

  • Some possible timelines are much better than others
  • What "feels" like the best action often won't result in anything close to the best possible timeline
  • In such situations, it's better to disregard our feelings and go with the actions that get us closer to the best timeline.

This doesn't commit you to a particular moral philosophy. You can rank timelines by whatever aspects you want: Your moral rule can tell you to only consider your own actions, and disregard their effects on the behaviour of other people's actions... (read more)

Can my self-worth compare to my instrumental value?

It happens in philosophy sometimes too: "Saving your wife over 10 strangers is morally required because..." Can't we just say that we aren't moral angels? It's not hypocritical to say the best thing to do is save the 10 strangers, and then not do it (unless you also claim to be morally perfect). Same thing here. You can treat yourself well even if it's not the best moral thing to do. You can value non-moral things.

willbradshaw: This feels... not wrong, exactly, but also not what I was driving at with this comment. At least, I think I probably disagree with your conception of morality.
Can my self-worth compare to my instrumental value?

I think you're conflating moral value with value in general. People value their pets, but this has nothing to do with the pet's instrumental moral value.

So a relevant question is "Are you allowed to trade off moral value for non-moral value?" To me, morality ranks (probability distributions of) timelines by moral preference. Morally better is morally better, but nothing is required of you. There's no "demandingness". I don't buy into the notions of "morally permissible" or "morally required": These lines in the sand seem like sociological observations (e.g... (read more)

Timeline Utilitarianism

Hey Bob, good post. I've had the same thought (i.e. that the unit of moral analysis is timelines, or probability distributions of timelines) with a different formalism:

The trolley problem gives you a choice between two timelines (T1 and T2). Each timeline can be represented as the set containing all statements that are true within that timeline. This representation can neatly state whether something is true within a given timeline or not: "You pull the lever" ∈ T1, and "You pull the lever" ∉ T2. Timelines contain statements that are combined as well as statements that

... (read more)
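
A literal rendering of that set-of-true-statements formalism (my own toy example; the statements are invented):

```python
# Each timeline represented as the set of statements true within it.
T1 = frozenset({"You pull the lever", "One person dies"})
T2 = frozenset({"You don't pull the lever", "Five people die"})

print("You pull the lever" in T1)  # True
print("You pull the lever" in T2)  # False
```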
Deliberate Consumption of Emotional Content to Increase Altruistic Motivation

I watched those videos you linked. I don't judge you for feeling that way. 

Did you convert anyone to veganism? If people did get converted, maybe there were even more effective ways to do so. Or maybe anger was the most effective way; I don't know. But if not, your own subjective experience was worse (by feeling contempt), other people felt worse, and fewer animals were helped. Anger might be justified but, assuming there was some better way to convert people, you'd be unintentionally prioritizing emotions ahead of helping the animals. 

Another th... (read more)

The case of the missing cause prioritisation research

“writing down stylized models of the world and solving for the optimal thing for EAs to do in them”

I think this is one of the most important things we can be doing. Maybe even the most important since it covers such a wide area and so much government policy is so far from optimal.

you just solve for the policy ... that maximizes your objective function, whatever that may be. 

I don't think that's right. I've written about what it means for a system to do "the optimal thing" and the answer cannot be that a single policy maximizes your objective function:... (read more)

evelynciara: I wonder if we could create an open-source library of IAMs for researchers and EAs to use and audit.
Use resilience, instead of imprecision, to communicate uncertainty
And bits describe proportional changes in the number of possibilities, not absolute changes...
And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10.

Ahhh. Thanks for clearing that up for me. Looking at the entropy formula, that makes sense and I get the same answer as you for each digit (3.3). If I understand, I incorrectly conflated "information" with "value of information".
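
A worked version of that calculation (my own check, assuming the reported digits are uniformly distributed):

```latex
\[
  \log_2 100 \approx 6.64 \text{ bits (a per-cent figure)}, \qquad
  \log_2 1000 \approx 9.97 \text{ bits (a per-thousandth figure)},
\]
\[
  \log_2 1000 - \log_2 100 = \log_2 10 \approx 3.32 \text{ bits per additional digit.}
\]
```

So per-thousandths carry about 9.97 / 6.64 ≈ 1.5 times the information of per-cents: 50% more, not ten times more.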

Use resilience, instead of imprecision, to communicate uncertainty
I think this is better parsed as diminishing marginal returns to information.

How does this account for the leftmost digit giving the most information, rather than the rightmost digit (or indeed any digit between them)?

per-thousandths does not have double the information of per-cents, but 50% more

Let's say I give you $1 + $x, where x is either $0, $0.1, $0.2 ... or $0.9. (Note $1 is analogous to 1%, and x is equivalent to adding a decimal place, i.e. per-thousandths vs per-cents.) The average value of x, given a uniform distribution, is $0.45. Thus, agains... (read more)

[This comment is no longer endorsed by its author]
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
Does this match your view?

Basically, yeah.

But I do think it's a mistake to update your credence based off someone else's credence without knowing their argument and without knowing whether they're calibrated. We typically don't know the latter, so I don't know why people are giving credences without supporting arguments. It's fine to have a credence without evidence, but why are people publicising such credences?

MichaelA: I'd agree with a modified version of your claim, along the following lines: "You should update more based on someone's credence if you have more reason to believe their credence will track the truth, e.g. by knowing they've got good evidence (even if you haven't actually seen the evidence) or knowing they're well-calibrated. There'll be some cases where you have so little reason to believe their credence will track the truth that, for practical purposes, it's essentially not worth updating." But your claim at least sounds like it's instead that some people are calibrated while others aren't (a binary distinction), and when people aren't calibrated, you really shouldn't update based on their credences at all (at least if you haven't seen their arguments). I think calibration increases in a quantitative, continuous way, rather than switching from off to on. So I think we should just update on credences more the more calibrated the person they're from is. Does that sound right to you?
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
But you say invalid meta-arguments, and then give the example "people make logic mistakes so you might have too". That example seems perfectly valid, just often not very useful.

My definition of an invalid argument contains "arguments that don't reliably differentiate between good and bad arguments". "1+1=2" is also a correct statement, but that doesn't make it a valid response to any given argument. Arguments need to have relevancy. I dunno, I could be using "invalid" incorrectly here.

And I'd also say
... (read more)
MichaelA: Oh, when you said "Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.)", I assumed (perhaps mistakenly) that by "moral uncertainty" you meant something vaguely like the idea that "We should take moral uncertainty seriously, and think carefully about how best to handle it, rather than necessarily just going with whatever moral theory currently seems best to us." So not just the idea that we can't be certain about morality (which I'd be happy to say is just "correct"), but also the idea that that fact should change our behaviour in substantial ways. I think that both of those ideas are surprisingly rare outside of EA, but the latter one is rarer, and perhaps more distinctive to EA (though not unique to EA, as there are some non-EA philosophers who've done relevant work in that area). On my "inside view", the idea that we should "take moral uncertainty seriously" also seems extremely hard to contest. But I move a little away from such confidence, and probably wouldn't simply call it "correct", due to the fact that most non-EAs don't seem to explicitly endorse something clearly like that idea. (Though maybe they endorse somewhat similar ideas in practice, even just via ideas like "agree to disagree".)
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

It's almost irrelevant, people still should provide their supporting argument of their credence, otherwise evidence can get "double counted" (and there's "flow on" effects where the first person who updates another person's credence has a significant effect on the overall credence of the population). For example, say I have arguments A and B supporting my 90% credence on something. And you have arguments A, B and C supporting your 80% credence on something. And neither of us post our reasoning; we just post our credences.... (read more)
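
A toy numerical sketch of that double-counting worry (mine, with made-up likelihood ratios rather than the 90%/80% figures above): working in log-odds, treating someone's posted credence as if it were entirely independent evidence counts the shared arguments twice.

```python
import math

def logit(p): return math.log(p / (1 - p))
def sigmoid(x): return 1 / (1 + math.exp(-x))

prior = 0.5
lr_A = lr_B = lr_C = 2.0   # likelihood ratios of arguments A, B, C (hypothetical)

# My credence rests on A and B; yours rests on A, B and C.
mine  = sigmoid(logit(prior) + math.log(lr_A) + math.log(lr_B))                    # 0.80
yours = sigmoid(logit(prior) + math.log(lr_A) + math.log(lr_B) + math.log(lr_C))   # ~0.89

# Treating your posted credence as wholly independent evidence double-counts A and B:
naive = sigmoid(logit(mine) + logit(yours) - logit(prior))                         # ~0.97

# Knowing your arguments, the correct pooled update only adds the new argument C:
correct = sigmoid(logit(mine) + math.log(lr_C))                                    # ~0.89

print(mine, yours, naive, correct)
```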

Linch: I don't find your arguments persuasive for why people should give reasoning in addition to credences. I think posting reasoning is on the margin of net value, and I wish more people did it, but I also acknowledge that people's time is expensive so I understand why they choose not to. You list reasons why giving reasoning is beneficial, but not reasons for why it's sufficient to justify the cost. My question probing predictive ability of EAs earlier was an attempt to set right what I consider to be an inaccuracy in the internal impressions EAs have about the ability of superforecasters. In particular, it's not obvious to me that we should trust the judgments of superforecasters substantially more than we trust the judgments of other EAs.
MichaelA: My view is that giving explicit, quantitative credences plus stating the supporting evidence is typically better than giving explicit, quantitative credences without stating the supporting evidence (at least if we ignore time costs, information hazards [https://www.lesswrong.com/posts/R7szBR5H487XutfKy/what-are-information-hazards], etc.), which is in turn typically better than giving qualitative probability statements (e.g., "pretty sure") without stating the supporting evidence, and often better than just saying nothing. Does this match your view? In other words, are you essentially just arguing that "providing supporting arguments is a net benefit"? I ask because I had the impression that you were arguing that it's bad for people to give explicit, quantitative credences if they aren't also giving their supporting evidence (and that it'd be better for them to, in such cases, either use qualitative statements or just say nothing). Upon re-reading the thread, I got the sense that others may have gotten that impression too, but also I don't see you explicitly make that argument.
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what "pretty sure" means in common language), but ought to have drastically different implications for behavior!

Yes you're right. But I'm making a distinction between people's own credences and their ability to update the credences of other people. As far as changing the opinion of the reader, when someone says "I haven't thought much about it", it should be an indicator to not update your own credence by very m... (read more)

Linch: I'm curious if you agree or disagree with this claim: [...] With a specific operationalization like: [...]
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

I'm not sure how you think that's what I said. Here's what I actually said:

A superforecaster's credence can shift my credence significantly...
If the credence of a random person has any value to my own credence, it's very low...
The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise)...
[credences are] how people should think...
if you're going to post your credence, provide some evidence so that you can update other people's credences too.
... (read more)
Use resilience, instead of imprecision, to communicate uncertainty
Yes, in most cases if somebody has important information that an event has XY% probability of occurring, I'd usually pay a lot more to know what X is than what Y is.

As you should, but Greg is still correct in saying that Y should be provided.

Regarding the bits of information, I think he's wrong because I'd assume information should be independent of the numeric base you use. So I think Y provides 10% of the information of X. (If you were using base 4 numbers, you'd throw away 25%, etc.)

But again, there's no point in throwing away that 10%.

Tyle_Stelzig: In the technical information-theoretic sense, 'information' counts how many bits are required to convey a message. And bits describe proportional changes in the number of possibilities, not absolute changes. The first bit of information reduces 100 possibilities to 50, the second reduces 50 possibilities to 25, etc. So the bit that takes you from 100 possibilities to 50 is the same amount of information as the bit that takes you from 2 possibilities to 1. And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10. To take your example: if you were using two digits in base four to represent per-sixteenths, then each digit contains 50% of the information (two bits each, reducing the space of possibilities by a factor of four). To take the example of per-thousandths: each of the three digits contains a third of the information (3.3 bits each, reducing the space of possibilities by a factor of 10). But upvoted for clearly expressing your disagreement. :)
Use resilience, instead of imprecision, to communicate uncertainty

I agree. Rounding has always been ridiculous to me. Methodologically, "Make your best guess given the evidence, then round" makes no sense. As long as your estimates are better than random chance, it's strictly less reliable than just "Make your best guess given the evidence".

Credences about credences confuse me a lot (is there infinite recursion here? I.e. credences about credences about credences...). My previous thoughts have been to give a credence range or to size a bet (e.g. "I'd bet $50 out of my $X of wealth at a Y o... (read more)
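
On the rounding point above, a quick Monte Carlo sketch (mine, with arbitrary noise parameters) of why rounding an informative estimate only adds error:

```python
# Rounding noisy probability estimates to the nearest 10% increases mean squared
# error relative to reporting the unrounded estimate.
import random

random.seed(0)
n = 100_000
err_raw = err_rounded = 0.0
for _ in range(n):
    truth = random.random()                                        # true probability
    estimate = min(1.0, max(0.0, truth + random.gauss(0, 0.05)))   # noisy but informative guess
    rounded = round(estimate, 1)                                   # "imprecise" report, e.g. 0.7
    err_raw += (estimate - truth) ** 2
    err_rounded += (rounded - truth) ** 2

print(err_raw / n, err_rounded / n)   # the rounded estimates have higher MSE
```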

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
I mean, very frequently it's useful to just know what someone's credence is. That's often an order of magnitude cheaper to provide, and often is itself quite a bit of evidence.

I agree, but only if they're a reliable forecaster. A superforecaster's credence can shift my credence significantly. It's possible that their credences are based off a lot of information that shifts their own credence by 1%. In that case, it's not practical for them to provide all the evidence, and you are right.

But most people are poor forecaster... (read more)

Habryka: Yes, but unreliability does not mean that you instead just use vague words instead of explicit credences. It's a fine critique to say that people make too many arguments without giving evidence (something I also disagree with, but that isn't the subject of this thread), but you are concretely making the point that it's additionally bad for them to give explicit credences! But the credences only help, compared to the vague and ambiguous terms that people would use instead.
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report.

Habryka seems to be talking about people who have evidence and are just not stating it, so we might be talking past one another. I said in my first comment "There's also a lot of pseudo-superforcasting ... without any evidence backing up those credences." I didn't say "without stating any evidence backing up those cred... (read more)

Linch: I agree that EAs put superforecasters and superforecasting techniques on a pedestal, more than is warranted. Yes, I think it's a lot worse. Consider the two statements: [...] and [...]. The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what "pretty sure" means in common language), but ought to have drastically different implications for behavior! I basically think EA and associated communities would be better off having more precise credences, and being accountable for them. Otherwise, it's difficult to know if you were "really" wrong, even after checking hundreds of claims!
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences

Sure there is: By communicating, we're trying to update one another's credences. You're not going to be very successful in doing so if you provide a credence without supporting evidence. The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise). If you have a credence that you keep to yourself, then yes, there's no need for supporting... (read more)

Ambiguous statements are bad, 100%, but so are clear, baseless statements.

You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report. The former claim is undoubtedly true, but it doesn't necessarily describe a problematic phenomenon. (See Greg Lewis's recent post; I'm not sure if you disagree.) The latter claim would be very worrying if true, but I don't see reason to believe that ... (read more)

Habryka: I mean, very frequently it's useful to just know what someone's credence is. That's often an order of magnitude cheaper to provide, and often is itself quite a bit of evidence. This is like saying that all statements of opinions or expressions of feelings are bad, unless they are accompanied with evidence, which seems like it would massively worsen communication.
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
EA epistemology is weaker than expected.

I'd say nearly everyone's ability to determine an argument's strength is very weak. On the Forum, invalid meta-arguments* are pretty common, such as "people make logic mistakes so you might have too", rather than actually identifying the weaknesses in an argument. There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences. This seems to me like people are imitating sound arguments without actually unders... (read more)

MichaelA: Here are two claims I'd very much agree with:

  • It's often best to focus on object-level arguments rather than meta-level arguments, especially arguments alleging bias
  • One reason for that is that the meta-level arguments will often apply to a similar extent to a huge number of claims/people. E.g., a huge number of claims might be influenced substantially by confirmation bias.
  • (Here are two [http://web.archive.org/web/20200212212236/https://slatestarcodex.com/2019/07/17/caution-on-bias-arguments/] relevant posts [https://www.lesswrong.com/posts/o28fkhcZsBhhgfGjx/status-regulation-and-anxious-underconfidence].)

Is that what you meant? But you say invalid meta-arguments, and then give the example "people make logic mistakes so you might have too". That example seems perfectly valid, just often not very useful. And I'd also say that that example meta-argument could sometimes be useful. In particular, if someone seems extremely confident about something based on a particular chain of logical steps, it can be useful to remind them that there have been people in similar situations in the past who've been wrong (though also some who've been right). They're often wrong for reasons "outside their model", so this person not seeing any reason they'd be wrong doesn't provide extremely strong evidence that they're not. It would be invalid to say, based on that alone, "You're probably wrong", but saying they're plausibly wrong seems both true and potentially useful. (Also, isn't your comment primarily meta-arguments of a somewhat similar nature to "people make logic mistakes so you might have too"? I guess your comment is intended to be a bit closer to a specific reference class forecast type argument?) Describing that as pseudo-superforecasting feels unnecessarily pejorative. I think such people are just forecasting / providing estimates. They may indeed be inspired by Tetlock's work or other work with superforecasters, but that doesn't
There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences.

From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences, and in general I think there is a lot of value in people providing credences even if they don't provide additional evidence, if only to avoid problems of ambiguous language.

Maximizing the Long-Run Returns of Retirement Savings

Yeah, that's right. The problem with my toy model is that it assumes that funds can actually estimate their optimal bid, which would need to be an exact prediction of their future returns at an exact time, which is not possible. Allowing bids to reference a single, agreed-upon global index reduces the problem to a prediction of costs, which is much easier for the funds. And in the long run, returns can't be higher than the return of the global index, so it should maximize long-run returns.

However, most (?) indices are made by committees, which I ... (read more)

Larks: My understanding is that the committees generally make rules for the indices and then apply them relatively mechanistically, though they do occasionally change the rules. I think it is hard to totally get rid of this. You need some way to judge that a company's market cap is actually representative of market trading, as opposed to being manipulated by insiders (like LFIN was). Presumably if the index committee changed it to something absurd, the regulator could change their index provider for the next year's bidding, though you are at risk of small changes that do not meet the threshold for firing. As a minor technical note, gross returns often are (very slightly) higher than the index's, because the managers can profit from stock lending. This is what allows zero-fee ETFs (though they are also somewhat a marketing ploy).
Maximizing the Long-Run Returns of Retirement Savings

Yeah, it's definitely flawed. I was more thinking that the bids could be made as a difference between an index (probably a global one). So the profit-maximizing bids for the funds would be the index return (whatever it happens to be) minus their expected costs. And then you have large underwriters of the firms, who make sure that the fund's processes are sound. What I'd like is everyone to be in Vanguard/Blackrock, but there should be some mechanism for others to overthrow them if someone can match the index at a lower cost.

Larks: Ahhh, so basically the idea is that no underwriter would be willing to vouch for anything but a credible index shop. Seems plausible.
Maximizing the Long-Run Returns of Retirement Savings

Caught red handed. I'd been thinking about this idea for a while and was trying to get the maths to work last night, so I had my prison/immigration idea next to me for reference.

I like this idea; we should have many more second-price auctions out there. Do you have any further references about it?

Thanks. I'm not the best person to ask about auctions. For people looking for an introduction, this video is pretty good. If anyone's got a good textbook, I'd be interested.
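
For readers who want the mechanism in miniature, here is a generic second-price (Vickrey) sealed-bid auction; this is a sketch of the general idea, not the exact design from the post, and the bid interpretation is hypothetical.

```python
from typing import Dict, Tuple

def vickrey_auction(bids: Dict[str, float]) -> Tuple[str, float]:
    """The highest bidder wins but is only held to the second-highest bid,
    which makes bidding one's true value a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_bid = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_bid

# Hypothetical bids, e.g. the return relative to the agreed index that each fund commits to:
print(vickrey_auction({"FundA": 0.45, "FundB": 0.60, "FundC": 0.52}))
# -> ('FundB', 0.52)
```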

How to Fix Private Prisons and Immigration

Ah yes, I have mentioned in other comments about regulation keeping private prisons in check too. I should have restated it here. I am in favour of checks and balances, which is why my goal for the system contains "...within the limits of the law". I agree with almost everything you say here (I'd keep some public prisons until confident that the market is mature enough to handle all cases better than the public system, but I wouldn't implement your 10-year loan).

Human rights laws. Etc.

Yep, I'm all for that. One thing that people ar... (read more)

How to Fix Private Prisons and Immigration
Basically everyone was convinced by the theory of small loans to those too poor to receive finance.

I was against microfinance, but I also don't know how they justified the idea. I think empirical evidence should be used to undermine certain assumptions of models, such as "People will only take out a loan if it's in their own interests". Empirically, that's clearly not always the case (e.g. people go bankrupt from credit cards), and any model that relies on that statement being true may fail because of it. A theoretical argument wi... (read more)

weeatquince: Unfortunately I don't have much useful to contribute on this. I don't have experience running trials and pilots. I would think through the various scenarios by which a pilot could get started and then adapt to that. E.g. what if you had the senior management of one prison that was keen? What about a single state? What about a few prisons? Also worth recognising that data might take years. I used to know someone who worked on prison data collection and assessing the success of prisons; if I see her at some point I could raise this and message you.
How to Fix Private Prisons and Immigration
Theoretical reasons are great but actual evidence [is] more important.

Good theoretical evidence is "actual evidence". No amount of empirical evidence is up to the task of proving there are an infinite number of primes. Our theoretical argument showing there are an infinite number of primes is the strongest form of evidence that can be given.

That's not to say I think my argument is airtight, however. My argument could probably be made with more realistic assumptions (alternatively, more realistic assumptions might show my proposed system is ... (read more)

weeatquince: I strongly disagree. Additional checks and balances that prevent serious problems from occurring are good. You have already said your system could go wrong (you said "more realistic assumptions might show my proposed system is fundamentally mistaken") and maybe it could go wrong in subtle ways that take years to manifest as companies learn how they can twist the rules. You should be in favour of checks and balances, and might want to explore what additional systems of checks would work best for your proposal. Options include: a few prisons running on a different system (e.g. state-run); a regulator for your auction-based prisons; transparency; the prisons being on 10-year loans from the state with contracts that need regular renewing, so they would default to state ownership; human rights laws; etc. Maybe all of the above are things to have. As an example, one thing that could go wrong (although it looks like you have touched on this elsewhere in the comments) is prisons may not have a strong incentive to care about the welfare of the prisoners whilst they are in the prison.
weeatquince: MAIN POINTS: Looks like we are mostly on the same page. We both recognise the need for theoretical data and empirical data to play a role, and we both think that you have a good idea for prison reform. I still get the impression that you undervalue empirical evidence of existent systems compared to theoretical evidence, and may under-invest in understanding evidence that goes against the theory or could improve the model. (Or maybe I am being too harsh and we agree here too; hard to judge from a short exchange like this.) I am not sure I can persuade you to change much on this, but I go into detail on a few points below. Anyway, even if you are not persuaded, I expect (well, hope) that you would need to gather the empirical evidence before any senior policy makers look to implement this, so either way that seems like a good next step. Good luck :-)

TO ADDRESS THE POINTS RAISED: Firstly, apologies. I am not sure I explained things very well. Was late and I minced my words a bit. By "actual evidence" I was trying to just encompass the case of a similar policy already being in place and working. E.g. we know tobacco tax works well at achieving the policy aim of reducing smoking because we can see it working. Sorry for any confusion caused. A better example from development is microcredit (microfinance). Basically everyone was convinced by the theory of small loans to those too poor to receive finance. The guy who came up with the idea got a freaking Nobel Prize. Super-skeptics GiveWell used to have a page on the best microcredit charity. But it turns out (from multiple meta-analyses) that there was basically no way to make it work in practice (not for the world's poorest). Blanket statements like this – suggesting your idea or similar is the ONLY way prisons can work – still concern me and make me think that you value theoretical data too highly compared to empirical data. I don't know much about prison systems but I would be shocked if there was NO other good way to have
How to Fix Private Prisons and Immigration

Thanks for the kind comment.

My guess is that the US would be the best place to start (a thick "market", poor outcomes), but I'm talking about prison systems in general.

I'm not familiar with the UK system, but I haven't heard of any prison system with a solid theoretical grounding. Theory is required because we want to compare a proposed system to all other possible systems and conclude that our proposal is the best. You want theoretical reasons to believe that your system will perform well, and that good performance will endure.

Mos... (read more)

weeatquince: Hi. THEORY AND EVIDENCE: Correct me if I am wrong, but you seem to be implying that the "theoretical reasons" why a policy idea will work are necessary and more important than empirical evidence that a system has worked in some case (which may be misleading due to confounding factors like good people). If so, I strongly disagree:

  • Based on my 7 years' experience working in UK policy, I would say the opposite. Theoretical reasons are great, but actual evidence that a particular system has worked is super great, and in most cases more important.
  • Of course both can be useful. The world is complicated and policy is complicated, and both evidence and theory can lead you down the wrong path. Good theoretical policy ideas can turn out to be wrong, and well-evidenced policy ideas may not replicate as expected.
  • Consider international development. The effective altruism community has been saying for years (and backing up these claims) that in development you cannot just do things that theoretically sound like they will work (like building schools); you need to do things that have empirical evidence of working well.
  • People are very, very good at persuading themselves of what they believe (e.g. confirmation bias). A risk with policies driven by theoretical reasoning is that their adherents have ideological baggage and motivated reasoning and do not shift in line with new evidence. This is less of a risk for policy based on what works.

ON PRISONS: I have not considered all the details, but I do think you have a decent policy idea here. I would be interested to see it tried. I would make the following, hopefully constructive, suggestions to you. 1. Focus on countries where the prison system is actually broken. There are a lot of failings in policy and limited capacity to address them all. I do think "if it is not broke, don't fix it" is often a good maxim in policy, and countries with working systems should not be the f
How to Fix Private Prisons and Immigration

I emailed Robin Hanson about my immigration idea in 2018. His post was in 2019. But to be fair, he came up with futarchy well before I started working on policy.

pay annual dividends proportional to these numbers

Doing things in proportion (rather than selling the full value) undervalues the impact of good forecasts. Since making forecasts has a cost, proportional payment (where the proportionality constant is not equal to 1) would generate inefficient outcomes: Imagine the contribution of the immigrant is $100 and it costs $80 to make the forecast, then paying forecasters anything less than 80% will cause poor outcomes.
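
Restating that arithmetic (a sketch; the $100 and $80 figures come from the example above, the payout fractions are illustrative):

```python
contribution = 100.0   # value of the immigrant's contribution
forecast_cost = 80.0   # cost of producing the forecast

for payout_fraction in (0.5, 0.79, 0.80, 1.0):
    profit = payout_fraction * contribution - forecast_cost
    print(f"pay {payout_fraction:.0%} of contribution -> forecaster profit = {profit:+.0f}")
# Any fraction below 80% makes forecasting unprofitable, so the forecast never gets made,
# even though it is worth more than it costs to produce.
```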

How to Fix Private Prisons and Immigration
Does any country in the world create estimates of individual citizens' cost to social services? Does any country in the world have a system where companies can bid for the right to collect individuals' future tax revenue? Has anyone else (politicians, researchers, etc.) ever argued for a system resembling this one?

I'm not aware of any "net tax contribution" measurement, but I haven't done an extensive search either. I'm not aware of anyone arguing for anything close to the system I proposed. The closest (but still far aw... (read more)

How to Fix Private Prisons and Immigration
I agree that seems likely, but in my mind it's not the main reason to prevent it, and treating it as an afterthought or a happy coincidence is a serious omission.

No, this consequence was one of my intentions. It was not an afterthought. Not every goal needs to be stated, they can be implied.

You measure them only by what they can do for others

...by the convict's own free will. And just because that's the only thing being measured, doesn't mean I'm disregarding everything else. Societal contribution and a person's value are di... (read more)

Linch: Possibly a tangent, but I think it's maybe relevant that QALYs do not have that problem.
How to Fix Private Prisons and Immigration

That's a good point to bring up. There are a few ends that other people assign to prisons that come to mind: rehabilitation, deterrence, punishment, and removing the criminal from the population (protecting innocents). However, some of these goals can be achieved by other systems. The death penalty is completely compatible with the system I proposed: Though you may disagree with killing criminals for other reasons, it is (at least on the face of it) a deterrent, and it doesn't need to be carried out by prisons. The law could specify ways in which... (read more)

How to Fix Private Prisons and Immigration
My instinctive emotional reaction to this post is that it worries me, because it feels a bit like "purchasing a person", or purchasing their membership in civil society. I think that a common reaction to this kind of idea would be that it contributes to, or at least continues, the commodification and dehumanization of prison inmates, the reduction of people to their financial worth / bottom line

No one is going to run a prison for free--there has to be some money exchanged (even in public prisons, you must pay the employees). Whether that exchange... (read more)

BenMillwood: In the predominant popular consciousness, this is not sufficient for the exchange to be moral. Buying a slave and treating them well is not moral, even if they end up with a happier life than they otherwise would have had. Personally, I'm consequentialist, so in some sense I agree with you, but even then, "consequences" includes all consequences, including those on societal norms, perceptions, and attitudes, so in practice framing effects and philosophical objections do still have relevance. Of course there has to be an exchange of money, but it's still very relevant what, conceptually or practically, that money buys. We have concepts like "criminal law" and "human rights" because we see benefits to not permitting everything to be bought or sold or contracted, so it's worth considering whether something like this crosses one of those lines.

I agree that seems likely, but in my mind it's not the main reason to prevent it, and treating it as an afterthought or a happy coincidence is a serious omission. If your prison system's foundational goal doesn't recognize what (IMO) may be the most serious negative consequence of prison as it exists today, then your goal is inadequate. Indirect effects can't patch that. As a concrete example, there are people that you might predict are likely to die in prison (e.g. they have a terminal illness with a prognosis shorter than their remaining sentence). Their expected future tax revenue is roughly zero. Preventing their torture is still important, but your system won't view it as such.

Now that I'm thinking about it, I'm more convinced that this is exactly the kind of thing people are concerned about when they are concerned about commodification and dehumanization. Your system attempts to quantify the good consequences of rehabilitation, but entirely omits the benefits for the person being rehabilitated. You measure them only by what they can do for others – how they can be used. That seems textbook dehumanization to me, and the
How to Fix Private Prisons and Immigration
But perhaps this is what your remark about zero economic profit is meant to address. I didn't understand that; perhaps you can elaborate.

That's correct. The profit that most people think about is the accounting profit. Accounting profit ignores opportunity costs, which are what you give up by doing what you're doing (bear with me a moment). Economic profit, on the other hand, includes these opportunity costs in the calculation. For example, let's say Tom Cruise quits acting and decides to bake cakes for a li... (read more)
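
To spell out the distinction with hypothetical numbers (mine, not from the comment): suppose cake revenue is $120,000, baking costs are $50,000, and the forgone acting income is $90,000.

```latex
\[
  \text{accounting profit} = \text{revenue} - \text{explicit costs}
                           = \$120{,}000 - \$50{,}000 = \$70{,}000
\]
\[
  \text{economic profit} = \text{accounting profit} - \text{opportunity cost}
                         = \$70{,}000 - \$90{,}000 = -\$20{,}000
\]
```

Positive accounting profit, negative economic profit: the baker would have been better off staying in acting, which is the sense in which firms in a competitive market tend toward zero economic profit.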

How to Fix Private Prisons and Immigration

No, I don't think this is a problem. The prisons are competing against each other, not acting as a single, unified block. Why would a prison spend money on making something illegal (through lobbying) when they still have to outbid their opponents? Not only that, prisons would also have an additional liability to pay for their existing prisoners who might commit these new crimes after their release.

Linch: This would predict that organized corporate lobbying efforts, to a first approximation, do not exist, except for entrenching intrasector monopolies against direct competitors. I think this is sometimes untrue in practice [https://qz.com/1590961/taxpayers-are-paying-turbotax-to-keep-taxes-complicated/].
JohannWolfgang: Good point.
How to Fix Private Prisons and Immigration

Sorry about the confusion. I hope the new notation makes it easier. (I've removed the graphs.)

How to Fix Private Prisons and Immigration

Thanks, Larks.

I [think it is] a huge mistake that reformists focus on abolishing private prisons, rather than using them.

Yeah, me too. I've told people that "I have an idea for a private prison system" and they think it's a bad idea before they've heard any details. I think the government has probably done a better job than the private sector with prisons, so it's a bit of hard sell.

With privatisation you get what you pay for, and at the moment we pay for volume.

Correct! The performance of the private sector depends on what the system maximizes. The prison

... (read more)
How to Fix Private Prisons and Immigration

The graphs show what is encapsulated by what. The area to which a label corresponds is the smallest convex shape that encapsulates the label. For example, is the whole lower-left quadrant, which also encapsulates the monetary effect of crimes (which is why the monetary effect of crimes is not explicitly included in the formulas). doesn't stand for all monetary factors. It stands for every monetary factor except .

If the convict pays tax, that's a good thing for society (all else being equal). should increase. And it does, since... (read more)

JohannWolfgang: Thank you for the clarifications. According to Peter's comment, there already seem to be many informed people working both inside and outside the prison system. Maybe it would be sufficient to incentivize them better to make those bets, by introducing premiums for prisons that reduce the number of reconvictions of their previous inmates, taking into account some prior on how likely they were to recidivate based on their crime and socio-economic background. One could also try to increase their agency if needed; I mean, letting public officials make decisions without having to worry too much about protocol or having to obtain permission from elected superiors who might want to take a tough stance against crime, and letting researchers pursue any research they think is promising. Given the number of national and sub-national prison systems, a lot of different insights would potentially result from that, which could then be shared and produce large benefits - especially if you pay the sharers. Maybe the private system would still be more effective, but I am unsure by how much, and in that case my case still holds up, I think: you could make political progress much faster by pursuing a less radical idea, and the potential downsides would be lower.
How to Fix Private Prisons and Immigration

Yep, those perverse incentives that you identified are all good criticisms. If there's a theoretical model that says why a system will work, the real-world failure points of that system will be the assumptions of its model. The assumptions can be made to be true with the right regulations. My model assumes that prisons will act lawfully, which I think they will under the right punishments (since there's always a possibility of being caught).

I knew about the prison's incentive to murder high-risk inmates, but I didn't consider the other... (read more)

JohannWolfgang: The payoffs for the prison don't exist, but that might be fixed - at least to some extent - by introducing premiums, and there are payoffs for the state. Although states are not as constrained for funding as private companies, the costs of imprisonment have not gone unnoticed.