Interesting points. 100 years is unnecessarily long; it just simplified some of my arguments (every politician being dead, for instance).
If it were, say, 50 years, the arguments would still roughly hold. Then it becomes something that people do for their children, and not something for “the unborn children of my unborn children”, which doesn't seem real to people (even though it is). I think this probably solves the silliness issue and the constituency issue.
But I also think it might seem silly because no one has done it before. In December, putting a tree in yo...
I would assume that, for a private prison that has become good at its business, the benefits of more inmates would outweigh the liabilities, and that at some point it would (in principle, ignoring the free-rider problem for a moment) become easier to increase profits by making more things illegal than by trying to reduce the reoffending rate.
Ignoring the free-rider problem ("problem" being from the perspective of the prison), as the prison gets more and more current/former inmates, it becomes harder for that cost-benefit calculat...
The “Planck principle” seems more applicable to scientists who are strongly invested in a given hypothesis
Yep, that’s why I referred to your 2nd and 3rd traits: A better competing theory is only an inconvenient conclusion if you’re invested in the wrong theory (especially if you yourself created that theory).
I know IQ and these traits are probably correlated (again, since some level of intelligence is a prerequisite for most of the traits). But I’m assuming the reason you wrote the post is that a correlation across a population isn’t relevant when you’re dealing with a smart individual who lacks one of these traits.
I think you have to be smart to have all the OP’s listed traits, so sure, there’s going to be correlation. But what’s the phrase? “Science advances one funeral at a time.” If that’s true, then there are plenty of geniuses who can’t bring themselves to admit when someone else has a better theory. That would show that traits 2 and 3 are commonly lacking in smart people, which yes, makes those people dumber than they otherwise would be, but they’re still smart.
Wow, that essay explains strong anecdotes a lot better than I did. I knew about the low-variance aspect, but his third point and onwards made things even clearer for me. Thanks for the link!
Yep, I agree.
Maybe I should have gone into why everyone puts anecdotes at the bottom of the evidence hierarchy. I don't disagree that they belong there, especially if all else between the study types is equal. And even if the studies are quite different, the hierarchy is a decent rule of thumb. But it becomes a problem when people use it to disregard strong anecdotes and take weak RCTs as truth.
One big change that a lot of employers can make is changing their interviews and written tests.
I’ve been required to create a new policy from scratch in interview settings. “Okay now you should come up with an idea on the spot, and you will need to say why this policy should now be a legal requirement of every person in the country.” It’s exactly that type of surface-level thinking that policymakers should avoid.
You should be allowed to bring work that you’ve already done into the interview and the written application. It’s far more reflective of th...
(One of my comments from LessWrong)
If we were to see inflation going back to levels expected by the Fed (2-3% I suppose?) how would that change your forecast?
Great question. So my view is that there could be a few potential triggers for a sell-off cascade (via some combination of margin calls and panic selling), leading to a large drop. There are also a few triggers for increasing interest rates, not just inflation: The Fed doesn’t have a monopoly on rates. When they buy fewer bonds, they shift the demand curve left, decreasing the price, leading to high...
I’m thinking that you might be able to bet against experienced bettors who think that you’re the victim of confirmation bias (which you might be)
I’d say I’m neutral (though so would anyone who has confirmation bias). I’ve given reasons why these indicators may have lost their predictive value. My main concern is increased savings (and investment of those savings). But hey, we don’t get better at prediction until we actually make predictions.
I’m just looking for market odds. I’d prefer to read the other side that you mention before I size my bets, but I ...
That’s a fair question. Culture is extremely important (e.g. certain cultural norms facilitate corruption and cronyism, which leads to slower annual increases in quality of life indices), but whether cancelling, specifically, is a big problem, I’m not sure.
Government demonstrably changes culture. At a minor level, drink-driving laws and advertising campaigns have changed something that was a cultural norm into a serious crime. At a broader level, you have things like communist governments making religion illegal and creating a culture where everyone snitch...
Thanks for the link; I should read Overcoming Bias more. I liked Hanson’s Futarchy idea, specifically the idea of replacing the Fed with financial instruments (which I can no longer seem to find anywhere). (Though I think the idea of tying returns of a policy’s implementation to GDP+ is doomed for several technical reasons, including getting stuck at local maxima and a good policy choice being a losing bet because of unrelated policy failures). I think he probably influenced my prison and immigration idea, and really my whole methodology (along with Alvin ...
Well now I'm definitely glad I wrote "is not a new idea". I didn't know so many people had discussed similar proposals. Thank you all for the reading material. It'll be interesting to hear some downsides to funding retrospectively.
I mentioned the Future of Life Institute which, for those who haven't checked it out yet, does the "Future of Life" award. (Although, now that I think about it, all awards are retrospective.) They also do a podcast, which I haven't listened to in a while but, when I was listening, they had some really interesting discussions.
It's not that any criticism is bad, it's that people who agree with an idea (when political considerations are ignored) are snuffing it out based on questionable predictions of political feasibility. I just don't think people are good at predicting political feasibility. How many people said Trump would never be president (despite FiveThirtyEight warning there was a 30 percent chance)?
Rather than the only disagreement being political feasibility, I would actually prefer someone to be against a policy and criticise it based on something more substantive (li...
saying that it's unfeasible will tend to make it more unfeasible
Thank you for saying this. It's frustrating to have people who agree with you bat for the other team. I'd like to see how accurate people are for their infeasibility predictions: Take a list of policies that passed, a list that failed to pass, mix them together, and see how much better you can unscramble them than random chance. Your "I'm not going to talk about political feasibility in this post" idea is a good one that I'll use in future.
Poor meta-arguments I've noticed on the Forum:
It's frustrating to have people who agree with you bat for the other team.
I don't like "bat for the other team" here; it reminds me of "arguments are soldiers" and the idea that people on your "side" should agree your ideas are great, while the people who criticize your ideas are the enemy.
Criticism is good! Having accurate models of tractability (including political tractability) is good!
What I would say is:
within some small number
In terms of cardinal utility? I think drawing any line in the sand has problems when things are continuous because it falls right into a slippery slope (if ε doesn't make a real difference, what about drawing the line at 2ε, and then what about 3ε?).
But I think of our actions as discrete. Even if we design a system with some continuous parameter, the actual implementation of that system is going to be in discrete human actions. So I don't think we can get arbitrarily small differences in utility. Then maximalism (i.e. g...
I think he's saying "optimal future = best possible future", which necessarily has a non-zero probability.
Agreed, but at least in theory, a model that takes inmates' welfare into account at the proper level will, all else being equal, do better under utilitarian lights than a model that does not take inmate welfare into account.
What if the laws forced prisons to treat inmates in a particular way, and the legal treatment of inmates coincided with putting each inmate's wellbeing at the right level? Then the funding function could completely ignore the inmate's wellbeing, and the prisons' bids would drop to account for any extra cost to support the inmate's we...
That's a good point. You could set up the system so that it's "societal contribution" + funding - price (which is what it is at the moment) + "Convict's QALYs in dollars" (maybe plus some other stuff too). The fact that you have to value a murder means that you should already have the numbers to do the dollar conversion of the QALYs.
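As a toy sketch of that payment formula (all the numbers, and the QALY-to-dollar rate, are hypothetical):

```python
QALY_IN_DOLLARS = 50_000  # hypothetical conversion rate

def payment(societal_contribution, funding, price, inmate_qalys):
    """Toy version of the proposed formula: societal contribution
    + funding - price + the inmate's QALYs converted to dollars."""
    return societal_contribution + funding - price + inmate_qalys * QALY_IN_DOLLARS

# e.g. $300k of societal contribution, $200k of funding, a $400k
# winning bid, and 2 QALYs of inmate wellbeing:
print(payment(300_000, 200_000, 400_000, 2))  # 200000
```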
I'm hesitant to make that change though. The change would allow prisons to trade off societal benefit for the inmate's benefit, who, as some people say, "owes a debt to society". Allowing this trade-off would also reduce the de...
You mean the first part? (I.e. Why pay for lobbying when you share the "benefits" with your competitors and still have to compete?) Yeah, when a company becomes large enough, the benefits of a rule change can outweigh the cost of lobbying.
But, for this particular system, if a prison is large enough to lobby, then they're going to have a lot of liabilities from all of their former and current inmates. If they lobby for longer sentences or try to make more behaviours illegal, and one of their former inmates is caught doing one of these new crimes, the prison...
There are mechanisms that aggregate distributed knowledge, such as free-market pricing.
I cannot really evaluate the value of a grant if I have not seen all the other grants.
Not with 100 percent accuracy, but that's not the right question. We want to know whether it can be done better than chance. Someone can lack knowledge and be biased and still reliably do better than random (try playing chess against a computer that plays uniformly random moves).
...In addition, if there would be an easy and obvious system people would probably already have implemente
When designing a system, you give it certain goals to satisfy. A good example of this done well is voting theory. People come up with apparently desirable properties, such as the Smith criterion, and then demonstrate mathematically that certain voting methods succeed or fail the criterion. Some desirable goals cannot be achieved simultaneously (an example of this is Arrow's impossibility theorem).
Lotteries give every ticket an equal chance. And if each person has one ticket, this implies each person has an equal chance. But this goal is in conflict with ...
If people fill in the free-text box in the survey, this is essentially the same as sending an email. If I disagree with the fund's decisions, I can send them my reasons why. If my reasons aren't any good, the fund can see that, and ignore me; if I have good reasons, the fund should (hopefully) be swayed.
Votes without the free-text box filled in can't signal whether the voter's justifications are valid or not. Opinions have differing levels of information backing them up. An "unpopular" decision might be supported by everyone who knows what they're talking about; a "popular" decision might be considered to be bad by every informed person.
My idea of EA's essential beliefs is:
This doesn't commit you to a particular moral philosophy. You can rank timelines by whatever aspects you want: Your moral rule can tell you to only consider your own actions, and disregard their effects on other people's actions...
It happens in philosophy sometimes too: "Saving your wife over 10 strangers is morally required because..." Can't we just say that we aren't moral angels? It's not hypocritical to say the best thing to do is save the 10 strangers, and then not do it (unless you also claim to be morally perfect). Same thing here. You can treat yourself well even if it's not the best moral thing to do. You can value non-moral things.
I think you're conflating moral value with value in general. People value their pets, but this has nothing to do with the pet's instrumental moral value.
So a relevant question is "Are you allowed to trade off moral value for non-moral value?" To me, morality ranks (probability distributions of) timelines by moral preference. Morally better is morally better, but nothing is required of you. There's no "demandingness". I don't buy into the notions of "morally permissible" or "morally required": These lines in the sand seem like sociological observations (e.g...
Hey Bob, good post. I've had the same thought (i.e. the unit of moral analysis is timelines, or probability distributions of timelines) with different formalism
...The trolley problem gives you a choice between two timelines (T1 and T2). Each timeline can be represented as the set containing all statements that are true within that timeline. This representation can neatly state whether something is true within a given timeline or not: “You pull the lever” ∈ T1, and “You pull the lever” ∉ T2. Timelines contain statements that are combined as well as statements that
I watched those videos you linked. I don't judge you for feeling that way.
Did you convert anyone to veganism? If people did get converted, maybe there were even more effective ways to do so. Or maybe anger was the most effective way; I don't know. But if not, your own subjective experience was worse (by feeling contempt), other people felt worse, and fewer animals were helped. Anger might be justified but, assuming there was some better way to convert people, you'd be unintentionally prioritizing emotions ahead of helping the animals.
Another th...
“writing down stylized models of the world and solving for the optimal thing for EAs to do in them”
I think this is one of the most important things we can be doing. Maybe even the most important since it covers such a wide area and so much government policy is so far from optimal.
you just solve for the policy ... that maximizes your objective function, whatever that may be.
I don't think that's right. I've written about what it means for a system to do "the optimal thing" and the answer cannot be that a single policy maximizes your objective function:...
And bits describe proportional changes in the number of possibilities, not absolute changes...
And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10.
Ahhh. Thanks for clearing that up for me. Looking at the entropy formula, that makes sense and I get the same answer as you for each digit (3.3). If I understand, I incorrectly conflated "information" with "value of information".
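For anyone following along, the digit arithmetic can be checked directly (a quick Python sketch):

```python
import math

def bits(before, after):
    """Information gained by narrowing `before` equally likely
    possibilities down to `after`."""
    return math.log2(before / after)

# Each extra decimal digit narrows the possibilities by a factor of 10,
# so every digit carries the same amount of information:
print(bits(100, 10))  # first digit: ~3.32 bits
print(bits(10, 1))    # second digit: ~3.32 bits

# Per-thousandths vs per-cents: log2(1000) / log2(100) = 1.5,
# i.e. 50% more information, not 10x.
print(math.log2(1000) / math.log2(100))  # ~1.5
```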
I think this is better parsed as diminishing marginal returns to information.
How does this account for the leftmost digit giving the most information, rather than the rightmost digit (or indeed any digit between them)?
per-thousandths does not have double the information of per-cents, but 50% more
Let's say I give you $1 + $X, where X is either $0, $0.1, $0.2 ... or $0.9. (Note $1 is analogous to 1%, and X is equivalent to adding a decimal place. I.e. per-thousandths vs per-cents.) The average value of X, given a uniform distribution, is $0.45. Thus, agains...
Does this match your view?
Basically, yeah.
But I do think it's a mistake to update your credence based off someone else's credence without knowing their argument and without knowing whether they're calibrated. We typically don't know the latter, so I don't know why people are giving credences without supporting arguments. It's fine to have a credence without evidence, but why are people publicising such credences?
But you say invalid meta-arguments, and then give the example "people make logic mistakes so you might have too". That example seems perfectly valid, just often not very useful.
My definition of an invalid argument contains "arguments that don't reliably differentiate between good and bad arguments". "1+1=2" is also a correct statement, but that doesn't make it a valid response to any given argument. Arguments need to have relevancy. I dunno, I could be using "invalid" incorrectly here.
And I'd also say...
It's almost irrelevant; people should still provide the supporting arguments for their credences, otherwise evidence can get "double counted" (and there are "flow-on" effects, where the first person who updates another person's credence has a significant effect on the overall credence of the population). For example, say I have arguments A and B supporting my 90% credence on something. And you have arguments A, B and C supporting your 80% credence on something. And neither of us post our reasoning; we just post our credences....
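To illustrate the double-counting worry with hypothetical numbers (the 9:1 likelihood ratio is made up):

```python
# Both reporters saw the same single piece of evidence A, assumed
# here to have a 9:1 likelihood ratio.
prior_odds = 1.0  # 1:1 prior
lr_A = 9.0

# Each reporter's posterior: odds of 9:1, i.e. 90%.
posterior = (prior_odds * lr_A) / (prior_odds * lr_A + 1)
print(posterior)  # 0.9

# A reader who treats the two 90% reports as independent evidence
# multiplies the likelihood ratios, double-counting A:
pooled_odds = prior_odds * lr_A * lr_A  # 81:1
pooled = pooled_odds / (pooled_odds + 1)
print(round(pooled, 3))  # 0.988 -- overconfident, since only one
                         # piece of evidence actually exists
```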
The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what "pretty sure" means in common language), but ought to have drastically different implications for behavior!
Yes you're right. But I'm making a distinction between people's own credences and their ability to update the credences of other people. As far as changing the opinion of the reader, when someone says "I haven't thought much about it", it should be an indicator to not update your own credence by very m...
I'm not sure how you think that's what I said. Here's what I actually said:
A superforecaster's credence can shift my credence significantly...
If the credence of a random person has any value to my own credence, it's very low...
The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise)...
[credences are] how people should think...
if you're going to post your credence, provide some evidence so that you can update other people's credences too....
Yes, in most cases if somebody has important information that an event has XY% probability of occurring, I'd usually pay a lot more to know what X is than what Y is.
As you should, but Greg is still correct in saying that Y should be provided.
Regarding the bits of information, I think he's wrong because I'd assume information should be independent of the numeric base you use. So I think Y provides 10% of the information of X. (If you were using base 4 numbers, you'd throw away 25%, etc.)
But again, there's no point in throwing away that 10%.
I agree. Rounding has always been ridiculous to me. Methodologically, "Make your best guess given the evidence, then round" makes no sense. As long as your estimates are better than random chance, it's strictly less reliable than just "Make your best guess given the evidence".
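A quick simulation of that claim, under an assumed noise model where your unrounded estimates are already accurate to within a couple of percentage points:

```python
import random

random.seed(0)

def simulate(n=100_000, noise_sd=0.02):
    """Compare the mean absolute error of a noisy estimate against
    the same estimate rounded to the nearest 10%."""
    err_raw = err_rounded = 0.0
    for _ in range(n):
        truth = random.random()  # true probability
        estimate = min(1.0, max(0.0, truth + random.gauss(0, noise_sd)))
        rounded = round(estimate, 1)  # round to nearest 10%
        err_raw += abs(estimate - truth)
        err_rounded += abs(rounded - truth)
    return err_raw / n, err_rounded / n

raw, rounded = simulate()
print(raw < rounded)  # True: rounding adds quantization error on top
```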
Credences about credences confuse me a lot (is there infinite recursion here? I.e. credences about credences about credences...). My previous thoughts have been to give a credence range or to size a bet (e.g. "I'd bet $50 out of my $X of wealth at a Y o...
I mean, very frequently it's useful to just know what someone's credence is. That's often an order of magnitude cheaper to provide, and often is itself quite a bit of evidence.
I agree, but only if they're a reliable forecaster. A superforecaster's credence can shift my credence significantly. It's possible that their credences are based off a lot of information that shifts their own credence by 1%. In that case, it's not practical for them to provide all the evidence, and you are right.
But most people are poor forecaster...
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report.
Habryka seems to be talking about people who have evidence and are just not stating it, so we might be talking past one another. I said in my first comment "There's also a lot of pseudo-superforecasting ... without any evidence backing up those credences." I didn't say "without stating any evidence backing up those cred...
From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences
Sure there is: By communicating, we're trying to update one another's credences. You're not going to be very successful in doing so if you provide a credence without supporting evidence. The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise). If you have a credence that you keep to yourself, then yes, there's no need for supporting...
Ambiguous statements are bad, 100%, but so are clear, baseless statements.
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report. The former claim is undoubtedly true, but it doesn't necessarily describe a problematic phenomenon. (See Greg Lewis's recent post; I'm not sure if you disagree.). The latter claim would be very worrying if true, but I don't see reason to believe that ...
EA epistemology is weaker than expected.
I'd say nearly everyone's ability to determine an argument's strength is very weak. On the Forum, invalid meta-arguments* are pretty common, such as "people make logic mistakes so you might have too", rather than actually identifying the weaknesses in an argument. There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences. This seems to me like people are imitating sound arguments without actually unders...
There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences.
From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences, and in general I think there is a lot of value in people providing credences even if they don't provide additional evidence, if only to avoid problems of ambiguous language.
Yeah, that's right. The problem with my toy model is that it assumes that funds can actually estimate their optimal bid, which would require an exact prediction of their future returns at an exact time, which is not possible. Allowing bids to reference a single, agreed-upon global index reduces the problem to a prediction of costs, which is much easier for the funds. And in the long run, returns can't be higher than the return of the global index, so it should maximize long-run returns.
However, most (?) indices are made by committees, which I ...
Yeah, it's definitely flawed. I was more thinking that the bids could be made as a difference between an index (probably a global one). So the profit-maximizing bids for the funds would be the index return (whatever it happens to be) minus their expected costs. And then you have large underwriters of the firms, who make sure that the fund's processes are sound. What I'd like is everyone to be in Vanguard/Blackrock, but there should be some mechanism for others to overthrow them if someone can match the index at a lower cost.
Caught red handed. I'd been thinking about this idea for a while and was trying to get the maths to work last night, so I had my prison/immigration idea next to me for reference.
I like this idea; we should have many more second-price auctions out there. Do you have any further references about it?
Thanks. I'm not the best person to ask about auctions. For people looking for an introduction, this video is pretty good. If anyone's got a good textbook, I'd be interested.
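For readers who want the one-paragraph version of a second-price (Vickrey) auction, here's a minimal sketch (the bidder names and bids are made up):

```python
def second_price_auction(bids):
    """Highest bidder wins but pays the second-highest bid, which
    makes bidding your true valuation a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid
    return winner, price

print(second_price_auction({"alice": 120, "bob": 100, "carol": 90}))
# ('alice', 100): alice wins but pays bob's bid
```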
Ah yes, I have mentioned in other comments about regulation keeping private prisons in check too. I should have restated it here. I am in favour of checks and balances, which is why my goal for the system contains "...within the limits of the law". I agree with almost everything you say here (I'd keep some public prisons until confident that the market is mature enough to handle all cases better than the public system, but I wouldn't implement your 10-year loan).
Human rights laws. Etc.
Yep, I'm all for that. One thing that people ar...
Basically everyone was convinced by the theory of small loans to those too poor to receive finance.
I was against microfinance, but I also don't know how they justified the idea. I think empirical evidence should be used to undermine certain assumptions of models, such as "People will only take out a loan if it's in their own interests". Empirically, that's clearly not always the case (e.g. people go bankrupt from credit cards), and any model that relies on that statement being true may fail because of it. A theoretical argument wi...
Theoretical reasons are great but actual evidence [is] more important.
Good theoretical evidence is "actual evidence". No amount of empirical evidence is up to the task of proving there are an infinite number of primes. Our theoretical argument showing there are an infinite number of primes is the strongest form of evidence that can be given.
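For reference, the theoretical argument in question is Euclid's classic proof, sketched here:

```latex
\textbf{Theorem.} There are infinitely many primes.

\textbf{Proof sketch.} Suppose the primes were exactly $p_1, \dots, p_n$,
and let $N = p_1 p_2 \cdots p_n + 1$. Each $p_i$ divides $N - 1$, so none
divides $N$. But $N > 1$, so $N$ has some prime factor, which therefore
lies outside $\{p_1, \dots, p_n\}$, a contradiction. \qed
```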
That's not to say I think my argument is airtight, however. My argument could probably be made with more realistic assumptions (alternatively, more realistic assumptions might show my proposed system is ...
Thanks for the kind comment.
My guess is that the US would be the best place to start (a thick "market", poor outcomes), but I'm talking about prison systems in general.
I'm not familiar with the UK system, but I haven't heard of any prison system with a solid theoretical grounding. Theory is required because we want to compare a proposed system to all other possible systems and conclude that our proposal is the best. You want theoretical reasons to believe that your system will perform well, and that good performance will endure.
Mos...
Damn, the nicest comment I've ever gotten and it's a bot lol
Just one point of nuance. Even if the current government issues the asset with the intention of breaking the promise, they lose basically nothing in net present terms (because those cashflows 50 years into the future are discounted by 1+d to the 50th power). I've updated the post to make the point clearer.
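To put a hypothetical number on that (the 5% discount rate is an assumption):

```python
d = 0.05     # hypothetical discount rate
years = 50
# A $100 cashflow promised 50 years out, in net present terms:
present_value = 100 / (1 + d) ** years
print(round(present_value, 2))  # 8.72: worth under $9 today
```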