All of FCCC's Comments + Replies

Damn, the nicest comment I've ever gotten and it's a bot lol

Just one point of nuance. Even if the current government issues the asset with the intention of breaking the promise, they lose basically nothing in net present terms (because those cashflows 50 years into the future are discounted by 1+d to the 50th power). I've updated the post to make the point clearer.
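(For concreteness, a minimal sketch of that discounting; the 5% discount rate and $100 cashflow are purely illustrative assumptions:)

```python
# Present value of a cashflow promised 50 years from now.
# The 5% discount rate and $100 amount are illustrative only.
d = 0.05          # assumed annual discount rate
cashflow = 100.0  # nominal payment due in 50 years
years = 50

present_value = cashflow / (1 + d) ** years
print(round(present_value, 2))  # ~8.72: breaking the promise costs almost nothing today
```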

Interesting points. 100 years is unnecessarily long, it just simplified some of my arguments (every politician being dead, for instance).

If it were, say, 50 years, the arguments still roughly hold. Then it becomes something that people do for their children, and not something for “the unborn children of my unborn children” which doesn't seem real to people (even though it is). I think this probably solves the silliness issue, and the constituency issue.

But I also think it might seem silly because no one has done it before. In December, putting a tree in yo... (read more)

I would assume that, for a private prison that has become good at its business, the benefits of more inmates would outweigh the liabilities, and that at some point it would (in principle, ignoring the free-rider problem for a moment) become easier to increase profits by increasing revenue through making more things illegal than by trying to reduce the reoffending rate.

Ignoring the free-rider problem ("problem" being from the perspective of the prison), as the prison gets more and more current/former inmates, it becomes harder for that cost-benefit calculat... (read more)

1
JohannWolfgang
2y
Actually, I was referring to a point you made in an earlier comment: So do we both agree that (1) does not hold in the current system?

The “Planck principle” seems more applicable to scientists who are strongly invested in a given hypothesis

Yep, that’s why I referred to your 2nd and 3rd traits: A better competing theory is only an inconvenient conclusion if you’re invested in the wrong theory (especially if you yourself created that theory).

I know IQ and these traits are probably correlated (again, since some level of intelligence is a prerequisite for most of the traits). But I’m assuming the reason you wrote the post is that a correlation across a population isn’t relevant when you’re dealing with a smart individual who lacks one of these traits.

5
Magnus Vinding
2y
I think it's important to stress that it's not just that some people with an extremely high IQ fail to change their minds on certain issues, and more generally fail to overcome confirmation bias (which I think is fairly unsurprising). A key point is that there actually doesn't appear to be much of a correlation at all between IQ and resistance to confirmation bias. So to slightly paraphrase what you wrote above, I didn't just write the post because a correlation across a population is of limited relevance when you’re dealing with a smart individual who lacks one of these traits, but also because for a number of these traits (e.g. interpersonal kindness, being driven, and limiting confirmation bias), there seems to be virtually no correlation in the first place. And also because these other skills are likely easier to improve than IQ, implying that there is a tractability case for focusing more on developing and incentivizing these other traits.

I think you have to be smart to have all the OP’s listed traits, so sure, there’s going to be correlation. But what’s the phrase? “Science advances one funeral at a time.” If that’s true, then there are plenty of geniuses who can’t bring themselves to admit when someone else has a better theory. That would show that traits 2 and 3 are commonly lacking in smart people, which yes, makes those people dumber than they otherwise would be, but they’re still smart.

1
Magnus Vinding
2y
If that were literally true, then science wouldn't ever advance much. :) It seems that most scientists are in fact willing to change their minds when strong evidence has been provided for a hypothesis that goes against the previously accepted view. The "Planck principle" seems more applicable to scientists who are strongly invested in a given hypothesis, but even in that reference class, I suspect that most scientists do actually change their minds during their lifetime when the evidence is strong. And even if that were not the case,  I don't think it would count as compelling evidence in favor of thinking that IQ isn't strongly correlated with less confirmation bias. (E.g. non-scientists might still do far worse.) I think stronger evidence for a weak or non-existent correlation between IQ and resistance to confirmation bias is found in the psychological studies on the matter. :)

Wow, that essay explains strong anecdotes a lot better than I did. I knew about the low-variance aspect, but his third point and onwards made things even clearer for me. Thanks for the link!

Yep, I agree.

Maybe I should have gone into why everyone puts anecdotes at the bottom of the evidence hierarchy. I don't disagree that they belong there, especially if all else between the study types is equal. And even if the studies are quite different, the hierarchy is a decent rule of thumb. But it becomes a problem when people use it to disregard strong anecdotes and take weak RCTs as truth.

2
Peter S. Park
2y
I think so too! A strong anecdote can directly illustrate a cause-and-effect relationship that is consistent with a certain plausible theory of the underlying system. And correct causal understanding is essential for making externally valid predictions.

One big change that a lot of employers can make is changing their interviews and written tests.

I’ve been required to create a new policy from scratch in interview settings. “Okay now you should come up with an idea on the spot, and you will need to say why this policy should now be a legal requirement of every person in the country.” It’s exactly that type of surface-level thinking that policymakers should avoid.

You should be allowed to bring in work that you’ve already made into the interview and for the written application. It’s far more reflective of th... (read more)

(One of my comments from LessWrong)

If we were to see inflation going back to levels expected by the Fed (2-3% I suppose?) how would that change your forecast?

Great question. So my view is that there could be a few potential triggers for a sell-off cascade (via some combination of margin calls and panic selling), leading to a large drop. There are also a few triggers for increasing interest rates, not just inflation: The Fed doesn’t have a monopoly on rates. When they buy fewer bonds, they shift the demand curve left, decreasing the price, leading to high... (read more)
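(To illustrate the bond-price/yield mechanics being gestured at, here is a rough sketch assuming a zero-coupon bond and made-up prices; it is not a claim about actual Treasury pricing:)

```python
# Lower bond prices imply higher yields: when demand for bonds falls,
# the price drops and the implied interest rate rises. Numbers are illustrative.
face_value = 1000.0  # paid at maturity
years = 10

def implied_yield(price: float) -> float:
    """Annualised yield of a zero-coupon bond bought today at `price`."""
    return (face_value / price) ** (1 / years) - 1

print(round(implied_yield(820.0), 4))  # 0.02   -> ~2.0% when demand is strong
print(round(implied_yield(740.0), 4))  # 0.0306 -> ~3.1% after prices fall
```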

I’m thinking that you might be able to bet against experienced bettors who think that you’re the victim of confirmation bias (which you might be)

I’d say I’m neutral (though so would anyone who has confirmation bias). I’ve given reasons why these indicators may have lost their predictive value. My main concern is increased savings (and investment of those savings). But hey, we don’t get better at prediction until we actually make predictions.

I’m just looking for market odds. I’d prefer to read the other side that you mention before I size my bets, but I ... (read more)

That’s a fair question. Culture is extremely important (e.g. certain cultural norms facilitate corruption and cronyism, which leads to slower annual increases in quality of life indices), but whether cancelling, specifically, is a big problem, I’m not sure.

Government demonstrably changes culture. At a minor level, drink-driving laws and advertising campaigns have changed something that was a cultural norm into a serious crime. At a broader level, you have things like communist governments making religion illegal and creating a culture where everyone snitch... (read more)

Thanks for the link; I should read Overcoming Bias more. I liked Hanson’s Futarchy idea, specifically the idea of replacing the Fed with financial instruments (which I can no longer seem to find anywhere). (Though I think the idea of tying returns of a policy’s implementation to GDP+ is doomed for several technical reasons, including getting stuck at local maxima and a good policy choice being a losing bet because of unrelated policy failures). I think he probably influenced my prison and immigration idea, and really my whole methodology (along with Alvin ... (read more)

Well now I'm definitely glad I wrote "is not a new idea". I didn't know so many people had discussed similar proposals. Thank you all for the reading material. It'll be interesting to hear some downsides to funding retrospectively.

I mentioned the Future of Life Institute which, for those who haven't checked it out yet, does the "Future of Life" award. (Although, now that I think about it, all awards are retrospective.) They also do a podcast, which I haven't listened to in a while but, when I was listening, they had some really interesting discussions.

It's not that any criticism is bad, it's that people who agree with an idea (when political considerations are ignored) are snuffing it out based on questionable predictions of political feasibility. I just don't think people are good at predicting political feasibility. How many people said Trump would never be president (despite FiveThirtyEight warning there was a 30 percent chance)?

Rather than the only disagreement being political feasibility, I would actually prefer someone to be against a policy and criticise it based on something more substantive (li... (read more)

2
ryan_b
3y
I think this is much closer to the core problem. If we don't evaluate the object-level at all, our assessment of the political feasibility winds up being wrong. When I hear people say "politically feasible" what they mean at the object level is "will the current officeholders vote for it and also not get punished in their next election as a result." This ruins the political analysis, because it artificially constrains the time horizon. In turn this rules out political strategy questions like messaging (you have to shoe-horn it into whatever the current messaging is) or salience (stuck with whatever the current priorities are in public opinion) or tradeoffs among different policy priorities entirely. All of this leaves aside enough time to work on fundamental things like persuading the public, which can't be done over a single election season and is usually abandoned for shorter term gains.

saying that it's unfeasible will tend to make it more unfeasible

Thank you for saying this. It's frustrating to have people who agree with you bat for the other team. I'd like to see how accurate people are for their infeasibility predictions: Take a list of policies that passed, a list that failed to pass, mix them together, and see how much better you can unscramble them than random chance. Your "I'm not going to talk about political feasibility in this post" idea is a good one that I'll use in future.

Poor meta-arguments I've noticed on the Forum:

  • Usi
... (read more)

It's frustrating to have people who agree with you bat for the other team.

I don't like "bat for the other team" here; it reminds me of "arguments are soldiers" and the idea that people on your "side" should agree your ideas are great, while the people who criticize your ideas are the enemy.

Criticism is good! Having accurate models of tractability (including political tractability) is good!

What I would say is:

  • Some "criticisms" are actually self-fulfilling prophecies, rather than being objective descriptions of reality. EAs aren't wary enough of these, and d
... (read more)

within some small number ϵ

In terms of cardinal utility? I think drawing any line in the sand has problems when things are continuous because it falls right into a slippery slope (if ϵ doesn't make a real difference, what about drawing the line at 2ϵ, and then what about 3ϵ?).

But I think of our actions as discrete. Even if we design a system with some continuous parameter, the actual implementation of that system is going to be in discrete human actions. So I don't think we can get arbitrarily small differences in utility. Then maximalism (i.e. g... (read more)

I think he's saying "optimal future = best possible future", which necessarily has a non-zero probability.

2
athowes
3y
Events which are possible may still have zero probability, see "Almost never" on this Wikipedia page. That being said I think I still might object even if it was ϵ-optimal (within some small number ϵ>0 of achieving the mathematically optimal future)  unless this could be meaningfully justified somehow.

Agreed, but at least in theory, a model that takes into account inmate's welfare at the proper level will, all else being equal, do better under utilitarian lights than a model that does not take into account inmate welfare.

What if the laws forced prisons to treat inmates in a particular way, and the legal treatment of inmates coincided with putting each inmate's wellbeing at the right level? Then the funding function could completely ignore the inmate's wellbeing, and the prisons' bids would drop to account for any extra cost to support the inmate's we... (read more)

That's a good point. You could set up the system so that it's "societal contribution" + funding - price (which is what it is at the moment) + "Convict's QALYs in dollars" (maybe plus some other stuff too). The fact that you have to value a murder means that you should already have the numbers to do the dollar conversion of the QALYs.
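(A minimal sketch of what that adjusted funding function might look like; the function name, the $50,000-per-QALY conversion, and all the figures are hypothetical, not part of the original proposal:)

```python
# Hypothetical score for a prison's bid, following the comment above:
# societal contribution + funding - price + the convict's QALYs in dollars.
def bid_score(societal_contribution: float,
              funding: float,
              price: float,
              inmate_qalys: float,
              dollars_per_qaly: float = 50_000.0) -> float:
    return societal_contribution + funding - price + inmate_qalys * dollars_per_qaly

# A prison that spends more to raise inmate wellbeing bids a higher price,
# but the extra QALYs can offset that in the score.
print(bid_score(200_000, 100_000, price=150_000, inmate_qalys=0.0))  # 150000.0
print(bid_score(200_000, 100_000, price=180_000, inmate_qalys=1.0))  # 170000.0
```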

I'm hesitant to make that change though. The change would allow prisons to trade off societal benefit for the inmate's benefit, who, as some people say, "owes a debt to society". Allowing this trade-off would also reduce the de... (read more)

2
Linch
3y
Thanks for your engagement! Agreed, but at least in theory, a model that takes into account inmate's welfare at the proper level will, all else being equal, do better under utilitarian lights than a model that does not take into account inmate welfare. This may be an obvious point, but I made this same mistake ~4 years ago when discussing a different topic (animal testing), so I think it's worth flagging explicitly. Please feel free to edit the post if you do! I worry that many posts (my own included) on the internet are stale, and we don't currently have a protocol in place for declaring things to be outdated.

You mean the first part? (I.e. Why pay for lobbying when you share the "benefits" with your competitors and still have to compete?) Yeah, when a company becomes large enough, the benefits of a rule change can outweigh the cost of lobbying.

But, for this particular system, if a prison is large enough to lobby, then they're going to have a lot of liabilities from all of their former and current inmates. If they lobby for longer sentences or try to make more behaviours illegal, and one of their former inmates is caught doing one of these new crimes, the prison... (read more)

1
JohannWolfgang
2y
I would assume that, for a private prison that has become good at its business, the benefits of more inmates would outweigh the liabilities, and that at some point it would (in principle, ignoring the free-rider problem for a moment) become easier to increase profits by increasing revenue through making more things illegal than by trying to reduce the reoffending rate. Also, do administrators profit from more crimes in a public system? It of course increases the demand for administrators, but I don't see how it would increase the salary of a significant number of them. Do insurance contracts typically contain clauses for future "products"? I would have assumed that the prison's insurance would only cover damage as of the point in time the contract was formed.

There are mechanisms that aggregate distributed knowledge, such as free-market pricing.

I cannot really evaluate the value of a grant if I have not seen all the other grants.

Not with 100 percent accuracy, but that's not the right question. We want to know whether it can be done better than chance. Someone can lack knowledge and be biased and still reliably do better than random (try playing chess against a computer that plays uniformly random moves).

In addition, if there would be an easy and obvious system people would probably already have implemente

... (read more)
Answer by FCCC, Dec 04, 2020

When designing a system, you give it certain goals to satisfy. A good example of this done well is voting theory. People come up with apparently desirable properties, such as the Smith criterion, and then demonstrate mathematically that certain voting methods succeed or fail the criterion. Some desirable goals cannot be achieved simultaneously (an example of this is Arrow's impossibility theorem).
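(As a hypothetical illustration of how such criteria get checked, here is a small sketch showing plurality electing a candidate even though another candidate wins every pairwise contest, i.e. failing the Condorcet criterion; when a Condorcet winner exists, the Smith set is just that winner. The electorate below is made up:)

```python
from collections import Counter

# Hypothetical electorate: (ballot ranking, number of voters).
ballots = [
    (("A", "B", "C"), 40),
    (("B", "C", "A"), 35),
    (("C", "B", "A"), 25),
]
total_voters = sum(n for _, n in ballots)
candidates = {"A", "B", "C"}

# Plurality: count only first preferences.
plurality = Counter()
for ranking, n in ballots:
    plurality[ranking[0]] += n
plurality_winner = plurality.most_common(1)[0][0]

def beats(x: str, y: str) -> bool:
    """True if a majority ranks x above y."""
    votes_for_x = sum(n for ranking, n in ballots if ranking.index(x) < ranking.index(y))
    return votes_for_x > total_voters - votes_for_x

condorcet_winner = next(
    (c for c in candidates if all(beats(c, other) for other in candidates - {c})),
    None,
)

print(plurality_winner)   # A (40 first-preference votes)
print(condorcet_winner)   # B (beats A 60-40 and C 75-25), so plurality fails the criterion here
```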

Lotteries give every ticket an equal chance. And if each person has one ticket, this implies each person has an equal chance. But this goal is in conflict with ... (read more)

1
FJehn
3y
I think the problem is that it is really, really hard to come up with better systems. As mentioned above, research grants have quite a few problems. Those problems are founded in human bias and a lack of knowledge. I cannot really evaluate the value of a grant if I have not seen all the other grants, and I might be influenced by my biases and give it to a scientist I like or trust. In addition, if there would be an easy and obvious system people would probably already have implemented it. So, lotteries solve this problem. There might be better approaches, but many of them probably need an all-knowing and unbiased arbiter, and I have the impression that we lack those. Basically it boils down to the question: am I better at evaluating this than chance? And I think people often are not, due to their unconscious biases and knowledge gaps.

If people fill in the free-text box in the survey, this is essentially the same as sending an email. If I disagree with the fund's decisions, I can send them my reasons why. If my reasons aren't any good, the fund can see that, and ignore me; if I have good reasons, the fund should (hopefully) be swayed.

Votes without the free-text box filled in can't signal whether the voter's justifications are valid or not. Opinions have differing levels of information backing them up. An "unpopular" decision might be supported by everyone who knows what they're talking about; a "popular" decision might be considered to be bad by every informed person.

Answer by FCCC, Oct 22, 2020

My idea of EA's essential beliefs:

  • Some possible timelines are much better than others
  • What "feels" like the best action often won't result in anything close to the best possible timeline
  • In such situations, it's better to disregard our feelings and go with the actions that get us closer to the best timeline.

This doesn't commit you to a particular moral philosophy. You can rank timelines by whatever aspects you want: Your moral rule can tell you to only consider your own actions, and disregard their effects on the behaviour of other people's actions... (read more)

It happens in philosophy sometimes too: "Saving your wife over 10 strangers is morally required because..." Can't we just say that we aren't moral angels? It's not hypocritical to say the best thing to do is save the 10 strangers, and then not do it (unless you also claim to be morally perfect). Same thing here. You can treat yourself well even if it's not the best moral thing to do. You can value non-moral things.

8
Will Bradshaw
4y
This feels...not wrong, exactly, but also not what I was driving at with this comment. At least, I think I probably disagree with your conception of morality.

I think you're conflating moral value with value in general. People value their pets, but this has nothing to do with the pet's instrumental moral value.

So a relevant question is "Are you allowed to trade off moral value for non-moral value?" To me, morality ranks (probability distributions of) timelines by moral preference. Morally better is morally better, but nothing is required of you. There's no "demandingness". I don't buy into the notions of "morally permissible" or "morally required": These lines in the sand seem like sociological observations (e.g... (read more)

Hey Bob, good post. I've had the same thought (i.e. the unit of moral analysis is timelines, or probability distributions of timelines) with different formalism

The trolley problem gives you a choice between two timelines (call them T1 and T2). Each timeline can be represented as the set containing all statements that are true within that timeline. This representation can neatly state whether something is true within a given timeline or not: “You pull the lever” ∈ T1, and “You pull the lever” ∉ T2. Timelines contain statements that are combined as well as statements that

... (read more)
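(A tiny sketch of that set-of-true-statements representation; the statements themselves are made up for illustration:)

```python
# Each timeline is modelled as the set of statements true within it.
timeline_pull = frozenset({
    "You pull the lever",
    "One person dies",
})
timeline_no_pull = frozenset({
    "You do not pull the lever",
    "Five people die",
})

# Set membership expresses "true in this timeline".
print("You pull the lever" in timeline_pull)     # True
print("You pull the lever" in timeline_no_pull)  # False
```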

I watched those videos you linked. I don't judge you for feeling that way. 

Did you convert anyone to veganism? If people did get converted, maybe there were even more effective ways to do so. Or maybe anger was the most effective way; I don't know. But if not, your own subjective experience was worse (by feeling contempt), other people felt worse, and fewer animals were helped. Anger might be justified but, assuming there was some better way to convert people, you'd be unintentionally prioritizing emotions ahead of helping the animals. 

Another th... (read more)

“writing down stylized models of the world and solving for the optimal thing for EAs to do in them”

I think this is one of the most important things we can be doing. Maybe even the most important since it covers such a wide area and so much government policy is so far from optimal.

you just solve for the policy ... that maximizes your objective function, whatever that may be. 

I don't think that's right. I've written about what it means for a system to do "the optimal thing" and the answer cannot be that a single policy maximizes your objective function:... (read more)

4
BrownHairedEevee
4y
I wonder if we could create an open source library of IAMs for researchers and EAs to use and audit.

And bits describe proportional changes in the number of possibilities, not absolute changes...
And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10.

Ahhh. Thanks for clearing that up for me. Looking at the entropy formula, that makes sense and I get the same answer as you for each digit (3.3). If I understand, I incorrectly conflated "information" with "value of information".

I think this is better parsed as diminishing marginal returns to information.

How does this account for the leftmost digit giving the most information, rather than the rightmost digit (or indeed any digit between them)?

per-thousandths does not have double the information of per-cents, but 50% more

Let's say I give you $1 + $x, where $x is either 0, $0.1, $0.2 ... or $0.9. (Note $1 is analogous to 1%, and $x is equivalent to adding a decimal place. I.e. per-thousandths vs per-cents.) The average value of $x, given a uniform distribution, is $0.45. Thus, agains... (read more)

[This comment is no longer endorsed by its author]
Does this match your view?

Basically, yeah.

But I do think it's a mistake to update your credence based off someone else's credence without knowing their argument and without knowing whether they're calibrated. We typically don't know the latter, so I don't know why people are giving credences without supporting arguments. It's fine to have a credence without evidence, but why are people publicising such credences?

4
MichaelA
4y
I'd agree with a modified version of your claim, along the following lines: "You should update more based on someone's credence if you have more reason to believe their credence will track the truth, e.g. by knowing they've got good evidence (even if you haven't actually seen the evidence) or knowing they're well-calibrated. There'll be some cases where you have so little reason to believe their credence will track the truth that, for practical purposes, it's essentially not worth updating." But your claim at least sounds like it's instead that some people are calibrated while others aren't (a binary distinction), and when people aren't calibrated, you really shouldn't update based on their credences at all (at least if you haven't seen their arguments). I think calibration increases in a quantitative, continuous way, rather than switching from off to on. So I think we should just update on credences more the more calibrated the person they're from is. Does that sound right to you?
But you say invalid meta-arguments, and then give the example "people make logic mistakes so you might have too". That example seems perfectly valid, just often not very useful.

My definition of an invalid argument contains "arguments that don't reliably differentiate between good and bad arguments". "1+1=2" is also a correct statement, but that doesn't make it a valid response to any given argument. Arguments need to have relevancy. I dunno, I could be using "invalid" incorrectly here.

And I'd also say
... (read more)
2
MichaelA
4y
Oh, when you said "Effective altruists have centred around some ideas that are correct (longtermism, moral uncertainty, etc.)", I assumed (perhaps mistakenly) that by "moral uncertainty" you meant something vaguely like the idea that "We should take moral uncertainty seriously, and think carefully about how best to handle it, rather than necessarily just going with whatever moral theory currently seems best to us." So not just the idea that we can't be certain about morality (which I’d be happy to say is just “correct”), but also the idea that that fact should change our behaviour in substantial ways. I think that both of those ideas are surprisingly rare outside of EA, but the latter one is rarer, and perhaps more distinctive to EA (though not unique to EA, as there are some non-EA philosophers who've done relevant work in that area). On my "inside-view", the idea that we should "take moral uncertainty seriously" also seems extremely hard to contest. But I move a little away from such confidence, and probably wouldn't simply call it "correct", due to the fact that most non-EAs don't seem to explicitly endorse something clearly like that idea. (Though maybe they endorse somewhat similar ideas in practice, even just via ideas like "agree to disagree".)

It's almost irrelevant: people should still provide the supporting argument for their credence, otherwise evidence can get "double counted" (and there are "flow-on" effects where the first person who updates another person's credence has a significant effect on the overall credence of the population). For example, say I have arguments A and B supporting my 90% credence on something. And you have arguments A, B and C supporting your 80% credence on something. And neither of us posts our reasoning; we just post our credences.... (read more)

4
Linch
4y
I don't find your arguments persuasive for why people should give reasoning in addition to credences. I think posting reasoning is on the margin of net value, and I wish more people did it, but I also acknowledge that people's time is expensive so I understand why they choose not to. You list reasons why giving reasoning is beneficial, but not reasons for why it's sufficient to justify the cost. My question probing predictive ability of EAs earlier was an attempt to set right what I consider to be an inaccuracy in the internal impressions EAs have about the ability of superforecasters. In particular, it's not obvious to me that we should trust the judgments of superforecasters substantially more than we trust the judgments of other EAs.
2
MichaelA
4y
My view is that giving explicit, quantitative credences plus stating the supporting evidence is typically better than giving explicit, quantitative credences without stating the supporting evidence (at least if we ignore time costs, information hazards, etc.), which is in turn typically better than giving qualitative probability statements (e.g., "pretty sure") without stating the supporting evidence, and often better than just saying nothing. Does this match your view? In other words, are you essentially just arguing that "providing supporting arguments is a net benefit"? I ask because I had the impression that you were arguing that it's bad for people to give explicit, quantitative credences if they aren't also giving their supporting evidence (and that it'd be better for them to, in such cases, either use qualitative statements or just say nothing). Upon re-reading the thread, I got the sense that others may have gotten that impression too, but also I don't see you explicitly make that argument.
The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what "pretty sure" means in common language), but ought to have drastically different implications for behavior!

Yes you're right. But I'm making a distinction between people's own credences and their ability to update the credences of other people. As far as changing the opinion of the reader, when someone says "I haven't thought much about it", it should be an indicator to not update your own credence by very m... (read more)

2
Linch
4y
I'm curious if you agree or disagree with this claim: With a specific operationalization like:

I'm not sure how you think that's what I said. Here's what I actually said:

A superforecaster's credence can shift my credence significantly...
If the credence of a random person has any value to my own credence, it's very low...
The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise)...
[credences are] how people should think...
if you're going to post your credence, provide some evidence so that you can update other people's credences too.
... (read more)
Yes, in most cases if somebody has important information that an event has XY% probability of occurring, I'd usually pay a lot more to know what X is than what Y is.

As you should, but Greg is still correct in saying that Y should be provided.

Regarding the bits of information, I think he's wrong because I'd assume information should be independent of the numeric base you use. So I think Y provides 10% of the information of X. (If you were using base 4 numbers, you'd throw away 25%, etc.)

But again, there's no point in throwing away that 10%.

6
Tyle_Stelzig
4y
In the technical information-theoretic sense, 'information' counts how many bits are required to convey a message. And bits describe proportional changes in the number of possibilities, not absolute changes. The first bit of information reduces 100 possibilities to 50, the second reduces 50 possibilities to 25, etc. So the bit that takes you from 100 possibilities to 50 is the same amount of information as the bit that takes you from 2 possibilities to 1. And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10. To take your example: If you were using two digits in base four to represent per-sixteenths, then each digit contains 50% of the information (two bits each, reducing the space of possibilities by a factor of four). To take the example of per-thousandths: Each of the three digits contains a third of the information (3.3 bits each, reducing the space of possibilities by a factor of 10). But upvoted for clearly expressing your disagreement. :)
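(A quick numeric check of those figures, using the standard log2 definition of information:)

```python
import math

def bits(possibilities_before: int, possibilities_after: int) -> float:
    """Information gained (in bits) by shrinking the possibility space."""
    return math.log2(possibilities_before / possibilities_after)

print(round(bits(100, 50), 2))   # 1.0  - the first bit halves the possibilities
print(round(bits(100, 10), 2))   # 3.32 - leading digit of a percentage
print(round(bits(10, 1), 2))     # 3.32 - trailing digit: the same amount of information
print(round(bits(1000, 1), 2))   # 9.97 - per-thousandths: ~3.32 bits per digit
```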

I agree. Rounding has always been ridiculous to me. Methodologically, "Make your best guess given the evidence, then round" makes no sense. As long as your estimates are better than random chance, it's strictly less reliable than just "Make your best guess given the evidence".

Credences about credences confuse me a lot (is there infinite recursion here? I.e. credences about credences about credences...). My previous thoughts have been to give a credence range or to size a bet (e.g. "I'd bet $50 out of my $X of wealth at a Y o... (read more)

FCCC
4y
I mean, very frequently it's useful to just know what someone's credence is. That's often an order of magnitude cheaper to provide, and often is itself quite a bit of evidence.

I agree, but only if they're a reliable forecaster. A superforecaster's credence can shift my credence significantly. It's possible that their credences are based off a lot of information that shifts their own credence by 1%. In that case, it's not practical for them to provide all the evidence, and you are right.

But most people are poor forecaster... (read more)

2
Habryka
4y
Yes, but unreliability does not mean that you instead just use vague words instead of explicit credences. It's a fine critique to say that people make too many arguments without giving evidence (something I also disagree with, but that isn't the subject of this thread), but you are concretely making the point that it's additionally bad for them to give explicit credences! But the credences only help, compared to vague and ambiguous terms that people would use instead.
You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report.

Habryka seems to be talking about people who have evidence and are just not stating it, so we might be talking past one another. I said in my first comment "There's also a lot of pseudo-superforecasting ... without any evidence backing up those credences." I didn't say "without stating any evidence backing up those cred... (read more)

4
Linch
4y
I agree that EAs put superforecasters and superforecasting techniques on a pedestal, more than is warranted. Yes, I think it's a lot worse. Consider the two statements: And The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what "pretty sure" means in common language), but ought to have drastically different implications for behavior! I basically think EA and associated communities would be better off to have more precise credences, and be accountable for them. Otherwise, it's difficult to know if you were "really" wrong, even after checking hundreds of claims!
From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences

Sure there is: By communicating, we're trying to update one another's credences. You're not going to be very successful in doing so if you provide a credence without supporting evidence. The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise). If you have a credence that you keep to yourself, then yes, there's no need for supporting... (read more)

Ambiguous statements are bad, 100%, but so are clear, baseless statements.

You seem to have switched from the claim that EAs often report their credences without articulating the evidence on which those credences rest, to the claim that EAs often lack evidence for the credences they report. The former claim is undoubtedly true, but it doesn't necessarily describe a problematic phenomenon. (See Greg Lewis's recent post; I'm not sure if you disagree.). The latter claim would be very worrying if true, but I don't see reason to believe that ... (read more)

8
Habryka
4y
I mean, very frequently it's useful to just know what someone's credence is. That's often an order of magnitude cheaper to provide, and often is itself quite a bit of evidence. This is like saying that all statements of opinions or expressions of feelings are bad, unless they are accompanied with evidence, which seems like it would massively worsen communication.
EA epistemology is weaker than expected.

I'd say nearly everyone's ability to determine an argument's strength is very weak. On the Forum, invalid meta-arguments* are pretty common, such as "people make logic mistakes so you might have too", rather than actually identifying the weaknesses in an argument. There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences. This seems to me like people are imitating sound arguments without actually unders... (read more)

4
MichaelA
4y
Here are two claims I'd very much agree with:

  • It's often best to focus on object-level arguments rather than meta-level arguments, especially arguments alleging bias
  • One reason for that is that the meta-level arguments will often apply to a similar extent to a huge number of claims/people. E.g., a huge number of claims might be influenced substantially by confirmation bias.
  • (Here are two relevant posts.)

Is that what you meant?

But you say invalid meta-arguments, and then give the example "people make logic mistakes so you might have too". That example seems perfectly valid, just often not very useful. And I'd also say that that example meta-argument could sometimes be useful. In particular, if someone seems extremely confident about something based on a particular chain of logical steps, it can be useful to remind them that there have been people in similar situations in the past who've been wrong (though also some who've been right). They're often wrong for reasons "outside their model", so this person not seeing any reason they'd be wrong doesn't provide extremely strong evidence that they're not. It would be invalid to say, based on that alone, "You're probably wrong", but saying they're plausibly wrong seems both true and potentially useful. (Also, isn't your comment primarily meta-arguments of a somewhat similar nature to "people make logic mistakes so you might have too"? I guess your comment is intended to be a bit closer to a specific reference class forecast type argument?)

Describing that as pseudo-superforecasting feels unnecessarily pejorative. I think such people are just forecasting / providing estimates. They may indeed be inspired by Tetlock's work or other work with superforecasters, but that doesn't mean they're necessarily trying to claim their estimates use the same methodologies or deserve the same weight as superforecasters' estimates. (I do think there are potential downsides of using explicit probabilities, but I think each p
There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences.

From a bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences, and in general I think there is a lot of value in people providing credences even if they don't provide additional evidence, if only to avoid problems of ambiguous language.

Yeah, that's right. The problem with my toy model is that it assumes that funds can actually estimate their optimal bid, which would need to be an exact prediction of their future returns at an exact time, which is not possible. Allowing bids to reference a single, agreed-upon global index reduces the problem to a prediction of costs, which is much easier for the funds. And in the long run, returns can't be higher than the return of the global index, so it should maximize long-run returns.

However, most (?) indices are made by committees, which I ... (read more)

3
Larks
4y
My understanding is the committees generally make rules for the indices, and then apply them relatively mechanistically, though they do occasionally change the rules. I think it is hard to totally get rid of this. You need some way to judge that a company's market cap is actually representative of market trading, as opposed to being manipulated by insiders (like LFIN was). Presumably if the index committee changed it to something absurd the regulator could change their index provider for the next year's bidding, though you are at risk of small changes that do not meet the threshold for firing. As a minor technical note gross returns often are (very slightly) higher than the index's, because the managers can profit from stock lending. This is what allows zero-fee ETFs (though they are also somewhat a marketing ploy).

Yeah, it's definitely flawed. I was more thinking that the bids could be made as a difference between an index (probably a global one). So the profit-maximizing bids for the funds would be the index return (whatever it happens to be) minus their expected costs. And then you have large underwriters of the firms, who make sure that the fund's processes are sound. What I'd like is everyone to be in Vanguard/Blackrock, but there should be some mechanism for others to overthrow them if someone can match the index at a lower cost.
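(A toy sketch of that bid structure; the numbers and the 5-basis-point spread are made up, and "the index" is whatever global index the bids reference:)

```python
# Funds bid a spread relative to an agreed global index; the profit-maximising
# spread is roughly the fund's expected running cost. Illustrative numbers only.
index_return = 0.07          # realised return of the global index that year
winning_bid_spread = 0.0005  # 5 basis points: the winning fund's expected cost

investor_return = index_return - winning_bid_spread
print(round(investor_return, 4))  # 0.0695 -> investors receive the index minus a small spread
```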

2
Larks
4y
Ahhh, so basically the idea is that no underwriter would be willing to vouch for anything but a credible index shop. Seems plausible.

Caught red handed. I'd been thinking about this idea for a while and was trying to get the maths to work last night, so I had my prison/immigration idea next to me for reference.

I like this idea; we should have many more second-price auctions out there. Do you have any further references about it?

Thanks. I'm not the best person to ask about auctions. For people looking for an introduction, this video is pretty good. If anyone's got a good textbook, I'd be interested.

Ah yes, I have mentioned regulation keeping private prisons in check in other comments too. I should have restated it here. I am in favour of checks and balances, which is why my goal for the system contains "...within the limits of the law". I agree with almost everything you say here (I'd keep some public prisons until confident that the market is mature enough to handle all cases better than the public system, but I wouldn't implement your 10-year loan).

Human rights laws. Etc.

Yep, I'm all for that. One thing that people ar... (read more)

Basically everyone was convinced by the theory of small loans to those too poor to receive finance.

I was against microfinance, but I also don't know how they justified the idea. I think empirical evidence should be used to undermine certain assumptions of models, such as "People will only take out a loan if it's in their own interests". Empirically, that's clearly not always the case (e.g. people go bankrupt from credit cards), and any model that relies on that statement being true may fail because of it. A theoretical argument wi... (read more)

2
weeatquince
4y
Unfortunately I don’t have much useful to contribute on this. I don’t have experience running trials and pilots. I would think through the various scenarios by which a pilot could get started and then adapt to that. Eg what if you had the senior management of one prison that was keen. What about a single state. What about a few prisons. Also worth recognising that data might take years. I used to know someone who worked on prison data collection and assessing success of prisons, if I see her at some point I could raise this and message you.
Theoretical reasons are great but actual evidence [is] more important.

Good theoretical evidence is "actual evidence". No amount of empirical evidence is up to the task of proving there are an infinite number of primes. Our theoretical argument showing there are an infinite number of primes is the strongest form of evidence that can be given.

That's not to say I think my argument is airtight, however. My argument could probably be made with more realistic assumptions (alternatively, more realistic assumptions might show my proposed system is ... (read more)

2
weeatquince
4y
I strongly disagree. Additional checks and balances that prevent serious problems occurring are good. You have already said your system could go wrong (you said "more realistic assumptions might show my proposed system is fundamentally mistaken") and maybe it could go wrong in subtle ways that take years to manifest as companies learn how they can twist the rules. You should be in favour of checks and balances, and might want to explore what additional systems of checks would work best for your proposal. Options include:

  • A few prisons running on a different system (eg state-run).
  • A regulator for your auction-based prisons.
  • Transparency.
  • The prisons being on 10-year loans from the state, with contracts that need regular renewing, so they would default to state ownership.
  • Human rights laws.
  • Etc.

Maybe all of the above are things to have. As an example, one thing that could go wrong (although it looks like you have touched on this elsewhere in the comments) is prisons may not have a strong incentive to care about the welfare of the prisoners whilst they are in the prison.
1
weeatquince
4y
MAIN POINTS:

Looks like we are mostly on the same page. We both recognise the need for theoretical data and empirical data to play a role, and we both think that you have a good idea for prison reform. I still get the impression that you undervalue empirical evidence of existent systems compared to theoretical evidence, and may underinvest in understanding evidence that goes against the theory or could improve the model. (Or maybe I am being too harsh and we agree here too; hard to judge from a short exchange like this.) I am not sure I can persuade you to change much on this, but I go into detail on a few points below. Anyway, even if you are not persuaded, I expect (well, hope) that you would need to gather the empirical evidence before any senior policy makers look to implement this, so either way that seems like a good next step. Good luck :-)

TO ADDRESS THE POINTS RAISED:

Firstly, apologies. I am not sure I explained things very well. Was late and I minced my words a bit. By "actual evidence" I was trying to just encompass the case of a similar policy already being in place and working. Eg we know tobacco tax works well at achieving the policy aim of reducing smoking because we can see it working. Sorry for any confusion caused.

A better example from development is microcredit (microfinance). Basically everyone was convinced by the theory of small loans to those too poor to receive finance. The guy who came up with the idea got a freaking Nobel Prize. Super-skeptics GiveWell used to have a page on the best microcredit charity. But it turns out (from multiple meta-analyses) that there was basically no way to make it work in practice (not for the world's poorest).

Blanket statements like this – suggesting your idea or similar is the ONLY way prisons can work – still concern me and make me think that you value theoretical data too highly compared to empirical data. I don't know much about prison systems but I would be shocked if there was NO other good way to have

Thanks for the kind comment.

My guess is that the US would be the best place to start (a thick "market", poor outcomes), but I'm talking about prison systems in general.

I'm not familiar with the UK system, but I haven't heard of any prison system with a solid theoretical grounding. Theory is required because we want to compare a proposed system to all other possible systems and conclude that our proposal is the best. You want theoretical reasons to believe that your system will perform well, and that good performance will endure.

Mos... (read more)

2
weeatquince
4y
Hi,

THEORY AND EVIDENCE

Correct me if I am wrong, but you seem to be implying that the "theoretical reasons" why a policy idea will work are necessary and more important than empirical evidence that a system has worked in some case (which may be misleading due to confounding factors like good people). If so, I strongly disagree:

  • My 7 years' experience working in UK policy would lead me to say the opposite. Theoretical reasons are great, but actual evidence that a particular system has worked is super great, and in most cases more important.
  • Of course both can be useful. The world is complicated and policy is complicated, and both evidence and theory can lead you down the wrong path. Good theoretical policy ideas can turn out to be wrong, and well-evidenced policy ideas may not replicate as expected.
  • Consider international development. The effective altruism community has been saying for years (and backing up these claims) that in development you cannot just do things that theoretically sound like they will work (like building schools); you need to do things that have empirical evidence of working well.
  • People are very, very good at persuading themselves of what they believe (eg confirmation bias). A risk with policies driven by theoretical reasoning is that their adherents have ideological baggage and motivated reasoning and do not shift in line with new evidence. This is less of a risk for policy based on what works.

ON PRISONS

I have not considered all the details, but I do think you have a decent policy idea here. I would be interested to see it tried. I would make the following, hopefully constructive, suggestions to you.

1. Focus on countries where the prison system is actually broken. There are a lot of failings in policy and limited capacity to address them all. I do think "if it is not broke don't fix it" is often a good maxim in policy, and countries with working systems should not be the first to shift to the system you describe