FCCC

Comments

Use resilience, instead of imprecision, to communicate uncertainty
And bits describe proportional changes in the number of possibilities, not absolute changes...
And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10.

Ahhh. Thanks for clearing that up for me. Looking at the entropy formula, that makes sense and I get the same answer as you for each digit (3.3). If I understand, I incorrectly conflated "information" with "value of information".
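Here's a minimal sketch of the arithmetic being conceded here (assuming the standard Shannon measure, where information is the base-2 log of the number of possibilities): each decimal digit is worth log2(10) ≈ 3.3 bits, which is also why per-thousandths carries 50% more information than per-cents rather than double.

```python
import math

# Information (in bits) needed to pin down one of N equally likely possibilities.
def bits(n_possibilities: float) -> float:
    return math.log2(n_possibilities)

# Narrowing 100 possibilities to 10, and 10 to 1, are the same ~3.32 bits each:
print(bits(100 / 10))   # ~3.32
print(bits(10 / 1))     # ~3.32

# Per-cents pick one of 100 values; per-thousandths pick one of 1000.
percent_bits = bits(100)      # ~6.64
per_mille_bits = bits(1000)   # ~9.97
print(per_mille_bits / percent_bits)  # ~1.5, i.e. 50% more information, not double
```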

Use resilience, instead of imprecision, to communicate uncertainty
I think this is better parsed as diminishing marginal returns to information.

How does this account for the leftmost digit giving the most information, rather than the rightmost digit (or indeed any digit between them)?

per-thousandths does not have double the information of per-cents, but 50% more

Let's say I give you $1 + X, where X is either $0, $0.1, $0.2, ..., or $0.9. (Note $1 is analogous to 1%, and X is equivalent to adding a decimal place, i.e. per-thousandths vs per-cents.) The average value of X, given a uniform distribution, is $0.45. Thus, against $1, X adds almost half the original value, i.e. $0.45/$1 (45%). But what if I instead gave you $99 + X? $0.45 is less than 1% of the value of $99.
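A quick sketch of this arithmetic (the variable X and the uniform distribution over $0.0–$0.9 are as described above; the code is purely illustrative):

```python
# Expected marginal value of the extra decimal place (X), relative to the base amount.
# X is uniform over $0.0, $0.1, ..., $0.9, so its average is $0.45.
x_values = [i / 10 for i in range(10)]
avg_x = sum(x_values) / len(x_values)   # 0.45

for base in (1, 99):
    print(f"base ${base}: X adds {avg_x / base:.1%} of the original value")
# base $1:  X adds 45.0% of the original value
# base $99: X adds  0.5% of the original value
```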

The leftmost digit is more valuable because it corresponds to a greater place value (so the magnitude of the value difference between places is going to be dependent on the numeric base you use). I don't know information theory, so I'm not sure how to calculate the value of the first two digits compared to the third, but I don't think per-thousandths has 50% more information than per-cents.

[This comment is no longer endorsed by its author]
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
Does this match your view?

Basically, yeah.

But I do think it's a mistake to update your credence based on someone else's credence without knowing their argument and without knowing whether they're calibrated. We typically don't know the latter, so I don't know why people are giving credences without supporting arguments. It's fine to have a credence without evidence, but why are people publicising such credences?

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
But you say invalid meta-arguments, and then give the example "people make logic mistakes so you might have too". That example seems perfectly valid, just often not very useful.

My definition of an invalid argument includes "arguments that don't reliably differentiate between good and bad arguments". "1+1=2" is also a correct statement, but that doesn't make it a valid response to any given argument. Arguments need to be relevant. I dunno, I could be using "invalid" incorrectly here.

And I'd also say that that example meta-argument could sometimes be useful.

Yes, if someone believed that having a logical argument is a guarantee of being right, and none of their logical arguments had ever turned out to have a surprising flaw, it would be valid to point that out. That's fair. But (as you seem to agree) the best way to do this is to actually point to the flaw in the specific argument they've made. And since most people who are proficient with logic already know that logical arguments can be unsound, it's not useful to reiterate that point to them.

Also, isn't your comment primarily meta-arguments of a somewhat similar nature to "people make logic mistakes so you might have too"?

It is, but as I said, "Some meta-arguments are valid". (I can describe how I delineate between valid and invalid meta-arguments if you wish.)

Describing that as pseudo-superforecasting feels unnecessarily pejorative.

Ah sorry, I didn't mean to offend. If they were superforecasters, their credence alone would update mine. But they're probably not, so I don't understand why they give their credence without a supporting argument.

Did you mean "some ideas that are probably correct and very important"?

The set of things I give 100% credence to is very, very small (i.e. claims that are true even if I'm a brain in a vat). I could say "There's probably a table in front of me", which is technically more correct than saying that there definitely is, but it doesn't seem valuable to qualify every statement like that.

Why am I confident in moral uncertainty? People do update their morality over time, which means either that they were wrong at some point (i.e. there is demonstrably moral uncertainty), or that the definition of "correct" changes and nobody is ever wrong. I think "nobody is ever wrong" is highly unlikely, especially because you can point to logical contradictions in people's moral beliefs (not just unintuitive conclusions). At that point, it's not worth mentioning the uncertainty I have.

I definitely don't think EAs are perfect, but they do seem above-average in their tendency to have true beliefs and update appropriately on evidence, across a wide range of domains.

Yeah, I'm too focused on the errors. I'll concede your point: some proportion of EAs are here because they correctly evaluated the arguments, so they're going to bump up the average, even outside of EA's central ideas. My reference class here was groups that have correct central ideas and yet are very poor reasoners outside of their domain. My experience with EAs is too limited to support my initial claim.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

It's almost irrelevant: people should still provide the supporting arguments for their credences, otherwise evidence can get "double counted" (and there are "flow-on" effects, where the first person to update another person's credence has a significant effect on the overall credence of the population). For example, say I have arguments A and B supporting my 90% credence on something, and you have arguments A, B and C supporting your 80% credence on the same thing. Neither of us posts our reasoning; we just post our credences. It's a mistake for you to then say "I'll update my credence a few percent because FCCC might have other evidence." For this reason, providing supporting arguments is a net benefit, irrespective of how accurate EA's forecasts are.
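Here's a toy sketch of that double-counting worry (the likelihood-ratio numbers are made up purely for illustration and aren't meant to reproduce the 90%/80% figures above; the point is only that shared evidence gets added twice when unsupported credences are pooled naively):

```python
import math

def to_prob(log_odds: float) -> float:
    return 1 / (1 + math.exp(-log_odds))

prior = 0.0                # log-odds of 1:1, i.e. 50%
a, b, c = 1.0, 0.8, 0.5    # made-up log-likelihood-ratio contributions of arguments A, B, C

my_posterior = prior + a + b          # I know arguments A and B
your_posterior = prior + a + b + c    # you know arguments A, B and C

# If you treat my unsupported credence as independent evidence and add my full
# update on top of yours, the shared arguments A and B get counted twice:
naive_pool = your_posterior + (my_posterior - prior)

# If I had shared my reasoning, you'd see you already have everything I have:
correct_pool = prior + a + b + c

print(f"naive pooling:   {to_prob(naive_pool):.0%}")    # ~98% - overconfident
print(f"correct pooling: {to_prob(correct_pool):.0%}")  # ~91%
```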

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher
The two statements are pretty similar in verbalized terms (and each falls under loose interpretations of what "pretty sure" means in common language), but ought to have drastically different implications for behavior!

Yes, you're right. But I'm making a distinction between people's own credences and their ability to update the credences of other people. As far as changing the reader's opinion goes, when someone says "I haven't thought much about it", that should be an indicator not to update your own credence by very much at all.

I basically think EA and associated communities would be better off to have more precise credences, and be accountable for them

I fully agree. My problem is that this is not the current state of affairs for the majority of Forum users, so I have no reason to update my credences just because an uncalibrated random person says they're 90% confident without providing any reasoning that justifies their position. All I'm asking is for people to provide a good argument along with their credence.

I agree that EAs put superforecasters and superforecasting techniques on a pedestal, more than is warranted.

I think they should be emulated. But superforecasters have reasoning to justify their credences: they break problems down into components that they're more confident in estimating. This is good practice. Providing a credence without any supporting argument is not.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

I'm not sure how you think that's what I said. Here's what I actually said:

A superforecaster's credence can shift my credence significantly...
If the credence of a random person has any value to my own credence, it's very low...
The evidence someone provides is far more important than someone's credence (unless you know the person is highly calibrated and precise)...
[credences are] how people should think...
if you're going to post your credence, provide some evidence so that you can update other people's credences too.

I thought I was fairly clear about what my position is. Credences have internal value (you should generate your own credence). Superforecasters' credences have external value (their credence should update yours). Uncalibrated random people's credences don't have much external value (they shouldn't shift your credence much). And an argument for your credence should always be given.

I never said vague words are valuable, and in fact I think the opposite.

This is an empirical question. Again, what is the reference class for people providing opinions without having evidence? We could look at all of the unsupported credences on the forum and see how accurate they turned out to be. My guess is that they're of very little value, for all the reasons I gave in previous comments.

you are concretely making the point that it's additionally bad for them to give explicit credences!

I demonstrated a situation where a credence without evidence is harmful:

If we have different credences and the set of things I've considered is a strict subset of yours, you might update your credence because you mistakenly think I've considered something you haven't.

The only way we can avoid such a situation is either by providing a supporting argument for our credences, OR not updating our credences in light of other people's unsupported credences.

Use resilience, instead of imprecision, to communicate uncertainty
Yes, in most cases if somebody has important information that an event has XY% probability of occurring, I'd usually pay a lot more to know what X is than what Y is.

As you should, but Greg is still correct in saying that Y should be provided.

Regarding the bits of information, I think he's wrong because I'd assume information should be independent of the numeric base you use. So I think Y provides 10% of the information of X. (If you were using base 4 numbers, you'd throw away 25%, etc.)

But again, there's no point in throwing away that 10%.

Use resilience, instead of imprecision, to communicate uncertainty

I agree. Rounding has always been ridiculous to me. Methodologically, "Make your best guess given the evidence, then round" makes no sense. As long as your estimates are better than random chance, it's strictly less reliable than just "Make your best guess given the evidence".
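As a toy check of that "strictly less reliable" claim (the noise level and the nearest-10% rounding here are assumptions purely for illustration; Brier score is used as the measure of reliability, lower being better):

```python
import random

random.seed(0)
n = 100_000
brier_raw = brier_rounded = 0.0
for _ in range(n):
    true_p = random.random()                       # true probability of the event
    outcome = 1 if random.random() < true_p else 0
    # A better-than-chance guess: the truth plus a little noise, clipped to [0, 1].
    estimate = min(max(true_p + random.gauss(0, 0.05), 0.0), 1.0)
    rounded = round(estimate, 1)                   # "make your best guess, then round"
    brier_raw += (estimate - outcome) ** 2
    brier_rounded += (rounded - outcome) ** 2

print(f"Brier score, best guess:       {brier_raw / n:.4f}")
print(f"Brier score, guess then round: {brier_rounded / n:.4f}")  # slightly worse
```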

Credences about credences confuse me a lot (is there infinite recursion here? I.e. credences about credences about credences...). My previous thoughts have been to give a credence range or to size a bet (e.g. "I'd bet $50 out of my $X of wealth at Y odds"). I like both your solutions (e.g. "if I thought about it for an hour..."). I'd like to see an argument that shows there's an optimal method for representing the uncertainty of a credence. I wouldn't be surprised if someone has the answer and I'm just unaware of it.

I've thought about the coin's 50% probability before. Given a lack of information about the initial forces on the coin, there exists an optimal model to use. And we have reasons to believe a 50-50 model is that model (given our physics models, simulate a billion coin flips with a random distribution of initial forces). This is why I like your "If I thought about it more" model. If I thought about the coin flip more, I'd still guess 49%-51% (depending on the specific coin, of course).
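A rough sketch of the "simulate coin flips over a distribution of initial forces" idea (this is a deliberately crude toy model, not a serious physics simulation; the spin-rate and flight-time ranges are made-up assumptions):

```python
import random

# Crude model: the coin starts heads-up, rotates at a random rate for a random
# flight time, and lands on whichever face is up (no bouncing, no precession).
def flip() -> str:
    rotations_per_second = random.uniform(5, 15)   # assumed range of spin rates
    flight_time = random.uniform(0.3, 0.7)         # assumed range of flight times (seconds)
    half_turns = int(2 * rotations_per_second * flight_time)
    return "heads" if half_turns % 2 == 0 else "tails"

n = 100_000
heads = sum(flip() == "heads" for _ in range(n))
print(f"heads frequency: {heads / n:.3f}")  # close to 0.5 when initial forces vary widely
```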
