Owen_Cotton-Barratt


Some thoughts on David Roodman’s GWP model and its relation to AI timelines

I came in with roughly the view you describe having had early on in the project, and I found this post extremely clear in laying out the further considerations that shifted you. Thanks!

A do-gooder's safari

Interesting idea!

I'm keen for the language around this to convey the correct vibe about the epistemic status of the framework: currently I think this is "here are some dimensions that I and some other people feel are helpful for our thinking". But not "we have well-validated ways of measuring any of these things", nor "this is definitely the most helpful carving up in the vicinity", nor "this was demonstrated to be helpful for building a theory of change for intervention X which did verifiably useful things". I think the animal names/pictures are kind of playful and help to convey that this isn't yet attempting to be in epistemically-solid land?

I guess I'm interested in the situations where you think an abbreviation would be helpful. Do you want someone to make an EA personality test based on this?

What should we call the other problem of cluelessness?

I think this is a good point which I wasn't properly appreciating. It doesn't seem particularly worse for (2) than for (1), except insofar as terminology is more locked in for (1) than (2).

Of course, a possible advantage of "clueless" is that it strikes a self-deprecating tone; if we're worried about being perceived as arrogant, then having the language err on the side of assigning blame to ourselves rather than the universe might be a small help.

What should we call the other problem of cluelessness?

I think that bare terms like "unpredictability" or particularly "uncertainty" are much too weak; they don't properly convey the degree of epistemic challenge, and hence don't pick out what's unusual about the problem situation that we're grappling with.

"Unforseeability" is a bit stronger, but still seems rather too weak. I think "unknowability", "radical uncertainty", and "cluelessness" are all in the right ballpark for their connotations.

I do think "unknowability" for (2) and "absolute/total unknowability" for (1) is an interesting alternative. Using "unknowable" rather than "clueless" puts the emphasis on the decision situation rather than the agent; I'm not sure whether that's better.

What should we call the other problem of cluelessness?

To me it sounds slightly odd to use the word "clueless" for (2), however, given the associations that word has (cf. Cambridge dictionary).

In everyday language I actually think this fits passably well. The dictionary gives the definition "having no knowledge of something". For (2) I feel like informally I'd be happy with someone saying that the problem is we have no knowledge of how our actions will turn out, so long as they clarified that they didn't mean absolutely no knowledge. Of course this isn't perfect; I'd prefer they said "basically no knowledge" in the first place. But it's also the case that informally "clueless" is often modified with intensifiers (e.g. "totally", "completely"), so I think that a bare "clueless" doesn't really connote having no idea at all.

What should we call the other problem of cluelessness?

(1) is not a gradable concept - if we're clueless, then in Hilary Greaves' words, we "can never have even the faintest idea" which of two actions is better.

(2), on the other hand, is a gradable concept - it can be more or less difficult to find the best strategies. Potentially it would be good to have a term that is gradable, for that reason.

I appreciate you making this distinction, although it makes me all the more want to use one term (e.g. "clueless") for (2) and a modified version ("absolutely clueless", "totally clueless", or perhaps "infinitely clueless") for (1). I think the natural relation between the two concepts is that (1) is something like a limiting case of (2) taken to the extreme, so it's ideal if the terminology reflects that.

A couple of other remarks around this:

  • I think the fact that "totally clueless" is a common English phrase suggests that "clueless" is grammatically treated as a gradable concept.
  • I agree that in principle (2) is a gradable concept, so we might want to have language that can express it.
    • In practice my instinct is that most of the time one will point at the problem and put attention on possible responses, and it won't be that helpful to discuss exactly how severe the problem is.
    • However, I like the idea that being able to express gradations might make it easier to notice the concept of gradations.
      • I can dream that eventually we could find a natural metric for degree-of-cluelessness ...

What should we call the other problem of cluelessness?

One possibility is something relating to (un)predictability or (un)foreseeability. That has the advantage that it relates to forecasting. 

Hmm, I'm unsure whether the link to forecasting is more of an advantage or a disadvantage. It's suggestive of the idea that one deals with the problem by becoming better at forecasting, which I think is helpful, but probably only a small part of how we should address it.

What should we call the other problem of cluelessness?

Some alternatives in a similar vein:
(1) = strong cluelessness / (2) = weak cluelessness
(1) = total cluelessness / (2) = partial cluelessness

I guess I kind of like the word "practical" for (2), to point to the fact that it isn't the type of thing that will have a clean philosophical resolution.

What should we call the other problem of cluelessness?

I suggest that (1) should be called "the problem of absolute cluelessness" and that (2) should be called "the practical problem of cluelessness".

When context is clear one could drop the adjective. My suspicion is that with time (1) will come to be regarded as a solved problem, and (2) will still want a lot of attention. I think it's fine/desirable if at that point it gets to use the pithier term of "cluelessness". I also think that it's probably good if (1) and (2) have names which make it clear that there's a link between them. I think there may be a small transition cost from current usage, but (a) there just isn't that much total use of the terms now, and (b) current usage seems inconsistent about whether it includes (2).

Anki deck for "Some key numbers that (almost) every EA should know"

Neat! Is there any easy way to read the content without using the Anki software?
