Owen_Cotton-Barratt


Democratising Risk - or how EA deals with critics

I feel like there's just a crazy number of minority views (in the limit, a bunch of psychoses held by just one individual), most of which must be wrong. We're more likely to hear about minority views which later turn out to be correct, but it seems very implausible that the base rate of correctness is higher for minority views than for majority views.

On the other hand, I think there's some distinction to be drawn between "minority view disagrees with a strongly held majority view" and "minority view concerns something that the majority mostly ignores / doesn't have a view on".

Supporting Video, Audio, and other non-text media on the Forum

Intuitively, I'm pretty interested in the possibility of supporting more formats in service of serious discourse (e.g. having a place to share recordings of conversations that others might benefit from), and pretty uninterested in extra formats for the sake of driving more engagement. There's a middle ground of "driving engagement with serious discourse" which I'm not sure how to feel about.

Truthful AI

If this looks like an issue, one could distinguish speech acts (which are supposed to meet certain standards) from the outputs of various transparency tools (which hopefully meet some standards of accuracy, but might be based on different standards).

Truthful AI

The idea is that one statement which is definitely false seems a much more egregious violation of truthfulness than e.g. four statements which are each only 75% likely to be true.

Raising the probability of falsehood to a power >1 corrects for this. The choice of four for the exponent is a best guess based on thinking through a few examples and how bad things seemed, but I'm sure it's not the optimal value for the parameter.
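A quick worked check of that correction, assuming a statement's penalty scales as the probability of falsehood raised to the exponent $k = 4$ (my reading of the above, not a formula from the paper):

$$\text{penalty} = p_{\text{false}}^{\,k}, \qquad k = 4$$

One definitely false statement scores $1^4 = 1$, while four statements that are each 75% likely true (so 25% likely false) together score $4 \times 0.25^4 \approx 0.016$. With $k = 1$ the two cases would tie at $4 \times 0.25 = 1$, so it's exactly the exponent above 1 that makes the outright falsehood count as much worse.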

Truthful AI

The distinction I'm drawing is that "cannot spread it to you" is ambiguous between whether it's shorthand for:

  1. Cannot (in any circumstances) spread it to you
  2. Cannot (as a rule of thumb) spread it to you

Whereas I think that "can never spread it to you" or "absolutely cannot spread it to you" are harder to interpret as shortenings of interpretation 2.

Truthful AI

Some content which didn't make it into the paper in the end but is relevant for this discussion is a draft protocol for "counting microlies" (the coloured text is the instructions, to be read counterclockwise starting in the top left):
[image: draft protocol for counting microlies]

Truthful AI

In general, if we're asking what has a “poor” track record, it would be good to think about quantification and comparison to alternatives. Note that we'd consider sites like Wikipedia as examples of institutions doing a form of truth evaluation.

Discussions of fact-checking institutions often focus on some concrete case that the fact-checkers got wrong, but they are bound to get some things wrong. The questions are:

  1. What’s the overall track record over all statements (including those that seem easy/obvious)? 
  2. How well do they do against alternatives?  

Analogously, people often point out particular cases where prediction markets did badly, but advocates of prediction markets just claim that they are at least as accurate overall as alternative prediction mechanisms. And right now many questions humans ask are not controversial (e.g. science questions, local questions). But AI currently says false things about these questions! So there's lots of room for improvement without even touching the controversial stuff (though eventually one wants some relatively graceful handling of controversy).

(Thanks to Owain for most of these points.)

Truthful AI

Re: the particulars of fact-checkers and discretion, I'm in favour of more precise processes for assessing the possible meanings of ambiguous statements and then assessing the truth of those possible meanings. I think this could remove quite a bit of the subjectivity.

In the case of the example you give, I would give Biden's statement a medium penalty and Trump's statement a medium-large penalty. The difference is Trump's use of the word "whatsoever". This is the opposite of a caveat -- it stresses that the literal meaning, rather than the approximate one, is intended. To my mind, pairs of comparably bad statements would be:

  • Not bad:
    • Guns
      • "There were very few guns ..."
      • "For the most part, there were no guns ..."
    • Coronavirus
      • "... are less likely to spread it to you"
      • "... cannot spread it to you in most cases"
  • Somewhat bad:
    • "There were no guns ..."
    • "... cannot spread it to you"
  • Worse (but still with room to be more false):
    • Guns
      • "There were no guns whatsoever ..."
      • "There were absolutely no guns ..."
    • Coronavirus
      • "... absolutely cannot spread it to you"
      • "... can never spread it to you"

This is not to say that political bias isn't playing a role in how these organisations currently function, but I do think we can hope to establish more precise standards which reduce the scope for bias to apply.
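To make the two-step process above concrete, here is a minimal sketch of how the scoring might be mechanised: enumerate candidate meanings of a statement with plausibility weights, assess the probability that each meaning is false, and aggregate with the power-law penalty discussed in the earlier comment. The weights, the probabilities, and the exponent of 4 are all invented for illustration; none of this is from the paper.

```python
# Hypothetical sketch: score an ambiguous statement by (1) enumerating
# candidate meanings, (2) estimating how likely each meaning is false,
# and (3) aggregating with a power-law penalty (an exponent k > 1 makes
# one outright falsehood worse than several mild exaggerations).

from dataclasses import dataclass

K = 4  # assumed penalty exponent, as discussed in the earlier comment


@dataclass
class Interpretation:
    meaning: str
    weight: float   # how plausible this reading of the statement is
    p_false: float  # assessed probability that this reading is false


def penalty(readings: list[Interpretation]) -> float:
    """Weighted average over readings of the per-reading penalty p_false ** K."""
    total = sum(r.weight for r in readings)
    return sum(r.weight * r.p_false ** K for r in readings) / total


# "... cannot spread it to you": ambiguous between a rule of thumb and
# an absolute claim, so both readings get some weight.
hedged = [
    Interpretation("cannot, as a rule of thumb, spread it", 0.6, 0.10),
    Interpretation("cannot, in any circumstances, spread it", 0.4, 0.95),
]

# "... absolutely cannot spread it to you": 'absolutely' blocks the
# rule-of-thumb reading, so all the weight is on the literal claim.
absolute = [
    Interpretation("cannot, in any circumstances, spread it", 1.0, 0.95),
]

print(f"hedged:   {penalty(hedged):.3f}")    # ~0.326, a medium penalty
print(f"absolute: {penalty(absolute):.3f}")  # ~0.815, a larger penalty
```

On these made-up numbers the hedged phrasing scores about 0.33 and the absolute phrasing about 0.81, matching the intuition that words like "absolutely" or "whatsoever" collapse the ambiguity onto the literal reading and so deserve a larger penalty.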

Listen to more EA content with The Nonlinear Library

I do think that there's an interesting fuzzy boundary here between "derivative work" and "interpretative tool".

E.g. with the framing "turn it into a podcast", I feel kind of uncomfortable and at a gut level wish I were consulted on that happening to any of my posts.

But here's another framing: it's pretty easy to imagine a near-future world where anyone who wants can have a browser extension which will read things to them at this quality level, rather than reading them as visual text. If I ask "am I in favour of people having access to that browser extension?", I'm a fairly unambiguous yes. And then the current project can be seen as selectively providing early access to that technology. And that seems ... pretty fine?

This actually makes me more favourable to the version with automated rather than human readers. Human readers would make it seem more like a derivative work, whereas the automation makes the current thing seem closer to an interpretative tool.

The Cost of Rejection

Insurance seems like a fairly poor tool here, since there's a significant moral hazard effect (insurance makes people less careful about taking steps to minimize exposure), plus an adverse-selection dynamic where the price goes really high and then only the people who are most likely to attract lawsuits still take the insurance ...

Actually, if there were a market in this, I'd expect insurers to demand, as a condition of cover, legible steps to reduce exposure ... like not giving feedback to unsuccessful applicants.
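A toy illustration of that price spiral, with entirely made-up numbers: suppose the insurer must charge the average expected lawsuit cost of whoever remains in the pool, and any buyer whose own expected cost is below the premium drops coverage.

```python
# Toy adverse-selection spiral (hypothetical numbers): the premium is set to
# the average expected lawsuit cost of the remaining pool, and buyers whose
# own expected cost is below the premium drop out, so each round the pool
# gets riskier and the premium rises.

pool = [100, 200, 400, 800, 1600]  # hypothetical per-buyer expected lawsuit costs

while True:
    premium = sum(pool) / len(pool)  # insurer breaks even on the current pool
    print(f"premium {premium:.0f}, insured {len(pool)}")
    stayers = [cost for cost in pool if cost >= premium]  # others self-insure
    if stayers == pool:  # nobody else drops out, so the spiral has converged
        break
    pool = stayers
```

On these numbers the pool unravels in two rounds, ending with only the riskiest buyer insured at a premium equal to their full expected cost. Demanding legible risk-reduction steps as a condition of cover, as above, is one standard way insurers push back on this.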
