Matt_Sharp

328 karma · Joined Oct 2014

Bio

I'm just a normal, functioning member of the human race, and there's no way anyone can prove otherwise

Comments (55)

I don't think I know enough about either to make that judgement. 

Also tbh right now I don't have the time or interest to debate this topic. I provided the above comments as possible reasons you received a few downvotes, rather than to indicate a desire to debate the topic itself.

What sort of examples do you want? Do you want me to call out specific individuals who misquoted and say that's bad? You could look through my comment history and find some examples if you want to, but I thought drawing attention to and shaming those people would be bad.


It's generally a good sentiment to not want to call out specific individuals, particularly if they are not repeat offenders. However, if this is a widespread issue that is worth the attention of the community, then providing lots of examples will help demonstrate the scale of the problem without it seeming like you're picking on one or two people. 

If it is only one or two people who are repeat offenders, and these are senior members of EA orgs (and/or regular posters on the EA Forum), then shaming them may be justified.

It's easier to discuss whether misquoting is very bad for truth seeking, and mistreats a victim, without simultaneously making it a discussion about whether particular individuals in the community are bad.

Without examples to demonstrate that it's a common issue in the EA community, you may find that the discussion is very short, as I suspect most people will just think "yeah, misquoting is indeed bad for truth seeking, which is why I don't do it". 

I didn't downvote either of your articles on misquoting. Skimming over the first article now, it seems reasonably well argued.

However, I agree with the following points made in this comment (which you also referred to in your second article):

  • There's too much to read, so people don't have extensive time to engage with everything. Try to be succinct.
    • One of your posts took 22 minutes to say that people shouldn't misquote. It's a rather obvious conclusion that can be made in 3 minutes tops. I think some people read that as a rant.
  • Use examples showing why the topic is important (or even stories). It allows readers to link your arguments to something that exists.
    • You can think in purely abstract terms - but most people are not like that. A useful point to keep in mind is that you are not your audience. What works for you doesn't work for most other people, so adapting to other reasoning styles is useful.

From skimming your first misquoting article, I don't think you've made the case that misquoting is a particular problem within EA. I don't think there are any examples? In which case, some people might read it, get to the end and think "well that was a waste of 22 minutes and hardly seems relevant to EA, so I'll downvote it to deter others from spending time reading it".

I may be misinterpreting something, but I think what Emrik has described is basically how generic multi-attribute utility instruments (MAUIs) are used by health economists in the calculation of QALYs. 

For example, as described in this excellent overview, the EQ-5D questionnaire asks about 5 different dimensions of health, which are then valued in combination: 

  1. mobility (ability to walk about)
  2. self-care (ability to wash and dress yourself)
  3. usual activities (ability to work, study, do housework, engage in leisure activities, etc.)
  4. pain/discomfort
  5. anxiety/depression

Each dimension is scored at level 1 (no problems), 2 (some/moderate problems), or 3 (extreme problems). These scores are combined into a five-digit health state profile, e.g., 21232 means some problems walking about, no problems with self-care, some problems performing usual activities, extreme pain or discomfort, and moderate anxiety or depression. However, this number has no mathematical properties: 31111 is not necessarily worse than 11112, as problems in one dimension may have a greater impact on quality of life than problems in another. Obtaining the weights for each health state, then, requires a valuation exercise.
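
To make the encoding concrete, here is a minimal sketch (my own illustration; the function name and labels are mine, not part of the EQ-5D specification) that decodes a five-digit profile into its dimension levels. It deliberately assigns no utility value, since that requires the valuation exercise discussed below.

```python
# Hypothetical helper (not from the EQ-5D documentation): decode an EQ-5D-3L
# five-digit profile into human-readable dimension levels.

DIMENSIONS = ["mobility", "self-care", "usual activities",
              "pain/discomfort", "anxiety/depression"]
LEVELS = {1: "no problems", 2: "some/moderate problems", 3: "extreme problems"}

def describe_profile(profile: str) -> dict:
    """Map a profile such as '21232' to the level of each dimension."""
    if len(profile) != 5 or any(c not in "123" for c in profile):
        raise ValueError("EQ-5D-3L profiles are five digits, each 1-3")
    return {dim: LEVELS[int(level)] for dim, level in zip(DIMENSIONS, profile)}

print(describe_profile("21232"))
# {'mobility': 'some/moderate problems', 'self-care': 'no problems',
#  'usual activities': 'some/moderate problems',
#  'pain/discomfort': 'extreme problems',
#  'anxiety/depression': 'some/moderate problems'}
```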

Valuation methods

There are many ways of generating a value set (set of weights or utilities) for the health states described by a health utility instrument. (For reviews, see e.g., Brazier, Ratcliffe, et al., 2017 or Green, Brazier, & Deverill, 2000; they are also discussed further in Part 2.) The following five are the most common (a short code sketch reproducing the worked examples follows the list):

  • Time tradeoff: Respondents directly trade off duration and quality of life, by stating how much time in perfect health is equivalent to a fixed period in the target health state. For example, if they are indifferent between living 10 years with moderate pain and 8 years in perfect health, the weight for moderate pain (state 11121 in the EQ-5D-3L) is 0.8.
  • Standard gamble: Respondents trade off quality of life and risk of death, by choosing between a fixed period (e.g., 10 years) in the target health state and a “gamble” with two possible outcomes: the same period in perfect health, or immediate death. If they would be indifferent between the options when the gamble has a 20% probability of death, the weight is 0.8.
  • Discrete choice experiments: Respondents choose the “best” health state out of two (or sometimes three) options. Drawing on random utility theory, the location of the utilities on an interval scale is determined by the frequency each is chosen, e.g., if 55% of respondents say the first person is healthier than the second (and 45% the reverse), they are close together, whereas if the split is 80:20 they are far apart. This ordinal data then has to be anchored to 0 and 1; some ways of doing so are presented in Part 2. Less common ordinal methods include:
    • Ranking: Placing several health states in order of preference.
    • Best-worst scaling: Choosing the best and worst out of a selection of options.
  • Visual analog scale: Respondents mark the point on a thermometer-like scale, usually running from 0 (e.g., “the worst health you can imagine”) to 100 (e.g., “the best health you can imagine”), that they feel best represents the target health state. If they are also asked to place “dead” on the scale, a QALY value can be easily calculated. For example, with a score of 90/100 and a dead point of 20/100, the weight is (90-20)/(100-20) = 70/80 = 0.875.
  • Person tradeoff (previously called equivalence studies): Respondents trade off health (and/or life) across populations. For example, if they think an intervention that moves 500 people from the target state to perfect health for one year is as valuable as extending the life of 100 perfectly healthy people for a year, the QALY weight is 1 – (100/500) = 0.8.[13]
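
Here is the short code sketch referred to above (my own illustration; the function names are hypothetical, not standard health-economics software), reproducing the worked weights for the time tradeoff, standard gamble, visual analog scale, and person tradeoff:

```python
# Each function simply rearranges the indifference condition described in the
# corresponding bullet above; real valuation studies estimate weights from many
# respondents and health states, not single answers.

def tto_weight(years_perfect_health: float, years_in_state: float) -> float:
    """Time tradeoff: indifferent between years_in_state in the target state
    and years_perfect_health in perfect health."""
    return years_perfect_health / years_in_state

def sg_weight(prob_death_at_indifference: float) -> float:
    """Standard gamble: weight equals the gamble's probability of perfect health."""
    return 1.0 - prob_death_at_indifference

def vas_weight(state_score: float, dead_score: float, scale_max: float = 100.0) -> float:
    """Visual analog scale: rescale so 'dead' maps to 0 and the scale top to 1."""
    return (state_score - dead_score) / (scale_max - dead_score)

def pto_weight(people_cured: float, equivalent_life_extensions: float) -> float:
    """Person tradeoff: 1 minus the ratio of equivalent healthy life-year
    extensions to the number of people cured."""
    return 1.0 - (equivalent_life_extensions / people_cured)

print(tto_weight(8, 10))     # 0.8   (time tradeoff example)
print(sg_weight(0.20))       # 0.8   (standard gamble example)
print(vas_weight(90, 20))    # 0.875 (visual analog scale example)
print(pto_weight(500, 100))  # 0.8   (person tradeoff example)
```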

The Sky at Night did a fantastic one-hour interview with him for their June show, to celebrate his 80th birthday (unfortunately I'm not sure if the interview is accessible to those outside the UK). 

The focus is obviously astronomy, so it mostly covers his achievements and interesting things that have happened in his lifetime. However, the final 20 minutes or so discusses extraterrestrial intelligence, the future of life/humanity, and associated threats.

I think we should avoid using acronyms where possible. 

It makes sense to occasionally use them when you're abbreviating common phrases or names of organisations that would otherwise be long/unwieldy to say or write in full every time. But too often acronyms just needlessly introduce barriers to understanding.

“@1 seems unreasonable, because as soon as the first AI-economics people would come up with these arguments, if they were reasonable, they would become mainstream”

1 seems the most plausible to me. Reasonable arguments might eventually become mainstream, but that doesn't mean they would do so immediately. 

In particular: (a) there may not be many AI-economics people, so the signal could get lost in the noise; and (b) economics journals may tend to favour research that focuses on established topics or that uses clever methodology, rather than topics that are important/valuable.

This is similar to this post from just a couple of weeks ago - you may also be interested in the comments on that post.
