Gregory Lewis

4395 karma · Joined Oct 2014

Bio

Researcher (on bio) at FHI

Comments (282)

I'd guess the distinction is more 'public interest disclosure' than 'officialness' (after all, a lot of whistleblowing ends up in the media because of inadequacy in 'formal' channels). Or, with apologies to Yes Minister: "I give confidential briefings, you leak, he has been charged under section 2a of the Official Secrets Act". 

The question seems to be one of proportionality: investigative or undercover journalists often completely betray the trust and (reasonable) expectations of privacy of their subjects/targets, and this can ethically vary from reprehensible to laudable depending on the value of what it uncovers (compare paparazzi to Panorama). Where this nets out for disclosing these Slack group messages is unclear to me. 

Hello Luke,

I suspect you are right to say that no one has carefully thought through the details of medical career choice in low and middle income countries - I regret I certainly haven't. One challenge is that the particular details of medical careers will not only vary between higher and lower income countries but also within these groups: I would guess (e.g.) Kenya and the Philippines differ more than the US and the UK. My excuse would be that I thought I'd write about what I knew, and that this would line up with the backgrounds of the expected audience. Maybe that was right in 2015, but it is much less so now, and - hopefully - will be clearly false in the near future.

Although I fear I'm little help in general, I can offer something more re. E2G vs. medical practice in Kenya.

First, some miscellaneous remarks/health warnings on the 'life saved' figure(s):

  • The effect size interval of 'physician density' crosses zero (P value ~ 0.4(!)). So with more sceptical priors/practices you might take this as a negative result. E.g. I imagine a typical GiveWell analyst would interpret this work as an indication that training more doctors is not a promising intervention.
  • Both wealth and education factors are much more predictive, which is at least indicative (if not decisive) of what stands better prospects of moving the population health needle. This fits with general doctrine in public health around the social determinants of health, and rhymes with the typically unimpressive impacts of generally greater medical care/expenditure in lottery studies, RCTs, etc.
  • Ecological methods may be the best we (/I) have, but are tricky; ditto the relatively small dataset and the bunch of confounds. If I wanted to give my best-guess central estimate of the impact of a doctor, I would adjust down further for likely residual confounding, probably by a factor of ~3. The most obvious example is that physician density likely proxies healthcare workers generally, and doctors are unlikely to contribute the majority of the impact of a 'marginal block of healthcare staff'.
  • I typically think the best use of this work is something like an approximate upper-bound: "When you control for the obvious, it is hard to see any impact of physicians in the aggregate - but it is unlikely to be much greater than X".
  • The 'scaling' effect of how much returns to physicians diminish as their density increases is a function of how the variables are linearised. Although this is indirectly data-driven (i.e. because the relationship is very non-linear, you linearise using a function which drives diminishing returns), it is not a 'discovery' from the analysis itself (see the sketch after this list). 
  • Although available data (and maybe reality) is much too underpowered to show this, I would guess this scaling overrates the direct impact of medical personnel in lower-income settings: advanced medical training is likely overkill for primary prevention (or sometimes typical treatment) of the main contributors to lower-income countries' burden of disease (e.g., for Kenya). If indeed the skill-mix should be tilted away from highly trained staff like physicians in low-income settings versus higher-income ones, then there is less of an outsized effect of physician density. 
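
To illustrate the linearisation point above - a minimal sketch, assuming purely for illustration that the model enters physician density through a log transform (I'm not claiming this is the exact functional form of the original analysis):

\[
M = \alpha + \beta \ln(d) + \text{(other covariates)}
\quad\Longrightarrow\quad
\frac{\partial M}{\partial d} = \frac{\beta}{d}
\]

Here \(M\) is the aggregate mortality/DALY outcome and \(d\) is physician density. Under this specification the marginal effect of an extra physician falls as \(1/d\) by construction, so the diminishing-returns pattern follows from the chosen transform rather than being a separate empirical finding.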

Anyway, bracketing all the caveats, plugging Kenya's current physicians-per-capita figure into the old regression model gives a marginal response of ~40 DALYs, so a 15x multiplier versus the same for the UK. If one (very roughly) takes ~20-40 DALYs = 1 'life saved', each year of Kenyan medical practice nets out to roughly 5-10k USD of GiveWell donations. 
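Spelling that arithmetic out (a rough sketch; the ~$5,000 GiveWell cost per 'life saved' is an assumed round figure I'm plugging in to make the conversion explicit, not an output of the regression):

# Back-of-envelope for the conversion above. The ~40 DALY marginal response and the
# 20-40 DALYs-per-'life saved' range are from the text; the ~$5,000 GiveWell cost
# per life saved is an assumed round figure for illustration.

marginal_dalys_per_year = 40            # marginal response of one extra Kenyan physician-year
dalys_per_life_saved = (20, 40)         # very rough conversion range
givewell_cost_per_life_usd = 5_000      # assumed GiveWell cost per 'life saved'

lives_per_year = [marginal_dalys_per_year / d for d in dalys_per_life_saved]
donation_equiv_usd = [lives * givewell_cost_per_life_usd for lives in lives_per_year]

print(f"Lives saved per physician-year: {min(lives_per_year):.0f}-{max(lives_per_year):.0f}")
print(f"GiveWell-donation equivalent: ${min(donation_equiv_usd):,.0f}-${max(donation_equiv_usd):,.0f}")
# -> roughly 1-2 lives per year, i.e. ~$5,000-$10,000 of GiveWell donations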

As you note, this is >>10% (at the upper end, >100%) of the average income of someone in Kenya. However, I'd take the upshot as less "maybe a medical career is a good idea for folks in lower-income countries" and more "maybe E2G in lower-income countries is usually a bad idea", as (almost by definition) the opportunities to generate high incomes to support large donations to worthy causes will be scarcer. 

Notably, the Kenyan diaspora in the US reports a median household income of ~$61,000, whilst the average income for a Kenyan physician is something like $35,000, so 'E2G + emigration' likely ends up ahead. Of course 'Just move to a high-income country' is not some trivial undertaking, and much easier said than done - but then again, the same applies to 'Just become a doctor'.

Asserting (as Epicurean views do) that death is not bad (in itself) for the being that dies is one thing. Asserting (as the views under discussion do) that death (in itself) is good - and ongoing survival bad - for the being that dies is quite another. 

Besides its divergence from virtually everyone's expressed beliefs and general behaviour, it doesn't seem to fare much better under deliberate reflection. To take a less emotionally charged variant of Mathers' example: responses to Singer's shallow pond case along the lines of "I shouldn't step in, because my non-intervention is in the child's best interest: the normal life they could 'enjoy' if they survive accrues more suffering in expectation than their imminent drowning" appear deranged. 
 

Re: your update, I'd guess the second-order case should rely on things being bad rather than looking bad. The second-order case in the OP looks pretty slim, and little better than the direct EV case: it is facially risible that supporters of a losing candidate owe the winning candidate's campaign reparations for having the temerity to compete against them in the primary. The tone of this attempt to garner donations - talking down to potential donors as if they were naughty children who should be ashamed of themselves for their political activity - also doesn't help.  

I'd guess strenuous primary contests within a party do harm the winning candidate's chances in the general (sort of like a much watered-down version of third-party candidates splitting the vote for D or R), but competitive primaries seem on balance neutral-to-good for political culture, so competing in them when one has a fair chance of winning seems fair game. 

It seems the key potential 'norm violation you owe us for' is the significant out-of-state fundraising. If this were in some sense a 'bug' in the political system, taking advantage of it would give Salinas and her supporters a legitimate beef (and would defuse the potential hypocrisy of supporters of Salinas attacking Flynn in the primary for this yet subsequently hoping to solicit the same to benefit Salinas in the general - the latter is sought to 'balance off' the former). This looks colorable but dubious by my lights: not least, nationwide efforts for both parties typically funnel masses of out-of-state support to candidates in particular election races, and a principled distinction between the two isn't apparent to me.  

I agree this form of argument is very unconvincing. That "people don't act as if Y is true" is a pretty rubbish defeater for "people believe Y is true", and a very rubbish defeater for "X being true" simpliciter. But this argument isn't Ord's, but one of your own creation.

Again, the validity of the philosophical argument doesn't depend on how sincerely a belief is commonly held (or whether anyone believes it at all). The form is simply modus tollens:

  1. If X (~sanctity of life from conception) then Y (natural embryo loss is a much greater moral priority than, e.g., HIV)
  2. ¬Y (Natural embryo loss is not a much greater moral priority than (e.g.) HIV)
  3. ¬X (The sanctity of life from conception view is false)

Crucially, ¬Y is not motivated by interpreting supposed revealed preferences from behaviour. Besides it being ~irrelevant ("Person or group does not (really?) believe Y -->?? Y is false"), this apparent hypocrisy can be explained by ignorance rather than insincerity: it's not like statistics around natural embryo loss are common knowledge, so their inaction towards the Scourge could be owed to them being unaware of it.

¬Y is mainly motivated by appeals to Y's apparent absurdity. Ord (correctly) anticipates that very few people on reflection would find Y plausible, and so would find that, if X indeed entailed Y, this would be a reason to doubt X. Again, it is the implausibility on rational reflection, not the concordance of belief and practice among those who claim to believe it, which drives the argument.

Sure - I'm not claiming "EA doctrine" has no putative counter-examples which should lead us to doubt it. But these counter-examples should rely on beliefs about propositions not assessments of behaviour: if EA says "it is better to do X than Y", yet this seems wrong, this is a reason to doubt EA, but whether anyone is actually doing X (or X instead of Y) is irrelevant. "EA doctrine" (ditto most other moral views) urges us to be much less selfish - that I am selfish anyway is not an argument against it.

I think this piece mostly misunderstands Ord's argument, through confusing reductios with revealed preferences. Although you quote the last sentence of the work in terms of revealed preferences, I think you get a better picture of Ord's main argument from his description of it:

The argument then, is as follows. The embryo has the same moral status as an adult human (the Claim). Medical studies show that more than 60% of all people are killed by spontaneous abortion (a biological fact). Therefore, spontaneous abortion is one of the most serious problems facing humanity, and we must do our utmost to investigate ways of preventing this death—even if this is to the detriment of other pressing issues (the Conclusion).

Note there's nothing here about hypocrisy, and the argument isn't "Ord wants us to interpret people’s departure from their stated moral beliefs, not as moral failure or selfishness or myopia or sin, but as an argument against people’s stated moral claims." 

This wouldn't be much of an argument anyway. Besides the Phil-101 point that "even if pro-lifers are hypocrites, their (pretended) belief could still be true", it's very weak as an abductive consideration: even if pro-lifers' hypocrisy gives some evidence their (stated) beliefs are false (through a few mechanisms I'll spare elaborating), this counts for little unless the hypocrisy is of a remarkably greater degree than others'. As moral hypocrisy is all but universal, and merely showing (e.g.) that stereotypical Kantians sometimes lie, or that utilitarians give less to charity than they say they ought (etc.), is not much of a revelation, I doubt this (or the other extensions in the OP) bears much significance in terms of identifying particularly discrediting hypocrisy.  

The challenge of the Scourge is that a common bioconservative belief ("The embryo has the same moral status as an adult human") may entail another which seems facially highly implausible ("Therefore, spontaneous abortion is one of the most serious problems facing humanity, and we must do our utmost to investigate ways of preventing this death—even if this is to the detriment of other pressing issues"). Many (most?) find the latter bizarre, so if they believed it was entailed by the bioconservative claim, they would infer this claim must be false. Again, this reasoning is basically orthogonal to any putative hypocrisy among those asserting its truth: even if it were the case that (e.g.) the Catholic Church was monomaniacal in its efforts to combat natural embryo loss, the argument would still lead me to think they were mistaken.

Ord again:

One certainly could save the Claim by embracing the Conclusion, however I doubt that many of its supporters would want to do so. Instead, I suspect that they would either try to find some flaw in the argument, or abandon the Claim. Even if they were personally prepared to embrace the Conclusion, the Claim would lose much of its persuasive power. Many of the people they were trying to convince are likely to see the Conclusion as too bitter a pill, and to decide that if these embryo-related practices are wrong at all, it cannot be due to the embryo having full moral status.

The guiding principle I recommend is 'disclose in the manner which maximally advantages good actors over bad actors'. As you note, this usually will mean something between 'public broadcast' and 'keep it to yourself', and perhaps something in and around responsible disclosure in software engineering: try to get the message to those who can help mitigate the vulnerability without it leaking to those who might exploit it.

On how to actually do it, I mostly agree with Bloom's answer. One thing to add: although I can't speak for OP staff, Esvelt, etc., I'd expect that they - like me - would far rather have someone 'pester' them with a mistaken worry than see a significant concern get widely disseminated because someone was too nervous to reach out to them directly.

Speaking for myself: If something comes up where you think I would be worth talking to, please do get in touch so we can arrange a further conversation. I don't need to know (and I would recommend against including) particular details in the first instance.

(As perhaps goes without saying, at least for bio - and perhaps elsewhere - I strongly recommend against people trying to generate hazards, 'red teaming', etc.)

Thanks for this, Richard.

As you (and other commenters) note, another aspect of Pascalian probabilities is their subjectivity/ambiguity. Even if you can't (accurately) generate "what is the probability I get hit by a car if I run across this road now?", you have "numbers you can stand somewhat near" to gauge the risk - or at least 'this has happened before' case studies (cf. asteroids). Although you can motivate more longtermist issues via similar means (e.g. "Well, we've seen pandemics at least this bad before", "What's the chance folks raising grave concern about an emerging technology prove to be right?") you typically have less to go on and are reaching further from it.

I think we share similar intuitions: this is a reasonable consideration, but it seems better to account for it quantitatively (e.g. with a sceptical prior or a discount for 'distance from solid epistemic ground') rather than with a qualitative heuristic. E.g. it seems reasonable to discount AI risk estimates (potentially by orders of magnitude) if it all seems very outlandish to you - but then you should take these 'all things considered' estimates at face value.

 
