Gregory Lewis

4298 karma · Joined Oct 2014


Researcher (on bio) at FHI


Cf. your update, I'd guess the second-order case should rely on things being bad rather than looking bad. The second-order case in the OP looks pretty slim, and little better than the direct EV case: it is facially risible that supporters of a losing candidate owe the winning candidate's campaign reparations for having the temerity to compete against them in the primary. The tone of this attempt to garner donations - talking down to potential donors as if they were naughty children who should be ashamed of themselves for their political activity - doesn't help either.

I'd guess strenuous primary contests within a party do harm the winning candidate's chances in the general (sort of like a much watered-down version of third-party candidates splitting the vote for D or R), but competitive primaries seem on balance neutral-to-good for political culture, so competing in them when one has a fair chance of winning seems fair game.

It seems the key potential 'norm violation you owe us for' is the significant out-of-state fundraising. If this was in some sense a 'bug' in the political system, taking advantage of it would give Salinas and her supporters a legitimate beef (and would defuse the potential hypocrisy of supporters of Salinas attacking Flynn in the primary for this yet subsequently hoping to solicit the same to benefit Salinas for the general - the latter is sought to 'balance off' the former). This looks colorable but dubious by my lights: not least, nationwide efforts for both parties typically funnel masses of out-of-state support to candidates in particular election races, and a principled distinction between the two isn't apparent to me.

I agree this form of argument is very unconvincing. That "people don't act as if Y is true" is a pretty rubbish defeater for "people believe Y is true", and a very rubbish defeater for "X is true" simpliciter. But this argument isn't Ord's; it's one of your own creation.

Again, the validity of the philosophical argument doesn't depend on how sincerely a belief is commonly held (or whether anyone believes it at all). The form is simply modus tollens:

  1. If X (the sanctity of life from conception) then Y (natural embryo loss is, e.g., a much greater moral priority than HIV)
  2. ¬Y (natural embryo loss is not a much greater moral priority than, e.g., HIV)
  3. Therefore ¬X (the sanctity of life from conception view is false)
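(The form itself can be machine-checked; here's a minimal sketch in Lean, with a theorem name of my own choosing:)

```lean
-- Modus tollens: from (X → Y) and ¬Y, conclude ¬X.
theorem scourge_modus_tollens {X Y : Prop} (h : X → Y) (hny : ¬Y) : ¬X :=
  fun hx => hny (h hx)
```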

Crucially, ¬Y is not motivated by interpreting supposed revealed preferences from behaviour. Besides it being ~irrelevant ("Person or group does not (really?) believe Y -->?? Y is false") this apparent hypocrisy can be explained by ignorance rather than insincerity: it's not like statistics around natural embryo loss are common knowledge, so their inaction towards the Scourge could be owed to them being unaware of it.

¬Y is mainly motivated by appeals to Y's apparent absurdity. Ord (correctly) anticipates very few people on reflection would find Y plausible, and so, if X indeed entailed Y, they would take this as a reason to doubt X. Again, it is Y's implausibility on rational reflection, not the concordance of practice among those who claim to believe it, which drives the argument.

Sure - I'm not claiming "EA doctrine" has no putative counter-examples which should lead us to doubt it. But these counter-examples should rely on beliefs about propositions not assessments of behaviour: if EA says "it is better to do X than Y", yet this seems wrong, this is a reason to doubt EA, but whether anyone is actually doing X (or X instead of Y) is irrelevant. "EA doctrine" (ditto most other moral views) urges us to be much less selfish - that I am selfish anyway is not an argument against it.

I think this piece mostly misunderstands Ord's argument, through confusing reductios with revealed preferences. Although you quote the last sentence of the work in terms of revealed preferences, I think you get a better picture of Ord's main argument from his description of it:

The argument then, is as follows. The embryo has the same moral status as an adult human (the Claim). Medical studies show that more than 60% of all people are killed by spontaneous abortion (a biological fact). Therefore, spontaneous abortion is one of the most serious problems facing humanity, and we must do our utmost to investigate ways of preventing this death—even if this is to the detriment of other pressing issues (the Conclusion).

Note there's nothing here about hypocrisy, and the argument isn't "Ord wants us to interpret people’s departure from their stated moral beliefs, not as moral failure or selfishness or myopia or sin, but as an argument against people’s stated moral claims."

This wouldn't be much of an argument anyway: besides the Phil-101 points around "Even if pro-lifers are hypocrites their (pretended) belief could still be true", it's still very weak as an abductive consideration. Even if pro-lifers' hypocrisy gave some evidence their (stated) beliefs are false (through a few mechanisms I'll spare elaborating), this would count for little unless the hypocrisy were of a remarkably greater degree than is typical. As moral hypocrisy is all-but-universal, and merely showing (e.g.) that stereotypical Kantians sometimes lie, or that utilitarians give less to charity than they say they ought (etc. etc.), is not much of a revelation, I doubt this (or the other extensions in the OP) bears much significance in terms of identifying particularly discrediting hypocrisy.

The challenge of the Scourge is that a common bioconservative belief ("The embryo has the same moral status as an adult human") may entail another which seems facially highly implausible ("Therefore, spontaneous abortion is one of the most serious problems facing humanity, and we must do our utmost to investigate ways of preventing this death—even if this is to the detriment of other pressing issues"). Many (most?) find the latter bizarre, so if they believed it was entailed by the bioconservative claim, they would infer this claim must be false. Again, this reasoning is basically orthogonal to any putative hypocrisy among those asserting its truth: even if it were the case (e.g.) that the Catholic Church was monomaniacal in its efforts to combat natural embryo loss, the argument would still lead me to think they were mistaken.

Ord again:

One certainly could save the Claim by embracing the Conclusion, however I doubt that many of its supporters would want to do so. Instead, I suspect that they would either try to find some flaw in the argument, or abandon the Claim. Even if they were personally prepared to embrace the Conclusion, the Claim would lose much of its persuasive power. Many of the people they were trying to convince are likely to see the Conclusion as too bitter a pill, and to decide that if these embryo-related practices are wrong at all, it cannot be due to the embryo having full moral status.

The guiding principle I recommend is 'disclose in the manner which maximally advantages good actors over bad actors'. As you note, this usually will mean something between 'public broadcast' and 'keep it to yourself', and perhaps something in and around responsible disclosure in software engineering: try to get the message to those who can help mitigate the vulnerability without it leaking to those who might exploit it.

On how to actually do it, I mostly agree with Bloom's answer. One thing to add is although I can't speak for OP staff, Esvelt, etc., I'd expect - like me - they would far rather have someone 'pester' them with a mistaken worry than see a significant concern get widely disseminated because someone was too nervous to reach out to them directly.

Speaking for myself: If something comes up where you think I would be worth talking to, please do get in touch so we can arrange a further conversation. I don't need to know (and I would recommend against including) particular details in the first instance.

(As perhaps goes without saying, at least for bio - and perhaps elsewhere - I strongly recommend against people trying to generate hazards, 'red teaming', etc.)

Thanks for this, Richard.

As you (and other commenters) note, another aspect of Pascalian probabilities is their subjectivity/ambiguity. Even if you can't (accurately) generate "what is the probability I get hit by a car if I run across this road now?", you have "numbers you can stand somewhat near" to gauge the risk - or at least 'this has happened before' case studies (cf. asteroids). Although you can motivate more longtermist issues via similar means (e.g. "Well, we've seen pandemics at least this bad before", "What's the chance folks raising grave concern about an emerging technology prove to be right?") you typically have less to go on and are reaching further from it.

I think we share similar intuitions: this is a reasonable consideration, but it seems better to account for it quantitatively (e.g. with a sceptical prior or discount for 'distance from solid epistemic ground') rather than a qualitative heuristic. E.g. it seems reasonable to discount AI risk estimates (potentially by orders of magnitude) if it all seems very outlandish to you - but then you should treat these 'all things considered' estimates at face value.
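For concreteness, the quantitative adjustment might look something like this (a sketch with made-up illustrative numbers, not anyone's actual estimates):

```python
# Sketch of a 'sceptical discount' on an outlandish-seeming estimate.
# All numbers are illustrative assumptions, not anyone's actual views.

inside_view_risk = 1e-2      # someone's stated risk estimate
sceptical_discount = 1e-2    # orders-of-magnitude discount for distance
                             # from solid epistemic ground
all_things_considered = inside_view_risk * sceptical_discount

# Then treat the discounted, 'all things considered' number at face value:
print(all_things_considered)  # ~1e-4
```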


Thanks for the post.

As you note, whether you use exponential or logistic assumptions is essentially decisive for the long-run importance of increments in population growth. Yet we can rule out the exponential assumptions on which this proposed 'Charlemagne effect' relies.

Boundless forward compounding is physically impossible in principle: there are upper bounds on growth rate from (e.g.) the speed of light, and limits on density from the amount of available matter in a given volume. This is why logistic functions, not exponential ones, are used for modelling populations in (e.g.) ecology.
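To illustrate the difference (with toy parameters of my own, not from the post): logistic growth flattens out at a carrying capacity K, while exponential growth compounds without bound.

```python
import math

def exponential(p0: float, r: float, t: float) -> float:
    """Population after t years at constant growth rate r."""
    return p0 * (1 + r) ** t

def logistic(p0: float, r: float, K: float, t: float) -> float:
    """Closed-form solution of dP/dt = r*P*(1 - P/K)."""
    return K / (1 + (K / p0 - 1) * math.exp(-r * t))

# With illustrative numbers, the logistic curve saturates near K,
# while the exponential keeps compounding:
print(logistic(1e6, 0.01, 1e10, 10_000))   # ~1e10 (pinned at the cap)
print(exponential(1e6, 0.01, 10_000))      # ~1.6e49 (absurdly large)
```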

Concrete counter-examples to the exponential modelling are thus easy to generate. To give a couple:

A 1% constant annual growth rate assumption would imply that saving one extra survivor 4,000 years ago would result in a current population of ~2×10^17: 200 quadrillion people.

A 'conservative' 0.00001% annual growth rate still results in the population growing by an order of magnitude every ~25 million years. At this rate, you end up with a greater population than atoms in the observable universe within 2 billion years. If you run until the end of the stelliferous era (100 trillion years) at the same rate, you end up with populations on the order of 10^millions, with a population density of basically 10^thousands per cubic millimetre.
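The arithmetic above is easy to check (a quick sketch of my own, working in log10 to avoid floating-point overflow):

```python
import math

def log10_population(years: float, annual_rate: float) -> float:
    """log10 of the population descending from one survivor after
    `years` of constant compounding at `annual_rate`."""
    return years * math.log10(1 + annual_rate)

# 1% growth for 4,000 years from one extra survivor:
print(log10_population(4_000, 0.01))   # ~17.3, i.e. ~2e17 people

# 0.00001% (1e-7) annual growth: years per order of magnitude
years_per_oom = math.log(10) / math.log(1 + 1e-7)
print(years_per_oom)                   # ~2.3e7 years (~25 million)

# Orders of magnitude gained in 2 billion years - comfortably past
# the ~1e80 atoms in the observable universe:
print(2e9 / years_per_oom)             # ~87
```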

I don't find said data convincing re. CFAR, for reasons I fear you've heard me rehearse ad nauseam. But this is less relevant: if it were just 'CFAR, as an intervention, sucks', I'd figure (and have figured over the last decade) that folks can make up their own minds without me. The worst case, if that were true, is wasting some money and a few days of their time.

The doctor case was meant to illustrate that sufficiently consequential screw-ups in an activity can warrant disqualification from doing it again - even if one is candid and contrite about them. I agree activities vary in the prevalence of their "failure intolerable" tasks (medicine and aviation have a lot, creating a movie or a company very few). But most jobs which involve working with others have some things for which failure tolerance is ~zero, and these typically involve safety and safeguarding. For example, a teacher who messes up their lesson plans obviously shouldn't be banned from their profession as a first resort; yet disqualification looks facially appropriate for one who allows their TA to try and abscond with one of their students on a field trip.

CFAR's track record includes a litany of awful mistakes re. welfare and safeguarding, each of which taken alone would typically warrant suspension or disqualification, and which in concert should guarantee the latter: they demonstrate - rather than (e.g.) a "grave mistake which is an aberration from their usually excellent standards" - a pattern of gross negligence and utter corporate incompetence. Whatever intermediate degree of risk attending these workshops constitutes, it is unwise to accept (or to encourage others to accept), given that CFAR realising these risks is already well-established.

CFAR's mistakes regarding Brent

Although CFAR noted it needed to greatly improve re. "Lack of focus on safety" and "Insufficient Institutional safeguards", evidence that these have improved, or are now adequate, remains scant. Noting "we have reformed various things" in an old update is not good enough.

Whether anything would be 'good enough' is a fair question. If I, with (mostly) admirable candour, describe a series of grossly incompetent mistakes during my work as a doctor, the appropriate response may still be to disqualify me from future medical practice (there are sidelines re. incentives, but they don't help). The enormity of fucking up as badly as (e.g.[!!!]):

Of the interactions CFAR had with Brent, we consider the decision to let him assist at ESPR—a program we helped run for high school students—to have been particularly unwise. While we were not aware of any allegations of abuse at the time of that decision, many of us did feel that his behavior was sometimes manipulative, and that he was often dismissive of standard ethical norms. We consider it an obvious error to have ignored these behaviors when picking staff for a youth program.

Once the allegations about Brent became public, we notified ESPR students and their parents about them. We do not believe any students were harmed. However, Brent did invite a student (a minor) to leave camp early to join him at Burning Man. Beforehand, Brent had persuaded a CFAR staff member to ask the camp director for permission for Brent to invite the student. Multiple other staff members stepped in to prevent this, by which time the student had decided against attending anyway.

This student does not believe they were harmed. Nevertheless, we consider this invitation to have been a clear violation of common sense ethics. After this incident, CFAR made sure not to invite Brent back to any further youth programs, but we now think it was a mistake not to have gone further and banned Brent from all CFAR events. Additionally, while we believe the staff member’s action resulted mostly from Brent’s influence causing them not to register the risks, we and they nonetheless agreed that it would be best to part ways, in light both of this incident and a general shared sense of heading in different directions. They left CFAR’s employment in November 2018; they will not be in any staff or volunteer roles going forward, but they remain a welcome member of the alumni community.

should be sufficient to disqualify CFAR from running 'intensive' residential retreats, especially given the 'inner work' and 'mutual vulnerability' they (at least used to) have.

I would also hope a healthy EA community would warn its members away from things like this. Regardless, I can do my part: for heaven's sake, just don't go. 
