Gregory_Lewis

Researcher (on bio) at FHI

Gregory_Lewis's Comments

Why not give 90%?

Part of the story, on a consequentialising-virtue account, is that a desire for luxury is typically amenable to being changed in general, if not in Agape's case in particular. Thus her attitude of regret, rather than shrugging her shoulders, typically makes things go better: if not for her, then for third parties who have a shot at improving this aspect of themselves.

I think most non-consequentialist views (including ones I'm personally sympathetic to) would fuzzily circumscribe character traits where moral blameworthiness can apply even if they are incorrigible. To pick two extremes: if Agape were born blind, and this substantially impeded her from doing as much good as she would like, the commonsense view could sympathise with her regret, but insist she really has 'nothing to be sorry about'; yet if Agape couldn't help being a vicious racist, and this substantially impeded her from helping others (say, because the beneficiaries are members of racial groups she despises), this is a character-staining fault Agape should at least feel bad about, even if being otherwise is beyond her - plausibly, it would recommend she make strenuous efforts to change even if both she and others knew for sure all such attempts are futile.

Why not give 90%?

Nice one. Apologies for once again offering my 'c-minor mood' key variation: although I agree with the policy upshot, 'obligatory, demanding effective altruism' does have some disquieting consequences for agents following this policy in terms of their moral self-evaluation.

As you say, Agape does the right thing if she realises (similar to Professor Procrastinate) that although, in theory, she could give 90% (or whatever) of her income/effort to help others, in practice she knows this isn't going to work out; so, given she wants to do the most good, she should opt for doing somewhat less (10% or whatever), as she foresees being able to sustain this.

Yet the underlying reason for this is a feature of her character which should be the subject of great moral regret. Bluntly: she likes her luxuries so much that she can't abide being without them, despite being aware (inter alia) that a) many people have no choice but to go without the luxuries she licenses herself to enjoy; b) said self-provision implies grave costs to those in great need, who would benefit if (per impossibile) she could give more; c) her competing 'need' doesn't have great non-consequentialist defences (cf. if she were giving 10% rather than 90% due to looking after members of her family); d) there's probably not a reasonable story of desert for why she is in this fortunate position in the first place; e) she is aware of other people, similarly situated to her, who nonetheless do manage to do without similar luxuries and give more of themselves to help others.

This seems distinct from other prudential limitations a wise person should attend to. Agape, when making sure she gets enough sleep, may in some sense 'regret' she has to sleep for several hours each day. Yet it is wise for Agape to sleep enough, and needing to sleep (even if she needs to sleep more than others) is not a blameworthy trait. It is also wise for Agape to give less in the OP given her disposition of, essentially, "I know I won't keep giving to charity unless I also have a sports car". But even if Agape can no more help this than she can help needing to sleep, this trait is blameworthy.

Agape is not alone in having blameworthy features of her character - I, for one, have many; moral saintliness is rare, and most readers probably could do more to make the world better were they better people. 'Obligatory, demanding effective altruism' would also make recommendations against responses to this fact which are counterproductive (e.g. excessive self-flagellation, scrupulosity). I'd agree, but want to say slightly more about the appropriate attitude as well as the right action - something along the lines of non-destructive and non-aggrandising regret.[1] I often feel EAs tend to err in the direction of being estranged from their own virtue; but they should also try to avoid being too complaisant to their own vice.


[1] Cf. Kierkegaard, The Sickness unto Death:

Either in confused obscurity about oneself and one’s significance, or with a trace of hypocrisy, or by the help of cunning and sophistry which is present in all despair, despair over sin is not indisposed to bestow upon itself the appearance of something good. So it is supposed to be an expression for a deep nature which thus takes its sin so much to heart. I will adduce an example. When a man who has been addicted to one sin or another, but then for a long while has withstood temptation and conquered -- if he has a relapse and again succumbs to temptation, the dejection which ensues is by no means always sorrow over sin. It may be something else, for the matter of that it may be exasperation against providence, as if it were providence which had allowed him to fall into temptation, as if it ought not to have been so hard on him, since for a long while he had victoriously withstood temptation. But at any rate it is womanish [recte maudlin] without more ado to regard this sorrow as good, not to be in the least observant of the duplicity there is in all passionateness, which in turn has this ominous consequence that at times the passionate man understands afterwards, almost to the point of frenzy, that he has said exactly the opposite of that which he meant to say. Such a man asseverated with stronger and stronger expressions how much this relapse tortures and torments him, how it brings him to despair, "I can never forgive myself for it"; he says. And all this is supposed to be the expression for how much good there dwells within him, what a deep nature he is.

Thoughts on The Weapon of Openness
All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

No, I agree with these pro tanto costs of secrecy (and the others you mentioned before). But key to the argument is whether these problems inexorably get worse as time goes on. If so, then the benefits of secrecy inevitably have a sell-by date, and once the corrosive effects spread far enough one is better off 'cutting one's losses' - or never going down this path in the first place. If not, however, then secrecy could be a strategy worth persisting with if its (~static) costs are outweighed by the benefits on an ongoing basis.

The proposed trend of 'getting steadily worse' isn't apparent to me. Many organisations which do secret technical work have been around for decades (the NSA is one, most defence contractors are others, (D)ARPA, etc.). A skim of what they were doing in (say) the 80s versus the 50s doesn't give the impression they got dramatically worse, despite 30 years of secrecy's supposed corrosive impact. Naturally, the attribution is very murky (e.g. even if their performance remained okay, maybe secrecy had become much more corrosive but this was outweighed by countervailing factors like much larger investment; maybe they would have fared better under a 'more open' counterfactual), but the burden of dissecting out the 'secrecy × time' interaction term and showing it is negative should be borne by the affirmative case.

EA Survey 2019 Series: Donation Data

Minor:

Like last year, we ran a full model with all interactions, and used backwards selection to select predictors.

Presuming backwards selection is stepwise elimination, this is not a great approach to model generation. See e.g. this from Frank Harrell: in essence, stepwise selection tends to be a recipe for overfitting, and thus the models it generates tend to have inflated goodness-of-fit measures (e.g. R^2), overestimated coefficient estimates, and very hard-to-interpret p-values (given the implicit multiple testing in the prior 'steps'). These problems are compounded by generating a large number of new variables (all the interaction terms) for stepwise selection to play with.

Some improvements would be:

1. Select the variables by your judgement, and report that model. If you make any post-hoc additions (e.g. on suspecting an interaction term), report these with the rider that they are post-hoc assessments.

2. Have a hold-out dataset against which to test your model (however you choose to generate it). (Cross-validation is an imperfect substitute.)

3. Use ridge, lasso, elastic net, or another penalised approach to variable selection (see the sketch below).
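As a minimal sketch of (2) and (3), assuming survey data in a pandas DataFrame with a donation outcome and some candidate predictors (the file name and variable names here are hypothetical, not the survey's actual ones), a cross-validated lasso evaluated against a held-out split might look like:

```python
# Minimal sketch: penalised variable selection (lasso) evaluated on a hold-out
# set, instead of stepwise elimination. All file/variable names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LassoCV

df = pd.read_csv("survey.csv")                            # hypothetical data file
X = df[["income", "age", "years_in_ea", "pledge_taken"]]  # hypothetical predictors
y = np.log1p(df["donation_total"])                        # log-transform the skewed outcome

# Hold out a test set *before* any model selection happens (improvement 2).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# LassoCV picks the penalty by cross-validation within the training split;
# coefficients shrunk to exactly zero are effectively deselected (improvement 3).
model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(X_train, y_train)

print("held-out R^2:", model.score(X_test, y_test))
print("coefficients:", model.named_steps["lassocv"].coef_)
```

Because the penalty is chosen entirely within the training split, the held-out R^2 gives an honest estimate of fit, rather than one inflated by the selection procedure itself.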

Thoughts on The Weapon of Openness

Thanks for this; both the original work and your commentary were an edifying read.

I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice. Consider something like psychiatric detention: folks are generally in favour of (e.g.) personal freedom, and we do not need to think very hard to see how overruling this norm 'for their own good' could go terribly wrong (nor look very far to see examples of just this). Yet these considerations do not tell us what the optimal policy should be relative to the status quo, still less how it should be applied to a particular case.

Although the relevant evidence can be neither fully observed nor fairly sampled, there's a fairly good prima facie case for some degree of secrecy not leading to disaster, and sometimes being beneficial. There's a wisdom-of-the-crowd account on which secrecy is the default for some 'adversarial' research; it would be surprising if technological facts proved exceptions to the utility of strategic deception. Bodies that conduct 'secret by default' work have often been around for decades (and the states that house them, centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.

Moreover, technological secrecy has had some eye-catching successes: the NSA likely discovered differential cryptanalysis years before it appeared in the open literature; discretion by early nuclear scientists (championed particularly by Szilard) about what to publish credibly gave the Manhattan Project a decisive lead over rival programs. Openness can also have downsides - the one that springs to mind from my 'field' is that Al-Qaeda started exploring bioterrorism after learning of the United States expressing concern about the same.

Given what I said above, citing some favourable examples doesn't say much (although the nuclear weapon one may have proved hugely consequential). One account I am sympathetic to is differential (or optimal) disclosure: provide information in the manner which maximally advantages good actors over bad ones. This will recommend open broadcast in many cases: e.g. where there aren't really any bad actors, where the bad actors cannot take advantage of the information (or know it already, so broadcasting lets the good actors 'catch up'), where there aren't more selective channels, and so forth. But not always: there seem to be instances where, if possible, it would be better to preferentially disclose to good actors versus bad ones - and this requires some degree of something like secrecy.

Judging the overall first-order calculus, let alone weighing it against second-order concerns (such as those noted above), is fraught: although, for what it's worth, I think 'security service' norms tend closer to the mark than 'academic' ones. I understand cybersecurity faces similar challenges around vulnerability disclosure, as 'don't publish the bug until the vendor can push a fix' may not perform as well as one might naively hope: for example, 'white hats' postponing their discoveries hinders collective technological progress, and risks falling behind a 'black hat' community avidly trading tips and tricks. This consideration can also point the other way: the more able the 'white hats' are relative to their typically fragmented and incompetent adversaries, the greater the danger of their work 'giving bad people good ideas'. The FBI or whoever may prove much more adept at finding vulnerabilities terrorists could exploit than the terrorists themselves. They would be unwise to blog their red-teaming exercises.

Concerning the Recent 2019-Novel Coronavirus Outbreak

All of your examples seem much better than the index case I am arguing against. Commonsense morality attaches much less distaste to cases where those 'in peril' are not crisply identified (e.g. "how many will die in some pandemic in the future" is better than "how many will die in this particular outbreak", which is better than "will Alice, currently ill, live or die?"). It should also find bets on historical events (essentially) fine, as whatever good or ill is implicit in them has already occurred.

Of course, I agree your examples would be construed as to some degree morbid. But my recommendation wasn't "refrain from betting on any question where we can show the topic is to some degree morbid" (after all, betting on the GDP of a given country could be construed this way, given its large downstream impacts on welfare). It was to refrain in those cases where it appears very distasteful and there's no sufficient justification. As it seems I'm not expressing this balancing consideration well, I'll belabour it.

#

Say, God forbid, one of my friend's children has a life-limiting disease. On its face, it seems tasteless for me to compose predictions at all on questions like "will they still be alive by Christmas?" Carefully scrutinising whether they will live or die seems to run counter to the service I should be providing as a supporter of my friend's family and someone with the child's best interests at heart. It goes without saying that opening a book on a question like this seems deplorable, and offering (and confirming) bets where I take the pessimistic side despicable.

Yet other people do have good reason to try to compose an accurate prediction of survival or prognosis. The child's doctor may find themselves in the invidious position of recognising that their duty to give my friend's family the best estimate they can runs at cross purposes to other moral imperatives that apply too. The commonsense/virtue-ethicsy hope would be that the doctor can strike the balance that best satisfies these cross-purposes, so that otherwise callous thoughts and deeds are justified by their connection to providing important information to the family.

Yet any incremental information benefit isn't enough to justify anything of any degree of distastefulness. If the doctor opened a prediction market on a local children's hospice, I think (even if they were solely and sincerely motivated by good purposes, such as providing families with in-expectation better prognostication now and in the future) they would have gravely missed the mark.

Of the options available, 'bringing money' into it generally looks more ghoulish the closer the connection between 'something horrible happening' and 'payday!'. A mere prediction platform is better (although still probably the wrong side of the line unless we have specific evidence it will give a large benefit); paying people to make predictions on said platform (but paying for activity and aggregate accuracy, rather than direct 'bet results') is slightly better again. "This family's loss (of their child) will be my gain (of some money)" is the sort of grotesque counterfactual good people would strenuously avoid being party to, save for exceptionally good reason.

#

To repeat: it is the balance of these factors - which come in degrees - that determines the final evaluation. So, for example, I'm not against people forecasting the 'nCoV' question (indeed, I do as well), but the addition of money takes it the wrong side of the line (notwithstanding the money being ridden on it with laudable motivation). Likewise, I'm happy for people to prop bet pretty freely on some of your questions, but not on the 'nCoV' one (or some even more extreme versions), because the former are somewhat less ghoulish, etc. etc. etc.

I confess some irritation. For I think that whilst you and Oli are pressing arguments (sorry - "noticing confusion") re. there not being a crisp quality which obtains to the objectionable cases yet not the less objectionable ones (e.g. 'You say this question is 'morbid' - but look here! here are some other questions which are qualitatively morbid too, and we shouldn't rule them all out'), you are in fact committed to some sort of balancing account.

I presume (hopefully?) you don't think a 'child hospice sweepstake' would be a good idea for someone to try (even if it may improve our calibration! and it would give useful information re. paediatric prognostication which could be of value to the wider world! and capitalism is built on accurate price signals! etc. etc.). As you're not biting the bullet on these reductios (nor bmg's, nor others), you implicitly accept that all the considerations about why betting is a good thing are pro tanto, and can be overcome at some extreme limit of ghoulishness etc.

How to weigh these considerations is up for grabs. Yet picking each individual feature of ghoulishness in turn, and showing that it alone is not enough to warrant refraining from highly ghoulish bets (where the true case against would be composed of other factors alongside the one being shown to be individually insufficient), seems an exercise in the fallacy of division.

#

I also note that all the (few) prop bets I recall in EA up until now (including one I made with you) weren't morbid. This suggests refraining from morbid bets wouldn't appreciably reduce the track record of prop bets which show (as Oli sees it) the admirable EA virtue of skin in the game.

Concerning the Recent 2019-Novel Coronavirus Outbreak
Both of these are environments in which people participate in something very similar to betting. In the first case they are competing pretty directly for internet points, and in the second they are competing for monetary prices.
Those two institutions strike me as great examples of the benefit of having a culture of betting like this, and also strike me as similarly likely to create offense in others.

I'm extremely confident a lot more opprobrium attaches to bets where the payoff is in money than to those where the payoff is in internet points etc. As you note, I agree certain forecasting questions (even without cash) provoke distaste: if those same questions were on a prediction market, the reaction would be worse. (There's also likely an issue of the money raising a question about one's motivation: if epi types are trying to predict a death toll and not getting money for their efforts, it seems they have a laudable purpose in mind; less so if they are riding money on it.)

I agree with you that were there only the occasional one-off bet on the forum that was being critiqued here, the epistemic cost would be minor. But I am confident that a community that had a relationship to betting that was more analogous to how Chi's relationship to betting appears to be, we would have never actually built the Metaculus prediction platform.

This looks like a stretch to me. Chi can speak for themselves, but their remarks don't seem to entail a 'relationship to betting' writ large, but an uneasy relationship to morbid topics in particular. Thus the policy I take them to be recommending (which I also endorse), of refraining from making 'morbid' or 'tasteless' bets (but feeling free to prop bet to one's heart's desire on other topics), seems to have very minor epistemic costs, rather than threatening some transformation of epistemic culture which would mean people stop caring about predictions.

For similar reasons, this also seems relatively costless in terms of other perceptions: refraining from 'morbid' topics for betting only excludes a small minority of the questions one can bet upon, leaving the community plenty of opportunities to signal its virtuous characteristics re. taking ideas seriously, whilst avoiding those bets which reflect poorly upon it.

Concerning the Recent 2019-Novel Coronavirus Outbreak

I emphatically object to this position (and agree with Chi's). As best as I can tell, Chi's comment is more accurate and better argued than this critique, and so the relative karma between the two dismays me.

I think it is fairly obvious that 'betting on how many people are going to die' looks ghoulish to commonsense morality. I think the articulation of why this would be objectionable is only slightly less obvious: the party on the 'worse side' of the bet seems to be deliberately situating themselves to be rewarded as a consequence of the misery others suffer; there would also be suspicion about whether the person might try to contribute to the bad situation, seeking a pay-off; and perhaps a sense that one belittles the moral gravity of the situation by using it for prop betting.

Thus I'm confident that if we ran some survey confronting the 'person on the street' with the idea of people making this sort of bet, they would not think "wow, isn't it great they're willing to put their own money behind their convictions", but something much more adverse, along the lines of "holding a sweepstake on how many die".

(I can't find an easy instrument for this beyond asking people/anecdata: the couple of non-EA people I've run this by have reacted either negatively or very negatively, and I know commenters on forecasting questions which boil down to "will public figure X die before date Y" register their distaste. If there is a more objective assessment accessible, I'd offer odds at around 4:1 on the ratio of positive:negative sentiment being <1.)

Of course, I think such an initial 'commonsense' impression would be very unfair to Sean or Justin: I'm confident they engaged in this exercise only out of a sincere (and laudable) desire to try to better understand an important topic. Nonetheless (and to hold them to much higher standards than my own behaviour), one may suggest it is a lapse of practical wisdom if, whilst acting to fulfil one laudable motivation, one does not temper this with other moral concerns one should also be mindful of.

One needs to weigh the 'epistemic' benefits of betting (including higher-order terms) against the 'tasteless' complaint (both the moral-pluralism case of it possibly being bad, and the more prudential case of it looking bad to third parties). If the epistemic benefits were great enough, we should reconcile ourselves to the costs of sometimes acting tastelessly (triage is distasteful too) or of third parties (reasonably, if mistakenly) thinking less of us.

Yet the epistemic benefits on the table here (especially on the margin of 'feel free to bet, save on commonsensically ghoulish topics') are extremely slim. The rate of betting in EA/rationalist land on any question is very low, so the signal you get from small-n bets is trivial. There are other options, especially for this question, which give you much more signal per unit of activity - given that, unlike the stock market, people are interested in the answer for other-than-pecuniary motivations: both Metaculus and the Johns Hopkins prediction platform have relevant questions which are much more active, and where people are offering more information.

Given the marginal benefits are so slim, they are easily outweighed by the costs Chi notes. And they are.

The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*)

Thanks. Given you are recommending joining and remaining in the party, I think it would be better if the 'price' weren't quoted as a single month of membership.

One estimate could use the rate of leadership transitions. There have been ~17 in the last century of the Labour Party (ignoring acting leaders). Rounding up to 20 per century, this gives an expected vote for every 5 years of membership, and so a price of ~£4.38 × 60 ≈ £263 per leadership-contest vote. This looks a much less attractive value proposition to me.
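(As a quick back-of-the-envelope check of the figures above, under the stated assumptions:)

```python
# Back-of-the-envelope: cost per leadership vote, using the figures above
# (~17 transitions per century, rounded up to 20; £4.38/month membership).
transitions_per_century = 20
years_per_vote = 100 / transitions_per_century   # ~5 years of membership per contest
monthly_fee_gbp = 4.38
cost_per_vote = years_per_vote * 12 * monthly_fee_gbp
print(f"~{years_per_vote:.0f} years per vote -> ~£{cost_per_vote:.0f} per vote")
# prints: ~5 years per vote -> ~£263 per vote
```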

The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*)

Forgive me, but your post didn't exactly avoid any doubt, given:

1) The recommendation in the second paragraph is addressed to everyone regardless of political sympathy:

We believe that, if you're a UK citizen or have lived in the UK for the last year, you should pay £4.38 to register to vote in the current Labour leadership, so you can help decide 1 of the 2 next candidates for Prime Minister. (My emphasis)

2) Your OP itself gives a few reasons why those "indifferent or hostile to Labour Party politics" would want to be part of the selection. As you say:

For £4.38, you have a reasonable chance of determining the next candidate PM, and therefore having an impact in the order of billions of pounds. (Your emphasis)

Even a committed Conservative should have preferences over "conditional on Labour winning the next GE, which Labour MP would I prefer as PM?" (plus the more Machiavellian "who is the candidate I'd most want leading Labour, given I want them to lose to the Conservatives?").

3) Although the post doesn't advocate joining just to cancel after voting, noting that one can 'cancel any time', alongside the main motivation offered being to take advantage of a time-limited opportunity for impact (and alongside the quoted cost being a single month of membership), makes inferring this strategy not a dazzling feat of implicature (indeed, it would be the EV-maximising option, taking the OP's argument at face value).

#

Had the post merely used the upcoming selection in Labour to note there is an argument for political party participation similar to that for voting (i.e. getting a say over a handful of leading political figures); clearly stressed this applies across the political spectrum (so it is more a recommendation that EAs consider this reason to join the party they are politically sympathetic to, in expectation of voting in future leadership contests, rather than the one which happens to have a leadership contest on now); and strenuously disclaimed any suggestion of hit-and-run entryism (noting that it defects on various norms with existing members of the party, that membership mechanisms are somewhat based on trust that folks aren't going to 'game' them, etc.), I would have no complaints. But it didn't (although I hope it will), so here we are.
