Gregory_Lewis

Researcher (on bio) at FHI

Gregory_Lewis's Comments

EA Forum feature suggestion thread

On-site image hosting for posts/comments? This is mostly a minor QoL benefit, and maybe there would be challenges with storage. Another benefit would be that images would not vanish if their original source does.

EA Forum feature suggestion thread

Import from HTML/gdoc/word/whatever: One feature I miss from the old forum was the ability to submit HTML directly. This allowed one to write the post in google docs or similar (with tables, footnotes, sub/superscript, special characters, etc.), export it as HTML, paste into the old editor, and it was (with some tweaks) good to go.

This is how I posted my epistemic modesty piece (which has a table which survived the migration, although the footnote links no longer work). In contrast, when cross-posting it to LW2, I needed the kind help of a moderator - and even they needed to make some adjustments (e.g. 'writing out' the table).

Given such a feature was available before, hopefully it can be done again. It would be particularly valuable for the EA forum as:

  • A fair proportion of posts here are longer documents which benefit from the features available in things like word or gdocs. (But typically less mathematics than LW, so the nifty LaTeX editor finds less value here than there).
  • The current editor has much less functionality than word/gdocs, and catching up 'most of the way' seems very labour intensive and could take a while.
  • Most users are more familiar with gdocs/word than editor/markdown/latex (i.e. although I can add special characters with the LaTeX editor and some googling, I'm more familiar with doing this in gdocs - and I guess folks who have less experience with LaTeX or using a command line would find this difference greater).
  • Most users are probably drafting longer posts on google docs anyway.
  • Clunkily re-typesetting long documents in the forum editor manually (e.g. tables as image files) poses a barrier to entry, and so encourages linking rather than posting, with (I guess?) less engagement.

A direct 'import from gdoc/word/etc.' would be even better, but an HTML import function alone (given the prevalence of software which has both word processing and HTML export 'sorted') would solve a lot of these problems at a stroke.

EA Forum feature suggestion thread

Footnote support in the 'standard' editor: For folks who aren't fluent in markdown (like me), the current process is switching the editor back and forth to 'markdown mode' to add these footnotes, which I find pretty cumbersome.[1]

[1] So much so I lazily default to doing it with plain text.

Examples of people who didn't get into EA in the past but made it after a few years

I applied for a research role at GWWC a few years ago (?2015 or so), and wasn't selected. I now do research at FHI.

In the interim I worked as a public health doctor. Although I think this helped me 'improve' in a variety of respects, 'levelling up for an EA research role' wasn't the purpose I had in mind: I was expecting to continue as a PH doctor rather than 'switching across' to EA research in the future; even if I had been offered the role at GWWC, I'm not sure whether I would have taken it.

There's a couple of points I'd want to emphasise.

1. Per Khorton, I think most of the most valuable roles (certainly in my 'field' but I suspect in many others, especially the more applied/concrete) will not be at 'avowedly EA organisations'. Thus, depending on what contributions you want to make, 'EA employment' may not be the best thing to aim for.

2. Pragmatically, 'avowedly EA organisation roles' (especially in research) tend to be oversubscribed and highly competitive. Thus (notwithstanding the above), even if this is one's primary target, it seems wise to have a career plan which does not rely on securing such a role (or at least to have a backup).

3. Although there are recognised ways one can build 'EA street cred' (or whatever), it's not clear these forms of 'EA career capital' are best even for employment at avowedly EA organisations. I'd guess my current role owes more to (e.g.) my medical and public health background than it does to my forum oeuvre (such as it is).

Why not give 90%?

Part of the story, on a consequentialising-virtue account, is that a desire for luxury is typically amenable to being changed in general, if not in Agape's case in particular. Thus her attitude of regret, rather than shrugging her shoulders, typically makes things go better - if not for her, then for third parties who have a shot at improving this aspect of themselves.

I think most non-consequentialist views (including ones I'm personally sympathetic to) would fuzzily circumscribe character traits where moral blameworthiness can apply even if they are incorrigible. To pick two extremes: if Agape was born blind, and this substantially impeded her from doing as much good as she would like, the commonsense view could sympathise with her regret, but insist she really has 'nothing to be sorry about'; yet if Agape couldn't help being a vicious racist, and this substantially impeded her from helping others (say, because the beneficiaries are members of racial groups she despises), this is a character-staining fault Agape should at least feel bad about even if being otherwise is beyond her - plausibly, it would recommend she make strenuous efforts to change even if both she and others knew for sure all such attempts are futile.

Why not give 90%?

Nice one. Apologies for once again offering my 'c-minor mood' key variation: Although I agree with the policy upshot, 'obligatory, demanding effective altruism' does have some disquieting consequences for agents following this policy in terms of their moral self-evaluation.

As you say, Agape does the right thing if she realises (similar to prof procrastinate) that although, in theory, she could give 90% (or whatever) of her income/effort to help others, in practice she knows this isn't going to work out, and so given she wants to do the most good, she should opt for doing somewhat less (10% or whatever), as she foresees being able to sustain this.

Yet the underlying reason for this is a feature of her character which should be the subject of great moral regret. Bluntly: she likes her luxuries so much that she can't abide being without them, despite being aware (inter alia) that a) many people have no choice but to go without the luxuries she licenses herself to enjoy; b) said self-provision implies grave costs to those in great need if (per impossibile) she could give more; c) her competing 'need' doesn't have great non-consequentialist defences (cf. if she was giving 10% rather than 90% due to looking after members of her family); d) there's probably not a reasonable story of desert for why she is in this fortunate position in the first place; e) she is aware of other people, similarly situated to her, who nonetheless do manage to do without similar luxuries and give more of themselves to help others.

This seems distinct from other prudential limitations a wise person should attend to. Agape, when making sure she gets enough sleep, may in some sense 'regret' she has to sleep for several hours each day. Yet it is wise for Agape to sleep enough, and needing to sleep (even if she needs to sleep more than others) is not a blameworthy trait. It is also wise for Agape to give less in the OP given her disposition of, essentially, "I know I won't keep giving to charity unless I also have a sports car". But even if Agape can no more help this than she can help needing to sleep, this trait is blameworthy.

Agape is not alone in having blameworthy features of her character - I, for one, have many; moral saintliness is rare, and most readers probably could do more to make the world better were they better people. 'Obligatory, demanding effective altruism' would also make recommendations against responses to this fact which are counterproductive (e.g. excessive self-flagellation, scrupulosity). I'd agree, but want to say slightly more about the appropriate attitude as well as the right action - something along the lines of non-destructive and non-aggrandising regret.[1] I often feel EAs tend to err in the direction of being estranged from their own virtue; but they should also try to avoid being too complaisant to their own vice.


[1] Cf. Kierkegaard, Sickness unto Death

Either in confused obscurity about oneself and one’s significance, or with a trace of hypocrisy, or by the help of cunning and sophistry which is present in all despair, despair over sin is not indisposed to bestow upon itself the appearance of something good. So it is supposed to be an expression for a deep nature which thus takes its sin so much to heart. I will adduce an example. When a man who has been addicted to one sin or another, but then for a long while has withstood temptation and conquered -- if he has a relapse and again succumbs to temptation, the dejection which ensues is by no means always sorrow over sin. It may be something else, for the matter of that it may be exasperation against providence, as if it were providence which had allowed him to fall into temptation, as if it ought not to have been so hard on him, since for a long while he had victoriously withstood temptation. But at any rate it is womanish [recte maudlin] without more ado to regard this sorrow as good, not to be in the least observant of the duplicity there is in all passionateness, which in turn has this ominous consequence that at times the passionate man understands afterwards, almost to the point of frenzy, that he has said exactly the opposite of that which he meant to say. Such a man asseverated with stronger and stronger expressions how much this relapse tortures and torments him, how it brings him to despair, "I can never forgive myself for it"; he says. And all this is supposed to be the expression for how much good there dwells within him, what a deep nature he is.

Thoughts on The Weapon of Openness

All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

No, I agree with these pro tanto costs of secrecy (and the others you mentioned before). But key to the argument is whether these problems inexorably get worse as time goes on. If so, then the benefits of secrecy inevitably have a sell-by date, and once the corrosive effects spread far enough one is better off 'cutting one's losses' - or never going down this path in the first place. If not, however, then secrecy could be a strategy worth persisting with, if its (~static) costs are outweighed by the benefits on an ongoing basis.

The proposed trend of 'getting steadily worse' isn't apparent to me. One can find many organisations which do secret technical work and have been around for decades (the NSA is one, most defence contractors another, (D)ARPA, etc.). A skim of what they were doing in (say) the 80s versus the 50s doesn't give an impression they got dramatically worse despite 30 years of secrecy's supposed corrosive impact. Naturally, the attribution is very murky (e.g. even if their performance remained okay, maybe secrecy had become much more corrosive but this was outweighed by countervailing factors like much larger investment; maybe they would have fared better under a 'more open' counterfactual), but the burden of dissecting out the 'being secret * time' interaction term and showing it is negative should be borne by the affirmative case.

EA Survey 2019 Series: Donation Data

Minor:

Like last year, we ran a full model with all interactions, and used backwards selection to select predictors.

Presuming backwards selection is stepwise elimination, this is not a great approach to model generation. See e.g. this from Frank Harrell: in essence, stepwise tends to be a recipe for overfitting, and thus the models it generates tend to have inflated goodness-of-fit measures (e.g. R2), overestimated coefficient estimates, and p-values which are very hard to interpret (given the implicit multiple testing in the prior 'steps'). These problems are compounded by generating a large number of new variables (all the interaction terms) for stepwise to play with.

Some improvements would be:

1. Select the variables by your judgement, and report that model. If you do any post-hoc additions (e.g. suspecting an interaction term), report these with the rider it is a post-hoc assessment.

2. Have a hold-out dataset to test your model (however you choose to generate it) against. (Cross-validation is an imperfect substitute).

3. Use ridge, lasso, elastic net, or other penalised approaches to variable selection (a rough sketch of 2 and 3 follows).
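For concreteness, a minimal sketch of what 2 and 3 could look like (in Python/scikit-learn; I don't know the survey team's actual pipeline, and the file and column names below are made up): fit a penalised regression with the penalty chosen by cross-validation on the training data only, then report fit on a held-out set rather than on the data used for selection.

```python
# Minimal sketch of points 2 and 3 above: penalised regression (elastic net)
# with a held-out test set. File and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import ElasticNetCV

df = pd.read_csv("ea_survey_2019.csv")                    # hypothetical file
X = df[["age", "income", "years_in_ea", "gwwc_member"]]   # hypothetical predictors
y = np.log1p(df["donation_usd"])                          # hypothetical outcome (log-transformed)

# Hold out 20% of respondents; the model never sees these during fitting or selection
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Elastic net: the penalty (and lasso/ridge mix) is chosen by cross-validation on the
# training data only, so variable selection is regularised rather than stepwise
model = make_pipeline(
    StandardScaler(),
    ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5),
)
model.fit(X_train, y_train)

# Goodness of fit on the held-out data is not inflated by the selection procedure
print("held-out R^2:", model.score(X_test, y_test))
```

The point of the split is simply that whatever selection procedure is used, the reported fit comes from data the procedure never touched.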

Thoughts on The Weapon of Openness

Thanks for this; both the original work and your commentary were an edifying read.

I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice. Consider something like psychiatric detention: folks are generally in favour of (e.g.) personal freedom, and we do not need to think very hard to see how overruling this norm 'for their own good' could go terribly wrong (nor look very far to see examples of just this). Yet these considerations do not tell us what the optimal policy should be relative to the status quo, still less how it should be applied to a particular case.

Although the relevant evidence can neither be fully observed nor fairly sampled, there's a fairly good prima facie case for some degree of secrecy not leading to disaster, and sometimes being beneficial. There's some wisdom-of-the-crowd account that secrecy is the default for some 'adversarial' research; it would be surprising if technological facts proved exceptions to the utility of strategic deception. Bodies that conduct 'secret by default' work have often been around for decades (and the states that house them centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.

Moreover, technological secrecy has had some eye-catching successes: the NSA likely discovered differential cryptanalysis years before it appeared in the open literature; discretion by early nuclear scientists (championed particularly by Szilard) on what to publish credibly gave the Manhattan project a decisive lead over rival programs. Openness can also have downsides - the one that springs to mind from my 'field' is that Al-Qaeda started exploring bioterrorism after learning of the United States expressing concern about the same.

Given what I said above, citing some favourable examples doesn't say much (although the nuclear weapon one may have proved hugely consequential). One account I am sympathetic to would be talking about differential (or optimal) disclosure: provide information in the manner which maximally advantages good actors over bad ones. This will recommend open broadcast in many cases: e.g. where there aren't really any bad actors, where the bad actors cannot take advantage of the information (or they know it already, so letting the good actors 'catch up'), where there aren't more selective channels, and so forth. But not always: there seem instances where, if possible, it would be better to preferentially disclose to good actors versus bad ones - and this requires some degree of something like secrecy.

Judging the overall first-order calculus, let alone weighing this against second-order concerns (such as those noted above), is fraught: although, for what it's worth, I think 'security service' norms tend closer to the mark than 'academic' ones. I understand cybersecurity faces similar challenges around vulnerability disclosure, as 'don't publish the bug until the vendor can push a fix' may not perform as well as one might naively hope: for example, 'white hats' postponing their discoveries hinders collective technological progress, and risks falling behind a 'black hat' community avidly trading tips and tricks. This consideration can also point the other way: the more able the 'white hats' are relative to their typically fragmented and incompetent adversaries, the greater the danger of their work 'giving bad people good ideas'. The FBI or whoever may prove much more adept at finding vulnerabilities terrorists could exploit than the terrorists themselves. They would be unwise to blog their red-teaming exercises.

Concerning the Recent 2019-Novel Coronavirus Outbreak

All of your examples seem much better than the index case I am arguing against. Commonsense morality attaches much less distaste to cases where those 'in peril' are not crisply identified (e.g. "how many will die in some pandemic in the future" is better than "how many will die in this particular outbreak", which is better than "will Alice, currently ill, live or die?"). It should also find bets on historical events (essentially) fine, as whatever good or ill is implicit in these has already occurred.

Of course, I agree that your examples would be construed as to some degree morbid. But my recommendation wasn't "refrain from betting on any question where we can show the topic is to some degree morbid" (after all, betting on the GDP of a given country could be construed this way, given its large downstream impacts on welfare). It was to refrain in those cases where it appears very distasteful and for which there's no sufficient justification. As it seems I'm not expressing this balancing consideration well, I'll belabour it.

#

Say, God forbid, one of my friend's children has a life-limiting disease. On its face, it seems tasteless for me to compose predictions at all on questions like "will they still be alive by Christmas?" Carefully scrutinising whether they will live or die seems to run counter to the service I should be providing as a supporter of my friend's family and someone with the child's best interests at heart. It goes without saying that opening a book on a question like this seems deplorable, and offering (and confirming) bets where I take the pessimistic side despicable.

Yet other people do have good reason for trying to compose an accurate prediction on survival or prognosis. The child's doctor may find themselves in the invidious position where they recognise that their duty to give my friend's family the best estimate they can runs at cross purposes to other moral imperatives that apply too. The commonsense/virtue-ethicsy hope would be that the doctor can strike the balance that best satisfies these cross purposes, so that otherwise callous thoughts and deeds are justified by their connection to providing important information to the family.

Yet any incremental information benefit isn't enough to justify anything of any degree of distastefulness. If the doctor opened a prediction market on a local children's hospice, I think (even if they were solely and sincerely motivated for good purposes, such as to provide families with in-expectation better prognostication now and the future) they have gravely missed the mark.

Of the options available, 'bringing money' into it generally looks more ghoulish the closer the connection is between 'something horrible happening' and 'payday!'. A mere prediction platform is better (although still probably the wrong side of the line unless we have specific evidence it will give a large benefit); paying people to make predictions on said platform (but for activity and aggregate accuracy rather than direct 'bet results') is also slightly better. "This family's loss (of their child) will be my gain (of some money)" is the sort of grotesque counterfactual good people would strenuously avoid being party to save for exceptionally good reason.

#

To repeat: it is the balance of these factors - which come in degrees - which determines the final evaluation. So, for example, I'm not against people forecasting the 'nCoV' question (indeed, I do as well), but the addition of money takes it the wrong side of the line (notwithstanding the laudable motivation behind the money being ridden on it). Likewise I'm happy for people to prop bet on some of your questions pretty freely, as they are somewhat less ghoulish, but not on the 'nCoV' one (or some even more extreme versions), etc. etc. etc.

I confess some irritation, because I think that whilst you and Oli are pressing arguments (sorry - "noticing confusion") re. there not being a crisp quality that obtains to the objectionable cases yet not the less objectionable ones (e.g. 'You say this question is "morbid" - but look here! here are some other questions which are qualitatively morbid too, and we shouldn't rule them all out'), you are in fact committed to some sort of balancing account.

I presume (hopefully?) you don't think 'child hospice sweepstakes' would be a good idea for someone to try (even if it may improve our calibration! and it would give useful information re. paediatric prognostication which could be of value to the wider world! and capitalism is built on accurate price signals! etc. etc.). As you're not biting the bullet on these reductios (nor bmg's, nor others'), you implicitly accept that all the considerations about why betting is a good thing are pro tanto, and can be overcome at some extreme limit of ghoulishness etc.

How to weigh these considerations is up for grabs. Yet picking each individual feature of ghoulishness in turn and showing it, alone, is not enough to warrant refraining from highly ghoulish bets (where the true case against would be composed of other factors alongside the one being shown to be individually insufficient) seems an exercise in the fallacy of division.

#

I also note that all the (few) prop bets I recall in EA up until now (including one I made with you) weren't morbid. Which suggests you wouldn't appreciably reduce the track record of prop bets which show (as Oli sees it) admirable EA virtues of skin in the game.
