Researcher (on bio) at FHI
All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?
No, I agree with these pro tanto costs of secrecy (and the others you mentioned before). But key to the argument is whether these problems inexorably get worse as time goes on. If so, then the benefits of secrecy inevitably have a sell-by date, and once the corrosive effects spread far enough one is better off 'cutting one's losses' - or never going down this path in the first place. If not, however, then secrecy could be a strategy worth persisting with, if its (~static) costs are outweighed by the benefits on an ongoing basis.
The proposed trend of 'getting steadily worse' isn't apparent to me. One can find many organisations which typically do secret technical work and have been around for decades (the NSA is one, most defence contractors are others, (D)ARPA, etc.). A skim of what they were doing in (say) the 80s versus the 50s doesn't give the impression they got dramatically worse despite 30 years of secrecy's supposed corrosive impact. Naturally, the attribution is very murky (e.g. even if their performance remained okay, maybe secrecy had grown much more corrosive but this was outweighed by countervailing factors like much larger investment; maybe they would have fared better under a 'more open' counterfactual), but the burden of dissecting out the 'being secret × time' interaction term and showing it is negative should be borne by the affirmative case.
Like last year, we ran a full model with all interactions, and used backwards selection to select predictors.
Presuming backwards selection is stepwise elimination, this is not a great approach to model generation. See e.g. this from Frank Harrell: in essence, stepwise tends to be a recipe for overfitting, and thus the models it generates tend to have inflated goodness-of-fit measures (e.g. R2), overestimated coefficient estimates, and p values that are very hard to interpret (given the implicit multiple testing in the prior 'steps'). These problems are compounded by generating a large number of new variables (all the interaction terms) for stepwise to play with.
Some improvements would be:
1. Select the variables by your judgement, and report that model. If you do any post-hoc additions (e.g. on suspecting an interaction term), report these with the rider that they are post-hoc assessments.
2. Have a hold-out dataset to test your model (however you choose to generate it) against. (Cross-validation is an imperfect substitute).
3. Use ridge, lasso, elastic net, or another penalised approach to variable selection.
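To illustrate the penalised alternative in point 3, here is a minimal sketch of ridge regression on made-up toy data (not the dataset under discussion), using the closed-form normal equations for two predictors. Unlike stepwise elimination, the penalty λ shrinks coefficients continuously towards zero rather than making in/out decisions that implicitly multiple-test:

```python
import math
import random

def ridge_fit(xs, ys, lam):
    """Ridge estimate beta = (X'X + lam*I)^(-1) X'y for two predictors,
    using the analytic inverse of the 2x2 penalised Gram matrix."""
    a = sum(x1 * x1 for x1, _ in xs)                  # X'X entries
    b = sum(x1 * x2 for x1, x2 in xs)
    d = sum(x2 * x2 for _, x2 in xs)
    c1 = sum(x1 * y for (x1, _), y in zip(xs, ys))    # X'y entries
    c2 = sum(x2 * y for (_, x2), y in zip(xs, ys))
    det = (a + lam) * (d + lam) - b * b
    beta1 = ((d + lam) * c1 - b * c2) / det
    beta2 = (-b * c1 + (a + lam) * c2) / det
    return beta1, beta2

# Illustrative data: y = 2*x1 + 0.5*x2 + noise
random.seed(0)
xs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
ys = [2.0 * x1 + 0.5 * x2 + random.gauss(0, 0.5) for x1, x2 in xs]

# As the penalty grows, the coefficient vector shrinks towards zero:
for lam in (0.0, 10.0, 100.0):
    b1, b2 = ridge_fit(xs, ys, lam)
    print(f"lam={lam:6.1f}  beta=({b1:.3f}, {b2:.3f})")
```

(λ = 0 recovers ordinary least squares; in practice one would pick λ by cross-validation, e.g. with scikit-learn's RidgeCV/LassoCV, rather than by eye as here.)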
Thanks for this - both the original work and your commentary were an edifying read.
I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice. Consider something like psychiatric detention: folks are generally in favour of (e.g.) personal freedom, and we do not need to think very hard to see how overruling this norm 'for their own good' could go terribly wrong (nor look very far to see examples of just this). Yet these considerations do not tell us what the optimal policy should be relative to the status quo, still less how it should be applied to a particular case.
Although the relevant evidence can neither be fully observed nor fairly sampled, there's a fairly good prima facie case for some degree of secrecy not leading to disaster, and sometimes being beneficial. There's some wisdom-of-the-crowd account that secrecy is the default for some 'adversarial' research; it would be surprising if technological facts proved exceptions to the utility of strategic deception. Bodies that conduct 'secret by default' work have often been around for decades (and the states that house them for centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.
Moreover technological secrecy has had some eye-catching successes: the NSA likely discovered differential cryptanalysis years before it appeared in the open literature; discretion by early nuclear scientists (championed particularly by Szilard) on what to publish credibly gave the Manhattan project a decisive lead over rival programs. Openness can also have downsides - the one that springs to mind from my 'field' is that Al-Qaeda started exploring bioterrorism after learning the United States had expressed concern about the same.
Given what I said above, citing some favourable examples doesn't say much (although the nuclear weapon one may have proved hugely consequential). One account I am sympathetic to would be talking about differential (or optimal) disclosure: provide information in the manner which maximally advantages good actors over bad ones. This will recommend open broadcast in many cases: e.g. where there aren't really any bad actors, where the bad actors cannot take advantage of the information (or they know it already, so letting the good actors 'catch up'), where there aren't more selective channels, and so forth. But not always: there seem instances where, if possible, it would be better to preferentially disclose to good actors versus bad ones - and this requires some degree of something like secrecy.
Judging the overall first-order calculus, leave alone weighing this against second-order concerns (such as those noted above), is fraught: although, for what it's worth, I think 'security service' norms tend closer to the mark than 'academic' ones. I understand cybersecurity faces similar challenges around vulnerability disclosure, as 'don't publish the bug until the vendor can push a fix' may not perform as well as one might naively hope: for example, 'white hats' postponing their discoveries hinders collective technological progress, and risks falling behind a 'black hat' community avidly trading tips and tricks. This consideration can also point the other way: the more the 'white hats' outclass their typically fragmented and incompetent adversaries, the greater the danger of their work 'giving bad people good ideas'. The FBI or whoever may prove much more adept at finding vulnerabilities terrorists could exploit than terrorists themselves. They would be unwise to blog their red-teaming exercises.
All of your examples seem much better than the index case I am arguing against. Commonsense morality attaches much less distaste to cases where those 'in peril' are not crisply identified (e.g. "how many will die in some pandemic in the future" is better than "how many will die in this particular outbreak", which is better than "will Alice, currently ill, live or die?"). It should also find bets on historical events (essentially) fine, as whatever good or ill is implicit in these has already occurred.
Of course, I agree that your examples could be construed as to some degree morbid. But my recommendation wasn't "refrain from betting on any question where we can show the topic is to some degree morbid" (after all, betting on the GDP of a given country could be construed this way, given its large downstream impacts on welfare). It was to refrain in those cases where it appears very distasteful and for which there's no sufficient justification. As it seems I'm not expressing this balancing consideration well, I'll belabour it.
Say, God forbid, one of my friend's children has a life-limiting disease. On its face, it seems tasteless for me to compose predictions at all on questions like, "will they still be alive by Christmas?" Carefully scrutinising whether they will live or die seems to run counter to the service I should be providing as a supporter of my friend's family and someone with the child's best interests at heart. It goes without saying that opening a book on a question like this seems deplorable, and offering (and confirming) bets where I take the pessimistic side despicable.
Yet other people do have good reason to try to compose an accurate prediction of survival or prognosis. The child's doctor may find themselves in the invidious position of recognising that their duty to give my friend's family the best estimate they can runs at cross purposes to other moral imperatives that apply too. The commonsense/virtue-ethicsy hope would be that the doctor can strike the balance which best satisfies these cross-purposes, so that otherwise callous thoughts and deeds are justified by their connection to providing important information to the family.
Yet an incremental information benefit isn't enough to justify anything of any degree of distastefulness. If the doctor opened a prediction market on a local children's hospice, I think (even if they were solely and sincerely motivated by good purposes, such as providing families with in-expectation better prognostication now and in the future) they would have gravely missed the mark.
Of the options available, 'bringing money' into it generally looks more ghoulish the closer the connection between 'something horrible happening' and 'payday!'. A mere prediction platform is better (although still probably the wrong side of the line unless we have specific evidence it will give a large benefit); paying people to make predictions on said platform is slightly better again, provided one pays for activity and aggregate accuracy rather than direct 'bet results'. "This family's loss (of their child) will be my gain (of some money)" is the sort of grotesque counterfactual good people would strenuously avoid being party to, save for exceptionally good reason.
To repeat: it is the balance of these factors - which come in degrees - that determines the final evaluation. So, for example, I'm not against people forecasting the 'nCoV' question (indeed, I do as well), but the addition of money takes it the wrong side of the line (notwithstanding the money being staked on it for laudable motives). Likewise I'm happy for people to prop bet pretty freely on some of your questions, as they are somewhat less ghoulish, but not on the 'nCoV' question (or some even more extreme versions), etc. etc. etc.
I confess some irritation, because I think that whilst you and Oli are pressing arguments (sorry - "noticing confusion") re. there being no crisp quality that attaches to the objectionable questions yet not the less objectionable ones (e.g. 'You say this question is 'morbid' - but look here! here are some other questions which are qualitatively morbid too, and we shouldn't rule them all out'), you are in fact committed to some sort of balancing account yourselves.
I presume (hopefully?) you don't think a 'child hospice sweepstake' would be a good idea for someone to try (even if it may improve our calibration! and it would give useful information re. paediatric prognostication which could be of value to the wider world! and capitalism is built on accurate price signals! etc. etc.). As you're not biting the bullet on these reductios (nor bmg's, nor others), you implicitly accept that all the considerations for why betting is a good thing are pro tanto, and can be overcome at some extreme limit of ghoulishness etc.
How to weigh these considerations is up for grabs. Yet picking each individual feature of ghoulishness in turn and showing that it, alone, is not enough to warrant refraining from highly ghoulish bets (when the true case against is composed of other factors alongside the one being shown to be individually insufficient) seems an exercise in the fallacy of division.
I also note that all the (few) prop bets I recall in EA up until now (including one I made with you) weren't morbid. Which suggests you wouldn't appreciably reduce the track record of prop bets which show (as Oli sees it) admirable EA virtues of skin in the game.
Both of these are environments in which people participate in something very similar to betting. In the first case they are competing pretty directly for internet points, and in the second they are competing for monetary prices.
Those two institutions strike me as great examples of the benefit of having a culture of betting like this, and also strike me as similarly likely to create offense in others.
I'm extremely confident a lot more opprobrium attaches to bets where the payoff is in money versus those where the payoff is in internet points etc. As you note, I agree certain forecasting questions (even without cash) provoke distaste: if those same questions were on a prediction market the reaction would be worse. (There's also likely an issue of the money calling one's motivation into question - if epi types are trying to predict a death toll and not getting money for their efforts, it seems they have a laudable purpose in mind; less so if they are riding money on it.)
I agree with you that were there only the occasional one-off bet on the forum that was being critiqued here, the epistemic cost would be minor. But I am confident that a community that had a relationship to betting that was more analogous to how Chi's relationship to betting appears to be, we would have never actually built the Metaculus prediction platform.
This looks like a stretch to me. Chi can speak for themselves, but their remarks don't seem to entail a 'relationship to betting' writ large, but an uneasy relationship to morbid topics in particular. Thus the policy I take them to be recommending (which I also endorse) of refraining from making 'morbid' or 'tasteless' bets (but feeling free to prop bet to one's heart's desire on other topics) seems to have very minor epistemic costs, rather than threatening some transformation of epistemic culture which would mean people stop caring about predictions.
For similar reasons, this also seems relatively costless in terms of other perceptions: refraining from 'morbid' topics for betting only excludes a small minority of questions one can bet upon, leaving plenty of opportunities to signal its virtuous characteristics re. taking ideas seriously whilst avoiding those which reflect poorly upon it.
I emphatically object to this position (and agree with Chi's). As best as I can tell, Chi's comment is more accurate and better argued than this critique, and so the relative karma between the two dismays me.
I think it is fairly obvious that 'betting on how many people are going to die' looks ghoulish to commonsense morality. I think the articulation of why this would be objectionable is only slightly less obvious: the party on the 'worse side' of the bet seems to be deliberately situating themselves to be rewarded as a consequence of the misery others suffer; there would also be suspicion about whether the person might try to contribute to the bad situation to secure a pay-off; and perhaps a sense that one belittles the moral gravity of the situation by using it for prop betting.
Thus I'm confident that if we ran some survey confronting the 'person on the street' with the idea of people making this sort of bet, they would not think "wow, isn't it great they're willing to put their own money behind their convictions", but something much more adverse, along the lines of "holding a sweepstake on how many die".
(I can't find an easy instrument for this beyond asking people/anecdata: the couple of non-EA people I've run this by have reacted either negatively or very negatively, and I know comments on forecasting questions which boil down to "will public figure X die before date Y" register distaste. If there is a more objective assessment accessible, I'd offer odds at around 4:1 on the ratio of positive:negative sentiment being <1.)
Of course, I think such an initial 'commonsense' impression would be very unfair to Sean or Justin: I'm confident they engaged in this exercise only out of a sincere (and laudable) desire to try to better understand an important topic. Nonetheless (and to hold them to much higher standards than my own behaviour) one may suggest it is a lapse of practical wisdom if, whilst acting to fulfil one laudable motivation, one does not temper this with the other moral concerns one should also be mindful of.
One needs to weigh the 'epistemic' benefits of betting (including higher order terms) against the 'tasteless' complaint (both in moral-pluralism case of it possibly being bad, but also the more prudential case of it looking bad to third parties). If the epistemic benefits were great enough, we should reconcile ourselves to the costs of sometimes acting tastelessly (triage is distasteful too) or third parties (reasonably, if mistakenly) thinking less of us.
Yet the epistemic benefits on the table here (especially on the margin of 'feel free to bet, save on commonsensically ghoulish topics') are extremely slim. The rate of betting in EA/rationalist land on any question is very low, so the signal you get from small-n bets is trivial. There are other options, especially for this question, which give you much more signal per unit activity - given that, unlike the stock market, people are interested in the answer for other than pecuniary motivations: both Metaculus and the Johns Hopkins prediction platform have relevant questions which are much more active, and where people are offering more information.
Given the marginal benefits are so slim, they are easily outweighed by the costs Chi notes. And they are.
Thanks. I think it would be better, given you are recommending joining and remaining in the party, if the 'price' weren't quoted as a single month of membership.
One estimate could be the rate of leadership transitions. There have been ~17 in the Labour party's last century (ignoring acting leaders). Rounding up, this gives an expected vote for every 5 years of membership, and so a price of ~£4.38 × 60 ≈ £263 per leadership contest vote. This looks a much less attractive value proposition to me.
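For what it's worth, the arithmetic behind that per-vote price can be sketched in a few lines (a throwaway check using the figures quoted above):

```python
transitions = 17      # Labour leadership transitions in roughly a century (per above)
years = 100
monthly_fee = 4.38    # GBP, the quoted monthly membership price

years_per_contest = years / transitions       # ~5.9, rounded to 5 above
months_held = 5 * 12                          # membership held ~5 years per contest
cost_per_vote = monthly_fee * months_held
print(f"one contest every ~{years_per_contest:.1f} years; ~£{cost_per_vote:.2f} per vote")
```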
Forgive me, but your post didn't exactly avoid any doubt, given:
1) The recommendation in the second paragraph is addressed to everyone regardless of political sympathy:
We believe that, if you're a UK citizen or have lived in the UK for the last year, you should pay £4.38 to register to vote in the current Labour leadership, so you can help decide 1 of the 2 next candidates for Prime Minister. (My emphasis)
2) Your OP itself gives a few reasons for why those "indifferent or hostile to Labour Party politics" would want to be part of the selection. As you say:
For £4.38, you have a reasonable chance of determining the next candidate PM, and therefore having an impact in the order of billions of pounds. (Your emphasis)
Even a committed Conservative should have preferences on "conditional on Labour winning the next GE, which Labour MP would I prefer as PM?" (plus the more Machiavellian "who is the candidate I'd most want leading Labour, given I want them to lose to the Conservatives?").
3) Although the post doesn't advocate joining just to cancel after voting, noting that one can 'cancel any time', alongside the main motivation being offered taking advantage of a time-limited opportunity for impact (and alongside the quoted cost being a single month of membership) makes this strategy not a dazzling feat of implicature (indeed, it would be the EV-maximising option taking the OP's argument at face value).
Had the post merely used the oncoming selection in Labour to note there is an argument for political party participation similar to voting (i.e. getting a say in the handful of leading political figures); clearly stressed this applied across the political spectrum (and so was more a recommendation that EAs consider this reason to join the party they are politically sympathetic to, in expectation of voting in future leadership contests, rather than the one which happens to have a leadership contest on now); and strenuously disclaimed any suggestion of hit-and-run entryism (noting the defection from various norms with existing members of the party, membership mechanisms being somewhat based on trust that folks aren't going to 'game' them, etc.), I would have no complaints. But it didn't (although I hope it will), so here we are.
I'm not a huge fan of schemes like this, as it seems the path to impact relies upon strategic defection of various implicit norms.
Whether or not political party membership asks one to make some sort of political declaration, the spirit of membership is surely meant for those who sincerely intend to support the politics of the party in question.
I don't think Labour members (perhaps authors of this post excluded) or leadership would want to sell a vote for their future leader at £4.38 each to anyone willing to fill out an application form - especially to those indifferent or hostile to Labour Party politics. That we can buy one anyway (i.e. sign up then leave a month later) suggests we do so by taking advantage of their good faith: that folks signing up aren't just doing it to get a vote on the leadership election, that they intend to stick around for a while, that they'll generally vote for and support Labour, etc.
If this 'hit and run entryism' became a common tactic (e.g. suppose 'tactical tories' pretended to defect from the Conservatives this month to vote for the Labour candidate the Conservatives wanted to face in the next election) we would see parties act to close this vulnerability (I think the Conservatives did something like this in terms of restricting eligible members to those joining before a certain date for their most recent leadership contest).
I'd also guess that ongoing attempts to 'game' this sort of thing are bad for the broader political climate, as (as best as I can tell) a lot of it runs on trust rather than being carefully proofed against canny selectoral tactics (e.g. although all parties state you shouldn't be a member of more than one at a time, I'm guessing it isn't that hard to 'get away with it'). Perhaps leader selection is too important to justly leave to party members alone (perhaps there should be 'open primaries'), but 'hit and run entryism' seems very unlikely to drive us towards this - merely towards greater barriers to entry for party political participation, and lingering suspicion and mistrust.
FWIW I have found it more costly - I think this almost has to be true, as $X given to charity is $X I cannot put towards savings, mortgages, etc. - but, owed to fortunate circumstances, not very burdensome to deal with. I expect others will have better insight to offer.
Given your worries, an alternative to the GWWC pledge which might be worth contemplating is the one at The Life You Can Save. Their recommended proportion varies by income (i.e. a higher % with larger incomes), and is typically smaller than GWWC across most income bands (on their calculator, you only give 10% at ~$500 000 USD, and <2% up to ~$100 000).
Another suggestion I would make is it might be worth waiting for a while longer than "Once I have a job and I'm financially secure" before making a decision like this. It sounds like some of your uncertainties may become clearer with time (e.g. once you enter your career you may get a clearer sense of what your earning trajectory is going to look like, developments in your personal circumstances may steer you towards or away from buying a house). Further, 'experimenting' with giving different proportions may also give useful information.
How long to wait figuring things out doesn't have an easy answer: most decisions can be improved by waiting to gather more information, but most also shouldn't be 'put off' indefinitely. That said, commonsense advice would be to give oneself plenty of time when weighing up whether to make important lifelong commitments. Personally speaking, I'm glad I joined GWWC (when I was still a student), and I think doing so was the right decision, but - although I didn't rush in on a whim - I think a wiser version of me would have taken greater care than I in fact did.