David T
Sure, just because the market was at 60% doesn't mean that nobody participating in it had 90% confidence, though in a thin market that implies those people were either cash constrained or passing up easy, low-risk short-term profit. My bigger question is why no psephologists seemed able to come up with an explanation of why the sources of massive polling uncertainty were actually not sources of massive uncertainty. One would think they're the people most likely to have a knowledge advantage as well as good prediction skill: they don't have to risk savings, and their non-monetary returns are skewed in their favour (everyone remembers when they make a right outlying call; few people remember the wrong outliers).
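To make the "missing out on easy profit" point concrete with purely illustrative numbers (my own, not anyone's actual position): a trader who genuinely held 90% confidence while the market priced the outcome at 60 cents per $1 share would face a large expected return, and even a conservative Kelly calculation would tell them to stake a substantial fraction of their bankroll:

```python
# Illustrative only: a hypothetical trader with 90% confidence in an
# outcome that a prediction market prices at $0.60 per $1 share.

def expected_profit_per_dollar(belief: float, price: float) -> float:
    """Expected net profit per $1 staked on 'yes' shares."""
    shares = 1 / price          # $1 buys 1/price shares
    return belief * shares - 1  # each share pays $1 if correct

def kelly_fraction(belief: float, price: float) -> float:
    """Kelly-optimal fraction of bankroll for a binary payout."""
    b = (1 - price) / price     # net odds received per $1 staked
    return (belief * (b + 1) - 1) / b

ev = expected_profit_per_dollar(0.9, 0.6)
f = kelly_fraction(0.9, 0.6)
print(f"Expected profit per $1 staked: ${ev:.2f}")  # $0.50
print(f"Kelly stake: {f:.0%} of bankroll")          # 75%
```

So anyone holding that belief and *not* trading on it heavily either lacked the cash or didn't really hold the belief, which is the dilemma posed above.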

And the focus of my argument was that in order to rationally have 65-90% confidence in an outcome when the polls in all the key states were within the margin of error and largely dependent on turnout and on how "undecideds" voted, people would have to have some relevant knowledge of systematic error in polling, turnout or "undecided" behaviour which either eliminated all the sources of uncertainty or justified their belief that everyone else's polls were skewed[1]. I don't see any particular reason to believe the means to obtain that knowledge existed and was used when you can't tell me what it might look like, never mind how a small number of apparently resource-poor people obtained it...

 

  1. ^

    The fact that most polls correctly pointed to a Trump electoral college victory but also had a sufficiently wide margin of error to call a couple of individual states (and the popular vote) wrongly is consistent with "overcautious pollsters are exaggerating the margin of error" and "pollsters don't want Trump to look like he'll win" not being well-justified reasons to doubt their validity.

Hi Adam

Thanks for that extra info. On Egger, I agree that many of the effects found there don't seem to be Hawthorne effects: the bit that seems most credible to me is that non-recipients report working more hours on average, which seems consistent with a theory that they are getting real benefit from a cash injection into a cash-constrained village. I also agree with your conclusion that they're unlikely to overstate their income/consumption, though there might be a stronger incentive for them to make their consumption choices seem sensible[1].

I have questions about how sustainable the gains are and who the losers are,[2] but also think the Egger characterisation of villages as small open economies with under-utilised resources (so the real welfare gains vastly outweigh local inflation) sounds plausible.

The roof methodology is the potential Hawthorne effect with more cause for concern: if potential recipients figure it out, the result might be certain households deferring a sensible investment in a tin roof in the hope that this makes them eligible for future benefactor gifts. From a total welfare perspective those marginal poor families might be able to use their accumulated savings in other ways [nearly] as positive as the roof investment, but there are still arguments about perverse incentives.

  1. ^

    I realise people in villages in developing countries generally don't have many opportunities to waste money on temptation goods benefactors would disapprove of, but at the margin...

  2. ^

    An obvious potential example, given GD's approach, would be future losses to the thatching industry (likely also composed of local poor people). Depending on how easily they can redeploy skills and resources, their losses from no longer being paid for rethatching might be comparable in magnitude to the welfare gains of families finally able to afford a roof that lasts.

I'm in agreement with the point that the aleatoric uncertainty was a lot lower than the epistemic uncertainty, but we know how prediction error from polls arises: a significant subset of the population, often systematically skewed in favour of one candidate, refuses to answer them; other people make late decisions to vote or not to vote; and it's different every time. There doesn't seem to be an obvious way to solve the epistemic uncertainty around that with more money or less risk aversion, still less to reach 90% certainty with polls within the margin of error (which turned out to be reasonably accurate).
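As a rough sketch of how little "within the margin of error" constrains things: even the textbook sampling-only margin for a simple random sample (real polls add harder-to-quantify non-response and turnout error on top, correlated across states) is around three points either way at a typical sample size:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% sampling margin of error for a simple random sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate on 50% in a poll of 1,000 respondents:
moe = margin_of_error(0.5, 1000)
print(f"+/- {moe:.1%}")  # roughly +/- 3.1 points
```

A 50-47 lead sits entirely inside that band before you even get to the systematic errors discussed above, which is why those errors, not sample size, are the crux.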

Market participants don't have to worry about blowback from being wrong as much as Silver does, but they also didn't think Trump was massively underpriced [before Theo, who we know had a theory but less relevant knowledge, stepped in, and arguably even afterwards if you think 90% certainty was feasible]. And for all that pollsters get accused of herding, some weren't afraid to share outlier polls; it's just that the outliers went in both directions (with Selzer, one of the few with enough of a track record that her past outlier successes can't instantly be written off as pure survivorship bias, publishing the worst of the lot late in the day). So I think the suggestion that there was some way to get to 65-90% certainty, which apparently nobody was willing either to make with substantial evidence or to cash in on to any significant extent, is a pretty extraordinary claim...

I think there's something quite powerful about not going all in on a single data point: Musk backed Hillary Clinton in 2016, and when he did endorse the winning side in 2020 he spent most of the next year publicly complaining about the [predictable] COVID policy outcomes. The base rate for Musk specifically, and politically-driven billionaires in general, picking winners in elections isn't better than pollsters', or even notably better than random chance.

Do you honestly believe that Harris (or Biden) would have won if Musk didn't buy Twitter or spend so much time on it?

I think he's referring to the paragraph lower down which says 

even if we took the 46% all-cause mortality effect at face value, our cost effectiveness estimate in Kenya would only move from 2.5x to 2.8x

On (1), I tend to agree and don't think a lot of the CEA on health even takes into account the cost of treatment purchases/copays and clinic transport (something I'm sure you have great data on!) which is not insignificant if your annual cash income is <$200 and your kid potentially gets infected multiple times per annum. Some CEA doesn't even include morbidity. But I don't think there's much scope for comparable medium term multiplier effects on a neighbourhood from typical health programs.

I think (2) is an important point. I've read studies before (not the Egger one, which appears to have been the biggest factor here) which claim the remarkable finding that cash transfers had a positive impact on things like domestic violence in neighbouring households that didn't receive them, and thought to myself: have you not considered the possibility that people have noticed that the outsiders with clipboards asking personal questions seem to be associated in some way with their neighbours getting unexpected windfalls, and have started to speculate about what sort of answers the NGOs are looking for...

It's an interestingly large upward revision by GiveWell, though, especially since they seem to have heavily discounted some of the study results.

A quick look at Egger suggests their main novelty was considering positive spillover effects on other villages up to 2km away (whereas other studies didn't, or may even have used those neighbouring villages as controls, with increases in their income reducing the estimated cash transfer effect size). This seems plausible, though it also seems plausible that they're bundling other local effects in with it. They do seem to have credible data that non-recipients are earning higher incomes because recipients pay them to do more labour, which is the sort of thing these programmes hope to achieve.

My point isn't that the odds were definitely 60/40 (or in any particular range other than "not a dead cert for Trump and his allies to stay in power for as long as anything matters").

My point was that to gloss Musk's political activity over the last four years as "genius" in a prediction market sense (something even he isn't claiming), you've got to conclude that the most cost-effective way a billionaire entrepreneur and major government contractor could get valuable ROI out of an easily-flattered president with overlapping interests was by buying Twitter and embedding himself in largely irrelevant but vaguely aligned culture war bullshit. This seems... unlikely, and it seems even more unlikely that people wouldn't have been upset enough with the economy to vote Trump without Elon's input.

Otherwise it looks like Elon went on a political opinion binge, and this four year cycle it came up with his cards and not the other lot's cards. Many other people backed Trump in ways which cost them less and will be easier to reconcile with future administrations, and many others will successfully curry favour without even having backed him.

Put another way, did you consider the donations of SBF to be genius last time round? 

The incapacitation effect incurs the costs of full-time incarceration: in El Salvador, that now means imprisoning over 2% of the working-age population.

It's hardly surprising that a charity focused on cost-effective interventions, which (rightly or wrongly) claims other interventions can cost as little as $2 per crime averted, doesn't consider this the benchmark for success.

This feels like it could easily be counterproductive. 

A chatbot's "relatable backstory" is generative fiction, and the default "Trump supporter" or "liberal voter" is going to be a vector of the online commentary most strongly associated with Trumpiness or liberalism (which tends not to be the most nuanced...), with every single stereotyped talking point trotted out to contradict you. Yes, this can be tweaked, but the tweaking just tones it down or adds further stereotypes; it doesn't create an actual person.

Whereas the default person who doesn't agree with your politics is an actual human being, with actual life experience that has influenced their views, who probably doesn't hold those views all that strongly or agree with literally every argument cited in favour of $cause, who is probably capable of changing the subject and becoming likeable again, and hey, you might even be able to change their mind.

So if you're talking to the first option rather than the second, you're actually understanding less.

I don't think it helps matters for people to try to empathise with (say) a few tens of millions of people who voted for the other side - in many cases because they didn't really pay a lot of attention to politics and had one particularly big concern - by getting them to talk to a robot trained on the other side's talking points. If you just want to understand the talking points, I guess ChatGPT is a (heavily filtered for inoffensiveness) starting point, or there's a lot of political material with varying degrees of nuance already out there on the internet written by actual humans...
