Note: I think this applies much less, or even not at all, in domains where you're getting tight feedback on your models and have to take actions based on them which you're then evaluated on.

I think there's a trend in the effective altruist and rationality communities to be quite trusting of arguments about how social phenomena work when those arguments rest on intuitively appealing theoretical models and are supported by anecdotal or non-systematic observational evidence. The sorts of things I'm thinking about are:

 

  • The evaporative cooling model of communities
  • My friend's argument that community builders shouldn't spend [edit: most] of their time talking to people they consider less sharp than them, because it'll harm their epistemics
  • Current EA community building is selecting for uncritical people
  • Asking people explicitly if they're altruistic will just select for people who are good liars (from a person doing selection for admission to an EA thing)
  • The toxoplasma of rage
  • Max Tegmark’s model of nuclear war
  • John Wentworth’s post on takeoff speeds

 

I think this is a really bad epistemology for thinking about social phenomena. 

 

Here are some examples of arguments I could make that we know are wrong, but that seem reasonable based on reasoning some people find intuitive and on observational evidence:

 

  • Having a minimum wage will increase unemployment rates. Employers hire workers up until the point that the marginal revenue generated by each worker equals the marginal cost of hiring workers. If the wage workers have to be paid goes up then unemployment will go up because marginal productivity is diminishing in the number of workers. (A sketch of the textbook model behind this reasoning appears after this list.)

 

  • Increasing interest rates will increase inflation. Firms set their prices as a cost plus a markup, so if their costs increase because the price of loans goes up then firms will increase prices, which means that inflation goes up. My friend works as a handyman and he charges £150 for a day of work plus the price of materials. If the price of materials went up he'd charge more.

 

  • Letting people emigrate to rich countries from poor countries will increase crime in rich countries. The immigrants who are most likely to leave their home countries are those who have the least social ties and the worst employment outlooks in their home countries. This selects people who are more likely to be criminals, because criminals are likely to have bad job opportunities in their home countries and weak ties to their families. If we try and filter out criminals we end up selecting smart criminals who are good at hiding their misdeeds. If you look at areas with high crime rates they often have large foreign immigrant populations. [Edit - most people wouldn't find this selection argument intuitive, but I thought it was worth including because of how common selection-based arguments are in the EA and rationality communities. I'm also not taking aim at arguments that are intuitively obvious, but rather at arguments that those making them find intuitively appealing, even if they're counterintuitive in some way; i.e. some people think that adverse selection is a common and powerful force even though adverse selection is a counter-intuitive concept.]

 

  • Cash transfers increase poverty, or at least are unlikely to reduce it more than in-kind transfers or job training. We know that people in low-income countries often spend a large fraction of their incomes on tobacco and alcohol products. By giving these people cash they have more money to spend on tobacco and alcohol, meaning they're more likely to suffer from addiction problems that keep them in poverty. We also know that poverty selects for people who make poor financial decisions, so giving people cash gives them greater ability to take out bad loans because they have more collateral.

 

  • Opening up a country to immigration increases the unemployment of native-born workers. If there are more workers in a country then it becomes harder to find a job so unemployment goes up. 
     
  • Building more houses increases house prices. The value of housing is driven by agglomeration effects. When you build more housing agglomeration effects increase, increasing the value of the housing, thereby increasing house prices. House building also drives low-income people out of neighbourhoods. When you see new housing being built in big cities it's often expensive flats that low-income people won't be able to afford. Therefore, if you don't have restrictions on the ability to build housing, low-income people won't be able to live in cities anymore.
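(To spell out the textbook reasoning the minimum wage example above leans on, here is a minimal sketch of the standard competitive labour market model. The notation, functional-form assumptions, and the particular way of writing the unemployment gap are mine, added for illustration, not part of the original argument.)

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Minimal sketch of the textbook competitive labour-market argument.
% A price-taking firm with production function f(L), f''(L) < 0 (diminishing
% marginal product), chooses labour L to maximise profit at wage w and
% output price p:
\[
  \max_{L} \; \pi(L) = p\, f(L) - wL
  \qquad\Longrightarrow\qquad
  p\, f'(L^{*}) = w .
\]
% Because f' is decreasing, labour demand L^d(w) falls as w rises. A binding
% minimum wage \underline{w} above the market-clearing wage w^* then implies
% unemployment equal to the gap between labour supplied and labour demanded:
\[
  U(\underline{w}) \;=\; L^{s}(\underline{w}) - L^{d}(\underline{w}) \;>\; 0 .
\]
\end{document}
```

The rest of the post argues that this chain is internally coherent and easy to find persuasive, which is exactly why its weak empirical track record at low-to-moderate minimum wage levels (discussed below) is instructive.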

 

Many people find these arguments very intuitively appealing - it was the consensus opinion amongst economists until the 2000s that having a minimum wage did increase unemployment. But we know that all of these arguments are wrong. I think all of the arguments I listed as examples of intuitively appealing arguments made in the EA and rationality communities have much less evidence behind them than the claim that having a minimum wage increases unemployment. The evidence for the minimum wage increasing unemployment was both theoretical - standard supply and demand, a very successful theory, says that this is what happens - and statistical - you can do regressions showing minimum wages are associated with unemployment. But it turned out to be wrong because social science is really hard and our intuitions are often wrong.

 

I'm pretty sceptical of macroeconomic theory. I think we mostly don't understand how inflation works, DSGE models (the forefront of macroeconomic theory) mostly don't have very good predictive power, and we don't really understand how economic growth works, for instance. So even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data, I'll shrug and think 'eh, probably wrong'.

We have thousands of datapoints of macro data and tens (?) of millions of datapoints of micro data; macro models are actively used by commercial and central banks, so they get actual feedback on their predictions, and they're still not very good.

This is a standard of evidence way, way higher than what's used to evaluate a lot of the intuitive ideas that people have in EA, especially about community building.

 

All the examples I gave of intuitively appealing ideas are probably (>80%) wrong, and they all come from economics - all but one from microeconomics. This is in part because my training is as an economist and so economics examples are what come to mind. I think it's probably also because it takes the rigour of modern microeconomics - large datasets with high-quality causal inference - to establish that we can be confident that ideas are wrong, and even so I have something like 15% credence that any minimum wage meaningfully increases unemployment. It's often intractable to do high-quality causal inference for the questions EAs are interested in, but this means that we should have much more uncertainty about our models, rather than adjusting the standards of evidence we need to believe something.

 

My argument is that if we have these quite high levels of uncertainty even for the question of whether or not having a minimum wage increases unemployment - maybe the social science question which has had the most empirical firepower thrown at it - then we should be way, way more sceptical of the intuitive observational models of social phenomena we come up with.

Comments

This comment is in response both to this post and, in part, to a previous comment thread - the continued discussion seemed more relevant here than in the evaporative cooling model post: https://forum.effectivealtruism.org/posts/wgtSCg8cFDRXvZzxS/ea-is-probably-undergoing-evaporative-cooling-right-now?commentId=PQwZQGMdz3uNxnh3D

To start out:

  • When it comes to the reactions of individual humans and populations, there is inherently far more variability than there is in e.g. the laws of physics
  • No model is perfect; all models are simplifications of reality (particularly when it comes to populations, but this is also the case in e.g. engineering models)
  • A model is only as good as its assumptions, and these should really be stated
  • Just because a model isn't perfect does not mean it has no uses
  • Sometimes there are large data gaps, or you need to create models under a great degree of uncertainty
  • There are indeed some really bad models that should probably be ignored, but dismissing entire fields is not the way to approach this
  • Predicting the future with a large degree of certainty is very hard (hence the dart throwing chimpanzee analogy that made the news, and predictions becoming less accurate after around 5 years or so as per Superforecasting), so a large rate of inaccuracies should not be surprising (although of course you want to minimize these)
  • Being wrong and then new evidence causing you to update your models is how it should work (edited for clarity: as opposed to not updating your models in those situations)

For this post/general:

What I feel is lacking here is some indication of base rates, i.e. how often people trust these models completely or largely without question, as opposed to being aware that all models have their limitations and letting that influence how they are applied. And of course 'people' is in itself a broad category, with some people being more or less questioning/deferential or more or less likely to jump to conclusions. What I am reading here is a suggestion of 'we should listen less to these models without question' without knowing who is doing that to begin with, and how frequently.

Out of the examples given, the minimum wage one was strong (given that there was a lot of debate about this), and I would count the immigration one as a valid example (people again have argued this, but often in such a politically charged way that how intuitive it is depends on the political opinions of the person reading), but many of the other ones seemed less intuitive or did not follow, perhaps to the point of being a straw man.

I do believe you may be able to convince some people of any one of those arguments and make it intuitive to them, if the population you are looking at is, for example, typical people on the internet. I am far less convinced that this is true for a typical person within EA, where there is a large emphasis on e.g. reasoning transparency and quantitative reasoning.

There does appear to be a fair bit of deferral within EA, and some people do accept the thoughts of certain people within the community without doing much of their own evaluation (but given this is getting quite long, I'll leave that for another comment/post). But a lot of people within EA have similar backgrounds in education and work, and the base rate seems to be quantitative reasoning not qualitative, nor accepting social models blindly. In the case of 'evaporative cooling', that EA Forum post seemed more like 'this may be/I think it is likely to be the case' not 'I have complete and strong belief that this is the case'.

"even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data I'll shrug and eh probably wrong." Read it first, I hope. Because that sounds like more of a soldier than a scout mindset, to use the EA terminology.

Even if a model does not apply in every situation, that does not mean the model should not exist, nor that qualitative methods or thought exercises should not be used. You cannot model human behaviour the same way you can model the laws of physics; human emotions do not follow mathematical formulas (and are inconsistent between people), and creating a model of how any one person will act is not possible unless you know that particular person very well, and perhaps not even then. But generally, trying to understand how a population in general could react should be done - after all, if you actually want to implement change it is populations that you need to convince.

I agree with 'do not assume these models are right at the outset'; that makes sense. But I also think it is unhelpful and potentially harmful to go in with the strong assumption that the model will be wrong, without knowing much about it. Because not being open to potential benefits of a model, or even going as far as publicly dismissing entire fields, means that important perspectives of people with relevant expertise (and different to that of many people within EA) will not be heard.

What I think Nathan Beard is trying to say is that EAs/LWers give way too much credence to models that are intuitively plausible and not systematically tested, and generally assume way too much usefulness of an average social science concept or paper, let alone intuition.

And given just how hard it is to make a useful social science model in economics, arguably one of the most well evidenced sciences, I think this is the case.

And I think this critique is basically right, but I think it's still worth funding, as long as we drastically lower our expectations of the average usefulness of the social sciences.

Should we limit this to social phenomena? Is it more likely for them because social phenomena are more complex and have more moving parts, so it's easy to miss the most important effects and use vastly oversimplified models? Also, studies of social phenomena (plus biology and medical research) often don't replicate, possibly for that reason and also because of high p-value cutoffs and the many ways to do analyses until you get a result.

In general, I think we should be careful about claims of causal effects X -> Y -> Z (or longer causal paths from X to Z) capturing most of the impact or even having the right sign for the overall effects of X on Z. Ideally, you should manipulate X and directly measure Z.

Furthermore, the longer the causal path in the argument, the more places it could be missing parallel paths that are more important or have the opposite sign (and the more likely it is to have an error at some point). So, we should be more skeptical of longer causal paths in general. Plus, the intuitive examples you give (at least as stated) don't establish that the specific effects are practically significant even if they exist.

[anonymous]

Yeah this seems right.

I think I don't understand the point you're making with your last sentence. 

I'm one of the people who believe that "Current EA community building is selecting for uncritical people", but it feels like the reasons I think that are different from what you mention in this post.

Specifically, it isn't for a neat theoretical reason but just from talking to university students who recently got into EA via their university group, and from hanging out with university group organisers, and feeling like things were off. I would predict that my most impressive/talented/interesting friends would not enjoy hanging out in the type of environment created by some of the activities the group organisers prioritise (and from talking to my friends this seems to be true - some of them who agree with EA ideas still distanced themselves from their local group because it seemed to be trying hard to make 'EA bots').

In fact, often the arguments given for doing that type of university group organising seem to fall into the problem you describe. I have had uni group organisers mention that during your first interaction with a new person, you should try to optimise for the outcome of getting them to come to more future EA events (eg: don't say things that directly disagree with them, focus on how their interests overlap with EA, don't be too intense), and that even if this harms epistemics, over the long run this will be fine because these people who don't care much about good thinking will come to care about it once they get excited about doing good (due to your events!). It feels like I haven't seen evidence for this being true besides the intuitions of new-ish community builders.

[anonymous]

I think one of my critiques of this is that I'm very sceptical that strong conclusions should be drawn from any individual's experiences and those of their friends. My current view is that we just have limited evidence for any models of what good and bad community building looks like, and the way to move forward is to try a wide range of stuff and do what seems to be working well.

I think I mostly disagree with your third paragraph. The assumptions I see here are:

  1. Not being very truth-seeking with new people will either select for people who aren't very critical or will turn people who are critical into uncritical people 
  2. This will have second-order effects on the wider community's epistemics, specifically in the direction of fewer critiques of EA ideas

i.e. it's not obvious to me that it makes EA community epistemics worse in the sense that EAs make worse decisions as a result of this. 

Maybe these things are true or maybe they aren't. My experience has not been this (for context, I have been doing uni group community building for 2 years): the sorts of people who get excited about EA ideas and get involved are very smart, curious people who are very good critical thinkers.

But in the spirit of the post, what I'd want to see are some regressions - like, I'd want to see some measure of whether the average new EA at a uni group which doesn't do community building in a way that strongly promotes a kind of epistemic frankness is less critical of ideas in general than an appropriate reference class. 

Like, currently I don't talk about animal welfare when first talking to people about EA because it's reliably the thing which puts the most people off. I think the first-order effect of this is very clear - more people come to stuff - and my guess is that there are ~no second-order effects. I want to see some systematic evidence that this would have bad second-order effects before I give up the clearly positive first-order one. 

Compared to you, I think intuitions are a good guide to pointing out what kinds of social interactions are appealing vs not appealing to people similar to us. I am less in favour of trying a wide range of stuff and then naively doing what seems to be working well based on simpler metrics; specifically, I am less in favour of handing that strategy to new group organisers, because:

1) I trust their judgment way less than that of more experienced people in EA who have done useful direct work before
2) You miss out on all the great people you turn off because of your activities. You won't get negative feedback from those people because they stop showing up and won't bother to tell you why
3) I think the metrics used to judge what seems to be working well are often the wrong ones (number of people who show up to your events, who do the intro fellowship, who go to EAGx from your university etc.) and noticing if you're getting people interested who actually seem likely to have a decent probability of very high impact in the world is hard to do

I also don't think the people you'd get into EA that way would be less critical of ideas in general than the average university student, just because the 'average university student' is a very low bar. I'm not sure what reference class to compare them to (one random suggestion: perhaps the libertarian society at a university?) or what sort of prediction to make besides that I don't think people I think of as having the aptitudes that are most helpful for having massive amounts of positive impact (being good at thinking for research, being good at starting projects etc.) would enjoy the kind of environment created at most university groups.  

Specifically, one of my main gripes is that university group organisers sometimes seem to just be optimising for getting in more new people instead of doing things and organising things that would have actually appealed to them. Under some assumptions, this is not a massive problem because my guess is this happens less at top universities (though unclear, friends at some other top universities also complain about the bad vibes created) and the people who would be most interested in effective altruism would get interested anyway in spite of, instead of because of, community building strategy. So the main visible effect is the dilution of the EA community and ideas, which could actually just be not that bad if you don't particularly care about the "EA" label providing useful information about people who use it. 

[anonymous]

Yeah, I'm pretty sceptical of the judgement of experienced community builders on questions like the effect of different strategies on community epistemics. I think if I frame this as an intervention - "changing community building in x way will improve EA community epistemics" - I have a strong prior that it has no effect, because most interventions people try have no or small effects (see the famous graph of global health interventions).

I think the following are some examples of places where you'd think people would have good intuitions about what works well, but they don't:

  • Parenting. We used to just systematically abuse children and think it was good for them (e.g. denying children the ability to see their parents in the hospital). There's a really interesting passage in Invisible China where the authors describe loving grandparents deeply damaging the grandchildren they care for by not giving them enough stimulation as infants. 
  • Education. It's really, really hard to find education interventions which work in rich countries. It's also interesting that in the US there's lots of opposition from teachers over teaching phonics despite it being one of the few rich-country education interventions with large effect sizes (although it's hard to judge how much of this is for self-interested reasons)
  • I think it's unclear how well you'd expect people to do on the economics examples I gave. I probably would have expected people to do well with cash transfers, since in fact lots of people do get cash transfers (e.g. pensions, child benefits, inheritance), and do OK with the minimum wage, since at least some fraction of people have a sense of how the place they work for hires people. 
  • Psychotherapy. We only got good treatments that worked for specific mental health conditions other than mild-to-moderate depression (rather than treatments to generally improve people's lives - I haven't read anything on that) once we started doing RCTs. I'm most familiar with OCD treatment specifically, and the current best practice was only developed in the late 60s. 

Hmm, would you then also say that we should be skeptical about claims about the overall usefulness of university group organising? If you frame it as an intervention of "run x program (intro fellowship, retreat, etc.) that will increase the probability someone has a large positive impact", would you also have a strong prior that it has no effect, because most interventions people try - especially education interventions, which is a lot of what uni groups try to do - have no or small effects? 

If we try and filter out criminals we just end up selecting smart criminals who are good at hiding their misdeeds. [emphasis added]

I don't buy that this is an 'intuitive' view; I think you are setting up a straw man here. I think the vast majority of immigration restrictivists would view attempting to filter for criminality as a positive thing that would on average improve the quality of immigrants, because a lot of criminals are stupid and could be apprehended. As a reductio, do you think the average person finds it intuitive that, if an immigrant confesses to murder when asked by a border agent, the border agent should shrug his shoulders and ignore this information?

[anonymous]

I agree - I think this second part isn't intuitive to most people. I was using 'intuitive' somewhat loosely to mean 'based on the intuitions of the person making the argument'.

You write that "Many people find these arguments very intuitively appealing", but I struggle to think of three people that would agree with that intuition.

The reason I bring this up is not just pedantry. I was pretty sympathetic to your argument until I got to the examples, but a lot of them seemed to involve a bit of motte-and-bailey. In many cases I can either come up with a version that I agree is intuitive, or one that is clearly false, but not both.

For another example, your minimum wage example combined multiple claims:

  1. Having a minimum wage will increase unemployment rates. 
  2. Employers hire workers up until the point that the marginal revenue generated by each worker equals the marginal cost of hiring workers. 
  3. If the wage workers have to be paid goes up then unemployment will go up because marginal productivity is diminishing in the number of workers. 

I agree that the conjunction is literally false, because 2) is not an accurate description of hiring manager thought processes, 1) is clearly false if that minimum wage is very, very low, and 3) is false in some ranges for monopsony models. But 2) and 3) could be a reasonable approximation for many purposes, and I was not under the impression that the core claim, 1), had been disproven in an economically meaningful way. Recent research like the 2019 Dube QJE paper suggests that historically US minimum wage increases haven't increased unemployment, but others like Clemens and Strain (2021) or Neumark and Shirley (2021) suggest they did increase unemployment.

[anonymous]

I completely stand by the minimum wage one. This was the standard model of how labour markets worked until something like the Shapiro-Stiglitz model (I think), it is still the standard model for how input markets work, and if you're writing a general equilibrium model you'll probably still have wage = marginal product of labour. 

Meta-analyses find that the minimum wage doesn't increase unemployment until it reaches about 60% of the median wage (https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/844350/impacts_of_minimum_wages_review_of_the_international_evidence_Arindrajit_Dube_web.pdf), and most economists don't agree that even a $15-an-hour minimum wage would lead to substantial unemployment (although many are uncertain): https://www.igmchicago.org/surveys/15-minimum-wage/

This comment does not sound like 'completely standing by' to me! If a $15/hour US minimum wage, which is the relevant current policy proposal, reduces employment, that means the intuition is correct. 

I think the IGM poll is weak evidence for you here. Let's look at some quotes from the guys who didn't agree that increasing the minimum wage would substantially increase unemployment (i.e. ostensibly disagreed with the intuition):

Evidence is that it would be lower by perhaps 1 - 2 %. Lots of margins for adjustments.

Lower, probably; substantially lower, not clear at all.

Empirical studies disagree on the sign of the effect. Few of those concluding in favor of negative are consistent with "substantially."

I don't think the evidence supports the bold prediction that employment will be substantially lower. Not impossible, but no strong evidence.

Empirical evidence suggests the effects on employment would be modest.

Lower, yes. "Substantially"? Not clear. For small changes in min wage, there are small changes in employment. But this is a big change.

In many cases, these people either 1) believe there would be unemployment, but are getting hung up on 'substantial', or 2) think there will be other adjustments (e.g. reduction in non-wage benefits). I think the headline result here is somewhat misleading - and that is before any adjustment for the partisan bias issue. If my intuition was that increasing the minimum wage would increase unemployment, and the people who ostensibly disagree with me think it would only cause 780,000[1] people to lose their jobs, I would consider myself vindicated.

I haven't read that 2019 Dube review, though I'm guessing it's similar to the other 2019 Dube review I posted. But as I noted in the grandparent, there is serious work on the other side since (e.g. the two 2021 papers).

  1. ^ 52m people on under $15/hour according to Oxfam * 1.5% according to Nordhaus, who voted 'disagree' = 780,000

This discussion seems a bit of a side-track to your main point. These are just examples to illustrate that intuition is often wrong - you're not focused on the minimum wage per se. Potentially it could have been better if you had chosen more uncontroversial examples to avoid these kinds of discussions.

[anonymous]

Maybe - I meant to pick examples where I thought the consensus of economists was clear (in my mind it's very clearly the consensus that having a low minimum wage has no employment effects). 

Fwiw I think this is good advice.

If you want to make a point about science, or rationality, then my advice is to not choose a domain from contemporary politics if you can possibly avoid it. If your point is inherently about politics, then talk about Louis XVI during the French Revolution. Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.

I don't buy any of the arguments you listed at the top of the post, except for the toxoplasma of rage (with lowish probability) and evaporative cooling. But both of these (to me) seem like a description of an aspect of a social dynamic, not the aspect. And they are currently not very decision-relevant.

Like, obviously they’re false. But are they useful? I think so!

I'd be interested in different, more interesting or decision-relevant or less obvious mistakes you often see.

[anonymous]

I suppose I'm thinking of the example I gave, where someone I know doing selection for an important EA program didn't include questions about altruism because they thought that adverse selection effects were sufficiently bad. 

Seems like that is just a bad argument, and can be solved by saying "well, that's obviously wrong for obvious, commonsense reasons", and if they really want to, they can make a spreadsheet, fill it in with the selection pressures they think they're causing, and see for themselves that indeed it's wrong.

The argument I'm making is that for most of the examples you gave, I thought "that's a dumb argument". And if people are consistently making transparently dumb selection arguments, this seems different from people making subtly dumb selection arguments, like economists do.

If you have subtly dumb selection arguments, you should go out and test which are true; if you're making transparently dumb ones, you should figure out how to formulate better hypotheses. Chances are you're not yet even oriented in the vague direction of reality in the domain you're attempting to reason in.
