rorty

Comments

Deference for Bayesians

Maybe, to see if I understand, I should try to answer myself: it'd be a mix of judgment, empirical evidence (but much broader than the causal identification papers), deductive arguments that seem to have independent force, and maybe some deference to an interdisciplinary group with good judgment, not necessarily academics?

Deference for Bayesians

Makes sense on 3 and 4. Out of curiosity, what would change your mind on the minimum wage? If you don't find empirical economics or the views of experts valuable (or at least don't put much stock in them), how would you decide whether supply and demand is a better or worse theory than an alternative? The premises underlying traditional economic models are clearly not 100% true always and everywhere, so their conclusions need not be either. How do you decide about a theory's accuracy or usefulness if not by reference to evidence or expertise?

Deference for Bayesians

Thanks for this. I don't agree where scientists are concerned, at least in their published work, but I do agree that it's of course inevitable, to an extent, to bring in other forms of reasoning to make the subjective assessments that allow for inferences. So I think we're mostly arguing over extent.

My argument would basically be:

  1. Science made great progress when it agreed to focus its argument on empirical evidence and explanations of that evidence.
  2. Economics has (in my opinion) made great progress by moving away from a focus on pure deduction and theory (akin to the natural philosophers pre-science) and toward generating careful empirical evidence (especially about cause and effect). Theory's job is then to explain that evidence. (Supply and demand models don't do a good job of explaining the minimum wage evidence that's been developed over more than a decade.)
  3. In forecasting, starting with base rates (a sort of naive empiricism) is best practice; a toy sketch of what that looks like follows this list. Likewise, naive empiricism seems to work in business, sports, etc. Even descriptive and correlational data appears practically very useful.
  4. Therefore, I'd like to see EA and the rationality community stay rooted in empiricism as much as possible. That's not always an option, of course, but empirically driven processes seem to beat pure deduction much of the time, even when the data available doesn't meet every standard of statistical inference. This still leaves plenty of room for the skilled Bayesian to weight things well, vet the evidence, depart from it when warranted, and so on.
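To make point 3 a bit more concrete, here's a minimal sketch of what "start from the base rate, then let judgment do the weighting" can look like. The 30% base rate and the likelihood ratio of 3 are made-up numbers for illustration, not figures from any study.

```python
# Toy illustration: anchor on a base rate, then let one piece of evidence update it.
# All numbers are hypothetical and chosen only to show the mechanics.

def update_on_evidence(prior_prob: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

base_rate = 0.30         # naive empiricism: how often similar claims have panned out
likelihood_ratio = 3.0   # judgment/theory enters here: how diagnostic is the new evidence?

posterior = update_on_evidence(base_rate, likelihood_ratio)
print(f"Base rate {base_rate:.0%} -> posterior {posterior:.0%} after one piece of evidence")
```

The empirical starting point does the anchoring; theory and judgment enter through how diagnostic you take the new evidence to be, rather than replacing the base rate.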

I've not mentioned experts even once. I find the question of when and how much to defer to be quite difficult and I have no strong reaction to what I think is your view on that. My concern is with your justification of it. Anything that moves EA/rationality away from empiricism worries me. A bunch of smart people debating and deducing, largely cut off from observation and experiment, is a recipe for missing the mark. I know that's not truly what you're suggesting but that's where I'm coming from.

(Finally, I grant that the scale of the replication crisis, or, put another way, which social science field is in question, matters a lot, and I've not addressed it.)

Deference for Bayesians

Philosopher Michael Strevens argues that what you're calling myopic empiricism is what has made science so effective: https://aeon.co/essays/an-irrational-constraint-is-the-motivating-force-in-modern-science

I'd like to see more deference to the evidence, as you say, which isn't the same as deference to the experts themselves. But more deference to either theory or common sense is an invitation to motivated reasoning and would, I suspect, be a step backward for most people in most contexts.

But ultimately I'd like to see this question solved by some myopic empiricism! We know experts are not great forecasters; we know they're pretty good at knowing which studies in their fields will or won't replicate. More experiments like that are what will tell us how much to defer to them.

How do you balance reading and thinking?

I lean toward: When in doubt, read first and read more. Ultimately it's a balance and the key is having the two in conversation. Read, then stop and think about what you read, organize it, write down questions, read more with those in mind.

But thinking a lot without reading is, I'd posit, a common trap that very smart people fall into. In my experience, smart people trained in science and engineering are especially susceptible when it comes to social problems--sometimes because they explicitly don't trust "softer" social science, and sometimes because they don't know where to look for things to read.

And that's key: where do you go to find things to read? If, like me, you suspect there's more risk of under-reading than under-thinking, then it becomes extra important to build better tools for finding the right things to read on a topic you're not yet familiar with. That's a challenge I'm working on, and one with a lot of easy room for improvement.

The Folly of "EAs Should"

I take this post to raise both practical/strategic and epistemological/moral reasons to think EAs should avoid being too exclusive or narrow in what they say "EAs should do." Some good objections have been raised in the comments already. 

Is it possible this post boils down to shifting from saying what EAs should do to what EAs should not do? 

That may sound intuitively unappealing and unstrategic, because you're not presenting a compelling, positive message to the outside world. But I don't mean literally going around telling people what not to do. I mean focusing on shifting people away from clearly bad or neutral activities toward positive ones, rather than focusing so much on what the optimal paths are. I raised this before in my "low-fidelity EA" comment: https://forum.effectivealtruism.org/posts/6oaxj4GxWi5vuea4o/what-s-the-low-resolution-version-of-effective-altruism?commentId=9AsgNmts2JqibdcwY

Even if you don't think there are epistemological/moral reasons for this, there may be practical/strategic ones: a large movement that applies rationality and science to encourage all its participants to do some good may do a lot more good than a small one that uses them to do the most good.

Julia Galef and Angus Deaton: podcast discussion of RCT issues (excerpts)

This kind of debate is why I'd like to see the next wave of Tetlock-style research focus on the predictive value of different types of evidence. We know a good bit now about the cognitive styles that are useful for predicting the future, and even for estimating causal effects in simulated worlds. But we still don't know that much about the kinds of evidence that help. (Base rates, sure, but what else?) Say you're trying to predict the outcome of an experiment. Is reading about a similar experiment helpful? Is descriptive data? Is interviewing three people who've experienced the phenomenon? When is each more or less useful? It's time to take these questions about evidence out of the realms of philosophy, statistical theory, and personal opinion and study them as social phenomena. And yes, that's somewhat circular, because what kind of evidence about evidence counts? But I think we'd still benefit from knowing a lot more about the usefulness of different sorts of evidence, and prediction tournaments would be a nice way to study their cash value.
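As a very rough sketch of what such a study could look like (everything here, including the data, is hypothetical): assign forecasters different evidence packets on the same questions and compare accuracy by packet.

```python
# Hypothetical sketch: does the type of evidence a forecaster sees improve accuracy?
# Each record: the evidence packet the forecaster received, their probability forecast,
# and the binary outcome of the question (1 = it happened, 0 = it didn't).
from collections import defaultdict

forecasts = [  # made-up illustrative data, not real tournament results
    ("similar_experiment", 0.80, 1), ("similar_experiment", 0.30, 0),
    ("descriptive_data",   0.60, 1), ("descriptive_data",   0.55, 0),
    ("interviews",         0.70, 0), ("interviews",         0.40, 1),
]

brier = defaultdict(list)
for packet, prob, outcome in forecasts:
    brier[packet].append((prob - outcome) ** 2)  # Brier score: lower is better

for packet, scores in brier.items():
    print(f"{packet}: mean Brier score = {sum(scores) / len(scores):.3f}")
```

With a real tournament behind it, the same comparison would tell you which sorts of evidence actually earn their keep.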

Here's the case that the low-fidelity version is actually better. Not saying I believe it, but trying to outline what the argument would be...

Say the low-fidelity version is something like: "Think a bit about how you can do the most good with your money and time, and do some research." 

Could this be preferable to the real thing?

It depends on how sharply diminishing the returns are to the practice of thinking about all of this stuff. Sometimes it seems like effective altruists see no diminishing returns at all. But it's plausible that they are steeply diminishing, and that effectively the value of EA is avoiding really obviously bad uses of time and money, rather than successfully parsing whether AI safety is better or worse than institutional decision-making as an area of focus. 

If you can get most of the benefits of EA with people just thinking a little about whether they're doing as much good as they could be, perhaps the low-fidelity EA is the best EA: it does a lot of good and saves a lot of time for other things. And that's before you add in the potential of the low-fidelity version to spread more quickly and put off fewer people, thereby also potentially doing much more good.
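Here's a toy version of the diminishing-returns argument. The saturating value function and every number in it are assumptions I'm inventing for illustration; if returns diminish slowly instead, the conclusion flips, which is exactly the open question.

```python
import math

def value_of_deliberation(hours: float, saturation: float = 3.0) -> float:
    """Toy model: value of thinking about how to do good, with steeply diminishing
    (saturating) returns. The functional form and constant are illustrative guesses."""
    return 1 - math.exp(-hours / saturation)

low_fidelity = value_of_deliberation(5)     # "think a bit and do some research"
full_version = value_of_deliberation(500)   # sustained engagement with cause prioritization

share = low_fidelity / full_version
print(f"Low-fidelity version captures {share:.0%} of the full version's value")
# With these made-up numbers: about 81%. With slowly diminishing returns the share
# drops sharply, which is the empirical question the argument turns on.
```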

Improving Institutional Decision-Making: a new working group

Do you see this area as limited to cases where participants in a decision are trying and failing to make "good" decisions by their own criteria (i.e., where incentives are aligned but performance falls short because of bad process or similar), or are you also thinking of cases where participants have divergent goals and decisions that are suboptimal from an EA standpoint are driven by conflict and misaligned incentives rather than by process failures?

Incentivizing forecasting via social media

Agree on both points. The Economist's World in 2021 partnership with Good Judgment is interesting here. I also think that as GJ and others produce more content themselves, other content producers will start to see the potential of forecasts as a differentiated form of user-generated content they could explore. (My background is in media/publishing, so I'm more attuned to that side than to the internal dynamics of the social platforms.) If there are further discussions on this and you're looking for participants, let me know.
