
rorty

91 karma · Joined Aug 2020

Posts: 1


Comments: 17

I like your description here a lot. I am no expert, but I agree with your characterization that Peirce's pragmatic maxim offers something really valuable even for those committed to correspondence and, more generally, to analytic philosophy.

On Rorty: his last book was just published posthumously, and it offers an intriguing and somewhat different take on his thinking. The basics haven't changed, but he frames his version of pragmatism in terms of the Enlightenment and anti-authoritarianism. I won't try to summarize; your mileage may vary, but I've found it interesting.

For me, again not as any kind of philosophy expert, the original appeal came from disillusionment with metaphysics. It seemed to me as a student that the arguments were just language games. The pieces might be perfectly logical in relation to each other, but they had no force because there was no solid foundation; there was always an assumption that could be challenged. (It's admittedly hard for me now to describe this without slipping into Rorty-esque language.)

And then I read Rorty's Philosophy and Social Hope, which was my introduction to pragmatism and which seemed to address these concerns directly. Putting goals up front seemed a way around the constant possibility of objections: at some point we all have things we want to achieve, and that can be the starting point for something. (Rorty also sort of gives you permission to stop reading philosophy and get on with it, which at the time I appreciated.)

I imagine most EAs would not really enjoy Rorty because he sort of delights in constantly knocking seemingly common-sense notions of truth and a lot of his best writing is purposefully loose and interpretive. (Side note: the new Rorty book has some interesting nods toward causality; he's still rejecting correspondence but recognizing that causal forces limit our actions. One more reason I read him as attacking philosophy more than attacking reality.) Still, I think he offers a starting point that can really work for building up an epistemology based on application and moral goals rather than on metaphysics. And that's the part I think EAs might find exciting and interesting. 

Thanks for this. Here's the Stanford Encyclopedia of Philosophy's first paragraph:

"Pragmatism is a philosophical tradition that – very broadly – understands knowing the world as inseparable from agency within it. This general idea has attracted a remarkably rich and at times contrary range of interpretations, including: that all philosophical concepts should be tested via scientific experimentation, that a claim is true if and only if it is useful (relatedly: if a philosophical theory does not contribute directly to social progress then it is not worth much), that experience consists in transacting with rather than representing nature, that articulate language rests on a deep bed of shared human practices that can never be fully ‘made explicit’." https://plato.stanford.edu/entries/pragmatism

I'd say pragmatism is as much a criticism of certain directions in philosophy as anything. It's a method of asking of philosophical distinctions, "What difference would that make?" But instead of turning toward skepticism, it seeks to reorient philosophy and reasoning toward the fulfillment of human goals.

Maybe, to see if I understand, I should try to answer: it would be a mix of judgment, empirical evidence (but much broader than the causal-identification papers), deductive arguments that seem to have independent force, and maybe some deference to an interdisciplinary group with good judgment, not necessarily academics?

Makes sense on 3 and 4. Out of curiosity, what would change your mind on the minimum wage? If you don't find empirical economics or the views of experts valuable (or at least don't put much stock in them), how would you decide whether supply and demand is a better or worse theory than an alternative? The premises underlying traditional economic models are clearly not always and everywhere true, so their conclusions need not be either. How do you decide about a theory's accuracy or usefulness if not by reference to evidence or expertise?

Thanks for this. I don't agree where scientists are concerned, at least in their published work, but I do agree that it's inevitable, to an extent, to bring in various other forms of reasoning to make the subjective assessments that allow for inferences. So I think we're mostly arguing over extent.

My argument would basically be:

  1. Science made great progress when it agreed to focus its argument on empirical evidence and explanations of that evidence.
  2. Economics has (in my opinion) made great progress by moving away from a focus on pure deduction and theory (akin to the natural philosophers before modern science) and toward generating careful empirical evidence (especially about cause and effect). Theory's job is then to explain that evidence. (Supply and demand models don't do a good job of explaining the minimum wage evidence that's been developed over more than a decade.)
  3. In forecasting, starting with base rates (a sort of naive empiricism) is best practice (see the sketch after this list). Likewise, naive empiricism seems to work in business, sports, etc. Even descriptive and correlational data appears practically very useful.
  4. Therefore, I'd like to see EA and the rationality community stay rooted in empiricism as much as possible. That's not always an option, of course, but empirically driven processes seem to beat pure deduction much of the time, even when the available data doesn't meet every standard of statistical inference. This still leaves plenty of room for the skilled Bayesian to weight things well, vet the evidence, depart from it when warranted, etc.
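
To make item 3 concrete, here's a minimal sketch of the "start with the base rate, then update" workflow in Python. The scenario, the `bayes_update` helper, and every probability in it are hypothetical, chosen purely for illustration:

```python
# A toy sketch of "anchor on the base rate, then update on evidence"
# via Bayes' rule. All numbers below are invented for illustration.

def bayes_update(prior: float, likelihood: float, false_alarm: float) -> float:
    """Posterior probability of an event given one piece of evidence.

    prior:       base rate of the event, P(event)
    likelihood:  P(evidence | event)
    false_alarm: P(evidence | no event)
    """
    numerator = likelihood * prior
    return numerator / (numerator + false_alarm * (1 - prior))

# Hypothetical example: 30% of comparable projects ship on time (base rate);
# a strong pilot shows up in 80% of on-time projects but 40% of late ones.
posterior = bayes_update(prior=0.30, likelihood=0.80, false_alarm=0.40)
print(f"P(on time | strong pilot) = {posterior:.2f}")  # ≈ 0.46
```

The point of the sketch is the ordering: the empirical base rate does most of the work, and subjective judgment enters only as an adjustment to it.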

I've not mentioned experts even once. I find the question of when and how much to defer quite difficult, and I have no strong reaction to what I think is your view on that. My concern is with your justification of it. Anything that moves EA/rationality away from empiricism worries me. A bunch of smart people debating and deducing, largely cut off from observation and experiment, is a recipe for missing the mark. I know that's not truly what you're suggesting, but that's where I'm coming from.

(Finally, I grant that the scale of the replication crisis, or, put another way, which social science field is in question, matters a lot, and I've not addressed it.)

Philosopher Michael Strevens argues that what you're calling myopic empiricism is what has made science so effective: https://aeon.co/essays/an-irrational-constraint-is-the-motivating-force-in-modern-science

I'd like to see more deference to the evidence, as you say, which isn't the same as deference to the experts themselves. But more deference to either theory or common sense is an invitation to motivated reasoning, and for most people in most contexts it would, I suspect, be a step backward.

But ultimately I'd like to see this question settled by some myopic empiricism! We know experts are not great forecasters; we know they're pretty good at knowing which studies in their fields will or won't replicate. More experiments like that are what will tell us how much to defer to them.

Answer by rorty · Jan 18, 2021

I lean toward: When in doubt, read first and read more. Ultimately it's a balance and the key is having the two in conversation. Read, then stop and think about what you read, organize it, write down questions, read more with those in mind.

But thinking a lot without reading is, I'd posit, a common trap that very smart people fall into. In my experience, smart people trained in science and engineering are especially susceptible when it comes to social problems: sometimes because they explicitly don't trust "softer" social science, and sometimes because they don't know where to look for things to read.

And that's key: where do you go to find things to read? If, like me, you suspect there's more risk of under-reading than under-thinking, then it becomes extra important to build better tools for finding the right things to read on a topic you're not yet familiar with. That's a challenge I'm working on, and one where there's a lot of easy room for improvement.

I take this post to raise both practical/strategic and epistemological/moral reasons to think EAs should avoid being too exclusive or narrow in what they say "EAs should do." Some good objections have been raised in the comments already. 

Is it possible this post boils down to shifting from saying what EAs should do to what EAs should not do? 

That may sound intuitively unappealing and unstrategic, because you're not presenting a compelling, positive message to the outside world. But I don't mean literally going around telling people what not to do. I mean focusing on shifting people away from clearly bad or neutral activities and toward positive ones, rather than focusing so much on what the optimal paths are. I raised this before in my "low-fidelity EA" comment: https://forum.effectivealtruism.org/posts/6oaxj4GxWi5vuea4o/what-s-the-low-resolution-version-of-effective-altruism?commentId=9AsgNmts2JqibdcwY

Even if you don't think there are epistemological/moral reasons for this, there may be practical/strategic ones: a large movement that applies rationality and science to encourage all its participants to do some good may do a lot more good than a small one that uses them to do the most good.

This kind of debate is why I'd like to see the next wave of Tetlock-style research focus on the predictive value of different types of evidence. We know a good bit now about the types of cognitive styles that are useful for predicting the future, and even for estimating causal effects in simulated worlds. But we still don't know that much about the kinds of evidence that help. (Base rates, sure, but what else?) Say you're trying to predict the outcome of an experiment. Is reading about a similar experiment helpful? Is descriptive data helpful? Is interviewing three people who've experienced the phenomenon? When is one more or less useful? It's time to take these questions about evidence out of the realms of philosophy, statistical theory, and personal opinion and study them as social phenomena. And yes, that is circular, because what kind of evidence on evidence counts? But I think we'd still benefit from knowing a lot more about the usefulness of different sorts of evidence, and prediction tournaments would be a nice way to study their cash value.
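
To make that concrete, here's a minimal sketch of how such a tournament's results might be scored. The data, the evidence categories, and the outcomes are all invented for illustration:

```python
# A hypothetical analysis for a prediction tournament on evidence types:
# group forecasts by the kind of evidence forecasters were given, then
# compare mean Brier scores (lower is better). All records are invented.

from collections import defaultdict

# (evidence type, forecast probability, actual outcome: 1 = happened)
forecasts = [
    ("base_rate_only",     0.30, 0),
    ("base_rate_only",     0.60, 1),
    ("similar_experiment", 0.20, 0),
    ("similar_experiment", 0.75, 1),
    ("interviews",         0.50, 1),
    ("interviews",         0.40, 0),
]

def brier(p: float, outcome: int) -> float:
    """Squared error between a probability forecast and the outcome."""
    return (p - outcome) ** 2

scores = defaultdict(list)
for evidence_type, p, outcome in forecasts:
    scores[evidence_type].append(brier(p, outcome))

for evidence_type, vals in sorted(scores.items()):
    print(f"{evidence_type:>18}: mean Brier = {sum(vals) / len(vals):.3f}")
```

With real tournament data in place of the invented records, the same comparison would start to put numbers on which kinds of evidence actually improve forecasts.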

Answer by rorty · Dec 31, 2020

Here's the case that the low-fidelity version is actually better. Not saying I believe it, but trying to outline what the argument would be...

Say the low-fidelity version is something like: "Think a bit about how you can do the most good with your money and time, and do some research." 

Could this be preferable to the real thing?

It depends on how sharply diminishing the returns are to thinking about all of this stuff. Sometimes it seems like effective altruists see no diminishing returns at all. But it's plausible that the returns diminish steeply, and that in effect the value of EA lies in avoiding really obviously bad uses of time and money rather than in successfully parsing whether AI safety is a better or worse focus area than institutional decision-making.

If you can get most of the benefits of EA with people just thinking a little about whether they're doing as much good as they could be, perhaps low-fidelity EA is the best EA: it does a lot of good and saves a lot of time for other things. And that's before you add in the potential of the low-fidelity version to spread more quickly and put off fewer people, thereby potentially doing much more good still.
