All of rorty's Comments + Replies

I like your description here a lot. I am no expert but I agree with your characterization that Peirce's pragmatic maxim offers something really valuable even for those committed to correspondence and, more generally, to analytic philosophy. 

On Rorty, his last book was just published posthumously and it offers an intriguing and somewhat different take on his thinking. The basics haven't changed, but he frames his version of pragmatism in terms of the Enlightenment and anti-authoritarianism. I won't try to summarize; your mileage might vary but I've fou... (read more)

1
William McAuliffe
3y
Thanks for your thoughtful response. Your post piqued my interest enough that I am finally getting around to reading Susan Haack's Evidence and Inquiry, which is a theory of justification that builds on Peirce and has an entire chapter devoted to Rorty. She is very unsympathetic to Rorty, but I suspect that other commentators on pragmatism, such as Cornel West and Louis Menand, are more sympathetic. It may not be a coincidence that the latter folks have more applied, political interests, which would jibe with EA as you say.

Thanks for this. Here's Stanford Encyclopedia of Philosophy's first paragraph:

"Pragmatism is a philosophical tradition that – very broadly – understands knowing the world as inseparable from agency within it. This general idea has attracted a remarkably rich and at times contrary range of interpretations, including: that all philosophical concepts should be tested via scientific experimentation, that a claim is true if and only if it is useful (relatedly: if a philosophical theory does not contribute directly to social progress then it is not worth much), ... (read more)

Makes sense on 3 and 4. Out of curiosity, what would change your mind on the minimum wage? If you don't find empirical economics or the views of experts valuable (or at least don't put much stock in them), how would you decide whether supply and demand is a better or worse theory than an alternative? The premises underlying traditional economic models are clearly not 100% always-and-everywhere true, so their conclusions need not be either. How do you decide about a theory's accuracy or usefulness if not by reference to evidence or expertise?

4
[anonymous]
3y
I think I would find it very hard to update on the view that the minimum wage reduces demand for labour. Maybe if there were an extremely well-done RCT showing no effect from a large minimum wage increase of $10, I would update. Incidentally, here is discussion of an RCT on the minimum wage which illustrates where the observational studies might be going wrong. The RCT shows that employers reduced hours worked, which wouldn't show up in observational studies, which mainly study disemployment effects. I am very conscious of the fact that almost everyone I have ever tried to convince of this view on the minimum wage remains wholly unmoved. I should make it clear that I am in favour of redistribution through tax credits, subsidies for childcare and that kind of thing. I think the minimum wage is not a smart way to help lower-income people.
1
rorty
3y
Maybe to try and see if I understand I should try to answer: it'd be a mix of judgment, empirical evidence (but much broader than the causal identification papers), deductive arguments that seem to have independent force, and maybe some deference to an interdisciplinary group with good judgment, not necessarily academics?

Thanks for this. I don't agree for scientists, at least in their published work, but I do agree that it's of course inevitable, to an extent, to bring in various other forms of reasoning to make the subjective assessments that allow for inferences. So I think we're mostly arguing over extent.

My argument would basically be:

  1. Science made great progress when it agreed to focus its argument on empirical evidence and explanations of that evidence.
  2. Economics has (in my opinion) made great progress in moving from a focus on pure deduction and theory (akin to the Natur
... (read more)
13
[anonymous]
3y

2. I would disagree on economics. I view the turn of economics towards high causal identification and complete neglect of theory as a major error, for reasons I touch on here. The discipline has moved from investigating important things to trivial things with high causal identification. The trend towards empirical behavioural economics is also in my view a fad with almost no practical usefulness. (To reiterate my point on the minimum wage - the negative findings are almost certainly false: it is what you would expect to find for a small treatme... (read more)

Philosopher Michael Strevens argues that what you're calling myopic empiricism is what has made science so effective: https://aeon.co/essays/an-irrational-constraint-is-the-motivating-force-in-modern-science

I'd like to see more deference to the evidence, as you say, which isn't the same as deference to the experts themselves. But more deference to either theory or common sense is an invitation for motivated reasoning and, for most people in most contexts, would I suspect be a step backward.

But ultimately I'd like to see this question solved by some myopic empiricism!... (read more)

6
[anonymous]
3y
Thanks for sharing that piece, it's a great counterpoint. I have a few thoughts in response.

Strevens argues that myopic empiricism drives people to do useful experiments which they perhaps might not have done if they stuck to theory. This seems to have been true in the case of physics. However, there is also a mountain of cases of wasted research effort, some of them discussed in my post. The value of information from e.g. most studies on the minimum wage and observational nutritional epidemiology is minuscule in my opinion. Indeed, it's plausible that the majority of social science research is wasted money, per the claims of the meta-science movement.

I agree that it's not totally clear whether it would be positive if people in general tried to put more weight on theory and common sense. But some reliance on theory and common sense is just unavoidable. So this is a question of how much reliance we put on them, not whether to do it at all. For example, to make judgements about whether we should act on the evidence on whether masks work, we need to make judgements about the external validity of studies, which necessarily involves making some theoretical judgements about the mechanism by which masks work, which the empirical studies confirm. The true logical extension of myopic empiricism is the inability to infer anything from any study: "We showed that one set of masks worked in a series of studies in the US in 2020, but we don't have a study of whether this other set of masks works in Manchester in 2021, so we don't know whether they work."

I tend to think it would be positive if scientists gave up on myopic empiricism and shifted to being more explicitly Bayesian.
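To make the "more explicitly Bayesian" alternative concrete, here is a minimal sketch, with made-up numbers, of how a theory-driven prior about the mechanism could be combined with study evidence via a conjugate Beta-binomial update, rather than treating each new setting as a blank slate:

```python
# A minimal sketch (all numbers invented for illustration) of Bayesian
# updating: a mechanistic prior about masks, updated on study evidence.

# Prior: mechanistic reasoning makes us lean toward "masks work".
# Beta(8, 2) puts the prior mean at 0.8.
prior_alpha, prior_beta = 8.0, 2.0

# Evidence: suppose (hypothetically) 9 of 12 studies in other settings
# found a protective effect.
effects_found, studies = 9, 12

# Conjugate Beta-binomial update.
post_alpha = prior_alpha + effects_found
post_beta = prior_beta + (studies - effects_found)
posterior_mean = post_alpha / (post_alpha + post_beta)

print(round(posterior_mean, 3))  # 0.773
```

The point of the sketch is only the structure: theory enters as a prior, evidence from other settings moves it, and no single unstudied setting forces you back to "we don't know whether they work."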
4
Answer by rorty
Jan 18, 2021

I lean toward: When in doubt, read first and read more. Ultimately it's a balance and the key is having the two in conversation. Read, then stop and think about what you read, organize it, write down questions, read more with those in mind.

But thinking a lot without reading is, I'd posit, a common trap that very smart people fall into. In my experience, smart people trained in science and engineering are especially susceptible when it comes to social problems--sometimes because they explicitly don't trust "softer" social science, and sometimes because they... (read more)

2
MichaelA
3y
Yeah, I broadly share those views. Regarding your final paragraph, here are three posts you might find interesting on that topic:

* Learnings about literature review strategy from research practice sessions
* Literature Review For Academic Outsiders: What, How, and Why
* How to get up to speed on a new field of research?

(Of course, a huge amount has also been written on that topic by people outside of the EA and rationality communities, and I don't mean to imply that anyone should necessarily read those posts rather than good things written by people outside of those communities. But those three posts are things that I happen to have read and found useful.)

I take this post to raise both practical/strategic and epistemological/moral reasons to think EAs should avoid being too exclusive or narrow in what they say "EAs should do." Some good objections have been raised in the comments already. 

Is it possible this post boils down to shifting from saying what EAs should do to what EAs should not do? 

That sounds maybe intuitively unappealing and un-strategic because you're not presenting a compelling, positive message to the outside world. But I don't mean literally going around telling people what not to... (read more)

4
Davidmanheim
3y
I think that negative claims are often more polarizing than positive ones, but I agree that there is a reason to advocate for a large movement that applies science and reasoning to do some good. I just think it already exists, albeit in a more dispersed form than a single "EA-lite." (It's what almost every large foundation already does, for example.)  I do think that there is a clear need for an "EA-Heavy," i.e. core EA, in which we emphasize the "most" in the phrase "do the most good." My point here is that I think that this core group should be more willing to allow for diversity of action and approach. And in fact, I think the core of EA, the central thinkers and planners, people at CEA, Givewell, Oxford, etc. already advocate this. I just don't think the message has been given as clearly as possible to everyone else.

This kind of debate is why I'd like to see the next wave of Tetlock-style research focus on the predictive value of different types of evidence. We know a good bit now about the types of cognitive styles that are useful for predicting the future, and even for estimating causal effects in simulated worlds. But we still don't know that much about the kinds of evidence that help. (Base rates, sure, but what else?) Say you're trying to predict the outcome of an experiment. Is reading about a similar experiment helpful? Is descriptive data helpful? Is interview... (read more)
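One shape such a study could take (a sketch, with invented numbers): give forecasters different evidence packets for the same set of questions, then compare accuracy with the standard Brier score once the questions resolve.

```python
# A sketch of comparing the predictive value of evidence types.
# All forecasts and outcomes below are hypothetical.

def brier(forecasts, outcomes):
    """Mean squared error of probabilistic forecasts against 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]                    # how five questions resolved
given_base_rates = [0.8, 0.3, 0.7, 0.6, 0.2]  # forecasts made with base rates
given_interviews = [0.6, 0.5, 0.5, 0.5, 0.4]  # forecasts made with interviews

# If base rates help more than interviews, the first score is lower.
print(brier(given_base_rates, outcomes) < brier(given_interviews, outcomes))  # True
```

Run across many forecasters and questions, the gap between the two scores would estimate how much each evidence type actually helps.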

5
Answer by rorty
Dec 31, 2020

Here's the case that the low-fidelity version is actually better. Not saying I believe it, but trying to outline what the argument would be...

Say the low-fidelity version is something like: "Think a bit about how you can do the most good with your money and time, and do some research." 

Could this be preferable to the real thing?

It depends on how sharply diminishing the returns are to the practice of thinking about all of this stuff. Sometimes it seems like effective altruists see no diminishing returns at all. But it's plausible that they are steeply ... (read more)

Unfortunately I think the importance of EA actually goes up as you focus on better and better things. My best guess is that the distribution of impact is lognormal, which means that going from, say, the 90th percentile best thing to the 99th could easily be a bigger jump than going from, say, the 50th percentile to the 80th.

You're right that at some point diminishing returns to more research must kick in and you should take action rather than do more research, but I think that point is well beyond "don't do something obviously bad", and more like "after you've thought really carefully about what the very top priority might be, including potentially unconventional and weird-seeming issues".
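The lognormal claim above is easy to check numerically. A quick illustration (the spread parameter is arbitrary, not an estimate of the real impact distribution):

```python
# Compare percentile jumps under a lognormal distribution.
# sigma = 2.0 is chosen only for illustration.
from math import exp
from statistics import NormalDist

sigma = 2.0

def quantile(p):
    """Lognormal quantile with mu = 0: exp(sigma * Phi^-1(p))."""
    return exp(sigma * NormalDist().inv_cdf(p))

jump_50_to_80 = quantile(0.80) - quantile(0.50)
jump_90_to_99 = quantile(0.99) - quantile(0.90)

print(jump_90_to_99 > jump_50_to_80)  # True: the tail dominates
```

With these numbers the 90th-to-99th jump is more than an order of magnitude larger than the 50th-to-80th jump, which is the heavy-tail intuition behind prioritizing hard at the top.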

Do you see this area as limited to cases where participants in a decision are trying and failing to make "good" decisions by their own criteria (i.e., where incentives are aligned but performance falls short because of bad process or similar)? Or are you also thinking of cases where participants have divergent goals, and decisions that are suboptimal from an EA standpoint are driven by conflict and misaligned incentives rather than by process failures?

3
Vicky Clayton
3y
Thanks for the comment rorty! It's a really good question. I think the simple answer is that we don't know at this stage. I don't think it has to be the dichotomy you suggest, though. A process could help individuals within a group align better and figure out what compromises they are happy to make. The question of whether we try to change people's goals, I think, depends on how tractable it is, and we also recognise that there are already considerable efforts in EA movement building which may better cover trying to change people's goals. Thanks again.

Agree on both points. The Economist's World in 2021 partnership with Good Judgment is interesting here. I also think that as GJ and others do more content themselves, other content producers will start to see the potential of forecasts as a differentiated form of user-generated content they could explore. (My background is media/publishing, so I'm more attuned to that side than to the internal dynamics of the social platforms.) If there are further discussions on this and you're looking for participants, let me know.

This is a very good idea. The problems, in my view, are biggest on the business model and audience demand side. But there are still modest ways it could move forward. Journalism outlets are possible collaborators, but they need an incentive, perhaps being able to make original content out of the forecasts.

To the extent prediction accuracy correlates with other epistemological skills, you could give above-average forecasters in the audience tasks like up- and down-voting content or comments, too. And thereby improve user participation on news sites even if journalists did not themselves make predictions.
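As a sketch of what that could look like (the weighting scheme here is invented for illustration): weight each reader's vote by their forecasting track record, so demonstrated forecasters move rankings more than average users do.

```python
# Skill-weighted voting sketch. The scheme and numbers are hypothetical.

def weighted_score(votes):
    """votes: (direction, skill) pairs; direction is +1 or -1, skill is the
    voter's forecasting accuracy relative to the crowd (1.0 = average)."""
    return sum(direction * skill for direction, skill in votes)

# Two above-average forecasters upvote; three average users downvote.
votes = [(+1, 1.8), (+1, 1.6), (-1, 1.0), (-1, 1.0), (-1, 1.0)]
print(weighted_score(votes) > 0)  # True: the skilled minority carries it
```

A plain vote count would rank this comment negatively; the weighted version surfaces it, which is the intended effect of tying voting power to forecasting skill.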

2
David_Althaus
3y
Thanks! I agree. Yeah, maybe such outlets could receive financial support for their efforts by organizations like OpenPhil or the Rockefeller Foundation—which supported Vox's Future Perfect. Interesting idea. More generally, it might be valuable if news outlets adopted more advanced commenting systems, perhaps with Karma and Karma-adjusted voting (e.g., similar to the EA forum). From what I can tell, downvoting isn't even possible on most newspaper websites. However, Karma-adjusted voting and downvotes could also have negative effects, especially if coupled with a less sophisticated user base and less oversight than on the EA forum.
10
Answer by rorty
Dec 14, 2020

Convergence to best practice produces homogeneity. 

As it becomes easier to do what is likely the best option given current knowledge, fewer people try new things and so best practices advance more slowly.

For example, most organizations would benefit from applying "the basics" of good management practice. But the frontier of management is furthered by experimentation -- people trying unusual ideas that at any given point in time seem unlikely to work. 

I still see the project of improving collective decision-making as very positive on net. But if it succeeds, it becomes important to think about new ways of creating space for experimentation.

11
Answer by rorty
Aug 22, 2020

If you're good at forecasting it's reasonable to expect you'll be above average at reasoning or decision making tasks that require making predictions.

But judgment is potentially different. In "Prediction Machines," Agrawal et al. separate judgment and prediction as two distinct parts of decision making, where the former involves weighing tradeoffs. That's harder to measure, but it's a potentially distinct way to think about the difference between judgment and forecasting. They have a theoretical paper on this decision-making model too.
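A toy formalization of that split (my own framing, not taken from the book): prediction supplies probabilities over states of the world, judgment supplies the payoffs, and the decision combines the two.

```python
# Decision = prediction (probabilities) + judgment (payoffs).
# The states, actions, and numbers below are a made-up example.

def expected_value(action_payoffs, probs):
    return sum(probs[s] * action_payoffs[s] for s in probs)

probs = {"rain": 0.3, "dry": 0.7}            # prediction
payoffs = {                                   # judgment: weighing tradeoffs
    "carry umbrella": {"rain": 1.0, "dry": -0.2},
    "leave it":       {"rain": -5.0, "dry": 0.5},
}

best = max(payoffs, key=lambda a: expected_value(payoffs[a], probs))
print(best)  # carry umbrella
```

A perfect forecaster with the wrong payoffs still decides badly, which is why forecasting skill and judgment can come apart.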

4
Linch
4y
I think I agree with this answer.