The Folly of "EAs Should"

I take this post to raise both practical/strategic and epistemological/moral reasons to think EAs should avoid being too exclusive or narrow in what they say "EAs should do." Some good objections have been raised in the comments already. 

Is it possible this post boils down to shifting from saying what EAs should do to what EAs should not do? 

That may sound intuitively unappealing and unstrategic, because you're not presenting a compelling, positive message to the outside world. But I don't mean literally going around telling people what not to do. I mean focusing on shifting people away from clearly bad or neutral activities and toward positive ones, rather than focusing so much on identifying the optimal paths. I raised this before in my "low-fidelity EA" comment:

Even if you don't think there are epistemological/moral reasons for this, there may be practical/strategic ones: A large movement that applies rationality and science to encourage all its participants to do some good may do a lot more good than a small one that uses it to do the most good.

Julia Galef and Angus Deaton: podcast discussion of RCT issues (excerpts)

This kind of debate is why I'd like to see the next wave of Tetlock-style research focus on the predictive value of different types of evidence. We know a good bit now about the cognitive styles that are useful for predicting the future, and even for estimating causal effects in simulated worlds. But we still don't know that much about the kinds of evidence that help. (Base rates, sure, but what else?) Say you're trying to predict the outcome of an experiment. Is reading about a similar experiment helpful? Is descriptive data? Is interviewing three people who've experienced the phenomenon? When is each more or less useful? It's time to take these questions about evidence out of the realms of philosophy, statistical theory, and personal opinion and study them as social phenomena. And yes, that is circular, because what kind of evidence on evidence counts? But I think we'd still benefit from knowing a lot more about the usefulness of different sorts of evidence, and prediction tournaments would be a nice way to study their cash value.

What’s the low resolution version of effective altruism?

Here's the case that the low-fidelity version is actually better. Not saying I believe it, but trying to outline what the argument would be...

Say the low-fidelity version is something like: "Think a bit about how you can do the most good with your money and time, and do some research." 

Could this be preferable to the real thing?

It depends on how sharply diminishing the returns are to thinking about all of this stuff. Sometimes it seems like effective altruists see no diminishing returns at all. But it's plausible that the returns diminish steeply, and that in practice the value of EA lies in avoiding obviously bad uses of time and money, rather than in successfully parsing whether AI safety is a better or worse area of focus than institutional decision-making.

If you can get most of the benefits of EA with people just thinking a little about whether they're doing as much good as they could be, perhaps low-fidelity EA is the best EA: it does a lot of good and saves a lot of time for other things. And that's before you add in the potential of the low-fidelity version to spread more quickly and put off fewer people, thereby also potentially doing much more good.

Improving Institutional Decision-Making: a new working group

Do you see this area as limited to cases where participants in a decision are trying and failing to make "good" decisions by their own criteria (i.e., where incentives are aligned but performance falls short because of bad process or similar)? Or are you also thinking of cases where participants have divergent goals, and suboptimal decisions from an EA standpoint are driven by conflict and misaligned incentives rather than by process failures?

Incentivizing forecasting via social media

Agree on both points. The Economist's World in 2021 partnership with Good Judgment is interesting here. I also think that as GJ and others do more content themselves, other content producers will start to see the potential of forecasts as a differentiated form of user-generated content they could explore. (My background is in media/publishing, so I'm more attuned to that side than to the internal dynamics of the social platforms.) If there are further discussions on this and you're looking for participants, let me know.

Incentivizing forecasting via social media

This is a very good idea. In my view the problems are biggest on the business-model and audience-demand side, but there are still modest ways it could move forward. Journalism outlets are possible collaborators, but they need an incentive, perhaps the ability to make original content out of the forecasts.

To the extent that prediction accuracy correlates with other epistemic skills, you could ask above-average forecasters in the audience to take on tasks like up- and down-voting content or comments, too, and thereby improve user participation on news sites even if journalists did not themselves make predictions.
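One way a site might implement this is to weight each reader's vote by their forecasting track record. Here's a minimal sketch of that idea; the function names, the data, and the choice of (1 − Brier score) as the weight are all hypothetical illustrations, not anything proposed in the comment above:

```python
# Sketch: weight up/down-votes by each voter's forecasting track record.
# For binary forecasts, the Brier score runs from 0 (perfect) to 1 (worst),
# so (1 - brier) is a simple accuracy-based weight. All names are hypothetical.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and binary outcomes."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

def weighted_vote_total(votes, track_records):
    """Sum +1/-1 votes, each scaled by the voter's (1 - Brier) weight.

    votes: dict of voter -> +1 (upvote) or -1 (downvote)
    track_records: dict of voter -> list of (probability, outcome) pairs
    """
    total = 0.0
    for voter, vote in votes.items():
        weight = 1.0 - brier_score(track_records[voter])
        total += vote * weight
    return total

# Example: an accurate forecaster's upvote outweighs an inaccurate one's downvote.
records = {
    "alice": [(0.9, 1), (0.8, 1), (0.1, 0)],  # well calibrated: low Brier score
    "bob":   [(0.9, 0), (0.2, 1), (0.8, 0)],  # poorly calibrated: high Brier score
}
votes = {"alice": +1, "bob": -1}
print(round(weighted_vote_total(votes, records), 3))  # → 0.677
```

A real system would need more care (e.g., minimum forecast counts before weighting kicks in, and guarding against gaming), but the core mechanism is just this: scale each vote by a measured accuracy score.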

How might better collective decision-making backfire?

Convergence to best practice produces homogeneity. 

As it becomes easier to do what is likely the best option given current knowledge, fewer people try new things and so best practices advance more slowly.

For example, most organizations would benefit from applying "the basics" of good management practice. But the frontier of management is furthered by experimentation -- people trying unusual ideas that at any given point in time seem unlikely to work. 

I still see the project of improving collective decision-making as very positive on net. But if it succeeds, it becomes important to think about new ways of creating space for experimentation.

How can good generalist judgment be differentiated from skill at forecasting?
Answer by rorty, Aug 22, 2020

If you're good at forecasting, it's reasonable to expect you'll be above average at reasoning or decision-making tasks that require making predictions.

But judgment is potentially different. In "Prediction Machines," Agrawal et al. separate judgment and prediction as two distinct parts of decision-making, where judgment involves weighing tradeoffs. Judgment is harder to measure, but this offers a useful way to think about how it differs from forecasting. They also have a theoretical paper on this decision-making model.