

Speaking of gut instincts, cognitive psychology looks a lot at the forms gut instincts take and how they fool us into bad answers or bad lines of reasoning; it calls these cognitive biases. When building models, how do you ensure that as little of this bias as possible ends up in the model? To add to that, does some of the uncertainty you mentioned in other answers come from these biases, or is it purely statistical?

It often seems to me that machine learning models surface solutions or insights that, even if researchers already knew them, are closely linked to the problem being modelled. In your experience, does this happen often with ML? If so, does that make ML a particularly good tool for Effective Altruism? If not, where exactly does this tendency come from?

(As an example of this 'tendency', one study used neural networks to find that estrogen exposure and folate deficiency were closely correlated with breast cancer. Source: https://www.sciencedirect.com/science/article/abs/pii/S0378111916000706 )