A beginner data scientist tries her hand at biosecurity

First, congratulations. This is impressive, you should be very proud of yourself, and I hope this is the beginning of a long and fruitful data science career (or avocation) for you.

What is going on here?

I think the simplest explanation is that your model fit better because you trained on more data. You write that your best score was obtained by applying XGBoost to the entire feature matrix, without splitting it into train/test sets. So assuming the other teams did things the standard way, you were working with 25%-40% more data to fit the model. In a lot of settings, particularly with tree-based methods like XGBoost, this is a recipe for overfitting. In this setting, however, the structure of the public test data was probably close enough to the structure of the private test data that skipping validation on the public dataset paid off for you.
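To make the data-size arithmetic explicit: if the other teams held out a fraction h of the rows for validation, training on everything uses 1/(1-h) times as much data. A minimal sketch (the function name is my own, for illustration):

```python
# How much more training data does skipping the holdout give you?
# If other teams held out a fraction h of rows for validation,
# training on everything uses 1 / (1 - h) times as many rows.

def extra_data_fraction(holdout: float) -> float:
    """Relative increase in training rows from skipping a holdout split."""
    return 1.0 / (1.0 - holdout) - 1.0

for h in (0.20, 0.25, 0.30):
    print(f"holdout {h:.0%} -> {extra_data_fraction(h):.0%} more training data")
```

Common holdout fractions of 20-30% put you in roughly the 25-40% range mentioned above.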

I think one interpretation of this is that you got lucky in that way. But I don't think that's the right takeaway. I think the right takeaway is that you kept your eye on the ball and chose the strategy that worked based on your understanding of the data structure and the available methods and you should be very satisfied.


Effective donation for Moria / Lesbos

I wonder if the forum shouldn't encourage a class of post (basically like this one) along the lines of "are there effective giving opportunities in X context?" Although EA is cause-neutral, there's no reason members shouldn't take the opportunity provided by serendipity to investigate highly specific scenarios and model "virtuous EA behavior." This could make the forum friendlier to visitors like the OP, and give commenters a way to introduce visitors to EA concepts in a way that's emotionally relevant.

EA's abstract moral epistemology

I also found this (ironically) abstract. There are more than enough philosophers on this board who could translate this for us, but I think it might be useful to give it a shot and let somebody smarter correct my misinterpretations.

The author suggests that the "radical" part of EA is the idea that we are just as obligated to help a child drowning in a faraway pond as in a nearby one:

The morally radical suggestion is that our ability to act so as to produce value anywhere places the same moral demands on us as does our ability to produce value in our immediate practical circumstances

She notes that what she sees as the EA moral view excludes "virtue-oriented" or subjective moral positions, and lists several views (e.g. "Kantian constructivist") that are restricted if one takes what she sees as the EA moral view. She maintains that such views, which (apparently) have a long history at Oxford, have a lot to offer in the way of critique of EA.

Institutional critique

In a nutshell: EA focuses too much on what it can measure, and what it can measure are incrementalist approaches that ignore the "structural, political roots of global misery." The author grants that the EA responses to this criticism (that even efforts at systemic change can be evaluated and judged effective) are fair. She says these responses amount to the claim that the institutional critique targets how closely EA hews to its tenets, rather than the tenets themselves. She disagrees with this claim.

Philosophical critique

This critique holds that EAs basically misunderstand what morality is-- that the point of view of the universe is not really possible. The author argues that attempting to take this perspective actively "deprives us of the very resources we need to recognise what matters morally"-- in other words, taking the abstract view eliminates moral information from our reasoning.

The author lists some of the features of the worldview underpinning the philosophical critique. Acting rightly includes:

acting in ways that are reflective of virtues such as benevolence, which aims at the well-being of others


acting, when appropriate, in ways reflective of the broad virtue of justice, which aims at an end—giving people what they are owed—that can conflict with the end of benevolence

She concludes:

In a case in which it is not right to improve others’ well-being, it makes no sense to say that we produce a worse result. To say this would be to pervert our grasp of the matter by importing into it an alien conception of morality ... There is here simply no room for EA-style talk of “most good.”

So in this view there are situations in which morality is more expansive than the improvement of others' well-being, and taking the abstract view eliminates these possibilities.

The philosophical-institutional critique

The author combines the philosophical and institutional critiques. The crux of this view seems to be that large-scale social problems have an ethical valence, and that it's basically impossible to understand or begin to rectify them if you take the abstract (god's eye) view, which eliminates some of this useful information:

Social phenomena are taken to be irreducibly ethical and such that we require particular modes of affective response to see them clearly ... Against this backdrop, EA’s abstract epistemological stance seems to veer toward removing it entirely from the business of social understanding.

This critique maintains that it's the methodological tools of EA ("economic modes of reasoning") that block understanding, and articulates part of the worldview behind this critique:

Underlying this charge is a very particular diagnosis of our social condition. The thought is that the great social malaise of our time is the circumstance, sometimes taken as the mark of neoliberalism, that economic modes of reasoning have overreached so that things once rightly valued in a manner immune to the logic of exchange have been instrumentalised.

In other words, the overreach of economic thinking into moral philosophy is a kind of contamination that blinds EA to important moral concerns.


Finally, the author contends that EA's framework constrains "available moral and political outlooks," and ties this to the lack of diversity within the movement. By excluding more subjective strains of moral theory, EA excludes the individuals who "find in these traditions the things they most need to say." In order for EA to make room for these individuals, it would need to expand its view of morality.

The Risk of Concentrating Wealth in a Single Asset

I'm curious to hear Michael's response, but also interested to hear more about why you think this. I have the opposite intuition: presumably 1910 had its fair share of moonshots which seemed crazy at the time and which turned out, in fact, to be basically crazy, which is why we haven't heard about them.

A portfolio which included Ford and Edison would have performed extremely well, but I don't know how many possible 1910 moonshot portfolios would have included them, or would have weighted them heavily enough to outperform the many other failed moonshots.

Introducing LEEP: Lead Exposure Elimination Project

I'm really excited to see this!

I understand that, lead abatement itself aside, the alkalinity of the water supply seems to affect lead absorption in the human body and its attendant health effects. I'm curious (1) whether this effect is significant, (2) whether interventions to change the pH of the water supply are competitive in cost-effectiveness terms with other types of interventions, and (3) whether this has been tried.

No More Pandemics: a lobbying group?

The venue of advocacy here will depend at least in part on the policies you decide are worth advocating. Even with hundreds of grassroots volunteers, it will be hard to ensure the fidelity of the message you are trying to communicate. It is hard at first blush to imagine how greater attention to pandemic preparedness could do harm, but it is not difficult to imagine that simply exhorting government to "do something" could have bad consequences.

Given the situation, it seems likely that governments preparing for future pandemics without clear guidance will prepare for a repeat of the pandemic that is already happening, rather than a different and worse one in future.

Once you select a highly effective policy worth advocating (for example, an outbreak contingency fund), that's the stage at which to determine the venue and the tactics. I'm not a bio expert, but it's not difficult to imagine that once you identify a roster of potential policies, the most effective in expectation may involve, for example, lobbying Heathrow Airport Holdings or the Greater London Authority rather than Parliament.

Some learnings I had from forecasting in 2020

The EA community overrates the predictive validity and epistemic superiority of forecasters/forecasting.

This seems to be true and also to be an emerging consensus (at least here on the forum).

I've only been forecasting for a few months, but it's starting to seem to me like forecasting does have quite a lot of value—as valuable training in reasoning, and as a way of enforcing a common language around discussion of possible futures. The accuracy of the predictions themselves seems secondary to the way that forecasting serves as a calibration exercise. I'd really like to see empirical work on this, but anecdotally it does feel like it has improved my own reasoning somewhat. Curious to hear your thoughts.
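One way to make the "calibration exercise" idea concrete is the Brier score, a standard accuracy measure for probabilistic forecasts of binary events. A minimal sketch with purely hypothetical forecasts:

```python
# Brier score: mean squared error between probabilistic forecasts and
# binary outcomes. Lower is better; always guessing 50% scores 0.25.

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: fairly confident and mostly right.
forecasts = [0.9, 0.8, 0.2, 0.7, 0.1]
outcomes = [1, 1, 0, 1, 0]

print(brier_score(forecasts, outcomes))  # 0.038, well under the 0.25 baseline
```

Tracking a score like this over many questions is what turns forecasting into the calibration exercise described above: you learn whether your "80% confident" really means 80%.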

[Linkpost] Some Thoughts on Effective Altruism

I think scale/scope is a pretty intuitive way of thinking about problems, which, I imagine, is why it's part of the ITN framework. To my eye, the framework is successful because it reflects intuitive concepts like scale, so I don't see too much of a coincidence here.

If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of e.g. residential recycling, or US criminal justice, as being too small a scale an issue to warrant much concern.

This is a good point. I don't see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice. Still, you're right that my "straw activist" would probably scoff at AI risk, for example.

I guess I'd say that the way of thinking I've described doesn't imply an accurate assessment of problem scale, and since skepticism about the (relatively formal) arguments on which concerns about AI risk are based is core to the worldview, there'd be no reason for someone like this to accept that some of the more "out there" GCRs are GCRs at all.

Quite separately, there is a tendency among all activists (EAs included) to see convergence where there is none, and I think this goes a long way toward neutralizing legitimate but (to the activist) novel concerns. Anecdotally, I see this a lot—the proposition, for instance, that international development will come "along for the ride" when the U.S. gets its own racial justice house in order, or that the end of capitalism necessarily implies more effective global cooperation.

[Linkpost] Some Thoughts on Effective Altruism

This is certainly a charitable reading of the article, and you are doing the right thing by trying to read it as generously as possible. I think they are indeed making this point:

the technocratic nature of the approach itself will only very rarely result in more funds going to the type of social justice philanthropy that we support with the Guerrilla Foundation – simply because the effects of such work are less easy to measure and they are less prominent among the Western, educated elites that make up the majority of the EA movement

This criticism is more than fair. I have to agree with it and simultaneously point out that of course this is a problem that many are aware of and are actively working to change. I don't think that they're explicitly arguing for the worldview I was outlining above. This is my own perception of the motivating worldview, and I find support in the authors' explicit rejection of science and objectivity.

[Linkpost] Some Thoughts on Effective Altruism

I can get behind your initial framing, actually. It's not explicit—I don't think the authors would define themselves as people who don't believe decision under uncertainty is possible—but I think it's a core element of the view of social good professed in this article and others like it.

A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.

These people and groups select causes based only on perceived scale. They don't necessarily think that malaria and AI risk aren't important, they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.
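The contrast between the two optimization strategies can be made concrete with a toy calculation (all numbers here are illustrative assumptions, not real estimates):

```python
# A toy contrast between the two optimization strategies described above.
# All probabilities and values are made-up illustrations.

causes = {
    # name: (probability of success, value if successful)
    "avert 100 malaria infections": (0.9, 100),
    "overthrow global capitalism": (1e-7, 1e8),
}

# EA-style ranking: expected value = probability * value.
ev = {name: p * v for name, (p, v) in causes.items()}

# The alternative ranking: value conditional on success, ignoring probability.
conditional = {name: v for name, (p, v) in causes.items()}

print(max(ev, key=ev.get))                    # the malaria intervention wins on EV
print(max(conditional, key=conditional.get))  # the moonshot wins on scale alone
```

The two rankings flip precisely because the conditional-on-success view treats wildly different success probabilities as if they were equal.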

To me, this is not necessarily reflective of innumeracy or a lack of comfort with probability. It seems more like a really radical second- and third-order uncertainty about the value of certain kinds of reasoning— a deep-seated mistrust of numbers, science, experts, data, etc. I think the authors of the posted article lay their cards on the table in this regard:

the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites

These are people who associate the conventions and methods of science and rationality with their instrumental use in a system that they see as inherently unjust. As a result of that association, they're hugely skeptical about the methods themselves, and aren't able or willing to use them in decision-making.

I don't think this is logical, but I do think it is understandable. Many students, in particular American ones (though I recognize that Guerrilla is a European group) have been told repeatedly, for many years, that the central value of learning science and math lies in getting a good job in industry. I think it can be hard to escape this habituation and see scientific thinking as a tool for civilization instead of as some kind of neoliberal astrology.
