Matt_Lerner

Comments

AMA: Tom Chivers, science writer, science editor at UnHerd

What do you see as the consequentialist value of doing journalism? What are the ways in which journalists can improve the world? And do you believe these potential improvements are measurable?

Do power laws drive politics?

One thing to note here is that many commonly used power-law distributions have strictly positive support. Political choices can and sometimes do have dramatically negative effects, and many of the catastrophes that EAs are concerned with are plausibly the result of such choices (nuclear catastrophe, for instance).

So a distribution that describes the outcomes of political choices should probably have support on the whole real line, and you wouldn't want to model those choices with most simple power-law distributions. But you might be on to something-- you might think of a hierarchical model in which there's some probability that a decision is either good or bad, and the degree to which it is good or bad is governed by a power-law distribution. That's the model I've been working with, but it seems incomplete to me.
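
For concreteness, here is a minimal sketch of that kind of hierarchical model-- all parameter values (the probability a decision is good, the tail index, the minimum effect size) are hypothetical, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) parameters
p_good = 0.6   # probability a decision's effect is positive
alpha = 1.5    # power-law tail index; smaller = heavier tail
x_min = 1.0    # minimum effect magnitude

n = 100_000
# Sign: is the decision good or bad?
signs = np.where(rng.random(n) < p_good, 1.0, -1.0)
# Magnitude: Pareto draw via inverse CDF, x = x_min * (1 - u)^(-1/alpha)
magnitudes = x_min * (1.0 - rng.random(n)) ** (-1.0 / alpha)
# Outcomes now have support on the whole real line
outcomes = signs * magnitudes

print(outcomes.mean(), np.quantile(outcomes, [0.01, 0.5, 0.99]))
```

With a tail index below 2, the magnitude distribution has infinite variance, so a handful of draws dominates the total-- which is the power-law intuition behind the original question.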

Good altruistic decision-making as a deep basin of attraction in meme-space

I read this post with a lot of interest; lately it has started to seem more likely to me that spreading productive, resilient norms about decision-making and altruism is a more effective means of improving decisions in the long run than any particular set of institutional structures. The knock-on effects of such a phenomenon would, on a long time scale, seem to dwarf the effects of many other ostensibly effective interventions.

So I get excited about this idea. It seems promising.

But some reflection about what is commonly considered precedent for something like this makes me a little bit more skeptical.

I think we see another kind of self-correction mechanism in the belief system of science. It provides tools for recognising truth and discarding falsehood, as well as cultural impetus to do so; this leads not just to the propagation of existing scientific beliefs, but to the systematic upgrading of those beliefs; this isn't drift, but going deeper into the well of truth.

I have a sense that a large part of the success of scientific norms comes down to their utility being immediately visible. Children can conduct and repeat simple experiments (e.g. the baking soda volcano); undergraduates can repeat famous projects with the same results (e.g. the double-slit experiment); and even non-experimentalists can see the logic at the core of contemporary theory (e.g. in middle school geometry or, at the upper level, in real analysis). What's more, the norms seem to be cemented most effectively by precisely this kind of training, and not to spread freely without direct inculcation: scientific thinking is widespread among the trained, and (anecdotally) not so common among the untrained. For many Western non-scientists, science is just another source of formal authority, not a process that derives legitimacy from its robust efficacy.

I can see a way clear to broadening scientific norms to include what you've characterized as "truth-seeking self-aware altruistic decision-making." But I'm having trouble imagining how it could be self-propagating. It would seem, at the very least, to require active cultivation in exactly the way that scientific norms do-- in other words, it would require a lot of infrastructure and investment so that proto-truth-seeking-altruists can see the value of the norms. Or perhaps I am having a semantic confusion: is science self-propagating in that scientists, once cultivated, go on to cultivate others?

Big List of Cause Candidates

I very strongly upvoted this because I think it's highly likely to produce efficiencies in conversation on the Forum, to serve as a valuable reference for newcomers to EA, and to act as a catalyst for ongoing conversation.

I would be keen to see this list take on a life outside the forum as a standalone website or heavily moderated wiki, or as a page under CEA or some such, or at QURI.

I'm not sure why this is being downvoted. I don't really have an opinion on this, but it seems at least worth discussing. OP, I think this is an interesting idea.

Books / book reviews on nuclear risk, WMDs, great power war?

John Lewis Gaddis' The Cold War: A New History contains a number of useful segments about the nuclear tensions between the U.S. and the U.S.S.R., insightful descriptions of policymakers' thinking during these moments, and a consideration of counterfactual histories in which nuclear weapons might have been deployed. I found it useful for getting a picture of what decision-making looks like when the wrong decision potentially means the end of civilization.

Careers Questions Open Thread

How harmful is a fragmented resume? People seem to believe this isn't much of a problem for early-career professionals, but I'm 30, and my longest tenure anywhere was two and a half years (my more recent stints have been shorter). I like to leave for new and interesting opportunities when I find them, but I'm starting to wonder whether I should pass up good opportunities for the sake of appearing more reliable as a potential employee.

A beginner data scientist tries her hand at biosecurity

First, congratulations. This is impressive, you should be very proud of yourself, and I hope this is the beginning of a long and fruitful data science career (or avocation) for you.

What is going on here?

I think the simplest explanation is that your model fit better because you trained on more data. You write that your best score was obtained by applying XGBoost to the entire feature matrix, without splitting it into train/test sets. So assuming the other teams did things the standard way, you were working with 25-40% more data to fit the model. In a lot of settings, particularly with tree-based methods (as XGBoost is by default), skipping a held-out set is a recipe for undetected overfitting. In this setting, however, the structure of the public test data was evidently close enough to the structure of the private test data that forgoing validation paid off for you.
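
To make the contrast concrete, here's a minimal sketch of the two strategies, assuming a generic scikit-learn-style regression setup-- the dataset, model parameters, and metric are illustrative stand-ins, not the contest's actual data or pipeline:

```python
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Stand-in dataset; the real contest data would go here
X, y = make_regression(n_samples=1_000, n_features=20, noise=10.0, random_state=0)

# Standard approach: hold out ~25% so you can estimate generalization error
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
split_model = xgb.XGBRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out MSE:", mean_squared_error(y_test, split_model.predict(X_test)))

# The approach described in the post: fit on everything. ~33% more training
# data, but no internal check on overfitting-- you're trusting that the
# public data resembles the private test data
full_model = xgb.XGBRegressor(n_estimators=200, random_state=0).fit(X, y)
```

The held-out score is what would ordinarily tell you whether the full-data model generalizes; skipping it trades that safety check for extra training data.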

I think one interpretation of this is that you got lucky in that way. But I don't think that's the right takeaway. The right takeaway is that you kept your eye on the ball and chose your strategy based on your understanding of the data structure and the available methods, and you should be very satisfied.


Effective donation for Moria / Lesbos

I wonder if the forum shouldn't encourage a class of posts (basically like this one) that ask something like "are there effective giving opportunities in X context?" Although EA is cause-neutral, there's no reason why members shouldn't take the opportunity provided by serendipity to investigate highly specific scenarios and model "virtuous EA behavior." This could make the forum friendlier to visitors like the OP, and give commenters a way to introduce visitors to EA concepts in an emotionally relevant context.

EA's abstract moral epistemology

I also found this (ironically) abstract. There are more than enough philosophers on this board to translate it for us, but I think it might be useful to give it a shot myself and let somebody smarter correct my misinterpretations.

The author suggests that the "radical" part of EA is the idea that we are just as obligated to help a child drowning in a faraway pond as in a nearby one:

The morally radical suggestion is that our ability to act so as to produce value anywhere places the same moral demands on us as does our ability to produce value in our immediate practical circumstances

She notes that what she sees as the EA moral view excludes "virtue-oriented" or subjective moral positions, and lists several views (e.g. "Kantian constructivist") that are restricted if one takes what she sees as the EA moral view. She maintains that such views, which (apparently) have a long history at Oxford, have a lot to offer in the way of critique of EA.

Institutional critique

In a nutshell: EA focuses too much on what it can measure, and what it can measure are incrementalist approaches that ignore the "structural, political roots of global misery." The author says that the EA responses to this criticism (that even efforts at systemic change can be evaluated and judged effective) are fair. She says that these responses constitute a claim that the institutional critique is a criticism of how closely EA hews to its tenets, rather than of the tenets themselves. She disagrees with this claim.

Philosophical critique

This critique holds that EAs basically misunderstand what morality is-- that the point of view of the universe is not really possible. The author argues that attempting to take this perspective actively "deprives us of the very resources we need to recognise what matters morally"-- in other words, taking the abstract view eliminates moral information from our reasoning.

The author lists some of the features of the worldview underpinning the philosophical critique. Acting rightly includes:

acting in ways that are reflective of virtues such as benevolence, which aims at the well-being of others


acting, when appropriate, in ways reflective of the broad virtue of justice, which aims at an end—giving people what they are owed—that can conflict with the end of benevolence

She concludes:

In a case in which it is not right to improve others’ well-being, it makes no sense to say that we produce a worse result. To say this would be to pervert our grasp of the matter by importing into it an alien conception of morality ... There is here simply no room for EA-style talk of “most good.”

So in this view there are situations in which morality is more expansive than the improvement of others' well-being, and taking the abstract view eliminates these possibilities.

The philosophical-institutional critique

The author combines the philosophical and institutional critiques. The crux of this view seems to be that large-scale social problems have an ethical valence, and that it's basically impossible to understand or begin to rectify them if you take the abstract (god's eye) view, which eliminates some of this useful information:

Social phenomena are taken to be irreducibly ethical and such that we require particular modes of affective response to see them clearly ... Against this backdrop, EA’s abstract epistemological stance seems to veer toward removing it entirely from the business of social understanding.

This critique maintains that it's the methodological tools of EA ("economic modes of reasoning") that block understanding. The author articulates part of the worldview behind this position:

Underlying this charge is a very particular diagnosis of our social condition. The thought is that the great social malaise of our time is the circumstance, sometimes taken as the mark of neoliberalism, that economic modes of reasoning have overreached so that things once rightly valued in a manner immune to the logic of exchange have been instrumentalised.

In other words, the overreach of economic thinking into moral philosophy is a kind of contamination that blinds EA to important moral concerns.

Conclusion

Finally, the author contends that EA's framework constrains "available moral and political outlooks," and ties this to the lack of diversity within the movement. By excluding more subjective strains of moral theory, EA excludes the individuals who "find in these traditions the things they most need to say." In order for EA to make room for these individuals, it would need to expand its view of morality.
