
MatthewDahlhausen

Research Engineer @ National Renewable Energy Laboratory
799 karma · Joined Nov 2014 · Working (6-15 years)

Bio

I develop software tools for the building energy efficiency industry. My background is in architectural and mechanical engineering (MS Penn State, PhD University of Maryland). I know quite a bit about indoor air quality and indoor infectious disease transfer, and closely follow all things related to climate change and the energy transition. I co-organize the local EA group in Denver, Colorado.

Comments (115)

Given the differentiation between normative and factual beliefs, I'm having a hard time parsing the last sentence in the post: "It is hard to maintain tragic beliefs. On the face of it, it makes the world worse to believe them. But in order to actually do as much good as we can, we need to be open to them, while finding ways to keep a healthy relationship with tragedy."

Is the "worseness" a general worseness for the world, or specific to the believer? Does doing the most good (normative claim) necessarily require tragic beliefs (factual claim)? What is a "healthy relationship with tragedy"? Where does the normative claim that we should have only healthy relationships with tragedy derive its force? If we can't have a "joyful" flavor of righteousness, does that mean we ought not hold tragic beliefs?

Personal feelings about tragic beliefs are incidental; for someone with righteous beliefs, whether or not they feel joy or pain for having them seems irrelevant. Though we can't say with any certitude, I doubt Benjamin Lay had his personal happiness and health in the forefront of his mind in his abolitionist work. Perhaps instrumentally.

Ought Benjamin Lay not to have lived in a cave, even if that meant compromising on acting out his tragic beliefs?

There are two kinds of belief: belief in factual statements, and belief in normative statements.

“Insect suffering matters” is a normative statement, “people dying of preventable diseases could be saved by my donations” is a factual one. A restatement of the preventable disease statement in normative terms would look like: "If I can prevent people dying of preventable diseases by my donations at no greater cost to myself, I ought to do it."

I think tragic beliefs derive their force from being normative. "Metastatic cancer is terminal" is not tragic because of its factual nature, but because we think it sad that the patient dies with prolonged suffering before they've lived a full life.

Normative statements are not true in the same way as factual statements; the is-ought gap is wide. For them to be true assumes a meta-ethical position. If someone's meta-ethics disregards or de-emphasizes suffering, even suffering for which they are directly responsible, then “insect suffering matters” carries no tragic force.

The real force of tragic beliefs comes earlier. For insects, it is a consequence of another, more general belief: "suffering matters regardless of the species experiencing it", combined with a likely factual statement about the capacity for insects to suffer, and a factual statement about our complicity. In fact, if one assumes the more general belief, and takes the factual statements as true, it is hard to avoid the conclusion "insect suffering matters" without exploding principles. At that point avoidance is more about personal approaches to cognitive dissonance.

I'm inclined to reserve the tragic label for unavoidable horrors for which we are responsible. Think Oedipus, Hamlet, or Demodex mites. But I understand there is a tragic element to believing unpopular things, especially normative ones, given the personal costs from social friction.

I suggest being highly skeptical of the work coming from the Copenhagen Consensus Center. Its founder, Bjorn Lomborg, has on several occasions been found to have committed scientific dishonesty. I wouldn't use this report to determine the "best investments" without independently verifying the data and methodology.

Downvoted, because I think the argument's premises are flawed and its conclusions don't follow from them. It relies heavily on a "fruit of the poisonous tree" idea: because EA gets resources from civilization, and civilization can create the tools of its own destruction, EA is inherently flawed. That is nonsense. The same argument could be used to dismiss any action that uses resources as morally corrupt and ineffectual. Surely, at the margin, there are actions that reduce existential risk more than they promote it.

I watched the video and then downvoted this post. The video is a criticism of EA and philanthropy, but there isn't anything new, thoughtful, or useful. I would have upvoted if I thought the criticism was insightful. We've had much better left-wing criticism of EA before on the forum.

Adam and Amy make basic mistakes. For example, at 15:30, Adam says that GiveWell recommends funding AI alignment work, and that this made him critical because they weren't also recommending climate change mitigation. Adam treats GiveWell, SBF, and the entire EA movement as the same entity. Amy claims that EA is entirely about saving human lives. Neither demonstrated any awareness of the intense debate on saving vs. improving lives, or of the concern for animals.

Among Amy's examples of good philanthropy are a billion dollars for the Amazon strike fund, and purchasing lots in NYC to turn them into community gardens instead of housing. Adam comes away from the conversation thinking that the philanthropic dollars he gave to the Against Malaria Foundation would have been better spent on the Metropolitan Museum of Art in NYC or a local community garden (60:00). Both celebrate scope neglect, nepotism, and a worldview in which the root of problems is political. They mock trolley problems and other philosophical thought experiments as masturbatory, navel-gazing exercises with no real-world implications. All they have to offer in support of their preferred charities is an onslaught of left-wing buzzwords.

GiveWell has dozens of researchers putting tens of thousands of hours of work into coming up with better models and variable estimates. Their most critical inputs are largely determined by RCTs, and they are constantly working to get better data. A lot of their uncertainty comes from differences in moral weights in saving vs. improving lives.

Founders Pledge builds Monte Carlo simulations on top of complex theory-of-change models where the variable ranges are made up because they are largely unknowable. It's mostly Johannes, with a few assistant researchers, putting a few hundred hours into model choice and parameter selection, with many more hours spent on writing and coding the Monte Carlo analysis (which GiveWell doesn't have to do, because they've got much simpler impact models in spreadsheets).

FP has previously made $1/tCO2e cost-effectiveness claims based on models like this, which were amplified in MacAskill's WWOTF. That model is wildly optimistic. FP now disowns that particular model, but won't take it down or publicly list it as a mistake. They no longer publish their individual intervention CEAs publicly, though they may resume soon.

My biggest criticism is that in these complex theory-of-change models, the structure of the model often matters more than the variable inputs. While FP tries to pick "conservative" variable value assumptions (they rarely are), the model structure is wildly optimistic for their chosen interventions (generally technology innovation policy). And for model feedback, FP doesn't have a good culture or process in place for dealing with criticism, a complaint I've heard from several in the EA climate space. I think FP's uncertainty work has promise as a tool, but the recommendations it produces are largely wrong given their chosen model structure and inputs.
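To make the structure point concrete, here is a minimal sketch, not FP's actual model: all parameter ranges below are invented for illustration. Two toy cost-effectiveness models are fed identical "conservative" inputs and differ only in how many multiplicative steps sit in the theory of change.

```python
# Minimal sketch: model structure vs. variable inputs in a Monte Carlo
# cost-effectiveness analysis. All ranges are invented for illustration;
# this is not Founders Pledge's actual model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Shared "conservative" inputs: probability the advocacy succeeds, and
# tonnes of CO2e abated per $1M spent if it does.
p_success = rng.uniform(0.05, 0.20, n)
tonnes_if_success = rng.uniform(1e5, 1e6, n)

# Structure A: success directly yields the abatement.
cost_a = 1e6 / (p_success * tonnes_if_success)

# Structure B: identical inputs, but success must also survive two more
# steps (the policy is actually implemented; the abatement isn't
# counterfactually displaced), each with its own pass-through factor.
p_implemented = rng.uniform(0.1, 0.5, n)
p_counterfactual = rng.uniform(0.1, 0.5, n)
cost_b = 1e6 / (p_success * p_implemented * p_counterfactual
                * tonnes_if_success)

print(f"Structure A median cost: ${np.median(cost_a):,.0f}/tCO2e")
print(f"Structure B median cost: ${np.median(cost_b):,.0f}/tCO2e")
```

Identical parameter tables, yet the medians land roughly an order of magnitude apart. Which steps to include, and whether they multiply, is a judgment call that never shows up in the "conservative" variable ranges.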

GiveWell's recommendations in the health space are of vastly higher quality and certainty than FP's in the climate space.

Founders Pledge saying they can offset a ton of CO2 for $0.1-1 is like a malaria net charity saying they can save a life for $5.

Both are off by at least an order of magnitude. You should expect to spend at least $100/ton for robust, verifiable offsets. That brings your offset cost to $3,500, not $35.
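To spell out the arithmetic in a quick sketch, assuming the quantity being offset is the ~35 tCO2e implied by the $35-at-$1/ton figure:

```python
# Back-of-the-envelope offset cost. The 35 tCO2e is inferred from the
# $35 figure at $1/ton; it is an assumption for illustration.
tonnes = 35            # tCO2e to offset
claimed_price = 1.0    # $/tCO2e, upper end of the $0.1-1 claim
robust_price = 100.0   # $/tCO2e for robust, verifiable offsets

print(f"At ${claimed_price:.2f}/tCO2e: ${tonnes * claimed_price:,.0f}")  # $35
print(f"At ${robust_price:.2f}/tCO2e: ${tonnes * robust_price:,.0f}")    # $3,500
```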

Yes, I see your point. I used the video-of-torture instead of direct torture example to try to get around the common objections of demand-elasticity and psychological distance.

I think the space for refuge in the psychological difference is a lot smaller than it may seem. Let's try another example.

Suppose you purchase a piglet that you keep in a dark, confined cage for six months and then slaughter. Would you have done something wrong in the psychological sense for being so personally responsible for its life through to slaughter? Is that still vastly psychologically different from torturing puppies? Perhaps the intentionality, or the desire to inflict suffering, is the most relevant consideration for psychological culpability here?

If imprisoning the pig is not all that different from puppy torture, then the psychological difference seems to hinge on whether or not you have someone else do the unpleasant task of raising and then dispatching the pig for you. It seems odd to me that enjoying the fruits of instrumental torture becomes psychologically benign simply through outsourcing and concentrating moral culpability in a few persons. Perhaps that's the case. But I don't think that's a universal intuition.

A lot has been written about the distinction between failing to donate to charity and failing to save a drowning child. Intuition does seem to draw a clear difference. But I'm not sure that difference is as solid as intuited after reading Peter Unger's thought experiments in "Living High and Letting Die". I'm guessing you may have read that book and have preferred counterexamples?

"Of course, some mistakes are more egregious than others. Perhaps many reserve the term ‘wrong’ for those moral mistakes that are so bad that you ought to feel significant guilt over them. I don’t think eating meat is wrong in that sense. It’s not like torturing puppies..."

But it is a lot like torturing puppies. Or at least it is a lot like paying puppy torturers for access to a video of them torturing puppies because you get enjoyment out of watching the torture. The mechanized torture of young animals is a huge part of factory farming, which you support by buying meat.

Are you recommending people restrict their CATF donations, or give to CATF unrestricted?

How did CATF's net harmful work on 45Q influence your recommendation of CATF this year?
