Harrison Durland


Yeah, the language in your comment really resonates with me/my emotions and also gives me a more negative view of the OP, yet I am worried about being overly influenced by 1) the quality of the OP (relative to the legitimacy of the underlying points), and 2) my emotions on this.

Ultimately, I think the second order effects still dominate and warrant someone somewhere giving this request a good think-through separate from emotional reactions:

  1. Does not providing some funds to Salinas hurt the chances of future EA candidates or advocacy (especially those who might run for or target the Democratic Party) due to Dem opposition/bitterness (regardless of how legitimate such feelings may be)?

  2. Does providing funds to Salinas hurt the chances of future EA candidates or advocacy (especially those who might run for or target the Republican Party) due to Republicans portraying EA as a “fund blue no matter who” movement (regardless of how legitimate such a label may be)?

Wow, opinion on this post went really negative since I last checked. I figured its karma would hang out around 0–15…

I initially downvoted for many of the same reasons. And tbh, I still don’t like this post, as it does give off big “give us money please” vibes without really justifying some of its key claims.

But ultimately I rescinded the downvote because it (sort of) raises a good point: IF people actually believe Flynn/EA undermined Salinas enough to cost the Dems the election, it might really look bad for EA. This leads me to wonder whether it might be worth paying something to offset part of the (supposed) damages.

Personally, I’m reluctant to cave to mudslinging and misrepresentative rhetoric used to extort money out of a good cause. If Salinas came out genuinely in favor of pandemic prevention policies, I’d probably be quite supportive of providing funds—but otherwise I’d be iffy on it.

But it’s not my money…

Update: Actually, I’m becoming much more pessimistic about funding Salinas unless she clearly supports pandemic preparedness/prevention, because otherwise it would come across as more partisan (“we’re funding this Democrat because… we were told we hurt this Democrat…”). And appeals to political slogans like “short term business success instead of long term sustainability of our world” are honestly a bit intellectually insulting to me.

Ultimately, you should probably tailor messages to your audience, given their understanding, objections/beliefs, values, etc. If you think they understand the phrase “externalities,” I agree, but a sizable number of people in the world do not properly understand the concept.

Overall, I agree that this is probably a good thing to emphasize, but FWIW I think a lot of pitches I’ve heard/read do emphasize this insofar as it makes sense to do so, albeit not always with the specific term “externality.”

I’m really unclear what the point of this is; to be entirely honest, I don’t think it accomplishes anything besides potentially making you feel better. If it does, that’s fine, but I’m not sure this is the place for such simplistic and emotion-laden declarations.

The reality is that war and conflict are complicated and unfortunate facts of reality, and it’s not very productive to just spout off random hatred against war. Furthermore, sometimes “peace” might be worse than war. In the Civil War, should the North have simply let the South secede and continue the barbaric practice of slavery? Should the UK have just thrown down their arms when Hitler invaded, or when he was gassing Jewish people?

If you’ve been personally affected by war, I can understand that you might have a lot of emotions, but to be brutally honest, the degree of thoughtfulness in this declaration is on par with that of young children; most people I know had come to understand that the world is not so black and white by middle school, or at least by high school.

Alternatively, if you're trying to devise a reliable framework for evaluating decisions, you could just abandon INT in favor of things like COILS (disclaimer: this is my own personal work), which is actually specifically tailored for decision evaluation (rather than "evaluating a cause area to indirectly inform decisions") and is, in my view, actually a relatively reliable framework.

If a malaria charity fails to achieve anything, people with malaria can try to complain. But if a longterm-oriented charity fails to achieve anything, the only people who could complain would be time travelers!

I appreciate this concern, but as I explained in this comment, I think this is not a very strong argument against longtermism. To briefly summarize the ideas I explain in more detail there:

  1. Many people are already not held very accountable in the near term, despite what we might hope.
  2. Near-term interventions can also prove to be relatively unimportant from a long-term lens.
  3. You can definitely be held accountable or feel guilty if it becomes apparent in the near-term that your arguments/proposals will actually be bad in the long-term.
  4. Due to the massive expected value, longtermism can probably still just bite the bullet here even if you mostly dismiss the previous points.

I'm a bit confused by your distinction: the question "Did […]

If you can't find reliable data, that just makes it hard, not theoretical.

The use of “did” vs “would” wasn’t very intentional or precise.

As to the empirical vs. theoretical nature of my hypothesis, it is indeed claiming that certain relationships empirically existed (and, with a lot of caveats, may continue into the future). However, my point was that the research methods I used were much more “theoretical”: I couldn’t do a large-N empirical analysis or controlled experiments to even establish meaningfully-controlled correlation (let alone causation) between the dependent, control, and independent variables, and instead had to rely on lines of reasoning such as:

  1. Hypothetical scenarios (e.g., imagine comparing an ambush where both parties have machine guns vs. one where neither side has machine guns)—which are impractical to clinically/experimentally test (i.e., with high fidelity to reality)
  2. More-qualitative (and somewhat subjective) comparison of case studies, using a large amount of argumentation/theoretical reasoning to deal with the many gaps and flaws in the case comparison (given that, as I noted, there didn’t seem to be any good case comparison pair in the historical record)
  3. Agreement with existing theoretical and/or empirical concepts in the literature, such as Biddle’s Modern System.

it basically just seems to say "think about the problem until you can figure out how to test it with traditional empirical methods."

Well, yeah, what else would you expect? The post describes how you might use argument clashes and oversimplified simulations in thinking about the problem.

Again, perhaps I was being a bit too imprecise with my language? My point is that for some questions (arguably including my thesis), theoretical argumentation has to bear a lot of the analytical burden. This analytical burden can include things like:

  1. Explaining why variables Q, K, and W—none of which you could experimentally control for—probably do or don’t affect the relationship;
  2. Explaining why your very limited sample size can probably be extrapolated to some other cases;
  3. Explaining why some metric is probably a decent proxy for what you are actually trying to measure;
  4. Reasoning about hypothetical scenarios which will not actually empirically occur.

(Caveat: all of those activities can be supported by direct reference to supporting data in some situations, but not always.)

In contrast, it seems that much of the “theoretical” research methods described in this post are basically just “use lots of thinking to figure out how to test this empirically against data [at which point these empirical methods do almost all the legwork.]”

There is perhaps some debate to be had over the meaning of “theoretical” research methods: do mathematical proofs or algorithms count as theory? While I’m not universally opposed to using the term in such a context, I think it is much less helpful to use the term “theory” when you’re trying to juxtapose it with empirical methods. This especially feels true if a major reason you support a mathematical proof or algorithm is based on your determination that “this empirically works every single time.” When teaching research methods, I think it’s important to emphasize the differences that I described previously (e.g., legibility/transparency, reliability/consistency, reputation stake) which, in my view, have tended to make empirical methods so much more effective when they can be used.

If that was the intention, I think the title and content should have more clearly expressed that. I’m unclear on what the significant difference is between theory and empirics in math; I think most of the value in distinguishing between theoretical and empirical research comes from highlighting the inability to simply “use data or counter-examples to falsify/test a hypothesis”—but in this case, that doesn’t apply.

It's worth at least noting that many people wouldn't do the same

I don’t think many people are capable of actually internalizing all of the relevant assumptions (assumptions that would be totally unreasonable in real life), nor do most people have a really good sense of why they have certain intuitions in the first place. So, it’s not particularly surprising/interesting that people would have very different views on this question.
