Harrison Durland

1573 karma · Joined Sep 2020



Whatever people end up doing, I suspect it would be quite valuable if serious effort were put into keeping track of the arguments in the debate and making it easier for people to find responses to specific points, responses to those responses, etc. As it stands, I think a lot of traditional text-based debate formats are prone to failure modes and other inefficiencies.

Although I definitely think it is good to find a risk-skeptic who is willing to engage in such a debate:

  1. I don’t think there will be one person who speaks for all skeptical views (e.g., Erik Larson vs. Yann LeCun vs. Gary Marcus);

  2. I think meaningful progress could be made towards understanding skeptics’ points of view even if no skeptic wants to participate in or contribute to a shared debate map/management system, so long as their arguments are publicly available (i.e., someone else could incorporate their arguments for them).

As a general note, when evaluating the merits of a pro-democratic reform in a non-governmental context, it’s important to appreciate why one has positive feelings/priors towards democracy in the first place. One really important aspect of democracy’s appeal in governmental contexts is that, for most people, government is not really something you consent to, so it’s important that the governmental structure be fair and representative.

The EA community, in contrast, is something you have much more agency to choose to be a part of. This is not to say “if you don’t like the way things are, leave”—I am definitely pro-criticism/feedback—but it’s important to avoid importing wholesale one’s feelings about democracy in governmental settings into settings like EA, where people have more agency/freedom to participate, especially since democratic decision-making does have many disadvantages.

I’ve hypothesized that one potential failure mode is that experts are not used to communicating with EA audiences, which tend to be more critical/skeptical of ideas (on a rational level). Thus, experts may not always be explicit about some of their concerns or reasoning, perhaps because they expect their audiences to defer to them, or because their model of what people will be skeptical of—and thus what needs to be defended/explained—doesn’t apply well to EA. There may be a case/example worth highlighting with regard to nuclear weapons or international relations, though it is also possible that EA skepticism in some of these cases is valid, given EA’s greater emphasis on existential risks rather than smaller risks.

“The most important solution is simple: one person, one vote.”

I disagree with this: I may have missed a section where you seriously engaged with the arguments in favor of the current karma-weighted vote system, but I think there are pretty strong benefits to a system that puts value on reputation. For example, it seems fairly reasonable that the views of someone with >1000 karma are given more weight than those of someone who just created an account yesterday, or of a known troll with -300 karma.

I think there are some valid downsides to this approach, and perhaps it would be good to put a tighter limit on reputation weighting (e.g., no more than 4x weight), but “one person, one vote” is a drastic rejection of the principle of reputation-weighting, and I’m disappointed with how little consideration was apparently given to the potential negatives of this reform / positives of the current system.

Could you provide more details—namely, which EAG you’ve applied for and where you live/go to school?

“you have the people[1] who want EA to prioritise epistemics on the basis that if we let this slip, we'll eventually end up in a situation where our decisions will end up being what's popular rather than what's effective.”

And relatedly, I think that such concerns about long-term epistemic damage are overblown. I agree that allowing epistemics to be constantly trampled in the name of optics would be bad, but I don’t think that’s a fair characterization of what is happening. I suspect that optics dominate in the short term because they are driven by emotions and surface-level impressions, whereas epistemics are typically driven more by reason over longer time spans and, in my experience, are more the baseline in EA. So there will be time to discuss what, if anything, “went wrong” with CEA’s response and other actions, and people should avoid accidentally fanning the flames in the name of preserving epistemics, which I doubt will burn.

(I’ll admit what I wrote may be wrong as written, given that it was somewhat hasty and still a bit emotional, but I think I would probably agree with what I’m trying to say if I gave it deeper thought.)

I think that some sort of general guide on “How to think about optics when so much of your philosophy/worldview is based on ignoring optics for the sake of epistemics/transparency (including embedded is-ought fallacies about how social systems ought to work), and your actions have externalities that affect the community” might be nice, if only so people don’t have to constantly re-explain/rehash this.

But generally, this is one of those things where it becomes apparent in hindsight that it would have been better to hash out these issues before the fire.

It’s too bad that Scout Mindset not only fails to address this issue effectively but also seems to push people towards the is-ought fallacies of “optics shouldn’t matter that much” or “you can’t have good epistemics without full transparency/explicitness” (in my view: https://forum.effectivealtruism.org/posts/HDAXztEbjJsyHLKP7/outline-of-galef-s-scout-mindset?commentId=7aQka7YXrhp6GjBCw).

Could you provide a tl;dr here (or there on the article, I suppose)?

I appreciate the summary, and I'm especially glad to see it done with an emphasis on relatively hierarchical bullet points rather than mostly paragraph prose. (And thanks for the reference to my comment ;)

Nonetheless, I am tempted to examine this question/debate as a case study for my strong belief that, relative to alternative methods of keeping track of arguments or mapping debates, prose/bullets + comment threads are an inefficient/ineffective method of:

  1. Soliciting counterarguments or other forms of relevant information (e.g., case studies) from a crowd of people who may want to focus on or make very specific/modular contributions, and
  2. Showing how relevant counterarguments and information relate to each other—including where certain arguments have not been meaningfully addressed within a branch of arguments (e.g., three responses down)—especially to help audiences who are trying to answer questions like "has anyone responded to X?"

I'm not even confident that this debate has that many divisive branches—it seems quite plausible that relatively few cruxes/key insights drive the disagreement—but the question does seem fairly important and has generated a non-trivial amount of attention and disagreement.

Does anyone else share this impression with regards to this post (e.g., "I think that it is worth exploring alternatives to the way we handle disagreements via prose and comment threads"), or do people think that summaries like this comment are in fact sufficient (or that alternatives can't do better, etc.)?

I understand your frustration, and have myself been in your shoes a few times. I think that many employers/recruiters in EA are aware of these downsides, as I have seen a variety of posts discussing this in the past. Additionally, as Samuel points out in a separate comment, many if not all of the work trials I've participated in have been compensated, which seems quite reasonable/non-predatory.

Unfortunately, for some positions/situations I don't think there will be any process that satisfies everyone, as every process seems to have downsides. I can especially speak to my experience applying to positions in non-EA think tanks and elsewhere, where I've suspected that most of the interview/review processes are ridiculously subjective or plainly ineffective. Setting aside the process of selecting applicants for the interview stage (which I suspect is under-resourced/flawed), I've had multiple interviews where I came away thinking: "Are you seriously telling me that's how they evaluate candidates? That's how they determine if someone is a good researcher? Do they not apply any scrutiny to my claims—are my peers just getting away with total BS here [as I've heard someone imply on at least one occasion]? Do they not want to know any more concrete details about the relevant positions or projects, even after I said I could describe them in more detail?"

Many of the EA-org interviews I've done may not feel "personal," but I'll gladly take objectivity and skill-testing questions over smiles and "tell me your strengths and weaknesses."

That being said, I do sympathize with you—I tend to find it much more frustrating to be turned down by an EA org after so much effort—but in the end I would still prefer to see this kind of deeper testing/evaluation more often.
