I asked whether EA has any rational, written debate methodology and whether rational debate aimed at reaching conclusions is available from the EA community. The answer I received, in summary, was “no”. (If that answer is incorrect, please respond to my original question with a better answer.)

So I have a second question. Does EA have any alternative to rational debate methods to use instead? In other words, does it have a different solution to the same problem?

The underlying problem which rational debate methods are meant to solve is how to rationally resolve disagreements. Suppose that someone thinks he knows about some EA error. He’d like to help out and share his knowledge. What happens next? If EA has rational debate available following written policies, then he could use that to correct EA. If EA has no such debate available, then what is the alternative?

I hope I will not be told that informal, unorganized discussion is an adequate alternative. There are many well-known problems with that, such as people quitting a debate without explanation when they start to lose, people being biased and hiding it because there are no policies for accountability or transparency, and people or ideas with low social status being ignored or treated badly. For sharing error corrections to work well, one option is to have written policies that help prevent some of these failure modes. I haven’t seen that from EA, so I asked, and no one said EA has it. (And no one said “Wow, great idea, we should have that!” either.) So what, if anything, does EA have instead that works well?

Note: I’m aware that other groups (and individuals) also lack rational debate policies. This is not a way that EA is worse than competitors. I’m trying to speak to EA about this rather than speaking to some other group because I have more respect for EA, not less.


I haven't looked into EA norms enough to answer your question, but your question makes me think the same thing that your first question did. If you have some norms to suggest or point to, then please provide some examples. In my experience over the years posting to blogs and forums, I've tried a few things, but they only tested people's patience, so I'm always looking for stuff that I could apply personally in the future. Here are several ideas, some of which I've actually tried.

  • I know of several question-answer methodologies that rely on different specifics (grammatical categories, semantic categories, or predicate logic formalisms). They all force iterative communication about the same topic, and they sometimes reveal inadequacies in a description too early: if, instead of responding algorithmically, I read the whole body of text, I would often find my answers in the text that follows. QA approaches can turn persuasive, poetic, or narrowly targeted communications into a lot of work. The QA approach also conflicts with Grice's maxims: it's not always concise. To use it effectively, you have to pick and choose what you care to learn from a person.
    An example is:
    statement: "I believe that ethical decisions are fairly simple."
    question: "What ethical decisions are simple how, specifically?"
    answer: "I believe that population ethics decisions can be based on a simple principle."
    question: "What population ethics decisions can be based on what principle?"
    or
    statement: "I believe that ethical decisions are fairly simple."
    question: "You mean that every possible altruistic decision is so simple that a two-year-old could decide it, is that right?" (the expected answer is "No", plus a more detailed explanation of what the original statement meant)
  • Similarly, there are models of how to articulate and label argument structures. For example:
    1. Make an outline of premises, intermediate conclusions, and final conclusions.
    2. Describe the inference types linking premises and conclusions (deductive, inductive, analogical).
    3. Analyze the outline for true premises and valid inferences.
    This is tedious, but it provides a systematic way to identify a cogent argument. Using it, you can pinpoint reasons for disagreement. In principle, arguments can be rejected, iterated, or accepted on the basis of the analysis.
  • Any system built on tagging can capture knowledge about communicated information, and those tags can drive automated or human analysis. For example:
    "<estimate id="35" source="Joe EA" topic="AGI">By <time>2032</time>, we should expect AGI to have <odds type="development">1:50</odds> odds of creation and <odds type="extinction">1:2</odds> odds of killing us all.</estimate>"
    Those tags let software aggregate predictions or facilitate meta-analysis of competing estimates. It's less tedious to have software apply the tags than to do it manually, though human experts might do a more reliable job. You can use tagsets for almost anything: guiding follow-up questions, metadata about sources, argument-structure records, etc.
  • I can see people writing bots to handle common situations that comments on posts would otherwise note but that people don't have much time to write up. For example, they could check whether links in posts are broken or whether people used the wrong terms, or they could suggest synonyms.

    Have you seen the story about the guy who got stuck in the elevator at work? He got stuck in an elevator, and so the story goes, he sent out a Slack message saying, "Hey guys, I am stuck in an elevator, can I get some help?" and a Slack bot wrote back right away and said, "Consider using the more gender-appropriate term 'folks' or 'people' instead of 'guys'." So the guy sent out another message, saying "Hey folks, I am stuck in an elevator, can I get some help?"

    Anyway, people don't seem to like subbing bots for people on forums, but I can see it being useful for when someone says "This cause doesn't seem feasible" and the bot replies, "Do you mean that the cause doesn't seem tractable? If so, consider using the term 'tractable' to facilitate discussion of charitable causes using the ITN framework."

    or for that particularly vulgar post that someone might someday write:

    "I noticed that you used 28 cuss words in your post. Please consider editing your post to remove some of those. While you have not violated an absolute rule, 28 is a lot. Please bring that number down below 5 cuss words. Thank you." (I'm just making stuff up here; I'm not clear on the rules about vulgarity in posts on this forum)
  • Posting forms built with more fields could structure conversation. Instead of just a post message, there could be post types, such as one for proposing a cause or one for presenting research, just as the forum now has one for asking questions. So, for example, a proposed cause would have fields like:
    * epistemic status
    * tl;dr
    * importance
    * tractability
    * neglectedness
    * closing comments
    and the post form wouldn't let you submit the post until you filled out each section. 
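To sketch the required-sections idea: a forum backend could refuse to accept a "proposed cause" post until every section is filled in. This is a hypothetical Python sketch; the field names simply mirror the list above, and nothing here reflects how the forum actually works.

```python
# Hypothetical required sections for a "proposed cause" post type,
# mirroring the field list above (not the forum's real schema).
REQUIRED_FIELDS = ["epistemic status", "tl;dr", "importance",
                   "tractability", "neglectedness", "closing comments"]

def missing_sections(post):
    """Return the required sections that are absent or left blank."""
    return [field for field in REQUIRED_FIELDS
            if not post.get(field, "").strip()]

# A draft that fills in only two sections would be sent back for edits:
draft = {"tl;dr": "Deworming in region X looks cost-effective.",
         "importance": "Large disease burden, cheap intervention."}
print(missing_sections(draft))
# ['epistemic status', 'tractability', 'neglectedness', 'closing comments']
```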
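And to make the tagging idea concrete, here's a rough sketch of how software could pull machine-readable odds out of a tagged estimate like the one above, using Python's standard-library XML parser. The tag names (estimate, time, odds) are invented for illustration, not any real standard.

```python
import xml.etree.ElementTree as ET

# A tagged estimate in the hypothetical format sketched above.
snippet = ('<estimate id="35" source="Joe EA" topic="AGI">'
           'By <time>2032</time>, we should expect AGI to have '
           '<odds type="development">1:50</odds> odds of creation and '
           '<odds type="extinction">1:2</odds> odds of killing us all.'
           '</estimate>')

root = ET.fromstring(snippet)

def odds_to_probability(text):
    """Convert 'a:b' odds notation to the probability a / (a + b)."""
    a, b = (int(part) for part in text.split(':'))
    return a / (a + b)

# Collect every tagged odds value, keyed by its type attribute.
estimates = {odds.get('type'): odds_to_probability(odds.text)
             for odds in root.iter('odds')}

print(root.get('source'), root.get('topic'), estimates)
```

Once estimates are machine-readable like this, aggregating competing predictions or flagging outliers becomes ordinary data processing.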

And there's more. As text analysis tools improve, I can see forums integrating them to manage epistemic requirements for posts. More powerful AI could detect when argument forms are invalid, statements are vague, or claims are untrue (according to some body of knowledge).

I think the push right now is to get people to contribute posts to the forum, and I favor that rather than trying to force more structure or content into argumentative posts. The moderators seem to be active on the forum, and the norms here seem reasonable to me.

I haven't looked into EA norms enough to answer your question, but your question makes me think the same thing that your first question did. If you have some norms to suggest or point to, then please provide some examples.

Thank you for raising this issue. I appreciate the chance to address it rather than have people think I’m doing something wrong without telling me what.

Although I do have some suggestions, I think sharing them now is a bad idea. They would distract from the conceptual issue I’m trying to discuss: Is there a problem here that needs a solution? Does EA have a solution?

I guess your perspective is that of course this is an important problem, and EA isn’t already solving it in some really great way because it’s really hard. In that context, mentioning the problem doesn’t add value, and proposing solutions is an appropriate next step. But I suspect that is not the typical perspective here.

I think most people would deny it’s an important problem and aren’t looking to solve it. In that context, I don’t want to propose a solution to what people consider a non-problem. Instead, I’d rather encourage people to care about the problem. I think people should try to understand the problem well before trying to solve it (because it is pretty hard), so I’d rather talk about the nature of the problem first. I think the EA community should make this an active research area. If they do, I’ll be happy to contribute some ideas. But as long as it’s not an active research area, I think it’s important to investigate why not and try to address whatever is going on there. (Note that EA has other active research areas regarding very hard problems with no solutions in sight. EA is willing to work on hard problems, which is something I like about EA.)

I also wouldn’t want to suggest some solutions that turn out to be incorrect, at which point people stop working on better solutions. It would be logically invalid to dismiss the problem because my solutions were wrong, but that also strikes me as a likely possibility. Even if my solutions were good, they unfortunately aren’t of the “easy to understand, easy to use, no downsides” variety. So unless people care about the problem, they won’t want to bother with solutions that take much effort to understand.

In my experience over the years posting to blogs and forums, I've tried a few things, but they only tested people's patience, so I'm always looking for stuff that I could apply personally in the future. Here are several ideas, some of which I've actually tried.

I think those ideas are fine. I’ve tried some of them too. However, if EA was currently doing all of them, I’d still have asked the same questions. I don’t see them as adequate to address the problem I’m trying to raise. Reiterating: If EA is wrong about something important, how can EA be corrected? (The question is seeking reasonable, realistic, practical ways of correcting errors, not just theoretically possible ways or really problematic ways like “climb the social hierarchy then offer the correction from the top”.)

However, if EA was currently doing all of them, I’d still have asked the same questions. I don’t see them as adequate to address the problem I’m trying to raise.

Really? All of them? 

  • QA models in use by people posting on comments
  • informal argument outlining
  • post content specialized by purpose
  • automated and expert content tagging 
  • bots patrolling post content
  • (in future) AI analyzing research quality

You don't think that would address problems of updating in EA to some extent? You could add a few more:

  • automated post tagging (the forum suggests tags and adds expert judgement when a post is untagged)
  • suggested sources or bibliographies (the forum now provides lists of posts tagged with a specific tag; this would go further, guiding post authors to existing content before the post is published)

Suppose there were some really big premise (uh, crux) that a bunch of people were focused on. They could have their high-tech and grueling argument. Then I suppose they could record the argument's conclusion and add it to some collection of arguments for/against some EA norm or principle or risk estimate or cause evaluation. Then I guess EA folks could heed the arguments and follow some process to canonicalize the conclusions as a set of beliefs. They would end up with a "bible" of EA principles, norms, etc., maybe with a history feature to show updates over time.

There might be some kind of vote or something for some types of updates; that would be very EA. Voters could come from the larger pool of EAs: anyone committed to reviewing arguments (probably by reading summaries) would get to vote on some canon updates. It would be political, with plenty of status-seeking, charismatic leadership, and motivated thinking, but it would be cool.

There are various people with varying degrees of influence over EA thought; Singer, Ord, Galef, and MacAskill are just a few. But as far as top-down decisions go, my bottom line on EA is that EA the movement is not under anyone's control, but EA the career might involve some conflicts of interest. In that sense, there's top-down (money-driven) influence.

But if you wanted to rank influences, I think EAs are influenced by media and popular thought just like everybody else. EA is not necessarily a refuge for unpopular beliefs. Focusing on it as a community that can resolve issues around motivated thinking or bias could be a mistake. EAs are as vulnerable as any other community of people to instrumental rationality, motivated thinking, and bias.

Current EA trends include:

  • framing beliefs inside a presumption of mathematical uncertainty 
    Bayesianism did not add clarity or rigor to updating of beliefs in EA.
  • suffering technological determinism more than some
    EAs unrealistically focus on technology to solve climate change, wealth inequality, etc.
  • harboring a strange techno-utopian faith in the future
    Longtermism offers implausible visions of trillions of happy AGI or AGI-managed people.
  • ignoring relevant political frames or working bottom-up with their charitable efforts 
    Neglectedness in ITN doesn't include political causes and feedbacks of aid work.
  • ignoring near-term (2020-2050) plausible climate change impacts on their work
    Charitable impact could be lost in developing countries as climate pressures rise.
  • accepting politicized scientific, medical, and economic models without argument
    Deferring to science is fine, except when the science is politicized or distorted.
  • believing in the altruistic consequences of spending money on charities
    EAs offset personal behavior with donations because they believe in donation impact.
  • ignoring ocean health, resource depletion, the sixth great extinction, etc
    EAs are not alone in this; terrestrial climate change gets most of the environmental news coverage.
  • ignoring the presence of, and proper context for, self-interest in decision-making
    AFAIK, EAs haven't really addressed what role selfishness has and should have in life.

Julia Galef's work on instrumental versus epistemic rationality, and her Scout versus Soldier mindset model, are good for navigating the terrain you want to survey, Elliot. I recommend them. She is part of the EA community at large. I keep a list of independent thinkers whose work I should follow, and she's on it.

In addition, a friend once told me that I should be sure to enjoy the journey as well as set a destination, since there's no guarantee of ever reaching my destination. I joined the forum to make a submission to one of its contests. My contest submission was basically about EA updating. My main observation of EA updating is that changes in belief do not reflect increasing constraints on an original belief as new evidence appears. Rather, an EA belief is just evaluated for credence as is, with confidence in it waxing or waning. EA does not appear to set out systematic methods for constraining beliefs, which is too bad.

But if you wanted to rank influences, I think EAs are influenced by media and popular thought just like everybody else. EA is not necessarily a refuge for unpopular beliefs. Focusing on it as a community that can resolve issues around motivated thinking or bias could be a mistake. EAs are as vulnerable as any other community of people to instrumental rationality, motivated thinking, and bias.

Isn't applying rationality (and evidence, science, math, etc.) to charity EA's basic mission? And therefore, if you're correct about EA, wouldn't it be failing at its mission? Shouldn't EA be trying to do much better at this stuff instead of being about as bad as many other communities at it? (The status quo or average in our society, for rationality, is pretty bad.)

You don't think that would address problems of updating in EA to some extent?

Do I think those techniques would address problems of updating in EA adequately? No.

Do I think those techniques would address problems of updating in EA to some extent? Yes.

The change in qualifier is an example of something I find difficult to make a decision about in discussions. It's meaningful enough to invert my answer but I don't know that it matters to you, and I doubt it matters to anyone else reading. I could reply with topical, productive comments that ignore this detail. Is it better to risk getting caught up in details to address this or better to try to keep the discussion making forward progress? Ignoring it risks you feeling ignored (without explanation) or the detail having been important to your thinking. Speaking about it risks coming off picky, pedantic, derailing, etc.

In general, I find there's a pretty short maximum number of back-and-forths before people stop discussing (pretty much regardless of how well the discussion is going), which is a reason to focus replies only on the most important and interesting things. It's also a reason I find those discussion techniques inadequate: they don't address stopping conditions in discussions and therefore always allow anyone to quit any discussion at any time, due to bias or irrationality, with no transparency or accountability.

In this case, the original topic I was trying to raise is discussion methodology, so replying in a meta way actually fits my interests and that topic, which is why I've tried it. This is an example of a decision that people face in discussions which a good methodology could help with.

My contest submission

Sounds interesting. Link please.

It's also a reason I find those discussion techniques inadequate: they don't address stopping conditions in discussions and therefore always allow anyone to quit any discussion at any time, due to bias or irrationality, with no transparency or accountability.


I think what sets the EA forum apart is the folks who choose to participate in it. A lot of posts go up here, and I like their focus (ethics, charity, AI, meta stuff about thinking).

I doubt there's enough interest to persuade folks to create and maintain a system of accountability for all arguments put on the forum or into their pool of literature. But there is a tendency here to quote others' work, and that lets people do peer review and build on earlier work, so there's some continuity of knowledge development that you don't always find. Also, posts sometimes show academic rigor, which can have its pluses. And while relying on expert opinion on controversial topics isn't going to lead to consensus, at least it positions an author in a larger field of perspectives enough for debates to have a well-known starting point.

My contest entry wasn't about that sort of continuity or any system of building consensus. FWIW, here is my contest entry. Like I said, it was about updating, but it makes some other points about unweighted beliefs vs. subjective probabilities, prediction, and EA guilt. Most of it went down in a short two-day stretch just before the entry was due, and there was a lot I wanted to improve over the next month as I waited for results to come back. I've still got some changes to make, then I'll be done with it.