
I asked if EA has a rational debate methodology in writing that people sometimes use. The answer seems to be “no”.

I asked if EA has any alternative to rationally resolve disagreements. The answer seems to be “no”.

If the correct answer to either question is actually “yes”, please let me know by responding to that question.

My questions were intended to form a complete pair. Do you use X for rationality, and if not, do you use anything other than X?

Does EA have some other way of being rational which wasn’t covered by either question? Or is something else going on?

My understanding is that rationality is crucial to EA’s mission (of basically applying rationality, math, evidence, etc., to charity – which sounds great to me), so I think the issue I’m raising is important and relevant.

4 Answers

I think people try pretty hard to come to accurate answers given the information available, and have inherited or come up with various tools for this (e.g. probabilistic forecasting). Whether that counts as "rationality" or not depends a lot on your definition of what it means to be rational, and how low your bar is.
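As a minimal sketch of what one such tool looks like in practice (a generic illustration, not code from any EA project; the forecasts, outcomes, and function name are invented):

    # A tiny, generic sketch of scoring probabilistic forecasts (Brier score).
    # Lower is better; always guessing 50% scores 0.25.
    def brier_score(forecasts, outcomes):
        # Mean squared error between stated probabilities and 0/1 outcomes.
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # A forecaster who said 80%, 60%, and 10% on three questions,
    # of which only the first resolved "yes":
    print(brier_score([0.8, 0.6, 0.1], [1, 0, 0]))  # prints ~0.1367

The point of scoring like this is that it rewards calibration over bravado: you do better by stating probabilities that match how often you're actually right.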

I don't think we're perfectly rational, and there's an argument that we aren't investing as many resources as would be optimal in rationality- or epistemics-enhancing interventions. But it's pretty hard to answer a broad question like "How Is EA Rational?", and I don't think the crux is a specific form of argument mapping that we use or don't use.

At face value, the answer is something like: we're reasonably good at coming to accurate-enough answers to hard-ish questions. Whether this is "good enough" depends on whether "accurate enough" is good enough, how hard the questions we ultimately want to solve are, and whether/how much we can do better given the resources available.

But I don't think this is exactly what you're asking. In sum, I don't think "is X rational" has a binary answer.

Bias and irrationality are huge problems today. Should I make an effort to do better? Yes. Should I trust myself? No – at least as little as possible. It’s better to assume I will fail sometimes and design around that. E.g. what policies would limit the negative impact of the times I am biased? What constraints or rules can I impose on myself so that my irrationalities have less impact?

So when I see an answer like “I think people [at EA] try pretty hard [… to be rational]”, I find it unsatisfactory. Trying is good, but I think planning for failures of rationality is needed. Being above average at rationality, and trying more than most people, can actually, paradoxically, partly make things worse, because it can reduce how much people plan for rationality failures.

Linch · 2y
I think this is possible, but it will mostly come from arrogance and from ignoring big rationality failures after getting small wins. For example, you can wear your busier (and possibly more knowledgeable) interlocutors down with boredom. I agree that relying entirely on personal rationality/integrity is not sufficient. To make up for individual failings, I feel more optimistic about cultural and maybe technological shifts than rules and policies. Top-down rules and policies especially feel a bit suss to me, given the lack of a track record.

List of reasons I think EA takes better actions than most movements, in no particular order:

  • taking weird ideas seriously; being willing to think carefully about them and dedicate careers to them
  • being unusually goal-directed
  • being unusually truth-seeking
    • this makes debates non-adversarial, which is easy mode
  • openness to criticism, plus a decent method of filtering it
  • high average intelligence. Doesn't imply rationality but doesn't hurt.
  • numeracy and scope-sensitivity
    • willingness to use math in decisions when appropriate (e.g. EV calculations; see the sketch after this list) is only part of this
  • less human misalignment: EAs have similar goals and so EA doesn't waste tons of energy on corruption, preventing corruption, negotiation, etc.
  • relative lack of bureaucracy
  • various epistemic technologies taken from other communities: double-crux, forecasting
  • ideas from EA and its predecessors: crucial considerations, the ITN framework, etc.
  • taste: for some reason, EAs are able to (hopefully correctly) allocate more resources to AI alignment than overpopulation or the energy decline, for reasons not explained by the above.
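To make the EV-calculation bullet concrete, here is a hedged sketch of the kind of expected-value comparison meant there; the interventions, numbers, and function name are invented for illustration, not real estimates:

    # Hedged sketch of an expected-value comparison; the interventions and
    # numbers below are invented for illustration, not real estimates.
    def expected_value(outcomes):
        # Sum of probability * value over mutually exclusive outcomes.
        return sum(p * v for p, v in outcomes)

    safe = expected_value([(0.95, 100.0), (0.05, 0.0)])         # 95.0 units of good
    long_shot = expected_value([(0.02, 10_000.0), (0.98, 0.0)]) # 200.0 units of good
    print(safe, long_shot)  # the long shot has higher EV despite usually failing

Scope-sensitivity, in this frame, is just taking the magnitudes in the second column seriously rather than rounding them all off to "big".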

Structured debate mechanisms are not on this list, and I doubt they would make a huge difference because the debates are non-adversarial, but if one could be found it would be a good addition to the list, and therefore a source of lots of positive impact.

Thanks for the list; it’s the most helpful response for me so far. I'll try responding to one thing at a time.

Structured debate mechanisms are not on this list, and I doubt they would make a huge difference because the debates are non-adversarial, but if one could be found it would be a good addition to the list, and therefore a source of lots of positive impact.

I think you're saying that debates between EAs are usually non-adversarial. Due to good norms, they’re unusually productive, so you're not sure structured debate would offer a large improvement...

taste: for some reason, EAs are able to (hopefully correctly) allocate more resources to AI alignment than overpopulation or the energy decline, for reasons not explained by the above.

Of course, in the eyes of the people warning about energy depletion, expecting energy growth to continue over decades is not the rational decision ^^

I mean, 85% of energy comes from a finite stock, and building and maintaining renewables currently depends on that stock, so from the outside that seems at least worth exploring seriously - but I fe...

I think, based on the way you're phrasing your question, you're perhaps not fully grasping the key ideas of Less Wrong style rationality, which is what EA rationality is mostly about. It might help to read something like this post about what rationality is and isn't as a starting point, and from there explore the Less Wrong sequences.

Gordon Seidoh Worley · 2y
No offense, but I'm surprised, because your phrasing doesn't parse for me: it's not clear what it would mean for EA as a movement to be "rational", and most use of "rational" in the way you're using it here reflects a pattern shared among folks with only passing familiarity with Less Wrong. For example, you ask about "rational debate" and "rationally resolv[ing] disagreements", but the point of the post I linked is, in part, that this doesn't make sense to ask for. People might debate using rational arguments, but it would be weird to call that rational debate, since the debate itself is not the thing that is rational or not; the thing that could be rational is the thought processes of the debaters. Maybe this odd phrasing is why you got few responses: it reads like a signal that you've failed to grasp a fundamental point of Less Wrong style rationality, namely that rationality is a method applied by agents, not an essential property something can have or not.
Elliot Temple · 2y
You raise multiple issues. Let's go one at a time. I didn't write the words "rational dispute resolution". I consider inaccurate quotes an important issue. This isn't the first one I've seen, so I'm wondering if there's a disagreement about norms.
Gordon Seidoh Worley · 2y
I was just paraphrasing. You literally wrote "rationally resolve disagreements", which feels like the same thing to me as "rational dispute resolution". I edited my comment to quote you more literally, since I think it maintains exactly the same semantic content.
Elliot Temple · 2y
We disagree about quotation norms. I believe this is important and I would be interested in discussing it. Would you be? We could both explain our norms (including beliefs about their importance or lack thereof) and try to understand the other person’s perspective.
Gordon Seidoh Worley · 2y
I don't know if we really disagree, but I'm not interested in talking about it. It seems extremely unlikely to be a discussion worth the effort, since I don't think either of us thinks making up deceptive quotes is okay. I think I'm just sloppier than you, and that's not interesting.

TLDR: We don't have some easy-to-summarise methodology, and being rational is pretty hard. Generally we try our best, hold ourselves and each other accountable, and try to set up the community in a way that encourages rationality. If what you're looking for is a list of techniques to be more rational yourself, you could read this book of rationality advice or talk to people in a discussion group about why they prioritise what they do.

Some meta stuff on why I think you got unsatisfactory answers to the other questions

I wouldn't try to answer either of the previous questions because any answer would be long and still definitely incomplete. I don't have a quick summary for how I would resolve a disagreement with another EA, because there are a bunch of overlapping techniques that can't be described in a quick answer.

To put it into perspective: the foundation of how I personally try to approach EA rationally is in the Rationality A-Z book, but that probably doesn't cover everything in my head, and I definitely wouldn't put it forward as a complete methodology for finding the truth. For a specifically EA spin, talking to people about why they prioritise what they prioritise is what I've found most helpful, and an easy way to do that is in EA discussion groups (in person is better than online).

It is pretty unfortunate that there isn't some easy-to-summarise methodology or curriculum for applying rationality to charity. Current EA curricula are pretty focussed on laying out our current best guesses and using those examples, along with discussion, to demonstrate our methodology.

How is EA rational then?

I think the main thing happening in EA is that there is a strong personal, social, and financial incentive for people to approach their work "rationally". E.g. people in the community will expect you to have some reasoning that led you to do what you're doing, and they'll give feedback on that reasoning if they think it's missing an important consideration. From that springs a bunch of people thinking about how to reason about this stuff more rationally, and we end up with a big set of techniques and concepts which seem to guide us better.

Trying to address only one thing at a time:

I don’t think I asked for an “easy-to-summarise methodology”, and I’m unclear on where that idea is coming from.

Will Payne · 2y
I was responding mainly to the format. I don’t expect you to get complete answers to your earlier two questions, because there’s a lot more rationality methodology in EA than can be expressed in the amount of time I expect someone to spend on an answer.

If I had to put my finger on why the failure to answer those questions doesn’t concern me as much as it seems to concern you, I’d say it’s because:

A) Just because it’s hard to answer doesn’t mean EAs aren’t holding themselves and each other to a high epistemic standard.

B) Something about the perfect not being the enemy of the good, and about the urgency of other work. I want humanity to have some good universal epistemic tools, but currently I don’t have them, and I don’t really have the option to wait to do good until I have them. So I’ll just focus on the best thing my flawed brain sees to work on at the moment (using what fuzzy technical tools it has, but still being subject to bias), because I don’t have any other machinery to use.

I could be wrong, but my read from your comments on other answers is that we disagree most on B). E.g. you think current EA work would be better directed if we were able to have a lot more formally rational discussions, to the point that EA work or priorities should be put on hold (or slowed down) until we can do this.
Elliot Temple · 2y
I think I disagree with you on both A and B, as well as some other things. Would you like to have a serious, high-effort discussion about it and try to reach a conclusion?
Comments (7)

My summary of the answers I got to my question:

EA lacks formal rationality policies, but individuals informally do many things to try to be rational and the community as a whole tries to encourage rationality.

This is intended to be a statement that EAers would agree with. Please let me know if you disagree.

I think this is a pretty good summary! One minor addition: "the community as a whole tries to encourage rationality" might suggest this is a nice-to-have that hasn't received much effort, but the reality is that the community has invested non-trivial resources in individual and collective rationality (e.g. funding LessWrong, Metaculus, Manifold Markets, QURI, etc.).

Also relevant.

[anonymous] · 2y

I think you're asking some important questions. In my view, this is the most critical thing you've written in the comments: 

So when I see an answer like “I think people [at EA] try pretty hard [… to be rational]”, I find it unsatisfactory. Trying is good, but I think planning for failures of rationality is needed. Being above average at rationality, and trying more than most people, can actually, paradoxically, partly make things worse, because it can reduce how much people plan for rationality failures.

We may disagree, but I think looking for a formal debate methodology is a distraction from this more important fundamental question. I don't consider it a promising way to approach the problem above, and I suspect others feel similarly.

Do you know of any criticism of using structured debate methods that I could read?

[anonymous] · 2y

Sorry, no. I don't think I'll be able to sum up my views well quickly either, but here's a little effort: while there are no doubt improvements possible in debate methods, it's unlikely format changes will make a significant difference to elements like who determines which debates are had, which debates are seen as productive and which are not, what incentives people have during the debate, how debates affect decision making, etc.

If you read the partial list of issues, in my first question, that I think a debate methodology should address, you'll see that it covers not merely format but also issues like who (or what policies) determines which debates are had (a.k.a. starting conditions). The elements you list, and imply are more important, are actually some of the things I want a debate methodology to address. I agree that those are important.

[anonymous] · 2y

Makes sense. Part of what I think is that a debate methodology is of limited use for issues like the debate starting conditions, and much can be accomplished for rationality without any formal debate methodology, but I could be wrong. 

Based on your other comments, I think we likely agree that sometimes formal rules and policies are not just important but essential; but when I think of those, I'm not really thinking of debate methodology. Could just be a lack of imagination on my part, though.
