
Victor Lecomte

Theoretical CS PhD student @ Stanford
8 karma · Joined Apr 2022 · Pursuing a doctoral degree (e.g. PhD)

Bio

Interested in AI alignment, visited ARC in summer 2022.

Comments

A personal summary:

  1. Exploratory mindset. Don't try to win arguments (e.g. by being persuasive, witty); instead, try to figure out the truth of the matter.
  2. No violence. Don't respond to arguments with personal attacks.
  3. Don't mislead. Be careful not to unintentionally mislead the people listening to you.
    1. In particular, make it clear when you're oversimplifying things, joking around, etc.
  4. Allow debating subparts independently. Even if you don't think it will change your overall conclusion (say, "sexual harassment is a problem in EA"), you should still be open to debating specific instances (e.g. "was so-and-so's behavior problematic?").
  5. Keep the alternative in mind. It's easy to get stuck in a wrong belief if you don't keep the alternative hypothesis in mind (and think about what evidence would separate the two). Make sure you truly understand the arguments of the people who hold the opposing view.
  6. Reality check. If there's any way to check if it's true right now, do it!
  7. Use concrete language. Words can hide misunderstandings when used vaguely.
    1. Relatedly, give concrete probabilities when appropriate.
  8. Remember why you asked. When thinking about a problem, it's easy to go down a rabbit hole of sub-considerations that don't affect the truth of the original question much. Sometimes, that's a waste of effort.
    1. Cruxes are sub-considerations that do affect the original question; they're great.
  9. Encourage these norms in others. Show people that you appreciate it when they're practicing good epistemic hygiene.
  10. Flag subjectivity. Identify statements that are your personal experience, narratives, beliefs and preferences as such; that will make things clearer for yourself too. E.g.
    • "today is terrible" -> "I feel terrible today",
    • "X" -> "my model says X" / "I believe X because blah" / "so-and-so told me X",
    • "people should X" -> "I prefer X".