I agree that this article is written in a very confusing way. Particularly this paragraph, which introduces many new 'things' into the article without anchoring them soon enough:
"Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?"
This would have been much easier to read with an added sentence:
"Suppose that, after a tree falls, the two arguers walk into the forest together. Will they expect any different sensory experience from eachother? Will one expect to see the tree fallen to the right, and one..."
A good ancestor takes on the set of problems they can have the greatest impact in solving. If today there is an opportunity to prevent a global technocracy from taking power, we must tackle that problem, since future generations will likely not have a chance to overthrow such a powerful entity. Conversely, if a future generation will have an equal chance of solving a certain problem, and delaying its solution doesn't cause immense suffering, it is not the ancestor's most pressing problem to solve.
The section on Taking Ideas Seriously reminded me of a recent piece of writing that landed in my inbox. It's by Henrik Karlsson, and it's about a way of processing information actively that we rarely practise: pushing back against the ideas we form as we read. You can read it here: https://open.substack.com/pub/escapingflatland/p/how-i-read?utm_campaign=post&utm_medium=email
Very enlightening. I'm most interested in the following excerpts:
1. Talent, not funding, is currently the binding constraint on the AI safety field.
2. I think field-building targeted at more experienced people who can go on to start and lead competent organisations (and who have networks to convert a load of other experienced people) is much more important than more junior field-building. I think most existing AI safety fellowships act as internship programmes for the labs, which is fine, but we should be doing more than that.
I'm wondering: what specifically are the talents needed to start and lead competent organisations in this space? What gaps exist between the current orgs? And is that the only kind of talent constraining the AI safety field?