Quantified uncertainty might be fairly important for alignment, since there is a class of approaches that rely on confidence thresholds to avoid catastrophic errors (1, 2, 3). What might also be important is the ability to explicitly control your prior in order to encode assumptions such as those needed for value learning (but maybe there are ways to do it with other methods).
What is this "Effective Crypto"? (Google gave me nothing)
Kudos for this post. One quibble: in the beginning you write
Potential help includes:
- Money
- Good mental health support
- Friends or helpers, for when things are tough
- Insurance (broader than health insurance)
But later you focus almost exclusively on money. [Rest of the comment was edited out.]
Points where I agree with the paper:
Points where I disagree with the paper:
Yes, I did notice you're subverting the trope here, it was very well done :)
I cried a lot, especially in the ending. Also really liked the concept of the witch doing all this for the sake of other/future people. And, wow, this part:
“There is beauty in the world and there is a horror,” she said, “and I would not miss a second of the beauty and I will not close my eyes to the horror.”
Kudos for the initiative! I think it makes sense to crosspost this to LessWrong.
Can you (or someone) write a TLDR of why "helping others" would turn off "progressives"?
I am deeply touched and honored by this endorsement. I wish to thank the LTFF and all the donors who support the LTFF from the bottom of my heart, and promise you that I will do my utmost to justify your trust.
Personally I prefer websites, since they seem to be more efficient in terms of time and travel distance. Especially in the COVID era, online is better. Although I guess it's possible to do an online speed-dating event.