I am a cognitive scientist specializing in rationality under radical uncertainty.
For many years I worked full-time in effective altruism community building, communication, and outreach.
I am looking for what to do next; this could be a PhD or any other role uniquely fitted to my profile and interests.
Reach out to me if you have questions about EA ideas and concepts, the EA community and landscape, rationality literature, cognitive science, psychology, artificial intelligence, science or communications more broadly, marketing, complexity science, meta tribe, mental health, weight lifting...
Hey Jay,
Over the years, I have talked to many very successful and productive people, and most, in fact, do not work more than 20 productive hours per week. If you have a job with meetings and low-effort tasks in between, it's easy to get to 40-plus hours. Every independent worker who measures hours of real mental effort is more in the 4-5 hours per day range. People who say otherwise tend to lie, and they change their numbers if you press them on the details of what "counts as work" to them. It's a marathon, and if you get into that range every day, you'll do well.
Thank you for putting this together!
I recommend the book "Sometimes Brilliant" about Larry Brilliant's life in this context! I read it with so much joy this year.
As mentioned in the article Effective Altruism as "nish kam karma yoga" [Larry Brilliant]
It's already been removed from the final selection for various other reasons, mostly the redundancy of the values promoted, which are already covered in other works that will be on the list!
Thank you! And yes, excellent points.
I haven't personally read Deutsch yet; does he reflect evolutionary thought well?
Both the systems books on the list (explicitly) and Taleb's book (implicitly) describe much of the notion of evolutionary processes. Are there major points missing?
Thank you! I added Lean Startup to the current list and removed the two books.
My personal stance on books is that it is to be expected that every author has their strong points and their epistemic low points. Joy's book was very important to my own understanding, and I personally felt I could easily ignore the minor faults relative to the main argument. Different readers have different preferences regarding the distribution of knowledge and error in a book; I'd personally be okay with this one.
In consulting, this is also referred to as the "Minto pyramid" or the "pyramid principle" of communication: you state the key message and main insight concisely as the first step.
In a fast-paced environment, this ensures that the piece with the highest information density gets delivered before, for example, you are interrupted. It also enables the listener to decide early on whether the topic is relevant to them, which shows respect for their attention.
After delivering the key message, you can elaborate on the underlying insights or on what brought you to the conclusion. If further elaboration is necessary, or there is time for it, the receiver of the message can ask further questions.
Here is an image to visualize the principle. The related book is called: "The Pyramid Principle: Logic in Writing and Thinking" by Barbara Minto.
Thank you to the many of you who have filled out the Google form as an alternative to writing a comment!
Here are some great points made:
Introduction:
Ethics:
Rationality:
Economics:
AI:
Community and Soft Skills:
Happiness:
Transhumanism:
Other:
Are there categories missing?
Should a category be removed?
Is anything important missing?
Which books, if understood by others, would make you more confident in collaborating with them?
I agree with the points you made, and I tried to explicitly address similar arguments in the original post!
The poster is meant to distill exactly the "canon" or "current orthodoxy" by aggregating what people currently, and historically so far, have found most important to read. It is explicitly meant to convey that meta-knowledge quickly, for several reasons: it gives a fast introduction to both the meta-knowledge and the content itself, and it provides a representation on which to build precisely what you describe, namely criticism of what is missing. For example, one could make "10 knowledge spheres and their books, from which EA could gain a lot" as a co-creational answer to the poster. Then more people could read in those directions, and the next poster in a couple of years could include more of the new books that many found crucial. The post also explicitly states that it is not meant to encourage anyone to read all of the books.
This project is not centralized at all btw. I'm one member of the community making a draft to ask other community members what they think about it. It's an open conversation.
By having such a representation, you can also more quickly tell whether a book is actually a "venturing out" into new territory, or whether it only seems so because of one's lack of knowledge of how central it already is in the community. I have seen it many times: someone reads a book and feels they have found something completely new and important, while many in the community have already read and thought through exactly the same literature.
Think "introductory textbook" into a field as an analogy. It's difficult to make an argument that they shouldn't exist because they don't already in-depth contain all the other options and criticisms of the stances. The metaphor often used for this problem is that of Wittgenstein's ladder: "My propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them—as steps—to climb beyond them. (He must, so to speak, throw away the ladder after he has climbed up it.) He must transcend these propositions, and then he will see the world aright."
People who like Logan's post on EA burnout will love Tyler Alterman's post on Effective altruism in the garden of ends. Both are close to my own experience.