Christian Pearson

Cofounder @ Insights for Impact YouTube Channel
Working (6-15 years of experience)

Bio


Learn more about me at https://christianpearson.ca

EA Focus: Communications, Video and Media
Cause areas: X-Risk and S-Risk reduction, global priorities, and animal rights.

How others can help me

I am interested in forming connections within the EA Communications space. I am potentially open to collaboration, so please send me a message if you have a project in mind!

How I can help others

I may be available to help valuable X-Risk and communications projects. I am open to contributing video editing, scriptwriting, graphic design, or even just a pair of eyes and ears to bounce ideas off of. I'm also interested in building relationships and adding solidarity to your endeavours. 

Posts
1


Comments
7

Thanks for posting this, Luisa!

Depression is a beast that can affect anyone. While I have been lucky enough to evade it, I know plenty of others who suffer from it.

It's really cool that you kept such detailed logs of your mood, wellbeing, and side effects! Your analysis is very thorough. I hadn't heard of "brain zaps" before! This information will no doubt be useful to others.

I truly wish you the best on your journey toward fulfilment and happiness!

I enjoyed reading this take from Robert! It's a sincerely fun, refreshing, and erudite perspective. I am grateful to him for putting in the time to write such a thoughtful account.

I resonate with his points, especially his observations about many EAs being "young and lost". I found EA at a time when I was soul searching, and its frameworks helped me identify what impact could mean. But I don't fit the college EA group demographic, and I notice that some younger EAs struggle with self-assuredness. So it makes me happy that people approached the tent and attended the talk at the conference. Robert & Co's comms experience would no doubt be alluring and helpful.

As exub2a writes in ep 3 of his Catastrotivity podcast: when a benevolent, friendly critic gives you advice, listening can only help you. Robert strikes me as one of these true allies in the fight to make the world a better place.

Excellent advice! Thank you for linking to additional sources! Your post has already influenced my project for the better by aligning my thinking toward better branding options.

I'd be interested in hearing more about other "common mistakes" that EA orgs may be making, outside of naming.

Very nice piece!

Your writeup is useful to me, as my casual reading into geopolitics and power has recently had me thinking more about Samo's work, and how little of it I currently understand. It's great to have a broader sense of his ideas before I dig into more of his individual articles and videos.

Since he appears to be a valuable expert whose opinions and work overlap with EA, I was surprised that your post is among the few mentions of him on the forum. You have done great work formulating an introduction to his ideas.

When you write:

"He believes that the lack of functional institutions, combined with their significant dependence on each other, creates systemic risks that significant technologies and capabilities will be lost by society. I suspect he sees this from a more longtermist frame, wherein he believes functional institutions should attempt to safeguard these capabilities for the long-term. As opposed to say an AI researcher's frame that assumes we'll deploy aligned AI this century with high probability and then all this won't matter."

it reminds me that I am interested in his knowledge of and views on AI trends, given that he is a civilizational collapse theorist. He seems aware of the alignment problem, and briefly spoke about AI governance strategies in this video. He stresses the importance of concrete steps, and I have attempted to un-rigorously summarize some of them:

1. He foresees the creation of some kind of institution, like an "AI Scientist Association", that identifies the highest risk forms of AI research.
2. He mentions surveillance of AI development, and asks the technical question of whether we can effectively monitor for this.
3. He expects Governments will try to use software-regulating software, if point 2 becomes feasible.
4. Over a longer timeframe, he sees international cooperation as important, with direct China-US academic collaboration likely to resist attempts at disruption, short of deliberate political will.

I currently don't know enough about AI governance to know if better ideas exist in this space within EA. As the talk was brief, and from 2019, I suspect both his ideas and the rest of the community have progressed a lot further since then. Please correct me if I'm wrong, as I want to learn more.

Hi Harrison!

Thank you for taking the time to comment! I agree with your thoughtful assessment: my 'framework' is certainly not rigorous, nor useful to others aside from helping me think about how my approach has progressed. As my post is more a personal reflection, I think 'framework' is too generous as a description.

I appreciate you sharing a counter-perspective. I agree that if note-linking PKM is used at all, the inline links need to be obviously relevant to the idea and structure of the note.

As a peer-review system, it is useful that the forum optimizes for a higher standard of rigor than my post offers. I feel this challenged me to write more carefully and thoroughly in the future.

Hi Evie. I really like this post! You chose a great topic, given that communication and relationship-building skills are important to everyone.

The idea of a person's projection is a powerful one. I like how image management can just be a matter of sticking to a few key story elements about yourself—which makes it easier for others to place you in their mental map. My experience has been the same: that it pays to optimize for serendipity, and have many pots churning at once. It's amazing how random contacts made a decade ago can re-enter your life. I discovered EA through this angle; an old friend reached out and told me about it.

I really like the question "who do you know?" I have added that to my notes on this topic.

I strongly agree that timing is important. I like the framework of interest → ask. It matches my experience: while people might like you, a mutual exchange of value requires both parties knowing what each other seeks.

I wanted to add a small idea to this. I have a friend who is a successful businessman who owns hotels. He has loads of connections. One tip that I borrow from him is the "I'll buy you lunch" strategy. Take any person you want to get to know, reach out, and offer to buy them lunch. When combined with the post's other ideas on reaching out, it is very hard for anyone to turn down a free lunch—and they'll remember you. Overall, likeability is everything, and people like those who lead with value.

Thanks for the post! I enjoyed reading it.

Great idea for a survey! I have submitted my answers.

I really liked that the list of cause areas is extensive, that you allow multiple choices, and that you offer an "Other" category. I appreciate that the survey was reasonable in length, and that it felt well thought-out.

One potential point of improvement: for the optional answer fields, I wasn't sure how long to make my responses. From a web form perspective, the fields look nice. However, their single-line default size led me to feel that a short response was preferred. It might be helpful to clarify the expected length of responses, such as by stating that there is none. I reason that longer answers would give organizations deciding on candidates more information.

Another idea is to summarize all the answers given, allowing one to tweak any mistakes. Though I'm not sure this is possible within the limitations of how you put it together.

Overall, excellent work! I sincerely thank you for taking the time to put this project together. I feel that it will be helpful for people who, like me, are very interested in forming connections to Effective Organizations.