
Increasingly, I find myself in situations where people ask me for advice on important decisions (e.g., career decisions), and I want to do this well. Unfortunately, I find that giving good advice is difficult. Therefore, I decided to practice it deliberately and have created a five-step algorithm that I've found useful. I'm sharing the steps, principles, and underlying reasoning, hoping that they'll be useful to you too!

 

Why is it important?

Like many other EAs, I find myself surrounded by more and more people who pursue their careers with drive and a dedication to altruism. This exciting position comes with an increasing number of opportunities to provide input on crucial decisions about their careers and lives in general. Back in February, I realized that there's a solid chance that a good chunk of my impact will come from helping those around me be more effective in their thinking, decision-making, and conduct!

Unfortunately, I find that giving good advice is really difficult, and unnuanced advice may very well be actively harmful. This cocktail of importance and difficulty prompted me to sit down and put some thought into how to do this well, hoping that it'd enable me to help others make nuanced and effective decisions in their pursuit of making the world a better place. I created a five-step algorithm that I try to follow whenever I'm giving advice, so that I bring my A-game, and I'm sharing it here, hoping that it'll be useful to you too.

 

What is it based on?

Before we dive in, I want to clarify that this isn't the most rigorous thing out there. It's mainly based on i) my personal deliberate practice with giving advice, ii) providing career coaching as a group organizer, and iii) conversations with people whom I find to be extraordinary communicators. My personal deliberate practice covers around 10-20 "high-stakes" decisions (e.g., "Should I take this job or do a PhD?"), hundreds of smaller advice-giving situations, and around 20 career coaching sessions with some feedback loops. In the following, I'll go through the steps and principles that are currently very useful to me and explain the underlying reasoning.

 

The Algorithm

Step one: Create an implementation intention

An implementation intention is a simple "if-then plan" and is a widely used way of eliciting behavioral change (e.g., implementing a new habit). If you're familiar with the work of CFAR, you'll know this as a Trigger Action Plan (TAP). This step may seem trivial, but when I formulate a clear intention of bringing my A-game (and this algorithm) to advice-giving, I'm much more inclined to actually use these "best practice" principles. Therefore, clarifying triggers is key. The triggers (advice requests or situations) can range from very clear to highly unspecific. Here are some examples:

"Hi, I'm in the process of applying for this job and have made it to the third stage of the application process. Now that things are getting real, I'd like to get your view on whether I should take this job or accept a PhD-position at Oxford."

"Hi, I'm considering what the topic of my master's thesis should be to best set me up for a career in biosecurity. Do you have any tips?"

"Hi! It'd be great to catch up and also talk about something that's on my mind in the realm of careers."

Relatedly, Michelle Hutchinson wrote an excellent post on how to ask for advice, which outlines an approach for requesting advice in a useful and computationally kind way. 

 

Step two: Begin with the end in mind (just before the advice-giving)

Here I remind myself why I'm doing this. My reminder is something along the lines of "I want to be extraordinarily useful and enable her to become her most impactful self and lead a meaningful life." You may prefer something less sugar-coated, which is perfectly fine - the important thing is that it makes sense to you. Once I've formulated the purpose, I resolve to stay true to this purpose as I set out to understand how I can best help the other person.

I also keep a list of go-to resources that I find myself recommending frequently. One example is:

Charity Entrepreneurship's spreadsheet approach (mainly steps 1-5)

Note: I try to remind myself that it's important not to flood the person with resources. I've experienced receiving 3-6 recommendations of super lengthy documents at once, which often just leads to me not reading any of them.

 

Step three: Seek first to understand - then to be understood (during the advice-giving)

This is one of my biggest weaknesses. I often fool myself into believing, prematurely, that I know what's going on. I'll be like, "Okay, to me, it seems that you care about x and that all you have to do is y and reach out to z". I think this misguided tendency is due to three related silly beliefs.

Belief 1: Smart people are quick to understand what's going on and to assert answers. I like to think that I'm smart and, therefore, that I have to make up my mind and assert my answer quickly.

Belief 2: Others are simply permutations of myself and think, feel, and behave similarly to me. So as soon as I recognize a pattern or a situation from my own life, I readily assume that I know what's going on.

Belief 3: The other person expects me to actively and eagerly participate in the conversation, so I'm supposed to talk a lot - I can't just sit and listen.

Therefore, I have to remind myself that these beliefs aren't aligned with reality or with the purpose I'm trying to stay true to. Instead, I try to remain humble, ask many questions, and summarize what the other person says to ensure that I understand the situation before I speak my mind.

 

Step four: Think before you speak (during the advice-giving)

Now that I'm mindful of the purpose and understand what's going on, it's time to really think before I give advice - at least with high-stakes decisions - and not just say the first thing that comes to mind.

I try to take 15-120 seconds to just sit and think about how I can provide as much value as possible. If I'm having a conversation and am worried that the silence will feel awkward, I'll simply say that I'll take two minutes to think while getting a glass of water. While thinking, I remind myself of the purpose that I resolved to stay true to. A prompt I like to use at this point is: "What would be the most valuable input I could provide, on the margin?"

 

Step five: Level up (after the advice-giving)

Having good feedback loops is key for leveling up. At the very least, I take two minutes to reflect by asking myself two questions: What went well? What could be improved? However, I've found that my intuitive reflections may be off. In one case, I thought to myself, "Wow, I did really well on this part!" but I later discovered, via feedback from the other person, that this was precisely the thing that was the least useful to her. So, if the situation allows for it, I ask the other person for feedback. Getting their view on one thing that went well and one thing that could be improved is super valuable!

 

There you have it - my five-step algorithm for giving advice. If you like the steps outlined above, you can simply copy-paste them into a document that you can open the next time you find yourself in an advice-giving situation. However, don't follow the approach with robotic rigidity or excessive deliberation - that probably won't lead to the best results, as it may make the conversation feel contrived. If nothing else, I hope that you're also excited about going out there and giving good advice, as it may be an excellent opportunity to help others make good decisions for the betterment of our fellow sentient beings!

 

PS: Please share your tips and thoughts on how this could be improved, as I'm certain that this isn't the ideal approach.

Comments

I like your guidelines. Some others that come to mind:

-Some people are not just looking for advice but also looking to avoid the responsibility of choosing for themselves (they want someone else to tell them what the right answer is). I think it's important to resist this and remind people that, ultimately, it's their responsibility to make the decision.

-If someone seems to be making a decision out of fear or anxiety, I try to address this and de-dramatize the different options. People rarely make their best decisions if they're afraid of the outcomes.

-I try to show my work and give the considerations behind different pieces of advice. That way if they get new evidence later they can integrate it with the considerations rather than starting from scratch.

You also sort of touch on this, but I think it's also helpful to convey when you have genuine uncertainty (without tipping over into needless hedging and underconfidence) and to say when you think someone else (whom they have access to) would be likely to have more informed advice on a particular question.

Especially with career decisions, I actually think that it can be good to start out with some noticeable gesture that makes people realize this clearly, and to use language like "this is my impression" or "my tentative judgment call is".

-I agree. We live in a time when people seek a lot of validation and just want to be told what to do. It's super important to encourage them to take agency and not just defer completely to others.

-Excellent point. If people are in a challenged state, I also see the priority as changing that state, e.g., increasing their hope, agency, and light-heartedness.

-Reasoning transparency is great, especially because people will otherwise be inclined to over-anchor on the specific suggestion instead of the considerations that led to it.
