
Are you an effective altruist or an altruistic rationalist (also known as avant-garde effective altruism)? It's like milk chocolate versus chocolate milk: The second word is the main thing. Take this fun quiz to find out where you are on the spectrum!

 

Your friend decides to donate money to the top GiveWell charities instead of paying tips. They live in Canada, where the minimum wage is about $11 USD. You think this is a good idea.

1 point - Strongly agree

2 points - Mostly agree

3 points - Neither agree nor disagree

4 points - Mostly disagree

5 points - Strongly disagree


 

How often do you tell people your epistemic status?

1 point - Epi-what?

2 points - Never, but I know what it means

3 points - 1-3 times in my life

4 points - multiple times in the past year

5 points - multiple times in the past month


 

When you decide how to spend your time contributing to EA, do you think about how you can have a positive impact on the most lives, present or future, human or otherwise? Or do you choose what to do based on the communities you have access to, and your skill set?

1 point - Always choose to maximize lives, present or future, human or otherwise

2 points - Mostly choose to maximize lives, present or future, human or otherwise

3 points - I'm 50/50

4 points - Mostly choose based on my communities and skill set

5 points - Always choose based on my communities and skill set


 

People will eventually become digital beings and it will be easy to be happy, which means that we should make as many digital people as possible, to maximize utility.

1 point - Strongly agree

2 points - Mostly agree

3 points - Neither agree nor disagree

4 points - Mostly disagree

5 points - This is too hypothetical to be worth thinking seriously about


 

You care about and donate to causes that you know don't give good ROI in terms of suffering reduced (though you might consider the effectiveness of different organizations within that cause area).

5 points - Yes

4 points - I do but I feel kind of guilty about it

3 points - I do but not very intentionally, like if my cousin asks me to donate for their marathon for AIDS

2 points - I try not to

1 point - No


 

Sometimes you have to remind yourself not to think too hard about an AI that would torture you.

1 point - What

2 points - I know what this is about, and I think it's dumb

3 points - Neutral

4 points - Agree

5 points - This gives me anxiety


 

A bill would increase the well-being of many chickens but decrease the well-being of some farmers. Mathematicians you trust estimate, with high confidence, that the increase in the chickens' well-being is worth two to four times the decrease in the farmers' well-being. You support the bill.

1 point - Strongly agree

2 points - Mostly agree

3 points - Neither agree nor disagree

4 points - Mostly disagree

5 points - Strongly disagree


 

How often do you worry about not doing the right thing instead of actually doing a thing?

5 points - Always

4 points - Usually

3 points - Sometimes

2 points - Not often

1 point - Never

 

___________________

 

Add up your score! Scores range from 8 to 40, placing you somewhere on this line graph:
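
For the spreadsheet-inclined, the scoring can be sketched in a few lines of Python (the function name and result labels are my own shorthand for the bands below, not part of any official quiz engine):

```python
# Illustrative scorer for the quiz above: 8 questions, each worth 1-5 points,
# so totals run from 8 to 40 and map onto the five result bands.

RESULTS = [
    (8, 14, "Eliezer Yudkowsky (100% altruistic rationalist)"),
    (15, 21, "Will MacAskill (30% EA, 70% altruistic rationalist)"),
    (22, 27, "Julia Galef (50% EA, 50% altruistic rationalist)"),
    (28, 34, "Julia Wise (80% EA, 20% altruistic rationalist)"),
    (35, 40, "Leila Janah (100% effective altruist)"),
]

def quiz_result(answers):
    """Sum per-question points and look up the matching result band."""
    if len(answers) != 8 or not all(1 <= a <= 5 for a in answers):
        raise ValueError("expected 8 answers, each worth 1-5 points")
    total = sum(answers)
    for low, high, name in RESULTS:
        if low <= total <= high:
            return total, name

print(quiz_result([5, 4, 5, 4, 5, 4, 5, 4]))  # total 36 -> Leila Janah band
```
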


Which personality result did you get? Post in the comments below!

 

Score 8-14: Eliezer Yudkowsky

100% altruistic rationalist

You sometimes have dreams where you're arguing with someone on the LessWrong forum, because you do it so much in real life. You're an independent thinker and not afraid to be contrarian, garnering you lots of respect for your thought leadership. You think a lot about AI safety because it's been mathematically calculated to be likely to save a lot of lives (of people who don't exist yet), and also because it lets you exercise your big brain.

 

Score 15-21: Will MacAskill

30% effective altruist, 70% altruistic rationalist

You are really good at connecting with the average person about the benefits of effective charity giving. But what really makes you tick is academic exploration of the more extreme philosophical implications of effective altruism, such as catastrophic risks and other longtermist concepts. You wish the movement weren't moving in the direction of a thousand shitty AI researchers, but you're not sure what to do about it.

 

Score 22-27: Julia Galef

50% effective altruist, 50% altruistic rationalist

Whether it's your PhD or the book you're writing, you often look at what you've been working on, become skeptical, and re-derive it all from first principles to root out bias. The trait you find most attractive is the willingness to publicly admit to being wrong. You care as much about striving for correctness as you do about having a positive impact with your work, and you hope to help others make the best decisions and arrive at the best outcomes.

 

Score 28-34: Julia Wise

80% effective altruist, 20% altruistic rationalist

After years of thinking about how to use your time and money to do the most good, you've arrived at a place where you can give a significant share of your income or hours to effective causes while still prioritizing things that bring you joy. You invest in family, friends, hobbies, and even things that are good but not maximally effective, because they are restorative and make you happy. It's something you have to stay conscious of and continually work on, but you know you can contribute most effectively in the long term when you do it while cheerful and mentally balanced.

 

Score 35-40: Leila Janah

100% effective altruist

You were onto 'growth and the case against randomista development' before anyone else. You didn't read too much into what the philosophers and mathematicians were saying about how to do the most good -- you just followed your heart and dove right in. It's always been obvious to you that finding ways for people who don't have many opportunities to make more money would empower them to improve the quality of life for themselves and their communities. You are entrepreneurial, extremely hardworking, and highly empathetic. Your authenticity and drive make you a natural leader. The people you help might not know your name, but they know the organizations you run and the impact you've had on their lives.


 

Comments



Which personality result did you get? Is any of it accurate?? Post in the comments below!

Also if you are one of the people I named and want me to correct something, I will do it! Just comment or DM me. I hope I am providing enough comedic value to justify a little roasting...
