This is a Draft Amnesty Week draft. I don't think this is likely to be applicable in the 'real world', but it seems worth asking just in case!
Commenting and feedback guidelines: I'm happy to receive any questions, responses or similar!

Key Question: Should I act differently when applying to an 'EA role', compared to applying for a 'standard' role?

Here are my thoughts:

  • When I'm applying to certain job roles, I (should) care more about the overall best outcome (counterfactual impact) than the personally best outcome.
    • The personally best outcome is that I get hired for the role.
    • The overall best outcome is that the best person gets hired for the role, whether or not that is me.
    • Sometimes these will overlap (when I'm the best candidate). Often they will not.
  • There are roughly 3 categories of job that I might apply to:
    1. 'Standard Jobs': a role where there is no significant positive impact expected.
      1. Example: Getting a job working at my local supermarket.
      2. I only care about the personal outcome.
      3. I expect all applicants to only care about the personal outcome.
    2. 'Do-Good Jobs': a role where I expect to have a positive impact, but don't expect to be competing against other EA-aligned people (on average).
      1. Example: Working in a government role where I can influence policy in an important domain (AI Policy, Animal Welfare, etc).
      2. I care about the overall outcome. I want the best person hired.
      3. I expect most applicants to be mostly/exclusively concerned with the personal outcome.
    3. 'EA Jobs': a role at an EA org or similar, where I expect a high proportion of the applicants to be EA-aligned in their thinking and principles.
      1. Example: Working at OpenPhil or GWWC.
      2. I care more about the overall outcome than the personal outcome.
      3. I also expect that most applicants care likewise, more about the overall outcome than their personal outcome.
    4. NOTE: In reality, I think this is a spectrum rather than a three-way division. The categories are more like useful placeholders.
  • For 'standard jobs', the best plan is just to do your best, be competitive, and get the job!
  • For 'do-good jobs', it seems that I should probably adopt the instrumental goal of getting hired myself, and therefore default to the same methods as 'standard jobs'.
    • This will make more sense in comparison with 'EA jobs' below.
    • It's likely that the best overall outcome aligns with the best personal outcome (ie. that I am the best person for the job).
      • This assumes a bunch of implausible stuff about my abilities and having 'the one true good worldview', but I'm glossing over that for now.
      • The dynamic I'm going for is "Here's a tech policy role. AI alignment is the way to do the overall most good. Most people won't care much about AI alignment. So the way to do the overall most good is for me to get the role and do AI alignment stuff".
    • It's plausible that the best overall outcome is for another person to get hired. But I should probably ignore this.
      • Only a small proportion of applicants will be 'better' than me.
      • By decreasing my own chances of getting hired, I'm boosting everyone else's chance, so increasing the likelihood of a more negative outcome.
  • For 'EA jobs', I should probably not be completely competitive. Pursuing the personal best is unlikely to be the way to get the overall best.
    • It's likely that the best overall outcome is for someone else to be hired (ie. that someone else is better for the role).
    • If I pursue the best personal outcome, this might result in a worse overall outcome.
  • In practice, this might look like being much more epistemically honest about my own capabilities and lack thereof.
    • This decreases the chance of the best personal outcome.
    • This increases the chance of the best overall outcome.
    • Example 1: The application lists many requirements, including 'excellent time management'. I could talk about how I fit the requirements. I could say "I fit all the requirements, {...}, but my time management is pretty poor. If this is really key to the role, you should probably go for someone else".
      • If I'm in a standard process, I should do the first.
      • If I'm in an 'EA process', I should say the second, to increase the chance of the best overall outcome.
    • Example 2: The application asks for programming experience. I could say "I've had experience with Python, building an ML app". I could say "I've done about 10 hours of programming, I copied a couple of templates, made a few minor UI changes, and linked the two templates".
      • If I'm in a standard process, I should say Sentence 1 to give me the best chance of a good personal outcome.
      • If I'm in an 'EA process', I should say Sentence 2, to give the most information to the decision maker, and increase the chance of the best overall outcome.
  • There are Game Theory dynamics going on here.
    • If everyone in the hiring process cares about the overall outcome, and everyone is epistemically honest, then the best decision gets made.
    • If most people care about the overall outcome and are honest, but some people care about personal outcomes and aren't honest, then those people probably get hired.
    • There is probably a point at which the proportion of people driven by personal outcomes is low enough that the expected value of epistemic honesty is still high (see the toy sketch just after this list).
  • Conclusion - when I'm doing EA Job applications, I should plausibly be epistemically honest to the point of the examples above, even though this gives me a lower chance of getting hired.
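
To make that threshold intuition a bit more concrete, here is a minimal toy sketch (entirely my own made-up setup and numbers, not anything from the post or from real hiring data): each applicant has a true 'fit' for the role, honest applicants report it accurately, personally-driven applicants inflate their report by a fixed amount, and the employer simply hires whoever reports the highest fit. We can then look at how often the genuinely best applicant gets hired as the share of inflating applicants varies.

```python
# Toy model: honest applicants report their true fit; "personally driven"
# applicants inflate their report by a fixed amount. The employer hires the
# highest *reported* fit. All parameters here are arbitrary assumptions.
import random

def simulate(frac_inflating, n_applicants=20, n_trials=10_000, inflation=0.5):
    best_hired = 0
    for _ in range(n_trials):
        true_fit = [random.gauss(0, 1) for _ in range(n_applicants)]
        reported = [
            fit + (inflation if random.random() < frac_inflating else 0.0)
            for fit in true_fit
        ]
        hired = max(range(n_applicants), key=lambda i: reported[i])
        best = max(range(n_applicants), key=lambda i: true_fit[i])
        best_hired += (hired == best)
    return best_hired / n_trials

for frac in [0.0, 0.1, 0.3, 0.5, 0.9, 1.0]:
    print(f"{frac:.0%} inflating -> best candidate hired "
          f"{simulate(frac):.1%} of the time")
```

In this toy setup, decision quality suffers most when a minority of applicants inflate (they leapfrog better honest candidates), which matches the worry above that such people 'probably get hired'. Whether real EA hiring rounds resemble this at all is exactly the open question.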

Here are a bunch of sub-questions about this:

  • Do some 'EA roles' meet the above conditions, where enough people care about the overall outcome that extra epistemic honesty is high EV?
    • If so, which ones? How can you tell?
    • If so, how should you act differently practically? What types of epistemic honesty are good, and what is 'too far' (if anything)?
    • If so, can we change the overall hiring process for these roles, to account for the fact that many applicants care about the overall goals?
  • How, if at all, can we deal with the likelihood that some applicants come in purely interested in their personal outcome?
  • Are any of the suggested changes better than the standard 'just be competitive' advice?

NOTE: I don't hugely endorse the terminology of 'EA role' and 'EA people' but am using it for Draft Amnesty speedrun reasons.

Answers

I know there is more nuance in your post, but if I take your title at face value, I would say: When I'm evaluating candidates and I catch you not being honest (ie. lying or distorting the truth), I'm going to reject your application. If I catch you lying outright, I'm never going to consider you again as a candidate. If I find out after you were hired that you lied during the application process, I would probably do my best to get you fired. (I mean the ‘you’ in a general sense. I'm not suggesting that you, JDLC, would lie.)

If you give honest but unspecific answers, and it's about an important skill, I'm going to ask you follow-up questions to figure out what's going on.

Comments

(even larger disclaimer than usual: i don't have much experience applying to EA orgs, i'm also not trying to give career advice and wouldn't recommend taking career advice from me, ymmv)

Thanks for posting! I'm broadly sympathetic to this line of reasoning. One thing I wanted to note was that hiring processes seem pretty noisy, and lots of people seem pretty bad at estimating how good they are at things, so I think in practice there might not be that much difference between trying to get yourself hired vs. trying to get the best candidate hired. I think a reasonable heuristic is "try to do well at all the interviews/work tests, as you would for a normal job, but don't rule yourself out in advance, and be very honest and transparent if you're asked specific questions".

This question has troubled me as well, plus the idea that once you get a high-impact job, if it turns out not to be a perfect fit, there are transaction costs to the organisation replacing you with a better candidate.
