THIS.
The 'hyper-optimisation' approach that organisations adopt when trying to recruit the 'best' talent comes at the cost of a huge waste of time and energy for countless candidates who don't even stand a chance of getting the job. In my view, it's a textbook example of maximisation gone wrong.
What you suggest (capping applications at a given number, say the first 70, and then closing the posting) is in my view a good compromise since, as you say, after a certain point you're unlikely to get a noticeably better sample. Meanwhile, all the candidates who wouldn't realistically stand a viable chance save themselves some time and don't apply.
I'd like to contribute my two cents in the form of a meta comment on the discussion above, particularly on the points made by @Yarrow Bouchard 🔸 and @David Mathers🔸.
What you guys are doing is the very valuable job of sifting through evidence and signals pointing towards factors that could either stall or accelerate progress towards AGI, and making some sort of epistemological analysis of which evidence we should give more credit to when thinking about timelines. This in turn informs very pragmatic day-to-day decisions, like how we should best spend our money and time in order to have the best shot at creating goodness for Humanity.
I have my own views on each individual point you raised, but regardless of my opinions, I'd like to talk about the practical uses of such analyses, and about the next step: namely, what do we do with all this?
My best shot at a guiding principle for action in times of uncertainty is to try and act with the following reasoning:
What actions can I take, so that even if I turn out to have been completely wrong in my 'predictions', my actions are still very likely to make a positive impact on humanity / not be wasted?
In light of this:
- Investing tons of money in the stocks and shares of frontier AI companies would not be a wise course of action, because in the event of an AI bubble popping, I'd have lost valuable money that I could instead have invested in other impactful causes.
- Investing in AI safety and AI security in the broad sense would be a wise course of action, because even if we turn out to be massively wrong about when AGI will come, our safety investments would not be wasted and would still deliver actual benefits to society (e.g. improved democratic processes around AI policy, better cybersecurity, better biorisk security, etc.)
To illustrate this further, let's exaggerate ad absurdum and imagine for a moment that despite all the evidence we thought we had, human-made climate change was actually a hoax, and the planet is fine without our intervention.
EVEN if that were the case, the efforts made to combat climate change, such as making sure people and companies stop polluting as much, saving spaces for nature and biodiversity, etc., would still not have been in vain, as by doing so we delivered really nice things for people and animals.
In other words, I think we should try to pick actions that address the worst-case scenario, but that also wouldn't go to waste if we turned out to be massively wrong about how likely that worst-case scenario is.
Hi @abrahamrowe , would you be willing to share more information on this point?
Organizations wanted this to exist.
- Organizations would be happy to recruit candidates out of a shared hiring pool.
I'm preparing an article with @Anaeli V. 🔹 and others about this and would love some more evidence that organisations are looking for a simplified system.
Could you also clarify this point? Why do you think it would generate no savings despite organisations reporting they would save a lot of time?
- While this process seems like it might produce savings, based on the time savings organizations reported this would generate for them, my estimate was that the cost-effectiveness of a funder paying for this service to exist was pretty low.
I see and agree with your point regarding credibility. Would you mind sharing why you think your organisation didn't achieve the necessary credibility in the eyes of recruiters, and what you see as conducive to reaching that credibility?
Thanks in advance for your help! :D
Thank you so much for this. Commenting for reach and also because I want to re-read it later in depth. I very much agree the system is broken, although the problem is more general and not EA-specific. However, I do agree with you that the EA ecosystem has huge potential for streamlining the process due to shared values and usually similar recruitment processes.
I'm preparing a piece about it and will DM you the draft - would love to get your input on it.
Currently in my drafts are pieces to do with reforming the way recruitment could work within the EA ecosystem, and one about a potential cause area, namely the greening of 'concrete' (as in the construction material) - it's highly tractable, neglected and could potentially help reduce global emissions in a non-trivial way. The industry is already moving in that direction so interventions in that area would be about accelerating that change. :)
Strongly upvoted this. It's three years old but as relevant now as ever.
I really liked the nuance captured about the perils of maximisation, also very clearly expressed by @Holden Karnofsky with this post: EA is about maximization, and maximization is perilous
All of it really resonates with how I conceive of the best way to do Effective Altruism, especially the bits about why naive utilitarianism is bad and about being respectful of common-sense morality!
I second this - I think that whether it is in impactful circles or not, everyone has frustrations with the way recruitment is done nowadays.
See this post from 2022 which in my view summarises all the main complaints: siloed processes; no ability to build a holistic picture of each applicant over time, or to share insights from potential recruiters / career advisers on how each individual could best serve the movement; stale talent databases leading nowhere; and most high-absorption opportunities being poorly integrated into the recruitment process and generally excluding lots of useful demographics / skillsets (mid-career professionals, people from Humanities backgrounds, etc.)
However, I see huge potential for a centralised, pooled recruitment system amongst an ecosystem of EA orgs, emulating what we already see in other contexts where a group of organisations seeks to recruit from a highly qualified pool of people (UK civil service, Oxbridge admissions, Ambitious Impact Charity Entrepreneurship Incubator, etc.)
I've been drafting a post about my ideas for reforming the talent acquisition system. I'm currently seeking feedback on it from a small group of people before posting (I'm shy like that!) but watch this space as it should come out quite soon.
PS: responding to my first comment about how we don't yet have a proper definition of AGI -
We do in fact have workable definitions of what AGI could be (such as AI fulfilling 90% of human tasks at a level equal or superior to a human), even though in practice many people don't bother defining their terms before using them.
Below are some useful descriptive terms I've learned about. In practice, they are worth defining more precisely when you use them, so others know exactly what you mean.
TAI (transformative AI): The point at which AI has replaced humans to such an extent that it has fundamentally transformed society. (E.g. someone might think this is reached when AI can do 60% of economically significant tasks.) This is an impact-focussed term rather than a capability-focussed term.
AGI: Artificial General Intelligence. The point at which AI can do most human activities at the same level as a human. This is capability-focussed, and again, what it really means is up to the writer to define.
ASI: Artificial Super Intelligence. The point at which AI can do every human activity better than humans. At this point, arguably, we have no control over it anymore.
Very fair challenge; I think the EA movement is quick to 'justify' being based in very expensive areas by arguing that talent is more concentrated there. But there are arguments to be made that replicating this kind of 'talent cauldron' in a cheaper location would cost less than sustaining even just 'decent' standards of living in some of the world's most expensive places, such as the Bay Area.