Forethought is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar.
We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window.
More details on our website.
Why we exist
We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don’t yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared.
Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating moral uncertainty), and what we can do, now, to build towards a good future.
Like all projects, this started out with a plethora of Google docs. We ran a series of seminars to explore the ideas further, and that cascaded into an organization.
This area of work feels to us like the early days of EA: we’re exploring unusual, neglected ideas, and finding research progress surprisingly tractable. And while we start out with (literally) galaxy-brained schemes, they often ground out into fairly specific and concrete ideas about what should happen next. Of course, we’re bringing principles like scope sensitivity, impartiality, etc. to our thinking, and we think that these issues urgently need more morally dedicated and thoughtful people working on them.
Research
Research agendas
We are currently pursuing the following perspectives:
* Preparing for the intelligence explosion: If AI drives explosive growth, there will be an enormous number of challenges we have to face. In addition to misalignment risk and biorisk, this potentially includes: how to govern the development of new weapons of mass destruction, …
I've recently updated our Announcement on the future of Wytham Abbey to say that, since that announcement, we have decided to put some of the proceeds towards Effective Ventures' general costs.
Can you give a sense of what proportion? Should we expect 'some' to mean ≤10% or something more significant?
I've heard people express the idea that top of funnel community building is not worth the effort, as EA roles often get 100+ applicants.
I think this is misguided. Great applicants may get a job after only a few applications, while poor applicants may apply to many, many jobs without getting one. As a result, you should expect poor applicants to be disproportionately represented in the applicant pool, so the raw number of applicants isn't that informative (see the rough sketch below). This point is weakened by the fact that recruitment systems are imperfect, but as long as you believe recruitment systems have some ability to select people, I think this take holds.
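To make the selection effect concrete, here is a rough, purely illustrative simulation (mine, not from the original post; the 50/50 split and the per-application offer probabilities are made-up numbers). Even when great and poor candidates are equally common among job-seekers, the applicant pool for any given role ends up dominated by poor candidates, simply because they submit far more applications before they stop looking.

```python
import random

# Illustrative sketch with made-up numbers: each job-seeker keeps applying
# until they receive an offer; we then look at who makes up the pool of
# submitted applications.

random.seed(0)

NUM_SEEKERS = 1_000
STRONG_SHARE = 0.5                        # assumed: half of job-seekers are strong
P_OFFER = {"strong": 0.30, "weak": 0.03}  # assumed per-application offer chance

applications = []  # one entry per application submitted
for i in range(NUM_SEEKERS):
    kind = "strong" if i < NUM_SEEKERS * STRONG_SHARE else "weak"
    while True:
        applications.append(kind)
        if random.random() < P_OFFER[kind]:
            break  # hired, so this person stops applying

weak_share = applications.count("weak") / len(applications)
print(f"Weak candidates are {1 - STRONG_SHARE:.0%} of job-seekers "
      f"but {weak_share:.0%} of applications.")
```

With these assumed numbers, weak candidates end up submitting roughly 90% of applications despite being only half of job-seekers, so "100+ applicants per role" tells you little about how many strong candidates are actually out there.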
I'm really only making a claim about a specific argument, not whether or not top of funnel community building is a good idea on the margin.
H/T Amarins for nudging me to post this
Agreed – my favorite "acceptance rates aren't that meaningful" stat is that Walmart is much more selective than Harvard.
I strongly agree that the reasoning "top of funnel community building is not worth the effort, as EA roles often get 100+ applicants" is misguided. But I think the argument about many applicants being "poor applicants" because they get rejected more often is not that important compared to other reasons.
Here are 3 reasons that I think are much more relevant:
* There are many tens of thousands of jobs that are at least as promising as the median "EA role". And after all of those are filled by hyper-competent people, there will still be millions more FTEs needed just to end factory farming, easily preventable illness, global poverty, near-term x-risks, wild animal suffering, ... (And many people would need to earn to give to fund all of the above.)
* Reflecting on how to do altruism more effectively can help people in those roles have more impact (e.g. by learning about scope insensitivity, expected value, counterfactual reasoning, the Copenhagen interpretation of ethics, how to make and evaluate a theory of change, cost-effectiveness analyses, radical empathy, longtermism, ...).
Less relevant to your main point, but I strongly want to urge readers against the "poor applicants get rejected often" style of reasoning. I see it very often in this community and I think it's greatly overrated. Some relevant links and thoughts:
This is already super long, but I also want to quickly note that 7.3% of EA survey respondents are from one city, which seems to indicate that there might be lots of opportunities for top of funnel community building.
Other examples in this great comment of yours.
This feeds into Jonas’ argument in his recent Quick Take about focusing on talent development rather than community building: focusing on bringing in top potential applicants rather than on the number of people interested in EA jobs.