Many thanks to Constance Li, Rachel Mason, Ronen Bar, Sam Tucker-Davis, and Yip Fai Tse for providing valuable feedback. This post does not necessarily reflect the views of my employer.
Artificial General Intelligence (basically, ‘AI that is as good as, or better than, humans at most intellectual tasks’) seems increasingly likely to be developed in the next 5-10 years. As others have written, this has major implications for EA priorities, including animal advocacy, but it’s hard to know how this should shape our strategy. This post sets out a few starting points, and I’m really interested in hearing others’ ideas, even if they’re very uncertain and half-baked.
Is AGI coming in the next 5-10 years?
This is very well covered elsewhere, but in short, it looks increasingly likely, e.g.:
* The Metaculus and Manifold forecasting platforms predict we’ll see AGI in 2030 and 2031, respectively.
* The heads of Anthropic and OpenAI think we’ll see it by 2027 and 2035, respectively.
* A 2024 survey of AI researchers estimated a 50% chance of AGI by 2047, which is 13 years earlier than the prediction in the 2023 version of the survey.
* These predictions seem feasible given the explosive progress we’ve been seeing in the computing power available to models, algorithmic efficiency, and actual model performance (e.g., look at how far Large Language Models and AI image generators have come in just the last three years).
* Based on this, organisations (both new ones, like Forethought, and existing ones, like 80,000 Hours) are taking the prospect of near-term AGI increasingly seriously.
What could AGI mean for animals?
AGI’s implications for animals depend heavily on who controls the AGI models. For example:
* AGI might be controlled by a handful of AI companies and/or governments, either in alliance or in competition.
* For example, maybe two government-owned companies separately develop AGI then restrict others from developing it.
* These actors’ use of AGI might be dr