Peter Wildeford

Chief Advisory Executive @ IAPS
19,339 karma · Joined · Working (6-15 years) · Washington, DC, USA
www.twitter.com/peterwildeford

Bio

I'm a former data scientist with 5 years industry experience now working in Washington DC to bridge the gap between policy and emerging technology. AI is moving very quickly and we need to help the government keep up!

I work at IAPS, a think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI. Our mission is to identify and promote strategies that maximize the benefits of AI for society and develop thoughtful solutions to minimize its risks.

I'm also a professional forecaster with specializations in geopolitics and electoral forecasting.

Posts
83

Comments
1766

Topic contributions
1

Thanks for the comment, I think this is very astute.

~

Recently it seems like the community on the EA Forum has shifted a bit to favor animal welfare. Or maybe it's just that the AI safety people have migrated to other blogs and organizations.

I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.

I don't think all AI safety orgs are actually fully funded, since there are orgs that OP cannot fund for reasons other than cost-effectiveness (see Trevor's post and also OP's individual recommendations in AI), and OP cannot and should not fund 100% of every org: it's not sustainable for orgs to have just one mega-funder (see also what Abraham mentioned here). There is also room for contrarian donation takes like Michael Dickens's.

I basically endorse this post, as well as the use of the tools created by Rethink Priorities that collectively point to quite strong but not overwhelming confidence in the marginal value of farmed animal welfare.

I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.

I get a general vibe that in EA (and probably the world at large), being a "deep thinking researcher" type is way higher status than being an "operations/management/doer" type. Yet the latter is also very high-impact work, often higher impact than research (especially on the margin).

I see many EAs erroneously go into research and stick with it despite having very clear strengths on the operational side, insisting that they shouldn't do operations work unless they clearly fail at research first.

I've felt this personally: I started my career very oriented towards research, was honestly only average or even below average at it, and then switched into management, which I think has been much higher impact (and has likely counterfactually generated at least a dozen researchers).

I really appreciate these dates being announced in advance - it makes it much easier to plan!

I'm not sure I understand well enough what these questions are looking for to answer them.

Firstly, I don't think "the movement" is centralized enough to explicitly acknowledge things as a whole - that may be a misplaced expectation. I think some individual people and organizations have done some reflection (see here and here for prominent examples), though I agree that there likely should be more.

Secondly, it seems very wrong to me to say that EA has had no new ideas in the past two years. Back in 2022 the main answer to "how do we reduce AI risk?" was "I don't know, I guess we should urgently figure that out"; now there's been an explosion of analysis, threat modeling, and policy ideas - for example, Luke's 12 tentative ideas were basically all created within the past two years. On top of that, a lot of EAs were involved in developing Responsible Scaling Policies, which are now the predominant risk management framework for AI. And there's much more.

Unfortunately I can mainly speak to AI, as it is my current area of expertise, but there have been updates in other areas as well. For example, at Rethink Priorities alone, the welfare ranges, CRAFT, and CURVE projects were all completed within the past two years. Additionally, the Rethink Priorities model estimating the value of research influencing funders flew under the EA radar IMO but has actually led to very significant internal shifts in Rethink Priorities's thinking on which funders to work for and why.

I also think much of the current focus on lead has its genesis in 2021, but significant work pushing it forward happened in the 2022-2024 window.

As for new effective organizations, a bit of this depends on your opinions about what is "effective" and to what extent new organizations are "EA", but there are many new initiatives around, especially in the AI space.

Answer by Peter Wildeford

It's very difficult to overstate how much EA has changed over the past two years.

For context, two years ago was 2022 July 30. That was 17 days before the "What We Owe the Future" book launch, and about three months before the FTX fraud was discovered (though at that time it was already massively underway in secret) and the ensuing bankruptcy. We were still at the height of the Big Money Big Longtermism era.

It was also about eight months before the FLI Pause Letter, which I think coincided with roughly when the US and UK governments took very serious and intense interest in AI risk.

I think these two events were really key changes for the EA movement and led to a huge vibe shift. "Longtermism" feels very antiquated now and feels abandoned in the name of "holy crap we have to deal with AI risk occurring within the next ten years". Big Money is out, but we still have a lot of money, and it feels more responsible and somewhat more sustainable now. There are no longer regrantors running around everywhere, for better and for worse.

Many of the people previously working on longtermism have pivoted to "pandemics and AI", and many of the people previously working on pandemic risk have pivoted to "AI x bio intersections". WWOTF captures the mid-2024 vibe of EA far less well than Leopold's "Situational Awareness" does.

There also has been a massive pivot towards mainstream engagement. Many EAs have edited their LinkedIns to purge that two-word phrase and now barely and begrudgingly admit to being "EA-adjacent". These people now take meetings in DC and engage in the mainstream policy process (whereas previously "politics was the mindkiller"). Many AI policy orgs have popped up or become more prominent as a result. Even MIRI, which had just announced "Death with Dignity" only about three months prior to that date of 2022 July 30, has now given up on giving up and pivoted to policy work. DC is a much bigger EA hub than it was two years ago, but the people working in DC certainly wouldn't refer to it as that.

The vibe shift towards AI has also continued to cannibalize the rest of EA, for better and for worse. This trend was already in full swing in 2022 but became much more prominent over 2023-2024. There's a lot less money available for global health and animal welfare work than before, especially if you work on weirder stuff like shrimp. Shrimp welfare kinda peaked in 2022, and the past two years have unfortunately not been kind to shrimp.

I agree with all this advice. I also want to emphasize that I think researchers ought to spend more time talking to people relevant to their work.

Once you’ve identified your target audience, spend a bunch of time talking to them at the beginning, middle, and end of the project. At the beginning, learn and take into account their constraints; in the middle, refine your ideas; and at the end, actually try to get your research into action.

I think it’s not crazy to spend half of your time on the research project talking.

That’s fair - you’re right to make this distinction where I failed, and I’m sorry. I think I have a good point, but I got heated in describing it and strayed further from charitableness than I should have. I regret that.

Thanks Linch. I appreciate the chance to step back here. So I want to apologize to @Austin and @Rachel Weinberg and @Saul Munn if I stressed them out with my comments. (Tagging means they'll see it, right?)

I want to be very clear: while I disagree with some of the choices made, I have absolutely no ill will towards them or any other Manifest organizer, I very much want Manifold and Manifest to succeed, and I very much respect their right to run their conference the way they want. If I see any of them I will be very warm and friendly, and there's really no need for me to talk about this further if they don't want to. I hope we can be friends and engage productively in other areas - even if I don't attend Manifest or trade on Manifold, I'd be happy to interact with them in ways that don't involve Hanania.

While I dislike Hanania's ideas greatly, and I still think inviting Hanania was a mistake, and I still will not attend events or participate in places where Hanania is given a platform... I don't want to practice guilt by association against those who do not hold Hanania's detestable ideas. Just because someone interacted with him does not make them a bad person too. I apologize for not being clear about this from the beginning, and I regret that I may have led people to think otherwise.
