
I'm pretty ignorant about AI risk and honestly tech stuff in general, but I'm trying to learn more. I think AI risk is like the #2 or #3 most important thing, but my naive reaction to the EA community's view in particular was (and sorta still is): if it's so bad, why don't they stop? When EA people make a pitch for the importance and urgency of AI risk, they point at AlphaGo, GPT-3, and DALL-E, which are huge advances made possible by OpenAI and DeepMind. Yet 80k and EAG (through the job fairs) actively recruit for non-safety roles at OpenAI and DeepMind, and there are lots of EAs who have worked at them; if anything, they're looked upon more favorably for doing so. When I asked my AI-risk EA friends, who I basically 99% defer to on AI stuff, why we should be so cozy with people trying to do the thing we're saying might be the worst thing ever, they explained that other, less safety-conscious AI groups are not far behind. Meta, Microsoft, and "AI groups in China" generally are the ones I've heard referred to at least three times each. (Though I don't really get the Microsoft example after hearing about their partnership with OpenAI.)

The if-we-don't-someone-will argument doesn't sit very well with me, but I get it. Meta just released a chatbot called BlenderBot, though, which, even though it's obviously a different type of endeavor from something like GPT-3, very obviously sucks. Honestly, it's not a category difference from the AIM chatbots I remember growing up with. If someone tried to sell me on impending existential AI risk using this chatbot, I would not be on board. I assume Meta is announcing BlenderBot because it's a positive example of Meta's progress in AI, though. Is that a fair assumption? If not, should this cause me to negatively update on Meta's AI capabilities, and by how much? And by how much should it cause me to negatively update on the if-we-don't-someone-will argument, both vis-à-vis Meta and in general?

Earnest thanks for any replies.

Answers

if-we-don't-someone-will

They (Meta) literally did do it. They open-sourced a GPT-3 clone called OPT. Its 175B-parameter version is the most powerful LM whose weights are publicly available. I have no idea why they released a system as bad as BlenderBot, but don't let their worst projects distort your impression of their best projects. They're 6 months behind DeepMind, not 6 years.

GPT-3 was released in June 2020. Meta didn't release OPT until May 2022. They did this after open-source replications by EleutherAI and others, and after more impressive language models had been released by DeepMind (Gopher, Chinchilla) and Google (PaLM). According to Meta's own evaluation in Figure 4 of the OPT paper, their model still fails to perform as well as GPT-3.

Meta also recently lost many of their top AI scientists [1]. They disbanded FAIR, their dedicated AI research group, and instead have put all ML and AI researchers on product-focused teams.

Not quite a direct answer to your question, but it's worth noting: not everyone in EA takes that view of AI capabilities work. I, for one, believe that working on AI capabilities, especially at a top lab like OpenAI or DeepMind, is a terrible idea and should be front and center on our "List of unethical careers". Working in safety positions at those labs is still highly useful and impactful, imo.

relevant tweet I saw recently: https://twitter.com/scholl_adam/status/1556989092784615424

Comments

I'm quite confused about that too. I don't know of any real statistics, but my informal impression is that almost everyone is on board with not speeding up capabilities work. There's a vague argument floating around that actively impeding capabilities work would do nothing but burn bridges (which doesn't seem right in full generality, since animal rights groups manage to influence whole production chains to switch to more humane methods that then form a new market equilibrium). But the pitches for AI safety work always stress the ways in which the groups will be careful not to work on anything that might differentially benefit capabilities, and will keep everything secret by default unless they're very sure it won't enhance capabilities. So I think my intuition that this is the dominant view is probably not far off the mark.

But the recruiting for non-safety roles is (seemingly) in complete contradiction to that. That's what I'm completely confused about. Maybe the idea is that the organizations can be pushed in safer directions if more safety-conscious people work at them, so it's good to recruit EAs into them, since they're more likely to be safety-conscious than random ML people. (But the EAs you'd want to recruit for that are probably not the usual ML EAs, but rather ML EAs who are also really good at office politics.) Or maybe these groups are actually very safety-conscious, are years ahead of everyone else, and are only gradually releasing stuff they completed years ago to keep the investors happy, while keeping all the really dangerous stuff completely secret.

Yet 80k and EAG (through the job fairs) actively recruit for non-safety roles at OpenAI and DeepMind, and there are lots of EAs who have worked at them; if anything, they're looked upon more favorably for doing so.


I think this is no longer true (at least for the 80k jobs board), as of a couple of months(?) ago. The OpenAI roles are all security/abuse-focused, and the DeepMind roles are all alignment/security-focused.

I can't speak to what teams were recruiting at the most recent EAG but would be curious to hear from someone who has that info.

At the last two EAGs I manned the DeepMind stall; we were promoting alignment roles (though I would answer questions about DeepMind more broadly, including questions about capabilities roles, if people asked them).
