Here's the link to the feature.

The article paints a rather unflattering picture of OpenAI:

But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.


A few comments from x-risk/EA folks that I've seen (and agree with):

FHI's Markus Anderljung: https://twitter.com/Manderljung/status/1229863911249391618

CSER's Haydn Belfield: https://twitter.com/HaydnBelfield/status/1230119965178630149


To me, AI heavyweight, past AAAI president, and past critic of OpenAI Rao Kambhampati put it well: the article is written like a hit piece and has the tone of one, but without an actual hit (i.e., any revelation that actually justifies it):

https://twitter.com/rao2z/status/1229599668683673600

It's important to react with an open mind to outside criticism of EA work, and to especially engage with the strong points. Most of the responses posted here so far (including the links to tweets of other researchers) fail to do so.

Yes, the article's tone is far more accusatory than its content warrants. But the two main criticisms are actually clear and fairly reasonable, particularly given that OpenAI (as per the article) acknowledges the importance of being respected in the greater machine learning community:

1) Whatever you think about the value of openness in AI research, if you call yourself OpenAI(!), people WILL expect you to be open about your work. Even though the Charter was changed to reflect this, most people will not be aware of the change.

2) I actually agree with the article that much of OpenAI's press releases feel like exaggerated hype. While I personally agree with the decision not to immediately release GPT-2, it was communicated with the air of "it's too dangerous and powerful to release". This was met with a strong negative reaction, which is not how you become the trusted authority on AI safety (see here: https://www.reddit.com/r/MachineLearning/comments/aqovhz/discussion_should_i_release_my_mnist_model_or/).

Another instance I personally thought was pretty egregious was the announcement of Microsoft's investment (https://openai.com/blog/microsoft/):

We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI.

Note that this sentence does not include "attempt" or "we hope will scale". It is hard to read it without coming away with the impression that OpenAI has a very high degree of confidence in being able to build an AGI, and is promising as much to the world.

On (2), I would note that the 'hype' criticism is commonly made both about the claims of a range of individual groups in AI and about the field as a whole. Criticisms of DeepMind's claims and of IBM's (the usefulness/impact of IBM Watson in health) come immediately to mind, as do claims by a range of groups regarding the deployment of self-driving cars. The same criticism is levelled at the field as a whole (e.g., see various comments by Gary Marcus, Jack Stilgoe, etc.). This does not necessarily mean it's untrue of OpenAI (or that OpenAI is not one of the 'hypier' groups), but I think it's worth noting that this is not unique to OpenAI.

Seems like the writer set out to stab them in the back, didn't find any weak points, but gave it her best shot anyway. I'm not sure any response is necessary other than "don't trust Karen Hao in the future".

I feel like it's quite possible that the headline and tone were changed a bit by the editor; it's quite hard to tell with articles like this.

I wouldn't single out the author of this specific article. I think similar issues happen all the time; it's a very common risk of media exposure, and a reason to often be hesitant about it (though there are significant benefits as well).

Hmm, I agree that this might've happened, but I still think it is reasonable to hold both the author and the magazine with its editors accountable for hostile journalism like this.

I think these comments could look like an attack on the author. That may not be the intention, but I imagine many readers will take it that way.

Online discussions are really tricky. For every 1,000 reasonable people, there could be one who's not, and whose definition of "holding them accountable" is much more intense than the rest of ours.

In the case of journalists this is particularly bad, even from a purely self-interested standpoint; it would be quite bad for any of our communities to get them upset.

I also think that this is very standard stuff for journalists, so I really don't think the specific author here is particularly relevant to this difficulty.

I'm all for discussing the strengths and weaknesses of content, and for a broad understanding of how toxic the current media landscape can be. I'd just like to encourage us to stay very much on the civil side when discussing particular individuals.

Thanks. I agree that my comment would have been more helpful if stated less ambiguously; I also felt frustrated about the article while writing it (and still do). And I agree that we don't want to annoy such authors.

1) I interpreted your first comment as saying it would not be a good use of resources to be critical of the author. I think that publicly saying "I think this author wrote a very uncharitable and unproductive piece, and I would be especially careful with them going forward" is better than not doing so, because it will a) warn others and b) slightly change the incentives for journalists: there are costs to writing very uncharitable things, such as people being less willing to invite you or to give you information that might be reported on uncharitably.

2) Another thing I thought you were saying: authors have no influence on editors, so it's wasted effort to direct criticism towards them. I think authors can talk to their editors, and that their unhappiness with changes to their written work will be heard and will influence how it is published. But I'm not super confident in that; it might, for example, be common to lose your job for objecting to your editors' changes, with few other job opportunities available. On the other hand, there seem to be many authors and magazines that do report honestly and charitably, so it seems useful to at least know who does and does not tend to do that.

I think some of the cultural aspects are deeply worrying, although I'm open to some of the claims being exaggerated.

The employees work long hours and talk incessantly about their jobs through meals and social hours... more than others in the field, its employees treat AI research not as a job but as an identity.

Although I would also be excited if my work were making a difference, this is a red flag. It's been argued that encouraging people to become very emotionally invested in their work leads to burnout, which can hurt their long-term productivity. I think effective altruists are especially susceptible to this dynamic. There needs to be a special emphasis on work-life balance in this community.

I'm also confused about the documentary thing. What is that statement referring to? It makes the documentary sound like a gratuitous attempt to flex on DeepMind.
