Here's the link to the feature.
The article painted a rather unflattering picture of OpenAI:
But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.
It's important to react with an open mind to outside criticism of EA work, and especially to engage with its strongest points. Most of the responses posted here so far (including the links to tweets from other researchers) fail to do so.
Yes, the article's tone is more accusatory than its content justifies. But the two main criticisms are actually clear and fairly reasonable, particularly given that OpenAI (as per the article) acknowledges the importance of being respected in the greater machine learning community:
1) Whatever you think about the value of openness in AI research, if you call yourself OpenAI(!), people WILL expect you to be open about your work. Even though the Charter was changed to reflect the shift away from openness, most people will not be aware of this change.
2) I actually agree with the article that much of OpenAI's press output feels like exaggerated hype. While I personally agree with the decision not to release GPT-2 immediately, it was communicated with an air of "it's too dangerous and powerful to release". This was met with a strong negative reaction, which is not how you become a trusted authority on AI safety (see, e.g., https://www.reddit.com/r/MachineLearning/comments/aqovhz/discussion_should_i_release_my_mnist_model_or/).
Another instance that I personally thought was pretty egregious was the announcement of Microsoft's investment (https://openai.com/blog/microsoft/), which describes the partnership this way:
We're partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI.
Note that this sentence does not include "attempt" or "we hope will scale". It is hard to read it without coming away with the impression that OpenAI has a very high degree of confidence in its ability to build an AGI, and is promising as much to the world.
On (2), I would note that the 'hype' criticism is commonly made about a range of individual groups in AI. Criticisms of DeepMind's claims and of IBM's (the usefulness/impact of IBM Watson in health) come immediately to mind, as do claims by a range of groups regarding the deployment of self-driving cars. It's also a criticism made of the field as a whole (e.g., see various comments by Gary Marcus, Jack Stilgoe, and others). This does not necessarily mean it's untrue of OpenAI (or that OpenAI isn't among the 'hypier'), but I think it's worth noting that this criticism is not unique to OpenAI.