To me, this is evidence that culturally, OpenAI is not operating with a "security mindset". In my experience this sort of thing is relatively uniform across a company's culture, so if user data is not being treated in a secure way, we might conclude that the AI development work itself is likewise not being treated with the thoughtfulness that engineering against threat actors requires.
I disagree completely. It seems like the kinds of things they could have done to not be subject to this bug would be, e.g., auditing or formally verifying all of their code, including third-party dependencies like redis-py, before releasing the website.
Since, to my eyes, every single software organization in the world that has ever produced a public website would have been basically equally likely to get hit by this bug, I totally disagree that it's useful evidence about OpenAI's culture, other than "I guess their culture is not run by superintelligent aliens who run at 100x human speed and proved all of their code correct before releasing the website." I agree, it's too bad that OpenAI is not that.
What is the thing that you thought they might have done differently, such that you are updating on them not having done that thing?
For reference, the bug: https://github.com/redis/redis-py/issues/2624
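For readers who haven't dug into the issue: here is a minimal, self-contained sketch of the bug class (the `FakeConnection` and all names are invented for illustration; this is not redis-py's actual code). A task cancelled after a command is sent but before its reply is read can leave that reply sitting on a pooled connection, so the next request on that connection reads the previous user's reply.

```python
import asyncio

# Simplified stand-in for a pooled Redis connection: replies arrive
# in order on a queue, just as they do on a real Redis socket.
class FakeConnection:
    def __init__(self):
        self.replies = asyncio.Queue()

    async def execute(self, payload):
        # The "server" echoes the payload back after a simulated round trip.
        asyncio.get_running_loop().call_later(
            0.05, self.replies.put_nowait, f"reply-to:{payload}")
        # If the caller is cancelled while awaiting here, the reply
        # stays queued on the connection -- that's the whole bug.
        return await self.replies.get()

async def main():
    conn = FakeConnection()  # imagine this lives in a shared connection pool

    # User A's request is cancelled mid-flight (e.g. the client disconnects).
    task_a = asyncio.create_task(conn.execute("user-A-session"))
    await asyncio.sleep(0.01)
    task_a.cancel()

    # The connection is reused un-drained; User B's read picks up A's reply.
    print(await conn.execute("user-B-session"))  # -> "reply-to:user-A-session"

asyncio.run(main())
```

As I understand the fix that eventually shipped in redis-py, the client now tears down the connection when a command is cancelled between send and read, instead of returning it to the pool with a pending reply.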
Is your claim that e.g. Google or American Express would have been as likely as OpenAI to suffer this issue? If so, I would definitely disagree. I would be extremely surprised to see this type of issue in e.g. Gmail, and if it did occur I think it would correctly be perceived as a massive scandal. Yet Google is almost certainly using Redis for important use cases.
Part of having a security mindset is assuming that system components can fail (or be made to fail) in surprising ways and making sure that the overall system is resilient to those failures.
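To make that concrete, here is a minimal sketch of one such resilience measure (entirely my own illustration; `tag_for_user`, `cache_put`, etc. are invented names, and this is not a claim about what OpenAI actually does): bind every cached value to its owner and re-check the binding at read time, so a transport- or pool-level mix-up degrades into a cache miss rather than a cross-user leak.

```python
import hmac
import secrets

# In production this would be a managed secret, not generated per-process.
SECRET = secrets.token_bytes(32)

def tag_for_user(user_id: str) -> str:
    # Deterministic per-user tag; can't be forged without SECRET.
    return hmac.new(SECRET, user_id.encode(), "sha256").hexdigest()

def cache_put(cache: dict, user_id: str, key: str, value: str) -> None:
    cache[key] = (tag_for_user(user_id), value)

def cache_get(cache: dict, user_id: str, key: str) -> str | None:
    entry = cache.get(key)
    if entry is None:
        return None
    tag, value = entry
    # If any layer below hands back another user's entry, the tag check
    # turns a silent cross-user leak into a cache miss (fail closed).
    if not hmac.compare_digest(tag, tag_for_user(user_id)):
        return None  # and ideally log/alert: this should never happen
    return value

cache: dict = {}
cache_put(cache, "alice", "session:123", "alice's chat titles")
assert cache_get(cache, "alice", "session:123") == "alice's chat titles"
assert cache_get(cache, "mallory", "session:123") is None  # mix-up caught
```

The point is not this particular scheme but the posture: the check assumes the layers below it will eventually misroute something, and makes that failure loud and safe instead of silent and leaky.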