ChatGPT’s history bug may also have exposed payment details, says OpenAI

OpenAI has shared new details about why it took ChatGPT offline on Monday, and it now says some users’ payment details may have been exposed during the incident.

According to a message from the company, a bug in an open source library called redis-py caused a caching issue that may have let some active users see the last four digits and expiration date of another user’s credit card, along with their first and last name, email address, and payment address. Users may also have seen snippets of other people’s chat history.

This isn’t the first time caching issues have caused users to see each other’s data; in past incidents at other companies, users were shown pages containing information from other users’ accounts. There’s some irony in the fact that OpenAI puts a lot of thought and research into the potential security and safety ramifications of its AI, only to be tripped up by a very well-known class of security bug.

The company says the payment information leak may have affected about 1.2 percent of ChatGPT Plus subscribers who used the service on March 20 between 4 a.m. and 1 p.m. ET.

You were only affected if you were using the app during the incident.

According to OpenAI, there were two scenarios that could have exposed payment details to an unauthorized user. If a user went to the My Account > Manage Subscription screen during that time frame, they may have seen information from another ChatGPT Plus user who was actively using the service at the time. The company also says that some subscription confirmation emails sent during the incident went to the wrong person and contained the last four digits of a user’s credit card number.

The company says it’s possible both scenarios also occurred before March 20, but it has no confirmation that they did. OpenAI has contacted users whose payment information may have been exposed.

As for how this all happened, it apparently came down to caching. The company has a full technical explanation in its post, but the TL;DR is that it uses a piece of software called Redis to cache user information. Under certain circumstances, a canceled Redis request could result in corrupted data being returned for a different request, which shouldn’t have happened. Usually, the app fetching that data would say “this isn’t what I asked for” and throw an error.

But if the other request happened to be for the same type of data – say, a user loading their account page when the mixed-up response was someone else’s account data – the app decided that everything was fine and showed it to them.

That’s why people saw other users’ payment details and chat history: they were served cached data meant for someone else, returned because of a canceled request. It’s also why only active users were affected; people who weren’t using the app wouldn’t have had their data cached.

What really made things bad was that on the morning of March 20, OpenAI made a change to its server that inadvertently caused a spike in canceled Redis requests, which increased the chances of the bug returning someone else’s cached data to a user.

OpenAI says the bug, which appeared in a very specific version of redis-py, has now been fixed and that the people working on the project have been “great collaborators.” It also says it is making some changes to its own software and practices to prevent things like this from happening again, including adding “redundant checks” to ensure that the data being served actually belongs to the user requesting it and reducing the chance of its Redis cluster spitting out errors under high load.

While I’d say those checks should have been there in the first place, it’s a good thing OpenAI has added them now. Open source software is essential to the modern web, but it also brings its own challenges: because anyone can use it, bugs can affect many services and companies at once. And if a malicious actor knows what software a specific company is using, they may be able to target that software to try and knowingly introduce an exploit. There are controls that make this more difficult, but as companies like Google have shown, it’s best both to work at preventing these incidents and to be prepared to respond when they happen anyway.