Artificial intelligence (AI) tool ChatGPT went offline on March 20 after a bug in its web experience accidentally revealed information about users.
The bug exposed users’ conversation history to others: people who logged into their ChatGPT accounts found the sidebar of previous conversations displaying someone else’s chat history. Fortunately, only the conversation titles were visible; the prompts themselves remained hidden.
The outage lasted more than 10 hours after the problem surfaced around 9:41 PDT on March 20. As of press time, the ChatGPT status page showed that it was experiencing a 5-10% error rate on text-davinci-003 and gpt-3.5-turbo and an approximate doubling of latency.
User details exposed
To rectify the issue, OpenAI shut ChatGPT down for several hours. The company also removed the conversation history feature from the tool, and the feature remains unavailable. A spokesperson told Bloomberg the company is “working to bring chat history back online as well.”
OpenAI has confirmed that a bug in an open-source library it uses was responsible. The company also noted that the substance of other users’ conversations was not made public.
While this was a significant issue for a tool that has acquired over 100 million users, it was not the only problem caused by the bug. Some users reported that the ChatGPT website exposed other users’ email addresses.
People trying to subscribe to ChatGPT Plus encountered this issue while completing the payment form. One user reported seeing four email addresses that were not his own.
ChatGPT Plus is the paid version of the free AI tool and costs $20 per month. The upgraded version guarantees access to the tool at all times, including peak periods, along with faster response times and priority access to new features.
Other people who tried subscribing to the premium version reported the same issue. Many posted about it on social media, indicating the problem was widespread. OpenAI has since paused new ChatGPT Plus subscriptions.
What does this mean for the AI?
The data exposure could discourage some users from subscribing to ChatGPT Plus. At least one user stated that he would not become a paid customer for fear of his email being leaked.
Additionally, the event highlights the risks of using ChatGPT, especially for those tempted to treat it as a confidant capable of keeping secrets. ChatGPT’s FAQ warns against sharing sensitive information in conversations.
This is mainly because the OpenAI team may review conversations and use them to train the system. But the possibility of a data breach is another valid reason for users to be concerned.
This article is originally from MetaNews.