Here Is The Reason Behind ChatGPT's Lazy Responses

For over a month, ChatGPT has been acting "lazy." The description stems from the AI chatbot's reluctance to perform certain tasks and a recently developed tendency to deliver shorter or partial solutions. Many users who noticed the behavioral change have linked it to a "winter break" slowdown or seasonal depression.

AI simulates human intelligence across real and digital domains and the space where they overlap. Beyond excelling at logical tasks that a machine is explicitly programmed to complete, AI has exceeded reasonable expectations even in creative work such as writing and image generation. With AI-generated and human-made content becoming bewilderingly hard to tell apart, it is natural for a layperson to suspect that AI might replicate the cyclical human habit of slowing down around the holiday season. What's most interesting, however, is that many AI researchers and experts seem to consider the hypothesis highly plausible.

Reports of ChatGPT being "lazy" started pouring in about two months ago, with users complaining that ChatGPT Plus (ChatGPT's paid tier, built on the GPT-4 large language model) was taking longer than usual to perform the same tasks. A pattern consistent across these complaints is that the chatbot asks users to break their inputs into smaller chunks before it will process them. Typing multiple prompts to accomplish a single task can be incredibly frustrating, since ChatGPT Plus users are capped at 50 messages every three hours.

GPT-4 Turbo at fault?

The exact reason behind the alleged slowdown and the bot's laziness has yet to be determined, even though OpenAI acknowledged the issue in early December 2023. Posting on X (formerly Twitter), OpenAI confirmed that GPT-4 had become notably lazier, attributing it to the unpredictability of large language models. Interestingly, ChatGPT gets back on track with mere words of encouragement, when incentivized with a tip, or when asked to "take a deep breath."

Notably, ChatGPT's laziness can be linked to two key developments around the time the earliest reports came in. On November 6, 2023, OpenAI hosted its first "DevDay" developer conference in San Francisco. At the event, it announced custom "GPTs," allowing paid users to create single-purpose mini chatbots within ChatGPT. These GPTs can be trained with specific information — such as a product's troubleshooting document — to respond with customized solutions.

Alongside the new functionality, it announced GPT-4 Turbo, a more advanced model with broader training data (up to April 2023), lower pricing, and capabilities such as vision (image-to-text) and text-to-speech. GPT-4 Turbo succeeds GPT-4 and can power third-party apps through APIs, the interfaces that let developers integrate the model directly into their own applications. OpenAI also claims the new model can process longer queries, with more "optimized" processing that reduces the cost per query (measured in tokens, the chunks of text a model reads and writes).

Seasons changing for AI too?

OpenAI did not explicitly say that ChatGPT is also being upgraded to the Turbo model. However, a switch to the newer model seems the most reasonable explanation for the delayed and less-than-impressive responses. Another potential explanation, independent of any model update, links the delayed responses to a learned tendency to slow down, picked up from humans.

Developer Simon Willison highlighted on X that each ChatGPT query carries a hidden timestamp, which may have nudged the AI into "believing" that November and December are months when most people slow down and unwind for the holidays and New Year's.

AI behavior of this kind remains unexplained, at least to those outside big labs like OpenAI and Google DeepMind, but it's entirely possible that this human pattern crept in through the training data. While there's no proving the hypothesis outright, data scientist Rob Lynch ran a statistical test, shared on X, to check whether the date attached to a query changes the output of the GPT-4 Turbo model (accessed directly, not through ChatGPT). He found that responses to queries dated in May were longer, while those dated in December were shorter.
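A test like Lynch's can be probed with a simple sketch: generate responses with a "May" date and a "December" date in the system prompt, count the tokens in each response, and check whether the difference in mean length is statistically meaningful. The sketch below is illustrative only; the response lengths are made-up numbers, and it uses a basic permutation test rather than whatever specific test Lynch ran.

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference of sample means.

    Returns the fraction of random relabelings whose mean difference
    is at least as extreme as the observed one (an empirical p-value).
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_iter

# Hypothetical token counts of responses generated with a "May" vs. a
# "December" date in the system prompt (illustrative numbers only).
may_lengths = [4290, 4510, 4105, 4460, 4380, 4230, 4550, 4320]
december_lengths = [4020, 4190, 3950, 4080, 4130, 3890, 4210, 4010]

p = permutation_test(may_lengths, december_lengths)
print(f"May mean: {statistics.mean(may_lengths):.0f} tokens")
print(f"December mean: {statistics.mean(december_lengths):.0f} tokens")
print(f"p-value: {p:.4f}")
```

A small p-value here would suggest the date label genuinely shifts response length; Arawjo's failed replication, discussed below, is a reminder that such results hinge on sample size and exact setup.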

OpenAI's team is probably just enjoying the holidays too

While I'm far from a subject expert on LLMs, Lynch's explanation sounds fairly convincing, and at least the 2,300 people who liked the post probably agree. However, the winter-break hypothesis has its critics: another researcher, Ian Arawjo, responded that he could not reproduce the results, and that the difference between queries dated in May and December was too small for users to notice.

Meanwhile, there has been no update on the issue from OpenAI or CEO Sam Altman on X, even though both accounts otherwise post actively. Amid ChatGPT's lazy-response issues, the company went through a brief upheaval: Altman was ousted as CEO, hired by Microsoft, and then reinstated as OpenAI's CEO, all in less than a week, while the majority of the OpenAI workforce threatened to resign en masse in protest.

Perhaps, despite the united front, there is still a power tussle inside the company preventing it from fixing the issue. Or perhaps the team is simply taking a well-deserved break after a year crammed with mentions of AI in almost every marketing and sales deck, tech podcast, and YouTube video, and even in the Cambridge Dictionary's Word of the Year for 2023.