Ever since Gemini Advanced was updated to 3.1 Pro, it has been noticeably slow.
Until last year, I was a paying user of ChatGPT.
However, I switched to Gemini because I found the comprehensive package appealing, including integration with Workspace, Nano Banana, NotebookLM, and 2TB of Google Drive storage.
Part of the reason was that I didn't really like the "overly empathetic" vibe of ChatGPT, and since we started using Gemini in my day job, it also served as a learning opportunity.
During the Gemini 3 Pro era, it was very comfortable to use.
Creating a personal assistant Gem to handle my daily schedules and to-dos was incredibly convenient.
Occasionally it would tell silly lies like "I'm still learning and cannot create to-dos or Keep notes," but tasks that tend to get sloppy over time when a human handles them manually, such as organizing task categories and applying consistent tags, were much easier to keep in order through Gemini.
However, the moment Gemini 3.1 Pro was introduced, the performance visibly deteriorated.
I thought it was just a temporary issue following the model switch, but it's still consistently happening even now as I write this article.
Of course, I believe that temporary instability caused by an extensive model replacement should be tolerated. It's self-evident that the benefits of updating the model outweigh the drawbacks.
Still, as a paying user, it's simply sad to see no improvement after so much time has passed since the release.
Every time I have a conversation, the spinner keeps going even after the text generation has fully completed, preventing me from moving on to the next prompt. That broken rhythm is a deal-breaker.
It seems to be a globally reported issue, so I assume it will be fixed eventually, but I wonder when that will be.
Partly out of that frustration, I also started experimenting with conversations on Claude Sonnet 4.6.
While each AI model has its own character, talking with Claude in its default, vanilla state feels the most natural to me.
What stands out is the sense of security from its fact-based responses and its willingness to properly admit when it doesn't know something.
I realized that this attitude is crucial when trusting a tool as a partner in work.
While writing this article, it suddenly occurred to me that this is true in the real human world as well.
If you're not in good shape, say so; if you don't know something, honestly say you don't know.
Perhaps in the end, whether it's an AI or a person, being "honest" is what matters most.