I have a big project I'm working on in Cursor with Gemini.
Yesterday — for the second time in a row — I reached a point where Cursor+Gemini simply surrendered and said:
“I’m incompetent.”
The first time it happened, it was a surprise.
But yesterday? Totally unexpected.
The problem I’m solving is about the performance of one of the core Stored Procedures in my client’s LMS.
That SP is highly complex: it contains part of the core business logic, goes about four layers of subqueries deep, references views, and calls user-defined functions. That structure makes the query optimizer effectively blind, so it can't build a good execution plan.
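To make that failure mode concrete, here's a minimal sketch of the kind of structure I mean. All object names are hypothetical, not my client's schema; the point is the shape: a scalar UDF in a predicate runs row by row with a cost the optimizer can't see, and every extra layer of nesting degrades its cardinality estimates.

```sql
-- Hypothetical sketch of the anti-pattern, not the client's actual SP.
CREATE OR ALTER PROCEDURE dbo.usp_GetLearnerProgress
    @CourseId INT
AS
BEGIN
    SELECT outer_q.LearnerId, outer_q.Score
    FROM (
        SELECT v.LearnerId, v.Score          -- layer 2: subquery over a view
        FROM dbo.vw_LearnerScores AS v       -- the view itself nests more queries
        WHERE v.CourseId = @CourseId
          AND dbo.fn_IsActiveLearner(v.LearnerId) = 1  -- scalar UDF: opaque to the optimizer
    ) AS outer_q
    WHERE outer_q.Score > (
        SELECT AVG(s.Score)                  -- layer 3: another nested subquery
        FROM dbo.Scores AS s
        WHERE s.CourseId = @CourseId
    );
END;
```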
To help the process, I gave Cursor a basic MCP server so it could query the database in my local environment directly. That let it grab object definitions as needed and automate result comparison, since the new, optimized SP must return exactly the same dataset as the old one.
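The comparison itself can be as simple as capturing both outputs into temp tables and diffing them with EXCEPT in both directions. Procedure and column names below are placeholders:

```sql
-- Hypothetical harness: capture both result sets, then diff both ways.
CREATE TABLE #old_results (LearnerId INT, Score DECIMAL(5,2));
CREATE TABLE #new_results (LearnerId INT, Score DECIMAL(5,2));

INSERT INTO #old_results EXEC dbo.usp_GetLearnerProgress_Old @CourseId = 42;
INSERT INTO #new_results EXEC dbo.usp_GetLearnerProgress_New @CourseId = 42;

-- Both queries returning zero rows means the datasets match.
SELECT * FROM #old_results EXCEPT SELECT * FROM #new_results;  -- rows the new SP lost
SELECT * FROM #new_results EXCEPT SELECT * FROM #old_results;  -- rows the new SP invented
```

One caveat: EXCEPT deduplicates, so if row multiplicity matters, compare grouped counts per row instead.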
For this last try, I asked the model to go slow:
gather data, understand the logic, create a plan, and go step by step — asking for guidance along the way.
At the start, everything went well.
It pulled data from the DB, I gave clarifications on some of the complex rules, and we got a result.
It wasn’t as optimized as I wanted, so I pushed forward, asking for alternatives to reach the sub-second goal I have for this SP.
Then the crash moment happened.
As tends to happen with most of the models I've used in Cursor, it started to vibe-code on its own.
It got stuck on a casting issue, tried several times, and then suddenly got so “frustrated” that it said:
“I’m incompetent for this task.”
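I never saw exactly which cast it tripped on, but the classic T-SQL trap looks something like this (a made-up illustration, not the actual failure): an implicit or explicit conversion over dirty data throws mid-query, and TRY_CAST is the usual way to surface the offending rows instead of failing.

```sql
-- Hypothetical example of the kind of casting trap that stalls these sessions.
-- LegacyCourseRef is VARCHAR and holds a few non-numeric values.
SELECT e.EnrollmentId
FROM dbo.Enrollments AS e
WHERE CAST(e.LegacyCourseRef AS INT) = 42;   -- throws if any scanned row is non-numeric

-- TRY_CAST returns NULL instead of failing, so the bad rows can be found:
SELECT e.LegacyCourseRef
FROM dbo.Enrollments AS e
WHERE TRY_CAST(e.LegacyCourseRef AS INT) IS NULL
  AND e.LegacyCourseRef IS NOT NULL;
```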
AI models are tools.
They still lose context — and with that, they lose grounding.
When that happens, they start to rage-code. And sometimes, they give up.
Tools fail. Understanding doesn’t. That’s why knowing what’s under the hood still matters — especially when performance is critical.

