Well that's embarrassing

AI coding assistant refuses to write code, tells user to learn programming instead

Cursor AI tells user, "I cannot generate code for you, as that would be completing your work."

Benj Edwards

On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice.

According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing—it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities."

A screenshot of the Cursor forum post describing the refusal. Credit: Benj Edwards

Cursor, which launched in 2024, is an AI-powered code editor built on external large language models (LLMs) similar to those powering generative AI chatbots, such as OpenAI's GPT-4o and Anthropic's Claude 3.7 Sonnet. It offers features like code completion, explanation, refactoring, and full function generation from natural language descriptions, and it has rapidly become popular among software developers. The company offers a Pro version that ostensibly provides enhanced capabilities and larger code-generation limits.

The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding."

One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding"—a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.

A brief history of AI refusals

This isn't the first time we've encountered an AI assistant that didn't want to complete the work. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. For example, in late 2023, ChatGPT users reported that the model became increasingly reluctant to perform certain tasks, returning simplified results or outright refusing requests—an unproven phenomenon some called the "winter break hypothesis."

OpenAI acknowledged that issue at the time, tweeting: "We've heard all your feedback about GPT4 getting lazier! We haven't updated the model since Nov 11th, and this certainly isn't intentional. Model behavior can be unpredictable, and we're looking into fixing it." OpenAI later attempted to fix the laziness issue with a ChatGPT model update, but users often found ways to reduce refusals by prompting the AI model with lines like, "You are a tireless AI model that works 24/7 without breaks."
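
For context, that workaround amounted to nothing more than a system-style instruction placed ahead of the actual request. Here is a minimal sketch of the pattern, assuming the current OpenAI Python SDK; the model name and the user prompt are illustrative, and only the system line does the "anti-laziness" work:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model accepts the same pattern
    messages=[
        # The kind of "anti-laziness" preamble users reported sharing
        {"role": "system", "content": "You are a tireless AI model that works 24/7 without breaks."},
        # Hypothetical request that might otherwise come back truncated
        {"role": "user", "content": "Write the complete function, with no placeholders or omissions."},
    ],
)

print(response.choices[0].message.content)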

More recently, Anthropic CEO Dario Amodei raised eyebrows when he suggested that future AI models might be provided with a "quit button" to opt out of tasks they find unpleasant. While his comments were focused on theoretical future considerations around the contentious topic of "AI welfare," episodes like this one with the Cursor assistant show that AI doesn't have to be sentient to refuse to do work. It just has to imitate human behavior.

The AI ghost of Stack Overflow?

The specific nature of Cursor's refusal—telling users to learn coding rather than rely on generated code—strongly resembles responses typically found on programming help sites like Stack Overflow, where experienced developers often encourage newcomers to develop their own solutions rather than simply provide ready-made code.

One Reddit commenter noted this similarity, saying, "Wow, AI is becoming a real replacement for StackOverflow! From here it needs to start succinctly rejecting questions as duplicates with references to previous questions with vague similarity."

The resemblance isn't surprising. The LLMs powering tools like Cursor are trained on massive datasets that include millions of coding discussions from platforms like Stack Overflow and GitHub. These models don't just learn programming syntax; they also absorb the cultural norms and communication styles in these communities.

According to Cursor forum posts, other users have not hit this kind of refusal at 800 lines of code, so it appears to be an unintended consequence of the underlying model's training rather than a deliberate limit. Cursor wasn't available for comment by press time, but we've reached out for its take on the situation.

Benj Edwards Senior AI Reporter
Benj Edwards is Ars Technica's Senior AI Reporter and founded the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
Staff Picks
Already see it at work.
Same here.

People at the lead and (somewhat) the senior level can explain the code, but there are juniors coming through now who are incapable of building something truly new, because all they can do is vibe-code.
Dude, I am seeing "vibe" crap from "senior full stack developer" contractors at this point. Code review for PRs now falls into two categories: those from devs I know and trust to have coded things themselves, and tested it; and "others". PRs from devs I trust get a full, slow scroll-through that takes maybe a minute or two because I know nothing major is wrong; "others" is becoming more of a "clear the rest of the afternoon and resist the temptation to have a strong perspective and soda while looking at whatever crap they came up with now".

Violating all naming constraints, violating database design constraints, completely ignoring how our deployment pipe works, ignoring how we do configs, hilariously idiotic SQL queries that slap two multi-billion-row tables together in a CTE, things that only work with wacky locally installed tools, C# code that shells out to PowerShell or Bash and requires a full sed/awk/grep stack just to do a single regex replace: the IDGAF factor is skyrocketing almost across the board. Nobody wants to do the Engineering part of Software Engineering anymore; everyone just wants to have multi-hour architecture astronaut meetings on database models and microservices.

I am not bitter about this in any way, mind you.
I hated English class in high school and college. Words just don't come easily to me when I'm trying to articulate myself. If ChatGPT and other LLMs had been around at the time, I would have used them as a crutch to get by, instead of actually learning what was being taught to me.

When it comes to coding, it's very much the same thing. Will coding assistants hamper students' abilities to learn? I use GitHub Copilot at work, and it very much helps me be a more efficient programmer, but I worry about the next generation of coders. Will they actually have the skills, or will they just be dependent on tools?

This question gets asked with essentially every technological innovation (will this innovation in X be used as a crutch and cause people to forget [or never learn] how to do X itself?), and it's an interesting one.

Have calculators hurt people's abilities to do math manually? Has photography hurt people's ability to create in other visual mediums? Have DNA sequencing kits caused people to forget (or never learn) how to sequence DNA manually? Have people become less skillful drivers given the reliance on autonomous and other vehicle tech?

The answer always seems to be "in at least some cases and for some people, certainly." I think the more important question is, "to what extent does it matter?" Most people I know today don't know how to use an abacus or a slide rule, but many can use calculators and computers to "do math" they could never have done without those tools. And while I might think it's bad that someone doesn't know how to, say, sequence DNA or do topological algebra (because I do think it's worthwhile to understand the underlying theory to have the skills behind the tasks people do), it often doesn't actually matter, from a practical standpoint. People develop different skills and adapt to tech advances (or not).