When an AI support bot at code-editor startup Cursor fabricated a usage policy, its users pushed back.
On Monday, a developer using Cursor’s AI-powered code editor noticed something odd: switching between a desktop, a laptop, and a remote machine logged them out of their session each time, breaking a common workflow for programmers who move between devices while working. When the developer contacted Cursor support, an agent named Sam told them the logouts were expected behavior under a new policy. But no such policy existed: Sam was an AI bot, and it had made the policy up. The episode set off a wave of complaints and cancellation announcements on Hacker News and Reddit, some accompanied by detailed error logs, and drew wider attention across developer forums.
This case adds to a growing list of AI-generated fabrications causing real harm. Known as hallucinations, these are instances where a system fills gaps with made-up information that sounds credible. Instead of signaling uncertainty, many models respond with confidence, even if details are false.
Companies that put these tools in front of customers without human reviewers may face immediate fallout: upset clients, shattered trust, and lost subscriptions.
Cursor cofounder Michael Truell posted an apology on Hacker News, acknowledging that the policy did not exist and noting that the affected customer had already been refunded. Truell explained that a backend change intended to improve session security had unintentionally caused the involuntary sign-outs for some users, attributing the problem to an oversight in the rollout. “Any AI responses used for email support are now clearly labeled as such,” he added. “We use AI-assisted responses as the first filter for email support.”
The issue surfaced when a Reddit member named BrokenToasterOven shared that launching Cursor on one device—whether a desktop, laptop, or remote dev box—invalidated the session on any other. The post was later deleted by r/cursor moderators.
As the user put it: “Logging into Cursor on one machine immediately invalidates the session on any other machine.” That remark drew dozens of replies.
They contacted support by email and received a reply from Sam stating, “Cursor is designed to work with one device per subscription as a core security feature.” The response arrived quickly and read as though it had been written by a human. Users pointed out that no documentation mentioned any such restriction.
The statement sounded authoritative, and the original poster and others on the subreddit took it as a genuine policy change, one that would break workflows central to many developers’ daily routines. One commenter noted, “Multi-device workflows are table stakes for devs.”
Soon after, several subscribers announced they had canceled, citing the fictitious policy as the reason. Additional threads on Stack Overflow and GitHub raised similar alarms.
The original poster wrote, “I literally just cancelled my sub,” and said their team was “purging it completely.” Several others shared copies of their cancellation emails.
Another user wrote, “Yep, I’m canceling as well, this is asinine.” Shortly afterward, moderators locked the thread and removed the original post; some commenters suggested switching back to other editors.
About three hours after the original post, a Cursor representative replied on Reddit: “Hey! We have no such policy.” The representative continued, “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”
This episode echoes an Air Canada case that concluded in February 2024, in which the airline’s chatbot invented a bereavement refund policy.
After his grandmother died, Jake Moffatt was told by Air Canada’s chatbot that he could buy a full-price ticket and then apply for bereavement rates retroactively.
When Air Canada later denied his claim, it argued the chatbot was a distinct legal entity that should take responsibility. A Canadian tribunal rejected this, ruling that companies are liable for statements their AI systems make.
Rather than deflecting responsibility, Cursor acknowledged the mistake and fixed the backend change that had caused the errant logouts; support responses now get a human second check.
The mix-up underscores the hazards of letting AI systems field customer queries without disclosure or human oversight: people who messaged Sam had no indication they were corresponding with a bot rather than a person.
On Hacker News, one commenter wrote, “LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive.”
Even with the bug fixed, the episode is a cautionary tale about deploying AI in user-facing roles without guardrails, and it carries a particular sting: for a company selling AI tools to developers, having its own support bot invent a policy that alienated core users was an awkward self-inflicted wound.
One user on Hacker News captured the irony: “There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore, and then a company that would benefit from that narrative gets directly hurt by it.”

