Why Did I Cancel Claude?

January 1, 2025

Claude used to be my go-to AI for everything: brainstorming, content generation, and—most importantly—data analysis. It was fast, reliable, and delivered impressive results. But something has changed. Over the past few months, I’ve noticed a steep decline in its performance. Tasks it used to handle effortlessly now trigger refusals, moral lectures, or vague avoidance.

As someone who pays for AI tools to do a job, this shift has been frustrating enough for me to cancel my subscription. Here’s a breakdown of why I gave up on Claude—and why I think Anthropic needs to pay attention.

Is Claude Still Capable of Proper Data Analysis?

The short answer: not really, not anymore.

One of Claude’s core functions—and something I relied on heavily—was its ability to analyze data. Whether it was financial trends, ESG analysis, or basic numerical interpretations, Claude used to excel.

But recently, I’ve run into roadblocks. Simple requests like:

  • “Can you analyze this dataset for trends?”
  • “What conclusions can you draw from these numbers?”

were met with hedging, refusals, or outright shutdowns. Claude would either moralize, claiming it couldn’t comment for ethical reasons, or refuse to engage at all.

Here’s an example: I asked Claude to analyze market entries for defined-risk options strategies like iron condors and call credit spreads. This isn’t stock-picking—it’s just data-driven analysis of trades I was already structuring myself. Yet Claude stumbled through my request, clearly afraid of overstepping an imaginary ethical line.
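To show how mundane that request actually was, here is roughly the kind of arithmetic I wanted Claude to walk through. This is a minimal Python sketch, not my real workflow: the strikes, the premium, and the synthetic price series are placeholder assumptions standing in for the historical data I supplied in my prompts.

```python
# A rough sketch of the kind of "market entry" analysis I was asking for.
# Ticker-free on purpose: the strikes, credit, and price series below are
# made-up placeholders. Everything here is plain arithmetic over past closes.
import numpy as np
import pandas as pd

# Hypothetical daily closes (in practice this came from my own CSV export).
rng = np.random.default_rng(42)
closes = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.012, 500))))

# Example call credit spread: sell the 110 call, buy the 115 call for a 1.40 credit.
short_strike, long_strike, credit = 110.0, 115.0, 1.40
width = long_strike - short_strike
max_loss = width - credit           # worst case per share if the spread finishes fully in the money
breakeven = short_strike + credit   # underlying price where P/L crosses zero at expiry

# How often did the underlying close above the short strike 30 trading days later?
horizon = 30
future = closes.shift(-horizon).dropna()
breach_rate = (future > short_strike).mean()

print(f"Credit: {credit:.2f}  Max loss: {max_loss:.2f}  Breakeven: {breakeven:.2f}")
print(f"Historical {horizon}-day breach rate of the short strike: {breach_rate:.1%}")
```

Nothing in that sketch predicts anything. It just summarizes what already happened in data I provided, which is exactly the kind of summary Claude kept refusing to give.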

The irony? Claude is built for this kind of work. Data analysis isn’t just an “extra feature”—it’s part of the AI's foundation. If it can’t do this anymore, what’s the point of paying for it?

Is the Problem With My Prompts?

Some users have suggested I’m the problem—that I’m using “bad prompts.” I’ve heard advice like:

“Simple prompts are more likely to be refused. Just ask more specific or complex questions.”

And you know what? I tried that.

Instead of asking, “What are the best stock plays this week?”—a request that Claude would rightly refuse—I carefully worded my prompts to focus solely on analysis. I provided the data myself and avoided any mention of “picking stocks.”

But Claude still refused. It felt like I was trying to outwit the AI rather than collaborate with it. Isn’t the whole point of paying for a premium AI tool that it helps you solve problems without jumping through hoops?

Why Is Claude So Afraid of Overstepping Boundaries?

This is the heart of the issue: Claude seems to be overcorrecting when it comes to “ethical safeguards.”

I understand the need for AI to be responsible. No one wants a tool that spreads misinformation, enables bad behavior, or makes unethical recommendations. Safeguards are important.

But when those safeguards start interfering with normal, harmless tasks, it becomes a problem. I wasn’t asking Claude to predict the stock market. I wasn’t pushing it into ethically gray territory. I was asking for basic trend analysis and data interpretation—something it was designed to do.

I’m not alone, either. Other users have shared similar experiences, reporting that Claude:

  • Refused to engage with harmless projects.
  • Provided moral lectures instead of answers.
  • Shut down simple requests that had no real ethical implications.

While some people claim they’ve “never hit a wall” with Claude, this inconsistency is part of the problem. Why does the AI work seamlessly for some users but refuse similar tasks for others?

Is It Fair to Compare Claude to GPT?

Yes, and here’s why: functionality matters.

OpenAI’s GPT models have historically faced similar criticism for being overly cautious. But OpenAI listened to its users. Over time, GPT has become more balanced. It still prioritizes safety but does so without sacrificing core functionality.

Claude, on the other hand, seems to be moving in the opposite direction. It feels like Anthropic has doubled down on hypersensitivity, even as competitors strike a better balance between safety and utility.

If GPT can analyze data responsibly without moralizing or shutting down, why can’t Claude?

What’s the Bigger Issue Here?

The bigger issue isn’t just about data analysis—it’s about trust.

When I pay for a tool like Claude, I expect it to:

  • Do what it was designed to do.
  • Work consistently across tasks.
  • Help me solve problems, not create more.

If Anthropic’s safeguards are so aggressive that they interfere with functionality, paying users are going to leave. We’re not paying for moral lectures or imaginary ethical dilemmas—we’re paying for a tool that works.

At this point, I feel like Claude has become a shell of its former self.

What Can Anthropic Do to Fix This?

Here’s what I think Anthropic needs to focus on:

  1. Revisit the safeguards: Find a better balance between responsibility and functionality. Overly cautious restrictions are driving away paying users.
  2. Be transparent about limits: If certain tasks are off-limits (e.g., financial recommendations), clearly communicate where the line is. Users shouldn’t have to guess why Claude refuses a task.
  3. Improve consistency: Why can some users analyze data without issues while others hit roadblocks? Anthropic needs to ensure Claude works predictably for all paying customers.
  4. Listen to the community: User feedback matters. If people are reporting that Claude is “afraid of its own shadow,” that’s a sign something needs to change.

Should You Still Use Claude?

If you’re using Claude for brainstorming or light content generation, you might still find value in it. But if you rely on AI for tasks like data analysis, trend identification, or contextual problem-solving, be prepared for roadblocks.

For me, those roadblocks became too frustrating to justify the subscription. I canceled Claude because I need an AI that works consistently without moralizing or refusing harmless tasks.

Final Thoughts: Is Claude Losing Its Audience?

If Anthropic doesn’t address this growing issue, Claude risks losing its core audience. Paying users don’t want an AI that freezes up over simple analysis or avoids tasks that have zero ethical implications.

The AI world is competitive. Tools like GPT are already stepping up, offering better functionality without compromising responsibility. If Claude can’t keep up, users like me will look elsewhere.

In the end, I canceled Claude because I need functionality over hypersensitivity. I need an AI that does the job. Anthropic, if you’re listening: it’s time to wake up.