
How ChatGPT and DeepSeek Are Getting Us in Trouble

  • Writer: Martin Snyder
  • 3 days ago
  • 3 min read


A few years ago, no one could have predicted how quickly AI-powered SaaS platforms would dominate business workflows. Now, it feels like every week brings a new tool promising smarter content, faster code, better insights, or even deepfake detection.

While this explosion of AI tools feels like innovation in overdrive, for security teams, it’s increasingly becoming a nightmare.

Let’s talk about ChatGPT, DeepSeek, and the countless AI tools your employees are already using—with or without approval.


Shadow IT Has Evolved—Welcome to Shadow AI SaaS

It starts innocently enough:

  • A marketer signs up for ChatGPT to generate blog outlines.

  • A developer tries DeepSeek to accelerate coding.

  • A team lead uses an AI transcription tool to summarize meetings.


It all happens in minutes—without involving IT or security, without vendor vetting, and without understanding how these tools process or store data.

This isn’t just shadow IT anymore. It’s AI SaaS sprawl—and it’s happening faster than traditional controls can keep up.


Why the SaaS Discovery Gap Is Growing Wider

In 2023, the number of AI-powered SaaS tools surged across industries—from legal AI assistants like Harvey to medical diagnostics platforms like PathAI. These tools are impressive, but they also introduce unprecedented risks.


Employees are using them to process customer data, intellectual property, and even confidential strategy documents. The problem? Security teams often don’t find out until after the tool is embedded in workflows.


By the time risk assessments begin, sensitive data may already be stored—or even retained indefinitely—by platforms no one evaluated for compliance or data privacy.


You Can’t Block What You Don’t Know Exists

Some organizations try to ban tools like ChatGPT outright. But that’s a short-term fix to a long-term problem.

You could blocklist ChatGPT today, but what about the hundreds of new AI tools launching tomorrow? Trying to block them one by one is like playing security whack-a-mole. You’ll always be behind.


Instead, security teams need continuous SaaS discovery—not just for known platforms, but for every AI-driven service employees connect to work accounts.

This is where Waldo Security comes in. Our AI SaaS discovery engine automatically identifies unsanctioned tools, unauthorized OAuth connections, and usage patterns that could lead to non-compliance or data leaks.
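To make the idea concrete, here is a minimal sketch of what OAuth-grant triage can look like. The grant records, app names, and scope labels below are illustrative assumptions, not Waldo Security's actual data model; in practice the grants would come from your identity provider's audit or token-report API.

```python
# Flag third-party OAuth grants for apps outside an approved list.
# All data below is hypothetical example input.

APPROVED_APPS = {"Slack", "Zoom", "GitHub"}          # sanctioned vendors
RISKY_SCOPES = {"drive.readonly", "drive", "mail.read"}  # data-access scopes worth escalating

# Illustrative export of OAuth grants tied to corporate accounts.
grants = [
    {"user": "dev@example.com", "app": "DeepSeek", "scopes": ["email", "drive.readonly"]},
    {"user": "pm@example.com", "app": "Slack", "scopes": ["email"]},
    {"user": "mkt@example.com", "app": "ChatGPT", "scopes": ["email", "calendar"]},
]

def flag_unsanctioned(grants):
    """Return grants for apps not on the approved list, noting any risky scopes."""
    findings = []
    for g in grants:
        if g["app"] not in APPROVED_APPS:
            findings.append({
                "user": g["user"],
                "app": g["app"],
                "risky_scopes": sorted(set(g["scopes"]) & RISKY_SCOPES),
            })
    return findings

for f in flag_unsanctioned(grants):
    print(f"{f['user']} granted {f['app']} access (risky scopes: {f['risky_scopes']})")
```

The key design point: this is allowlist-based, so a brand-new AI tool launched tomorrow is flagged by default instead of slipping past a stale blocklist.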

Want to see what AI tools your employees are already using? Try our free OAuth scanner.


AI SaaS Tools Aren’t Going Away—And That’s Not a Bad Thing

AI isn’t just hype—it’s helpful. That’s why employees will always find ways to use these tools. And banning them outright can backfire, pushing usage deeper into the shadows.

The better approach?

  • Discover what’s being used.

  • Assess and classify risks.

  • Guide responsible usage through policy and education.

  • Automate offboarding and revoke access when needed.
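The discover-assess-act loop above can be sketched in a few lines. The risk attributes, weights, and threshold here are assumptions for illustration; a real deployment would feed this from a SaaS discovery inventory and wire the "revoke" bucket into actual offboarding.

```python
# Triage discovered SaaS apps into "revoke" vs. "govern via policy" buckets.
# Risk attributes and weights are illustrative, not a vendor's real scoring model.

RISK_RULES = {
    "handles_customer_data": 3,
    "retains_prompts": 2,
    "no_sso": 1,
}

def risk_score(app):
    """Sum the weights of every risk attribute the app exhibits."""
    return sum(weight for attr, weight in RISK_RULES.items() if app.get(attr))

def triage(apps, threshold=4):
    """Split discovered apps: at or above threshold -> revoke, below -> monitor."""
    revoke, monitor = [], []
    for app in apps:
        (revoke if risk_score(app) >= threshold else monitor).append(app["name"])
    return revoke, monitor

# Hypothetical output of a discovery scan.
discovered = [
    {"name": "UnvettedAI", "handles_customer_data": True, "retains_prompts": True},
    {"name": "NotesBot", "no_sso": True},
]

revoke, monitor = triage(discovered)
print("Revoke access:", revoke)
print("Allow with policy:", monitor)
```

Keeping the rules in a single table like `RISK_RULES` means the security team can tune what "risky" means without touching the triage logic itself.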

Waldo Security helps security teams move from reactive to proactive. No more relying on guesswork or hoping your blocklist holds. Just real-time SaaS visibility, built for the era of AI.


Final Thoughts: The Future of SaaS Security Is Discovery-First

The AI SaaS surge isn’t slowing down. From ChatGPT and DeepSeek to the next big thing in generative AI, the risk isn’t the tools themselves—it’s not knowing they’re in use.

If you don’t know what AI apps your workforce is using, you’re already behind.

It’s time to evolve from blocking to visibility-first SaaS security, because no matter how advanced your firewall or endpoint protection is, you can’t secure what you can’t see.
