
The Rise of AI Identities in SaaS Security

AI systems are no longer just features.

They operate as identities inside SaaS environments.

If you’re not tracking AI identities, you’re not securing your perimeter.



AI Is No Longer Just a Feature

For years, security teams treated AI as functionality.

A productivity boost.

An analytics engine.

A summarization tool.


But in modern SaaS environments, AI is no longer just a feature.

It acts.

It connects.

It accesses.

It authenticates.


AI systems increasingly operate like identities.

And identities define your attack surface.


What Is an AI Identity?

An AI identity is not a human user.

It is:

  • An AI copilot with delegated file access

  • A summarization engine reading inbox content

  • A CRM assistant analyzing customer data

  • A workflow automation bot acting on behalf of users

  • A model-driven system with API keys


These AI-powered systems often authenticate via:

  • OAuth tokens

  • Service accounts

  • Application permissions

  • API keys


They have access.

They perform actions.

They process data.

From a security perspective, that makes them identities.
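From an inventory standpoint, an AI identity can be modeled much like any other non-human principal. A minimal sketch in Python (the field names, scope strings, and example apps are illustrative assumptions, not tied to any vendor schema):

```python
from dataclasses import dataclass, field

# Hypothetical record for a non-human (AI) identity in a SaaS inventory.
@dataclass
class AIIdentity:
    name: str                  # the connected app's display name
    auth_method: str           # "oauth_token", "service_account", "api_key", ...
    scopes: list = field(default_factory=list)  # granted permissions
    is_human: bool = False     # AI identities are non-human by definition

def high_risk(identities, sensitive=frozenset({"files.read", "mail.read"})):
    """Return identities whose granted scopes touch sensitive data."""
    return [i for i in identities if sensitive & set(i.scopes)]

inventory = [
    AIIdentity("summarizer-bot", "oauth_token", ["mail.read"]),
    AIIdentity("crm-assistant", "api_key", ["contacts.read"]),
]
print([i.name for i in high_risk(inventory)])  # only the inbox reader is flagged
```

The point of the model is the `is_human=False` default: once AI-driven apps sit in the same inventory as human users, the same scope-review logic applies to both.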


The Scale of the Problem

According to Waldo Security’s 2025 SaaS & Cloud Discovery Report:

  • 97% of SaaS applications are unknown to IT

  • 100% of organizations have unauthorized cloud accounts

  • Less than 1% of SaaS accounts enforce MFA



Now combine that with a simple reality:

Almost every modern SaaS platform now leverages AI.

If most SaaS apps are unknown, then most AI-enabled systems operating inside your organization are also unknown.

AI identities are expanding inside environments that were never fully mapped to begin with.


AI Identities Bypass Traditional Controls

AI systems rarely log in like employees.

They:

  • Use long-lived OAuth tokens

  • Operate through application permissions

  • Authenticate via service accounts

  • Sync data across platforms automatically


CISA’s Secure Cloud Business Applications (SCuBA) guidance highlights delegated access as a persistent risk because it can outlive users and evade centralized visibility: https://www.cisa.gov/secure-cloud-business-applications-scuba


When AI tools authenticate via delegated permissions, they inherit access silently.

They do not raise suspicion.

They look legitimate.

Because technically, they are.
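One practical way to surface this risk is to review exported grant data for long-lived or ownerless delegated access. A hedged sketch, assuming you have already pulled OAuth grants into plain records (the field names, example apps, and 90-day threshold are all illustrative assumptions):

```python
from datetime import datetime, timedelta

# Illustrative export of delegated OAuth grants, e.g. from an IdP admin report.
grants = [
    {"app": "ai-notetaker", "granted_at": datetime(2024, 1, 10), "owner": None},
    {"app": "hr-sync",      "granted_at": datetime(2025, 7, 1),  "owner": "it@corp"},
]

def stale_grants(grants, now, max_age=timedelta(days=90)):
    """Flag grants that are long-lived or have no accountable owner."""
    return [g for g in grants
            if now - g["granted_at"] > max_age or g["owner"] is None]

for g in stale_grants(grants, now=datetime(2025, 9, 1)):
    print(f"review: {g['app']}")  # prints "review: ai-notetaker"
```

A grant that is both old and ownerless is exactly the kind of access that outlives the user who approved it.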


AI Identities Change the Perimeter

The CISA Zero Trust Maturity Model centers identity as the foundation of modern security: https://www.cisa.gov/zero-trust-maturity-model

But traditional identity models assume:

  • Humans

  • Devices

  • Managed service accounts


AI identities blur those lines.

They:

  • Operate autonomously

  • Access data continuously

  • Analyze content in real time

  • Trigger actions without human review


If identity is the perimeter, AI has now expanded that perimeter.


Why AI Identity Risk Is Hard to See

AI systems often appear as:

  • “Connected apps”

  • “Integrations”

  • “Automation workflows”

  • “Enterprise applications”

They are rarely labeled as AI.


Security teams may see a connected app but not realize:

  • It processes files with machine learning

  • It analyzes sensitive communications

  • It retains data to improve model outputs

  • It connects across multiple SaaS platforms


If you are concerned about AI in your organization, discovery must start with SaaS.

Because AI lives inside SaaS.

And SaaS spreads at the speed of authentication.


Compliance Does Not Differentiate

Frameworks such as the NIST Privacy Framework and ISO/IEC 27001 require accountability for data processing activities.


If an AI system processes regulated or sensitive data, it must be governed.

It does not matter whether the processor is human or algorithmic.

Unknown AI identities represent undocumented data processing.

Undocumented processing represents exposure.


The Questions You Should Be Asking

For every AI-enabled SaaS platform, ask:

  • What identity does it use to authenticate?

  • What scopes or permissions are granted?

  • Does it have persistent OAuth access?

  • Does it process customer or regulated data?

  • Can access be revoked centrally?

  • Is there a business owner?

If you cannot answer those questions, the AI identity is unmanaged.

Unmanaged identities define modern breach paths.
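Those questions translate directly into an audit check. A minimal sketch, where the keys mirror the list above and the assumption is that any unanswered question marks the identity as unmanaged (the record shape is hypothetical):

```python
# One key per question from the checklist above.
REQUIRED_ANSWERS = [
    "auth_identity",       # what identity does it use to authenticate?
    "granted_scopes",      # what scopes or permissions are granted?
    "persistent_oauth",    # does it have persistent OAuth access?
    "regulated_data",      # does it process customer or regulated data?
    "central_revocation",  # can access be revoked centrally?
    "business_owner",      # is there a business owner?
]

def is_managed(record):
    """An AI identity is managed only if every question has an answer."""
    return all(record.get(k) is not None for k in REQUIRED_ANSWERS)

copilot = {"auth_identity": "svc-copilot", "granted_scopes": ["files.read"]}
print(is_managed(copilot))  # four questions unanswered -> False
```

The strict `all()` is deliberate: a partially documented identity is still an unmanaged one.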


From Human-Centric to Identity-Centric Security

Security programs historically focused on:

  • Employee lifecycle

  • Password policies

  • MFA coverage

  • Endpoint protection


Modern SaaS environments require a broader model:

  • Human identities

  • Non-human identities

  • Service accounts

  • OAuth tokens

  • AI-driven application permissions


All must be discovered.

All must be governed.


How Waldo Security Surfaces AI Identities

Waldo Security’s SaaS & Cloud Discovery Engine enables organizations to:

  • Discover known and unknown SaaS platforms

  • Surface OAuth and delegated access

  • Identify non-human and AI-driven identities

  • Detect Shadow cloud environments

  • Map SaaS and AI exposure to compliance frameworks


Because nearly every SaaS platform now leverages AI, understanding SaaS usage is inseparable from understanding AI identity risk.

You cannot govern AI if you cannot see the identities behind it.


Conclusion: AI Has Joined Your Identity Perimeter

AI is no longer just software.

It authenticates.

It accesses.

It analyzes.

It acts.


That makes it part of your identity perimeter.

If you are concerned about AI exposure, start with SaaS discovery.


Learn how organizations are uncovering SaaS and AI-related exposure in the 2025 SaaS & Cloud Discovery Report:


About Waldo Security

Waldo Security helps organizations discover, classify, and secure every SaaS and cloud service in use — known or unknown. By illuminating unmanaged identities, OAuth risk, Shadow IT, and AI-enabled SaaS exposure, Waldo enables security teams to defend the identity perimeter with continuous visibility and evidence.


