Unapproved use of ChatGPT and other generative AI tools is
creating a growing cybersecurity blind spot for businesses. As employees adopt
these technologies without proper oversight, they may inadvertently expose
sensitive data — yet many managers still underestimate the risk and delay
implementing third-party defenses.
This type of unsanctioned
technology use, known as shadow IT, has long posed security challenges. Now,
its AI-driven counterpart — shadow AI — is triggering new concerns among
cybersecurity experts.
Melissa Ruzzi, director of AI at SaaS security firm AppOmni, says
AI can analyze and extract far more information from data, making it more
dangerous than traditional shadow IT.
“In most cases, broader access to data is given to AI
compared to shadow IT so that AI can perform its tasks. This increases the potential
exposure in the case of a data breach,” she told TechNewsWorld.
AI Escalates Shadow IT Risks
Employees’ rogue use of AI tools presents unique security
risks. If the AI models access an organization’s sensitive data for model
training or internal research, that information can unintentionally become
public. Malicious actors can also obtain private details through model
vulnerabilities.
Ruzzi noted that employees encounter various forms of shadow
AI, including GenAI tools, AI-powered meeting transcriptions, coding
assistants, customer support bots, data visualization engines, and AI features
within CRM systems.
Ruzzi emphasized that the lack of security vetting makes
shadow AI particularly risky, as some models may use company data without
proper safeguards, fail to comply with regulations, and store information at
insufficient security levels.
Which Poses Greater Risk: GenAI or Embedded AI?
Ruzzi added that shadow AI emerging from unapproved GenAI
tools presents the most immediate and significant security threat. It often
lacks oversight, security protocols, and governance.
However, shadow AI in any form carries security implications if it goes
undetected, so organizations need to identify and manage this "hidden" usage
effectively. That means investing in a security tool powerful enough to go
beyond detecting direct use of AI chatbots such as ChatGPT.
“AI can keep up with the constant
release of new AI tools and news about security breaches. To add power to
detections, security should not only rely on static rules that can quickly get
outdated,” she recommended.
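To illustrate why purely static rules fall behind, here is a minimal Python sketch that flags proxy-log entries matching a hard-coded list of AI service domains. The log format and domain list are assumptions for illustration, not any particular product's approach; any tool released after the list was written simply goes undetected.
```python
# Hypothetical sketch: flag proxy-log entries that hit known AI service domains.
# Because the domain list is static, newly released AI tools are missed until
# someone remembers to update it.

KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for requests that match the static list."""
    for line in proxy_log_lines:
        # Assumed log format: "<timestamp> <user> <destination-domain>"
        parts = line.split()
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2]
        if domain in KNOWN_AI_DOMAINS:
            yield user, domain

if __name__ == "__main__":
    sample_log = [
        "2025-05-01T09:14:02Z alice chat.openai.com",
        "2025-05-01T09:15:40Z bob brand-new-ai-tool.example",  # missed: not in the list
    ]
    for user, domain in flag_shadow_ai(sample_log):
        print(f"possible shadow AI use: {user} -> {domain}")
```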
GenAI Risks Hidden in SaaS Apps
Ruzzi highlighted the risks posed by AI
tools embedded within approved SaaS applications. Those hidden AI tools are
unknown or unapproved for use by the company, even though the SaaS application
itself is approved.
“AI features embedded within approved
SaaS applications impose a special challenge that can only be detected by
powerful SaaS security tools that go deep into SaaS configurations to uncover
shadow AI,” she said.
Traditional security tools, such as
cloud access security brokers (CASBs), are security policy enforcement points
that sit between enterprise users and cloud service providers. They can uncover
only SaaS app usage and direct use of AI tools like ChatGPT.
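As a rough sketch of the difference, the following Python snippet checks a SaaS tenant's admin settings for AI feature flags rather than watching network traffic. The endpoint, token, and setting names are placeholders invented for illustration, not any real vendor's API; real SaaS security tools map these configurations per vendor.
```python
# Hypothetical sketch: inspect a SaaS tenant's admin settings for embedded AI
# features that traffic-level tools would not see. The endpoint, token, and
# setting keys below are placeholders, not any real vendor's API.

import json
import urllib.request

ADMIN_SETTINGS_URL = "https://example-saas.invalid/api/v1/admin/settings"  # placeholder
API_TOKEN = "REPLACE_ME"  # placeholder credential

# Assumed names of AI-related feature flags in the settings payload.
AI_FEATURE_KEYS = {"ai_assistant_enabled", "auto_summarization", "generative_replies"}

def fetch_settings():
    """Fetch the tenant settings JSON from the (placeholder) admin API."""
    req = urllib.request.Request(
        ADMIN_SETTINGS_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def enabled_ai_features(settings):
    """Return the assumed AI feature flags that are switched on."""
    return {key for key in AI_FEATURE_KEYS if settings.get(key) is True}

if __name__ == "__main__":
    # Demo with a local sample payload instead of a live API call.
    sample = {"ai_assistant_enabled": True, "auto_summarization": False}
    print("embedded AI features enabled:", sorted(enabled_ai_features(sample)))
```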
Compliance Risks Tied to Shadow AI
As noted earlier, shadow AI can lead to
compliance violations concerning personal information. Some regulatory
frameworks that impact organizations include the European Union’s General Data
Protection Regulation (GDPR), which governs the processing of personal data. In
the U.S., the California Consumer Privacy Act/California Privacy Rights Act
(CCPA/CPRA) is similar to GDPR but for California residents.
Shadow AI can violate GDPR principles, including:
Data minimization — collecting more data than necessary
Purpose limitation — using data for unintended purposes
Data security — failing to protect data adequately
Organizations are accountable for all
data processing activities, including those of unauthorized AI.
For companies that handle health care
data or provide health-related services in the U.S., the Health Insurance
Portability and Accountability Act (HIPAA) is the most important regulation for
protecting sensitive patient health information.
“Shadow AI can lead to violations of
consumers’ rights to know, access, and delete their data and opt out of selling
their personal information. If shadow AI uses personal data without proper
disclosures or consent, it breaks CCPA/CPRA rules,” Ruzzi said.
“If shadow AI systems access, process, or share protected
health information (PHI) without proper safeguards, authorizations, or business
associate agreements, it constitutes a HIPAA violation, which can lead to
costly lawsuits.”
Many other jurisdictions have data privacy laws, such as the
LGPD (Brazil) and PIPEDA (Canada), as well as various U.S. state laws.
Organizations must ensure that shadow AI complies with all applicable data
protection regulations, taking into account both their own locations and those
of their customers.
Future Security Challenges With Shadow AI
Avoiding legal conflicts is essential. Ruzzi urged
organizations to assess and mitigate risks from unvetted AI tools by testing
for vulnerabilities and establishing clear guidelines on which tools are
authorized.
She also recommended educating employees about shadow AI
threats and ensuring they have access to vetted, enterprise-grade solutions.
As AI evolves and becomes more embedded across applications,
shadow AI will introduce more complex security risks. To stay ahead,
organizations need long-term strategies supported by SaaS security tools that
can detect AI activity across applications, accurately assess risks, and
contain threats early.
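One way such assessment is often framed is by weighing the breadth of data access an AI integration has been granted. The sketch below ranks discovered integrations by the scopes they hold; the scope names, weights, and integration names are hypothetical and only illustrate the idea of risk scoring, not a specific product's method.
```python
# Hypothetical sketch: rank third-party AI integrations by the breadth of the
# data access they were granted. Scope names and weights are illustrative only.

RISK_WEIGHTS = {
    "read_all_files": 5,
    "read_messages": 4,
    "read_contacts": 3,
    "read_calendar": 2,
    "read_profile": 1,
}

def score_integration(granted_scopes):
    """Sum weights for known scopes; unknown scopes get a conservative default."""
    return sum(RISK_WEIGHTS.get(scope, 2) for scope in granted_scopes)

if __name__ == "__main__":
    integrations = {
        "meeting-transcriber-ai": ["read_calendar", "read_all_files"],
        "crm-writing-assistant": ["read_profile"],
    }
    ranked = sorted(integrations.items(), key=lambda item: score_integration(item[1]), reverse=True)
    for name, scopes in ranked:
        print(f"{name}: risk score {score_integration(scopes)}")
```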
“The reality of shadow AI will be present more than ever. The
best strategy here is employee training and AI usage monitoring,” she
concluded.