AI is spreading through workplaces faster than any other technology in recent memory. Every day, employees connect AI technologies to enterprise systems, often without permission or oversight from IT security teams. The result is what experts call shadow AI – a growing web of tools and integrations that access company data unmonitored.
Dr. Tal Shapira, co-founder and CTO of Reco, a SaaS security and AI governance provider, says this invisible sprawl could become one of the biggest threats facing organisations today, especially as the speed of AI adoption has outpaced enterprise safeguards.
“We went from ‘AI is coming’ to ‘AI is everywhere’ in about 18 months. The problem is that governance frameworks simply haven’t caught up,” Shapira said.
The invisible risk inside company systems
Shapira said most corporate security systems were designed for an older world where everything stayed behind firewalls and network perimeters. Shadow AI breaks that model because it works from the inside, hidden in the company’s own tools.
Many modern AI tools connect straight into everyday SaaS platforms like Salesforce, Slack, or Google Workspace. The connection itself is not the risk; the problem is that these tools typically hook in through OAuth permissions and plug-ins that stay active after installation. Those ‘quiet’ links can keep giving AI systems access to company data, even after the person who set them up stops using them or leaves the organisation. That is the heart of the shadow AI problem.
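Auditing these grants does not require a dedicated platform to get started. The sketch below is a minimal example that lists the OAuth tokens one Google Workspace user has granted to third-party apps via the Admin SDK Directory API. The account names and the service-account file are placeholders, and a real audit would loop over every user and compare client IDs against an approved-app list.

```python
# Minimal sketch: list third-party OAuth grants for one Google Workspace user.
# Assumes a service account with domain-wide delegation and the
# admin.directory.user.security scope; names below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")  # hypothetical admin account

directory = build("admin", "directory_v1", credentials=creds)

tokens = directory.tokens().list(userKey="employee@example.com").execute()
for grant in tokens.get("items", []):
    # displayText is the app name shown on the consent screen;
    # scopes shows exactly what data the integration can still reach.
    print(grant.get("displayText"), grant.get("clientId"), grant.get("scopes"))
```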
Shapira said: “The deeper issue is that these tools are embedding themselves into the company’s infrastructure, sometimes for months or years without detection.”
This new class of risk is especially difficult to track because many AI systems are probabilistic. Instead of executing fixed commands, they make predictions based on patterns, so their behaviour can change from one situation to the next, making it harder to review and control.
When AI goes rogue
The damage from shadow AI is already evident in real-world incidents. Reco recently worked with a Fortune 100 financial firm that believed its systems were secure and compliant. Within days of deploying Reco’s monitoring, the company uncovered more than 1,000 unauthorised third-party integrations in its Salesforce and Microsoft 365 environments – over half of them powered by AI.
One integration, a transcription tool connected to Zoom, had been recording every customer call, including pricing discussions and confidential feedback. “They were unknowingly training a third-party model on their most sensitive data,” Shapira noted. “There was no contract, no understanding of how that data was being stored or used.”
In another case, an employee linked ChatGPT directly to Salesforce, allowing the AI to generate hundreds of internal reports in hours. That might sound efficient, but it also exposed customer information and sales forecasts to an external AI system.
How Reco detects the undetected
Reco’s platform is built to give companies full visibility into which AI tools are connected to their systems and what data those tools can access. It continuously scans SaaS environments for OAuth grants, third-party apps, and browser extensions. Once a tool is identified, Reco shows which users installed it, what permissions it holds, and whether its behaviour looks suspicious.
If a connection appears risky, the system can alert administrators or revoke access automatically. “Speed matters because AI tools can extract massive amounts of data in hours, not days,” Shapira said.
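Reco has not published its internals, but the general pattern it describes – enumerate grants, score them against policy, alert, and revoke – can be sketched in a few lines. Everything below is hypothetical: `list_grants`, `revoke_grant`, the alerting hook, and the risk rules stand in for real SaaS connectors and are not Reco’s API.

```python
# Hypothetical sketch of a scan-score-revoke loop; not Reco's implementation.
import time
from dataclasses import dataclass

HIGH_RISK_SCOPES = {"full_mailbox", "drive.readonly", "crm.export"}  # illustrative labels

@dataclass
class Grant:
    user: str
    app_name: str
    client_id: str
    scopes: set[str]
    last_used_days: int

def is_suspicious(grant: Grant) -> bool:
    # Flag broad data access, or grants nobody has touched in months.
    return bool(grant.scopes & HIGH_RISK_SCOPES) or grant.last_used_days > 90

def scan_once(list_grants, revoke_grant, alert):
    for grant in list_grants():
        if is_suspicious(grant):
            alert(f"Risky integration: {grant.app_name} installed by {grant.user}")
            revoke_grant(grant.client_id)   # or hold for admin review instead

def run(list_grants, revoke_grant, alert, interval_s=3600):
    while True:                             # continuous monitoring loop
        scan_once(list_grants, revoke_grant, alert)
        time.sleep(interval_s)
```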
Unlike traditional security products that rely on network boundaries, Reco focuses on the identity and access layer. That makes it well suited for today’s cloud-first, SaaS-heavy organisations where most data lives outside the traditional firewall.
A wider security wake-up call
Industry analysts say Reco’s work reflects a larger trend in enterprise security: a shift from blocking AI to governing it. According to a 2025 Cisco report on AI readiness, 62% of organisations admitted they have little visibility into how employees are using AI tools at work, and nearly half have already experienced at least one AI-related data incident.
As AI features become embedded in mainstream software – from Salesforce’s Einstein to Microsoft Copilot – the challenge grows. “You may think you’re using a trusted platform,” Shapira said, “but you might not realise that platform now includes AI features accessing your data automatically.”
Reco’s system helps close that gap by monitoring both sanctioned and unsanctioned AI activity, giving companies a clearer picture of where their data is flowing, and why.
Harnessing AI securely
Shapira believes enterprises are entering what he calls the AI infrastructure phase – a period when every business tool will include some form of AI, whether visible or not. That makes continuous monitoring, least-privilege access, and short-lived permissions essential.
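One concrete way to apply the short-lived, least-privilege principle is to issue scoped, time-boxed credentials rather than standing keys. The sketch below uses AWS STS to mint a 15-minute session narrowed by an inline policy; the role ARN and bucket name are placeholders, and most cloud and SaaS platforms offer comparable mechanisms.

```python
# Minimal sketch: short-lived, least-privilege credentials for an AI integration.
# The role ARN and bucket are placeholders; the inline session policy can only
# narrow, never expand, what the role itself allows.
import json
import boto3

sts = boto3.client("sts")

session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                     # read-only, one action
        "Resource": "arn:aws:s3:::example-reports/*",   # one bucket prefix
    }],
}

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ai-integration",  # placeholder role
    RoleSessionName="ai-tool-session",
    DurationSeconds=900,                 # 15 minutes, then the access expires
    Policy=json.dumps(session_policy),
)["Credentials"]
# creds holds AccessKeyId / SecretAccessKey / SessionToken valid for 15 minutes.
```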
“The companies that succeed won’t be the ones blocking AI,” he observed. “They’ll be the ones adopting it safely, with guardrails that protect both innovation and trust.”
Shadow AI, he said, is not a sign of employee recklessness, but of how quickly technology has moved. “People are trying to be productive,” he said. “Our job is to make sure they can do that without putting the organisation at risk.”
For enterprises trying to harness AI without losing control of their data, Reco’s message is simple: You can’t secure what you can’t see.
Image source: Unsplash
