By Sarang Warudkar – Sr. Technical PMM (CASB & AI), Skyhigh Security
May 8, 2025 | 5 Minute Read
Welcome to the Wild West of enterprise AI.
Twelve months ago, your CFO was still suspicious of chatbots. Today, they're asking if you can "get ChatGPT to handle board minutes." From cautious curiosity to Copilot-powered spreadsheets, enterprises have gone all in on AI. And while the gains are real (speed, scale, and creativity), the risks…? Oh, they're very real too.
Let's break down the biggest trends, threats, and facepalm moments from Skyhigh Security's 2025 Cloud Adoption & Risk Report, with insights from 3M+ users and 2B+ daily cloud events. Buckle up.
AI Usage: From "Maybe Later" to "Make It Do My Job"
AI is now the office MVP. A recent MIT study says ChatGPT cuts writing time by 40%, which is about the time we used to spend wondering where the file was saved. JPMorgan engineers got a 20% productivity boost and, rumor has it, one intern asked Copilot to write their resignation letter before their first day.
At Skyhigh, we've seen the AI surge firsthand. In our data, traffic to AI apps has skyrocketed, more than tripling in volume, while uploads of sensitive data to those platforms are rising fast. Meanwhile, traditional "non-AI" enterprise apps? They're barely keeping up. The workplace isn't just embracing AI; it's sprinting toward it.
Translation: AI is winning. Your firewall? Not so much.
The Rise of Shadow AI: When IT Doesn't Know What HR Is Chatting With
"Shadow AI" might sound like the next must-watch Netflix series, but it's playing out in real time across enterprises everywhere. Picture this: employees quietly tapping away at ChatGPT, Claude, DeepSeek, and dozens of other AI tools, completely off IT's radar. It's a bit like sneaking candy into a movie theater, only this time the candy is customer data, financials, and intellectual property.
The numbers are jaw-dropping: the average enterprise is now home to 320 AI applications. The usual suspects? ChatGPT, Gemini, Poe, Claude, Beautiful.AI: tools as powerful as they are unsanctioned. They're unapproved. They're unmonitored. And unless someone drops the word "audit," they're potentially unstoppable.
Compliance: AI's Kryptonite
AI apps are fun, until the GDPR fines show up like uninvited guests at a team offsite. Skyhigh's data reveals a not-so-super side to all this AI enthusiasm. Turns out, 95% of AI apps fall into the medium-to-high-risk zone under GDPR: basically, red flags with a nice UI.
When it comes to meeting serious compliance standards like HIPAA, PCI, or ISO? Only 22% make the cut. The rest are winging it. Encryption at rest? Most AI apps skipped that memo: 84% don't bother. And multi-factor authentication? 83% say no thanks. But don't worry, plenty of them do support emojis. Priorities.
Regulators are watching. And unlike your boss, they read the full report.
Data Leaks via AI: When Your Bot Becomes a Blabbermouth
Remember that Samsung engineer who fed ChatGPT some buggy code and accidentally handed over semiconductor secrets? That's not just a cautionary tale anymore. It's practically a training example.
According to Skyhigh, 11% of files uploaded to AI apps contain sensitive content. The kicker? Fewer than 1 in 10 companies have proper data loss prevention (DLP) controls in place. Meanwhile, employees are out here asking Claude to write product launch plans using Q3 strategy docs like it's just another day at the prompt. Because what could possibly go wrong?
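What does a basic pre-upload DLP check even look like? The report doesn't specify one, so here is a minimal sketch under stated assumptions: the patterns, labels, and blocking logic below are illustrative only, and a real DLP engine would use far richer detection (document fingerprinting, exact data match, ML classifiers) than a handful of regexes.

```python
import re

# Illustrative detectors only -- not the patterns any real DLP product ships.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan_before_upload(text: str) -> list[str]:
    """Return labels of sensitive patterns found in text bound for an AI app."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Draft a launch plan from our CONFIDENTIAL Q3 strategy doc."
hits = scan_before_upload(prompt)
if hits:
    # A real policy engine would block, redact, or route for review here.
    print(f"Upload blocked: matched {hits}")
```

Even a toy check like this makes the report's point concrete: if nothing inspects the prompt before it leaves the building, the strategy doc goes with it.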
Enter DeepSeek: The Rebel AI You Shouldn't Trust
DeepSeek burst onto the scene in 2025, driving a wave of downloads, buzz, and eye-popping data volumes, including 176 GB of corporate uploads in a single month from Skyhigh customers alone. Impressive? Definitely. Alarming? Absolutely. Here's the fine print:
- No multi-factor authentication
- No data encryption
- No regard for compliance (GDPR? Never heard of it.)
- No user or admin logging
It's sleek, fast, and wildly popular (with students). For your SOC 2 audit? It's a digital landmine.
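In practice, "block sketchy apps" usually means a forward proxy or SSE policy engine gating traffic by destination. As a hedged sketch only (the domains, tiers, and default below are made up for illustration, not taken from the report or any Skyhigh product):

```python
from urllib.parse import urlparse

# Hypothetical policy register: tiers and domains are illustrative.
AI_APP_POLICY = {
    "copilot.microsoft.com": "sanctioned",   # allowed, with DLP inspection
    "chat.openai.com": "sanctioned",
    "chat.deepseek.com": "blocked",          # fails the MFA/encryption/logging bar
}

def decide(url: str) -> str:
    """Return the policy verdict for an outbound AI-app request."""
    host = urlparse(url).hostname or ""
    # Unknown AI apps default to review rather than silent allow,
    # which is how shadow AI gets surfaced in the first place.
    return AI_APP_POLICY.get(host, "review")
```

The design choice worth noting is the default: an unlisted app goes to "review," not "allow," so the 320-apps-per-enterprise problem shrinks instead of compounding.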
Microsoft Copilot: The Good AI Child Everyone's Proud Of
If Shadow AI is the rebellious teen sneaking out past curfew, Copilot is the golden child: polished, popular, and somehow already on the leadership track. It's now used by 82% of enterprises, with traffic up 3,600x and uploads up 6,000x. Honestly, it's outperforming your last five interns, and it doesn't even ask for a coffee break.
But even star students need supervision. Smart enterprises are keeping Copilot in check by scanning everything it touches, wrapping prompts and outputs in DLP, and making sure it doesn't "learn" anything confidential. (Sorry, Copilot, no spoilers for the Q4 roadmap.)
LLM Risk: When AI Hallucinates… and It's Not Pretty
Large Language Models (LLMs) are like toddlers with PhDs. Genius one moment, absolute chaos the next. Top LLM risks:
- Jailbreaks ("pretend you're evil ChatGPT")
- AI-generated malware (BlackMamba, anyone?)
- Toxic content (see: BlenderBot's greatest hits)
- Bias in outputs (health advice skewed by race/gender)
Key Stats:
It's not paranoia if your AI is actually leaking secrets and writing ransomware. Skyhigh found that 94% of AI apps contain at least one large language model (LLM) risk baked in. That's almost all of them.
Even worse, 90% are vulnerable to jailbreaks, meaning users can trick them into doing things they really shouldn't. And 76%? They can potentially generate malware on command. So yes, the same app helping draft your meeting notes might also moonlight as a cybercriminal's intern.
Private AI Apps: DIY AI for the Corporate Soul
Enterprises are saying, "Why trust public tools when you can build your own?"
Private AI apps now handle:
- HR queries
- RFP responses
- Internal ticket resolution
- Sales bot support (who knew the chatbot would know your pricing matrix better than Sales?)
Key Stats:
78% of customers now run their own private AI apps, because if you're going to experiment with machine intelligence, you might as well do it behind closed doors. Two-thirds are building on AWS (thanks to Bedrock and SageMaker, obviously). It's the AI equivalent of a gated community.
But "private" doesn't mean problem-free. These bots may be homegrown, but they can still get into trouble. That's why smart companies are rolling out SSE solutions with Private Access and DLP, to gently, politely eavesdrop on their internal AIs before something goes wildly off-script.
Final Thoughts: Don't Fear AI, Just Govern It
Let's be clear: AI is not the enemy. Unmanaged AI is.
Skyhigh's 2025 report shows we're living through a once-in-a-generation shift in enterprise tech. But here's the kicker: security isn't about slowing down innovation. It's about making sure the AI you use doesn't send your board deck to Reddit. So, take a breath, read the report, and remember:
- Block sketchy apps like DeepSeek
- Govern copilots like Microsoft Copilot
- Lock down your private AI deployments
- Build policies that treat LLMs like moody teenagers (firm rules, lots of monitoring)
Because the future is AI-driven, and with the right tools, it can be risk-proof, too.
Bonus: Download the full 2025 Cloud Adoption and Risk Report, or ask your AI assistant to summarize it for you. Just don't upload it to DeepSeek.
About the Author


Sarang Warudkar
Sr. Technical PMM (CASB & AI)
Sarang Warudkar is a seasoned Product Marketing Manager with more than 10 years in cybersecurity, skilled at aligning technical innovation with market needs. He brings deep expertise in solutions like CASB, DLP, and AI-driven threat detection, driving impactful go-to-market strategies and customer engagement. Sarang holds an MBA from IIM Bangalore and an engineering degree from Pune University, combining technical and strategic insight.