By Thyaga Vasudevan – Govt Vice President, Product
February 3, 2025 6 Minute Learn
DeepSeek, a Chinese language synthetic intelligence startup based in 2023, has skilled a meteoric rise in reputation over the previous week. Not solely did it surpass ChatGPT to turn into the highest-rated free app on the U.S. App Retailer, however the AI assistant additionally had a profound market affect, as main know-how shares skilled important declines. Nvidia, a number one AI chip producer, noticed its shares plummet by practically 17%, leading to a lack of roughly $589 billion in market worth—the biggest single-day loss in Wall Avenue historical past.
The innovation round DeepSeek represents an elevated democratization of AI, which is sweet for humanity at giant. The AI firm’s innovation has led to the providing of an open-source AI mannequin that rivals present platforms in efficiency whereas being less expensive and energy-efficient. The app’s user-friendly interface and clear “considering out loud” characteristic have additional enhanced its enchantment, permitting customers to comply with the AI’s reasoning course of.
The arrival of yet one more AI chatbot with its personal LLM mannequin additionally poses an necessary query to corporations, particularly giant enterprises, as they improve their AI funding. How ought to enterprises consider a brand new AI chatbot for his or her consumption? What components go into deciding the advantages and drawbacks to workers’ consumption of the AI utility and company adoption? Latest studies and real-world incidents present that sure LLMs—particularly open-source variants missing strong safety frameworks—pose important threats to information safety, regulatory compliance, and model popularity.
On this weblog, we discover:
- The rise of dangerous LLMs, like DeepSeek
- Key safety vulnerabilities related to AI
- How enterprises can consider, govern, and safe new AI chatbots
- Why an built-in strategy—resembling Skyhigh Safety SSE—is essential
The Rise of Dangerous LLMs and Chatbots
Open-source LLMs like DeepSeek have sparked each pleasure and concern. Not like enterprise-vetted AI options, open-source LLMs usually lack the strong safety controls wanted to safeguard delicate enterprise information as proven in a latest report from Enkrypt AI:
- 3x extra biased than comparable fashions
- 4x extra prone to generate insecure code
- 11x extra liable to dangerous content material
Regardless of these points, DeepSeek soared to the highest of the Apple App Retailer, surpassing even ChatGPT by hitting 2.6 million downloads in simply 24 hours (on twenty eighth Jan 2025). This explosive adoption highlights a elementary stress: AI is advancing at breakneck pace, however safety oversight usually lags behind, leaving enterprises uncovered to potential information leaks, compliance violations, and reputational injury.
Key Threat Areas When Evaluating AI Chatbots
As we highlighted in our Skyhigh AI Safety Weblog, companies should acknowledge the inherent dangers AI introduces, together with:
- Lack of utilization information: Safety groups don’t perceive what number of customers inside their enterprises are utilizing shadow AI apps to get their work finished.
- Restricted understanding of LLM threat: Understanding which AI apps and LLM fashions are dangerous is essential to governance and this info is just not simply acquired.
- Knowledge exfiltration: Within the strategy of getting work finished, customers add company information into AI apps and this might result in exfiltration of delicate information.
- Adversarial prompts: AI chatbots can usually present responses that are biased, poisonous, or just incorrect (hallucination). As well as, they’ll present code which might comprise malware. Consumption of those responses may cause issues for the corporate.
- Knowledge Poisoning: Enterprises are creating customized public or personal AI functions to swimsuit their enterprise wants. These apps are educated and tuned utilizing firm information. If the coaching information is compromised both inadvertently or by malicious intent, it may well result in the customized AI app offering incorrect info.
- Compliance & Regulatory dangers: Use of AI apps opens enterprises as much as larger compliance and regulatory dangers, both because of information exfiltration, publicity of delicate information, or incorrect or adversarial prompts related to customized AI chatbots.
Why an Built-in Strategy Issues: Skyhigh Safety SSE
As enterprises consider new AI apps or chatbots they need to take into account if they’ve the instruments to use the mandatory controls to guard their company property. They need to make sure that their safety stack is positioned not simply to use the controls on AI functions, but in addition to guage and reply to malicious exercise and threats that come up from these functions.
Safety Companies Edge (SSE) options resembling Skyhigh Safety are a key element of enterprise AI safety. These instruments are already built-in with the enterprise safety stack as corporations have secured on-prem and cloud visitors. Safety groups have already outlined governance and information safety insurance policies and these may be simply prolonged to AI functions. And at last, by overlaying internet, shadow apps, sanctioned apps, and personal apps by their versatile deployment modes, SSE options can cowl the spectrum of AI footprint throughout the enterprise and supply complete safety.
Listed below are the highest controls enterprises need to apply on AI apps:
- Governance of Shadow AI: Driving governance of shadow AI functions requires understanding utilization and threat of AI functions in addition to making use of controls. Main SSE options present complete visibility into shadow AI functions. As well as, they supply a deep understanding of AI threat, which incorporates how dangerous the underlying LLM mannequin is to dangers resembling jailbreaking, bias, toxicity, and malware. Lastly, these functions may be detected, grouped, and controls may be enforced with out requiring guide intervention.
- Knowledge Safety: The first concern that enterprises have with AI apps is the exfiltration of delicate company information into unsanctioned and dangerous AI apps as workers need to make the most of the numerous productiveness positive factors provided by AI. This drawback is just not completely different from some other shadow utility, however has gained prominence because of the important development that AI apps have seen in a brief time period. Utilizing SSE options, enterprises can prolong their present information safety controls to AI apps. Whereas some options solely supply these capabilities for company sanctioned apps that are built-in by way of APIs, main SSE options, resembling Skyhigh Safety, supply unified information safety controls. This implies the identical coverage may be utilized to a shadow app, sanctioned app, or personal app.
- Adversarial Immediate controls: The arrival of LLM fashions has given rise to a brand new threat vector in adversarial prompts. This refers to finish customers making an attempt to control the LLM fashions to supply undesirable or unlawful info resembling with jailbreaking or immediate injections. It might additionally consult with the AI apps offering poisonous, biased, harmful, NSFW, or incorrect content material of their responses. In both of those instances, the corporate is vulnerable to this content material getting used inside its company materials and making it weak to regulatory, governance, and reputational dangers. Corporations need to apply controls to detect and remediate dangerous prompts similar to they do with DLP.
- Knowledge Poisoning remediation: As enterprises more and more create their customized AI chatbots utilizing OpenAI GPTs or Customized Copilots, the sanctity of the coaching information used to coach these chatbots has gained significance from a safety perspective. If somebody with entry to this corpus of coaching information ‘poisons’ it with incorrect or malicious inputs, then it probably impacts the chatbot’s responses. This might topic the corporate to authorized or different enterprise dangers, particularly if the chatbot is open to public entry. Enterprises are already performing on-demand (or data-at-rest) DLP scans on coaching information to take away delicate information. They’re additionally seeking to carry out comparable scans to determine potential immediate injection or information poisoning makes an attempt.
- Compliance and regulatory enforcement: Enterprises are utilizing SSE options to implement governance and regulatory compliance, particularly with information being uploaded to cloud apps or shared with exterior events. As they undertake AI in a number of company use instances, they need to the SSE options to increase these controls to AI apps and proceed to allow entry to their workers.
The Way forward for AI Safety
The fast evolution of AI calls for a brand new safety paradigm—one which ensures innovation doesn’t come at the price of information safety. Enterprises seeking to leverage LLMs should accomplish that with warning, adopting AI safety frameworks that defend in opposition to rising threats.
At Skyhigh Safety, we’re dedicated to serving to companies securely embrace AI whereas safeguarding their most important property. To be taught extra about learn how to defend your group from dangerous AI utilization, discover our newest insights within the Skyhigh AI Safety Weblog.


Concerning the Writer
Thyaga Vasudevan
Govt Vice President, Product
Thyaga Vasudevan is a high-energy software program skilled at the moment serving because the Govt Vice President, Product at Skyhigh Safety, the place he leads Product Administration, Design, Product Advertising and GTM Methods. With a wealth of expertise, he has efficiently contributed to constructing merchandise in each SAAS-based Enterprise Software program (Oracle, Hightail – previously YouSendIt, WebEx, Vitalect) and Shopper Web (Yahoo! Messenger – Voice and Video). He’s devoted to the method of figuring out underlying end-user issues and use instances and takes pleasure in main the specification and growth of high-tech services to handle these challenges, together with serving to organizations navigate the fragile stability between dangers and alternatives. Thyaga loves to teach and mentor and has had the privilege to talk at esteemed occasions resembling RSA, Trellix Xpand, MPOWER, AWS Re:invent, Microsoft Ignite, BoxWorks, and Blackhat. He thrives on the intersection of know-how and problem-solving, aiming to drive innovation that not solely addresses present challenges but in addition anticipates future wants.