AI-Focused Data Security Report Finds Thousands of Risky AWS Policies Per Account
A new data security report from Varonis uncovers a major cloud governance gap: the average Amazon Web Services (AWS) cloud environment contains more than 3,000 over-permissive access policies, creating a massive and largely invisible attack surface for bad actors to exploit.
While the report surveyed data risks across a range of cloud platforms, including Microsoft 365 and Salesforce, AWS was singled out for the volume and sprawl of its access policies, with tens of thousands of permission rules per account and thousands flagged as overly broad or risky.
Varonis, a data security and analytics specialist, released its 2025 State of Data Security Report on May 20, highlighting how excessive permissions and AI-driven risks are leaving cloud environments dangerously exposed.
The report, based on an analysis of 1,000 real-world IT environments, paints a troubling picture of enterprise cloud security in the age of AI. Among its most alarming findings: 99% of organizations had sensitive data exposed to AI tools, 98% used unverified or unsanctioned apps, including shadow AI, and 88% had stale but still-enabled user accounts that could provide entry points for attackers. Across platforms, weak identity controls, poor policy hygiene, and insufficient enforcement of security baselines like multifactor authentication (MFA) were common.
As part of the wide-ranging report, the company specifically analyzed organizations using AWS.
“AWS alone has over 18,000 possible identity and access management permissions to manage,” the report noted, underscoring the daunting complexity of securing cloud environments at scale. With such a sprawling permission set, organizations may struggle to enforce least-privilege access. The report cites this complexity as a factor in widespread access misconfigurations, but doesn't directly tie it to specific over-permissive policy criteria.
Varonis identified problematic AWS policies through its analysis of real-world environments, but the report doesn't specify how those policies were classified as over-permissive. The analysis suggests that many organizations still lack the controls or visibility needed to effectively audit and tighten cloud access across their environments.
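Since the report doesn't publish its classification criteria, any reconstruction is a guess, but a common heuristic in IAM auditing is to flag policy statements that allow wildcard actions or apply to every resource. The minimal Python sketch below illustrates that heuristic; the wildcard_statements helper and the sample policy are hypothetical illustrations, not drawn from the report:

```python
# Minimal sketch (not Varonis's criteria): flag IAM policy statements
# that grant wildcard actions or apply to every resource.
def wildcard_statements(policy_doc: dict) -> list[dict]:
    """Return Allow statements with a wildcard Action or a '*' Resource."""
    statements = policy_doc.get("Statement", [])
    if isinstance(statements, dict):  # a lone statement can be a bare object
        statements = [statements]
    flagged = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any("*" in a for a in actions) or any(r == "*" for r in resources):
            flagged.append(stmt)
    return flagged

# Hypothetical example: the textbook over-grant.
overly_broad = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}
print(wildcard_statements(overly_broad))  # -> the s3:* statement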
Beyond policy sprawl, the report surfaced several other risks unique to, or especially prevalent in, AWS environments:
- Massive policy count: The average AWS account had more than 20,000 managed policies, complicating access oversight and increasing the chance of misconfiguration.
- Over-permissive configurations: The report flagged thousands of AWS policies as overly permissive, though it doesn't specify the exact characteristics (such as wildcard actions or broad scopes) used in its classification; a rough account-scale detection sketch follows this list.
- Non-human identities under-secured: Varonis flagged poor credential management and excessive privileges associated with APIs and service accounts.
- Public exposure risks: Some AWS identities were configured to share public links that could expose internal data to unauthorized users or AI tools.
- Unmasked training data: Sensitive cloud data used in AI model training was frequently left unencrypted or exposed to anonymous users, increasing the risk of model poisoning or data leakage.
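At the scale the report describes, tens of thousands of policies per account, checks like the one sketched above are only practical when automated. As a rough illustration (again, not the report's methodology), the same hypothetical wildcard_statements helper could be applied account-wide with boto3:

```python
import boto3

# Rough illustration, not the report's methodology: walk the account's
# customer-managed IAM policies and flag wildcard grants using the
# wildcard_statements() helper sketched earlier in this article.
iam = boto3.client("iam")

total, flagged = 0, 0
paginator = iam.get_paginator("list_policies")
for page in paginator.paginate(Scope="Local"):  # "Local" = customer-managed
    for policy in page["Policies"]:
        total += 1
        version = iam.get_policy_version(
            PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
        )
        if wildcard_statements(version["PolicyVersion"]["Document"]):
            flagged += 1
            print(f"over-permissive? {policy['PolicyName']}")

print(f"{flagged} of {total} customer-managed policies contain wildcard grants")
```

Note that this pass only covers customer-managed policies; inline and AWS-managed policies would need separate sweeps, which hints at why oversight gets hard at this volume.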
The report surfaces a range of trends across all major cloud platforms, some revealing systemic weaknesses in access control, data hygiene, and AI governance.
While we're focusing on AWS here for readers of AWS Insider, the report focused heavily on AI, with the company saying in an accompanying blog post:
AI is everywhere. Copilots help employees boost productivity and agents provide front-line customer support. LLMs enable businesses to extract deep insights from their data.
Once unleashed, however, AI acts like a hungry Pac-Man, scanning and analyzing all the data it can grab. If AI surfaces critical data where it doesn't belong, it's game over. Data can't be unbreached.
And AI isn't alone: sprawling cloud complexities, unsanctioned apps, missing MFA, and other risks are creating a ticking time bomb for enterprise data. Organizations that lack proper data security measures risk a catastrophic breach of their sensitive information.
Key findings include:
- 99% of organizations have sensitive data exposed to AI tools: The report found that nearly all organizations had data accessible to generative AI systems, with 90% of sensitive cloud data, including AI training data, left open to AI access.
- 98% of organizations have unverified apps, including shadow AI: Employees are using unsanctioned AI tools that bypass security controls and increase the risk of data leaks.
- 88% of organizations have stale but enabled ghost users: These dormant accounts often retain access to systems and data, posing risks for lateral movement and undetected access (an AWS-side detection sketch follows this list).
- 66% have cloud data exposed to anonymous users: Buckets and repositories are frequently left unprotected, making them easy targets for threat actors.
- 1 in 7 organizations don't enforce multifactor authentication (MFA): The lack of MFA enforcement spans both SaaS and multi-cloud environments and was linked to the biggest breach of 2024.
- Only 1 in 10 organizations had labeled files: Poor file classification undermines data governance, making it difficult to apply access controls, encryption, or compliance policies.
- 52% of employees use high-risk OAuth apps: These apps, often unverified or stale, can retain access to sensitive resources long after their last use.
- 92% of companies allow users to create public sharing links: These links can be exploited to expose internal data to AI tools or unauthorized third parties.
- Stale OAuth applications remain active in many environments: These apps may continue accessing data months after being abandoned, often without triggering alerts.
- Model poisoning remains a major threat: Poorly secured training data and unencrypted storage can allow attackers to inject malicious data into AI models.
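On the ghost-user finding, AWS environments at least offer a first-party data source: the IAM credential report records when each user's password was last used. Below is a minimal sketch of a dormancy check, assuming a 90-day threshold (the report doesn't define what counts as "stale"):

```python
import csv
import io
from datetime import datetime, timedelta, timezone

import boto3

# Minimal sketch: use the IAM credential report to spot enabled but
# long-dormant ("ghost") users. The 90-day cutoff is an assumption;
# the report doesn't define "stale."
iam = boto3.client("iam")
iam.generate_credential_report()
# In practice, poll until generation completes; get_credential_report()
# raises CredentialReportNotReady if the report isn't finished yet.
report_csv = iam.get_credential_report()["Content"].decode("utf-8")

cutoff = datetime.now(timezone.utc) - timedelta(days=90)
for row in csv.DictReader(io.StringIO(report_csv)):
    if row["password_enabled"] != "true":
        continue  # console login disabled; access keys need a separate check
    last_used = row["password_last_used"]
    if last_used in ("N/A", "no_information"):
        print(f"never signed in: {row['user']}")
    elif datetime.fromisoformat(last_used) < cutoff:
        print(f"possible ghost user: {row['user']}")
```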
The report offers a sobering assessment of how AI adoption is magnifying long-standing issues in cloud security. From excessive access permissions in AWS to shadow AI, stale user accounts, and exposed training data, the findings make clear that many organizations are not prepared for the speed and scale of today's risks. The report urges organizations to reduce their data exposure, enforce strong access controls, and treat data security as foundational to responsible AI use.
About the Author
David Ramel is an editor and writer at Converge 360.