Cloud AI Risk Report Highlights Amazon SageMaker Root Access Exposures
The Amazon Web Services (AWS) platform was featured in a new report from Tenable on cloud AI risks, specifically the Amazon SageMaker offering, where root access vulnerabilities were found to be widespread.
To be sure, fellow cloud giants Microsoft (Azure) and Google (Google Cloud Platform) received their fair share of scrutiny in the report, with the analysis finding that 70% of cloud workloads using AI services contain unresolved vulnerabilities. However, Amazon SageMaker was associated with one especially significant finding.
“The overwhelming majority (90.5%) of organizations that have configured Amazon SageMaker have the risky default of root access enabled in at least one notebook instance,” said the Cloud AI Risk Report 2025 from exposure management company Tenable.
That’s by far the highest percentage data point among the key findings of the report as presented by Tenable:
- Cloud AI workloads aren’t immune to vulnerabilities: Approximately 70% of cloud AI workloads contain at least one unremediated vulnerability. In particular, Tenable Research found CVE-2023-38545, a critical curl vulnerability, in 30% of cloud AI workloads.
- Jenga-style cloud misconfigurations exist in managed AI services: 77% of organizations have the overprivileged default Compute Engine service account configured in Google Vertex AI Notebooks. This means all services built on this default Compute Engine account are at risk.
- AI training data is susceptible to data poisoning, threatening to skew model results: 14% of organizations using Amazon Bedrock do not explicitly block public access to at least one AI training bucket, and 5% have at least one overly permissive bucket.
- Amazon SageMaker notebook instances grant root access by default: As a result, 91% of Amazon SageMaker users have at least one notebook that, if compromised, could grant unauthorized access and allow the modification of all files on it.
“By default, when a notebook instance is created, users who log into the notebook instance have root access,” the report explained. “Granting root access to Amazon SageMaker notebook instances introduces unnecessary risk by providing users with administrator privileges. With root access, users can edit or delete system-critical files, including those that contribute to the AI model, install unauthorized software and modify critical environment components, increasing the risk if compromised. According to AWS, ‘In adherence to the principle of least privilege, it is a recommended security best practice to restrict root access to instance resources to avoid unintentionally over provisioning permissions.’”
“Failing to follow the principle of least privilege increases the risk of unauthorized access, allowing attackers to steal AI models and expose proprietary data. Compromised credentials could grant access to critical resources like S3 buckets, which may contain training data, pre-trained models, or sensitive information such as PII,” said Tenable. The company noted, “The implications of such breaches are severe.”
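For teams that want to check their own exposure, the setting is straightforward to audit with the AWS SDK. The following Python sketch is a minimal illustration under stated assumptions, not anything published in the report: it lists the SageMaker notebook instances in an account and flags any with root access enabled, assuming boto3 is installed and AWS credentials are configured.

```python
# Minimal sketch: flag SageMaker notebook instances with root access enabled.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

sagemaker = boto3.client("sagemaker")

# Page through all notebook instances in the account/region.
paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for nb in page["NotebookInstances"]:
        name = nb["NotebookInstanceName"]
        # DescribeNotebookInstance reports RootAccess as 'Enabled' or 'Disabled'.
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        if detail.get("RootAccess") == "Enabled":
            print(f"Root access enabled: {name}")
            # Possible remediation (the instance must be stopped first):
            # sagemaker.update_notebook_instance(
            #     NotebookInstanceName=name, RootAccess="Disabled")
```

When creating new instances, passing RootAccess='Disabled' to the same API avoids the risky default from the start.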
Along with Amazon SageMaker, which is a fully managed machine learning service offered by AWS that enables developers and data scientists to build, train, and deploy machine learning models at scale, the report also addressed Amazon Bedrock, a fully managed service that enables businesses to build and scale generative AI applications using foundation models from multiple AI providers without needing to manage infrastructure.
The report found that a much smaller percentage of Amazon Bedrock training buckets are overly permissive.
“A small but significant portion (5%) of the organizations we studied that have configured Amazon Bedrock have at least one overly permissive training bucket,” the report said. “Overly permissive storage buckets are a well-known cloud misconfiguration; in AI environments such risks are amplified if the buckets contain sensitive data used to train or fine-tune AI models. If improperly secured, overly permissive buckets can be compromised by attackers to alter data, steal confidential information or disrupt the training process.”
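One quick way to spot such a bucket is to ask S3 whether its policy evaluates as public. The sketch below is a hypothetical illustration, not taken from the report; the bucket name is made up, and boto3 with configured AWS credentials is assumed.

```python
# Minimal sketch: check whether an S3 bucket's policy makes it public.
# The bucket name is hypothetical; assumes boto3 and AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "example-bedrock-training-bucket"  # hypothetical name

try:
    status = s3.get_bucket_policy_status(Bucket=bucket)
    if status["PolicyStatus"]["IsPublic"]:
        print(f"Bucket {bucket} has a public policy")
except ClientError as err:
    # NoSuchBucketPolicy simply means no bucket policy is attached.
    if err.response["Error"]["Code"] != "NoSuchBucketPolicy":
        raise
```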
The report also called out Amazon Bedrock training buckets that did not have public access blocked.
“Among the organizations that have configured Amazon Bedrock training buckets, 14.3% have at least one bucket that does not have Amazon S3 Block Public Access enabled,” the report said. “The Amazon S3 Block Public Access feature, considered a best practice for securely configuring sensitive S3 buckets, is designed to prevent unauthorized access and accidental data exposure. However, we identified instances in which Amazon Bedrock training buckets lacked this protection, a configuration that increases the risk of unintentional excessive exposure. Such oversights can leave sensitive data vulnerable to tampering and leakage, a risk that is even more concerning for AI training data, as data poisoning is highlighted as a top security concern in the OWASP Top 10 threats for machine learning systems.”
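Turning the feature on takes a single API call. As with the previous sketch, the bucket name below is hypothetical and boto3 is assumed; this simply enables all four Block Public Access settings on one bucket.

```python
# Minimal sketch: enable all S3 Block Public Access settings on a bucket.
# Bucket name is hypothetical; assumes boto3 and AWS credentials.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-bedrock-training-bucket",  # hypothetical name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject new public bucket policies
        "RestrictPublicBuckets": True,  # restrict access under public policies
    },
)
```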
Tenable Cloud Research created the new report by analyzing telemetry gathered from workloads across diverse public cloud and enterprise environments, scanned via Tenable products (Tenable Cloud Security, Tenable Nessus), the company said. The data was collected between December 2022 and November 2024, consisting of:
- Cloud asset and configuration information
- Real-world workloads in active production
- Data from AWS, Azure and GCP environments
“When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust,” said Liat Hayun, VP of Research and Product Management, Cloud Security, Tenable, in a March 19 news release. “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”
About the Author
David Ramel is an editor and writer at Converge 360.