Cloud AI Risk Report Highlights Amazon SageMaker Root Access Exposures
The Amazon Web Services (AWS) platform was featured in a new report from Tenable on cloud AI risks, specifically the Amazon SageMaker offering, where root access vulnerabilities were found to be widespread.
To be sure, fellow cloud giants Microsoft (Azure) and Google (Google Cloud Platform) received their fair share of scrutiny in the report, with the analysis finding that 70% of cloud workloads using AI services contain unresolved vulnerabilities. However, Amazon SageMaker was associated with one significant finding.
“The overwhelming majority (90.5%) of organizations that have configured Amazon SageMaker have the risky default of root access enabled in at least one notebook instance,” said the Cloud AI Risk Report 2025 from exposure management company Tenable.
That’s by far the highest percentage among the key findings of the report as presented by Tenable:
- Cloud AI workloads aren’t immune to vulnerabilities: Roughly 70% of cloud AI workloads contain at least one unremediated vulnerability. In particular, Tenable Research found CVE-2023-38545, a critical curl vulnerability, in 30% of cloud AI workloads (a minimal version-check sketch follows this list).
- Jenga-style cloud misconfigurations exist in managed AI services: 77% of organizations have the overprivileged default Compute Engine service account configured in Google Vertex AI Notebooks. This means all services built on this default Compute Engine service account are at risk.
- AI training data is susceptible to data poisoning, threatening to skew model results: 14% of organizations using Amazon Bedrock do not explicitly block public access to at least one AI training bucket, and 5% have at least one overly permissive bucket.
- Amazon SageMaker notebook instances grant root access by default: As a result, 91% of Amazon SageMaker users have at least one notebook that, if compromised, could grant unauthorized access, which could result in the potential modification of all files on it.
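The report itself does not include remediation code. As one illustration of checking for the curl finding, the sketch below (a minimal Python example, assuming a local `curl` binary on the PATH) tests whether an installed curl falls in the range the public advisory lists for CVE-2023-38545: versions 7.69.0 through 8.3.0, fixed in 8.4.0. A vulnerability scanner's feed remains the authoritative check.

```python
# Minimal sketch: flag a host whose curl build falls in the range
# affected by CVE-2023-38545 (the SOCKS5 heap overflow, introduced
# in curl 7.69.0 and fixed in 8.4.0). Version bounds are from the
# public advisory; verify against your scanner's feed.
import re
import subprocess

VULN_MIN = (7, 69, 0)   # first affected release
FIXED_IN = (8, 4, 0)    # first fixed release

def curl_version() -> tuple:
    """Parse 'curl X.Y.Z ...' from `curl --version` output."""
    out = subprocess.run(["curl", "--version"],
                         capture_output=True, text=True, check=True).stdout
    match = re.match(r"curl (\d+)\.(\d+)\.(\d+)", out)
    if not match:
        raise ValueError(f"unexpected output: {out.splitlines()[0]!r}")
    return tuple(int(part) for part in match.groups())

version = curl_version()
label = ".".join(map(str, version))
if VULN_MIN <= version < FIXED_IN:
    print(f"curl {label} is in the affected range for CVE-2023-38545")
else:
    print(f"curl {label} is outside the affected range")
```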
“By default, when a notebook instance is created, users who log into the notebook instance have root access,” the report explained. “Granting root access to Amazon SageMaker notebook instances introduces unnecessary risk by providing users with administrator privileges. With root access, users can edit or delete system-critical files, including those that contribute to the AI model, install unauthorized software and modify critical environment components, increasing the risk if compromised. According to AWS, ‘In adherence to the principle of least privilege, it is a recommended security best practice to restrict root access to instance resources to avoid unintentionally over provisioning permissions.’”
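A minimal boto3 sketch of the kind of audit this finding implies is shown below: it lists SageMaker notebook instances and flags any with the RootAccess default still enabled. This is an illustration, not Tenable's tooling, and note that changing the setting via update_notebook_instance requires the instance to be in the Stopped state.

```python
# Minimal sketch (boto3): flag SageMaker notebook instances that
# still have the RootAccess default enabled.
import boto3

sagemaker = boto3.client("sagemaker")

paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for summary in page["NotebookInstances"]:
        name = summary["NotebookInstanceName"]
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        if detail.get("RootAccess") == "Enabled":
            print(f"{name}: root access ENABLED")
            # Remediation (instance must be in the 'Stopped' state first):
            # sagemaker.update_notebook_instance(
            #     NotebookInstanceName=name, RootAccess="Disabled")
```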
“Failing to follow the principle of least privilege increases the risk of unauthorized access, allowing attackers to steal AI models and expose proprietary data. Compromised credentials can also grant access to critical resources like S3 buckets, which may contain training data, pre-trained models, or sensitive information such as PII,” said Tenable. The company noted, “The consequences of such breaches are severe.”
Along with Amazon SageMaker, a fully managed machine learning service provided by AWS that lets developers and data scientists build, train, and deploy machine learning models at scale, the report also addressed Amazon Bedrock, a fully managed service that lets businesses build and scale generative AI applications using foundation models from multiple AI providers without having to manage infrastructure.
The report found that a much smaller share of Amazon Bedrock training buckets are overly permissive.
“A small but significant portion (5%) of the organizations we studied that have configured Amazon Bedrock have at least one overly permissive training bucket,” the report said. “Overly permissive storage buckets are a well-known cloud misconfiguration; in AI environments such risks are amplified if the buckets contain sensitive data used to train or fine-tune AI models. If improperly secured, the overly permissive buckets can be compromised by attackers to modify data, steal confidential information or disrupt the training process.”
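As one concrete, deliberately narrow example of what “overly permissive” can mean, the boto3 sketch below flags buckets whose ACLs grant access to the global AllUsers or AuthenticatedUsers groups. Tenable's criteria are broader than ACLs alone; bucket policies and IAM-level exposure need separate checks.

```python
# Minimal sketch (boto3): flag buckets whose ACLs grant access to
# the global AllUsers / AuthenticatedUsers groups. Bucket policies
# and IAM exposure are not covered by this check.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    public = [grant["Permission"] for grant in acl["Grants"]
              if grant["Grantee"].get("URI") in PUBLIC_GROUPS]
    if public:
        print(f"{bucket['Name']}: public ACL grants: {public}")
```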
The report also called out Amazon Bedrock training buckets that didn’t have public access blocked.
“Among the organizations that have configured Amazon Bedrock training buckets, 14.3% have at least one bucket that doesn’t have Amazon S3 Block Public Access enabled,” the report said. “The Amazon S3 Block Public Access feature, considered a best practice for securely configuring sensitive S3 buckets, is designed to prevent unauthorized access and accidental data exposure. However, we identified instances in which Amazon Bedrock training buckets lacked this protection, a configuration that increases the risk of unintentional excessive exposure. Such oversights can leave sensitive data vulnerable to tampering and leakage, a risk that is even more concerning for AI training data, as data poisoning is highlighted as a top security issue in the OWASP Top 10 threats for machine learning systems.”
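A minimal boto3 sketch of checking for, and enabling, this setting follows. The bucket name is a hypothetical placeholder, and the calls shown are the standard S3 public-access-block operations.

```python
# Minimal sketch (boto3): verify, and if needed enable, S3 Block
# Public Access on a training bucket.
import boto3
from botocore.exceptions import ClientError

BUCKET = "example-bedrock-training-bucket"  # hypothetical bucket name
s3 = boto3.client("s3")

try:
    config = s3.get_public_access_block(Bucket=BUCKET)
    settings = config["PublicAccessBlockConfiguration"]
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        settings = {}  # no Block Public Access configuration at all
    else:
        raise

if not all(settings.get(key) for key in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets")):
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    print(f"Enabled Block Public Access on {BUCKET}")
```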
Tenable Cloud Research created the new report by analyzing telemetry gathered from workloads across diverse public cloud and enterprise landscapes, scanned through Tenable products (Tenable Cloud Security, Tenable Nessus), the company said. The data were collected between December 2022 and November 2024, consisting of:
- Cloud asset and configuration information
- Real-world workloads in active production
- Data from AWS, Azure and GCP environments
“When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust,” said Liat Hayun, VP of Research and Product Management, Cloud Security, Tenable, in a March 19 news release. “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”
About the Author
David Ramel is an editor and writer at Converge 360.