AWS re:Invent 2024: Prime ‘Accountable AI’ Periods
As superior GenAI remakes the cloudscape, it is no shock that the large upcoming AWS re:Invent convention is devoting greater than a 3rd of its 2,530 periods to AI/ML, which is only one of 21 matters. That’s AI dominance.
Nevertheless, with the growing dominance of AI are growing considerations about its moral use, resulting in the rise of “accountable AI.” And AWS is definitely not ignoring these considerations, with 36 of the 860 AI/ML periods falling beneath that space of curiosity.
Listed below are the accountable AI periods we’re most excited by at AWS re:Invent 2024 going down Dec. 2-6 occasion in Las Vegas (with online-only registration accessible).
Advancing accountable AI: Managing generative AI threat
“Threat evaluation is a necessary a part of accountable AI (RAI) improvement and is an more and more widespread requirement in AI requirements and legal guidelines equivalent to ISO 42001 and the EU AI Act. This chalk discuss offers an introduction to greatest practices for RAI threat evaluation for generative AI purposes, masking controllability, veracity, equity, robustness, explainability, privateness and safety, transparency, and governance. Discover examples to estimate the severity and probability of potential occasions that might be dangerous. Study Amazon SageMaker tooling for mannequin governance, bias, explainability, and monitoring, and about transparency within the type of service playing cards as potential threat mitigation methods.”
Working towards accountable generative AI with the assistance of open supply
“Many organizations are reinventing their Kubernetes environments to effectively deploy generative AI workloads, together with distributed coaching and inference APIs for purposes like textual content era, picture era, or different use circumstances. On this chalk discuss, learn to combine Kubernetes with open supply instruments to follow accountable generative AI. Discover key issues for deploying AI fashions ethically and sustainably, leveraging the scalability and resiliency tenets of Kubernetes, together with the collaborative and community-driven improvement ideas of the open supply CNCF device set.”
Accountable AI: From concept to follow with AWS
“The fast development of generative AI brings promising innovation however raises new challenges round its secure and accountable improvement and use. Whereas challenges like bias and explainability have been widespread earlier than generative AI, massive language fashions convey new challenges like hallucination and toxicity. Be part of this session to grasp how your group can start its accountable AI journey. Get an outline of the challenges associated to generative AI, and be taught concerning the accountable AI in motion at AWS, together with the instruments AWS provides. Additionally hear Cisco share its method to accountable innovation with generative AI.”
Accountable generative AI: Analysis greatest practices and instruments
“With the newfound prevalence of purposes constructed with massive language fashions (LLMs) together with options equivalent to Retrieval Augmented Technology (RAG), brokers, and guardrails, a responsibly-driven analysis course of is important to measure efficiency and mitigate dangers. This session covers greatest practices for a accountable analysis. Study open entry libraries and AWS companies that can be utilized within the analysis course of, and dive deep on the important thing steps of designing an analysis plan together with defining a use case, assessing potential dangers, selecting metrics and launch standards, designing an analysis dataset, and deciphering outcomes for actionable threat mitigation.”
Construct accountable generative AI apps with Amazon Bedrock Guardrails
“On this workshop, dive deep into constructing accountable generative AI purposes utilizing Amazon Bedrock Guardrails. Develop a generative AI utility from scratch, take a look at its conduct, and focus on the potential dangers and challenges related to language fashions. Use guardrails to filter undesirable matters, block dangerous content material, keep away from immediate injection assaults, and deal with delicate data equivalent to PII. Lastly, learn to detect and keep away from hallucinations in mannequin responses that aren’t grounded in your knowledge. See how one can create and apply customized tailor-made guardrails instantly with FMs and fine-tuned FMs on Amazon Bedrock to implement accountable AI insurance policies inside your generative AI purposes.”
Gen AI within the office: Productiveness, ethics, and alter administration
“Navigating the transformative impression of generative AI on the trendy office, this session explores methods to maximise productiveness beneficial properties whereas addressing moral considerations and alter administration challenges. Key matters embrace moral implementation frameworks, fostering accountable AI utilization, and optimizing human-AI collaboration dynamics. The session examines efficient change administration approaches to make sure clean integration and adoption of generative AI applied sciences inside organizations. Be part of us to navigate the intersection of generative AI, productiveness, ethics, and organizational change, charting a path towards an empowered, AI-driven workforce.”
KONE safeguards AI purposes with Amazon Bedrock Guardrails
“Amazon Bedrock Guardrails allows organizations to ship constantly secure and moderated consumer experiences by way of generative AI purposes, whatever the underlying basis fashions (FM). Be part of the session to deep dive into how guardrails present further customizable safeguards on prime of the native protections of FMs, delivering industry-leading security safety. Lastly, hear from KONE’s CEO on how they use Amazon Bedrock Guardrails to offer secure and correct real-time AI assist to 30,000 technicians that execute 80,000 area buyer visits per day. Get their recommendations on adoption of accountable AI ideas that ship worth whereas attaining productiveness beneficial properties.”
Safeguard your generative AI apps from immediate injections
“Immediate injection assaults pose a threat to the integrity and security of generative AI (gen AI) purposes. Menace actors can craft prompts to govern the system, resulting in the era of dangerous, biased, or unintended outputs. On this chalk discuss, discover efficient methods to defend in opposition to immediate injection vulnerabilities. Study sturdy enter validation, safe immediate engineering ideas, and complete content material moderation frameworks. See a demo of assorted prompts and their related protection mechanisms. By adopting these greatest practices, you possibly can assist safeguard your generative AI purposes and foster accountable AI practices in your group.”
Methods to mitigate social bias when implementing gen AI workloads
“As gen AI programs turn into extra superior, there may be rising concern about perpetuating social biases. This discuss examines challenges related to gen AI workloads and techniques to mitigate bias all through their improvement course of, and discusses options equivalent to Amazon Bedrock Guardrails, Amazon SageMaker Make clear, and SageMaker Information Wrangler. Be part of to learn to design gen AI workloads which might be honest, clear, and socially accountable.”
Creating explainable AI fashions with Amazon SageMaker
“As AI programs are more and more utilized in decision-making, explainable fashions have turn into important. This dev chat explores instruments and methods for constructing these fashions utilizing Amazon SageMaker. It walks by way of a number of methods for deciphering complicated fashions, offering insights into their decision-making processes. Discover ways to guarantee mannequin transparency and equity in machine studying pipelines and the way to deploy these fashions utilizing SageMaker endpoints. This dev chat is good for knowledge scientists specializing in AI ethics and mannequin interpretability.”
Observe that the catalog would not permit direct linking to these session descriptions, however they are often accessed right here.
Concerning the Creator
David Ramel is an editor and author at Converge 360.