Each day, threat hunters navigate a vast sea of data, sifting through numerous logs from varied sources. These logs, even after being translated into alerts by various analytical tools, demand constant attention and scrutiny. Security analysts have to manually investigate thousands of alerts while frequently referencing external threat intelligence sources such as BrightCloud. Despite the availability of sophisticated analytics platforms, the sheer volume and complexity of data make efficient threat detection a daunting task.
By integrating AI into the threat-hunting process, alerts can be enriched with deeper contextual insights, reducing the manual workload. AI-driven summarization can distill vast amounts of data into concise, actionable summaries and narratives, helping analysts focus on critical threats faster. Moreover, AI can automate report generation and even suggest response strategies, streamlining incident resolution.
Reducing alert fatigue with AI-powered enrichment
Nearly every security tool on the market today can generate alerts after analyzing logs. These alerts may be rule-based or derived from machine learning models and help reduce the burden from millions of log events to a more manageable number of alerts. However, even at this reduced scale, investigating these voluminous alerts remains a time-consuming challenge for threat hunters.
Each alert requires a deep dive into the underlying raw events to extract contextual details. Analysts must also manually cross-reference multiple sources, looking up information such as process hashes or remote IP addresses in threat intelligence databases to determine whether they appear on known blacklists. After all this effort, most alerts often turn out to be false positives, leading to wasted time and analyst fatigue.

Generative AI can play a powerful role in automatically enriching security alerts with contextual intelligence, significantly easing the burden on threat hunters. For instance, when the execution of an unusual process triggers an alert, analysts typically need to investigate manually. They look up the process hash to determine whether it is linked to known malware, examine the parent and grandparent processes for anomalies, and analyze the command-line arguments used during execution.
Organizations can automate much of this investigative work with recent advances in generative AI. AI can generate enhanced alert descriptions incorporating critical details, including process lineage, command-line inputs, and real-time reputation lookups for hashes and IP addresses. This enriched information empowers analysts to make quicker, more accurate judgments about which alerts warrant deeper investigation and which are likely false positives. By minimizing manual effort and improving decision quality, AI-driven enrichment helps security teams cut through the noise and focus on genuine threats.
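As a rough illustration, enrichment of this kind might be sketched as follows. The alert fields, the in-memory reputation table, and the templated summary are hypothetical stand-ins for a real event store, a threat intelligence feed, and an LLM call:

```python
from dataclasses import dataclass

# Hypothetical alert structure; field names are illustrative assumptions,
# not the schema of any particular security product.
@dataclass
class Alert:
    process_hash: str
    command_line: str
    parent_chain: list   # e.g. ["explorer.exe", "cmd.exe", "powershell.exe"]
    remote_ip: str

def enrich_alert(alert, reputation_db):
    """Attach contextual intelligence to a raw alert before presenting it."""
    hash_rep = reputation_db.get(alert.process_hash, "unknown")
    ip_rep = reputation_db.get(alert.remote_ip, "unknown")
    return {
        "lineage": " -> ".join(alert.parent_chain),
        "command_line": alert.command_line,
        "hash_reputation": hash_rep,
        "ip_reputation": ip_rep,
        # A real pipeline would hand this context to an LLM for a narrative
        # description; a simple template stands in for that step here.
        "summary": (
            f"Process {alert.parent_chain[-1]} (hash reputation: {hash_rep}) "
            f"spawned via {' -> '.join(alert.parent_chain[:-1])}; "
            f"remote IP {alert.remote_ip} reputation: {ip_rep}."
        ),
    }

reputation = {"abc123": "known-malware", "203.0.113.7": "blacklisted"}
alert = Alert(
    process_hash="abc123",
    command_line="powershell -NoProfile -Command Get-Process",
    parent_chain=["explorer.exe", "cmd.exe", "powershell.exe"],
    remote_ip="203.0.113.7",
)
enriched = enrich_alert(alert, reputation)
print(enriched["summary"])
```

The key point is that lineage, command line, and reputation verdicts arrive pre-joined in one record, so the analyst (or a downstream LLM) never has to pivot across tools to assemble them.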
Enhancing entity-based threat analysis with AI-driven summarization
User and Entity Behavior Analytics (UEBA) tools, such as Core Threat Detection and Response, take threat detection further by aggregating alerts around related entities such as users, machines, and IP addresses. Instead of analyzing individual alerts in isolation, these tools compute a risk score for each entity based on its associated alerts, allowing threat hunters to assess security incidents holistically. This approach helps identify patterns that might otherwise go unnoticed, including connections between seemingly low-severity alerts that, when correlated, reveal a more significant security threat.
In this approach, threat hunters typically prioritize their investigations on entities based on risk scores and manually review the corresponding alerts to reconstruct an entity's activity timeline. However, this process still requires significant time and effort to stitch together multiple alerts and build a coherent story.

Streamlining with Generative AI
Generative AI can streamline this process by automatically summarizing anomalous activities for each high-risk entity, providing a concise yet comprehensive overview alongside the risk score. The workflow typically involves the following steps:
- Identifying high-risk entities and relevant time windows: Focus is placed on entities that accumulate higher risk scores based on their associated alerts in a given period.
- Ranking anomalies: Anomalies are prioritized based on their contribution to the entity's risk. This ranking considers factors such as the importance of associated entities, the weight of the anomaly model, and the nature of the suspicious activity.
- Selecting and compressing top anomalies: To ensure a holistic view, a curated set of significant anomalies is selected across various behavioral dimensions, such as access patterns, authentication patterns, or access anomalies.
- Constructing the anomaly narrative: A large language model (LLM) generates a human-readable summary that stitches these anomalies into a coherent story. This narrative contextualizes scattered alerts into a meaningful threat storyline, helping analysts quickly understand what happened.
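The selection and prompt-construction steps above could be sketched roughly as follows. The anomaly fields, scoring values, and prompt wording are illustrative assumptions, not the internals of any particular UEBA product:

```python
from collections import defaultdict

def select_top_anomalies(anomalies, per_dimension=2):
    """Rank anomalies by risk contribution while keeping diversity
    across behavioral dimensions (access, authentication, ...)."""
    by_dim = defaultdict(list)
    for a in sorted(anomalies, key=lambda a: a["risk_contribution"], reverse=True):
        if len(by_dim[a["dimension"]]) < per_dimension:
            by_dim[a["dimension"]].append(a)
    return [a for group in by_dim.values() for a in group]

def build_narrative_prompt(entity, anomalies):
    """Compress the selected anomalies into a prompt for the LLM narrative step."""
    lines = [f"- [{a['dimension']}] {a['description']}" for a in anomalies]
    return (
        f"Summarize the following anomalies for entity '{entity}' "
        "into a short, coherent threat narrative:\n" + "\n".join(lines)
    )

# Invented example data for illustration only.
anomalies = [
    {"dimension": "authentication", "risk_contribution": 0.9,
     "description": "Logon from a country never seen for this user"},
    {"dimension": "access", "risk_contribution": 0.7,
     "description": "First-time access to a sensitive file share"},
    {"dimension": "authentication", "risk_contribution": 0.4,
     "description": "Burst of failed logons outside working hours"},
]
top = select_top_anomalies(anomalies)
print(build_narrative_prompt("jdoe", top))
```

Capping the number of anomalies per dimension is one simple way to keep the prompt holistic rather than letting a single noisy detector dominate the narrative.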
By highlighting key behaviors and tying them to the broader risk picture, these AI-generated summaries enable analysts to focus their time and expertise on the entities that matter most. This approach accelerates decision-making and minimizes the risk of missing critical security threats hidden within alert noise. The narrative is further enhanced by associating potential MITRE ATT&CK techniques that map to the entity's observed activities, a topic we'll explore in more detail in an upcoming blog post.
From entity insights to organizational summaries
While entity-level summaries help threat hunters analyze individual users, machines, or IPs efficiently, the same AI-driven approach can be extended to provide a broader view of an organization's overall security posture. By aggregating risk scores, anomalous activities, and trends across multiple entities, AI can generate a high-level summary of an organization's security state at any given moment.
This organizational-level visibility enables security teams to identify larger attack patterns, persistent threats, and areas of concern that might not be evident from individual alerts. More importantly, AI can automate the generation of executive summaries and detailed security reports, offering CISOs and other stakeholders clear insight into the company's threat landscape.
Smarter AI-driven responses
Security teams often rely on past experiences and documented response strategies to handle recurring threats effectively. However, manually searching through past cases, incident reports, and response playbooks can be time-consuming and inefficient. Fine-tuned large language models (LLMs) integrated with retrieval-augmented generation (RAG) can take incident response to the next level by learning from an organization's historical security incidents.
This approach accelerates incident resolution and improves response accuracy by reducing reliance on manual research. Security teams can focus on executing the best course of action rather than spending valuable time piecing together historical data. Ultimately, AI-powered response recommendations transform threat hunting from a reactive process into a proactive and adaptive cybersecurity strategy.
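A minimal sketch of the retrieval step in such a RAG pipeline might look like this, using plain token overlap (Jaccard similarity) in place of the vector embeddings and fine-tuned LLM a production system would use; the incident records are invented examples:

```python
def jaccard(a, b):
    """Token-overlap similarity; a toy stand-in for embedding similarity."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def retrieve_similar_incidents(query, history, k=2):
    """Return the k past incidents most similar to the current one."""
    ranked = sorted(history, key=lambda inc: jaccard(query, inc["summary"]),
                    reverse=True)
    return ranked[:k]

def build_response_prompt(query, retrieved):
    """Assemble retrieved incidents into context for the LLM's suggestion."""
    context = "\n".join(
        f"- {inc['summary']} | resolution: {inc['resolution']}"
        for inc in retrieved
    )
    return (
        "Given these similar past incidents and their resolutions:\n"
        f"{context}\n"
        f"Suggest a response plan for: {query}"
    )

history = [
    {"summary": "ransomware encryption on file server",
     "resolution": "isolate host, restore from backup"},
    {"summary": "phishing email with credential harvesting link",
     "resolution": "reset credentials, block domain"},
    {"summary": "suspicious powershell execution on endpoint",
     "resolution": "quarantine endpoint, collect memory image"},
]
query = "suspicious powershell script execution detected on endpoint"
top = retrieve_similar_incidents(query, history)
print(build_response_prompt(query, top))
```

The design point is that the model's suggestion is grounded in the organization's own resolved cases rather than generic advice, which is what RAG contributes beyond a bare LLM.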
Addressing key challenges in AI-driven threat-hunting solutions
While AI significantly enhances threat-hunting workflows, its adoption comes with several challenges that organizations must address to ensure security, accuracy, and usability.
Data security
Security logs may contain highly sensitive information critical to an organization's defense strategy. Allowing these logs to leave a secure environment for AI processing poses a risk that many organizations are unwilling to take. To mitigate this, AI models should be hosted within the organization's Virtual Private Cloud (VPC), ensuring that all data remains in a managed and protected environment. This approach allows organizations to leverage AI's benefits while maintaining compliance with data protection policies.
Data representation
Security data exists in varied schemas, often containing unique terminologies and abbreviations specific to an organization. This inconsistency makes it challenging for AI models to interpret and process the data effectively. The retrieval mechanism must be designed to extract meaningful information while normalizing in-house terminologies into a format the AI model can understand. Standardization ensures accurate insights and prevents misinterpretation due to inconsistent data structures.
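A normalization layer of this kind might be sketched as follows; the abbreviation map is an invented example of in-house terminology, not a standard vocabulary:

```python
# Hypothetical map from in-house shorthand to terms an LLM can interpret.
TERM_MAP = {
    "wks": "workstation",
    "dc": "domain controller",
    "priv-esc": "privilege escalation",
}

def normalize_record(record, term_map=TERM_MAP):
    """Expand in-house abbreviations so the model sees consistent terminology;
    values not in the map pass through unchanged."""
    return {
        key: term_map.get(str(value).lower(), value)
        for key, value in record.items()
    }

raw = {"host_type": "wks", "technique": "priv-esc", "user": "jdoe"}
print(normalize_record(raw))
```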
Prompting and output consistency
Ensuring that AI-generated outputs adhere to a consistent structure is critical for usability. For instance, if a UI engine expects responses in a specific JSON format with predefined keys, deviations from this standard could break the interface. Similarly, reports and summaries must follow a uniform structure and language to maintain readability and usability. Establishing robust prompt engineering practices ensures that AI outputs remain predictable and integrate seamlessly into existing security workflows.
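One common safeguard is to validate every LLM response against the expected contract before it reaches the UI. In this sketch the required keys are an assumed schema, not from a specific product:

```python
import json

# Assumed UI contract: every summary response must carry these keys.
REQUIRED_KEYS = {"entity", "risk_score", "summary"}

def validate_llm_output(raw_text):
    """Parse an LLM response; return the payload dict if it satisfies the
    contract, or None so the caller can retry or fall back."""
    try:
        payload = json.loads(raw_text)
    except json.JSONDecodeError:
        return None
    if not isinstance(payload, dict) or not REQUIRED_KEYS <= payload.keys():
        return None
    return payload

good = '{"entity": "jdoe", "risk_score": 87, "summary": "Unusual logon pattern"}'
bad = "Sure! Here is the summary: jdoe looks risky."
print(validate_llm_output(good))
print(validate_llm_output(bad))
```

Rejecting malformed responses at this boundary (and re-prompting when needed) keeps occasional free-text replies from breaking downstream interfaces.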
Addressing these challenges is crucial to successfully integrating AI into threat-hunting operations.
Conclusion: The future of AI-augmented threat hunting
Fusing human expertise with AI-powered efficiency is not just an advantage: it is becoming a necessity in the ever-evolving threat landscape. As cyber threats grow more sophisticated, the demand for real-time analysis, rapid decision-making, and precise incident response is higher than ever. While AI has already demonstrated its value in enriching alerts and summarizing security events, its role in proactive threat defense, through mechanisms such as agentic AI, continues to expand. By autonomously analyzing patterns, detecting emerging threats, and taking predefined defensive actions, agentic AI transforms cybersecurity from a reactive model into a proactive and adaptive defense strategy.
Join OpenText Cybersecurity data scientists at RSA 2025, where my colleagues and fellow data scientists, Nakkul Khuraana and Hari Manassery Koduvely, will discuss 'How to Use LLMs to Augment Threat Alerts with the MITRE Framework.'