It’s increasingly difficult to see through the hype around AI in cybersecurity amid a sea of shiny vendor demos that fail to deliver in production.
We recently aired a discussion between Gourav Nagar (Head of Information Security and IT at Upwind) and Jon Hencinski (Head of Security Operations at Prophet Security, ex-Expel) that provides a practitioner's perspective on building comprehensive AI-driven cybersecurity programs. Key topics they discussed include:
• Getting organizational buy-in (where leadership and practitioners are aligned)
• Improving alert detection, triage, and investigations
• Maturing your cybersecurity program (alert management is no longer a constraint)
Looking for some of the AI SOC best practices discussed?
1. Cover all the alerts you care about: Feed in informational, low, and medium alerts so these signals are investigated while they are still early indicators, not after they have aged into incidents.
2. Require deterministic consistency: Your Tier 1 analyst at 3:20am may not function like your Tier 2 at 12:00pm, but your AI SOC platform should enforce the same deterministic consistency and rigor in its reasoning and conclusions.
3. Unshackle your detection engineers: Stop suppressing rules because your team can’t handle the volume.
4. Keep humans in the loop for remediation: Autonomous investigation and autonomous remediation are distinct, and the latter requires trust to be built among the practitioners on your team.
5. Verify the AI with a parallel run: It’s critical to run the AI alongside your SOC for a couple of weeks (or more) to build trust in its accuracy in your environment and your team’s workflow.
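The parallel-run idea in step 5 can be sketched as a simple scorecard: score the AI’s verdicts against your human analysts’ verdicts on the same alerts before cutting over. This is a minimal illustration, the alert IDs and verdict labels are hypothetical, not tied to any particular AI SOC platform.

```python
# Hypothetical "parallel run" scorecard: compare AI verdicts against human
# analyst verdicts on the same alerts to measure agreement before trusting
# the AI with triage on its own.

def parallel_run_report(human_verdicts, ai_verdicts):
    """Return (agreement rate, alerts where AI and humans disagree)."""
    shared = set(human_verdicts) & set(ai_verdicts)
    disagreements = {
        alert_id: (human_verdicts[alert_id], ai_verdicts[alert_id])
        for alert_id in shared
        if human_verdicts[alert_id] != ai_verdicts[alert_id]
    }
    agreement = 1 - len(disagreements) / len(shared) if shared else 0.0
    return agreement, disagreements

# Illustrative data: three alerts triaged by both the team and the AI.
human = {"A-101": "benign", "A-102": "malicious", "A-103": "benign"}
ai    = {"A-101": "benign", "A-102": "malicious", "A-103": "suspicious"}

rate, diffs = parallel_run_report(human, ai)
print(f"agreement: {rate:.0%}")  # agreement: 67%
print(diffs)                     # {'A-103': ('benign', 'suspicious')}
```

In practice you would review each disagreement rather than chase a single agreement number: the disagreements are where trust is built (or lost).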
A take on a new threat from an old adversary
Welcome to another _secpro!
Cybersecurity in 2026 is being shaped by a convergence of accelerating attack speeds, expanding digital ecosystems, and increasingly autonomous adversary capabilities. Recent threat intelligence points to a shift from manually orchestrated intrusions toward highly adaptive operations, including the emergence of agentic AI systems capable of planning and executing multi-stage attacks with minimal human oversight. These developments are enabling adversaries to scale campaigns and adjust tactics in real time, while AI-assisted reconnaissance and credential abuse continue to compress intrusion timelines. In some environments, attackers are now moving laterally within minutes of initial access, leaving little margin for delayed detection or response.
At the same time, threat actors are increasingly exploiting trusted access paths and identity-based weaknesses rather than relying solely on traditional malware. Credential compromise, third-party exposure, and cross-domain movement remain dominant techniques, reflecting the growing dependence of organizations on interconnected services and supply chains. Ransomware groups continue to prioritize sectors where operational disruption increases the likelihood of payment, while intelligence-driven campaigns such as recent MuddyWater activity demonstrate sustained investment in targeted espionage operations.
Despite the growing sophistication of adversaries, many successful intrusions still exploit familiar weaknesses, including poor credential hygiene and unpatched systems. The current threat landscape underscores a clear reality: as attack capabilities evolve, resilience depends not only on advanced defenses but also on disciplined execution of fundamental security controls.
If you want more, you know what you need to do: sign up for premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!
Cheers!
Austin Miller
Editor-in-Chief
The MCP Maturity Model was created by Stacklok, who have built an MCP platform and are working with enterprises to put MCP into production. Their Applied AI Engineers work hands-on with leaders to curate trusted registries, deploy advanced security measures, and light up AI agents. You can learn more about the company at stacklok.com, or just drop them an email at enterprise@stacklok.com to start a conversation.
Operation Olalampo is a cyber-espionage campaign attributed to the Iranian state-aligned Advanced Persistent Threat (APT) group MuddyWater. Identified by Group-IB threat intelligence researchers, the campaign represents a continuation of MuddyWater’s long-standing strategy of targeting organizations across geopolitically significant regions, particularly the Middle East and North Africa (MENA). First observed on 26 January 2026, Operation Olalampo demonstrates the group’s increasing technical sophistication and operational maturity, particularly through the deployment of custom malware families, the use of novel command-and-control (C2) channels, and evidence of artificial intelligence-assisted development practices.
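Among the novel C2 channels, Group-IB’s reporting ties the campaign to Telegram-based command infrastructure, which in practice means beaconing to Telegram’s Bot API at api.telegram.org. A hedged hunting sketch over proxy logs is below; the domain is real, but the log record shape and field names are hypothetical, not tied to any specific proxy product.

```python
# Hedged hunting sketch: flag outbound proxy-log entries destined for
# Telegram's Bot API host, a channel usable for command-and-control.
# The log entries below are illustrative; adapt the field names to your
# own proxy's schema.

from urllib.parse import urlparse

TELEGRAM_C2_HOSTS = {"api.telegram.org"}

def flag_possible_telegram_c2(proxy_logs):
    """Return log entries whose destination host matches the Bot API."""
    hits = []
    for entry in proxy_logs:
        host = urlparse(entry["url"]).hostname or ""
        if host.lower() in TELEGRAM_C2_HOSTS:
            hits.append(entry)
    return hits

logs = [
    {"src": "10.0.4.7", "url": "https://api.telegram.org/bot<token>/getUpdates"},
    {"src": "10.0.4.9", "url": "https://example.com/index.html"},
]
print(flag_possible_telegram_c2(logs))  # only the api.telegram.org entry
```

A bare domain match will also catch legitimate Telegram bot usage, so treat hits as a starting point for triage (source host, process, volume, and timing of the beaconing), not as a verdict.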
Agentic AI: The 2026 Threat Multiplier Reshaping Cyberattacks (Barracuda): Barracuda researchers describe the emergence of agentic AI systems capable of autonomously planning and executing multi-stage cyberattacks. Unlike generative AI tools, these systems can coordinate actions, adapt to defenses, and persist without human oversight, significantly increasing attack speed and scalability.
CrowdStrike 2026 Global Threat Report Findings (Adam Meyers): CrowdStrike reported adversaries increasingly using trusted access paths and cross-domain movement to evade detection. AI-assisted intrusion techniques and malware-free attacks are becoming more common, with rapid lateral movement remaining a key threat.
GRIT 2026 Ransomware & Cyber Threat Report Industry Insights (GuidePoint Research): Analysis shows ransomware operators continue targeting sectors where operational disruption increases the likelihood of payment. Credential-based access and third-party compromise remain dominant initial access vectors.
Cyber Threat Landscape 2026 Update (Panorays Research): Recent analysis highlights increased reliance on third-party ecosystems and supply chains as attack surfaces. Organizations face growing risk from identity compromise and external partner exposure.
CrowdStrike Warns Attackers Move in Under 30 Minutes (TechRadar): CrowdStrike data shows attackers now move laterally in networks in an average of 29 minutes, with some compromises occurring in under a minute. AI-enabled reconnaissance and credential abuse are accelerating intrusion timelines.
IBM X-Force Threat Intelligence Index 2026 (IBM): IBM’s latest threat index reports increasing use of AI-assisted attacks alongside persistent exploitation of basic security weaknesses such as unpatched systems and poor credential management.
Operation Olalampo – MuddyWater Campaign (Group-IB): Researchers documented a new MuddyWater campaign using updated malware variants and Telegram-based command infrastructure. The operation targeted regional organizations with espionage-focused tooling.
Cybersecurity Predictions for 2026 (Frankly Speaking): This article outlines major cybersecurity predictions for 2026, including shrinking security budgets, consolidation of tools, and the increasing impact of AI automation. The author argues that specialized “tool babysitters” will decline as AI simplifies security operations and organizations move toward generalist security practitioners. The post also highlights how AI spending may divert resources away from traditional cybersecurity investments.
SACR Cybersecurity 2026 Outlook (SACR team): This industry-focused outlook reviews major cybersecurity developments and forecasts trends across security platforms, identity security, SecOps, mergers and acquisitions, and AI-driven defense technologies. The article analyzes how enterprise security architectures are evolving and where investment and innovation are concentrating in 2026.
Cybersecurity Trends for 2026 (Trust in Digital Life): This expert-panel article compiles practitioner predictions for cybersecurity in 2026, covering topics such as AI-driven attacks, evolving threat actors, regulatory pressures, and new enterprise security challenges. It emphasizes the increasing complexity of defending digital infrastructure as organizations expand cloud and AI deployments.
The 6 Security Shifts AI Teams Can’t Ignore in 2026 (Gradient Flow): This article examines how AI-native companies must rethink security strategies. It highlights the move from traditional static security models to systems designed for autonomous AI agents interacting directly with enterprise environments. Key issues include identity security, data integrity, governance risks, and expanded attack surfaces.
Copyright (C) 2025 Packt Publishing. All rights reserved.
Our mailing address is: Packt Publishing, Grosvenor House, 11 St Paul's Square, Birmingham, West Midlands, B3 1RB, United Kingdom
Want to change how you receive these emails? You can update your preferences or unsubscribe.