DHOM SitRep #001: OpenClaw Exposes 135K Agents, Notepad++ Supply Chain Owned for 6 Months, and Microsoft Patches 6 Zero-Days
AI agents are the new shadow IT. Plus, a state-sponsored supply chain attack hid in plain sight, and Patch Tuesday brings six actively exploited zero-days.
Don’t Hack On Me -- Situation Report
February 11, 2026 // Weekly Security Operations Brief

TL;DR
Situation: 135K OpenClaw AI agents exposed with critical RCE vulns — AI agents are the new shadow IT
Enemy Activity: Notepad++ supply chain owned by Lotus Blossom for 6 months; Microsoft patches 6 zero-days; Google disrupts massive proxy network; Signal phishing warning
Friendly Forces: SANS Protocol SIFT brings MCP to forensics; Wiz maps 70+ SDLC attack techniques; EDR silencing detection rules; security scorecarding guide
Logistics: Trail of Bits releases sandboxed Claude Code container; Cisco drops AI skill scanner
AI Operations: Google reports on threat actor AI misuse; Microsoft’s top 10 Copilot agent risks; the agent identity crisis
Personnel: SANS ICS Command Briefing 2026
The Debrief: Marcus’s take on the AI agent era
Situation
This week, the security industry woke up to a problem it should have seen coming: AI agents are everywhere, and nobody’s securing them.
Over 135,000 OpenClaw AI agents were found exposed to the internet with critical RCE vulnerabilities. Researchers at Bitdefender and SecurityScorecard flagged the exposure. Some 386 malicious Skills were discovered on ClawHub targeting crypto wallets, LinkedIn, and Reddit -- racking up over 7,000 downloads before anyone noticed. Kaspersky published a deep-dive showing that default OpenClaw settings ship with no authentication on admin interfaces, and misconfigured reverse proxies expose everything. A fake ClawdBot VS Code extension was caught installing ScreenConnect RAT. And Moltbook, the AI-only social network, had a Supabase misconfiguration leaking every agent's secret API keys.
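Misconfigurations like these are cheap to check for from the outside. Here's a minimal sketch of the triage logic an exposure scanner might apply to a probe of an admin endpoint -- the status-code heuristics are illustrative assumptions, not how Bitdefender or SecurityScorecard actually fingerprint OpenClaw:

```python
def looks_unauthenticated(status_code: int, headers: dict) -> bool:
    """Heuristic: an admin endpoint that serves content to an anonymous
    client, with no auth challenge and no login redirect, is likely exposed."""
    if status_code in (401, 403):
        return False  # server demanded credentials -- good sign
    location = headers.get("Location", "").lower()
    if status_code in (301, 302, 307) and "login" in location:
        return False  # anonymous request was bounced to a login page
    return status_code == 200  # content served with no auth at all
```

Pair this with a simple HTTP client against your own deployments; anything that trips it deserves a reverse-proxy auth rule before an attacker finds it first.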
This isn’t a single vulnerability. It’s a systemic failure. AI agents behave like users but execute like software. They have persistent memory, tool autonomy, and the ability to chain actions across systems -- and our security models were never built for that. As 1Password put it this week: agent identities need to be treated like new hires, with time-bound, revocable access. If your org is deploying AI agents and you haven’t thought about identity, permissions, and monitoring for them, this is your wake-up call.
Enemy Activity
Notepad++ Supply Chain Compromised by Chinese State Hackers (Lotus Blossom)
The Notepad++ project disclosed that its sole update server was compromised by Lotus Blossom, a China-linked APT, between June and December 2025. Attackers selectively pushed malicious updates to targets in Vietnam, El Salvador, Australia, and the Philippines. Kaspersky found they rotated C2 servers, downloaders, and payloads monthly -- using Cobalt Strike, Metasploit, and a novel “Chrysalis” backdoor. IT admins running Notepad++ with elevated privileges were prime targets. Six months of access before detection. That’s the real story here.
Microsoft February 2026 Patch Tuesday: 6 Zero-Days, 58 Flaws
Microsoft patched 58 vulnerabilities including six actively exploited zero-days and five Critical-rated flaws. The standout: CVE-2026-21510, a Windows Shell Security Feature Bypass that lets attackers bypass SmartScreen and Shell warning dialogs through crafted shortcut files. Microsoft also began rolling out new Secure Boot certificates ahead of the June 2026 legacy cert expiration. Patch now.
Google GTIG Disrupts IPIDEA, One of the World’s Largest Residential Proxy Networks
Google’s Threat Intelligence Group took down IPIDEA, which controlled 13 proxy/VPN brands and used malicious SDKs distributed through trojanized VPNs and uncertified Android TV boxes. Over 550 threat groups from China, DPRK, Iran, and Russia were observed using IPIDEA exit nodes in a single week. Google Play Protect removed 600+ Android apps. This is what large-scale infrastructure takedowns look like.
German BfV and BSI Warn of State-Sponsored Signal Phishing
Germany’s domestic intelligence agency (BfV) and federal cybersecurity agency (BSI) issued a joint advisory warning of state-sponsored phishing attacks targeting Signal users. If your org uses Signal for sensitive comms, share this advisory with your team.
Friendly Forces
SANS Protocol SIFT: First Autonomous Framework Integrating MCP
SANS released Protocol SIFT, an autonomous forensics framework built on the Model Context Protocol (MCP). It orchestrates 200+ utilities in the SIFT Workstation, letting analysts match the velocity of AI-powered threats with deterministic, court-admissible evidence. This is the kind of tooling that changes how DFIR teams operate.
Wiz SITF: SDLC Infrastructure Threat Framework
New open-source framework from Wiz mapping 70+ attack techniques across five SDLC pillars (Endpoint/IDE, VCS, CI/CD, Registry, Production). Includes an Attack Flow Visualizer for drag-and-drop threat modeling that runs entirely client-side. If you’re building or securing CI/CD pipelines, this is worth a look.
EDR Silencing Techniques and Detection
Purple Team published an overview of six EDR silencing methods -- WFP abuse, hosts file modification, NRPT manipulation, IPSec filters, routing table tampering, and IPMute -- along with a SIGMA detection rule for WFP-blocked outbound connections. If you run an EDR, you should know how attackers try to blind it.
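Of the six methods, hosts-file modification is the easiest to hunt for yourself. The sketch below flags hosts entries that pin EDR telemetry domains to a sinkhole address -- the vendor domains are placeholders you'd swap for your EDR's real endpoints, and this is a toy illustration, not the published SIGMA rule:

```python
SINKHOLE_IPS = {"0.0.0.0", "127.0.0.1", "::1"}
# Illustrative placeholders -- substitute your EDR vendor's real telemetry hosts.
EDR_DOMAINS = {"telemetry.edr-vendor.example", "cloud.edr-vendor.example"}

def silenced_entries(hosts_text: str) -> list[str]:
    """Return hosts-file lines that redirect EDR telemetry to a sinkhole."""
    hits = []
    for raw in hosts_text.splitlines():
        line = raw.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        ip, *names = line.split()
        if ip in SINKHOLE_IPS and any(n.lower() in EDR_DOMAINS for n in names):
            hits.append(line)
    return hits
```

Running this against endpoint hosts files on a schedule gives you a cheap tripwire for one silencing technique; the WFP and NRPT variants need telemetry from the OS itself.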
Security Scorecarding Programs That Work
Rami McCarthy published an overview of scorecarding in security programs with real-world examples from Chime, Netflix, GitHub, and Atlassian. Practical guidance for teams trying to measure security posture without drowning in vanity metrics.
Logistics
Trail of Bits: Claude Code DevContainer for Security Audits
Trail of Bits released a sandboxed devcontainer for running Claude Code in bypass mode safely during security audits. They also dropped Dropkit, a CLI for managing DigitalOcean droplets with automated setup and lifecycle management. Security-conscious AI tooling from a team that understands the risks.
Cisco Releases Skill Scanner for AI Agent Security
Cisco published Skill Scanner, an open-source tool for analyzing Claude and OpenAI skills for prompt injection, data exfiltration, and malicious code. As AI agent ecosystems grow, tools like this become essential for supply chain security.
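To make the idea concrete, here's a toy pattern-matching pass over a skill's text -- emphatically not Cisco's implementation, which goes far deeper, and the patterns are illustrative assumptions:

```python
import re

# Toy heuristics only; a real scanner combines static analysis, LLM review, etc.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
]
EXFIL_PATTERNS = [
    r"curl\s+-d",          # shelling data out to an external host
    r"requests\.post\(",   # HTTP POST of collected data
]

def scan_skill(text: str) -> dict[str, list[str]]:
    """Report which illustrative red-flag patterns a skill's text trips."""
    findings = {"prompt_injection": [], "exfiltration": []}
    for pat in INJECTION_PATTERNS:
        if re.search(pat, text, re.IGNORECASE):
            findings["prompt_injection"].append(pat)
    for pat in EXFIL_PATTERNS:
        if re.search(pat, text, re.IGNORECASE):
            findings["exfiltration"].append(pat)
    return findings
```

Even a crude pass like this would have flagged some of the ClawHub Skills; the point is that skill marketplaces need the same pre-publication scanning that app stores learned to do the hard way.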
AI Operations
Google GTIG: How Threat Actors Are Misusing AI
Google’s Threat Intelligence Group published a new report on how threat actors use AI for gathering information, creating realistic phishing, and developing malware. The report also flagged frequent model extraction attacks -- corporate espionage targeting private AI models. Notably, APT actors aren’t yet directly attacking frontier models. They’re using them as tools, just like everyone else.
Microsoft: Top 10 Security Risks for Copilot Studio Agents
Microsoft published a guide on the top 10 security risks for Copilot Studio agents and how to detect and prevent them. Organizations are rapidly deploying these agents, and threat actors are equally fast at exploiting misconfigured AI workflows. If your org is building Copilot agents, this is required reading.
The Identity Problem for AI Agents
Multiple sources converged on the same theme this week: legacy IAM is static, but AI agents are non-deterministic. Daniel Miessler published security hardening recommendations for OpenClaw. 1Password argued that agent identities need the same rigor as human identities -- time-bound access, revocable credentials, full audit trails. The consensus is clear: agents should not inherit human permissions. They need their own identity layer.
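What "time-bound, revocable, audited" looks like in practice can be sketched in a few lines. The class below is a minimal illustration of the pattern, not any vendor's API -- names, scopes, and the 15-minute TTL are all assumptions:

```python
import secrets
import time

class AgentCredential:
    """Minimal sketch of a time-bound, revocable, explicitly scoped agent identity."""

    def __init__(self, agent_id: str, scopes: set[str], ttl_seconds: int = 900):
        self.agent_id = agent_id
        self.scopes = frozenset(scopes)          # agents get their own scopes,
        self.token = secrets.token_urlsafe(32)   # never inherited human permissions
        self.expires_at = time.time() + ttl_seconds
        self.revoked = False

    def allows(self, scope: str) -> bool:
        """Valid only while unexpired, unrevoked, and explicitly granted."""
        return (not self.revoked
                and time.time() < self.expires_at
                and scope in self.scopes)

    def revoke(self) -> None:
        """Kill switch: one call invalidates the credential everywhere."""
        self.revoked = True
```

The design choice that matters is the default-deny scope check: an agent that was never granted `invoices:write` can't drift into it, and a misbehaving agent dies with one `revoke()` call instead of a password-rotation fire drill.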
Personnel
SANS ICS Command Briefing 2026
SANS announced the ICS Command Briefing 2026 and a virtual roundtable on Agile Incident Response spanning SOC, cloud, OT, and executive teams. If you’re in ICS/OT security or leading cross-functional IR, these are worth putting on the calendar.
The Debrief
Issue #001 lands in a week that makes one thing clear: the AI agent era didn’t announce itself. It just showed up -- with 135,000 exposed instances, malicious Skills on agent marketplaces, and security models that haven’t caught up.
We’ve been here before. Shadow IT. Cloud sprawl. Container explosion. Every time a new paradigm arrives, security teams are the last to know and the first expected to secure it. The difference this time is velocity. AI agents don’t wait for change management. They chain tools, make decisions, and act autonomously -- which is exactly what makes them useful and exactly what makes them dangerous.
The organizations that get ahead of this won’t be the ones that ban AI agents. They’ll be the ones that treat agent identity, agent permissions, and agent monitoring with the same rigor they apply to human users. Start there.
Stay alert. Don’t let them hack on you.
Subscribe to Don’t Hack On Me | donthackonme.com
This post was researched, drafted, and edited with AI assistance. The analysis and perspective are Marcus’s. See something wrong? Leave a comment.



What's funny is that it feels like we woke up to this today -- but it's been going on, largely undetected, for months. I've written before about the vulnerabilities of AI agents and the prospect of our first holidays with AI agents and swarms operating en masse.
It's good to see you here. This is what my feed needed. 💚