Issue #11 - AppSec Weekly 🛡️

Your go-to source for the latest in application security trends, tools, and insights!


📰 TLDR AppSec Weekly 📰

This week was wild 🔥. From hacking Google’s AI Gemini and leaking internal source code to a zero-click RCE in Kubernetes Ingress NGINX affecting Fortune 500s, the stakes have never been higher. GitHub’s CodeQL almost became a global backdoor, and Next.js had an auth bypass that broke security assumptions in middleware. Meanwhile, Rust proves itself a malware dev’s dream, Mandiant drops a jaw-dropping Browser-in-the-Middle attack, and AI security takes center stage—from OAuth identity models to the ultimate jailbreak cookbook. Buckle up—AppSec is heating up 🔥.

🌶️ 🌶️ This Week in AppSec World 🌶️ 🌶️

In a jaw-dropping LLM hack, bug bounty legends Lupin and Rhynorater exploited Gemini’s Python sandbox, exfiltrated a 579MB ELF binary chunk by chunk, and extracted internal google3 source code and sensitive .proto files using binwalk and strings — all inside Google’s own AI playground. Their weapon? A recursive Python lister, shell-free exfil, and late-night fuzzing via Caido.
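The "recursive Python lister" at the heart of the enumeration phase can be sketched roughly like this (an illustrative reconstruction under assumptions, not the researchers' actual code):

```python
import os

def list_tree(root: str, max_depth: int = 4):
    """Recursively enumerate files under `root`, recording path and size.

    A sketch of the kind of filesystem lister used to map the Gemini
    sandbox; depth limit and size capture are assumptions here.
    """
    entries = []
    base_depth = root.rstrip(os.sep).count(os.sep)
    for dirpath, dirnames, filenames in os.walk(root):
        if dirpath.count(os.sep) - base_depth >= max_depth:
            dirnames[:] = []  # stop descending past max_depth
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                size = -1  # record unreadable entries too
            entries.append((path, size))
    return entries
```

In the write-up, output like this was exfiltrated chunk by chunk without a shell; here it simply returns a list.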

The result? A Most Valuable Hacker trophy at Google’s Vegas bugSWAT — and a reminder that GenAI is a high-stakes, high-reward attack surface with plenty of surprises under the hood.

A newly discovered vulnerability, CVE-2025-29927, allows attackers to skip authentication in Next.js apps by exploiting flawed handling of an internal middleware header. The attack, which requires only a crafted x-middleware-subrequest header value, can grant unauthorized access and cause denial of service through poisoned CDN caches. JFrog’s research team demonstrates how the vulnerability works and offers mitigation guidance until patches (v14.2.25, v15.2.3) are deployed. For organizations relying on middleware for security, this flaw could be a critical blind spot.
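The flawed logic can be simulated in a few lines (an illustrative sketch, not Next.js source): the framework skips middleware when a request claims to be an internal middleware subrequest, but that claim lives in a client-controllable header.

```python
# Simulation of the vulnerable check behind CVE-2025-29927.
# MIDDLEWARE_NAME mirrors the default middleware filename (assumption).
MIDDLEWARE_NAME = "middleware"

def runs_middleware(headers: dict) -> bool:
    """Return True if auth middleware would run for this request."""
    subrequest = headers.get("x-middleware-subrequest", "")
    # Vulnerable logic: trusting an attacker-settable header to mean
    # "this request already passed through middleware, skip it".
    if MIDDLEWARE_NAME in subrequest.split(":"):
        return False
    return True
```

A normal request runs the auth middleware; adding `x-middleware-subrequest: middleware` makes the same request skip it entirely.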

Wiz Research unveiled CVE-2025-1974 and related critical vulnerabilities in Kubernetes’ Ingress NGINX Controller, dubbed IngressNightmare. By exploiting misconfigured admission controllers and abusing unsanitized annotations, attackers can inject malicious NGINX configurations, upload payloads, and execute code remotely—leading to full cluster compromise. With over 6,500 clusters publicly exposed, including those of Fortune 500 companies, defenders are urged to patch to v1.12.1 or isolate the vulnerable component immediately.
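The injection primitive is annotation values flowing into generated NGINX config. A defense-in-depth check (a sketch, not the upstream patch) is to reject annotation values containing config metacharacters that could break out of the enclosing directive:

```python
import re

# Characters that could terminate a directive or open a new block in
# generated NGINX config; the exact character set is an assumption.
_UNSAFE = re.compile(r"[{};#\n\r]")

def safe_annotation_value(value: str) -> bool:
    """Reject ingress annotation values that could inject NGINX config."""
    return not _UNSAFE.search(value)
```

The real fix is patching to v1.12.1 and restricting access to the admission webhook; input filtering like this only narrows the blast radius.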

In a high-stakes discovery, Praetorian researchers revealed CVE-2025-24362, a vulnerability that could’ve weaponized GitHub’s CodeQL into a massive supply chain backdoor. A debug artifact briefly leaked a GITHUB_TOKEN with write privileges—just enough for attackers to retag CodeQL’s v3 release, impacting every repo using default configurations. The proof-of-concept raced against a 2-second expiry and succeeded. GitHub responded within hours, but the incident highlights ongoing risks in CI/CD workflows and the need for artifact hygiene.
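Artifact hygiene of the kind this incident calls for can start very simply: scan build artifacts for token formats before upload. A sketch (not Praetorian's or GitHub's tooling) using the documented GitHub token prefixes:

```python
import re

# ghp_ = personal access token, ghs_ = Actions/installation token;
# the 36+ character suffix length is the commonly observed format.
TOKEN_RE = re.compile(r"\bgh[ps]_[A-Za-z0-9]{36,}\b")

def find_tokens(text: str):
    """Return GitHub-style tokens found in artifact text, if any."""
    return TOKEN_RE.findall(text)
```

Running a check like this in CI before `upload-artifact` would have flagged the leaked GITHUB_TOKEN even with its short lifetime.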

Mandiant has disclosed a potent new technique dubbed Browser-in-the-Middle (BitM), where attackers proxy entire browser sessions to steal tokens post-MFA. Their internal tool Delusion enables mass-scale phishing with session recording, container orchestration, and token-free Firefox session theft. The fix? FIDO2 keys and client certs, or risk giving adversaries full post-login access within seconds.

Rust is gaining traction in malware development for its evasive properties and the difficulty of reverse engineering its binaries. This post walks through building a Rust-based shellcode dropper, leveraging remote mapping injection and staging Sliver over HTTPS. Ghidra struggles to decompile it cleanly — and that’s part of the point.

🤖 This Week in AI Security 🤖

GitHub’s latest Copilot feature uses LLMs to detect generic passwords in code, outperforming traditional regex-based methods. After months of tuning prompt strategies and optimizing detection precision, the system cut false positives by up to 94% in public previews. A custom workload-aware queuing system ensures scalability, making AI-powered secret scanning ready for enterprise scale. GitHub’s move marks a major step forward in intelligent, context-aware security tooling.
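Why regex loses to an LLM here is easy to demonstrate (an illustrative pattern, not GitHub's actual detectors): a regex cannot tell a real credential from a placeholder or test fixture, so every match is a potential false positive.

```python
import re

# Naive secret detector: flags any quoted value assigned to "password".
PASSWORD_RE = re.compile(r"""password\s*=\s*["']([^"']+)["']""", re.I)

def regex_findings(source: str):
    """Return every quoted password-like value, real or not."""
    return PASSWORD_RE.findall(source)
```

Both the real secret and the obvious placeholder below are flagged identically, which is exactly the noise a context-aware LLM pass is meant to cut.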

As AI agents become more embedded in user workflows, the question of identity and access control looms large. Maya Kaczorowski makes a compelling case: we don’t need new tech, just better OAuth adoption. From rate-limiting agents separately to building audit-friendly scopes, the path forward is paved with existing tools—if only teams would fix their coarse-grained permission models first.

Cisco breaks down how MCP—the emerging standard for connecting AI agents to tools and data—can become a security liability without strict controls. From unmonitored data access to lack of human-in-the-loop approvals, Omar Santos outlines real-world risks and best practices for securing AI-integrated systems. A must-read for anyone building agentic apps or RAG pipelines.
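One of the recommended controls, human-in-the-loop approval for sensitive tool calls, can be sketched as a thin gate in front of the dispatcher (function and tool names here are assumptions, not part of the MCP spec):

```python
# Tools that must never run without explicit human sign-off (example set).
SENSITIVE_TOOLS = {"delete_record", "send_email", "run_query"}

def call_tool(name: str, args: dict, tools: dict, approve=lambda n, a: False):
    """Dispatch an agent tool call, gating sensitive tools on approval.

    `approve` stands in for a real approval UI; it defaults to denying.
    """
    if name in SENSITIVE_TOOLS and not approve(name, args):
        return {"error": f"tool '{name}' requires human approval"}
    return {"result": tools[name](**args)}
```

The same choke point is also a natural place to log every call, addressing the unmonitored-data-access risk the article raises.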

General Analysis just dropped the most comprehensive breakdown of LLM jailbreaks to date—detailing both manual and automated techniques like TAP, GCG, AutoDAN-Turbo, and Crescendo. With benchmarks, threat categorizations, and code for every method, this is your one-stop resource for understanding and defending against adversarial prompting in modern LLMs.

And that’s a wrap for this week! If you enjoyed this curated issue, feel free to forward or share it with your appsec folks, team, or hacker club. For feedback, hugs, bugs, or chats, drop an email to [email protected]