Issue #5 - AppSec Weekly 🛡️
Your go-to source for the latest in application security trends, tools, and insights!
📰 TLDR AppSec Weekly 📰
This week’s security deep dive uncovers stealthy YouTube email leaks, firmware exploits on the Steam Deck, and AI-driven security breakthroughs. A $10K-bounty bug exposed private email addresses, while researchers exploited AMD UEFI vulnerabilities to gain SMM code execution, highlighting firmware risks. Meanwhile, Meta and Cursor AI are reshaping software security with LLM-powered mutation testing and self-improving code rules, marking a shift toward autonomous, AI-driven DevSecOps.
🌶️ This Week in AppSec World 🌶️
Paul Butler demonstrates how Unicode variation selectors can be abused to hide arbitrary data within text, including emojis. By encoding bytes as invisible variation selectors, data can be smuggled past human reviewers and some content filters. This technique could enable covert data transmission, watermarking, or obfuscation. While mostly academic, it raises security concerns for data leakage and detection evasion.
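To make the trick concrete, here is a minimal sketch of the idea, assuming the commonly described byte-to-selector mapping (U+FE00–U+FE0F for bytes 0–15, U+E0100–U+E01EF for the rest); Butler's own implementation may differ in detail:

```python
# Hide bytes in invisible Unicode variation selectors appended to a base
# character. One selector codepoint encodes one payload byte.

def encode(base: str, payload: bytes) -> str:
    """Append one invisible variation selector per payload byte."""
    out = base
    for b in payload:
        out += chr(0xFE00 + b) if b < 16 else chr(0xE0100 + (b - 16))
    return out

def decode(text: str) -> bytes:
    """Recover hidden bytes by scanning for variation selectors."""
    result = bytearray()
    for ch in text:
        cp = ord(ch)
        if 0xFE00 <= cp <= 0xFE0F:
            result.append(cp - 0xFE00)
        elif 0xE0100 <= cp <= 0xE01EF:
            result.append(cp - 0xE0100 + 16)
    return bytes(result)

stego = encode("😀", b"secret")
print(len(stego))      # 7 codepoints, but renders as a single emoji
print(decode(stego))   # b'secret'
```

The payload survives copy-paste through many text pipelines, which is exactly why it worries reviewers and content filters.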
NVIDIA’s security team questioned traditional test-based security verification, concluding that provability is superior to testing. In 2018, they migrated two security-sensitive applications from C to SPARK, a formally verified language, and saw major security and verification efficiency gains. The success led to widespread adoption, with over 50 trained developers and SPARK components shipping in NVIDIA products today. This shift strengthens security audits and provides mathematically provable software correctness.
A researcher uncovered a critical flaw that allowed leaking any YouTube user’s email address by chaining two overlooked Google vulnerabilities. By exploiting YouTube’s live chat API to extract obfuscated Gaia IDs and using Pixel Recorder’s sharing feature, attackers could resolve these IDs into email addresses without alerting the target. Google awarded an approximately $10,000 bug bounty, acknowledging the high impact and abuse potential.
Quarkslab researchers discovered two UEFI vulnerabilities affecting AMD-based devices, including the Steam Deck, allowing SMRAM leaks and arbitrary code execution in SMM. The flaws, CVE-2024-21925 and CVE-2024-0179, stem from improper validation in system management mode (SMM) handlers, enabling attackers with physical access to achieve low-level firmware persistence. This research highlights the broader security risks of UEFI vulnerabilities in modern computing devices.
Researchers discovered truffelvscode, a malicious typosquatted VS Code extension that delivers multi-stage malware. The attack chain begins with an obfuscated JavaScript file that downloads a malicious batch script, which then retrieves and executes a DLL payload. The final stage installs a preconfigured ScreenConnect client, granting attackers remote access to compromised machines. This highlights the growing threat of supply chain attacks targeting developers via public package registries. Caution and security scanning are critical to mitigating such risks.
🤖 This Week in AI Security / Engineering 🤖
Meta unveiled Automated Compliance Hardening (ACH), an LLM-driven mutation testing tool that automates fault generation and test creation. Unlike traditional coverage-based testing, ACH targets specific software faults, enhancing privacy, security, and reliability. By leveraging mutation-guided test generation, engineers can define concerns in plain text, allowing ACH to generate realistic faults and corresponding tests automatically. This approach scales automated testing efficiently, reducing regressions and strengthening compliance across Meta’s platforms.
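ACH itself is internal to Meta, but the underlying mutation-testing idea is simple: inject a plausible fault into working code, then check whether your tests "kill" the mutant. A toy sketch, with a hand-written mutation standing in for the LLM-generated faults ACH would propose:

```python
# Toy mutation test: swap min() for max() in a clamp function, then see
# whether an existing assertion distinguishes the mutant from the original.
import ast

SRC = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))\n"

class SwapMinMax(ast.NodeTransformer):
    """Fault injection: replace calls to min() with max()."""
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "min":
            node.func.id = "max"
        return node

def load_clamp(source: str):
    ns = {}
    exec(source, ns)
    return ns["clamp"]

tree = SwapMinMax().visit(ast.parse(SRC))
ast.fix_missing_locations(tree)
original = load_clamp(SRC)
mutant = load_clamp(ast.unparse(tree))

# A test encoding the concern "values above hi must be clamped down":
killed = mutant(10, 0, 5) != original(10, 0, 5)
print("mutant killed:", killed)  # True: the test catches this fault
```

In ACH’s framing, an engineer states the concern in plain text, the LLM generates both the fault and a test like the assertion above, and any surviving mutant signals a coverage gap.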
A recent discussion explores how LLM-driven development agents could automate Semgrep rule generation, enabling closed-loop security testing. By leveraging self-improving prompts, LLMs could generate, validate, and enforce Semgrep rules based on real-world errors, transforming static analysis into a dynamic, self-correcting process. This approach could streamline AppSec workflows, reducing manual rule writing and making code security more scalable and intelligent. Semgrep just got a lot more interesting.
Geoffrey Huntley argues that most engineers are using Cursor AI incorrectly, treating it as an IDE rather than an autonomous agent. By leveraging Cursor Rules, developers can create a “stdlib” of AI-generated rules, allowing self-improving automation for coding, testing, commits, and deployment. The key insight? Teach Cursor like an apprentice—correct its mistakes, update its rules, and over time, achieve fully automated development workflows. The future isn’t writing code—it’s reviewing AI-generated PRs from 1000 concurrent agents.
And that’s a wrap for this week! If you enjoy this curated issue, feel free to forward or share it with your appsec folks, team, or hacker club. For feedback, hugs, bugs, or chats, drop an email to [email protected]