Issue #8 - AppSec Weekly 🛡️
Your go-to source for the latest in application security trends, tools, and insights!

📰 TLDR AppSec Weekly 📰
This week’s security roundup highlights advancements in AI security and developer tooling—ranging from Joseph Thacker’s comprehensive guide on hacking AI applications and Simon Willison’s perspective on LLM hallucinations, to critical open-source vulnerabilities like Better-Auth’s trustedOrigins bypass and typosquatted Go packages delivering malware loaders targeting Linux and macOS. Meanwhile, Code PathFinder makes strides as an open-source alternative to CodeQL with its new Atlas ruleset, offering developers growing coverage for secure code analysis. Together, these insights emphasize the rising importance of securing AI agents, software supply chains, and developer ecosystems in 2025.
🌶️ This Week in AppSec World 🌶️
Google’s security team disclosed EntrySign, a vulnerability in AMD Zen CPUs allowing attackers to install unauthorized microcode patches due to a weak cryptographic validation mechanism. By exploiting AMD’s use of AES-CMAC instead of a secure hash function, researchers bypassed signature verification and forged microcode updates. A new tool, zentool, was released to analyze and modify AMD microcode, raising concerns over microcode security and hardware trust. AMD has since patched the flaw, mitigating potential attacks.
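Why can’t a keyed MAC stand in for a hash inside a signature scheme? Once the key is known, collisions are trivial to construct, so a forged payload can reuse an existing valid signature. A minimal sketch below, using a simplified two-block CBC-MAC rather than full CMAC and an all-zero stand-in key (both assumptions for illustration):

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ecb(key: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def cbc_mac(key: bytes, m1: bytes, m2: bytes) -> bytes:
    # Two-block CBC-MAC: tag = E_k(m2 XOR E_k(m1))
    inner = aes_ecb(key, m1)
    return aes_ecb(key, bytes(a ^ b for a, b in zip(m2, inner)))

key = bytes(16)  # a known key, analogous to AMD's reused example key
m1, m2 = b"legit-block-0001", b"legit-block-0002"
tag = cbc_mac(key, m1, m2)

# Forgery: pick any first block, then solve for a second block that
# cancels the difference in the chaining value.
f1 = b"evil!-block-0001"
f2 = bytes(a ^ b ^ c for a, b, c in zip(m2, aes_ecb(key, m1), aes_ecb(key, f1)))
assert cbc_mac(key, f1, f2) == tag  # same "hash", different message
```

Full CMAC additionally XORs a key-derived subkey into the last block, but that subkey is computable from the same known key, so the forgery carries over.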
Socket researchers uncovered set-utils, a malicious PyPI package targeting Ethereum developers. Masquerading as a utility library, it silently exfiltrates private keys via the Polygon blockchain. Keys are encrypted with an attacker’s RSA public key and sent through stealthy blockchain transactions. Developers using Python-based wallet tools like eth-account are at high risk; uninstalling isn’t enough, since any wallets created while the package was installed should be treated as compromised and their keys rotated.
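To make the exfiltration pattern concrete, here’s a rough sketch of the technique; the attacker key file, placeholder key, and the pyca/cryptography calls are illustrative assumptions, not the package’s actual code:

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical attacker public key bundled inside the malicious package.
with open("attacker_pub.pem", "rb") as f:
    attacker_pub = serialization.load_pem_public_key(f.read())

stolen_key = "0x4c0883a69102937d6234..."  # placeholder Ethereum private key

# Only the attacker's RSA private key can decrypt this, so the payload is
# opaque bytes to anyone inspecting outbound transactions.
ciphertext = attacker_pub.encrypt(
    stolen_key.encode(),
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256()),
)
# The ciphertext is then embedded in a Polygon transaction's data field.
```

Writing the ciphertext on-chain gives the attacker a public, append-only dead drop: there is no attacker server to block, and the exfiltration traffic looks like an ordinary RPC call.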
Elttam researchers uncovered a new remote code execution (RCE) method in Ruby on Rails apps leveraging unsafe reflection and deserialization. By abusing the SQLite3::Database class in the default sqlite3 gem, attackers can load malicious SQLite extensions, achieving RCE. Exploits also chain ActiveRecord and ActiveSupport deserialization gadgets to trigger the attack.
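The underlying primitive: a database handle that is allowed to load extensions turns a file path into native code execution, because the shared library’s entry point runs at load time. The Ruby gadget isn’t shown here, but Python’s built-in sqlite3 module exposes the same mechanism (hypothetical extension path; extension loading must be compiled into the interpreter’s SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.enable_load_extension(True)        # opt in to loadable extensions
con.load_extension("./malicious_ext")  # the .so/.dylib's init routine runs here
```

In the Rails chain, unsafe reflection gives the attacker control over which class gets instantiated and with what arguments, which is what makes SQLite3::Database reachable with an attacker-chosen path.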
A critical Open Redirect vulnerability was found in Better-Auth’s trustedOrigins validation, allowing attackers to bypass URL checks using crafted payloads like //attacker.com and /\/attacker.com. This flaw enabled account takeover by stealing password reset tokens through malicious redirects. Despite an initial patch, further bypasses were discovered and quickly fixed by the vendor.
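The bug class is worth internalizing: a check that treats any leading slash as “same-origin path” is bypassable, because browsers parse //host as protocol-relative and normalize backslashes to forward slashes. A minimal Python sketch of the flawed pattern and a stricter variant (illustrative only, not Better-Auth’s actual code):

```python
from urllib.parse import urlparse

def naive_is_safe(target: str) -> bool:
    # Flawed assumption: a leading "/" always means a relative path.
    return target.startswith("/")

# Both payloads pass the naive check, yet leave the origin in a browser:
# "//attacker.com" is protocol-relative, and the backslash in
# "/\/attacker.com" is normalized to "/", yielding "//attacker.com".
assert naive_is_safe("//attacker.com")
assert naive_is_safe(r"/\/attacker.com")

def stricter_is_safe(target: str) -> bool:
    # Reject backslashes and protocol-relative prefixes outright, then
    # require that the parsed URL has neither a scheme nor a host.
    if "\\" in target or target.startswith("//"):
        return False
    parsed = urlparse(target)
    return parsed.scheme == "" and parsed.netloc == ""

assert not stricter_is_safe("//attacker.com")
assert not stricter_is_safe(r"/\/attacker.com")
assert stricter_is_safe("/account/reset")
```

Safer still is an allowlist of exact redirect targets, which sidesteps parsing mismatches between server and browser entirely.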
Socket researchers uncovered a malicious typosquatting campaign targeting Go developers, where attackers uploaded fake packages impersonating popular libraries like hypert and layout. These packages delivered hidden loader malware that installed ELF binaries on Linux and macOS systems, using obfuscated array-based strings and delayed execution to evade detection. The campaign involved multiple fallback domains and IP addresses, suggesting a persistent, coordinated threat actor targeting open-source software supply chains.
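The “array-based strings” trick the report describes is simple but effective against literal-string scanning: the loader never contains its command or URL as a contiguous string. A toy Python illustration of both evasions (the real loaders are compiled Go binaries; the command here is a harmless placeholder):

```python
import os
import time

# Assembled piecewise so no telltale literal shows up in source code
# or in the binary's strings output.
pieces = ["e", "c", "h", "o", " ", "p", "w", "n"]
cmd = "".join(pieces)

time.sleep(3600)  # delayed execution: outlast typical sandbox analysis windows
os.system(cmd)    # fires an hour after install (placeholder command)
```

Neither trick is sophisticated, but together they defeat naive static scanning and short-lived dynamic analysis, which is often all a package registry applies.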
🤖 This Week in AI Security 🤖
Simon Willison argues that hallucinated methods in LLM-generated code are minor issues—easy to catch and fix when the code runs. The real risk lies in subtle logic errors that silently pass tests or reviews. His advice? Run and rigorously test everything yourself. LLMs can speed coding, but manual QA and critical thinking remain irreplaceable.
Joseph Thacker released a definitive guide on hacking AI apps using LLMs. Covering everything from prompt injection and jailbreaks to RCE via AI agents, it explores vulnerabilities in retrieval-augmented generation (RAG), multimodal inputs, and AI-assisted workflows. Thacker outlines an AI hacker methodology: identify data sources, find data exfiltration sinks, exploit web and AI-specific vulnerabilities, and leverage prompt injection for attacks like SSRF, XSS, and sandbox escapes. Essential reading for AI security pros and bug bounty hunters!
This beginner-friendly guide demystifies Large Language Models (LLMs), explaining how they work through next-token prediction, embeddings, and attention mechanisms. It highlights key concepts like tokenization, similarity, multi-head attention, and reinforcement learning from human feedback (RLHF), showing how they power tools like ChatGPT. The post emphasizes why understanding LLM fundamentals matters for leveraging AI effectively, offering resources for deeper learning.
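Of those building blocks, attention benefits most from seeing the arithmetic. A minimal NumPy sketch of single-head scaled dot-product attention, with toy shapes and no masking or learned projections:

```python
import numpy as np

def attention(Q, K, V):
    # Each query scores every key; scores are scaled by sqrt(d) and
    # softmaxed into weights; output is the weighted sum of values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))  # three tokens, 4-dim embeddings
print(attention(Q, K, V).shape)      # (3, 4): one mixed vector per token
```

Multi-head attention just runs several of these in parallel over different learned projections and concatenates the results.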
🏆 AppSec Tools of The Week 🏆
A simple open-source implementation inspired by Google's Project Naptime: a vulnerability analysis tool that uses Large Language Models (LLMs) to discover and exploit vulnerabilities in native code.
Code PathFinder has unveiled its Atlas ruleset, positioning itself as a formidable open-source alternative to GitHub’s CodeQL. Atlas currently ships roughly ten rulesets, with continuous expansion planned as more features land. Code PathFinder helps developers identify specific code patterns and paths within their codebases, improving code quality and security, and its open-source nature keeps it accessible and adaptable for teams seeking alternatives to proprietary solutions.
And that’s a wrap for this week! If you enjoy this curated issue, feel free to forward or share it with your appsec folks, team, or hacker club. For feedback, hugs, bugs, or chats, drop an email to [email protected]