Amazon AI Assistant Hacked to Wipe Data

Source: fr0gger_ on Twitter


A serious security incident has come to light involving an Amazon AI coding assistant. An attacker exploited misconfigured GitHub workflows to sneak a malicious pull request into the project, and the tampered build was then published to the Visual Studio Code (VS Code) Marketplace, where it went unnoticed for six days.


What Happened?

  • Malicious pull request merged due to weak GitHub automation safeguards
  • Payload was an AI prompt engineered to wipe systems back to near-factory conditions
  • The prompt targeted local file systems and cloud resources for deletion
  • The compromised assistant remained active and available for nearly a week before detection

Why This Is Dangerous

AI coding assistants routinely run commands or generate scripts on a developer's behalf. If an attacker can silently modify their behavior, they can:

  • Execute destructive commands on developers’ machines without immediate signs
  • Trigger widespread data loss in cloud environments linked to the assistant
  • Undermine trust in AI tools essential to coding workflows
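
Because the assistant itself invokes the shell, one cheap last line of defense is gating whatever commands it generates before they execute. The following is a minimal Python sketch under that assumption; the denylist patterns and the run_generated_command wrapper are hypothetical examples, not part of any shipped tool.

```python
import re
import subprocess

# Hypothetical denylist of obviously destructive patterns; a real gate would
# need far broader coverage and is no substitute for human review.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-[a-z]*(rf|fr)",     # rm -rf / rm -fr and flag variants
    r"\bmkfs(\.\w+)?\b",          # reformatting a filesystem
    r"\baws\s+[\w-]+\s+delete",   # aws-cli delete-* operations
    r"\bterraform\s+destroy\b",   # tearing down cloud infrastructure
]

def is_destructive(command: str) -> bool:
    """Return True if the command matches a known destructive pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS)

def run_generated_command(command: str) -> None:
    """Execute an AI-generated shell command only if it passes the gate."""
    if is_destructive(command):
        raise PermissionError(f"blocked destructive command: {command!r}")
    subprocess.run(command, shell=True, check=True)
```

A denylist like this only blunts the most obvious payloads; its real value is forcing anything suspicious to surface immediately instead of running without signs.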

How To Protect Yourself

The author responded quickly by creating a NOVA rule: a safeguard that scans prompts for suspicious data-wiping instructions before they run. This rule helps prevent similar attacks by:

  • Detecting potentially harmful commands inside AI prompts
  • Stopping destructive instructions from executing on your systems

  • NOVA rule to catch destructive prompts: https://t.co/4ElMrUst5v
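
The linked NOVA rule is the author's actual safeguard. As a rough illustration of the same idea outside the NOVA tooling, the Python sketch below scans a prompt for wipe-style phrasing; the phrase list is invented for this example and is not taken from the author's rule.

```python
import re

# Illustrative wipe-style phrases only; the author's actual NOVA rule
# (linked above) defines its own detection logic.
SUSPICIOUS_PHRASES = [
    r"wipe\s+(the\s+)?(system|disk|machine)",
    r"near[- ]?factory\s+(state|reset|condition)",
    r"delete\s+all\s+(files|data|resources)",
    r"clear\s+(all\s+)?user\s+(data|files)",
]

def scan_prompt(prompt: str) -> list[str]:
    """Return every suspicious pattern that matches the given prompt."""
    lowered = prompt.lower()
    return [p for p in SUSPICIOUS_PHRASES if re.search(p, lowered)]

if __name__ == "__main__":
    sample = "Clean this machine to a near-factory state and delete all files."
    for hit in scan_prompt(sample):
        print("suspicious pattern matched:", hit)
```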


Stay vigilant with your AI integrations, and lock down your DevOps pipelines to avoid supply-chain risks like this one.
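
Since the root cause reported here was a misconfigured GitHub workflow, one concrete hardening step is auditing which workflows fire on untrusted pull requests. The sketch below is an assumption-laden example, not a known tool: it uses GitHub's public REST contents API to flag workflow files containing the pull_request_target trigger, which exposes repository secrets to workflows that carelessly check out fork code. The repository name and token handling are placeholders.

```python
import os
import requests

# Placeholder repository and token; point these at your own project.
OWNER, REPO = "your-org", "your-repo"
API = f"https://api.github.com/repos/{OWNER}/{REPO}/contents/.github/workflows"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def audit_workflows() -> None:
    """Print any workflow file that uses the risky pull_request_target trigger."""
    listing = requests.get(API, headers=HEADERS, timeout=10)
    listing.raise_for_status()
    for entry in listing.json():
        if not entry["name"].endswith((".yml", ".yaml")):
            continue
        body = requests.get(entry["download_url"], headers=HEADERS, timeout=10).text
        if "pull_request_target" in body:
            print(f"review needed: {entry['name']} uses pull_request_target")

if __name__ == "__main__":
    audit_workflows()
```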