# security-threat-model (Official)

Repository-grounded threat modeling with abuse-path analysis and prioritized mitigations.

Security · Threat Modeling · Analysis · 1 min read

Quick import: download the .md file and save it to `.claude/commands/` (Claude Code) or `.cursorrules` (Cursor), or paste it as a system prompt in ChatGPT, Gemini, or any LLM API.
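For Claude Code, the import step above amounts to dropping the skill file into the commands directory. A minimal sketch (the catalog's download link is not shown here, so a placeholder file stands in for the real .md):

```shell
# Placeholder for the skill file you would download from the catalog.
printf '# security-threat-model skill\n' > security-threat-model.md

# Claude Code picks up slash commands from .claude/commands/ in the repo root.
mkdir -p .claude/commands
mv security-threat-model.md .claude/commands/

ls .claude/commands/
```

For Cursor, the same content would instead go into `.cursorrules`.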

# What it does

The security-threat-model skill creates repository-grounded threat models by analyzing your actual codebase. It identifies trust boundaries, data flows, entry points, and sensitive assets, then maps abuse paths with prioritized mitigations. Unlike generic threat modeling, it grounds every finding in real code paths.

# How to use

```bash
$security-threat-model
Create a repository-grounded threat model for this codebase with prioritized abuse paths and mitigations.
```

# What it produces

  • Trust boundary map -- Identifies where untrusted input enters the system
  • Data flow analysis -- Traces sensitive data through the application
  • Entry point catalog -- All external interfaces (APIs, webhooks, file uploads, etc.)
  • Abuse path analysis -- Realistic attack scenarios grounded in actual code
  • Prioritized mitigations -- Actionable fixes ranked by risk and effort
  • STRIDE classification -- Threats categorized by Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege
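To make the outputs above concrete, here is one hedged sketch of how a single finding could be structured, combining an entry point, an abuse path, a STRIDE category, grounding code references, and a risk/effort ranking. The field names and scoring are illustrative assumptions, not the skill's actual output schema:

```python
# Sketch of a threat-model finding; names and scores are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class Stride(Enum):
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFO_DISCLOSURE = "Information Disclosure"
    DOS = "Denial of Service"
    EOP = "Elevation of Privilege"

@dataclass
class Finding:
    entry_point: str                                     # where untrusted input enters
    abuse_path: str                                      # realistic attack scenario
    stride: Stride                                       # STRIDE category
    code_refs: list[str] = field(default_factory=list)   # real code paths grounding it
    risk: int = 0                                        # higher = more urgent
    effort: int = 0                                      # lower = cheaper to fix

def prioritize(findings: list[Finding]) -> list[Finding]:
    # Rank mitigations by risk first, then by how cheap they are to fix.
    return sorted(findings, key=lambda f: (-f.risk, f.effort))

upload = Finding("POST /upload", "unrestricted file upload leading to RCE",
                 Stride.EOP, ["api/upload.py:42"], risk=9, effort=3)
webhook = Finding("POST /webhook", "unsigned webhook payload tampering",
                  Stride.TAMPERING, ["api/hooks.py:17"], risk=6, effort=2)

print([f.entry_point for f in prioritize([webhook, upload])])
# → ['POST /upload', 'POST /webhook']
```

The ordering puts the higher-risk upload finding first even though the webhook fix is cheaper, matching the "ranked by risk and effort" claim above.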

This skill is from the OpenAI Skills Catalog.
